975 results for Dynamique inverse
Abstract:
A nonlinear viscoelastic image registration algorithm based on the demons paradigm and incorporating an inverse consistency constraint (ICC) is implemented. An inverse consistent and symmetric cost function using mutual information (MI) as a similarity measure is employed. The cost function also includes regularization of the transformation and of the inverse consistency error (ICE). The uncertainties in balancing the various terms of the cost function are avoided by alternately minimizing the similarity measure, the regularization of the transformation, and the ICE terms. Diffeomorphic registration, which prevents folding and/or tearing in the deformation, is achieved by a composition scheme. The quality of the registration is first demonstrated by constructing a brain atlas from 20 adult brains (age range 30-60 years). It is shown that with this registration technique: (1) the Jacobian determinant is positive for all voxels and (2) the average ICE is around 0.004 voxels, with a maximum value below 0.1 voxels. Further, deformation-based segmentation on the Internet Brain Segmentation Repository, a publicly available dataset, yielded high Dice similarity indices (DSI) of 94.7% for the cerebellum and 74.7% for the hippocampus, attesting to the quality of our registration method.
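The diffeomorphism claim above rests on the Jacobian determinant of the deformation being positive at every voxel. A minimal sketch of that check for a 2-D displacement field, using finite differences (the function name and the toy field are ours, not the authors'):

```python
import numpy as np

def jacobian_determinants(u):
    """Jacobian determinants of phi(x) = x + u(x) for a displacement
    field u of shape (H, W, 2), where u[..., i] displaces along axis i."""
    H, W, _ = u.shape
    J = np.zeros((H, W, 2, 2))
    for i in range(2):
        grads = np.gradient(u[..., i])      # [d u_i / d axis0, d u_i / d axis1]
        for j in range(2):
            J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# The identity transform (zero displacement) has determinant 1 everywhere.
det = jacobian_determinants(np.zeros((8, 8, 2)))
```

For a displacement field that folds space onto itself, some determinants would turn negative, flagging exactly the voxels where the transform fails to be invertible.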
Abstract:
The decomposition of soil organic matter (SOM) is temperature dependent, but its response to a future warmer climate remains equivocal. Enhanced rates of decomposition of SOM under increased global temperatures might cause higher CO2 emissions to the atmosphere, and could therefore constitute a strong positive feedback. The magnitude of this feedback however remains poorly understood, primarily because of the difficulty in quantifying the temperature sensitivity of stored, recalcitrant carbon that comprises the bulk (>90%) of SOM in most soils. In this study we investigated the effects of climatic conditions on soil carbon dynamics using the attenuation of the 14C 'bomb' pulse as recorded in selected modern European speleothems. These new data were combined with published results to further examine soil carbon dynamics, and to explore the sensitivity of labile and recalcitrant organic matter decomposition to different climatic conditions. Temporal changes in 14C activity inferred from each speleothem were modelled using a three-pool soil carbon inverse model (applying a Monte Carlo method) to constrain soil carbon turnover rates at each site. Speleothems from sites characterised by semi-arid conditions, sparse vegetation, thin soil cover and high mean annual air temperatures (MAATs) exhibit weak attenuation of the atmospheric 14C 'bomb' peak (a low damping effect, D in the range 55-77%) and low modelled mean respired carbon ages (MRCA), indicating that decomposition is dominated by young, recently fixed soil carbon. By contrast, humid, high-MAAT sites characterised by a thick soil cover and dense, well-developed vegetation display the highest damping effect (D = c. 90%) and the highest MRCA values (in the range from 350 ± 126 years to 571 ± 128 years). This suggests that carbon incorporated into these stalagmites originates predominantly from decomposition of old, recalcitrant organic matter.
SOM turnover rates cannot be ascribed to a single climate variable (e.g. MAAT), but instead reflect a complex interplay of climate (e.g. MAAT and moisture budget) and vegetation development.
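The Monte Carlo inversion step can be illustrated with a toy rejection sampler: draw candidate pool fractions and pool ages, and keep those whose predicted mean respired carbon age (MRCA) matches an observed value. This sketch assumes, for simplicity, that the MRCA is the flux-weighted mean of three pool ages; the priors, the target value and the pool boundaries are ours, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

observed_mrca, tol = 450.0, 25.0   # hypothetical site MRCA (years) and tolerance

accepted = []
for _ in range(20000):
    f = rng.dirichlet([1.0, 1.0, 1.0])            # respired-flux fraction per pool
    tau = np.array([rng.uniform(1.0, 10.0),       # labile pool age (years)
                    rng.uniform(10.0, 200.0),     # intermediate pool
                    rng.uniform(200.0, 3000.0)])  # recalcitrant pool
    # Toy forward model: MRCA as the flux-weighted mean of the pool ages.
    if abs(np.sum(f * tau) - observed_mrca) < tol:
        accepted.append((f, tau))
```

The spread of the accepted `tau` draws then plays the role of the uncertainty on the turnover rates at that site.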
Abstract:
Inverse fusion PCR cloning (IFPC) is an easy, PCR-based, three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows fusion with the vector by overlap-extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and the number of experimental steps is reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as template for the insertion, and clean-up of the insert fragment is not strictly required. The whole cloning procedure can be performed with minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies, with minimal background.
Abstract:
The production of electron–positron pairs in time-dependent electric fields (the Schwinger mechanism) depends non-linearly on the applied field profile. Accordingly, the resulting momentum spectrum is extremely sensitive to small variations of the field parameters. Owing to this non-linear dependence, it has so far not been possible to predict how to choose a field configuration such that a predetermined momentum distribution is generated. We show that quantum kinetic theory, along with optimal control theory, can be used to approximately solve this inverse problem for Schwinger pair production. We exemplify this by studying the superposition of a small number of harmonic components that results in predetermined signatures in the asymptotic momentum spectrum. In the long run, our results could facilitate the observation of this as yet unobserved pair production mechanism in quantum electrodynamics by providing suggestions for tailored field configurations.
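The shape of this inverse problem, fitting a few harmonic amplitudes so a computed spectrum matches a target, can be sketched with a purely illustrative surrogate: the "forward model" below is an arbitrary smooth function standing in for the quantum kinetic solver, and crude hill climbing stands in for the optimal control machinery. Nothing here has physical content.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogate "forward model": maps amplitudes a_k of a few harmonic field
# components to a momentum spectrum on a grid (a stand-in, not QED).
p = np.linspace(-3.0, 3.0, 121)

def spectrum(a):
    base = np.exp(-p**2)
    mod = 1.0 + sum(ak * np.cos((k + 1) * p) for k, ak in enumerate(a))
    return base * np.clip(mod, 0.0, None)

target = spectrum(np.array([0.4, -0.2, 0.1]))    # predetermined signature

# Gradient-free hill climbing over the small number of harmonic amplitudes.
best_a = np.zeros(3)
best_err = np.sum((spectrum(best_a) - target)**2)
for _ in range(5000):
    cand = best_a + rng.normal(scale=0.05, size=3)
    err = np.sum((spectrum(cand) - target)**2)
    if err < best_err:
        best_a, best_err = cand, err
```

Because only improving candidates are accepted, the final mismatch can never exceed the starting one, which is the minimal property any such inverse search must have.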
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps applied to along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available.
The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
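The chain from the two observation groups to the normal equations can be sketched on a toy three-point grid (all numbers are illustrative, not from the paper): the mean dynamic topography is the pointwise difference of mean sea surface and geoid, the error covariances of the two independent observation groups add, and the inverse of that summed covariance is the normal-equation matrix handed to the ocean model.

```python
import numpy as np

# Hypothetical 3-point grid: mean sea surface (MSS) and geoid heights (m)
# with illustrative error covariance matrices (m^2).
mss = np.array([1.20, 1.35, 1.10])
geoid = np.array([0.80, 0.90, 0.75])
C_mss = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 1.0],
                  [0.5, 1.0, 4.0]]) * 1e-4      # full (correlated) covariance
C_geoid = np.diag([2.0, 2.0, 2.0]) * 1e-4       # assumed independent errors

# Mean dynamic topography and its covariance: independent error sources add.
mdt = mss - geoid
C_mdt = C_mss + C_geoid

# Normal-equation matrix required for assimilation: the inverse covariance.
N = np.linalg.inv(C_mdt)
```

Carrying the full matrices, rather than filtered or diagonal approximations, is precisely what the "full error structure" in the text refers to.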
Abstract:
A large-scale Chinese agricultural survey was conducted under the direction of John Lossing Buck from 1929 through 1933. At the end of the 1990s, parts of the original micro data of Buck's survey were discovered at Nanjing Agricultural University. An international joint study was begun to restore the micro data of Buck's survey and construct parts of the micro database for both the crop yield survey and the special expenditure survey. This paper summarizes the characteristics of farmlands and cropping patterns in the crop yield micro data, which covered 2,102 farmers in 20 counties of 9 provinces. To test the classical hypothesis of whether an inverse relationship between land productivity and cultivated area may be observed in developing countries, a Box-Cox transformation test of functional form was conducted for five main crops of Buck's crop yield survey. The result of the test shows that the relationship between land productivity and cultivated area is linear and somewhat negative for wheat and barley, while for rice, rapeseed, and seed cotton it appears to be slightly positive. It can be tentatively concluded that the relationship between cultivated area and land productivity is not the same among crops, and that the difference in labor intensity and the level of commercialization of each crop may be strongly related to the existence or non-existence of inverse relationships.
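The Box-Cox procedure used here can be sketched on synthetic data: transform the dependent variable with parameter lambda, fit a line, and pick the lambda that maximizes the profile log-likelihood; the sign of the fitted slope then tells whether the productivity-area relationship is inverse. The data below are simulated with a mild negative relation, standing in for Buck's records.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, log(y) for lam == 0."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

rng = np.random.default_rng(2)

# Synthetic farm records (hypothetical): productivity declines with area.
area = rng.uniform(0.5, 10.0, 300)                       # cultivated area
productivity = 100.0 - 2.0 * area + rng.normal(0, 5.0, 300)

def loglik(lam):
    """Profile log-likelihood of the linear model boxcox(productivity) ~ area."""
    z = boxcox(productivity, lam)
    resid = z - np.polyval(np.polyfit(area, z, 1), area)
    sigma2 = np.mean(resid**2)
    return -0.5 * len(z) * np.log(sigma2) + (lam - 1.0) * np.sum(np.log(productivity))

lams = np.linspace(-1.0, 2.0, 61)
best_lam = lams[np.argmax([loglik(l) for l in lams])]
slope = np.polyfit(area, boxcox(productivity, best_lam), 1)[0]
```

With data generated from a linear model, the selected lambda should sit near 1 and the slope comes out negative, i.e., an inverse relationship as found for wheat and barley.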
Abstract:
The optimum quality that can be asymptotically achieved in the estimation of a probability p using inverse binomial sampling is addressed. A general definition of quality is used in terms of the risk associated with a loss function that satisfies certain assumptions. It is shown that the limit superior of the risk for p asymptotically small has a minimum over all (possibly randomized) estimators. This minimum is achieved by certain non-randomized estimators. The model includes commonly used quality criteria as particular cases. Applications to the non-asymptotic regime are discussed considering specific loss functions, for which minimax estimators are derived.
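Inverse binomial sampling fixes the number of successes r and lets the number of trials N vary. A quick simulation with the classical unbiased estimator p-hat = (r-1)/(N-1) illustrates the quality notions the abstract formalizes; the estimator and the relative-error figures of merit below are standard textbook choices, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse binomial sampling: trials until r successes are observed.
# N = r + number of failures, with failures ~ NegativeBinomial(r, p).
p_true, r, reps = 0.02, 10, 20000
N = r + rng.negative_binomial(r, p_true, size=reps)

p_hat = (r - 1) / (N - 1)        # classical unbiased estimator of p

rel_bias = p_hat.mean() / p_true - 1.0                     # ~ 0 (unbiased)
rel_rmse = np.sqrt(np.mean((p_hat / p_true - 1.0) ** 2))   # relative error scale
```

The relative error depends essentially on r alone, not on the unknown p, which is why quality guarantees for asymptotically small p are the natural target in this setting.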
Abstract:
The lecture aims at an understanding of the behaviour of buildings under seismic excitation. It presents an introduction to the dynamics of oscillators (with one or several degrees of freedom) and to the hysteretic behaviour of ductile structures (according to their modes of dissipation), and introduces the seismic parameters relevant to earthquake-resistant design, notably response and demand spectra and their relation to the capacity of the structure (capacity curve). On the capacity curve obtained from pushover analysis, points representing the design objectives in terms of performance levels, or damage levels, can be identified and related to the seismic demand for the intensities foreseen in the design. The lecture also deals with methods for defining and determining the seismic vulnerability of different construction typologies, and concludes with typologies of seismic-resistant bracing systems and recommendations intended to avoid the concentration of seismic damage in them, keeping the distribution of dissipative energy and damage in mind.
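The response-spectrum concept in the lecture can be sketched numerically: sweep single-degree-of-freedom oscillators over a range of periods, integrate each under a ground acceleration record, and keep the peak response. The ground motion below is a toy sine burst, not a real accelerogram, and the explicit central-difference integrator is one simple choice among many.

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Peak relative displacement of damped SDOF oscillators under ground
    acceleration ag (m/s^2), via an explicit central-difference scheme."""
    Sd = []
    for T in periods:
        w = 2.0 * np.pi / T
        c1 = 1.0 / dt**2 + zeta * w / dt      # u'' + 2*zeta*w*u' + w^2 u = -ag
        c2 = 2.0 / dt**2 - w**2
        c3 = 1.0 / dt**2 - zeta * w / dt
        u_prev, u, peak = 0.0, 0.0, 0.0
        for a in ag:
            u_next = (-a + c2 * u - c3 * u_prev) / c1
            u_prev, u = u, u_next
            peak = max(peak, abs(u))
        Sd.append(peak)
    return np.array(Sd)

# Toy "ground motion": a 2 Hz sine burst lasting 2 s (illustrative only).
dt = 0.01
t = np.arange(0.0, 4.0, dt)
ag = 2.0 * np.sin(2.0 * np.pi * 2.0 * t) * (t < 2.0)

periods = np.array([0.2, 0.5, 1.0, 2.0])
Sd = response_spectrum(ag, dt, periods)
Sa = (2.0 * np.pi / periods) ** 2 * Sd     # pseudo-acceleration spectrum
```

Plotting Sa against period gives the demand spectrum that is then compared with the structure's capacity curve in the performance-based checks the lecture describes; here the 0.5 s oscillator resonates with the 2 Hz burst and dominates the spectrum.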
Abstract:
The solubility parameters of two commercial SBS rubbers with different structures (linear and radial) and slightly different styrene contents have been determined by the inverse gas chromatography technique. The Flory–Huggins interaction parameters of several polymer–solvent mixtures have also been calculated. The influence of the polymer composition, the solvent molecular weight and the temperature on these parameters is discussed, and the parameters are compared with previous values obtained by intrinsic viscosity measurements. From the Flory–Huggins interaction parameters, the infinite-dilution activity coefficients of the solvents have been calculated and fitted to the well-known NRTL model. These NRTL binary interaction parameters are of great importance for modelling the separation steps in the process of obtaining the rubber.
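The step from interaction parameter to activity coefficient can be sketched with the Flory–Huggins relation: solvent activity is ln a1 = ln(phi1) + (1 - 1/m)·phi2 + chi·phi2², and in the high-polymer limit (m -> infinity) at infinite solvent dilution (phi2 -> 1) the volume-fraction-based activity coefficient reduces to ln gamma = 1 + chi. The chi values below are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical Flory-Huggins chi values for solvents in an SBS rubber,
# of the kind obtained from IGC retention data (illustrative only).
chi = {"hexane": 0.65, "toluene": 0.35, "THF": 0.42}

# Infinite-dilution, volume-fraction-based activity coefficient in the
# high-polymer limit: ln gamma = 1 + chi.
gamma_inf = {solvent: np.exp(1.0 + x) for solvent, x in chi.items()}
```

A smaller chi (a better solvent) gives a smaller infinite-dilution activity coefficient; these gamma values are then the data to which the NRTL binary parameters are regressed.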
Abstract:
Objective: This research focuses on the creation and validation of a solution to the inverse kinematics problem for a 6-degrees-of-freedom human upper limb. The system is intended to work within a real-time dysfunctional motion prediction system that allows anticipatory actuation in physical neurorehabilitation under the assisted-as-needed paradigm. For this purpose, a multilayer perceptron-based and an ANFIS-based solution to the inverse kinematics problem are evaluated. Materials and methods: Both the multilayer perceptron-based and the ANFIS-based inverse kinematics methods have been trained with three-dimensional Cartesian positions corresponding to the end-effector of healthy human upper limbs executing two different activities of daily living: "serving water from a jar" and "picking up a bottle". Validation of the proposed methodologies has been performed by a 10-fold cross-validation procedure. Results: Once trained, the systems are able to map 3D positions of the end-effector to the corresponding healthy biomechanical configurations. A high mean correlation coefficient and a low root mean squared error have been found for both the multilayer perceptron-based and ANFIS-based methods. Conclusions: The obtained results indicate that both systems effectively solve the inverse kinematics problem; however, due to its low computational load, crucial in real-time applications, along with its high performance, a multilayer perceptron-based solution, consisting of 3 input neurons, 1 hidden layer with 3 neurons and 6 output neurons, has been considered the most appropriate for the target application.
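The 3-3-6 architecture named in the conclusions can be sketched in plain NumPy: a tanh hidden layer trained by backpropagation to map 3-D end-effector positions to 6 joint angles. The toy forward model generating the training pairs is ours (a random smooth map, not upper-limb kinematics), so this only illustrates the learning setup, not the reported performance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: 6 joint angles -> 3-D end-effector position (illustrative).
W_fk = rng.normal(size=(3, 6))
def forward(q):
    return np.tanh(q @ W_fk.T)

Q = rng.uniform(-1.0, 1.0, size=(500, 6))   # joint configurations (targets)
X = forward(Q)                              # end-effector positions (inputs)

# MLP with the abstract's architecture: 3 inputs, 3 hidden tanh units, 6 outputs.
W1 = rng.normal(scale=0.5, size=(3, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 6)); b2 = np.zeros(6)

lr, losses = 0.05, []
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                # hidden activations
    Y = H @ W2 + b2                         # predicted joint angles
    err = Y - Q
    losses.append(float(np.mean(err**2)))
    # Backpropagation of the mean-squared-error loss (batch gradient descent).
    dY = 2.0 * err / len(X)
    dW2 = H.T @ dY; db2 = dY.sum(0)
    dH = (dY @ W2.T) * (1.0 - H**2)
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

Since the inverse map from 3 coordinates to 6 angles is underdetermined, the network learns one consistent configuration per position, which is exactly what the prediction system needs at run time, and the tiny network keeps the per-query cost low.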
Abstract:
Sequential estimation of the success probability p in inverse binomial sampling is considered in this paper. For any estimator p̂, its quality is measured by the risk associated with normalized loss functions of linear-linear or inverse-linear form. These functions are possibly asymmetric, with arbitrary slope parameters a and b for p̂ < p and p̂ > p, respectively. Interest in these functions is motivated by their significance and potential uses, which are briefly discussed. Estimators are given for which the risk has an asymptotic value as p→0, and which guarantee that, for any p∈(0,1), the risk is lower than its asymptotic value. This allows selecting the required number of successes, r, to meet a prescribed quality irrespective of the unknown p. In addition, the proposed estimators are shown to be approximately minimax when a/b does not deviate too much from 1, and asymptotically minimax as r→∞ when a=b.
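The normalized linear-linear risk can be estimated by simulation for a concrete estimator; the sketch below uses the classical (r-1)/(N-1) estimator and penalizes relative over- and underestimation with slopes a and b. The estimator and parameter values are our stand-ins, not the paper's proposed estimators.

```python
import numpy as np

rng = np.random.default_rng(5)

def risk_linear_linear(p, r, a, b, reps=40000):
    """Monte Carlo risk of p_hat = (r-1)/(N-1) under the normalized
    linear-linear loss: a*(p_hat-p)/p for overestimates, b*(p-p_hat)/p
    for underestimates."""
    N = r + rng.negative_binomial(r, p, size=reps)   # trials to reach r successes
    p_hat = (r - 1) / (N - 1)
    over = np.maximum(p_hat - p, 0.0) / p
    under = np.maximum(p - p_hat, 0.0) / p
    return float(np.mean(a * over + b * under))

r, a, b = 15, 1.0, 2.0
risks = {p: risk_linear_linear(p, r, a, b) for p in (0.001, 0.01, 0.1)}
```

Evaluating the risk across several p values is the empirical counterpart of the paper's guarantee that, for a suitable estimator and number of successes r, the risk stays below its asymptotic (p→0) value on all of (0,1).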
Abstract:
Inverse bremsstrahlung has been incorporated into an analytical model of the expanding corona of a laser-irradiated spherical target. Absorption decreases slowly with increasing intensity, in agreement with some numerical simulations, and contrary to estimates from simple models in use up to now, which are optimistic at low values of intensity and very pessimistic at high values. Present results agree well with experimental data from many laboratories; substantial absorption is found up to moderate intensities, say below 10^15 W cm^-2 for 1.06 μm light. Anomalous absorption, when included in the analysis, leaves the ablation pressure and mass ablation rate practically unaffected, for given absorbed intensity. Universal results are given in dimensionless form.
Abstract:
There is general agreement within the scientific community in considering Biology the science with the greatest potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, there is a very rich variety of mathematical tools, physical techniques and computer resources that make possible biological experiments that would have been unthinkable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and helping to give new insights into many biological problems. Consequently, biologists have greatly improved their knowledge of many key areas, such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century; it is considered a strategic research field by both the European Union and the USA. Thus, there is great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied for mapping brain activity [1]. This technique has important advantages compared with metabolism-based brain imaging techniques like functional Magnetic Resonance Imaging (fMRI) [2]. The main advantage is that MEG has a higher time resolution than fMRI. Another benefit of MEG is that it is a patient-friendly clinical technique: the measurement is performed with a wireless set-up and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process [3].
This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.
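As a baseline for the MEG inverse problem, a minimal sketch of the classical minimum-norm (Tikhonov-regularized) estimate on a synthetic lead field: the problem y = L s is underdetermined (far more sources than sensors), and regularization picks the smallest-norm source pattern consistent with the data. The heuristic algorithms of this work are not reproduced here; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical lead-field matrix: 30 sensors, 100 candidate sources.
n_sensors, n_sources = 30, 100
L = rng.normal(size=(n_sensors, n_sources))

# Ground-truth source activity: two active sources, the rest silent.
s_true = np.zeros(n_sources)
s_true[[10, 60]] = [1.0, -0.5]
y = L @ s_true + rng.normal(scale=0.01, size=n_sensors)   # noisy sensor data

# Minimum-norm estimate: s_hat = L^T (L L^T + lam*I)^(-1) y.
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
```

Test cases with known sources, like `s_true` here, are exactly how the solution quality of any inverse solver, heuristic or otherwise, is assessed: reconstruct, then compare against the planted ground truth.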