795 results for Inverse Algorithm


Relevance: 20.00%

Abstract:

"Vegeu el resum a l'inici del document del fitxer adjunt"

Relevance: 20.00%

Abstract:

The α(1b)-adrenergic receptor (AR) was, after rhodopsin, the first G protein-coupled receptor (GPCR) in which point mutations were shown to trigger constitutive (agonist-independent) activity. Constitutively activating mutations have since been found in other AR subtypes as well as in several other GPCRs. This chapter briefly summarizes the main findings on constitutively active mutants of the α(1a)- and α(1b)-AR subtypes and the methods used to predict activating mutations, to measure constitutive activity of Gq-coupled receptors, and to investigate inverse agonism. In addition, it highlights the implications of studies on constitutively active AR mutants for elucidating the molecular mechanisms of receptor activation and drug action.

Relevance: 20.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
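
To make the gradual-deformation idea concrete, here is a minimal sketch (Python with NumPy; the helper names sample_prior and log_likelihood are hypothetical) of how such a proposal can be combined with a Metropolis acceptance step. It blends the current zero-mean Gaussian field with an independent prior realization using cosine/sine weights, which preserves the prior statistics so that only the likelihood ratio enters the acceptance test; it illustrates the general technique rather than the implementation developed in the thesis.

    import numpy as np

    def gradual_deformation_proposal(m_current, sample_prior, theta, rng):
        """Blend the current zero-mean Gaussian field with an independent prior
        realization; the cos/sin weights keep the prior mean and covariance intact.
        Small theta -> small perturbation, theta = pi/2 -> independent redraw."""
        m_indep = sample_prior(rng)
        return np.cos(theta) * m_current + np.sin(theta) * m_indep

    def metropolis_step(m, log_likelihood, sample_prior, theta, rng):
        """One MCMC step: because the proposal preserves the prior,
        the acceptance test reduces to the likelihood ratio."""
        m_prop = gradual_deformation_proposal(m, sample_prior, theta, rng)
        if np.log(rng.uniform()) < log_likelihood(m_prop) - log_likelihood(m):
            return m_prop, True
        return m, False

The angle theta controls the perturbation strength, in line with the flexibility with respect to perturbation strength mentioned above.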

Relevance: 20.00%

Abstract:

The article presents a comparative approach based on the practice of crossed perspectives, whose relevance we postulate for understanding the exchange of ideas and practices between India and the "West". The nineteenth and twentieth centuries saw the emergence of one of the most interesting phenomena of globalization, namely the export and worldwide spread of Indian yoga. While the "West" was awakening to the spiritual yoga inaugurated by Vivekananda (from 1893 onwards), Indian yogis were subjecting their tradition to modern scientific experimentation (in the 1920s). In their joint work Sport et Yoga (1941/48), Selvarajan Yesudian (the Indian Christian yogi) and Elisabeth Haich (the Hungarian esotericist) paradigmatically illustrate the creative syntheses that can take place in the multi-dimensional and multi-directional processes of exchange between Indian and European traditions, syntheses that only a comparative stance, able to move back and forth between the two traditions, is capable of grasping.

Relevance: 20.00%

Abstract:

The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous problems (elliptic or parabolic); it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned to the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and to accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow an arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the peculiar characteristic that the residual on the coarse grid is zero at any iteration (thus conservative fluxes can be obtained).
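
To illustrate the structural idea of using a multiscale-type operator inside a Krylov iteration, the sketch below (Python with SciPy) wraps an approximate inverse as a GMRES preconditioner. The apply_approximate_inverse argument is a placeholder for an MsFV-style sweep; a plain Jacobi smoother is substituted here only so the example runs, so this is an assumed illustration of the plumbing rather than the MsFV method itself.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def as_preconditioner(A, apply_approximate_inverse):
        """Wrap an approximate solve r -> ~A^{-1} r as a LinearOperator
        so that GMRES can use it as a preconditioner."""
        n = A.shape[0]
        return spla.LinearOperator((n, n), matvec=lambda r: apply_approximate_inverse(A, r))

    def jacobi_sweeps(A, r, sweeps=5):
        # Stand-in for a multiscale-type approximate inverse (assumption for
        # illustration only): a few Jacobi sweeps on A x = r.
        x = np.zeros_like(r)
        d = A.diagonal()
        for _ in range(sweeps):
            x = x + (r - A @ x) / d
        return x

    # Toy usage: a 1D Laplacian stands in for a heterogeneous pressure system.
    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x, info = spla.gmres(A, b, M=as_preconditioner(A, jacobi_sweeps), atol=1e-10)
    print("converged" if info == 0 else f"gmres info = {info}")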

Relevance: 20.00%

Abstract:

This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low-level stage, where the algorithms deal with large amounts of data. In a motion estimation algorithm, correspondences between two images have to be solved at this low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem can be an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach based on the parallel organisation of every processor in the architecture is proposed.
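
To make the correspondence step concrete, the following serial sketch (Python with NumPy; the function names are ours, not from the paper) computes the zero-mean normalised correlation between a template and every candidate window. Each window score is independent of the others, which is precisely what makes the search amenable to the kind of parallel organisation discussed above.

    import numpy as np

    def normalised_correlation(a, b):
        """Zero-mean normalised cross-correlation of two equal-size patches.
        Invariant to gain and offset changes, hence its appeal under the
        non-uniform illumination typical of underwater images."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def best_match(template, image, stride=1):
        """Exhaustive search for the image window most correlated with the
        template; every window score is independent and thus parallelisable."""
        th, tw = template.shape
        best_score, best_pos = -np.inf, (0, 0)
        for y in range(0, image.shape[0] - th + 1, stride):
            for x in range(0, image.shape[1] - tw + 1, stride):
                s = normalised_correlation(template, image[y:y + th, x:x + tw])
                if s > best_score:
                    best_score, best_pos = s, (y, x)
        return best_pos, best_score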

Relevance: 20.00%

Abstract:

In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources, but also the light interreflections. Such algorithms produce very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to these algorithms in order to make it possible to compute highly realistic images in a reasonable time. We introduce here a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, in which we have taken advantage of the hierarchical nature of such an algorithm.
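
As background for the interreflection computation, the radiosity equation B = E + diag(rho) F B can be solved by a simple gathering iteration, sketched below in Python with NumPy. The explicit form-factor matrix is a deliberate simplification for illustration; the hierarchical algorithm parallelised in the paper avoids ever building such a matrix.

    import numpy as np

    def gather_radiosity(emission, reflectance, form_factors, iterations=50):
        """Jacobi-style gathering: B_i = E_i + rho_i * sum_j F_ij * B_j,
        where form_factors[i, j] is the fraction of energy leaving patch i
        that reaches patch j. Converges because rho < 1 and each row of the
        form-factor matrix sums to at most 1."""
        B = emission.copy()
        for _ in range(iterations):
            B = emission + reflectance * (form_factors @ B)
        return B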

Relevance: 20.00%

Abstract:

Diffusion tensor magnetic resonance imaging, which measures directional information of water diffusion in the brain, has emerged as a powerful tool for human brain studies. In this paper, we introduce a new Monte Carlo-based fiber tracking approach to estimate brain connectivity. One of the main characteristics of this approach is that all parameters of the algorithm are automatically determined at each point using the entropy of the eigenvalues of the diffusion tensor. Experimental results show the good performance of the proposed approach.
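
The entropy-based control of the tracking parameters can be illustrated with a small sketch (Python with NumPy; the exact normalisation used by the authors is not specified here, so this is an assumed form). The Shannon entropy of the normalised eigenvalues of the local diffusion tensor is close to zero when one diffusion direction dominates and approaches log 3 when diffusion is isotropic, making it a natural per-point control quantity.

    import numpy as np

    def eigenvalue_entropy(tensor):
        """Shannon entropy of the normalised eigenvalues of a 3x3 diffusion
        tensor: near 0 when one direction dominates (fiber-like diffusion),
        near log(3) when diffusion is isotropic."""
        w = np.linalg.eigvalsh(tensor)      # real eigenvalues of the symmetric tensor
        w = np.clip(w, 1e-12, None)         # guard against tiny negative values from noise
        p = w / w.sum()
        return float(-(p * np.log(p)).sum())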

Relevance: 20.00%

Abstract:

Oxidative modification of LDL is thought to play an important role in the development of atherosclerosis. The susceptibility of LDL to peroxidation may partly depend on its antioxidant and fatty acid content. The aim of this study was to examine the association between levels of antibodies to oxidized LDL and the various serum fatty acids in women. A total of 465 women aged 18-65 years were selected randomly from the adult population census of Pizarra, a town in southern Spain. Anti-oxidized-LDL antibodies were measured by ELISA and the fatty acid composition of serum phospholipids was determined by GC. The levels of anti-oxidized-LDL antibodies were significantly related to age (r = -0.341, P < 0.001), BMI (r = -0.239, P < 0.001), waist:hip ratio (r = -0.285, P < 0.001), glucose (r = -0.208, P < 0.001), cholesterol (r = -0.243, P < 0.001), LDL-cholesterol (r = -0.185, P = 0.002), EPA (r = -0.159, P = 0.003), DHA (r = -0.121, P = 0.026), and the sum of the serum phospholipid n-3 PUFA (r = -0.141, P = 0.009). Multiple regression analysis showed that the variables that explained the behaviour of the levels of anti-oxidized-LDL antibodies were age (P < 0.001) and the serum phospholipid EPA (P < 0.001). This study showed that the fatty acid composition of serum phospholipids, and especially the percentage of EPA, was inversely related to the levels of anti-oxidized-LDL antibodies.

Relevance: 20.00%

Abstract:

BACKGROUND Adipose tissue is a key regulator of energy balance, playing an active role in lipid storage, and may act as a dynamic buffer to control fatty acid flux. Like PPARgamma, fatty acid synthesis enzymes such as FASN have been implicated in almost all aspects of human metabolic alterations such as obesity, insulin resistance and dyslipidemia. The aim of this work is to investigate how FASN and PPARgamma expression in human adipose tissue is related to carbohydrate metabolism dysfunction and obesity. METHODS The study included eighty-seven patients, who were classified according to their BMI and glycaemia levels; FASN and PPARgamma gene expression levels as well as anthropometric and biochemical variables were studied. RESULTS The main result of this work is the close relation between FASN expression levels and the factors that lead to a hyperglycemic state (increased glucose levels, HOMA-IR, HbA1c, BMI and triglycerides); the correlation of the enzyme with these parameters is inverse. PPARgamma, on the other hand, is not related to carbohydrate metabolism. CONCLUSIONS We demonstrate that FASN expression is a good candidate for studying the pathophysiology of type II diabetes and obesity in humans.

Relevance: 20.00%

Abstract:

The k-symplectic formulation of field theories is especially simple, since only tangent and cotangent bundles are needed in its description. Its defining elements show a close relationship with those in the symplectic formulation of mechanics. It will be shown that this relationship also holds in the presymplectic case. In a natural way, one can mimic the presymplectic constraint algorithm to obtain a constraint algorithm that can be applied to k-presymplectic field theory, and more particularly to the Lagrangian and Hamiltonian formulations of field theories defined by a singular Lagrangian, as well as to the unified Lagrangian-Hamiltonian formalism (Skinner-Rusk formalism) for k-presymplectic field theory. Two examples of application of the algorithm are also analyzed.
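
For orientation, and in an assumed but standard notation, the k-symplectic Hamiltonian field equations and the nested constraint submanifolds produced by a constraint algorithm can be written as follows (LaTeX):

    % Hamiltonian field equations for a k-vector field (X_1,\dots,X_k) on M,
    % with closed 2-forms \omega^1,\dots,\omega^k and Hamiltonian H:
    \sum_{A=1}^{k} \iota_{X_A}\,\omega^{A} = \mathrm{d}H .
    % If the forms \omega^A are merely presymplectic (degenerate), solutions
    % need not exist at every point; retaining the points where consistent,
    % tangent solutions do exist yields a nested sequence of constraint
    % submanifolds
    M \supseteq M_1 \supseteq M_2 \supseteq \cdots \supseteq M_f ,
    % terminating (when it does) at the final constraint submanifold M_f.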

Relevance: 20.00%

Abstract:

The effect of exendin-(9-39), a described antagonist of the glucagon-like peptide-1 (GLP-1) receptor, was evaluated on the formation of cAMP- and glucose-stimulated insulin secretion (GSIS) by the conditionally immortalized murine betaTC-Tet cells. These cells have a basal intracellular cAMP level that can be increased by GLP-1 with an EC50 of approximately 1 nM and can be decreased dose dependently by exendin-(9-39). This latter effect was receptor dependent, as a beta-cell line not expressing the GLP-1 receptor was not affected by exendin-(9-39). It was also not due to the endogenous production of GLP-1, because this effect was observed in the absence of detectable preproglucagon messenger RNA levels and radioimmunoassayable GLP-1. Importantly, GSIS was shown to be sensitive to this basal level of cAMP, as perifusion of betaTC-Tet cells in the presence of exendin-(9-39) strongly reduced insulin secretion. This reduction of GSIS, however, was observed only with growth-arrested, not proliferating, betaTC-Tet cells; it was also seen with nontransformed mouse beta-cells perifused in similar conditions. These data therefore demonstrated that 1) exendin-(9-39) is an inverse agonist of the murine GLP-1 receptor; 2) the decreased basal cAMP levels induced by this peptide inhibit the secretory response of betaTC-Tet cells and mouse pancreatic islets to glucose; 3) as this effect was observed only with growth-arrested cells, this indicates that the mechanism by which cAMP leads to potentiation of insulin secretion is different in proliferating and growth-arrested cells; and 4) the presence of the GLP-1 receptor, even in the absence of bound peptide, is important for maintaining elevated intracellular cAMP levels and, therefore, the glucose competence of the beta-cells.