61 resultados para Yosida Approximate


Relevância:

10.00%

Publicador:

Resumo:

The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. This algorithm, in consequence, shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux reconstruction step, which delivers a fine-scale mass conservative flux field without the need for iterating. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm that is applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporation of appropriate coarse-scale mass-balance equations.
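The two-level structure described above (a global coarse problem coupling local fine-scale problems) can be illustrated with a minimal Galerkin coarse-grid sketch in Python. This is a generic two-level coarse solve with piecewise-constant aggregates, not the MSFV operator itself; the 1D Poisson system and the aggregation are illustrative assumptions.

```python
import numpy as np

def fine_operator(n):
    """1D Poisson fine-scale system (tridiagonal) with Dirichlet BCs."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def coarse_correction(A, b, agg):
    """Galerkin coarse solve: restrict with P^T, solve the coarse-scale
    balance equations, and prolongate back to the fine grid."""
    n, nc = len(b), int(agg.max()) + 1
    P = np.zeros((n, nc))
    P[np.arange(n), agg] = 1.0           # piecewise-constant coarse basis
    A_c = P.T @ A @ P                    # coarse-scale operator
    x_c = np.linalg.solve(A_c, P.T @ b)  # coarse-scale mass balance
    return P @ x_c

n = 8
A = fine_operator(n)
b = np.ones(n)
agg = np.repeat(np.arange(4), 2)         # 4 coarse cells of 2 fine cells each
x_approx = coarse_correction(A, b, agg)
```

By construction the prolongated solution satisfies the coarse-scale balance exactly (the restricted residual vanishes), which is the sense in which a coarse operator delivers conservative coarse-scale equations.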

Relevância:

10.00%

Publicador:

Resumo:

The results of Ar-40/Ar-39 dating integrated with calcareous plankton biostratigraphical data from two volcaniclastic layers (VLs) interbedded in Burdigalian to Lower Langhian outer shelf carbonate sediments cropping out in Monferrato (NW Italy) are presented. The investigated VLs, named Villadeati and Varengo, are thick sedimentary bodies with scarce lateral continuity. They are composed of prevalent volcanogenic material (about 87-90% by volume) consisting of glass shards and volcanic phenocrysts (plagioclase, biotite, quartz, amphibole, sanidine and magnetite) and of minor extrabasinal and intrabasinal components. On the basis of their composition and sedimentological features, the VLs have been interpreted as distal shelf turbidites deposited below storm wave base. However, compositional characteristics indicate rapid resedimentation of the volcanic detritus after its primary deposition, and hence the VL sediments can be considered penecontemporaneous with the encasing deposits. Biostratigraphical analyses were carried out through a quantitative study of calcareous nannofossil and planktonic foraminifer associations, whilst Ar-40/Ar-39 dating was performed on biotite at Villadeati and on hornblende at Varengo. The data from the Villadeati section permitted an age estimate of 18.7 +/- 0.1 Ma for the last common occurrence (LCO) of Sphenolithus belemnos, whereas those from Varengo allowed extrapolation of an age of 16.4 +/- 0.1 Ma for the first occurrence (FO) of Praeorbulina sicana. The latter bioevent is commonly used to approximate the base of the Langhian stage, which corresponds to the Early-Middle Miocene boundary.
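Estimating or extrapolating a bioevent age from dated levels, as done above, amounts to assuming a constant sedimentation rate between two tie points. A minimal sketch, with purely hypothetical tie points (these are not the study's measurements):

```python
def event_age(depth_event, d1, age1, d2, age2):
    """Linearly interpolate (or extrapolate) an age at depth_event from
    two dated levels (d1, age1) and (d2, age2), assuming a constant
    sedimentation rate between them (ages in Ma, depths in m)."""
    rate = (age2 - age1) / (d2 - d1)  # Ma per metre
    return age1 + rate * (depth_event - d1)

# Illustrative (hypothetical) tie points, not values from the study:
print(event_age(12.0, 10.0, 18.5, 20.0, 19.5))  # → 18.7
```

The uncertainty of the interpolated age combines the analytical errors of the tie points with any deviation from the constant-rate assumption.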

Relevância:

10.00%

Publicador:

Resumo:

OBJECTIVES: To evaluate the combination of ultrasound (US) + fine-needle aspiration (FNA) in the assessment of salivary gland tumours in the hands of the otolaryngologist. DESIGN: A retrospective review of case notes was performed. SETTING: Two university teaching hospitals in Switzerland. PARTICIPANTS: One hundred and three patients with a total of 106 focal masses of the salivary glands were included. Clinician-operated US + FNA was the first line of investigation for these lesions. All patients underwent surgical excision of the lesion, which allowed confirmation of the diagnosis by histopathology in 104 lesions and by laboratory testing in two lesions. MAIN OUTCOME MEASURES: Primary--diagnostic accuracy in identifying true salivary gland neoplasms and detecting malignancy. Secondary--predicting an approximate and a specific diagnosis in these tumours. RESULTS: The combination of US + FNA achieved a diagnostic accuracy of 99% in identifying and differentiating true salivary gland neoplasms from tumour-like lesions. In detecting malignancy, this combination achieved an accuracy of 98%. An approximate diagnosis was possible in 89%, and a specific diagnosis in 69%, of our patients. CONCLUSIONS: Given its low cost and high diagnostic accuracy, the combination of US + FNA represents the investigation method of choice for most salivary gland tumours. We suggest that the otolaryngologist carry out these procedures, as is already the rule in other medical specialties, while computed tomography and magnetic resonance imaging should be reserved for the few lesions that cannot be completely delineated by sonography.
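Headline measures such as diagnostic accuracy, sensitivity, and specificity are derived from a 2x2 contingency table. A minimal sketch with hypothetical counts (the paper's raw table is not reproduced here, only the cohort size of 106 lesions):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard test-performance measures from a 2x2 table of
    true/false positives and negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for illustration (not the study's data); they
# merely sum to the reported 106 lesions:
m = diagnostic_metrics(tp=18, fp=1, tn=85, fn=2)
```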

Relevância:

10.00%

Publicador:

Resumo:

When individuals learn by trial-and-error, they perform randomly chosen actions and then reinforce those actions that led to a high payoff. However, individuals do not always have to physically perform an action in order to evaluate its consequences. Rather, they may be able to mentally simulate actions and their consequences without actually performing them. Such fictitious learners can select actions with high payoffs without long chains of trial-and-error learning. Here, we analyze the evolution of an n-dimensional cultural trait (or artifact) by learning, in a payoff landscape with a single optimum. We derive the stochastic learning dynamics of the distance to the optimum in trait space when choice between alternative artifacts follows the standard logit choice rule. We show that for both trial-and-error and fictitious learners, the learning dynamics stabilize at an approximate distance of √n/(2λe) away from the optimum, where λe is an effective learning performance parameter that depends on the learning rule under scrutiny. Individual learners are thus unlikely to reach the optimum when traits are complex (n large), and so face a barrier to further improvement of the artifact. We show, however, that this barrier can be significantly reduced in a large population of learners performing payoff-biased social learning, in which case λe becomes proportional to population size. Overall, our results illustrate the effects of errors in learning, levels of cognition, and population size for the evolution of complex cultural traits. (C) 2013 Elsevier Inc. All rights reserved.
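The logit choice rule assigns the probability of adopting a new variant as a sigmoid of the payoff difference, and trial-and-error learning is then a stochastic hill climb under that rule. A minimal sketch (payoff taken as negative distance to the optimum is an illustrative assumption, not the paper's exact model):

```python
import math
import random

def logit_choice(payoff_new, payoff_cur, lam):
    """Probability of adopting the new variant under the logit choice
    rule with sensitivity parameter lam."""
    return 1.0 / (1.0 + math.exp(-lam * (payoff_new - payoff_cur)))

def learn(n=10, lam=5.0, steps=20000, sigma=0.1, seed=1):
    """Trial-and-error adjustment of an n-dimensional trait toward the
    optimum at the origin; payoff is minus the distance to it."""
    rng = random.Random(seed)
    x = [1.0] * n
    origin = [0.0] * n
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        d_cur = math.dist(x, origin)
        d_new = math.dist(cand, origin)
        if rng.random() < logit_choice(-d_new, -d_cur, lam):
            x = cand
    return math.dist(x, origin)  # stabilizes at a nonzero distance
```

Because acceptance is only probabilistically biased toward improvement, the walk settles at a finite distance from the optimum rather than converging to it, mirroring the barrier described above.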

Relevância:

10.00%

Publicador:

Resumo:

Impairment of lung liquid absorption can lead to severe respiratory symptoms, such as those observed in pulmonary oedema. In the adult lung, liquid absorption is driven by cation transport through two pathways: a well-established amiloride-sensitive Na(+) channel (ENaC) and, more controversially, an amiloride-insensitive channel that may belong to the cyclic nucleotide-gated (CNG) channel family. Here, we show robust CNGA1 (but not CNGA2 or CNGA3) channel expression principally in rat alveolar type I cells; CNGA3 was expressed in ciliated airway epithelial cells. Using a rat in situ lung liquid clearance assay, CNG channel activation with 1 mM 8Br-cGMP resulted in an approximate 1.8-fold stimulation of lung liquid absorption. There was no stimulation by 8Br-cGMP when applied in the presence of either 100 μM L-cis-diltiazem or 100 nM pseudechetoxin (PsTx), a specific inhibitor of CNGA1 channels. Channel specificity of PsTx and amiloride was confirmed by patch clamp experiments showing that CNGA1 channels in HEK 293 cells were not inhibited by 100 μM amiloride and that recombinant αβγ-ENaC were not inhibited by 100 nM PsTx. Importantly, 8Br-cGMP stimulated lung liquid absorption in situ, even in the presence of 50 μM amiloride. Furthermore, neither L-cis-diltiazem nor PsTx affected the β(2)-adrenoceptor agonist-stimulated lung liquid absorption, but, as expected, amiloride completely ablated it. Thus, transport through alveolar CNGA1 channels, located in type I cells, underlies the amiloride-insensitive component of lung liquid reabsorption. Furthermore, our in situ data highlight the potential of CNGA1 as a novel therapeutic target for the treatment of diseases characterised by lung liquid overload.

Relevância:

10.00%

Publicador:

Resumo:

Combined structural analysis and oxygen isotope thermometry of syntectonic quartz-calcite fibrous veins can be used to correlate the thermal history of deformed rocks with specific structural and tectonic events. Results are presented for the Morcles nappe in the western Helvetic Alps, Switzerland, where mineral parageneses, illite "crystallinity," and fluid inclusion chemistry record an apparent peak metamorphic temperature gradient that increased across the Morcles nappe from anchizonal conditions in the foreland to epizonal conditions in its hinterland root zone. Twenty-seven quartz-calcite veins were analyzed in this study in order to determine the temperatures of veining during formation and deformation of the nappe. Peak metamorphic temperatures ranged from approximately 260 to 290 degrees C in the shallower, foreland localities to approximately 330 to 350 degrees C in the deeper, more hinterland localities at the end of S1-foliation formation, related to large-scale folding. Temperatures gradually decreased throughout the nappe during subsequent development of the S2 foliation and S3 crenulation cleavage. Uplift and erosion of the overlying nappe pile resulted in slow cooling of the Morcles nappe during the waning stages of the Alpine Orogeny. The dominant foliation-forming deformation of the Morcles nappe occurred at elevated temperatures over the course of 10 to 15 Ma. Combined structure-oxygen isotope analyses of quartz-calcite veins yield better temperature and temporal constraints on the thermal histories of subgreenschist vein-bearing tectonites than do other geothermometers.

Relevância:

10.00%

Publicador:

Resumo:

Many eukaryote organisms are polyploid. However, despite their importance, evolutionary inference of polyploid origins and modes of inheritance has been limited by a need for analyses of allele segregation at multiple loci using crosses. The increasing availability of sequence data for nonmodel species now allows the application of established approaches for the analysis of genomic data in polyploids. Here, we ask whether approximate Bayesian computation (ABC), applied to realistic traditional and next-generation sequence data, allows correct inference of the evolutionary and demographic history of polyploids. Using simulations, we evaluate the robustness of evolutionary inference by ABC for tetraploid species as a function of the number of individuals and loci sampled, and the presence or absence of an outgroup. We find that ABC adequately retrieves the recent evolutionary history of polyploid species on the basis of both old and new sequencing technologies. The application of ABC to sequence data from diploid and polyploid species of the plant genus Capsella confirms its utility. Our analysis strongly supports an allopolyploid origin of C. bursa-pastoris about 80 000 years ago. This conclusion runs contrary to previous findings based on the same data set but using an alternative approach and is in agreement with recent findings based on whole-genome sequencing. Our results indicate that ABC is a promising and powerful method for revealing the evolution of polyploid species, without the need to attribute alleles to a homeologous chromosome pair. The approach can readily be extended to more complex scenarios involving higher ploidy levels.
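At its core, ABC replaces likelihood evaluation with simulation: parameters drawn from the prior are kept when simulated data resemble the observations. A minimal rejection-ABC sketch (the toy normal model, the uniform prior, and the sample-mean summary statistic are illustrative assumptions, not the authors' pipeline):

```python
import random

def abc_rejection(observed_mean, n_obs, prior, simulate, eps, trials, seed=0):
    """Minimal ABC rejection sampler: draw theta from the prior,
    simulate a data set, and keep theta if the summary statistic
    (here the sample mean) falls within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(trials):
        theta = prior(rng)
        xs = [simulate(rng, theta) for _ in range(n_obs)]
        if abs(sum(xs) / n_obs - observed_mean) < eps:
            accepted.append(theta)
    return accepted

# Toy model: data ~ Normal(theta, 1), uniform prior on [0, 10]
post = abc_rejection(
    observed_mean=4.0, n_obs=50,
    prior=lambda rng: rng.uniform(0.0, 10.0),
    simulate=lambda rng, th: rng.gauss(th, 1.0),
    eps=0.3, trials=2000,
)
```

The accepted draws approximate the posterior; in realistic polyploid applications the simulator encodes the coalescent with the hypothesized ploidy history, and the summary statistics are computed from sequence data.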

Relevância:

10.00%

Publicador:

Resumo:

Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday-life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtaining an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance therefore depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a relationship of neighborhood defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function; the concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, the energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. First, we perform an exhaustive sampling of the local optima's basins of attraction and define weighted transitions between basins by accounting for all possible ways of crossing a basin frontier via one random move.
Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape, and that we can characterize using the tools of complex networks science. We argue that the network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performances of trajectory-based local search heuristics.
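The exhaustive basin-sampling step described above can be sketched on a toy binary landscape: every solution is mapped to its local optimum by hill climbing, and one-move transitions between basins become weighted edges. This is a minimal illustration with a bit-flip move operator, not the paper's QAP/NK/flow-shop instances:

```python
import itertools

def neighbors(s):
    """One-bit-flip move operator on a binary tuple."""
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

def hill_climb(s, fitness):
    """Best-improvement local search; returns the local optimum."""
    while True:
        best = max(neighbors(s), key=fitness)
        if fitness(best) <= fitness(s):
            return s
        s = best

def local_optima_network(n, fitness):
    """Exhaustively map every solution to its basin, then count
    one-move transitions between basins as weighted directed edges."""
    basin = {s: hill_climb(s, fitness)
             for s in itertools.product((0, 1), repeat=n)}
    edges = {}
    for s, opt in basin.items():
        for t in neighbors(s):
            o2 = basin[t]
            if o2 != opt:
                edges[(opt, o2)] = edges.get((opt, o2), 0) + 1
    return set(basin.values()), edges

# Two complementary peaks at 0000 and 1111:
fitness = lambda s: max(sum(s), len(s) - sum(s))
optima, edges = local_optima_network(4, fitness)
```

On real instances the exhaustive enumeration is replaced by the escape-move and trajectory-mining estimators the abstract goes on to describe; the resulting weighted digraph is then analyzed with complex-network metrics.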

Relevância:

10.00%

Publicador:

Resumo:

The Liesberg Beds form the transition between the lower Oxfordian dark-coloured marls (Renggeri Member and Terrain a Chailles Member) and the middle Oxfordian reefal limestones (St-Ursanne Formation). Both lithofacies and biofacies are diverse and evolve rapidly up-section; stable isotope studies of whole-rock samples are therefore excluded. In search of a convenient isotopic marker, we measured the carbon isotope compositions of several fossil groups and chose crinoid stems of Millericrinus spp. and echinoid spines of Paracidaris spp. because of their abundance throughout the section and the small variations of delta(13)C within one fossil and between fossils from the same stratigraphic level. The delta(13)C values of echinoderms largely reflect earliest diagenetic conditions at the seawater-sediment interface. The porous stereome structure, secreted as high-Mg calcite by echinoderms, has a high reactive surface/volume ratio, which triggers the precipitation of very early syntaxial cements. In the four studied sections, reproducible carbon isotope shifts were observed for both Millericrinus spp. stems and Paracidaris spp. spines. A negative delta(13)C shift of 1-1.5 parts per thousand was observed near the base of the section, just above the transition from the Terrain a Chailles Member, where the first corals occur. In the middle and upper parts of the four sections, characterised by a stepwise increase of corals and macrofossils, a positive delta(13)C shift of about 2 parts per thousand was observed. Despite the highly variable lithologic composition of the Liesberg Beds Member, the carbon isotope shifts seem to be consistent and warrant interpretation as an original signal, controlled by the isotopic composition of dissolved carbonic acid in seawater. We explain the heavy delta(13)C values (approximately 2-2.3 parts per thousand) in the lower Liesberg Beds as recording the transition from an oxygen-limited environment (Terrain a Chailles Member) to the Liesberg Beds Member. 
The lowest delta(13)C values (approximately 1-1.5 parts per thousand) correspond to a large input of dissolved nutrients to the platform under oxidizing conditions. The ensuing positive shift (between 2.5 and 3.5 parts per thousand), however, seems to correspond to a general trend of opening up of the platform and connection to open marine waters. Positive delta(13)C values in the upper Liesberg Beds are interpreted as the result of accelerated extraction of organic carbon from the ocean reservoir, which possibly occurred during periods of warm and humid climate.

Relevância:

10.00%

Publicador:

Resumo:

The main goal of this paper is to propose a convergent finite volume method for a reaction-diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit formation of nonuniform spatial patterns. From the simulations it is also found that experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.

Relevância:

10.00%

Publicador:

Resumo:

We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption, in which it is assumed that a multiplicative decomposition of the deformation tensor into a passive and active part holds, the latter carrying the information of the electrical potential propagation and anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity, governing the mechanical response of the biological material. In addition, by changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials, and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
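The rationale for mixing piecewise-quadratic and piecewise-linear elements can be seen in 1D: quadratic interpolation converges one order faster than linear. A small self-contained sketch (plain Lagrange interpolation on uniform elements, as an illustration of the approximation orders, not the paper's coupled scheme):

```python
import math

def interp_error(p, n, f=math.sin, a=0.0, b=math.pi):
    """Max error of degree-p piecewise Lagrange interpolation of f on
    n uniform elements, sampled at interior points."""
    h = (b - a) / n
    err = 0.0
    for e in range(n):
        x0 = a + e * h
        nodes = [x0 + i * h / p for i in range(p + 1)]  # element nodes
        vals = [f(x) for x in nodes]
        for k in range(50):
            x = x0 + (k + 0.5) * h / 50
            s = 0.0
            for i, (xi, fi) in enumerate(zip(nodes, vals)):
                w = fi
                for j, xj in enumerate(nodes):
                    if j != i:
                        w *= (x - xj) / (xi - xj)   # Lagrange basis
                s += w
            err = max(err, abs(s - f(x)))
    return err
```

Halving the mesh size reduces the P1 error by about 4x (order h^2) and the P2 error by about 8x (order h^3), which is why the smoother displacement field receives the higher-order space.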

Relevância:

10.00%

Publicador:

Resumo:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
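The error-model idea (reduce both curve sets to principal-component scores, learn a map from proxy scores to exact scores, then reconstruct) can be sketched with plain PCA via the SVD. This is a simplified stand-in for FPCA on a synthetic learning set, with a linear score-to-score regression as an illustrative modeling assumption:

```python
import numpy as np

def fit_error_model(proxy, exact, k=3):
    """PCA-reduce proxy and exact curve sets (rows = realizations),
    then fit a least-squares map from proxy scores to exact scores.
    Returns a predictor: proxy curve -> estimated exact curve."""
    pm, em = proxy.mean(0), exact.mean(0)
    _, _, Vp = np.linalg.svd(proxy - pm, full_matrices=False)
    _, _, Ve = np.linalg.svd(exact - em, full_matrices=False)
    Bp, Be = Vp[:k].T, Ve[:k].T                 # retained components
    Sp = (proxy - pm) @ Bp                      # proxy scores
    Se = (exact - em) @ Be                      # exact scores
    W, *_ = np.linalg.lstsq(Sp, Se, rcond=None)
    return lambda y: em + ((y - pm) @ Bp) @ W @ Be.T

# Synthetic learning set: exact responses are a smooth transform of
# the proxy responses (purely illustrative data):
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
proxy = rng.normal(size=(40, 1)) * np.sin(2 * np.pi * t) + 1.0
exact = 0.8 * proxy + 0.2 * np.cos(2 * np.pi * t)
predict = fit_error_model(proxy, exact)
```

Once fitted, `predict` estimates the exact curve of any new realization from its proxy curve alone, which is exactly the role the error model plays in the uncertainty-quantification workflow.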

Relevância:

10.00%

Publicador:

Resumo:

In order to investigate a possible association between soybean malate synthase (MS; L-malate glyoxylate-lyase, CoA-acetylating, EC 4.1.3.2) and glyoxysomal malate dehydrogenase (gMDH; (S)-malate:NAD(+) oxidoreductase, EC 1.1.1.37), two consecutive enzymes in the glyoxylate cycle, their elution profiles were analyzed on Superdex 200 HR fast protein liquid chromatography columns equilibrated in low- and high-ionic-strength buffers. Starting with soluble proteins extracted from the cotyledons of 5-d-old soybean seedlings and a 45% ammonium sulfate precipitation, MS and gMDH coeluted on Superdex 200 HR (low-ionic-strength buffer) as a complex with an approximate relative molecular mass (M(r)) of 670,000. Dissociation was achieved in the presence of 50 mM KCl and 5 mM MgCl2, with the elution of MS as an octamer of M(r) 510,000 and of gMDH as a dimer of M(r) 73,000. Polyclonal antibodies raised against the native copurified enzymes recognized both denatured MS and gMDH on immunoblots, and their native forms after gel filtration. When these antibodies were used to screen a lambda ZAP II expression library containing cDNA from 3-d-old soybean cotyledons, they identified seven clones encoding gMDH, whereas ten clones encoding MS were identified using an antibody to SDS-PAGE-purified MS. Of these cDNA clones, a 1.8-kb clone for MS and a 1.3-kb clone for gMDH were fully sequenced. While 88% identity was found between mature soybean gMDH and watermelon gMDH, the N-terminal transit peptides showed only 37% identity. Despite this low identity, the soybean gMDH transit peptide conserves the consensus R(X(6))HL motif also found in plant and mammalian thiolases.

Relevância:

10.00%

Publicador:

Resumo:

Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with incomplete knowledge of subsurface properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), the flow response of each realization must be evaluated. 
Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Following a machine learning approach, error models are proposed to correct the approximate responses: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the ensemble of approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost, and leads to an increase in accuracy and robustness of the uncertainty propagation. 
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a contamination problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. 
The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to standard one-stage MCMC. An open question remains: how should the size of the learning set be chosen, and which realizations best support the construction of the error model? This requires an iterative strategy in which, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saltwater intrusion problem in a coastal aquifer.
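The two-stage (delayed-acceptance) MCMC scheme described above can be sketched with scalar densities: a proposal is first screened with the cheap proxy density, and the expensive exact density is evaluated only for proposals that survive the first stage. The normal target and biased normal proxy below are illustrative stand-ins for the flow model and its error-corrected proxy:

```python
import math
import random

def two_stage_mcmc(logp_exact, logp_proxy, proposal, x0, n, seed=0):
    """Two-stage Metropolis with a symmetric proposal: stage 1 uses the
    proxy log-density only; stage 2 corrects with the exact one, so the
    chain still targets the exact distribution."""
    rng = random.Random(seed)
    x, lp_e, lp_p = x0, logp_exact(x0), logp_proxy(x0)
    n_exact = 0                     # count of expensive evaluations
    chain = [x]
    for _ in range(n):
        y = proposal(rng, x)
        lq_p = logp_proxy(y)
        if math.log(rng.random()) < lq_p - lp_p:       # stage 1 (proxy)
            n_exact += 1
            lq_e = logp_exact(y)
            if math.log(rng.random()) < (lq_e - lp_e) - (lq_p - lp_p):
                x, lp_e, lp_p = y, lq_e, lq_p          # stage 2 (exact)
        chain.append(x)
    return chain, n_exact

# Toy target: exact = N(0, 1); proxy = N(0.2, 1.2^2), slightly biased
chain, n_exact = two_stage_mcmc(
    logp_exact=lambda x: -0.5 * x * x,
    logp_proxy=lambda x: -0.5 * ((x - 0.2) / 1.2) ** 2,
    proposal=lambda rng, x: x + rng.uniform(-1.0, 1.0),
    x0=0.0, n=5000,
)
```

Proposals rejected at stage 1 never trigger an exact evaluation, which is where the computational saving comes from; the closer the proxy (plus error model) is to the exact density, the fewer exact runs are wasted.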

Relevância:

10.00%

Publicador:

Resumo:

Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge repulsion energies derived from traditional formulas. It is possible to obtain reliable a priori molecular orbitals and electron excitation properties after configuration interaction of singly excited determinants, maintaining interpretative possibilities even though it is a simplified Hamiltonian. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) confirm the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, show reliability, yielding a first allowed transition at 483 nm, very similar to the known experimental value of 500 nm for the "dark state." In this important case, our model assigns a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of CNDOL/2 Hamiltonians.