928 results for Yosida Approximate
Abstract:
In order to investigate a possible association between soybean malate synthase (MS; L-malate glyoxylate-lyase, CoA-acetylating, EC 4.1.3.2) and glyoxysomal malate dehydrogenase (gMDH; (S)-malate: NAD(+) oxidoreductase, EC 1.1.1.37), two consecutive enzymes in the glyoxylate cycle, their elution profiles were analyzed on Superdex 200 HR fast protein liquid chromatography columns equilibrated in low- and high-ionic-strength buffers. Starting with soluble proteins extracted from the cotyledons of 5-d-old soybean seedlings and a 45% ammonium sulfate precipitation, MS and gMDH coeluted on Superdex 200 HR (low-ionic-strength buffer) as a complex with an approximate relative molecular mass (M(r)) of 670,000. Dissociation was achieved in the presence of 50 mM KCl and 5 mM MgCl2, with the elution of MS as an octamer of M(r) 510,000 and of gMDH as a dimer of M(r) 73,000. Polyclonal antibodies raised to the native copurified enzymes recognized both denatured MS and gMDH on immunoblots, and their native forms after gel filtration. When these antibodies were used to screen a lambda ZAP II expression library containing cDNA from 3-d-old soybean cotyledons, they identified seven clones encoding gMDH, whereas ten clones encoding MS were identified using an antibody to SDS-PAGE-purified MS. Of these cDNA clones, a 1.8-kb clone for MS and a 1.3-kb clone for gMDH were fully sequenced. While 88% identity was found between mature soybean gMDH and watermelon gMDH, the N-terminal transit peptides showed only 37% identity. Despite this low identity, the soybean gMDH transit peptide conserves the consensus R(X(6))HL motif also found in plant and mammalian thiolases.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine-learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model and correct the remaining approximate responses, predicting the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact response curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy, such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply the methodology to a problem of saline intrusion in a coastal aquifer.
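To make the functional error-model idea concrete, here is a minimal sketch (our illustration, not the thesis code; ordinary PCA on discretized curves stands in for FPCA, and all names are hypothetical): learn, on the training subset where both responses are known, a regression between principal-component scores, then correct the remaining proxy curves.

```python
# Minimal functional-error-model sketch. Responses are rows of
# (n_realizations, n_times) arrays of discretized curves.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_error_model(proxy_train, exact_train, n_components=5):
    pca_p = PCA(n_components).fit(proxy_train)   # basis for proxy curves
    pca_e = PCA(n_components).fit(exact_train)   # basis for exact curves
    reg = LinearRegression().fit(pca_p.transform(proxy_train),
                                 pca_e.transform(exact_train))
    return pca_p, pca_e, reg

def correct(proxy_curves, pca_p, pca_e, reg):
    # Predict exact-model scores from proxy scores, map back to curves.
    scores = reg.predict(pca_p.transform(proxy_curves))
    return pca_e.inverse_transform(scores)
```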
Abstract:
Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge-repulsion energies derived from traditional formulas. It is possible to obtain a priori molecular orbitals and electron-excitation properties reliably after configuration interaction of singly excited determinants, retaining interpretative power even though the Hamiltonian is simplified. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) confirm the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum-mechanical level, show reliability, yielding a first allowed transition at 483 nm, very close to the known experimental value of 500 nm for the "dark state." In this important case, our model assigns a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of the CNDOL/2 Hamiltonians.
Abstract:
By an exponential sum of the Fourier coefficients of a holomorphic cusp form we mean the sum formed by first taking the Fourier series of the said form, then cutting the beginning and the tail away and considering the remaining sum on the real axis. For simplicity's sake, the coefficients are typically normalized. However, this is not essential, as the normalization can be introduced and removed simply by partial summation. We improve the approximate functional equation for the exponential sums of the Fourier coefficients of holomorphic cusp forms by giving an explicit upper bound for the error term appearing in the equation. The approximate functional equation is originally due to Jutila [9] and is a crucial tool for transforming sums into shorter sums; the transformation changes the point of the real axis at which the sum is considered. We also improve known upper bounds for the size of the exponential sums. For very short sums we do not obtain anything better than the easy estimate obtained by multiplying the upper bound for a single Fourier coefficient (they are bounded by the divisor function, as Deligne [2] showed) by the number of coefficients. This estimate is extremely rough, as no possible cancellation is taken into account. However, for short sums it is unclear whether any substantial cancellation occurs.
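To fix notation (our illustration, consistent with the abstract but not quoted from it), the sums in question and the trivial bound mentioned above can be written as:

```latex
\[
  S(M_1, M_2; \alpha) = \sum_{M_1 \le n \le M_2} a(n)\, e(n\alpha),
  \qquad e(x) = e^{2\pi i x},
\]
where the $a(n)$ are the normalized Fourier coefficients. Since
$|a(n)| \le d(n) \ll_{\varepsilon} n^{\varepsilon}$ by Deligne's bound,
the trivial estimate is
\[
  |S(M_1, M_2; \alpha)| \le \sum_{M_1 \le n \le M_2} |a(n)|
  \ll_{\varepsilon} (M_2 - M_1 + 1)\, M_2^{\varepsilon},
\]
which takes no cancellation between the terms into account.
```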
Abstract:
For a massless fluid (density = 0), the steady flow along a duct is governed exclusively by viscous losses. In this paper, we show that the velocity profile obtained in this limit can be used to calculate the pressure drop up to first order in the density. The method has been applied to the particular case of a duct defined by two plane-parallel discs. For this case, the first-order approximation results in a simple analytical solution, which has been favorably checked against numerical simulations. Finally, an experiment has been carried out with water flowing between the discs. The experimental results show good agreement with the approximate solution.
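Schematically (our paraphrase of the perturbation idea, with hypothetical notation), the pressure drop is expanded in powers of the density:

```latex
\[
  \Delta p(\rho) = \Delta p_0 + \rho\, \Delta p_1 + O(\rho^2),
\]
where $\Delta p_0$ is the purely viscous (Stokes, $\rho = 0$) pressure
drop and $\Delta p_1$ is obtained by inserting the $\rho = 0$ velocity
profile into the inertial terms of the momentum balance.
```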
Abstract:
In recent years, new analytical tools have allowed researchers to extract historical information contained in molecular data, which has fundamentally transformed our understanding of the processes ruling biological invasions. However, the use of these new analytical tools has been largely restricted to studies of terrestrial organisms, despite the growing recognition that the sea contains ecosystems that are amongst the most heavily affected by biological invasions, and that marine invasion histories are often remarkably complex. Here, we studied the routes of invasion and colonisation history of the invasive marine invertebrate Microcosmus squamiger (Ascidiacea) using microsatellite loci and mitochondrial DNA sequence data from 11 populations worldwide. Discriminant analysis of principal components, clustering methods and approximate Bayesian computation (ABC) methods showed that the most likely source of the introduced populations was a single admixture event that involved populations from two genetically differentiated ancestral regions - the western and eastern coasts of Australia. The ABC analyses revealed that colonisation of the introduced range of M. squamiger consisted of a series of non-independent introductions along the coastlines of Africa, North America and Europe. Furthermore, we inferred that the sequence of colonisation across continents was in line with historical taxonomic records - first the Mediterranean Sea and South Africa from an unsampled ancestral population, followed by sequential introductions in California and, more recently, the NE Atlantic Ocean. We revealed the most likely invasion history for world populations of M. squamiger, which is broadly characterized by the presence of multiple ancestral sources and non-independent introductions within the introduced range. The results presented here illustrate the complexity of marine invasion routes and identify a cause-effect relationship between human-mediated transport and the success of widespread marine non-indigenous species, which benefit from stepping-stone invasions and admixture processes involving different sources for the spread and expansion of their range.
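For readers unfamiliar with ABC, a generic rejection sketch follows (illustrative only; the study used dedicated population-genetics software, and every name below is hypothetical). ABC keeps parameter draws whose simulated summary statistics fall close to the observed ones, approximating the posterior without an explicit likelihood.

```python
import numpy as np

def abc_rejection(observed_stats, prior_sampler, simulate,
                  n_draws=100_000, keep_fraction=0.01):
    # observed_stats and the values returned by simulate(theta) are
    # NumPy arrays of summary statistics (e.g. heterozygosities, Fst).
    draws, distances = [], []
    for _ in range(n_draws):
        theta = prior_sampler()          # e.g. divergence times, admixture rates
        stats = simulate(theta)          # summary stats of a simulated dataset
        draws.append(theta)
        distances.append(np.linalg.norm(stats - observed_stats))
    cutoff = np.quantile(distances, keep_fraction)
    # The retained draws approximate the posterior under the model.
    return [t for t, d in zip(draws, distances) if d <= cutoff]
```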
Improving coronary artery bypass graft durability: use of the external saphenous vein graft support.
Abstract:
Coronary bypass grafting remains the best option for patients suffering from multivessel coronary artery disease, and the saphenous vein is used as an additional conduit for multiple complete revascularizations. However, long-term vein graft durability is poor, with almost 75% of grafts occluded after 10 years. To improve durability, the concept of an external supportive structure has been successfully developed in recent years: the eSVS Mesh device (Kips Bay Medical) is an external support for vein grafts made of nitinol wire weft-knitted into a tubular form, with an approximate length of 24 cm and available in three diameters (3.5, 4.0 and 4.5 mm). The device is placed over the outer wall of the vein and carefully deployed to cover the full length of the graft. The mesh is flexible, for full adaptability to the heart anatomy, and is intended to prevent kinking and dilatation of the vein, in addition to suppressing the intimal hyperplasia induced by the systemic blood pressure. The device is designed to reduce the vein diameter by at most about 15-20%, preventing the radial expansion of the vein induced by the arterial blood pressure and the intimal hyperplasia that leads to graft failure. We describe the surgical technique for preparing the vein graft with the external saphenous vein graft support (eSVS Mesh) and share our preliminary clinical results.
Abstract:
Changes in the angle of illumination incident upon a 3D surface texture can significantly alter its appearance, implying variations in the image texture. These texture variations produce displacements of class members in the feature space, increasing the failure rates of texture classifiers. To avoid this problem, a model-based texture recognition system which classifies textures seen from different distances and under different illumination directions is presented in this paper. The system works on the basis of a surface model obtained by means of 4-source colour photometric stereo, used to generate 2D image textures under different illumination directions. The recognition system combines co-occurrence matrices for feature extraction with a nearest-neighbour classifier. Moreover, the recognition allows one to estimate the approximate direction of the illumination used to capture the test image.
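A sketch of the classification stage (ours, not the paper's code) using grey-level co-occurrence features and a nearest-neighbour classifier; it assumes scikit-image >= 0.19 (graycomatrix) and scikit-learn, and 8-bit greyscale inputs.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(image_u8):
    # Co-occurrence matrices at one-pixel offset and four orientations.
    glcm = graycomatrix(image_u8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_classifier(train_images, labels):
    # train_images would be textures rendered under many illumination
    # directions via the photometric-stereo surface model.
    X = np.array([glcm_features(im) for im in train_images])
    return KNeighborsClassifier(n_neighbors=1).fit(X, labels)
```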
Abstract:
Approximations are part of everyday work in Physical Chemistry. In many didactic books in the area of Chemistry, approximations are justified by qualitative rather than quantitative arguments. We elaborate some examples that allow the quantitative impact of an approximation to be evaluated, taking into account the error tolerated in the approximate calculation. The estimate of the error in an approximation should serve as a guide for establishing the validity of the calculations that use it. Thus the shortcut represented by an approximate calculation can replace the exact calculation without loss of quality in the results, while also indicating when the adopted criteria are valid.
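As one hypothetical example in this spirit (ours, not necessarily one of the paper's), the classic weak-acid shortcut [H+] ≈ sqrt(Ka·C) can be checked against the exact quadratic solution for a tolerated error of, say, 5%:

```python
import math

def h_exact(Ka, C):
    # Exact positive root of x**2 + Ka*x - Ka*C = 0, with x = [H+].
    return (-Ka + math.sqrt(Ka**2 + 4*Ka*C)) / 2

def h_approx(Ka, C):
    return math.sqrt(Ka * C)   # valid only when x << C

Ka, C = 1.8e-5, 0.10           # acetic acid in a 0.10 M solution
x_ex, x_ap = h_exact(Ka, C), h_approx(Ka, C)
rel_err = abs(x_ap - x_ex) / x_ex
print(f"exact={x_ex:.3e}, approx={x_ap:.3e}, error={100*rel_err:.2f}%")
# The error is about 0.7%, well below 5%, so the shortcut is acceptable here.
```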
Abstract:
Order picking is a central part of warehouse operations, in some cases representing up to 50% of logistics costs, so even a small efficiency improvement yields considerable savings. Despite its simple basic idea, however, picking is often part of a complex process that depends heavily on information systems and operating models. In this Master's thesis, simulation software is built for a provider of retail logistics services, making it possible to study the effects of certain information-system parameters, warehouse layout and article placement on picking work. The thesis also examines the results of simulations performed in connection with the reorganization of the client company's warehouse and compares them with the situation realized in practice. These simulations show that the software can model the picking batches and picking rounds of the client company's warehouse reasonably accurately when articles are moved from one picking area to another. With the simulator it was possible to estimate changes in the key figures of the picking batches and changes in picking distances. Based on the simulated changes in picking-batch structures and performance, the client company made a decision between two alternative article-placement options. The results measured after the change proved, for each picking area, to be well in line with those produced by the simulator.
Abstract:
There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the cost involved. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the governing equations for an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process, as the simulation set-up and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were performed with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a capillary to a venule showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; furthermore, the result corresponds to experimental observations showing that the RBC deforms during its movement. The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one, and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when the magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
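For reference (our rendering of the standard correlation, not an equation quoted from the thesis), Macdonald et al.'s (1979) modification of the Ergun equation relates the dimensionless pressure drop in a packed bed of particle size $d$ and porosity $\varepsilon$ to the Reynolds number roughly as:

```latex
\[
  f = \frac{\Delta p}{L}\,\frac{d\,\varepsilon^{3}}{\rho\, u^{2}\,(1-\varepsilon)}
  \approx \frac{180}{Re^{*}} + 1.8,
  \qquad Re^{*} = \frac{\rho\, u\, d}{\mu\,(1-\varepsilon)},
\]
where $u$ is the superficial velocity; the constant $1.8$ applies to
smooth particles (rising to about $4.0$ for very rough ones).
```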
Abstract:
The directional consistency and skew-symmetry statistics have been proposed as global measurements of social reciprocity. Although both measures can be useful for quantifying social reciprocity, researchers need to know whether these estimators are biased in order to assess descriptive results properly. That is, if estimators are biased, researchers should compare actual values with expected values under the specified null hypothesis. Furthermore, standard errors are needed to enable suitable assessment of discrepancies between actual and expected values. This paper aims to derive some exact and approximate expressions in order to obtain bias and standard error values for both estimators for round-robin designs, although the results can also be extended to other reciprocal designs.
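As a hedged illustration of the two measures (our sketch, using the common definitions of the directional consistency index and of the skew-symmetry index based on the decomposition X = S + K; not the paper's own code):

```python
import numpy as np

def directional_consistency(X):
    # X[i, j] = number of acts directed from i to j in a round-robin
    # design, with a zero diagonal.
    upper = X[np.triu_indices_from(X, k=1)]
    lower = X.T[np.triu_indices_from(X, k=1)]
    H = np.maximum(upper, lower).sum()   # acts in the more frequent direction
    L = np.minimum(upper, lower).sum()   # acts in the less frequent direction
    return (H - L) / (H + L)

def skew_symmetry(X):
    K = (X - X.T) / 2                    # skew-symmetric part of X
    return (K**2).sum() / (X**2).sum()   # 0 = full reciprocity, 0.5 = none
```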
Abstract:
In this work we present the formulas for the calculation of exact three-center electron sharing indices (3c-ESI) and introduce two new approximate expressions for correlated wave functions. The 3c-ESI uses the third-order density, the diagonal of the third-order reduced density matrix, but the approximations suggested in this work only involve natural orbitals and occupancies. In addition, the first calculations of 3c-ESI using Valdemoro's, Nakatsuji's and Mazziotti's approximations for the third-order reduced density matrix are also presented for comparison. Our results on a test set of molecules, including 32 3c-ESI values, prove that the new approximation based on the cubic root of natural occupancies performs best, yielding absolute errors below 0.07 and an average absolute error of 0.015. Furthermore, this approximation seems to be rather insensitive to the amount of electron correlation present in the system. This newly developed methodology provides a computationally inexpensive method to calculate 3c-ESI from correlated wave functions and opens new avenues to approximate high-order reduced density matrices in other contexts, such as the contracted Schrödinger equation and the anti-Hermitian contracted Schrödinger equation.
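Schematically (our reconstruction from the abstract, with prefactors and permutation details deliberately omitted; consult the paper for the exact expression), the occupancy-based approximation has the form:

```latex
\[
  \delta(A,B,C) \;\propto\; \sum_{i,j,k} \left(n_i\, n_j\, n_k\right)^{1/3}
  S_{ij}(A)\, S_{jk}(B)\, S_{ki}(C),
\]
where $n_i$ are natural occupancies and $S_{ij}(A)$ is the overlap of
natural orbitals $i$ and $j$ integrated over the atomic domain $A$;
the cube root of the occupancies is the key ingredient named above.
```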
Abstract:
This thesis presents a search for a new design of the frame of a permanent-magnet generator mounted in a wind turbine. The objective of this work is to offer new design ideas for the stator frame, i.e. new concepts for connecting the stator core to the stator frame of a generator. The desired aims of the new design concepts are: simplification of production, reduced material use, use of standard components, low weight of the construction, etc. The thesis contains several possible new designs for the stator frame structure, together with a list of connection concepts that can be used to join the stator to the frame. All new ideas are described and compared according to how well they match the stated purposes of the work, and the new design concepts are modeled using modern software. The main part of the thesis contains several approximate computer models of the current and newly proposed constructions, a description of the loads and stresses in the current stator frame, and an evaluation of the most important stress and load characteristics. The final design is the result of all the preceding research: a new frame structure and a joining concept for it. This structure meets the main aims of the work, but a detailed design with dimensions and verification calculations of the frame and welds is not included. The thesis illustrates the design search and the evaluation and comparison of new generator structure concepts; it also gives a general overview of renewable-energy technology and of wind turbines and their components.
Abstract:
Standard Indirect Inference (II) estimators take a given finite-dimensional statistic, Z_{n}, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We propose a novel estimation method that utilizes all available information contained in the distribution of Z_{n}, not just its first moment. This is done by computing the likelihood of Z_{n} and then estimating the parameters either by maximizing the likelihood or by computing the posterior mean for a given prior on the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_{n}, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_{n} will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
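In symbols (our paraphrase of the definitions given above, with $f(Z_n \mid \theta)$ the likelihood of the statistic and $\pi$ a prior):

```latex
\[
  \hat{\theta}_{\mathrm{MIL}} = \arg\max_{\theta}\; \log f(Z_n \mid \theta),
  \qquad
  \hat{\theta}_{\mathrm{BIL}} = E[\theta \mid Z_n]
  = \frac{\int \theta\, f(Z_n \mid \theta)\, \pi(\theta)\, d\theta}
         {\int f(Z_n \mid \theta)\, \pi(\theta)\, d\theta}.
\]
In practice $f(Z_n \mid \theta)$ is unknown and is replaced by a
simulation-based estimate, giving the simulated MIL and BIL estimators.
```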