987 results for kernel methods


Relevance: 20.00%

Abstract:

We prove upper pointwise estimates for the Bergman kernel of the weighted Fock space of entire functions in $L^{2}(e^{-2\phi}) $ where $\phi$ is a subharmonic function with $\Delta\phi$ a doubling measure. We derive estimates for the canonical solution operator to the inhomogeneous Cauchy-Riemann equation and we characterize the compactness of this operator in terms of $\Delta\phi$.
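For orientation, the weighted Fock space mentioned above is usually taken to be the following Hilbert space (notation assumed here, following the standard convention), with the Bergman kernel its reproducing kernel:

```latex
F^{2}_{\phi} = \Big\{ f \ \text{entire} : \|f\|^{2}_{\phi}
  = \int_{\mathbb{C}} |f(z)|^{2}\, e^{-2\phi(z)}\, dA(z) < \infty \Big\}
```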

Relevance: 20.00%

Abstract:

In metallurgical plants, high-quality metal production is always required. Nowadays, soft computing applications are increasingly used for automation of manufacturing processes and quality control instead of mechanical techniques. This thesis presents an overview of soft computing methods. As an example of a soft computing application, an effective fuzzy expert system model for the automatic quality control of a steel degassing process was developed. The purpose of this work is to describe the fuzzy relations as quality hypersurfaces by varying the number of linguistic variables and fuzzy sets.

Relevance: 20.00%

Abstract:

The purpose of this work is to study how attacks based on stack overwriting operate and to show experimentally that current protection techniques are insufficient. The study is carried out by testing how selected security products behave in different test situations. The tested products are Openwall, PaX, Libsafe 2.0 and Immunix 6.2. Testing is performed mainly in a RedHat 7.0 environment using a test program. The tests measure both the products' ability to detect attacks and their impact on performance. The operating principles of the different attack types, and of the methods developed against them, are also presented in detail and illustrated with simplified examples. The techniques presented include buffer overflows, illegal format string parameters, unterminated strings and array overflows. The tests show that the selected products do not prevent all attacks, so the work concludes by examining how to minimize the damage caused by successful attacks.

Relevance: 20.00%

Abstract:

Industrial applications increasingly require real-time data processing. Reliability is one of the most important properties of a system capable of real-time data processing, and to achieve it, both the hardware and the software must be tested. The main focus of this thesis is hardware testing and testability, because a reliable hardware platform is the foundation of future real-time systems. The thesis presents the design of a processor board suited for digital signal processing. The board is intended for predictive condition monitoring of electrical machines. The latest DFT (Design for Testability) methods are introduced and applied to the design of the processor board together with older methods. Experiences and observations on the applicability of the methods are reported at the end of the work. The aim of the work is to develop a component for a web-based condition monitoring system under development at the Department of Electrical Engineering at Lappeenranta University of Technology.

Relevance: 20.00%

Abstract:

This thesis presents a study of methods for protecting information in local and corporate networks. The study includes an analysis of modern cryptographic systems, Internet/intranet programming techniques and methods for distributing access rights. Based on this research, a software prototype for protecting HTML documents was implemented. The development process comprised requirements, system and protection-component design, as well as prototype testing. After the implementation, user instructions were written. The prototype protects the information throughout the entire use of an HTML document, and different companies can adopt it with minor extensions.

Relevance: 20.00%

Abstract:

The jet-to-wire speed ratio is the speed difference between the headbox slice jet and the wire. It strongly affects the final properties of paper and board, such as formation and fibre orientation, and thereby the strength properties of the paper. It is therefore particularly important to know the true jet-to-wire ratio in paper and board manufacturing. The traditional method for determining the jet-to-wire ratio is based on the total pressure of the headbox. With this method, however, the true jet speed often remains unknown because of possible miscalibration of the pressure gauge and inaccuracies in the calculation equation. For this reason, several real-time jet speed measurement methods have been developed. With on-line measurement of the jet speed, the optimal settings of the headbox parameters can be determined and maintained. Headbox parameters include the slice jet, the slice opening profile, edge flows and the uniformity of the feed flow. On-line measurement of the jet speed also reveals other headbox problems, such as mechanical faults, which have traditionally been investigated with time-consuming end-product analyses of paper and board.

Relevance: 20.00%

Abstract:

Objectives: This study compares three methods to forecast the number of acute somatic hospital beds needed in a Swiss academic hospital over the period 2010-2030. Design: Information about inpatient stays is provided through a yearly mandatory reporting of Swiss hospitals, containing anonymized data. The forecast of the number of beds needed compares a basic scenario relying on population projections with two other methods in use in our country that integrate additional hypotheses on future trends in admission rates and length of stay (LOS).

Relevance: 20.00%

Abstract:

Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of continuous-flow microwave heating are equivalent to those of traditional heat transfer methods. In our study, the effects of heating liquid foods by conventional and continuous-flow microwave heating were compared. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether continuous-flow microwave heating and conventional heating have the same effects on liquid foods and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and phase-separation behaviour of the samples treated by the different methods. For milk we also monitored the total viable cell count, and for orange juice the vitamin C content, in addition to the taste of the products assessed by sensory analysis. The majority of the results indicate that the circulating-coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results for the milk samples show clear differences between the heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave-treated samples differed not only from the untreated control but also from the traditionally heat-treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that besides thermal effects, microwave-based food treatment can alter product properties in other ways as well.

Relevance: 20.00%

Abstract:

Functional genomic analyses require intact RNA; however, Passiflora edulis leaves are rich in secondary metabolites that interfere with RNA extraction, primarily by promoting oxidative processes and by precipitating with nucleic acids. This study aimed to evaluate three RNA extraction methods, Concert™ Plant RNA Reagent (Invitrogen, Carlsbad, CA, USA), TRIzol® Reagent (Invitrogen) and TRIzol® Reagent (Invitrogen)/ice, commercial products specifically designed to extract RNA, and to determine which method is the most effective for extracting RNA from the leaves of passion fruit plants. In contrast to the RNA extracted using the other two methods, the RNA extracted using TRIzol® Reagent (Invitrogen) did not have acceptable A260/A280 and A260/A230 ratios or ideal concentrations. Agarose gel electrophoresis showed a strong DNA band for all of the Concert™ extractions but not for the TRIzol® and TRIzol®/ice methods. The TRIzol® method resulted in smears during electrophoresis. Due to its low level of DNA contamination, ideal A260/A280 and A260/A230 ratios and superior sample integrity, RNA from the TRIzol®/ice method was used for reverse transcription-polymerase chain reaction (RT-PCR), and the resulting amplicons were highly similar. We conclude that TRIzol®/ice is the preferred method for RNA extraction from P. edulis leaves.

Relevance: 20.00%

Abstract:

Phlorotannins are the least studied group of tannins and are found only in brown algae. Hitherto the roles of phlorotannins, e.g. in plant-herbivore interactions, have been studied by quantifying the total contents of the soluble phlorotannins with a variety of methods. Little attention has been given to either quantitative variation in cell-wall-bound and exuded phlorotannins or to qualitative variation in individual compounds. A quantification procedure was developed to measure the amount of cell-wall-bound phlorotannins. The quantification of soluble phlorotannins was adjusted for both large- and small-scale samples and used to estimate the amounts of exuded phlorotannins using bladder wrack (Fucus vesiculosus) as a model species. In addition, separation of individual soluble phlorotannins to produce a phlorotannin profile from the phenolic crude extract was achieved by high-performance liquid chromatography (HPLC). Along with these methodological studies, attention was focused on the factors in the procedure which generated variation in the yield of phlorotannins. The objective was to enhance the efficiency of the sample preparation procedure. To resolve the problem of rapid oxidation of phlorotannins in HPLC analyses, ascorbic acid was added to the extractant. The widely used colourimetric method was found to produce a variation in the yield that was dependent upon the pH and concentration of the sample. Using these developed, adjusted and modified methods, the phenotypic plasticity of phlorotannins was studied with respect to nutrient availability and herbivory. An increase in nutrients decreased the total amount of soluble phlorotannins but did not affect the cell-wall-bound phlorotannins, the exudation of phlorotannins or the phlorotannin profile achieved with HPLC. 
The presence of the snail Theodoxus fluviatilis on the thallus induced production of soluble phlorotannins, and grazing by the herbivorous isopod Idotea baltica increased the exudation of phlorotannins. To study whether the among-population variation in phlorotannin contents arises from genetic divergence, from a plastic response of the algae, or both, algae from separate populations were reared in a common garden. Genetic variation among local populations was found in both the phlorotannin profile and the total phlorotannin content. Phlorotannins were also genetically variable within populations. This suggests that local algal populations have diverged in their phlorotannin contents, and that they may respond to natural selection and evolve both quantitatively and qualitatively.

Relevance: 20.00%

Abstract:

This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they make the methodology essentially easier to use; the lack of user-friendly computer programs has been a major obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical in the environmental sciences in mind, and the development work was pursued while working on several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.

Relevance: 20.00%

Abstract:

Let $Q$ be a suitable real function on $C$. An $n$-Fekete set corresponding to $Q$ is a subset $\{Z_{n1},\dots,Z_{nn}\}$ of $C$ which maximizes the expression $\prod_{1\le i<j\le n} |Z_{ni}-Z_{nj}|\, e^{-n\left(Q(Z_{ni})+Q(Z_{nj})\right)}$.

Relevance: 20.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model, which then corrects the ensemble of approximate responses and predicts the "expected" responses of the exact model. The proposed methodology uses all the available information without perceptible additional computational cost, making the uncertainty propagation more accurate and more robust.

The strategy explored in the first part consists in learning, from a subset of realizations, the relationship between the approximate (proxy) and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled to an error model serves as the preliminary evaluation for two-stage MCMC. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 compared with a classical one-stage MCMC implementation.

An open question remains: how to choose the size of the learning set, and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saltwater intrusion problem in a coastal aquifer.