989 results for Objective functions


Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.

In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide the approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
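The two building blocks described above, the functional error model learned between proxy and exact responses and the two-stage MCMC screening, can be sketched compactly. The following Python code is only an illustration under simplifying assumptions: ordinary PCA on discretized curves stands in for FPCA, a plain linear map is used between the score spaces, and all names, array shapes and component counts are placeholders rather than the thesis's actual implementation.

```python
# Illustrative functional error model: learn, on a small training subset, a mapping
# from proxy (approximate) flow-response curves to exact curves via principal-component
# scores, then use it to correct the proxy curves of the remaining realizations.
import numpy as np

def fit_error_model(proxy_train, exact_train, n_components=3):
    """proxy_train, exact_train: (n_train, n_times) arrays of matching response curves."""
    n_components = min(n_components, len(proxy_train))    # cannot exceed training size
    mu_p, mu_e = proxy_train.mean(0), exact_train.mean(0)
    _, _, Vp = np.linalg.svd(proxy_train - mu_p, full_matrices=False)
    _, _, Ve = np.linalg.svd(exact_train - mu_e, full_matrices=False)
    Vp, Ve = Vp[:n_components], Ve[:n_components]         # principal directions
    Sp = (proxy_train - mu_p) @ Vp.T                      # proxy scores
    Se = (exact_train - mu_e) @ Ve.T                      # exact scores
    A, *_ = np.linalg.lstsq(np.c_[Sp, np.ones(len(Sp))], Se, rcond=None)
    return {"mu_p": mu_p, "mu_e": mu_e, "Vp": Vp, "Ve": Ve, "A": A}

def predict_exact(model, proxy_curves):
    """Correct approximate responses: predict exact curves from proxy curves."""
    Sp = (proxy_curves - model["mu_p"]) @ model["Vp"].T
    Se_hat = np.c_[Sp, np.ones(len(Sp))] @ model["A"]
    return model["mu_e"] + Se_hat @ model["Ve"]

# Hypothetical usage: a handful of exact runs train the model, all other realizations
# are evaluated with the proxy only and then corrected.
# model = fit_error_model(proxy[:20], exact[:20])
# corrected = predict_exact(model, proxy[20:])
```

The same cheap proxy-plus-error-model posterior can serve as the first stage of a two-stage MCMC. The sketch below follows the standard two-stage Metropolis scheme for a symmetric proposal; log_post_proxy and log_post_exact are hypothetical callables evaluating the approximate and the exact flow-based log posterior.

```python
# Two-stage Metropolis sketch: proposals are screened with the cheap proxy posterior,
# and the expensive exact flow model is run only for proposals that survive stage 1.
import numpy as np

def two_stage_mcmc(theta0, propose, log_post_proxy, log_post_exact, n_iter=1000, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    theta, lp_proxy, lp_exact = theta0, log_post_proxy(theta0), log_post_exact(theta0)
    chain = [theta]
    for _ in range(n_iter):
        cand = propose(theta)                       # symmetric proposal assumed
        lp_proxy_cand = log_post_proxy(cand)
        # Stage 1: cheap screening; no exact simulation is wasted on poor proposals.
        if np.log(rng.uniform()) >= lp_proxy_cand - lp_proxy:
            chain.append(theta)
            continue
        # Stage 2: exact flow simulation, with the acceptance ratio corrected so the
        # chain still targets the exact posterior.
        lp_exact_cand = log_post_exact(cand)
        log_alpha = (lp_exact_cand - lp_exact) + (lp_proxy - lp_proxy_cand)
        if np.log(rng.uniform()) < log_alpha:
            theta, lp_proxy, lp_exact = cand, lp_proxy_cand, lp_exact_cand
        chain.append(theta)
    return chain
```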


Abstract:

One of the most important problems in optical pattern recognition by correlation is the appearance of sidelobes in the correlation plane, which cause false alarms. We present a method that eliminates sidelobes of up to a given height, provided certain conditions are satisfied. The method can be applied to any generalized synthetic discriminant function filter and is capable of rejecting lateral peaks that are even higher than the central correlation peak. Satisfactory results were obtained in both computer simulations and the optical implementation.
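For context only, the sketch below shows how a correlation plane is typically computed with FFTs and how lateral peaks above a chosen fraction of the central peak would be flagged as false alarms. It illustrates the problem, not the sidelobe-elimination method of the abstract; the 0.5 threshold and the exclusion radius are arbitrary illustrative values.

```python
# Correlation-plane computation and sidelobe flagging (illustrative only).
import numpy as np

def correlation_plane(scene, filt):
    """Cross-correlate a scene with a filter impulse response via the FFT."""
    S = np.fft.fft2(scene)
    H = np.fft.fft2(filt, s=scene.shape)
    return np.real(np.fft.ifft2(S * np.conj(H)))

def sidelobe_alarms(corr, rel_height=0.5, exclude=3):
    """Positions of lateral peaks higher than rel_height times the central peak."""
    peak_pos = np.unravel_index(np.argmax(corr), corr.shape)
    peak_val = corr[peak_pos]
    mask = corr >= rel_height * peak_val
    # Ignore a small neighbourhood around the central correlation peak itself.
    y, x = np.ogrid[:corr.shape[0], :corr.shape[1]]
    mask &= (np.abs(y - peak_pos[0]) > exclude) | (np.abs(x - peak_pos[1]) > exclude)
    return np.argwhere(mask)
```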


Abstract:

The aim of the study was to examine how the sources of competitive advantage change in the grocery retail trade when moving to chain operations, and whether knowledge can be a source of competitive advantage. The study was carried out as a literature study. The source material consisted of domestic and foreign literature on knowledge, knowledge capital, competitive advantage and chain operations. The role of knowledge is discussed from the perspective of the knowledge-based theory, and the analysis was based on the two largest actors in the Finnish grocery market. Competition in the grocery sector is price competition. Moving to chain operations brings economies of scale to purchasing and marketing, and the cost savings are considerable. Chain operations increase the retailer's price competitiveness and improve the efficiency of its operations. The study confirmed the assumption that knowledge has become the most important competitive factor in the retail sector. This is emphasized especially in logistics operations and customer management. The flow of knowledge is important, and it can be improved by investing in information technology systems and personnel training.


Abstract:

The main challenge for gaining biological insights from genetic associations is identifying which genes and pathways explain the associations. Here we present DEPICT, an integrative tool that employs predicted gene functions to systematically prioritize the most likely causal genes at associated loci, highlight enriched pathways and identify tissues/cell types where genes from associated loci are highly expressed. DEPICT is not limited to genes with established functions and prioritizes relevant gene sets for many phenotypes.


Abstract:

The thesis studies role-based access control and its suitability in the enterprise environment. The aim is to investigate how extensively role-based access control can be implemented in the case organization and how it supports the organization's business and IT functions. The study points out the enterprise's needs for access control, the factors affecting access control in the enterprise environment, the requirements for implementation, and the benefits and challenges it brings along. To determine how extensively role-based access control can be implemented in the case organization, the actual state of access control is first examined. Second, a rudimentary desired state (how things should be) is defined, and third, it is completed by using the results of the implementation of a role-based access control application. The study results in a role model for the case organization unit, as well as the building blocks and the framework for an organization-wide implementation. The ultimate value for the organization is delivered by facilitating the normal operations of the organization while protecting its information assets.
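As a minimal illustration of the core relations in a role model (not the thesis's role model itself), the sketch below keeps the two RBAC assignments, user-to-role and role-to-permission, separate and resolves every access check through roles only. Role and permission names are hypothetical.

```python
# Minimal role-based access control sketch: users get roles, roles get permissions,
# and access checks never reference individual users' permissions directly.
from dataclasses import dataclass, field

@dataclass
class RBAC:
    role_permissions: dict[str, set[str]] = field(default_factory=dict)  # role -> permissions
    user_roles: dict[str, set[str]] = field(default_factory=dict)        # user -> roles

    def grant(self, role: str, permission: str) -> None:
        self.role_permissions.setdefault(role, set()).add(permission)

    def assign(self, user: str, role: str) -> None:
        self.user_roles.setdefault(user, set()).add(role)

    def check_access(self, user: str, permission: str) -> bool:
        return any(permission in self.role_permissions.get(role, set())
                   for role in self.user_roles.get(user, set()))

# Hypothetical usage with made-up role and permission names.
rbac = RBAC()
rbac.grant("payroll_clerk", "payroll:read")
rbac.assign("alice", "payroll_clerk")
assert rbac.check_access("alice", "payroll:read")
assert not rbac.check_access("alice", "payroll:write")
```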


Abstract:

Recent advances in machine learning increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious for human experts to program. The tasks for which such tools are needed arise in many areas, especially in the fields of bioinformatics and natural language processing. The machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge about the task at hand in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, the incorporation of prior knowledge is often done by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in the algorithms efficiently.
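As one concrete example of why regularized least-squares lends itself to fast cross-validation, the sketch below uses the standard closed-form leave-one-out residuals of kernel regularized least-squares, so no per-fold retraining is needed. This is a generic textbook construction, not necessarily the algorithm developed in the thesis; the RBF kernel and the parameter values are placeholders.

```python
# Kernel regularized least-squares with exact leave-one-out (LOO) error computed from
# a single fit: yhat = H y with H = K (K + lambda*I)^(-1), and the LOO residual of
# sample i is (y_i - yhat_i) / (1 - H_ii).
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rls_loo_error(X, y, lam=1.0, gamma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    H = K @ np.linalg.inv(K + lam * np.eye(n))      # hat (smoother) matrix
    residuals = (y - H @ y) / (1.0 - np.diag(H))    # exact LOO residuals
    return float(np.mean(residuals ** 2))

# Hypothetical usage: pick the regularization parameter with the cheap LOO estimate.
# best_lam = min([0.01, 0.1, 1.0, 10.0], key=lambda l: rls_loo_error(X, y, lam=l))
```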


Abstract:

The Yrittäjyyden Tuki project operated as part of the national WomEqual development partnership programme in 2005-2007, with the aim of supporting the operating conditions of women entrepreneurs by developing for them a support model based on peer group activity. The primary aim of this study is to examine how the women entrepreneurs who participated in the project's peer group activity experienced peer support between women entrepreneurs as a form of activity. The study also seeks to understand which factors committed the women entrepreneurs to the peer group activity and what they felt they had learned as a result of it. In addition, the study examines how a peer support group serves as a means of networking between women entrepreneurs. The study is qualitative and data-driven. The primary data were collected through semi-structured thematic interviews and the secondary data through a survey. The results show that women entrepreneurs consider mental and social support important. Support received from another woman entrepreneur, a peer, was seen as a strength of the peer support group. Learning in the peer group took place through sharing experiences. In creating new networks, the peer support group did not fully meet expectations.


Abstract:

The aim of this study was to understand how the competence management of the target company's employees is carried out. The study was qualitative in nature, and its empirical data consisted of thematic interviews with eight supervisors of the target company. Based on the results, it can be confirmed that the function's competence management and competence identification have been carried out in accordance with the second stage of development. The supervisors had understood the significance of employees' competence as a factor advancing the business. The current development discussion practice emerged as an important target for development; it should be changed into a form that better supports the function's business and the employees' competence. In addition, the supervisors had various practices and tools at their disposal for managing employees' competence, and these should be further developed and harmonized in the future.


Abstract:

The conversion of cellular prion protein (PrPc), a GPI-anchored protein, into a proteinase K-resistant and infective form (generally termed PrPsc) is mainly responsible for Transmissible Spongiform Encephalopathies (TSEs), characterized by neuronal degeneration and progressive loss of basic brain functions. Although PrPc is expressed by a wide range of tissues throughout the body, the complete repertoire of its functions has not been fully determined. Recent studies have confirmed its participation in basic physiological processes such as cell proliferation and the regulation of cellular homeostasis. Other studies indicate that PrPc interacts with several molecules to activate signaling cascades with a high number of cellular effects. To determine PrPc functions, transgenic mouse models have been generated in the last decade. In particular, mice lacking specific domains of the PrPc protein have revealed the contribution of these domains to neurodegenerative processes. A dual role of PrPc has been shown, since most authors report protective roles for this protein while others describe pro-apoptotic functions. In this review, we summarize new findings on PrPc functions, especially those related to neural degeneration and cell signaling.


Abstract:

PURPOSE: Iterative algorithms introduce new challenges in the field of image quality assessment. The purpose of this study is to use a mathematical model observer to objectively evaluate low-contrast detectability in CT. MATERIALS AND METHODS: A QRM 401 phantom containing 5 and 8 mm diameter spheres with contrast levels of 10 and 20 HU was used. The images were acquired at 120 kV with a CTDIvol of 5, 10, 15 and 20 mGy and reconstructed using the filtered back-projection (FBP), adaptive statistical iterative reconstruction 50% (ASIR 50%) and model-based iterative reconstruction (MBIR) algorithms. The model observer used is the Channelized Hotelling Observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. The CHO performance was compared with the outcomes of six human observers who performed four-alternative forced choice (4-AFC) tests. RESULTS: For the same CTDIvol level, the MBIR algorithm gives the highest detectability index according to the CHO model. The outcomes of the human observers and the results of the CHO are highly correlated whatever the dose level, the signal considered and the algorithm used, when some noise is added to the CHO model. The Pearson coefficient between the human observers and the CHO is 0.93 for FBP and 0.98 for MBIR. CONCLUSION: The performance of human observers can be predicted by the CHO model. This opens the way for reporting, in parallel to the standard dose report, the expected level of low-contrast detectability. The introduction of iterative reconstruction requires such an approach to ensure that dose reduction does not impair diagnosis.
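For readers unfamiliar with model observers, the following is a generic channelized Hotelling observer sketch with difference-of-Gaussian channels: images are reduced to a few channel outputs, and the detectability index is the Mahalanobis distance between signal-present and signal-absent channel outputs. The channel widths, the pooled covariance estimate and the optional internal-noise term are illustrative assumptions, not the exact D-DOG implementation used in the study.

```python
# Generic channelized Hotelling observer (CHO) sketch.
import numpy as np

def dog_channels(size, sigmas=(2.0, 4.0, 8.0, 16.0)):
    """Difference-of-Gaussian radial channels, one per adjacent pair of widths."""
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    r2 = x**2 + y**2
    gauss = [np.exp(-r2 / (2.0 * s**2)) for s in sigmas]
    return np.stack([(gauss[i + 1] - gauss[i]).ravel() for i in range(len(sigmas) - 1)])

def cho_detectability(signal_imgs, noise_imgs, internal_noise=0.0):
    """signal_imgs, noise_imgs: (n_images, size, size) arrays of matched ROIs."""
    size = signal_imgs.shape[-1]
    U = dog_channels(size)                                # (n_channels, size*size)
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T  # channelized signal-present data
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T    # channelized signal-absent data
    dv = vs.mean(0) - vn.mean(0)                          # mean channel response difference
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))               # pooled channel covariance
    S = S + internal_noise * np.eye(len(S))               # optional internal noise
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))    # detectability index d'
```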


Abstract:

Previous functional MRI (fMRI) studies have associated anterior hippocampus with imagining and recalling scenes, imagining the future, recalling autobiographical memories and visual scene perception. We have observed that this typically involves the medial rather than the lateral portion of the anterior hippocampus. Here, we investigated which specific structures of the hippocampus underpin this observation. We had participants imagine novel scenes during fMRI scanning, as well as recall previously learned scenes from two different time periods (one week and 30 min prior to scanning), with analogous single object conditions as baselines. Using an extended segmentation protocol focussing on anterior hippocampus, we first investigated which substructures of the hippocampus respond to scenes, and found both imagination and recall of scenes to be associated with activity in presubiculum/parasubiculum, a region associated with spatial representation in rodents. Next, we compared imagining novel scenes to recall from one week or 30 min before scanning. We expected a strong response to imagining novel scenes and 1-week recall, as both involve constructing scene representations from elements stored across cortex. By contrast, we expected a weaker response to 30-min recall, as representations of these scenes had already been constructed but not yet consolidated. Both imagination and 1-week recall of scenes engaged anterior hippocampal structures (anterior subiculum and uncus respectively), indicating possible roles in scene construction. By contrast, 30-min recall of scenes elicited significantly less activation of anterior hippocampus but did engage posterior CA3. Together, these results elucidate the functions of different parts of the anterior hippocampus, a key brain area about which little is definitely known.