628 results for Maximizing
Abstract:
Approximate models (proxies) can be employed to reduce the computational cost of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to biased estimates. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both the exact and the approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and to reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Moreover, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model, assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be used effectively beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
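The core of the approach, learning a map between proxy and exact responses in a reduced space, can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: synthetic decay curves play the role of solver outputs, plain PCA on discretized curves approximates FPCA, and the error model is a simple least-squares map between the two score spaces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learning set: n realizations, each response a curve on a grid.
# Synthetic decay curves stand in for the exact solver; the "proxy" is a
# scaled version with a systematic bias.
n, m = 40, 100
t = np.linspace(0.0, 1.0, m)
exact = np.array([np.exp(-t / s) for s in rng.uniform(0.2, 1.0, n)])
proxy = 0.8 * exact + 0.05 * np.sin(4 * np.pi * t)

def pca_scores(X, k):
    """Center the curves and project them on their first k principal components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:k].T, Vt[:k], mean

k = 3
zp, comp_p, mean_p = pca_scores(proxy, k)
ze, comp_e, mean_e = pca_scores(exact, k)

# Error model: linear map from proxy scores to exact scores,
# fitted on the learning set where both responses are known.
A, *_ = np.linalg.lstsq(zp, ze, rcond=None)

def predict_exact(proxy_curve):
    """Predict the exact response from the proxy response alone."""
    z = (proxy_curve - mean_p) @ comp_p.T
    return mean_e + (z @ A) @ comp_e

# A newly generated realization: only its proxy response is needed.
new_exact = np.exp(-t / 0.5)
new_proxy = 0.8 * new_exact + 0.05 * np.sin(4 * np.pi * t)
err = np.abs(predict_exact(new_proxy) - new_exact).max()
```

Here the proxy is an affine transformation of the exact response, so the linear score-space map recovers it almost exactly; real proxy-exact relationships would warrant a richer regression and a check of the variance retained by the leading components.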
Abstract:
The aim of the study was to examine the efficiency of the sheet-cutting process at the Inkeroinen board mill from the perspective of the finishing department. The examination focused on optimizing the cutting costs of orders on the basis of machine efficiency rates and sheeting costs. The goal was to develop a cutting-cost optimization model as a tool for production planning and to assess its impact on established production planning practices. The current level of efficiency was examined and the cost effects were calculated on the basis of statistical data. The literature section reviews the theory of operations management and methods of efficiency calculation, which served as the basis for improving the efficiency of the sheet-cutting process. According to the study, cutting costs were driven above all by sheeting efficiency; the effect of broke caused by edge trims was considerably smaller. Maximizing sheeting efficiency yielded a cost saving of at least 20% compared with the saving obtained by minimizing winder broke costs.
Abstract:
Diagnosis of community-acquired Legionella pneumonia (CALP) is currently performed by means of laboratory techniques which may delay diagnosis by several hours. To determine whether an artificial neural network (ANN) can distinguish CALP from non-Legionella community-acquired pneumonia (NLCAP) and serve as a standard tool for clinicians, we prospectively studied 203 patients with community-acquired pneumonia (CAP) diagnosed by laboratory tests. Twenty-one clinical and analytical variables were recorded to train a neural net with two classes (CALP or NLCAP). In this paper we deal with the problems of diagnosis, feature selection, and ranking of the features as a function of their classification importance, and with the design of a classifier under the criterion of maximizing the area under the ROC (receiver operating characteristic) curve, which gives a good trade-off between true positives and false negatives. In order to guarantee the validity of the statistics, the train-validation-test databases were rotated by the jackknife technique, and a multistart procedure was applied in order to make the system insensitive to local maxima.
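The two generic ingredients named above, an AUC criterion and rotated train-validation-test splits, can be sketched briefly. This is not the authors' classifier, just a minimal illustration: the AUC is computed via the Mann-Whitney identity, and the rotation scheme for the three databases is a hypothetical block rotation.

```python
import numpy as np

def roc_auc(scores, labels):
    """Mann-Whitney form of the area under the ROC curve: the probability
    that a random positive case outranks a random negative one."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def rotated_splits(n, n_folds):
    """Rotate contiguous blocks through the test and validation roles,
    jackknife style, so every case is tested exactly once."""
    folds = np.array_split(np.arange(n), n_folds)
    for i in range(n_folds):
        test = folds[i]
        val = folds[(i + 1) % n_folds]
        train = np.concatenate(
            [f for j, f in enumerate(folds) if j not in (i, (i + 1) % n_folds)]
        )
        yield train, val, test

# Perfectly separated toy scores give the maximal area of 1.0.
labels = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
auc = roc_auc(scores, labels)
splits = list(rotated_splits(12, 4))
```

In a real setting the classifier would be retrained on each rotated split and the AUC averaged over the test blocks.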
Abstract:
This final thesis project was carried out in the Industrial Management department of the University of Applied Sciences Stadia for Forum Virium Helsinki. The purpose of this study was to answer the question of how companies can use an online customer community for co-creation in service development, and what value is gained from it. The paper combines a range of recently published theoretical works with an ongoing customer community case development. The study aims to provide new information and approaches to new-service developers that may increase the success of the community-building process. The paper also outlines the benefits of using an online customer community and offers practical suggestions for maximizing the value gained from the community in service development projects. The concepts and suggestions introduced in the study appear to open notable new possibilities for the service development process, but they have to be tested further empirically. This paper connects the online consumer community of co-creation to an important organizational process of innovation management, suggesting that it holds great value for business. Online customer communities offer the potential to improve the success of new services or products, enabling early, penetrating market entry and creating sustainable competitive advantage.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and increases the accuracy and robustness of the uncertainty propagation.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide the approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how do we choose the size of the learning set, and how do we identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply the methodology to a problem of saline intrusion in a coastal aquifer.
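The two-stage (delayed-acceptance) MCMC idea can be sketched in a few lines. This toy version is not the thesis implementation: a shifted Gaussian stands in for the proxy plus error model and a standard Gaussian for the exact posterior; the point is that cheap stage-1 rejections avoid exact-model runs, while the stage-2 correction keeps the chain targeting the exact distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_exact(x):
    """Stand-in for the expensive exact posterior (standard normal)."""
    return -0.5 * x**2

def log_proxy(x):
    """Cheap, slightly biased approximation (proxy plus error model)."""
    return -0.5 * (x - 0.1)**2

def two_stage_mcmc(n_steps, step=1.0):
    x = 0.0
    samples, exact_calls = [], 0
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        # Stage 1: screen the proposal with the proxy only; a rejection
        # here costs no exact-model simulation.
        if np.log(rng.uniform()) >= log_proxy(prop) - log_proxy(x):
            samples.append(x)
            continue
        # Stage 2: correct with the exact model so the chain still
        # targets the exact distribution.
        exact_calls += 1
        a = (log_exact(prop) - log_exact(x)) - (log_proxy(prop) - log_proxy(x))
        if np.log(rng.uniform()) < a:
            x = prop
        samples.append(x)
    return np.array(samples), exact_calls

samples, calls = two_stage_mcmc(5000)
```

The closer the proxy-plus-error-model is to the exact posterior, the more stage-2 acceptances approach one, which is exactly why a good error model raises the effective acceptance rate.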
Abstract:
The integration of equipment cabinets consists of assembling modules and cables into model-specific units. This assembly process is order-driven and carried out model by model as one-off assembly. The difficulty of the integration work and the assembly time vary strongly between models. Combined with labor turnover, this creates a challenging environment for developing production in terms of both quality and capacity. This thesis investigates whether these can be improved by splitting the production process into smaller stages that are easier to balance and to learn. Applying an assembly line to order-driven production requires tolerating larger deviations than a conventional paced assembly line. Because of the widely differing work times and the broad model variation, the line cannot be managed as systematically as one with work stages of equal length. Achieving efficient production on such a line requires the ability to plan the work sequence and to simulate it. This thesis uses simulation to evaluate the performance of the assembly line under stochastic demand. The model was built from product manufacturing times, broken down by model into all possible work tasks. These tasks were then balanced across the workstations. The goal of the balancing was to minimize the strong dispersion in task durations, which the randomness of model demand amplifies. Based on the simulations, a simplified rule for forming the work sequence was derived. The modeling aimed to maximize production efficiency while minimizing both work in progress and lead time. Once the most efficient alternative was found, the suitability of the assembly line for equipment cabinet integration was assessed.
Abstract:
Patients with metastatic prostate cancer (PC) represent a heterogeneous group, with survival varying between 13 and 75 months. The current standard treatment in this setting is hormonal therapy, with or without docetaxel-based chemotherapy. In the era of individualized medicine, however, maximizing treatment options, especially in long-term surviving patients with limited disease burden, is of paramount importance. Emerging data, mainly from retrospective surgical series, show survival benefits in men diagnosed with metastatic PC following definitive therapy for the prostate. Whether irradiation of the primary tumor in metastatic disease might improve the therapeutic ratio in association with systemic treatments remains investigational. In this scenario, modern radiation therapy (RT) can play a significant role owing to its intrinsic capability to act as a more general immune-response modifier, as well as to its potentially better toxicity profile compared to surgery. Preclinical data, clinical experience, and challenges in local treatment of de novo metastatic PC are reviewed and discussed.
Abstract:
This paper departs from the standard profit-maximizing model of firm behavior by assuming that firms are motivated in part by personal animosity, or respect, towards their competitors. A reciprocal firm responds to unkind behavior of rivals with unkind actions (negative reciprocity), while at the same time it responds to kind behavior of rivals with kind actions (positive reciprocity). We find that collusion is easier to sustain when firms have a concern for reciprocity towards competing firms, provided that they consider collusive prices to be kind and punishment prices to be unkind. Thus, reciprocity concerns among firms can have adverse welfare consequences for consumers.
Abstract:
In this paper we investigate the optimal choice of prices and/or exams by universities in the presence of credit constraints. We first compare the optimal behavior of a public, welfare-maximizing monopoly and a private, profit-maximizing monopoly. Then we model competition between a public and a private institution and investigate the new role of exams/prices in this environment. We find that, under certain circumstances, the public university may have an interest in raising tuition fees from minimum levels if it cares about global welfare. This will be the case provided that (i) the private institution has higher quality and uses only prices to select applicants, or (ii) the private institution has lower quality and also uses exams to select students. When this is the case, there are efficiency grounds for raising public prices.
Abstract:
The Kyoto Protocol allows Annex I countries to deduct carbon sequestered by land use, land-use change and forestry from their national carbon emissions. Thornley and Cannell (2000) demonstrated that the objectives of maximizing timber production and carbon sequestration are not complementary. Based on this finding, this paper determines the optimal selective management regime, taking into account the underlying biophysical and economic processes. The results show that the net benefits of carbon storage compensate the decrease in net benefits of timber production only once the carbon price has exceeded a certain threshold value. The sequestration costs are significantly lower than previous estimates.
Abstract:
For economical and ecological reasons, synthetic chemists are confronted with the increasing obligation of optimizing their synthetic methods. Maximizing efficiency and minimizing costs in the production of molecules and macromolecules therefore constitute one of the most exciting challenges of synthetic chemistry. The ideal synthesis should produce the desired product in 100% yield and selectivity, in a safe and environmentally acceptable process. In this highlight, the concepts of atom economy, molecular engineering and biphasic organometallic catalysis, which address these issues at the molecular level for the generation of "green" technologies, are introduced and discussed.
Abstract:
The optimal design of a heat exchanger system is based on given model parameters together with given standard ranges for machine design variables. The goals of minimizing the Life Cycle Cost (LCC) function, which represents the price of the saved energy, and of maximizing the momentary heat recovery output with the given constraints satisfied, while taking into account the uncertainty in the models, were successfully met. The Nondominated Sorting Genetic Algorithm II (NSGA-II) for the design optimization of the system is presented and implemented in the Matlab environment. Markov chain Monte Carlo (MCMC) methods are also used to take into account the uncertainty in the models. Results show that the price of saved energy can be optimized. A wet heat exchanger is found to be more efficient and beneficial than a dry heat exchanger, even though its construction is expensive (160 EUR/m2) compared to that of a dry heat exchanger (50 EUR/m2). It was found that a longer lifetime favors higher CAPEX and lower OPEX, and vice versa; the effect of the uncertainty in the models was identified in a simplified case of minimizing the area of a dry heat exchanger.
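NSGA-II's central ingredient is nondominated sorting of candidate designs against the competing objectives (here, life cycle cost versus heat recovery output). A minimal sketch of extracting the first front, with made-up objective values rather than the paper's model:

```python
import numpy as np

def pareto_front(costs):
    """Return indices of nondominated points (all objectives minimized):
    the first front of NSGA-II's nondominated sorting."""
    front = []
    for i in range(len(costs)):
        dominated = any(
            np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
            for j in range(len(costs)) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Made-up designs: (life cycle cost, negated heat recovery output),
# so that both objectives are minimized.
costs = np.array([
    [1.0, 5.0],   # cheap, low recovery
    [2.0, 3.0],   # balanced trade-off
    [4.0, 1.0],   # expensive, high recovery
    [4.0, 4.0],   # dominated by [2.0, 3.0]
])
front = pareto_front(costs)
```

The full algorithm ranks the remaining points into successive fronts and adds crowding-distance selection; this quadratic sketch only identifies the Pareto-optimal trade-offs among which a designer chooses.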
Abstract:
A simple and low-cost device (ca. US$ 150) comprising two photodiodes fixed in a lab-made Perspex flow cell is proposed for chemiluminescence measurements. The characteristics of the device (large observation window and reduced thickness) maximize the amount of emitted radiation detected. A sensitivity improvement of ca. 50% was observed by employing two photodiodes for signal measurements. The performance of the device was assessed by the oxidation of luminol by hydrogen peroxide, yielding a linear response within the range of 2.50 to 500 µmol L-1 H2O2. The detection limit was estimated as 0.8 µmol L-1 hydrogen peroxide, which is comparable with those obtained using equipment based on photomultipliers.
Abstract:
In this article, a new technique for grooming low-speed traffic demands into high-speed optical routes is proposed. This enhancement allows a transparent wavelength-routing switch (WRS) to aggregate traffic en route over existing optical routes without incurring expensive optical-electrical-optical (OEO) conversions. This implies that: a) an optical route may be considered as having more than one ingress node (all inline), and b) traffic demands can partially use optical routes to reach their destination. The proposed optical routes are named "lighttours", since the traffic originating from different sources can be forwarded together in a single optical route, i.e., as taking a "tour" over different sources towards the same destination. The possibility of creating lighttours is the consequence of a novel WRS architecture proposed in this article, named "enhanced grooming" (G+). The ability to groom more traffic in the middle of a lighttour is achieved with the support of a simple optical device named the lambda-monitor (previously introduced in the RingO project). In this article, we present the new WRS architecture and its advantages. To compare the advantages of lighttours with respect to classical lightpaths, an integer linear programming (ILP) model is proposed for the well-known multilayer problem: traffic grooming, routing, and wavelength assignment. The ILP model may be used for several objectives; however, this article focuses on two: maximizing the network throughput, and minimizing the number of OEO conversions used. Experiments show that G+ can route all the traffic using only half of the total OEO conversions needed by classical grooming. A heuristic is also proposed, aiming at achieving near-optimal results in polynomial time.
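The grooming intuition, letting demands headed to the same destination share one optical route until its capacity is exhausted so that intermediate sources add traffic without an OEO conversion, can be illustrated with a toy greedy sketch. This is neither the G+ architecture nor the ILP model of the article, and the demand set is hypothetical:

```python
from collections import defaultdict

# Hypothetical demands: (source, destination, load) in wavelength-capacity units.
demands = [("a", "d", 0.3), ("b", "d", 0.4), ("c", "d", 0.2), ("a", "e", 0.5)]
CAPACITY = 1.0

def groom_by_destination(demands):
    """Greedy sketch of lighttour-style grooming: demands towards the same
    destination share an optical route while capacity allows, so sources
    along the way inject traffic without terminating the route."""
    routes = defaultdict(list)   # destination -> list of route loads
    for src, dst, load in demands:
        for i, used in enumerate(routes[dst]):
            if used + load <= CAPACITY:
                routes[dst][i] = used + load   # ride an existing route
                break
        else:
            routes[dst].append(load)           # open a new route
    return {dst: len(loads) for dst, loads in routes.items()}

routes = groom_by_destination(demands)
```

With per-demand lightpaths, the three demands towards "d" would each terminate electronically at "d"; here they share a single route, which is the kind of OEO saving the article quantifies with its ILP.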
Abstract:
BACKGROUND: Over the last 20 years, a number of instruments developed for the assessment of health-related quality of life (HRQL) in dementia have been introduced. The aim of this review is to synthesize evidence from published reviews of HRQL measures in dementia, and any new literature, in order to identify dementia-specific HRQL instruments, the domains they measure, and their operationalization. METHODS: An electronic search of PsycINFO and PubMed was conducted, from inception to December 2011, using a combination of key words that included quality of life and dementia. RESULTS: Fifteen dementia-specific HRQL instruments were identified. Instruments varied in their country of development/validation, target dementia severity, data collection method, operationalization of HRQL in dementia, psychometric properties, and scoring. The most common domains assessed include mood, self-esteem, social interaction, and enjoyment of activities. CONCLUSIONS: A number of HRQL instruments for dementia are available. The suitability of the scales for different contexts is discussed. Many studies do not specifically set out to measure dementia-specific HRQL but do include related items. Determining how best to operationalize the many HRQL domains will be helpful for mapping measures of HRQL in such studies, maximizing the value of existing resources.