941 results for Biofertilizer and optimization


Relevance: 80.00%

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing: their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task, because the variance process is not observable. Several estimation methodologies deal with latent variables; one appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research.

This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts arise naturally from the issues investigated and the results obtained in the first.

The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance process, based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P 500 index, which was chosen as a general representative of the stock asset class.

Hence, the next question is which jump process to use to model S&P 500 returns. Within the framework of affine jump-diffusion models, the choice of jump process boils down to defining the intensity of the compound Poisson process (a constant or some function of the state variables) and choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, at least three jump-size distributions are currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used when modelling the S&P 500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, so either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data.

The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already arisen when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, there is no reason to be sure that our parameter estimates coincide with the true parameters of the models.
The conclusion of the second chapter provides one more reason to run such a test. The third part of this thesis therefore concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately appears: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure; in practice, however, the relationship is not so straightforward because of increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal (one-dimensional) unconditional characteristic function in the estimation instead of the joint (bi-dimensional) one. It turns out that the preference for one or the other depends on the model to be estimated, so the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
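The abstract does not reproduce the estimator itself, so the following is only a schematic of the continuous ECF criterion in its standard form from the empirical characteristic function literature; the weight function w(u) and the dimension of the argument u are assumptions here, not the thesis's actual choices.

```latex
% Empirical characteristic function of observations x_1, ..., x_n:
\hat{\phi}_n(u) = \frac{1}{n} \sum_{j=1}^{n} e^{\,i\, u^\top x_j}
% The continuous ECF estimator minimizes a weighted L^2 distance to the
% model characteristic function \phi(u; \theta), which is available in
% closed form for affine stochastic volatility jump-diffusion models:
\hat{\theta} = \arg\min_{\theta} \int \bigl| \hat{\phi}_n(u) - \phi(u; \theta) \bigr|^2 \, w(u) \, du
```

The appeal described in the preface follows from this form: since \phi(u; \theta) is known analytically for affine models, evaluating the criterion requires neither discretization nor simulation of the latent variance process.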

Relevance: 80.00%

Abstract:

The development and optimization of efficient transformation protocols is essential in new citrus breeding programs, not only for rootstock but also for scion improvement. Transgenic 'Hamlin' sweet orange (Citrus sinensis (L.) Osbeck) plants were obtained by Agrobacterium tumefaciens-mediated transformation of epicotyl segments collected from seedlings germinated in vitro. Factors influencing genetic transformation efficiency were evaluated, including seedling incubation conditions, inoculation time with Agrobacterium, and co-culture conditions. Epicotyl segments were adequate explants for transformation, regenerating plants by direct organogenesis. A higher transformation percentage was obtained with explants collected from seedlings germinated in darkness, transferred to a 16-hour photoperiod for 2-3 weeks, and inoculated with Agrobacterium for 15-45 min. The best co-culture condition was incubation of the explants in darkness for three days in culture medium supplemented with 100 µM acetosyringone. Genetic transformation was confirmed by beta-glucuronidase (GUS) assays and, subsequently, by PCR amplification of the nptII and GUS genes.

Relevance: 80.00%

Abstract:

Direct identification as well as isolation of antigen-specific T cells became possible with the development of "tetramers" based on avidin-fluorochrome conjugates associated with mono-biotinylated class I MHC-peptide monomeric complexes. In principle, a series of distinct class I MHC-peptide tetramers, each labelled with a different fluorochrome, would make it possible to simultaneously enumerate as many unique antigen-specific CD8(+) T cell populations. In practice, however, only phycoerythrin- and allophycocyanin-conjugated tetramers have been generally available, imposing serious constraints on multiple labeling. To overcome this limitation, we have developed dextramers, multimers based on a dextran backbone bearing multiple fluorescein and streptavidin moieties. Here we demonstrate the functionality and optimization of these new probes on human CD8(+) T cell clones of four independent antigen specificities. Their application to the analysis of relatively low-frequency antigen-specific T cells in peripheral blood, as well as their use in fluorescence microscopy, is demonstrated. The data show that dextramers produce a stronger signal than their fluoresceinated tetramer counterparts; thus, they could become the reagents of choice as antigen-specific T cell labeling transitions from basic research to clinical application.

Relevance: 80.00%

Abstract:

Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitening model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and from the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. The calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d' as a function of detector air kerma, and a linear relationship was found between d' and the contrast-to-noise ratio. The threshold thickness values used to specify acceptable performance in the European Guidelines for the 0.10 and 0.25 mm diameter discs were equivalent to calculated threshold detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
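For reference, the NPWE detectability index has the standard model-observer form shown below; the notation (S for the task contrast function, E for the eye filter, NNPS for the normalized noise power spectrum) is the usual one in this literature, and the paper's exact task and eye-filter parameterization are not given in the abstract.

```latex
d'^2_{\mathrm{NPWE}} =
  \frac{\left[ \iint |S(u,v)|^2 \, \mathrm{MTF}^2(u,v) \, E^2(u,v) \, du \, dv \right]^2}
       {\iint |S(u,v)|^2 \, \mathrm{MTF}^2(u,v) \, E^4(u,v) \, \mathrm{NNPS}(u,v) \, du \, dv}
```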

Relevance: 80.00%

Abstract:

This thesis provides a decision support framework that has a significant impact on the economic performance and viability of a hydropower company. The study addresses the short-term hydropower planning problem in the deregulated Nordic electricity market. The basics of the Nordic electricity market, trading mechanisms, hydropower system characteristics and production planning are presented, and the related modelling theory and optimization methods are covered as well. The thesis provides a mixed integer linear programming model, applied in a successive linearization method, for optimal bidding and scheduling decisions in short-term hydropower system operation. A scenario-based deterministic approach is used to model uncertainty in market price and inflow. The thesis proposes a calibration framework to examine the physical accuracy and economic optimality of the decisions suggested by the model. A calibration example is provided with data from a real hydropower system, using a commercial modelling application with the mixed integer linear programming solver CPLEX.
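The abstract does not reproduce the optimization model. As a purely illustrative sketch of the general shape of such a short-term hydro scheduling MILP, here is a minimal Python/PuLP version; the prices, inflows, reservoir bounds and the single linear water-to-power coefficient are invented stand-ins for the thesis's successive linearization and its CPLEX model.

```python
# Minimal short-term hydro scheduling MILP sketch (illustrative only).
# All data below are invented; a single linear water-to-power coefficient
# stands in for the successively linearized production function.
import pulp

T = 24                                                  # hourly horizon
price = [30 + 10 * (8 <= t <= 20) for t in range(T)]    # EUR/MWh, invented
inflow = [50.0] * T                                     # invented inflow scenario
v0, v_max, q_max = 1000.0, 2000.0, 120.0                # reservoir/discharge limits
eta = 0.9                                               # MWh per unit of water, invented

prob = pulp.LpProblem("short_term_hydro", pulp.LpMaximize)
q = [pulp.LpVariable(f"discharge_{t}", 0, q_max) for t in range(T)]
v = [pulp.LpVariable(f"volume_{t}", 0, v_max) for t in range(T)]
on = [pulp.LpVariable(f"unit_on_{t}", cat="Binary") for t in range(T)]

# Objective: revenue from selling the generated energy at the spot price.
prob += pulp.lpSum(price[t] * eta * q[t] for t in range(T))

for t in range(T):
    prev = v0 if t == 0 else v[t - 1]
    prob += v[t] == prev + inflow[t] - q[t]   # reservoir water balance
    prob += q[t] <= q_max * on[t]             # discharge only when committed
    prob += q[t] >= 10.0 * on[t]              # minimum flow when on

prob.solve()
print("discharge plan:", [round(pulp.value(x), 1) for x in q])
```

A full model of the kind described in the abstract would add price and inflow scenarios, bid curves coupling the schedule to the day-ahead auction, head effects and start-up costs.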

Relevance: 80.00%

Abstract:

The goal of this Master's thesis is to study and develop methods for verifying the manufacturing concepts of a plastic product, and to create a model that yields, starting from the pre-production phase, an improved production process for the start of mass production (ramp-up). The thesis is also intended to serve as a means of communication, raising the organization's awareness of the importance of the pre-production phase. The work builds on the author's earlier study, "The effects of learning from the pre-production process on the start of mass production", completed as a special assignment in 2006. The research methods used were mainly process mapping, benchmarking and expert interviews, together with the analysis of completed projects. The result is an operating model that corresponds closely to the model that emerged during the process-mapping work. Its central idea is to tie the verification of the manufacturing concept to the maturity of the product's mechanical design. The targets for the whole project are derived from the ramp-up targets, and the most critical product and process are selected for verification; the Quality Function Deployment (QFD) method serves as the theoretical framework here. As further steps, the thesis proposes developing ramp-up targets and metrics that allow the project to be monitored and steered right from the start. Further research is also needed on the design and optimization of the process parameters used during pre-production.

Relevance: 80.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
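As a sketch of the mechanics, the following Python fragment mimics the workflow under stated assumptions: synthetic curves stand in for the proxy and exact solver outputs, scikit-learn's PCA acts as a discretized FPCA, and a random-forest map between score spaces is one possible choice of regressor, not necessarily the authors'.

```python
# Error-model sketch: learn a map from proxy-response curves to
# exact-response curves in (discretized) FPCA score space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_learn, n_new, n_t = 100, 500, 50
t = np.linspace(0, 1, n_t)

# Invented "realizations": exact curves and systematically biased proxies.
a = rng.uniform(0.5, 2.0, size=(n_learn + n_new, 1))
exact = np.exp(-a * t) + 0.01 * rng.standard_normal((n_learn + n_new, n_t))
proxy = np.exp(-1.1 * a * t)

# Dimensionality reduction of both sets of curves (FPCA stand-in).
pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
zp_learn = pca_p.fit_transform(proxy[:n_learn])   # learning set: both solvers run
ze_learn = pca_e.fit_transform(exact[:n_learn])

# Machine-learning error model: proxy scores -> exact scores.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(zp_learn, ze_learn)

# Predict the exact response of new realizations from the proxy alone.
ze_pred = model.predict(pca_p.transform(proxy[n_learn:]))
exact_pred = pca_e.inverse_transform(ze_pred)
rmse = np.sqrt(np.mean((exact_pred - exact[n_learn:]) ** 2))
print(f"RMSE of corrected proxy vs. exact: {rmse:.4f}")
```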

Relevance: 80.00%

Abstract:

This Master's thesis was carried out at the UPM-Kymmene Kaukas mills in Lappeenranta. In an integrated forest-industry site, energy production typically consists of combined electricity and heat production. At the Kaukas mills the heat demand of the processes is covered entirely by on-site production, whereas only half of the electricity consumed is generated in-house; the rest must be purchased from outside. The main focus of the study was to determine how costs depend on energy production under different operating conditions. As a result, a computer-based calculation model was created with which the energy production of the Kaukas mills can be dispatched in the most economical way in each prevailing operating situation. In addition, the study analyzed the possibilities for monitoring the heat consumption of the integrated mill on the basis of the existing measurements in the heat-transfer network. The thesis gives a general account of energy consumption in the Finnish forest industry, presents estimates of its future development, and discusses means of improving energy efficiency. The measurement methods and instruments used for monitoring heat consumption at the Kaukas mills are described with respect to flow measurements, and the reliability and sufficiency of the current measurements for managing the overall heat balance are assessed. A thermodynamic model of the Kaukas energy production system was created, on which the calculation of energy production costs is based. The optimization aims to determine the economically optimal boiler dispatch order for the operating situation at a given moment; with respect to increasing heat production, the analysis is limited to increasing the use of natural gas and to bypassing the steam turbines. The effects of variations in electricity and natural gas prices and in ambient temperature on the optimal dispatch order are illustrated with examples.
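The abstract does not give the model itself; as a toy illustration of the dispatch-order idea only, the sketch below ranks boilers by marginal heat cost at prevailing fuel and electricity prices (all numbers and the cost structure are invented, not taken from the thesis's thermodynamic model).

```python
# Toy boiler merit-order sketch (all data invented, illustrative only).
# Marginal cost of heat = fuel price / efficiency, minus a credit for
# by-product electricity from back-pressure generation.
def marginal_heat_cost(fuel_price, efficiency, power_to_heat=0.0, el_price=0.0):
    """EUR per MWh of heat delivered."""
    return fuel_price / efficiency - power_to_heat * el_price

boilers = {
    "bark_boiler": marginal_heat_cost(10.0, 0.85, power_to_heat=0.25, el_price=40.0),
    "gas_boiler":  marginal_heat_cost(35.0, 0.92),
    "oil_boiler":  marginal_heat_cost(55.0, 0.88),
}

# Dispatch order: load the cheapest marginal heat first.
for name, cost in sorted(boilers.items(), key=lambda kv: kv[1]):
    print(f"{name}: {cost:.1f} EUR/MWh heat")
```

Even in this toy form the sensitivities the thesis illustrates with examples are visible: a higher electricity price increases the by-product credit, while a higher gas price pushes the gas boiler down the order.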

Relevance: 80.00%

Abstract:

This Master's thesis defines a simulation model of a backup system. The operation of the backup system is optimized with the help of this model; the goal of the optimization is to improve the efficiency of the backup system, and the improvement is sought through maximal use of the backup system's existing resources. The backup model is optimized with an evolutionary algorithm. The optimization has several mutually conflicting objectives. The multi-objective optimization problem is converted into a single-objective one by forming the objective function with the weighted-sum method. In parallel with this method, Pareto optimization is also used, and the search for points on the Pareto-optimal front is steered close to the optimum given by the weighted-sum method. The implementation of the evolutionary algorithm exploits problem-specific knowledge of backup systems. The result of the work is a simulation and optimization tool for backup systems. The simulation tool is used to assess the functioning of the current backup system, and optimization is used to make its operation more efficient. The tool can also be used in the design of new backup systems and in the extension of existing ones.
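As an illustration of the weighted-sum scalarization described above, here is a minimal evolutionary loop in Python; the two objectives are invented stand-ins for backup-system criteria, and the encoding and weights are assumptions.

```python
# Weighted-sum scalarization inside a minimal (mu+lambda) evolutionary loop.
# Invented objectives standing in for conflicting backup-system criteria:
#   f1 ~ total backup window length, f2 ~ restore time (both minimized).
import random

random.seed(1)
W = (0.6, 0.4)                                # objective weights, assumed

def objectives(x):
    f1 = sum(x)                               # longer windows -> larger f1
    f2 = sum((1.0 - xi) ** 2 for xi in x)     # shorter windows -> slower restore
    return f1, f2

def fitness(x):
    f1, f2 = objectives(x)
    return W[0] * f1 + W[1] * f2              # weighted sum -> single objective

def mutate(x, step=0.1):
    return [min(1.0, max(0.0, xi + random.gauss(0.0, step))) for xi in x]

pop = [[random.random() for _ in range(4)] for _ in range(20)]
for gen in range(200):
    children = [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop + children, key=fitness)[:20]   # (mu+lambda) selection

best = pop[0]
print("best parameters:", [round(v, 2) for v in best],
      "objectives:", tuple(round(f, 3) for f in objectives(best)))
```

In the thesis the Pareto-front search runs alongside this scalarization; in the sketch, rerunning with different weights W would trace out different compromise points.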

Relevance: 80.00%

Abstract:

Ever-tightening legislation on energy production and the general need to burn increasingly demanding fuels pose great challenges for fluidized bed boilers. One useful tool for meeting these future challenges is a controllable fluidized bed heat exchanger. A heat exchanger operating in a dense fluidized bed is less exposed to corrosion than structures located in the flue gas ducts, and it provides the boiler with much-needed controllability, provided that its heat output can be regulated effectively. A particularly favourable situation is achieved if the heat exchanger can exploit the internal solids circulation of the furnace. This thesis investigates how the control of a fluidized bed heat exchanger placed in a circulating fluidized bed boiler can be implemented by means of fluidization arrangements. The theoretical part focuses on the operating conditions of the heat exchanger and on the possibilities for controlling the solids flow; the experimental part searches for a control principle that works in practice for the heat-exchanger structure under study.

Relevance: 80.00%

Abstract:

This Master's thesis was done as part of the ETX research project "Development and optimization of design methods for volume power supplies in a DFM framework". In the thesis, a controller is designed for a switched-mode power supply. This area of design has often been neglected in industry: for lack of time, or because suitable tools were missing or unfamiliar, the control loop has usually been designed by trial and error. In this work, transfer functions linearized with a small-signal model are derived for a voltage-mode-controlled converter; from these, the stability of the converter in a closed feedback loop can be examined. The stability analysis is carried out in the frequency domain using Bode plots, and a controller is tuned for the system on the basis of these plots. The controller's behaviour is examined in the time domain by simulation, and the behaviour of the real device with a laboratory prototype. The results show that with voltage-mode control a flyback converter can be made fast when operating with discontinuous inductor current. If the converter is to operate with continuous inductor current, other control methods, such as peak current-mode control, should be used.
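The frequency-domain workflow can be illustrated with a generic sketch: the plant below is an invented second-order stage with a right-half-plane zero, a structure characteristic of continuous-conduction flyback control-to-output dynamics (one reason voltage-mode control is easier with discontinuous current); it is not the thesis's actual converter model.

```python
# Bode-based stability check for an invented plant with an RHP zero,
# structurally similar to a CCM flyback control-to-output response.
import numpy as np
from scipy import signal

w0, Q, wz = 2 * np.pi * 1e3, 0.8, 2 * np.pi * 5e3   # invented parameters
# G(s) = K * (1 - s/wz) / (1 + s/(Q*w0) + s^2/w0^2), K = 2
num = [2.0 * c for c in (-1.0 / wz, 1.0)]           # descending powers of s
den = [1.0 / w0**2, 1.0 / (Q * w0), 1.0]
plant = signal.TransferFunction(num, den)

w = np.logspace(2, 6, 500)                          # rad/s
w, mag, phase = signal.bode(plant, w)               # mag in dB, phase in deg

# Read crossover frequency and phase margin off the Bode data.
idx = np.argmin(np.abs(mag))                        # |G| closest to 0 dB
print(f"crossover ~ {w[idx] / (2 * np.pi):.0f} Hz, "
      f"phase margin ~ {180 + phase[idx]:.0f} deg")
```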

Relevance: 80.00%

Abstract:

The aim of this study is to examine the inheritance and gift taxation of family business successions implemented in different ways. The study is qualitative, with the main methodological emphasis on literature review and the analysis of legal cases. Optimizing a family business succession and its tax consequences requires a systematic process in which the tax treatment of the different implementation alternatives is considered from the point of view of both the transferor and the successor. Successions within the family are most often carried out as a gift, a gift-like sale, or an inheritance, in which case the valuation of the shares for inheritance and gift tax purposes, and the tax reliefs available in connection with the succession and how they are taken into account, are important factors. Changes of company form, demerger of the company, or redemption of the company's own shares can also be used to influence the taxation of the succession.

Relevance: 80.00%

Abstract:

The European Forum on Epilepsy Research (ERF2013), which took place in Dublin, Ireland, on May 26-29, 2013, was designed to appraise epilepsy research priorities in Europe through consultation with clinical and basic scientists as well as representatives of lay organizations and health care providers. The ultimate goal was to provide a platform to improve the lives of persons with epilepsy by influencing the political agenda of the EU. The Forum highlighted the epidemiologic, medical, and social importance of epilepsy in Europe, and addressed three separate but closely related concepts. First, possibilities were explored as to how the stigma and social burden associated with epilepsy could be reduced through targeted initiatives at EU national and regional levels. Second, ways to ensure optimal standards of care throughout Europe were discussed. Finally, the need for further funding of epilepsy research within the European Horizon 2020 funding programme was communicated to the politicians and policymakers participating in the Forum. Research topics discussed included (1) epilepsy in the developing brain; (2) novel targets for innovative diagnostics and treatment of epilepsy; (3) what is required for the prevention and cure of epilepsy; and (4) epilepsy and comorbidities, with a special focus on aging and mental health. This report summarizes the recommendations that emerged at ERF2013 on how to (1) strengthen epilepsy research, (2) reduce the treatment gap, and (3) reduce the burden and stigma associated with epilepsy. Half of the six million European citizens with epilepsy feel stigmatized and experience social exclusion, stressing the need to fund trans-European awareness campaigns and to monitor their impact on stigma, in line with the global commitment of the European Commission and with the recommendations made in the 2011 Written Declaration on Epilepsy. Epilepsy care has high rates of misdiagnosis and considerable variability in organization and quality across European countries, translating into a huge societal cost (0.2% of GDP) and stressing the need for cost-effective programs to harmonize and optimize epilepsy care throughout Europe. There is currently no cure or prevention for epilepsy, and in 30% of affected persons seizures are not controlled by current treatments, stressing the need to pursue research efforts in the field within Horizon 2020. Priorities should include (1) the development of innovative biomarkers and therapeutic targets and strategies, from gene- and cell-based therapies to technologically advanced surgical treatment; (2) issues raised by pediatric and aging populations, as well as by specific etiologies and comorbidities such as traumatic brain injury (TBI) and cognitive dysfunction, with a view toward more personalized medicine and prevention; and (3) translational studies and clinical trials built upon well-established European consortia.

Relevance: 80.00%

Abstract:

Combinatorial chemistry has become a very efficient methodology in drug research. Recent progress in combinatorial synthesis, performed both in solid and in solution phase, has led to a change in the paradigm for the identification and optimization of lead compounds. This article gives an overview of the principal characteristics of combinatorial libraries and some examples of the application of this methodology to the identification of test compounds and the optimization of lead compounds, from either synthetic or natural sources.

Relevance: 80.00%

Abstract:

In this paper we review the basic techniques of performance analysis within the UNIX environment that are relevant to computational chemistry, with particular emphasis on execution profiling using the gprof tool. Two case studies (in ab initio and molecular dynamics calculations) are presented to illustrate how execution profiling can be used to identify bottlenecks effectively and to guide source code optimization. Using these profiling and optimization techniques, it was possible to obtain significant speedups (of up to 30%) in both cases.
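The gprof workflow referred to here is: compile and link with -pg, run the instrumented binary (which writes gmon.out), then run gprof on the binary and gmon.out to obtain the flat profile and call graph. As a runnable stand-in, not gprof itself, the same bottleneck-hunting idea with Python's built-in deterministic profiler cProfile looks like this (the kernel below is an invented hot spot):

```python
# Execution-profile analogue of the gprof workflow, using Python's cProfile
# as a stand-in (the paper itself profiles compiled code with gprof).
import cProfile
import pstats

def pairwise_kernel(n):
    # Deliberately naive O(n^2) loop standing in for a computational hot spot.
    s = 0.0
    for i in range(n):
        for j in range(n):
            s += 1.0 / (1.0 + abs(i - j))
    return s

def driver():
    for _ in range(3):
        pairwise_kernel(400)

cProfile.run("driver()", "profile.out")
# Sort by cumulative time, much as one reads gprof's flat profile.
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```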