965 results for A posteriori error estimation
Abstract:
One of the central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This Master's thesis is concerned with the distributions of unknown parameters and with numerical methods suited to constructing them, especially in cases where the model is nonlinear in its parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently grown in popularity, mainly owing to increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify why the methods work. Among recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are expressed as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and at Åbo Akademi University in Turku.
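A minimal sketch of the kind of sampler this line of work builds on: a random-walk Metropolis algorithm targeting the posterior of a parameter that enters the model nonlinearly. The exponential-decay model, noise level, and proposal scale below are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Hypothetical data from a model nonlinear in theta: y = exp(-theta * t) + noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
theta_true = 0.8
y = np.exp(-theta_true * t) + rng.normal(0.0, 0.05, t.size)

def log_posterior(theta, sigma=0.05):
    """Gaussian log-likelihood with a flat prior on theta > 0."""
    if theta <= 0.0:
        return -np.inf
    residuals = y - np.exp(-theta * t)
    return -0.5 * np.sum((residuals / sigma) ** 2)

def metropolis(n_samples=20000, step=0.05):
    """Random-walk Metropolis: propose, accept with prob min(1, pi'/pi)."""
    theta = 1.0
    logp = log_posterior(theta)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        proposal = theta + rng.normal(0.0, step)
        logp_new = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        chain[i] = theta
    return chain

chain = metropolis()
print(f"posterior mean ~ {chain[5000:].mean():.3f}")  # discard burn-in
```

Adaptive MCMC variants of the kind the thesis examines tune the proposal scale (`step` here) on-line from the history of the chain instead of fixing it in advance.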
Abstract:
Hydrological models developed for extreme rainfall of the PMP (probable maximum precipitation) type are difficult to parameterize because of the scarcity of data available for such events and the complexity of the terrain. This article presents the process and results of parameter calibration for a distributed hydrological model. This fine-scale model was developed for estimating the probable maximum flood in the case of a PMP. The computation, carried out for two Swiss test catchments and two summer storm episodes, concerns the estimation of the model parameters, which fall into two groups: the first concerns the calculation of flow velocities, the other the determination of the initial and final infiltration capacity for each soil type. The results, validated with the Nash equation, show good correlation between simulated and observed discharges.
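The "Nash equation" used for validation is presumably the Nash-Sutcliffe model efficiency, the standard score of simulated against observed discharges. A short sketch with made-up series:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Illustrative discharge series (m^3/s), not data from the study
q_obs = np.array([12.0, 30.0, 55.0, 41.0, 22.0, 15.0])
q_sim = np.array([10.0, 33.0, 50.0, 44.0, 25.0, 14.0])
print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.2f}")
```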
Abstract:
Estimating glomerular filtration in elderly people, while accounting for the additional difficulty of assessing their muscle mass, is challenging and particularly important for drug prescription. The plasma creatinine level depends both on the renal and extra-renal elimination fractions and on muscle mass. Currently, various formulas based mainly on the creatinine value are used to estimate glomerular filtration. Nevertheless, because of the fraction eliminated via the tubular and intestinal routes, creatinine clearance generally overestimates the glomerular filtration rate (GFR). The aim of this study is to verify the reliability of some currently used markers and algorithms of renal function, and to evaluate the additional benefit of taking into account muscle mass measured by bioimpedance, in an elderly population (> 70 years) with chronically impaired renal function based on MDRD eGFR (CKD stages III-IV). In this study we compare five equations developed to estimate renal function, based respectively on serum creatinine (Cockcroft and MDRD), cystatin C (Larsson), creatinine combined with beta-trace protein (White), and creatinine adjusted for muscle mass obtained by bioimpedance analysis (MacDonald). Bioimpedance is a method commonly used to estimate body composition from the passive electrical properties and the geometry of biological tissues. It allows the relative volumes of various tissues or fluids in the body to be estimated, such as total body water, muscle mass (lean mass) and body fat mass. Using inulin clearance (single shot) as the gold standard, we evaluated, in an elderly population of an internal medicine ward, the algorithms of Cockcroft (GFR CKC), MDRD, Larsson (cystatin C, GFR CYS), White (beta-trace protein, GFR BTP) and MacDonald (GFR ALM, muscle mass by bioimpedance). The GFR (mean ± SD) measured with inulin and calculated with the algorithms was respectively: 34.9±20 ml/min for inulin, 46.7±18.5 ml/min for CKC, 47.2±23 ml/min for CYS, 54.4±18.2 ml/min for BTP, 49±15.9 ml/min for MDRD and 32.9±27.2 ml/min for ALM. The ROC curves comparing sensitivity and specificity, with the area under the curve (AUC) and 95% confidence interval, were respectively: CKC 0.68 (0.55-0.81), MDRD 0.76 (0.64-0.87), cystatin C 0.82 (0.72-0.92), BTP 0.75 (0.63-0.87), ALM 0.65 (0.52-0.78). In conclusion, the algorithms compared in this study overestimate GFR in this elderly, hospitalized population with multiple comorbidities and CKD class III-IV. Using bioelectrical impedance to reduce the error of creatinine-based GFR estimation provided no significant contribution; on the contrary, it performed worse than the other equations. Indeed, in this study 75% of patients changed their CKD classification with MacDonald (creatinine and muscle mass), compared with 49% with CYS (cystatin C), 56% with MDRD, 52% with Cockcroft and 65% with BTP. The best results were obtained with Larsson (CYS C) and the Cockcroft formula.
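As a concrete point of reference, a sketch of the Cockcroft-Gault estimate, the creatinine-based "Cockcroft" equation among the five compared; the coefficients of the other equations are not reproduced here, and the patient values are illustrative.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Cockcroft-Gault creatinine clearance estimate in ml/min:
    CrCl = (140 - age) * weight / (72 * Scr), times 0.85 for women."""
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative patient, not from the study cohort
print(f"eGFR (CKC) ~ {cockcroft_gault(78, 65.0, 1.4, female=True):.1f} ml/min")
```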
Abstract:
Resveratrol has been shown to have beneficial effects on diseases related to oxidant and/or inflammatory processes and to extend the lifespan of simple organisms, including rodents. The objective of the present study was to estimate the dietary intake of resveratrol and piceid (R&P) present in foods, and to identify the principal dietary sources of these compounds in the Spanish adult population. For this purpose, a food composition database (FCDB) of R&P in Spanish foods was compiled. The study included 40 685 subjects aged 35-64 years from northern and southern regions of Spain who were included in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Spain cohort. Usual food intake was assessed by personal interviews using a computerised version of a validated diet history method. An FCDB with 160 items was compiled. The estimated median and mean R&P intakes were 100 and 933 mg/d respectively. Approximately 32% of the population did not consume R&P. The most abundant of the four stilbenes studied was trans-piceid (53·6%), followed by trans-resveratrol (20·9%), cis-piceid (19·3%) and cis-resveratrol (6·2%). The most important sources of R&P were wines (98·4%) and grapes and grape juices (1·6%), whereas peanuts, pistachios and berries contributed less than 0·01%. For this reason, the pattern of R&P intake was similar to that of wine. This is the first time that R&P intake has been estimated in a Mediterranean country.
Abstract:
The legal and social problems that have arisen in recent years with financial swaps and preferred shares raise the question of whether an error in contractual consent has occurred with this type of financial product. Based on the content of the Spanish Civil Code and legal doctrine, the essential elements of the contract have been analyzed, as well as the legislation applicable to financial instruments. With the help of case law, it has been possible to verify that in most of the cases brought before the courts in relation to these contracts, in which contractual annulment is sought, the main ground is the credit institutions' breach of their legal duties. The present work makes clear the importance of linking the contractual element of consent with the obligation of credit institutions to inform their clients. Thus, the incorrect understanding of the contractual reality that clients express through their consent undoubtedly hinges on the need to obtain all the relevant information about the contract. The duty to inform is closely tied to the duty to classify clients; both are legal obligations that institutions bear as part of their duty of business loyalty. Financial institutions must therefore classify their clients and provide them with information, all the more rigorously in the case of retail clients. For all these reasons, in those cases involving retail clients in which the credit institutions could not prove that all the necessary information had been provided, an error in consent occurred. The clients did not know the true scope of the commitment or the costs to which they had bound themselves; there is no doubt that in many of these cases, had they known the reality, they would not have contracted.
Abstract:
The study focuses on international diversification from the perspective of a Finnish investor. The second objective of the study is to examine whether new covariance matrix estimators make the optimization of the minimum variance portfolio more efficient. In addition to the ordinary sample covariance matrix, two shrinkage estimators and a flexible multivariate GARCH(1,1) model are used in the optimization. The data consist of Dow Jones industry indices and the OMX-H portfolio index. The international diversification strategy is implemented using an industry approach, and the portfolio is optimized using twelve components. The data cover the years 1996-2005, i.e. 120 monthly observations. The performance of the constructed portfolios is measured with the Sharpe index. According to the results, there is no statistically significant difference between the risk-adjusted returns of the internationally diversified investments and the domestic portfolio. Nor does the use of the new covariance matrix estimators add statistically significant value compared with portfolio optimization based on the sample covariance matrix.
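A sketch of the optimization step under the usual unconstrained closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1), with a simple shrinkage estimator toward a scaled identity target; that target is one common choice, not necessarily the estimators used in the study, and the returns are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.005, 0.05, size=(120, 12))  # 120 months, 12 components

def min_variance_weights(cov):
    """Closed-form unconstrained minimum variance portfolio:
    w = inv(Sigma) 1 / (1' inv(Sigma) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def shrink(cov, delta=0.2):
    """Simple shrinkage toward a scaled identity target (one common choice)."""
    target = np.eye(cov.shape[0]) * np.trace(cov) / cov.shape[0]
    return (1.0 - delta) * cov + delta * target

sample_cov = np.cov(returns, rowvar=False)
for name, cov in [("sample", sample_cov), ("shrunk", shrink(sample_cov))]:
    w = min_variance_weights(cov)
    print(name, "portfolio variance:", w @ cov @ w)
```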
Abstract:
The τ-function and the η-function are phenomenological models that are widely used in the context of timing interceptive actions and collision avoidance, respectively. Both models were previously considered to be unrelated to each other: τ is a decreasing function that provides an estimation of time-to-contact (ttc) in the early phase of an object approach; in contrast, η has a maximum before ttc. Furthermore, it is not clear how both functions could be implemented at the neuronal level in a biophysically plausible fashion. Here we propose a new framework, the corrected modified Tau function, capable of predicting both τ-type and η-type responses. The outstanding property of our new framework is its resilience to noise. We show that it can be derived from a firing rate equation and, like η, serves to describe the response curves of collision-sensitive neurons. Furthermore, we show that it predicts the psychophysical performance of subjects determining ttc. Our new framework is thus validated successfully against published and novel experimental data. Within the framework, links between τ-type and η-type neurons are established. Therefore, it could possibly serve as a model for explaining the co-occurrence of such neurons in the brain.
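A sketch of the two classical functions for an object approaching at constant speed, using the standard definitions τ = Θ/Θ̇ (optical angle over its rate of expansion) and η = C·Θ̇·e^(−αΘ); the object geometry and the constants C and α are illustrative, and this shows the classical forms rather than the corrected modified Tau function proposed in the abstract.

```python
import numpy as np

# Object of half-size l approaching at constant speed v; contact at t = d0 / v
l, d0, v = 0.5, 10.0, 2.0
t = np.linspace(0.0, d0 / v - 0.05, 500)
d = d0 - v * t

theta = 2.0 * np.arctan(l / d)                # optical angle subtended
theta_dot = np.gradient(theta, t)             # its rate of expansion

tau = theta / theta_dot                       # decreasing estimate of ttc
C, alpha = 1.0, 5.0                           # illustrative constants
eta = C * theta_dot * np.exp(-alpha * theta)  # peaks before contact

print(f"tau at t=0: {tau[0]:.2f} s (true ttc = {d0 / v:.2f} s)")
print(f"eta peaks at t = {t[np.argmax(eta)]:.2f} s, before contact")
```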
Abstract:
Location information is becoming increasingly necessary as every new smartphone incorporates a GPS (Global Positioning System) receiver, which allows the development of various applications based on it. However, it is not possible to properly receive the GPS signal in indoor environments. For this reason, new indoor positioning systems are being developed. As indoor environments are very challenging scenarios, it is necessary to study the precision of the obtained location information in order to determine whether these new positioning techniques are suitable for indoor positioning.
Abstract:
Introduction: "Osteo-Mobile Vaud" is a mobile osteoporosis (OP) screening program. Women aged > 60 years living in the Vaud region will be offered OP screening with new equipment installed in a bus. The main goal is to evaluate fracture risk from the combination of clinical risk factors (CRF) and information extracted from a single DXA scan: bone mineral density (BMD), vertebral fracture assessment (VFA), and micro-architecture (MA) evaluation. MA is now evaluable in daily practice through the Trabecular Bone Score (TBS) measure. TBS is a novel grey-level texture measurement reflecting bone MA, based on the use of experimental variograms of 2D projection images. TBS is very simple to obtain, by reanalyzing a lumbar DXA scan. TBS has proven diagnostic and prognostic value, partially independent of CRF and BMD. A 5-year follow-up is planned. Method: The Osteo-Mobile Vaud cohort (1500 women, > 60 years, living in the Vaud region) started in July 2010. CRF for OP, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded. Preliminary results are reported. Results: By July 31st, we had evaluated 510 women: mean age 67 years, BMI 26 kg/m². 72 women had one or more fragility fractures, and 39 had vertebral fracture (VFx) grade 2/3. TBS decreases with age (-0.005/year, p<0.001) and with BMI (-0.011 per kg/m², p<0.001). Correlation between BMD and site-matched TBS is low (r=0.4, p<0.001). For the lowest T-score BMD, the odds ratios (OR, 95% CI) for VFx grade 2/3 and clinical OP Fx are 1.8 (1.1-2.9) and 2.3 (1.5-3.4). For TBS, the age-, BMI- and BMD-adjusted ORs (per SD decrease) for VFx grade 2/3 and clinical OP Fx are 1.9 (1.2-3.0) and 1.8 (1.2-2.7). The added value of TBS was independent of lumbar spine BMD or the lowest T-score (femoral neck, total hip or lumbar spine). Conclusion: As in previously published studies, these preliminary results confirm the partial independence between BMD and TBS. More importantly, a combination of TBS and BMD may significantly improve the identification of women with prevalent OP Fx. For the first time we are able to obtain complementary information about fracture (VFA), density (BMD) and micro-architecture (TBS) from a single, cheap, low-ionizing-radiation device: DXA. The value of such information in a screening program will be evaluated.
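TBS is described above as a grey-level texture measure built on experimental variograms of the 2D projection image. A minimal sketch of such a variogram computation on a synthetic image follows; the mapping from variogram behaviour to an actual TBS value is proprietary and not reproduced here.

```python
import numpy as np

def experimental_variogram(image, max_lag=10):
    """Mean squared grey-level difference as a function of pixel offset
    along one axis - the raw ingredient of texture scores like TBS."""
    image = np.asarray(image, dtype=float)
    lags = np.arange(1, max_lag + 1)
    gamma = [np.mean((image[:, k:] - image[:, :-k]) ** 2) for k in lags]
    return lags, np.array(gamma)

# Synthetic 2D projection image standing in for a lumbar DXA scan
rng = np.random.default_rng(2)
image = rng.normal(size=(64, 64)).cumsum(axis=1)  # spatially correlated texture
lags, gamma = experimental_variogram(image)
slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
print(f"log-log variogram slope: {slope:.2f}")
```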
Abstract:
MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Thus, methods are needed to estimate evolutionary distances between expression profiles, as well as a neutral reference to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance. Neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved uniform pattern of expression. Here, we first extend this study to genes with a conserved but specific pattern of expression. Surprisingly, we find that both Pearson's and Euclidean distances used as a measure of expression similarity between genes depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure that is widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to the expression profiles present in the datasets. Applying our method to mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
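A sketch of the two divergence measures under discussion, applied to hypothetical expression profiles across six tissues; the profiles are made up, and the comment notes the normalization sensitivity mentioned in the abstract.

```python
import numpy as np

def pearson_distance(x, y):
    """1 - Pearson correlation between two expression profiles."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

def euclidean_distance(x, y):
    return float(np.linalg.norm(np.asarray(x) - np.asarray(y)))

# Hypothetical expression of two homologous genes across 6 tissues
gene_a = np.array([5.0, 5.2, 4.9, 5.1, 9.0, 5.0])  # broad, with one peak
gene_b = np.array([4.8, 5.1, 5.0, 5.3, 8.5, 4.9])

print("Pearson distance:  ", round(pearson_distance(gene_a, gene_b), 3))
print("Euclidean distance:", round(euclidean_distance(gene_a, gene_b), 3))
# Note: the Euclidean distance changes under rescaling of the profiles,
# which is the normalization dependence noted in the abstract.
```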
Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis gas-liquid mass transfer was evaluated in three different gas-liquid reactors in a traditional way by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which is dependent on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solution of population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter. This is because the bubble coalescence depends on liquid composition in a way which is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free liquid-gas interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution. The coalescence was never found to be instantaneous. The bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to the uncertainties concerning the estimation of energy dissipation rates, collision rates and mechanisms and contact times of the bubbles.
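One traditional route to kLa, consistent with the approach described, is the dynamic method: after a step change in the gas feed, a well-mixed liquid follows dC/dt = kLa·(C* − C), and kLa is fitted from the transient. A sketch with simulated dissolved-oxygen data; the reactor values are illustrative, not measurements from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def dynamic_response(t, kla, c_star, c0=0.0):
    """Dissolved gas concentration after a step change in the gas feed,
    assuming a well-mixed liquid: dC/dt = kLa * (C* - C)."""
    return c_star - (c_star - c0) * np.exp(-kla * t)

# Simulated dissolved-oxygen transient (illustrative, true kLa = 0.05 1/s)
rng = np.random.default_rng(3)
t = np.linspace(0.0, 120.0, 60)
c_meas = dynamic_response(t, 0.05, 8.0) + rng.normal(0.0, 0.05, t.size)

(kla_fit, c_star_fit), _ = curve_fit(
    lambda t, kla, c_star: dynamic_response(t, kla, c_star),
    t, c_meas, p0=[0.01, 7.0],
)
print(f"fitted kLa = {kla_fit:.3f} 1/s, C* = {c_star_fit:.2f} mg/L")
```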
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant. Achieving the minimum current requires the control of the reference. An on-line method to perform the minimization of the current by controlling the stator flux linkage reference is presented. Also, the control of the reference above the base speed is considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
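A sketch of the voltage-model stator flux estimation discussed above, with the pure integrator of (u − Rs·i) replaced by a low-pass filter to limit drift; this is the standard workaround the thesis improves upon, and the signals and parameters below are illustrative.

```python
import numpy as np

def voltage_model_flux(u, i, r_s, dt, omega_c=5.0):
    """Stator flux linkage estimate psi ~ integral of (u - Rs*i),
    realized as a low-pass filter (cutoff omega_c) to limit drift
    from measurement offsets - the usual pure-integrator workaround."""
    psi = np.zeros_like(u)
    for k in range(1, len(u)):
        emf = u[k] - r_s * i[k]
        # d(psi)/dt = emf - omega_c * psi  (LPF instead of pure integral)
        psi[k] = psi[k - 1] + dt * (emf - omega_c * psi[k - 1])
    return psi

# Illustrative single-phase signals at 50 Hz electrical frequency
dt = 1e-4
t = np.arange(0.0, 0.2, dt)
u = 310.0 * np.cos(2 * np.pi * 50 * t)
i = 10.0 * np.cos(2 * np.pi * 50 * t - 0.3)
psi = voltage_model_flux(u, i, r_s=0.5, dt=dt)
print(f"flux linkage amplitude ~ {np.abs(psi[1000:]).max():.2f} Vs")
```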
Abstract:
The market place of the twenty-first century will demand that manufacturing assume a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. In this drive the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the attainable profit is wasted due to poor quality of planning and workmanship. This thesis examines the distribution of production errors in the production flow of sheet metal part based constructions. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet metal based constructions. Employee empowerment is also investigated in theory, and the role of employee empowerment in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet metal part based constructions for the electronics and telecommunication industries. The study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each case factory studied, the work phases most prone to production errors were identified. It can be assumed that most production errors arise in manually operated work phases and in mass production work phases. However, no common theme for the distribution of production errors in the production flow could be found in the collected data. The most important finding was that most of the production errors in each case factory studied belong to the 'human activity based errors' category. This result indicates that most of the problems in the production flow are related to employees or work organization. Development activities must therefore be focused on the development of employee skills or of work organization. Employee empowerment provides the right tools and methods to achieve this.
Abstract:
Software engineering is criticized as not being engineering or 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, or whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes. In this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has improved significantly after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement. The author of this thesis has developed a three-level solution for the estimation model. All currently used size metrics are static in nature, but this new proposed metric is dynamic. It makes use of the increased understanding of the nature of the work as specification and design work proceeds. It thus 'grows up' along with the software project. Effort estimation model development is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering. A major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines; it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed. It is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity. An example is shown to shed more light on the calibration and the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
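The abstract does not specify the new model itself; as background, a sketch of the classic Function Point Analysis counting step it builds on, using the standard average-complexity weights and an assumed productivity figure (the project counts are made up).

```python
# Standard unadjusted function point weights (average complexity)
FP_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts):
    """Sum of counted function types times their standard weights."""
    return sum(FP_WEIGHTS[k] * v for k, v in counts.items())

counts = {  # illustrative project
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_files": 6,
    "external_interfaces": 2,
}
fp = unadjusted_function_points(counts)
hours_per_fp = 8.0  # assumed productivity; varies widely by organization
print(f"{fp} UFP -> effort estimate ~ {fp * hours_per_fp:.0f} person-hours")
```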
A priori parameterisation of the CERES soil-crop models and tests against several European data sets
Abstract:
Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This statement implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, and raises a tremendous limitation to practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines. In decreasing importance, these were: water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
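A sketch of the RMSE criterion used to judge the model against measurement error; the soil water values and the error bound are illustrative, not the trial data.

```python
import numpy as np

def rmse(observed, simulated):
    """Root mean squared error between observations and model output."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

# Illustrative soil water contents (m^3/m^3) at one site
obs = np.array([0.31, 0.28, 0.24, 0.22, 0.27, 0.30])
sim = np.array([0.29, 0.27, 0.26, 0.21, 0.24, 0.31])
measurement_error = 0.03  # assumed experimental error on the observations

score = rmse(obs, sim)
verdict = "acceptable" if score <= measurement_error else "needs calibration"
print(f"RMSE = {score:.3f} -> {verdict}")
```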