897 results for estimation of dynamic structural models


Abstract:

A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with the area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample sizes (n < 30); this should encourage highly conservative use of predictions based on small samples and restrict them to exploratory modelling.
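The AUC evaluation used above is straightforward to reproduce; below is a minimal sketch with hypothetical presence-absence labels and model scores, using scikit-learn's `roc_auc_score` (an illustration of the metric, not the study's actual pipeline):

```python
# Minimal sketch of the AUC evaluation step, using hypothetical
# presence-absence labels and model-predicted suitability scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)                           # 1 = presence, 0 = absence
y_score = np.clip(y_true * 0.6 + rng.random(100) * 0.5, 0, 1)   # mock model output

print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```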

Abstract:

In the current issue of Epidemiology, Danaei and colleagues elegantly estimated both the direct effect and the indirect effect, that is, the effect mediated by blood pressure, cholesterol, glucose, fibrinogen, and high-sensitivity C-reactive protein (hs-CRP), of body mass index (BMI) on the risk of coronary heart disease (CHD). They analyzed data from 9 cohort studies including 58,322 patients and 9,459 CHD events, with baseline measurements between 1954 and 2001. Using sophisticated, cutting-edge methods for direct and indirect effect estimation, the authors estimated that half of the risk conferred by overweight and obesity is mediated by blood pressure, cholesterol, and glucose; a few additional percentage points are mediated by fibrinogen and hs-CRP. How should we understand these estimates? Can we say that if obese persons reduce their body weight and reach a normal body weight, their excess risk of CHD would be reduced by half through an improvement in these mediators and by half through the reduction in BMI itself? Is that also true if these individuals are prevented from becoming obese in the first place? Can we also conclude that if these mediators are well controlled in obese individuals through means other than body weight reduction, their excess risk of CHD would be reduced by half? Let us confront these estimates with observations from studies evaluating 2 interventions to reduce body weight, namely bariatric surgery in patients with severe obesity and intensive lifestyle intervention in overweight patients with diabetes.
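The decomposition behind these questions can be made concrete with a toy product-of-coefficients calculation. The sketch below simulates a single mediator (blood pressure) and splits the total BMI effect into direct and mediated parts; it is a deliberately simplified linear stand-in, not the estimators used by Danaei and colleagues, and every coefficient in it is hypothetical:

```python
# Illustrative product-of-coefficients mediation decomposition on
# simulated data: BMI -> blood pressure (mediator) -> CHD risk score.
# Toy linear version; not the methods used in the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
bmi = rng.normal(27, 4, n)
bp = 0.8 * bmi + rng.normal(0, 5, n)              # mediator depends on BMI
risk = 0.05 * bmi + 0.04 * bp + rng.normal(0, 1, n)

# Regress mediator on exposure, and outcome on exposure + mediator.
a = np.polyfit(bmi, bp, 1)[0]                     # BMI -> BP slope
X = np.column_stack([np.ones(n), bmi, bp])
b0, direct, b_mediator = np.linalg.lstsq(X, risk, rcond=None)[0]

indirect = a * b_mediator                         # mediated (indirect) effect
total = direct + indirect
print(f"direct={direct:.3f} indirect={indirect:.3f} "
      f"share mediated={indirect / total:.1%}")
```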

Abstract:

Summary: Heritability estimates and repeatabilities of harness racing performance measures based on individual race records

Abstract:

The appropriate timing of capacity investments is a significant issue, especially in capital-intensive industries. Despite its importance, relatively few studies have been published on the topic. In the present study, models for the timing of capacity change in capital-intensive industry are developed. The study considers mainly the optimal timing of single capacity changes. The review of earlier research describes connections between the cost, capacity, and timing literature, and empirical examples are used both to describe the starting point of the study and to test the developed models. The study includes four models, which approach the timing question from different perspectives. The first model, which minimizes unit costs, is built for capacity expansion and replacement situations. It is shown that the optimal timing of an investment can be expressed with the capacity and cost advantage ratios. After the unit cost minimization model, the view is extended toward profit maximization. The second model states that early investments are preferable if the change in fixed costs is small compared to the change in the contribution margin. The third model is a numerical discounted cash flow model, which emphasizes the roles of start-up time, capacity utilization rate, and the value of waiting as drivers of the profitable timing of a project. The last model expands the view from the project level to the company level and connects the flexibility of assets and cost structures to the timing problem. The main results of the research are the solutions of the models and the analyses and simulations performed with them. The relevance and applicability of the results are verified by evaluating the logic of the models and through numerical cases.
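The trade-off captured by the third (discounted cash flow) model can be illustrated numerically; the sketch below compares the NPV of investing in different years under hypothetical cost, ramp-up, and discount-rate assumptions:

```python
# Toy discounted-cash-flow comparison of investment timing: delaying
# postpones the capex outlay but also forfeits early cash flows, and a
# start-up ramp reduces utilization at first. All figures hypothetical.
def npv_invest_at(t_invest, horizon=10, capex=100.0, cash_flow=18.0,
                  startup_years=1, startup_factor=0.5, rate=0.08):
    npv = -capex / (1 + rate) ** t_invest
    for t in range(t_invest, horizon):
        ramp = startup_factor if t < t_invest + startup_years else 1.0
        npv += ramp * cash_flow / (1 + rate) ** (t + 1)
    return npv

for year in range(4):
    print(f"invest in year {year}: NPV = {npv_invest_at(year):6.1f}")
```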

Abstract:

In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interference. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
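The core idea, scoring candidate positions directly against a joint likelihood rather than first estimating per-satellite synchronization parameters, can be caricatured in a few lines. The toy below uses a 2-D geometry, delay-only observations, and a crude Monte Carlo search standing in for the letter's Sequential Monte Carlo method; everything here is a hypothetical simplification:

```python
# Toy direct position estimation: candidate positions are scored by the
# joint likelihood of the delays they would imply, instead of the
# conventional two-step (synchronize, then position) approach.
import numpy as np

C = 299_792_458.0                        # speed of light, m/s
sats = np.array([[15e6, 10e6], [-12e6, 18e6], [5e6, -20e6]])  # toy 2-D geometry
true_pos = np.array([1_000.0, -2_000.0])

rng = np.random.default_rng(2)
delays = np.linalg.norm(sats - true_pos, axis=1) / C
meas = delays + rng.normal(0, 1e-9, len(sats))   # noisy delay observations

def neg_log_lik(pos):
    pred = np.linalg.norm(sats - pos, axis=1) / C
    return np.sum((meas - pred) ** 2)            # Gaussian-noise ML cost

# Crude Monte Carlo search over candidate positions.
particles = rng.normal(0, 5_000, size=(20_000, 2))
best = particles[np.argmin([neg_log_lik(p) for p in particles])]
print("estimate:", best, " error [m]:", np.linalg.norm(best - true_pos))
```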

Abstract:

Estimating glomerular filtration in the elderly, while accounting for the added difficulty of assessing their muscle mass, is challenging and particularly important for drug prescribing. The plasma creatinine level depends both on the renal and extra-renal elimination fractions and on muscle mass. Currently, various formulas based mainly on the creatinine value are used to estimate glomerular filtration. Nevertheless, because of the fraction eliminated via tubular and intestinal routes, creatinine clearance generally overestimates the glomerular filtration rate (GFR). The aim of this study is to verify the reliability of certain markers and algorithms of renal function in current use, and to assess the additional benefit of taking into account muscle mass measured by bio-impedance in an elderly population (> 70 years) with chronically impaired renal function based on MDRD eGFR (CKD stages III-IV). In this study we compare five equations developed to estimate renal function, based respectively on serum creatinine (Cockcroft and MDRD), cystatin C (Larsson), creatinine combined with beta-trace protein (White), and creatinine adjusted for muscle mass obtained by bio-impedance analysis (MacDonald). Bio-impedance is a commonly used method for estimating body composition based on the passive electrical properties and geometry of biological tissues. It allows estimation of the relative volumes of different tissues or fluids in the body, such as total body water, muscle (lean) mass, and body fat mass. In an elderly internal-medicine population, using single-shot inulin clearance as the gold standard, we evaluated the algorithms of Cockcroft (GFR CKC), MDRD, Larsson (cystatin C, GFR CYS), White (beta-trace protein, GFR BTP), and MacDonald (GFR ALM, muscle mass by bio-impedance). The results showed that GFR (mean ± SD) measured with inulin and calculated with the algorithms was, respectively: 34.9 ± 20 ml/min for inulin, 46.7 ± 18.5 ml/min for CKC, 47.2 ± 23 ml/min for CYS, 54.4 ± 18.2 ml/min for BTP, 49 ± 15.9 ml/min for MDRD, and 32.9 ± 27.2 ml/min for ALM. The ROC curves comparing sensitivity and specificity, with area under the curve (AUC) and 95% confidence interval, gave respectively: CKC 0.68 (0.55-0.81), MDRD 0.76 (0.64-0.87), cystatin C 0.82 (0.72-0.92), BTP 0.75 (0.63-0.87), ALM 0.65 (0.52-0.78). In conclusion, the algorithms compared in this study overestimate GFR in an elderly, hospitalized population with multiple comorbidities and CKD class III-IV. Using bioelectrical impedance to reduce the error of creatinine-based GFR estimation provided no significant contribution; on the contrary, it performed worse than the other equations. In fact, in this study 75% of patients changed CKD classification with MacDonald (creatinine and muscle mass), versus 49% with CYS (cystatin C), 56% with MDRD, 52% with Cockcroft, and 65% with BTP. The best results were obtained with Larsson (CYS C) and the Cockcroft formula.
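Of the five equations compared, Cockcroft-Gault is compact enough to state directly; below is a minimal sketch of the standard published formula (serum creatinine in mg/dl; this is the well-known textbook equation, not code from the study):

```python
# Cockcroft-Gault creatinine clearance estimate (ml/min).
# Standard published formula; serum creatinine in mg/dl.
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl     # 15% reduction for women

print(f"{cockcroft_gault(80, 70, 1.4, female=False):.1f} ml/min")
```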

Abstract:

Resveratrol has been shown to have beneficial effects on diseases related to oxidative and/or inflammatory processes and to extend the lifespan of simple organisms as well as rodents. The objective of the present study was to estimate the dietary intake of resveratrol and piceid (R&P) present in foods, and to identify the principal dietary sources of these compounds in the Spanish adult population. For this purpose, a food composition database (FCDB) of R&P in Spanish foods was compiled. The study included 40,685 subjects aged 35-64 years from northern and southern regions of Spain who were included in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Spain cohort. Usual food intake was assessed by personal interviews using a computerised version of a validated diet history method. An FCDB with 160 items was compiled. The estimated median and mean R&P intakes were 100 and 933 μg/d, respectively. Approximately 32% of the population did not consume R&P. The most abundant of the four stilbenes studied was trans-piceid (53.6%), followed by trans-resveratrol (20.9%), cis-piceid (19.3%) and cis-resveratrol (6.2%). The most important sources of R&P were wines (98.4%) and grapes and grape juices (1.6%), whereas peanuts, pistachios and berries contributed less than 0.01%. For this reason, the intake pattern of R&P closely followed that of wine. This is the first time that R&P intake has been estimated in a Mediterranean country.
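Operationally, an intake estimate of this kind is a weighted sum of individual food consumption against the food composition database; a toy sketch with entirely hypothetical foods and concentrations:

```python
# Toy intake computation: daily intake of a compound as the sum over
# foods of (grams consumed per day) x (compound content per gram).
# All food names and concentrations below are hypothetical.
daily_consumption_g = {"red wine": 120.0, "grapes": 30.0, "peanuts": 10.0}
content_ug_per_g = {"red wine": 1.9, "grapes": 0.15, "peanuts": 0.007}

intake_ug = sum(daily_consumption_g[f] * content_ug_per_g[f]
                for f in daily_consumption_g)
print(f"estimated intake: {intake_ug:.1f} ug/day")
```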

Abstract:

Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with the use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.
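The data model just described, projections as 1D derivatives of the 2D Radon transform, can be mocked up discretely; the sketch below builds a toy sinogram by rotation-and-sum and differentiates each view, as a simplified stand-in for the discrete imaging models analyzed in the paper:

```python
# Toy DPCT forward model: differential phase-contrast projections as
# finite differences of Radon-transform sinogram rows.
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    # Rotate, then sum along columns: one sinogram row per view angle.
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

phantom = np.zeros((64, 64))
phantom[20:44, 24:40] = 1.0                             # refractive-index blob

angles = np.linspace(0.0, 180.0, 16, endpoint=False)    # few-view setup
sinogram = radon(phantom, angles)
dpc_data = np.diff(sinogram, axis=1)                    # 1-D derivative per view
print(dpc_data.shape)                                   # (16, 63)
```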

Abstract:

The aim of this thesis is to determine which risk factors affect stock returns. Six portfolios, sorted by market capitalization, are used as the test securities. The sample period runs from the beginning of 1987 to the end of 2004. The models used are the Capital Asset Pricing Model (CAPM), the Arbitrage Pricing Theory (APT), and the consumption-based CAPM (CCAPM). Market risk and macroeconomic risk factors are used as risk factors in the first two models. In the consumption-based model, the focus is on estimating consumers' risk preferences and the discount factor with which consumers value future consumption. This thesis introduces the method of moments, which allows estimation of both linear and nonlinear equations; we apply this method to all the models tested. In summary, the results show that market beta remains the most important risk factor, but we also find support for macroeconomic risk factors. Our consumption-based model performs fairly well, yielding theoretically acceptable parameter values.
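The moment-based estimation can be sketched for the consumption Euler equation E[ beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1} - 1 ] = 0; the toy GMM-style fit below recovers a discount factor and risk-aversion coefficient from simulated data, and is a generic formulation rather than the thesis's exact specification:

```python
# Toy GMM estimation of the discount factor (beta) and relative risk
# aversion (gamma) from the consumption Euler equation on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T = 2_000
growth = np.exp(rng.normal(0.02, 0.02, T))              # consumption growth
# Returns generated so that beta = 0.97, gamma = 2 satisfy the Euler equation.
returns = 1.0 / (0.97 * growth ** -2.0) * np.exp(rng.normal(0, 0.01, T))

def gmm_objective(theta):
    beta, gamma = theta
    g = beta * growth ** -gamma * returns - 1.0          # moment errors
    moments = np.array([g.mean(), (g * growth).mean()])  # growth as instrument
    return moments @ moments                             # identity weighting

res = minimize(gmm_objective, x0=[0.95, 1.0], method="Nelder-Mead")
print("beta, gamma =", res.x)                            # approx. 0.97, 2.0
```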

Abstract:

MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Thus, methods are needed to estimate evolutionary distances between expression profiles, as well as a neutral reference to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance. Neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved uniform pattern of expression. Here, we first extend this study to genes with conserved but specific patterns of expression. Surprisingly, we find that both Pearson's and Euclidean distances used as a measure of expression similarity between genes depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure that is widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to the expression profiles present in the datasets. Applying our method to mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
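The two divergence measures under discussion are easy to contrast directly; a minimal sketch with hypothetical five-tissue expression profiles:

```python
# Pearson vs Euclidean distance between two hypothetical expression
# profiles across tissues, the two divergence measures discussed above.
import numpy as np

profile_a = np.array([5.0, 5.2, 4.9, 5.1, 9.8])   # broad expression + one spike
profile_b = np.array([5.1, 4.8, 5.0, 5.2, 9.5])

pearson_dist = 1.0 - np.corrcoef(profile_a, profile_b)[0, 1]
euclidean_dist = np.linalg.norm(profile_a - profile_b)
print(f"Pearson distance:   {pearson_dist:.4f}")
print(f"Euclidean distance: {euclidean_dist:.4f}")
```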

Abstract:

This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image are then performed by comparing the color distance from each pixel to the different previously defined linear color models. The proposed methodology has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%: 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels, and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.
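The pixel-classification step reduces to a point-to-line distance in RGB space; a minimal sketch with a hypothetical "peach red" model direction (not the paper's fitted models):

```python
# Distance from each pixel to a linear color model (a line through the
# RGB origin), as in the red-peach segmentation described above.
# The model direction below is a hypothetical "peach red".
import numpy as np

model_dir = np.array([0.80, 0.35, 0.30])
u = model_dir / np.linalg.norm(model_dir)      # unit direction of the line

pixels = np.array([[200.0, 90.0, 75.0],        # reddish -> close to model
                   [60.0, 140.0, 60.0]])       # leafy green -> far

proj = (pixels @ u)[:, None] * u               # projection onto the line
dist = np.linalg.norm(pixels - proj, axis=1)   # point-to-line distance
print(dist)                                    # small for peach-like pixels
```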