898 results for MOMENT ESTIMATION


Abstract:

Allele frequencies at seven polymorphic loci controlling enzyme synthesis were analyzed in six populations of Culex pipiens L. and Cx. quinquefasciatus Say. Sampling sites were situated along a north-south transect of about 2,000 km in Argentina. The predominant alleles at the Mdh, Idh, Gpdh and Gpi loci presented similar frequencies in all samples. Frequencies at the Pgm locus were similar for population pairs sharing the same geographic area. The Cat and Hk-1 loci presented significant geographic variation. The latter showed a marked latitudinal cline, with the frequency of allele b ranging from 0.99 at the northernmost point to 0.04 at the southernmost one, a pattern that may be explained by natural selection (FST = 0.46; p < 0.0001) on heat-sensitive alleles. The average values of FST (0.088) and Nm (61.12) indicated high gene flow between adjacent populations. A high correlation was found between genetic and geographic distance (r = 0.83; p < 0.001). The highest genetic identity (IN = 0.988) corresponded to the geographically closest samples, from the central area. In one of these localities Cx. quinquefasciatus was predominant and hybrid individuals were detected, while in the other almost all specimens were identified as Cx. pipiens. To verify fertility between Cx. pipiens and Cx. quinquefasciatus from the northern- and southernmost populations, experimental crosses were performed. Viable egg rafts were obtained from both reciprocal crosses, with hatching ranging from 76.5 to 100%. The hybrid progenies remained fertile through two subsequent generations.
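As a worked illustration of the quantities reported above, the sketch below computes Wright's FST for a single biallelic locus from per-population allele frequencies, FST = (HT - HS)/HT, together with the textbook island-model approximation Nm ≈ (1 - FST)/(4 FST). The frequencies are hypothetical and this is not necessarily the estimator used by the authors.

```python
# Hedged sketch: Wright's F_ST for one biallelic locus from per-population
# allele frequencies, plus the island-model gene-flow approximation.
import numpy as np

def wright_fst(allele_freqs) -> float:
    """F_ST = (H_T - H_S) / H_T from frequencies of one allele per population."""
    p = np.asarray(allele_freqs, dtype=float)
    hs = np.mean(2.0 * p * (1.0 - p))      # mean within-population heterozygosity
    pbar = p.mean()
    ht = 2.0 * pbar * (1.0 - pbar)         # total expected heterozygosity
    return (ht - hs) / ht

# Hypothetical clinal frequencies of allele b at a locus such as Hk-1:
freqs = [0.99, 0.90, 0.70, 0.40, 0.15, 0.04]
fst = wright_fst(freqs)
print(f"F_ST = {fst:.3f}, island-model Nm ~ {(1 - fst) / (4 * fst):.2f}")
```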

Abstract:

Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
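As a minimal illustration of the replicate-based idea (not the study's actual Stacks pipeline), the sketch below computes a SNP-level genotyping error rate as the mismatch rate between two replicate genotype calls of the same sample; the SNP identifiers and genotypes are toy values.

```python
# Hedged sketch: SNP genotyping error rate from one pair of sample replicates,
# counting genotype mismatches at SNPs scored in both replicates.
from typing import Dict, Tuple

Genotype = Tuple[str, str]

def snp_error_rate(rep1: Dict[str, Genotype], rep2: Dict[str, Genotype]) -> float:
    """Mismatch rate over SNPs genotyped in both replicates."""
    shared = set(rep1) & set(rep2)
    if not shared:
        return float("nan")
    mismatches = sum(sorted(rep1[s]) != sorted(rep2[s]) for s in shared)
    return mismatches / len(shared)

# Toy replicate genotypes keyed by SNP identifier (hypothetical):
rep_a = {"snp1": ("A", "G"), "snp2": ("C", "C"), "snp3": ("T", "T")}
rep_b = {"snp1": ("G", "A"), "snp2": ("C", "T"), "snp3": ("T", "T")}
print(f"SNP error rate: {snp_error_rate(rep_a, rep_b):.2f}")
```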

Abstract:

This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
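To make the two aggregation routes concrete, the sketch below contrasts the Standard Formula's square-root correlation aggregation, SCR = sqrt(sum_ij rho_ij SCR_i SCR_j), with a small Monte Carlo internal model using a Gaussian copula. The correlation matrix, per-line charges and volatilities are illustrative, not the paper's Spanish-market calibration.

```python
# Hedged sketch of the two aggregation approaches discussed in the abstract.
import numpy as np

def standard_formula_scr(scr_lob: np.ndarray, corr: np.ndarray) -> float:
    """Standard Formula aggregation: SCR = sqrt(s' R s) over per-line charges s."""
    return float(np.sqrt(scr_lob @ corr @ scr_lob))

scr_lob = np.array([100.0, 60.0, 40.0])              # illustrative per-line charges
corr = np.array([[1.00, 0.50, 0.25],
                 [0.50, 1.00, 0.25],
                 [0.25, 0.25, 1.00]])
print("Standard Formula SCR :", round(standard_formula_scr(scr_lob, corr), 1))

# Internal model: simulate correlated per-line losses (Gaussian marginals under
# a Gaussian copula) and take the 99.5% quantile of the aggregate loss.
z995 = 2.5758                                        # 99.5% standard normal quantile
sigma = scr_lob / z995                               # back out per-line volatilities
cov = corr * np.outer(sigma, sigma)
rng = np.random.default_rng(42)
total = rng.multivariate_normal(np.zeros(3), cov, size=200_000).sum(axis=1)
print("Internal model SCR   :", round(float(np.quantile(total, 0.995)), 1))
```

With Gaussian marginals under a Gaussian copula the two figures roughly coincide; the paper's point is that heavier-tailed marginals, other copulas or a modified correlation matrix move the Internal Model figure away from the Standard Formula one.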

Abstract:

Introduction: Multiple sclerosis (MS) is the prototypical autoimmune disorder of the central nervous system. With about 110 cases per 100,000 inhabitants, Switzerland is considered a high-prevalence country. In about 80% of patients the disease begins as the relapsing-remitting (RR) form, in which acute relapses alternate with periods of remission. In its natural course this phase generally ends in a secondary progressive phase, during which disability progresses in the absence of relapses. Pathophysiologically, two processes interact: inflammatory demyelination and neurodegeneration. The former underlies the acute relapses; the latter manifests clinically as the irreversible progression of neurological deficit. In Switzerland, immunomodulators have been used as disease-modifying therapies for MS since 1995. Their effect on the relapse rate has been amply demonstrated, whereas their efficacy on the long-term course of the disease remains an open question. The most widely used instrument for quantifying neurological disability is the Kurtzke Expanded Disability Status Scale (EDSS), which grades neurological impairment from 0 (normal examination) to 10 (death) in half-point steps. Our study aimed to identify early clinical factors predictive of the evolution of permanent neurological deficit, and to analyze the timing of treatment initiation in order to extract information useful for therapeutic decision-making. Methods: The iMed-CHUV database, comprising 1,150 MS patients (622 with RR MS), was analyzed retrospectively to assess, in RR MS, the influence of several early clinical variables (relapse rate during the first two years of disease, interval between the first two relapses, severity and anatomical site of the first relapse, residual deficit after the first relapse) and of two characteristics of the initiation of disease-modifying immunosuppressive treatment (age and delay of introduction) on the progression of neurological deficit to an EDSS score ≥ 4.0. Variables were tested with the Kaplan-Meier survival estimation method. Results: 349 patients with RR MS met the inclusion criteria; mean follow-up was 8.26 years (SD 4.77). A high relapse rate during the first 2 years (>1 vs ≤1) and a long interval between the first two episodes (>36 vs >12-36 vs ≤12) were significantly associated with the risk of progression of neurological deficit to an EDSS score of 4.0 or more (log-rank P = 0.016 and P = 0.008, respectively). In contrast, neither the anatomical site of the first relapse nor the age at introduction of immunomodulatory treatment had a significant influence on the progression of neurological deficit (log-rank P = 0.370 and P = 0.945, respectively). Surprisingly, early introduction of treatment was associated with greater progression of neurological deficit (log-rank P = 0.032), indicating that a subset of patients has a benign course even in the absence of treatment. Conclusions: Early inflammatory activity, whose level can be estimated from early indices such as the relapse rate and the interval between the first two relapses, but not the site of first manifestation, predicts the subsequent progression of neurological deficit.
These indices should be used in combination with the information provided by MRI for the early identification and treatment of at-risk patients, regardless of their age. Given the adverse effects and high costs, therapies should specifically target the at-risk classes and spare patients with a slowly evolving disease.
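For readers unfamiliar with the statistical machinery, the sketch below shows how such a Kaplan-Meier / log-rank comparison of time to EDSS ≥ 4.0 between early relapse-rate groups could be coded with the lifelines package; the file name, column names and grouping threshold are hypothetical, not taken from the iMed-CHUV database.

```python
# Hedged sketch: Kaplan-Meier curves and a log-rank test comparing time to
# EDSS >= 4.0 between early relapse-rate groups (placeholder data layout).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: 'years_to_edss4' = follow-up time in years,
# 'reached_edss4' = 1 if EDSS >= 4.0 was reached (0 = censored),
# 'relapses_first_2y' = number of relapses in the first two disease years.
df = pd.read_csv("ms_rr_cohort.csv")            # placeholder file name
high = df[df["relapses_first_2y"] > 1]
low = df[df["relapses_first_2y"] <= 1]

kmf = KaplanMeierFitter()
for label, grp in ((">1 relapse", high), ("<=1 relapse", low)):
    kmf.fit(grp["years_to_edss4"], event_observed=grp["reached_edss4"], label=label)
    print(label, "median time to EDSS 4.0:", kmf.median_survival_time_)

res = logrank_test(high["years_to_edss4"], low["years_to_edss4"],
                   event_observed_A=high["reached_edss4"],
                   event_observed_B=low["reached_edss4"])
print("log-rank p-value:", res.p_value)
```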

Abstract:

Report on the scientific sojourn at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first project, we employed Energy Decomposition Analysis (EDA) to investigate aromaticity in Fischer carbenes, as it runs through all the reaction mechanisms studied in my PhD thesis. This powerful tool, compared with other well-known aromaticity indices in the literature such as NICS, is useful not only for quantitative results but also for measuring the degree of conjugation or hyperconjugation in molecules. Our results showed, for the annelated benzenoid systems studied here, that electron density is more concentrated on the outer rings than in the central one. Strain-induced bond localization plays a major role as a driving force in keeping the more substituted ring the less aromatic one. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry to explain aromaticity and regioselectivity in reactions found in those systems. In the second project, we employed the Turbomole program package and some of the best-performing density functionals in the current state of the art to explore reaction mechanisms in noble-gas chemistry. In particular, we were interested in compounds of the form H-Ng-Ng-F (where Ng (noble gas) = Ar, Kr and Xe) and investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble gases present lower activation barriers and are therefore more labile and harder to observe experimentally.
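A back-of-the-envelope Eyring estimate (ours, not the report's) illustrates why an 11 kcal/mol barrier is compatible with observation in a cryogenic matrix but not at room temperature; the temperatures chosen below are illustrative.

```python
# Illustrative Eyring-equation estimate: unimolecular decomposition rate
# k = (kB*T/h) * exp(-dG/(R*T)) for an 11 kcal/mol barrier at several
# temperatures, showing effective stability in a low-temperature matrix.
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R = 8.314462618        # gas constant, J/(mol*K)

def eyring_rate(barrier_kcal_per_mol: float, temperature_k: float) -> float:
    """Unimolecular rate constant in 1/s from the Eyring equation."""
    dg = barrier_kcal_per_mol * 4184.0          # kcal/mol -> J/mol
    return (KB * temperature_k / H) * math.exp(-dg / (R * temperature_k))

for T in (40.0, 100.0, 298.0):                  # matrix vs. room temperature
    print(f"T = {T:5.0f} K   k ~ {eyring_rate(11.0, T):.3e} 1/s")
# At ~40 K the rate is effectively zero (half-life far beyond any experiment),
# while at room temperature the species would decompose within microseconds.
```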

Abstract:

QUESTIONS UNDER STUDY AND PRINCIPLES: Estimating glomerular filtration rate (GFR) in hospitalised patients with chronic kidney disease (CKD) is important for drug prescription, but it remains a difficult task. The purpose of this study was to investigate the reliability of selected algorithms based on serum creatinine, cystatin C and beta-trace protein to estimate GFR, and the potential added advantage of measuring muscle mass by bioimpedance. In a prospective, unselected group of patients with CKD hospitalised in a general internal medicine ward, GFR was evaluated using inulin clearance as the gold standard and estimated with the algorithms of Cockcroft, MDRD, Larsson (cystatin C), White (beta-trace) and MacDonald (creatinine and muscle mass by bioimpedance). Sixty-nine patients were included in the study. Median age (interquartile range) was 80 years (73-83); weight 74.7 kg (67.0-85.6), appendicular lean mass 19.1 kg (14.9-22.3), serum creatinine 126 μmol/l (100-149), cystatin C 1.45 mg/l (1.19-1.90), beta-trace protein 1.17 mg/l (0.99-1.53) and GFR measured by inulin 30.9 ml/min (22.0-43.3). The errors in the estimation of GFR and the areas under the ROC curves (95% confidence interval) relative to inulin were, respectively: Cockcroft 14.3 ml/min (5.55-23.2) and 0.68 (0.55-0.81), MDRD 16.3 ml/min (6.4-27.5) and 0.76 (0.64-0.87), Larsson 12.8 ml/min (4.50-25.3) and 0.82 (0.72-0.92), White 17.6 ml/min (11.5-31.5) and 0.75 (0.63-0.87), MacDonald 32.2 ml/min (13.9-45.4) and 0.65 (0.52-0.78). Currently used algorithms overestimate GFR in hospitalised patients with CKD. As a consequence, eGFR-targeted prescriptions of renally cleared drugs might expose patients to overdosing. The best results were obtained with the Larsson algorithm. The determination of muscle mass by bioimpedance did not provide a significant additional contribution.
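As a concrete example of one of the compared algorithms, the sketch below implements the commonly published Cockcroft-Gault estimate with creatinine given in μmol/l; the example values are close to the cohort medians, but the study's exact implementation details are not specified in the abstract.

```python
# Minimal sketch of the Cockcroft-Gault creatinine-clearance estimate,
# one of the algorithms compared in the study (formula as commonly published).
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_umol_l: float, female: bool) -> float:
    """Estimated creatinine clearance in ml/min."""
    scr_mg_dl = serum_creatinine_umol_l / 88.4   # convert umol/l -> mg/dl
    crcl = (140.0 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Example with values close to the cohort medians (80 y, 74.7 kg, 126 umol/l):
print(round(cockcroft_gault(80, 74.7, 126, female=False), 1), "ml/min")
```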

Abstract:

This paper examines why a financial entity’s solvency capital estimation might be underestimated if the total amount required is obtained directly from a risk measurement. Using Monte Carlo simulation we show that, in some instances, a common risk measure such as Value-at-Risk is not subadditive when certain dependence structures are considered. Higher risk evaluations are obtained for independence between random variables than those obtained in the case of comonotonicity. The paper stresses, therefore, the relationship between dependence structures and capital estimation.
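A small simulation makes the subadditivity failure tangible. The sketch below (an illustration under assumed Pareto marginals, not the paper's own case study) shows that with very heavy-tailed losses the Value-at-Risk of an independent sum can exceed the comonotonic value VaR(X) + VaR(Y).

```python
# Hedged illustration: 99% VaR of a sum of two Pareto(alpha < 1) losses under
# independence versus comonotonicity; VaR is additive for comonotonic risks
# but typically larger for the independent sum in this heavy-tailed setting.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, q = 1_000_000, 0.9, 0.99          # sample size, tail index, VaR level

def var(sample: np.ndarray) -> float:
    """Empirical Value-at-Risk at confidence level q."""
    return float(np.quantile(sample, q))

# Pareto(alpha) losses via inverse transform: X = U ** (-1/alpha), U ~ Uniform(0,1)
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
x, y = u1 ** (-1 / alpha), u2 ** (-1 / alpha)        # independent losses
x_c, y_c = u1 ** (-1 / alpha), u1 ** (-1 / alpha)    # comonotonic: same driver

print("VaR(X) + VaR(Y)        :", round(var(x) + var(y), 1))
print("VaR(X + Y), independent:", round(var(x + y), 1))
print("VaR(X + Y), comonotonic:", round(var(x_c + y_c), 1))
```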

Abstract:

BACKGROUND: Estimated glomerular filtration rate (eGFR) is an important diagnostic instrument in clinical practice. The National Kidney Foundation-Kidney Disease Quality Initiative (NKF-KDOQI) guidelines do not recommend using formulas developed for adults to estimate GFR in children; however, studies confirming these recommendations are scarce. The aim of our study was to evaluate the accuracy of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula, the Modification of Diet in Renal Disease (MDRD) formula, and the Cockcroft-Gault formula in children with various stages of chronic kidney disease (CKD). METHODS: A total of 550 inulin clearance (iGFR) measurements for 391 children were analyzed. The cohort was divided into three groups: group 1, with iGFR >90 ml/min/1.73 m(2); group 2, with iGFR between 60 and 90 ml/min/1.73 m(2); group 3, with iGFR of <60 ml/min/1.73 m(2). RESULTS: All formulas overestimate iGFR with a significant bias (p < 0.001), present poor accuracies, and have poor Spearman correlations. For an accuracy of 10 %, only 11, 6, and 27 % of the eGFRs are accurate when using the MDRD, CKD-EPI, and Cockcroft-Gault formulas, respectively. For an accuracy of 30 %, these formulas do not reach the NKF-KDOQI guidelines for validation, with only 25, 20, and 70 % of the eGFRs, respectively, being accurate. CONCLUSIONS: Based on our results, the performances of all of these formulas are unreliable for eGFR in children across all CKD stages and cannot therefore be applied in the pediatric population group.
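The accuracy criterion used above (often called P10/P30) is the share of estimates falling within ±10% or ±30% of the measured iGFR. A minimal sketch with placeholder values, not data from the study:

```python
# Sketch of the P10/P30 accuracy metric used to benchmark eGFR formulas.
import numpy as np

def p_accuracy(egfr: np.ndarray, igfr: np.ndarray, tolerance: float) -> float:
    """Percentage of eGFR values within `tolerance` (e.g. 0.30) of measured iGFR."""
    within = np.abs(egfr - igfr) <= tolerance * igfr
    return 100.0 * within.mean()

igfr = np.array([95.0, 72.0, 41.0, 28.0])        # placeholder inulin clearances
egfr = np.array([118.0, 75.0, 60.0, 43.0])       # placeholder formula estimates
print("P10:", p_accuracy(egfr, igfr, 0.10), "%")
print("P30:", p_accuracy(egfr, igfr, 0.30), "%")
```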

Abstract:

Introduction: Osteoporosis (OP) is a systemic skeletal disease characterized by low bone mineral density (BMD) and micro-architectural (MA) deterioration. Clinical risk factors (CRF) are often used as an approximation of MA. MA can nevertheless be evaluated in daily practice with the Trabecular Bone Score (TBS), a novel grey-level texture measurement reflecting bone micro-architecture, based on experimental variograms of 2D projection images. TBS is very simple to obtain by reanalyzing a lumbar DXA scan, and has proven diagnostic and prognostic value, partially independent of CRF and BMD. The aim of the OsteoLaus cohort is to combine, in daily practice, the CRF and the information given by DXA (BMD, TBS and vertebral fracture assessment (VFA)) to better identify women at high fracture risk. Method: The OsteoLaus cohort (1,400 women aged 50 to 80 years living in Lausanne, Switzerland) started in 2010. It is derived from the COLAUS cohort, started in Lausanne in 2003, whose main goal is to obtain information on the epidemiology and genetic determinants of cardiovascular risk in 6,700 men and women. CRF for OP, bone ultrasound of the heel, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded in OsteoLaus. Preliminary results are reported. Results: We included 631 women: mean age 67.4±6.7 y, BMI 26.1±4.6, mean lumbar spine BMD 0.943±0.168 (T-score -1.4 SD), TBS 1.271±0.103. As expected, the correlation between BMD and site-matched TBS is low (r2=0.16). The prevalence of VFx grade 2/3, major OP Fx and all OP Fx is 8.4%, 17.0% and 26.0%, respectively. Age- and BMI-adjusted ORs (per SD decrease) are 1.8 (1.2-2.5), 1.6 (1.2-2.1) and 1.3 (1.1-1.6) for BMD for the different fracture categories, and 2.0 (1.4-3.0), 1.9 (1.4-2.5) and 1.4 (1.1-1.7) for TBS, respectively. Taken alone, a BMD < -2.5 SD or a TBS < 1.200 identifies only 32 to 37% of women with OP Fx; combining the two criteria (BMD < -2.5 SD or TBS < 1.200) identifies 54 to 60% of women with an osteoporotic Fx. Conclusion: As in previously published studies, these preliminary results confirm the partial independence between BMD and TBS. More importantly, adding TBS to BMD significantly increases the identification of women with prevalent OP Fx who would have been misclassified by BMD alone. For the first time we are able to obtain complementary information about fracture (VFA), density (BMD), and micro- and macro-architecture (TBS & HAS) from a single simple, cheap device with low ionizing radiation: DXA. Such complementary information is very useful for the patient in daily practice and will likely have an impact on cost-effectiveness analyses.
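A minimal sketch of how an age- and BMI-adjusted odds ratio per SD decrease of TBS could be computed with statsmodels; the file and column names are hypothetical and the data are not from OsteoLaus.

```python
# Hedged sketch: adjusted OR per SD *decrease* of TBS for prevalent fracture,
# in the spirit of the analysis described above (placeholder data layout).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("osteolaus_subset.csv")        # placeholder file name
# Standardize TBS and flip the sign so the OR is expressed per SD decrease.
df["tbs_z_dec"] = -(df["tbs"] - df["tbs"].mean()) / df["tbs"].std()

X = sm.add_constant(df[["tbs_z_dec", "age", "bmi"]])
model = sm.Logit(df["prevalent_fx"], X).fit(disp=0)

or_per_sd = np.exp(model.params["tbs_z_dec"])
ci_low, ci_high = np.exp(model.conf_int().loc["tbs_z_dec"])
print(f"OR per SD decrease of TBS: {or_per_sd:.2f} ({ci_low:.2f}-{ci_high:.2f})")
```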

Abstract:

This thesis focuses on theoretical asset pricing models and their empirical applications. I aim to investigate the following noteworthy problems: i) whether the relationship between asset prices and investors' propensities to gamble and to fear disaster is time varying, ii) whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk, iii) whether costly learning drives liquidity risk. Moreover, empirical tests support the above assumptions and provide novel findings in asset pricing, investment decisions, and firms' funding liquidity. The first chapter considers a partial equilibrium model where investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of extreme returns. Using US data from 1988 to 2012, my model demonstrates that in bad times, risk aversion is higher, more people fear disaster, and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. The second chapter consists of two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major upward and downward market movements can be well predicted by the asymmetric comovement of betas, which is characterized by an indicator called "Systematic Downside Risk" (SDR). We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squares (compared with a strategy using the historical mean) of more than 2.27% with monthly data. The second essay reconciles a well-known empirical fact: aggregating positively skewed firm returns leads to a negatively skewed market return. We reconcile this fact through firms' greater response to negative market news than to positive market news. We also propose several market return predictors, such as downside idiosyncratic skewness. The third chapter studies funding liquidity risk based on a general equilibrium model featuring two agents: one entrepreneur and one external investor. Only the investor needs to acquire information to estimate the unobservable fundamentals driving the economic outputs. The novelty is that information acquisition is more costly in bad times than in good times, i.e. a counter-cyclical information cost, as supported by previous empirical evidence. We then show that liquidity risks are principally driven by costly learning.
Résumé: This thesis presents theoretical asset pricing models and their empirical applications. My aim is to study the following problems: whether the relationship between asset prices and investors' propensities to gamble and to fear disaster varies over time; whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk; and whether costly learning increases liquidity risk. Moreover, empirical tests confirm the above assumptions and provide new findings concerning asset pricing, investment decisions and firms' funding liquidity. The first chapter examines an equilibrium model in which investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of disaster. Using US data from 1988 to 2012, my model shows that in bad times risk aversion is higher, more people fear disaster and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during economic booms. Exploiting this relationship alone would generate an annual excess return of 7.74% that is not explained by popular factor models. The second chapter consists of two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major upward and downward market movements can be predicted by the comovement of betas. An indicator called Systematic Downside Risk (SDR) is constructed to characterize this asymmetry in the comovement of betas. We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squares (compared with a strategy using the historical mean) of more than 2.272% with monthly data. An investor timing the market with SDR would have obtained a substantial increase in the ratio, of 0.206. The second essay reconciles a well-known empirical fact about firm- and market-level skewness: aggregating positively skewed firm returns leads to a negatively skewed market return. We decompose market return skewness at the firm level and reconcile this fact through firms' greater response to negative market news than to positive market news. This decomposition reveals several effective market return predictors, such as volatility-weighted idiosyncratic skewness and downside idiosyncratic skewness. The third chapter provides a new theoretical foundation for time-varying liquidity problems in an incomplete-market environment. We propose a general equilibrium model with two agents: an entrepreneur and an external investor. Only the investor needs to learn the true state of the firm, so information about payoffs is costly. The novelty is that acquiring information is more expensive in bad times than in good times, as supported by previous empirical evidence. When a recession begins, costly learning raises liquidity premia, causing a liquidity dry-up, as also supported by previous empirical evidence.
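The out-of-sample R² quoted above is typically computed against the historical-mean benchmark (R²_OS = 1 - SSE_model / SSE_benchmark). A hedged sketch of that metric with placeholder series, not the thesis's SDR-based forecasts:

```python
# Sketch of the out-of-sample R-squared versus the historical-mean benchmark.
import numpy as np

def oos_r_squared(realized: np.ndarray, model_forecast: np.ndarray,
                  benchmark_forecast: np.ndarray) -> float:
    """Positive values mean the model beats the historical-mean benchmark."""
    sse_model = np.sum((realized - model_forecast) ** 2)
    sse_bench = np.sum((realized - benchmark_forecast) ** 2)
    return 1.0 - sse_model / sse_bench

rng = np.random.default_rng(1)
r = rng.normal(0.005, 0.04, size=240)                       # placeholder monthly returns
bench = np.array([r[:t].mean() for t in range(1, len(r))])  # expanding-window mean
model = bench + rng.normal(0.0, 0.01, size=len(bench))      # arbitrary stand-in forecast
print(f"OOS R^2: {oos_r_squared(r[1:], model, bench):.4f}")
```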

Abstract:

The increase in malaria transmission on the Pacific Coast of Colombia during El Niño warm events has been found not to be linked to increases in the density of the vector Anopheles albimanus, but to other temperature-sensitive variables such as longevity, the duration of the gonotrophic cycle, or the sporogonic period of Plasmodium. The present study estimated the effects of temperature on the duration of the gonotrophic cycle and on the maturation of the ovaries of An. albimanus. Blood-fed adult mosquitoes were exposed to temperatures of 24, 27, and 30°C, held individually in oviposition cages and assessed at 12 h intervals. At 24, 27, and 30°C the mean development time of the oocytes was 91.2 h (95% C.I.: 86.5-96), 66.2 h (61.5-70.8), and 73.1 h (64-82.3), respectively. The mean duration of the gonotrophic cycle at these three temperatures was 88.4 h (81.88-94.9), 75 h (71.4-78.7), and 69.1 h (64.6-73.6), respectively. These findings indicate that both parameters in An. albimanus are reduced, in a nonlinear manner, as temperature rises from 24 to 30°C. According to these results, the increase in malaria transmission during El Niño in Colombia could be associated with a shortening of the gonotrophic cycle in malaria vectors, which could increase the frequency of man-vector contact and thereby affect the incidence of the disease.
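The classical vectorial-capacity formula is one way to make the final reasoning concrete: a shorter gonotrophic cycle raises the daily human-biting rate a, and vectorial capacity scales with a². The sketch below is illustrative only; apart from the two observed cycle lengths, all parameter values (density, human blood index, survival, extrinsic incubation period) are assumed, not taken from the study.

```python
# Illustrative calculation with the classical vectorial-capacity formula
# C = m * a^2 * p^n / (-ln p), where a = human blood index / cycle length (days).
import math

def vectorial_capacity(m: float, gonotrophic_cycle_days: float, hbi: float,
                       daily_survival: float, eip_days: float) -> float:
    a = hbi / gonotrophic_cycle_days            # human bites per mosquito per day
    return m * a**2 * daily_survival**eip_days / (-math.log(daily_survival))

# Compare the observed cycle lengths at 24°C (88.4 h) and 30°C (69.1 h),
# holding the other (assumed) parameters fixed.
for label, hours in (("24 C", 88.4), ("30 C", 69.1)):
    c = vectorial_capacity(m=10.0, gonotrophic_cycle_days=hours / 24.0,
                           hbi=0.5, daily_survival=0.85, eip_days=10.0)
    print(f"{label}: cycle {hours / 24:.2f} d, vectorial capacity ~ {c:.2f}")
```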