884 results for effort estimation
Abstract:
This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods. The methods are implemented using the statistical package R. Parametric analysis is limited to estimation of normal and lognormal distributions for each of the two claim types. The nonparametric analysis presented involves kernel density estimation. We illustrate the benefits of applying transformations to data prior to employing kernel-based methods. We use a log-transformation and an optimal transformation amongst a class of transformations that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, we have included in the Appendix of this paper all the R code used in the analysis so that readers, both students and educators, can fully explore the techniques described.
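The paper's full implementation is the R code in its Appendix. As a language-neutral illustration of the transformation idea described above, the following Python sketch (hypothetical claim amounts; numpy and scipy assumed) fits a lognormal distribution parametrically and performs kernel density estimation on the log scale, recovering the severity density through the change-of-variables factor 1/x:

    # Minimal sketch of transformation-based kernel density estimation,
    # analogous in spirit to the paper's R analysis (illustrative data,
    # not the paper's claims).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    claims = rng.lognormal(mean=8.0, sigma=1.2, size=500)  # hypothetical severities

    # 1) Parametric fit: lognormal via maximum likelihood on the log scale.
    mu_hat, sigma_hat = np.log(claims).mean(), np.log(claims).std(ddof=1)

    # 2) Nonparametric fit: Gaussian KDE on log-transformed data, then
    #    back-transform the density: f_X(x) = f_Y(log x) / x.
    kde_log = stats.gaussian_kde(np.log(claims))

    def kde_back_transformed(x):
        x = np.asarray(x, dtype=float)
        return kde_log(np.log(x)) / x

    m = np.median(claims)
    print("lognormal density at median:",
          stats.lognorm.pdf(m, s=sigma_hat, scale=np.exp(mu_hat)))
    print("transformed-KDE density at median:", kde_back_transformed(m))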
Abstract:
Allele frequencies at seven polymorphic loci controlling the synthesis of enzymes were analyzed in six populations of Culex pipiens L. and Cx. quinquefasciatus Say. Sampling sites were situated along a north-south line of about 2,000 km in Argentina. The predominant alleles at the Mdh, Idh, Gpdh and Gpi loci presented similar frequencies in all the samples. Frequencies at the Pgm locus were similar for population pairs sharing the same geographic area. The Cat and Hk-1 loci presented significant geographic variation. The latter showed a marked latitudinal cline, with the frequency of allele b ranging from 0.99 at the northernmost point to 0.04 at the southernmost one, a pattern that may be explained by natural selection (FST = 0.46; p < 0.0001) on heat-sensitive alleles. The average values of FST (0.088) and Nm (61.12) indicated high gene flow between adjacent populations. A high correlation was found between genetic and geographic distance (r = 0.83; p < 0.001). The highest genetic identity (IN = 0.988) corresponded to the geographically closest samples, from the central area. In one of these localities Cx. quinquefasciatus was predominant and hybrid individuals were detected, while in the other almost all the specimens were identified as Cx. pipiens. To verify fertility between Cx. pipiens and Cx. quinquefasciatus from the northernmost and southernmost populations, experimental crosses were performed. Viable egg rafts were obtained from both reciprocal crosses. Hatching ranged from 76.5 to 100%. The hybrid progenies were fertile through two subsequent generations.
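As a hedged illustration of the population-genetic statistics quoted above, the sketch below computes Wright's FST from two-allele frequencies across subpopulations, together with the island-model approximation Nm ~ (1 - FST)/(4 FST). The frequencies are invented, and the paper's own estimates were presumably obtained with multi-locus estimators rather than this textbook formula:

    # Minimal sketch of Wright's F_ST for a biallelic locus across
    # subpopulations (hypothetical frequencies, not the paper's data).
    import numpy as np

    p = np.array([0.99, 0.80, 0.50, 0.20, 0.04])  # allele-b frequency per population

    hs = np.mean(2 * p * (1 - p))   # mean expected heterozygosity within populations
    p_bar = p.mean()
    ht = 2 * p_bar * (1 - p_bar)    # expected heterozygosity of the pooled population

    fst = (ht - hs) / ht
    print(f"F_ST = {fst:.3f}")

    # Under Wright's island model, F_ST relates to the number of
    # migrants per generation as Nm ~ (1 - F_ST) / (4 * F_ST).
    print(f"Nm ~ {(1 - fst) / (4 * fst):.2f}")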
Abstract:
The effectiveness of R&D subsidies can vary substantially depending on their characteristics. Specifically, the amount and intensity of such subsidies are crucial issues in the design of public schemes supporting private R&D. Public agencies determine the intensities of R&D subsidies for firms in line with their eligibility criteria, although assessing the effects of R&D projects accurately is far from straightforward. The main aim of this paper is to examine whether there is an optimal intensity for R&D subsidies through an analysis of their impact on private R&D effort. We examine the decisions of a public agency to grant subsidies taking into account not only the characteristics of the firms but also, as few previous studies have done to date, those of the R&D projects. In determining the optimal subsidy we use both parametric and nonparametric techniques. The results show a non-linear relationship between the percentage of subsidy received and the firms’ R&D effort. These results have implications for technology policy, particularly for the design of R&D subsidies that ensure enhanced effectiveness.
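The authors use both parametric and nonparametric techniques; as one generic nonparametric possibility (not the authors' specification), the sketch below runs a Nadaraya-Watson kernel regression of R&D effort on subsidy intensity over simulated data and reads off the effort-maximizing intensity:

    # Illustrative Nadaraya-Watson kernel regression of R&D effort on
    # subsidy intensity, with simulated data (not the paper's dataset).
    import numpy as np

    rng = np.random.default_rng(0)
    intensity = rng.uniform(0, 1, 300)                  # subsidy share of project cost
    effort = (0.5 + 1.5 * intensity - 1.2 * intensity**2
              + rng.normal(0, 0.1, 300))                # hump-shaped true relation

    def nw_regression(x0, x, y, h=0.08):
        """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
        w = np.exp(-0.5 * ((x - x0[:, None]) / h) ** 2)
        return (w * y).sum(axis=1) / w.sum(axis=1)

    grid = np.linspace(0.05, 0.95, 91)
    fitted = nw_regression(grid, intensity, effort)
    print(f"effort-maximizing subsidy intensity ~ {grid[fitted.argmax()]:.2f}")

With a hump-shaped true relation, the fitted curve peaks at an interior intensity, which is the kind of non-linear pattern the paper reports.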
Abstract:
Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
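A minimal sketch of the replicate-based idea, with invented genotype calls: the SNP error rate is the fraction of loci, called in both replicates of the same sample, at which the two calls disagree. In the study, this kind of rate is also the objective minimized when tuning the Stacks assembly parameters:

    # Replicate-based genotyping-error estimation: mismatch rate across
    # loci typed in both replicates of one sample (illustrative calls,
    # not the Berberis alpina data).
    MISSING = -1  # sentinel for no-calls

    def snp_error_rate(rep1, rep2):
        """Fraction of co-called loci at which the two replicates disagree."""
        compared = mismatched = 0
        for g1, g2 in zip(rep1, rep2):
            if g1 == MISSING or g2 == MISSING:
                continue          # skip loci not called in both replicates
            compared += 1
            mismatched += (g1 != g2)
        return mismatched / compared if compared else float("nan")

    # 0/1/2 = copies of the alternate allele at each locus.
    rep1 = [0, 1, 2, 1, 0, MISSING, 2, 1]
    rep2 = [0, 1, 2, 0, 0, 1,       2, 1]
    print(f"SNP error rate: {snp_error_rate(rep1, rep2):.3f}")  # 1 mismatch / 7 loci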
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
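As a hedged illustration of the Internal Model approach described above, the following sketch simulates one-year results for two lines of business joined by a Gaussian copula and reads the SCR off the 99.5% quantile of the aggregate loss; the margins, parameters and correlation are hypothetical, not the Spanish-market calibration:

    # Illustrative Internal-Model sketch: two lines of business coupled
    # by a Gaussian copula; SCR taken as VaR 99.5% above the mean.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, rho = 100_000, 0.5

    # Gaussian copula: correlated normals -> uniforms -> chosen margins.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)

    loss_lob1 = stats.lognorm.ppf(u[:, 0], s=0.9, scale=100.0)  # heavy-tailed line
    loss_lob2 = stats.gamma.ppf(u[:, 1], a=2.0, scale=40.0)     # lighter-tailed line

    total = loss_lob1 + loss_lob2
    scr = np.quantile(total, 0.995) - total.mean()  # capital above expected loss
    print(f"SCR (VaR 99.5% minus mean): {scr:,.1f}")

Swapping the Gaussian copula for another family (e.g., t or Clayton) with the same margins changes the tail dependence and hence the SCR, which is exactly the sensitivity the paper investigates.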
Abstract:
Report on a scientific stay at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first project, we employed Energy-Decomposition Analysis (EDA) to investigate aromaticity in Fischer carbenes, as it relates to all the reaction mechanisms studied in my PhD thesis. This powerful tool, compared with other well-known aromaticity indices in the literature such as NICS, is useful not only for quantitative results but also for measuring the degree of conjugation or hyperconjugation in molecules. Our results showed, for the annelated benzenoid systems studied here, that electron density is more concentrated in the outer rings than in the central one. Strain-induced bond localization plays a major role as a driving force in keeping the more substituted ring the less aromatic one. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry to explain the aromaticity and regioselectivity of reactions found in those systems. In the second project, we employed the Turbomole program package and the best-performing density functionals in the current state of the art to explore reaction mechanisms in noble-gas chemistry. In particular, we were interested in compounds of the form H--Ng--Ng--F (where Ng (noble gas) = Ar, Kr or Xe) and investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble-gas compounds present lower activation barriers and are therefore more labile and more difficult to observe experimentally.
Abstract:
QUESTIONS UNDER STUDY AND PRINCIPLES: Estimating glomerular filtration rate (GFR) in hospitalised patients with chronic kidney disease (CKD) is important for drug prescription, but it remains a difficult task. The purpose of this study was to investigate the reliability of selected algorithms based on serum creatinine, cystatin C and beta-trace protein for estimating GFR, and the potential added advantage of measuring muscle mass by bioimpedance. In a prospective, unselected group of patients with CKD hospitalised in a general internal medicine ward, GFR was evaluated using inulin clearance as the gold standard and the algorithms of Cockcroft, MDRD, Larsson (cystatin C), White (beta-trace protein) and MacDonald (creatinine and muscle mass by bioimpedance). 69 patients were included in the study. Median age (interquartile range) was 80 years (73-83); weight 74.7 kg (67.0-85.6), appendicular lean mass 19.1 kg (14.9-22.3), serum creatinine 126 μmol/l (100-149), cystatin C 1.45 mg/l (1.19-1.90), beta-trace protein 1.17 mg/l (0.99-1.53) and GFR measured by inulin 30.9 ml/min (22.0-43.3). The errors in the estimation of GFR and the areas under the ROC curves (95% confidence interval) relative to inulin were, respectively: Cockcroft 14.3 ml/min (5.55-23.2) and 0.68 (0.55-0.81), MDRD 16.3 ml/min (6.4-27.5) and 0.76 (0.64-0.87), Larsson 12.8 ml/min (4.50-25.3) and 0.82 (0.72-0.92), White 17.6 ml/min (11.5-31.5) and 0.75 (0.63-0.87), MacDonald 32.2 ml/min (13.9-45.4) and 0.65 (0.52-0.78). Currently used algorithms overestimate GFR in hospitalised patients with CKD. As a consequence, eGFR-targeted prescription of renally cleared drugs might expose patients to overdosing. The best results were obtained with the Larsson algorithm. The determination of muscle mass by bioimpedance did not provide a significant contribution.
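Of the algorithms compared, Cockcroft-Gault is the simplest to state; the sketch below implements the standard published formula, with patient values chosen near the cohort medians reported above (so the output is only loosely indicative, since medians of individual inputs do not define a median patient):

    # Cockcroft-Gault estimate of creatinine clearance (standard published
    # formula; the example values are illustrative, not a study patient).
    def cockcroft_gault(age_years, weight_kg, scr_umol_l, female):
        """Estimated creatinine clearance in ml/min."""
        scr_mg_dl = scr_umol_l / 88.4          # convert SI units to mg/dl
        crcl = (140 - age_years) * weight_kg / (72 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    print(f"{cockcroft_gault(80, 74.7, 126, female=True):.1f} ml/min")

The result (~37 ml/min) against the inulin median of 30.9 ml/min points in the direction of the overestimation the study reports.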
Abstract:
This paper examines why a financial entity's solvency capital might be underestimated if the total amount required is obtained directly from a risk measurement. Using Monte Carlo simulation, we show that, in some instances, a common risk measure such as Value-at-Risk is not subadditive when certain dependence structures are considered. Higher risk evaluations are obtained under independence between random variables than under comonotonicity. The paper stresses, therefore, the relationship between dependence structures and capital estimation.
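A minimal Monte Carlo sketch of the phenomenon described above, with illustrative parameters: for sufficiently heavy-tailed independent risks, the VaR of the sum exceeds the sum of the individual VaRs, which equals the VaR of the comonotonic sum, so subadditivity fails:

    # VaR is not subadditive for heavy-tailed risks: compare the VaR of an
    # independent sum with the comonotonic (additive) case. Tail index and
    # confidence level are illustrative choices, not the paper's calibration.
    import numpy as np

    rng = np.random.default_rng(7)
    n, alpha, level = 1_000_000, 0.9, 0.99   # Pareto tail index < 1: very heavy tail

    x = rng.pareto(alpha, n)                 # independent copies
    y = rng.pareto(alpha, n)

    var_x = np.quantile(x, level)
    var_y = np.quantile(y, level)
    var_indep = np.quantile(x + y, level)    # independent sum
    var_comon = var_x + var_y                # comonotonic case: VaR is additive

    print(f"VaR(X) + VaR(Y) (comonotonic): {var_comon:.1f}")
    print(f"VaR(X + Y), independent:       {var_indep:.1f}")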
Abstract:
BACKGROUND: Estimated glomerular filtration rate (eGFR) is an important diagnostic instrument in clinical practice. The National Kidney Foundation-Kidney Disease Outcomes Quality Initiative (NKF-KDOQI) guidelines do not recommend using formulas developed for adults to estimate GFR in children; however, studies confirming these recommendations are scarce. The aim of our study was to evaluate the accuracy of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula, the Modification of Diet in Renal Disease (MDRD) formula, and the Cockcroft-Gault formula in children at various stages of chronic kidney disease (CKD). METHODS: A total of 550 inulin clearance (iGFR) measurements for 391 children were analyzed. The cohort was divided into three groups: group 1, with iGFR >90 ml/min/1.73 m(2); group 2, with iGFR between 60 and 90 ml/min/1.73 m(2); group 3, with iGFR <60 ml/min/1.73 m(2). RESULTS: All formulas overestimate iGFR with a significant bias (p < 0.001), show poor accuracy, and have poor Spearman correlations. For an accuracy of 10%, only 11, 6, and 27% of the eGFRs are accurate when using the MDRD, CKD-EPI, and Cockcroft-Gault formulas, respectively. For an accuracy of 30%, these formulas do not reach the NKF-KDOQI guidelines for validation, with only 25, 20, and 70% of the eGFRs, respectively, being accurate. CONCLUSIONS: Based on our results, all of these formulas are unreliable for estimating GFR in children across all CKD stages and therefore cannot be applied in the pediatric population.
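The accuracy statistics used above are conventionally called P10 and P30: the share of estimates falling within 10% or 30% of the measured GFR. A small sketch with made-up values:

    # P10/P30 accuracy: fraction of eGFR values within +/- tol of measured
    # (inulin) GFR. The values below are invented, not study data.
    import numpy as np

    def p_accuracy(egfr, igfr, tol):
        egfr, igfr = np.asarray(egfr, float), np.asarray(igfr, float)
        return np.mean(np.abs(egfr - igfr) / igfr <= tol)

    igfr = np.array([95.0, 72.0, 55.0, 40.0, 28.0])
    egfr = np.array([120.0, 75.0, 70.0, 52.0, 30.0])  # typical overestimation
    print(f"P10 = {p_accuracy(egfr, igfr, 0.10):.0%}")
    print(f"P30 = {p_accuracy(egfr, igfr, 0.30):.0%}")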
Abstract:
Introduction: Osteoporosis (OP) is a systemic skeletal disease characterized by low bone mineral density (BMD) and micro-architectural (MA) deterioration. Clinical risk factors (CRF) are often used as an approximation of MA. MA can now be evaluated in daily practice with the Trabecular Bone Score (TBS). TBS is a novel grey-level texture measurement reflecting bone micro-architecture, based on experimental variograms of 2D projection images. TBS is very simple to obtain by reanalyzing a lumbar DXA scan. TBS has proven diagnostic and prognostic value, partially independent of CRF and BMD. The aim of the OsteoLaus cohort is to combine, in daily practice, the CRF and the information given by DXA (BMD, TBS and vertebral fracture assessment (VFA)) to better identify women at high fracture risk. Method: The OsteoLaus cohort (1400 women aged 50 to 80 years living in Lausanne, Switzerland) started in 2010. It is derived from the CoLaus cohort, which started in Lausanne in 2003 and whose main goal is to obtain information on the epidemiology and genetic determinants of cardiovascular risk in 6700 men and women. CRF for OP, bone ultrasound of the heel, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded in OsteoLaus. Preliminary results are reported. Results: We included 631 women: mean age 67.4±6.7 years, BMI 26.1±4.6, mean lumbar spine BMD 0.943±0.168 (T-score -1.4 SD), TBS 1.271±0.103. As expected, the correlation between BMD and site-matched TBS is low (r2=0.16). The prevalence of vertebral fractures grade 2/3, major OP fractures and all OP fractures is 8.4%, 17.0% and 26.0%, respectively. Age- and BMI-adjusted ORs (per SD decrease) are 1.8 (1.2-2.5), 1.6 (1.2-2.1) and 1.3 (1.1-1.6) for BMD for the different fracture categories, and 2.0 (1.4-3.0), 1.9 (1.4-2.5) and 1.4 (1.1-1.7) for TBS, respectively. Only 32 to 37% of women with OP fractures have a BMD < -2.5 SD or a TBS < 1.200. If we combine a BMD < -2.5 SD or a TBS < 1.200, 54 to 60% of women with an osteoporotic fracture are identified. Conclusion: As in the studies already published, these preliminary results confirm the partial independence between BMD and TBS. More importantly, combining TBS with BMD significantly increases the identification of women with prevalent OP fractures who would have been misclassified by BMD alone. For the first time we are able to obtain complementary information on fracture (VFA), density (BMD) and micro- and macro-architecture (TBS & HAS) from a single simple, cheap device with low ionizing radiation: DXA. Such complementary information is very useful for the patient in daily practice and, moreover, will likely have an impact on cost-effectiveness analyses.
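The actual TBS algorithm is proprietary; purely to illustrate the experimental-variogram idea it is based on, the sketch below computes the mean squared grey-level difference of a toy 2D image as a function of pixel lag:

    # Experimental variogram of a 2D image along one axis: the kind of
    # grey-level texture statistic underlying TBS (illustration only;
    # this is not the TBS algorithm).
    import numpy as np

    def variogram_2d(img, max_lag):
        """Mean squared grey-level difference as a function of pixel lag."""
        img = np.asarray(img, float)
        return [np.mean((img[:, h:] - img[:, :-h]) ** 2)
                for h in range(1, max_lag + 1)]

    rng = np.random.default_rng(3)
    image = rng.normal(size=(64, 64)).cumsum(axis=1)  # spatially correlated toy texture
    print([f"{g:.2f}" for g in variogram_2d(image, 5)])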
Abstract:
INTRODUCTION. The role of turbine-based NIV ventilators (TBV) versus ICU ventilators with the NIV mode activated (ICUV) for delivering NIV in severe respiratory failure remains debated. OBJECTIVES. To compare the response time and pressurization capacity of TBV and ICUV during simulated NIV with normal and increased respiratory demand, under normal and obstructive respiratory mechanics. METHODS. In a two-chamber lung model, a ventilator simulated normal (P0.1 = 2 mbar, respiratory rate RR = 15/min) or increased (P0.1 = 6 mbar, RR = 25/min) respiratory demand. NIV was simulated by connecting the lung model (compliance 100 ml/mbar; resistance 5 or 20 mbar/l/s) to a dummy head equipped with a naso-buccal mask. Connections allowed intentional leaks (29 ± 5% of the insufflated volume). The ventilators tested, the Servo-i (Maquet), V60 and Vision (Philips Respironics), were connected to the mask via a standard circuit. The applied pressure support levels (PSL) were 7 mbar for normal and 14 mbar for increased demand. Airway pressure and flow were measured in the ventilator circuit and in the simulated airway. Ventilator performance was assessed by determining the trigger delay (Td, ms), the pressure-time product at 300 ms (PTP300, mbar·s) and the inspiratory tidal volume (VT, ml), and compared by three-way ANOVA for the effects of inspiratory effort, resistance and ventilator. Differences between ventilators for each condition were tested by one-way ANOVA and contrasts (JMP 8.0.1, p < 0.05). RESULTS. Inspiratory demand and resistance had a significant effect throughout all comparisons. Ventilator data are shown in Tables 1 (normal demand) and 2 (increased demand): (a) different from Servo-i, (b) different from V60. CONCLUSION. In this NIV bench study with leaks, the trigger delay was shorter for TBV with normal respiratory demand. By contrast, it was shorter for ICUV when respiratory demand was high. ICUV afforded better pressurization (PTP300) with increased demand and PSL, particularly with increased resistance. TBV provided a higher inspiratory VT (i.e., downstream of the leaks) with normal demand, and a significantly (although minimally) lower VT with increased demand and PSL.
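As a hedged sketch of the two bench metrics named above, using a synthetic airway-pressure waveform rather than study data: the trigger delay is taken here as the time from (simulated) effort onset until airway pressure rises above baseline, and PTP300 as the area of the pressure curve above baseline over the first 300 ms:

    # Trigger delay and PTP300 from a synthetic airway-pressure trace
    # (illustrative definitions and waveform, not the study's signals).
    import numpy as np

    dt = 0.001                                  # 1 kHz sampling, seconds
    t = np.arange(0.0, 1.0, dt)
    peep = 5.0                                  # baseline pressure, mbar
    paw = peep + np.where(t > 0.08,
                          14 * (1 - np.exp(-(t - 0.08) / 0.05)), 0.0)

    effort_onset = 0.0                          # simulated effort starts at t = 0
    triggered = np.argmax(paw > peep + 0.5)     # first sample above baseline + 0.5 mbar
    td_ms = (t[triggered] - effort_onset) * 1000

    win = t <= effort_onset + 0.3               # first 300 ms after effort onset
    ptp300 = float(np.clip(paw[win] - peep, 0, None).sum() * dt)

    print(f"trigger delay ~ {td_ms:.0f} ms, PTP300 ~ {ptp300:.2f} mbar*s")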
Abstract:
This document introduces the basic concepts needed to apply software productivity metrics. After the introduction, it examines in detail the productivity metrics most widely used today: lines of code (a metric oriented to project size), function points (oriented to project functionality and specific to business-management projects), feature points (similar to function points, but more generic and useful for other types of projects) and use-case points (also function-oriented and specific to object-oriented projects). It explains how, starting from these metrics and with the help of productivity estimation models such as COCOMO II, one can obtain estimates of the effort needed to develop a software project, and how to distribute that effort across all project stages from the estimates for the development phase. It also covers, though in less depth, the metric
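As a sketch of the COCOMO II-style estimation described above: effort in person-months is A x KLOC^E x (product of effort multipliers), with E = B + 0.01 x (sum of scale factors). A = 2.94 and B = 0.91 are the usual COCOMO II.2000 calibration; the ratings below are illustrative placeholders, not a real project's drivers:

    # COCOMO II-style effort estimate (illustrative ratings; A and B taken
    # as the usual COCOMO II.2000 calibration).
    def cocomo2_effort(kloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
        """Person-months for a project of `kloc` thousand lines of code."""
        e = b + 0.01 * sum(scale_factors)
        em = 1.0
        for m in effort_multipliers:
            em *= m
        return a * kloc**e * em

    # Hypothetical project: 50 KLOC, mid-range scale factors, near-nominal drivers.
    effort_pm = cocomo2_effort(50,
                               scale_factors=[3.7, 3.0, 4.2, 3.3, 4.1],
                               effort_multipliers=[1.1, 0.9, 1.0])
    print(f"estimated effort ~ {effort_pm:.0f} person-months")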