24 results for Method of moments algorithm


Relevance: 100.00%

Abstract:

Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled and the variability in claims severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which, firstly, identifies and quantifies these two influences and, secondly, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). The second model expresses the variability of the reporting speed and of the development of the claims severity as a function of two of the above-mentioned distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested first on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data exhibit different development patterns and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits a positive correlation between past and future claims payments, which suggests the Chain-Ladder method is appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and therefore high claims reserve estimates. The negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few claims are expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments of either zero or the aggregate amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
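
The comparison described in the abstract lends itself to a small illustration. The following Python snippet is only a toy reconstruction, not the thesis' estimator: it simulates ultimate amounts and development patterns, estimates the Gamma shape parameter and the Dirichlet concentration by the method of moments, and applies the decision rule quoted above. Reading "the Dirichlet parameter" as the total concentration, together with all names and the simulation setup, is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical simulated data ---------------------------------------------
# Ultimate claim amounts per accident year (Gamma with shape a and scale s).
n_years = 200
true_shape, true_scale = 8.0, 1000.0
ultimates = rng.gamma(true_shape, true_scale, size=n_years)

# Development patterns per accident year (Dirichlet, total concentration 8.25).
true_alpha = np.array([3.0, 2.0, 1.5, 1.0, 0.5, 0.25])
patterns = rng.dirichlet(true_alpha, size=n_years)

# --- Method-of-moments estimates ----------------------------------------------
# Gamma: mean = a*s and var = a*s^2, hence a_hat = mean^2 / var.
m, v = ultimates.mean(), ultimates.var(ddof=1)
gamma_shape_hat = m**2 / v

# Dirichlet: Var(p_j) = m_j (1 - m_j) / (alpha0 + 1), hence
# alpha0_hat = average over j of [ m_j (1 - m_j) / Var(p_j) ] - 1.
p_mean = patterns.mean(axis=0)
p_var = patterns.var(axis=0, ddof=1)
alpha0_hat = np.mean(p_mean * (1.0 - p_mean) / p_var) - 1.0

print(f"Gamma shape (MoM):       {gamma_shape_hat:.2f}")
print(f"Dirichlet concentration: {alpha0_hat:.2f}")

# Decision rule described in the abstract: a Dirichlet parameter larger than the
# Gamma shape corresponds to positive correlation between past and future
# payments and points towards a Chain-Ladder type reserve estimate.
if alpha0_hat > gamma_shape_hat:
    print("Positive-correlation regime: Chain-Ladder appears appropriate.")
else:
    print("Negative-correlation regime: Chain-Ladder not recommended.")
```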

Relevance: 100.00%

Abstract:

The physical disector is a method of choice for estimating unbiased neuron numbers; nevertheless, calibration is needed to evaluate each counting method. The validity of this method can be assessed by comparing the estimated cell number with the true number determined by a direct counting method in serial sections. We reconstructed one fifth of rat lumbar dorsal root ganglia taken from two experimental conditions. From each ganglion, images of 200 adjacent semi-thin sections were used to reconstruct a volumetric dataset (stack of voxels). On these stacks, the number of sensory neurons was estimated by the physical disector method and counted by the direct counting method. In addition, using the nucleus coordinates from the direct counting, we simulated, with a Matlab program, disector pairs separated by increasing distances in a ganglion model. The comparison between the results of these approaches clearly demonstrates that the physical disector method provides a valid and reliable estimate of the number of sensory neurons only when the distance between consecutive disector pairs is 60 µm or smaller. Under these conditions, the error between the physical disector estimate and the direct count does not exceed 6%. In contrast, when the distance between two pairs is larger than 60 µm (70-200 µm), the error increases rapidly, up to 27%. We conclude that the physical disector method provides a reliable estimate of the number of rat sensory neurons only when the separating distance between consecutive disector pairs is no larger than 60 µm.
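
The Matlab simulation itself is not reproduced in the abstract, but the idea can be sketched in a few lines of Python: place disector pairs at increasing separations along a model ganglion and rescale the raw count by the sampled fraction. The geometry, section thickness, and uniformly random neuron coordinates below are all hypothetical, so the error here is pure sampling noise rather than the spatially structured error observed in real ganglia.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Hypothetical ganglion model ----------------------------------------------
ganglion_length = 1000.0        # axial extent in micrometres
section_thickness = 2.5         # thickness of one semi-thin section (µm)
true_n = 5000                   # true number of neurons
# One "leading point" (e.g. nucleus top) per neuron, uniform along the axis.
tops = rng.uniform(0.0, ganglion_length, size=true_n)

def disector_estimate(tops, spacing):
    """Estimate neuron number with disector pairs separated by `spacing` µm.

    A neuron is counted in a pair if its leading point lies inside the
    reference section but not in the look-up section, i.e. within one
    section thickness. The raw count is then scaled by the inverse of the
    sampled fraction of the ganglion.
    """
    pair_positions = np.arange(0.0, ganglion_length - section_thickness, spacing)
    counted = 0
    for z in pair_positions:
        counted += np.sum((tops >= z) & (tops < z + section_thickness))
    sampled_fraction = len(pair_positions) * section_thickness / ganglion_length
    return counted / sampled_fraction

for spacing in (20, 60, 100, 200):
    est = disector_estimate(tops, spacing)
    err = 100.0 * abs(est - true_n) / true_n
    print(f"spacing {spacing:4d} µm: estimate {est:8.0f}  (error {err:4.1f}%)")
```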

Relevance: 100.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow us to obtain remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
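
A stripped-down illustration of the central ingredient, describing the collocated relation between electrical and hydraulic conductivity with a non-parametric kernel density and drawing hydraulic-conductivity values conditional on the electrical conductivity, might look as follows in Python. The bivariate training data and grids are placeholders, and the full sequential-simulation loop, which also conditions on previously simulated neighbours, is omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# --- Hypothetical collocated training data (log10 units) ----------------------
# log10 electrical conductivity (sigma) and log10 hydraulic conductivity (K)
# with some correlation, standing in for downhole measurements.
n = 500
log_sigma = rng.normal(-2.0, 0.3, n)
log_k = 0.8 * (log_sigma + 2.0) + rng.normal(-5.0, 0.2, n)

# Non-parametric bivariate kernel density of the (sigma, K) relationship.
kde = gaussian_kde(np.vstack([log_sigma, log_k]))

def sample_k_given_sigma(sigma_obs, n_samples=1, grid=np.linspace(-7.0, -3.0, 400)):
    """Draw log10(K) values from the KDE conditioned on an observed log10(sigma).

    The conditional density p(K | sigma) is obtained by evaluating the joint
    KDE along a K-grid at the fixed sigma value and renormalising.
    """
    pts = np.vstack([np.full_like(grid, sigma_obs), grid])
    pdf = kde(pts)
    pdf /= pdf.sum()
    return rng.choice(grid, size=n_samples, p=pdf)

# Example: simulate K at a location where only the (regional-scale) electrical
# conductivity is known.
print(sample_k_given_sigma(-1.8, n_samples=5))
```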

Relevance: 100.00%

Abstract:

Although the molecular typing of Pseudomonas aeruginosa is important for understanding the local epidemiology of this opportunistic pathogen, it remains challenging. Our aim was to develop a simple typing method based on the sequencing of two highly variable loci. Single-strand sequencing of three highly variable loci (ms172, ms217, and oprD) was performed on a collection of 282 isolates recovered between 1994 and 2007 (from patients and the environment). As expected, the resolution of each locus alone [number of types (NT) = 35-64; index of discrimination (ID) = 0.816-0.964] was lower than that of combinations of two loci (NT = 78-97; ID = 0.966-0.971). As each pairwise combination of loci gave similar results, we selected the most robust combination, ms172 [reverse; R] and ms217 [R], to constitute the double-locus sequence typing (DLST) scheme for P. aeruginosa. This combination gave: (i) a complete genotype for 276/282 isolates (typability of 98%), (ii) 86 different types, and (iii) an ID of 0.968. Analysis of multiple isolates from the same patients or taps showed that DLST genotypes are generally stable over a period of several months. The high typability, discriminatory power, and ease of use of the proposed DLST scheme make it a method of choice for local epidemiological analyses of P. aeruginosa. Moreover, the possibility of defining types unambiguously allowed the development of an Internet database (http://www.dlst.org) accessible to all.
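
For readers unfamiliar with the two performance figures quoted above, the following Python sketch shows how typeability and an index of discrimination (the Hunter-Gaston formulation of Simpson's index, commonly used to assess typing schemes) can be computed from a list of DLST genotypes; the isolate list is invented for illustration.

```python
from collections import Counter

def typeability(genotypes):
    """Fraction of isolates with a complete DLST genotype (both loci typed)."""
    typed = [g for g in genotypes if None not in g]
    return len(typed) / len(genotypes), typed

def discrimination_index(genotypes):
    """Hunter-Gaston index of discrimination:
    D = 1 - [1 / (N(N-1))] * sum_j n_j (n_j - 1),
    the probability that two randomly drawn isolates have different types."""
    counts = Counter(genotypes).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Invented example: each isolate is a (ms172 allele, ms217 allele) pair;
# None marks a locus that could not be amplified or sequenced.
isolates = [(1, 4), (1, 4), (2, 7), (3, 7), (3, 7), (5, 1), (None, 2), (6, 9)]

t, typed = typeability(isolates)
print(f"typeability: {t:.2%}")
print(f"index of discrimination: {discrimination_index(typed):.3f}")
```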

Relevance: 100.00%

Abstract:

Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags, where the available geophysical data cannot provide enough information. Thus, we allow the larger-scale subsurface features resolved by the geophysical data to exert much greater control over the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency compared with more traditional methods. Here, we present the results of applying our algorithm to the integration of porosity-log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set and then applied to data collected at the Boise Hydrogeophysical Research Site.
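
The perturbation and objective described above can be condensed into a toy one-dimensional Python sketch: each perturbation replaces a randomly chosen cell with a draw from a distribution conditioned on a collocated "geophysical" value, and the objective penalizes the mismatch between the realization's experimental covariance and a target model only at a few small lags. The conditional relation, covariance model, and cooling schedule below are placeholders rather than the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200                                     # cells in the 1-D simulation grid
geo = np.sin(np.linspace(0, 4 * np.pi, n))  # stand-in "geophysical" attribute
max_lag = 5                                 # covariance constrained at small lags only
target_cov = np.exp(-np.arange(1, max_lag + 1) / 3.0)   # placeholder target model

def experimental_cov(x, max_lag):
    """Experimental covariance of x for lags 1..max_lag."""
    xc = x - x.mean()
    return np.array([np.mean(xc[:-h] * xc[h:]) for h in range(1, max_lag + 1)])

def objective(x):
    """Mismatch between experimental and target covariance at small lags."""
    return np.sum((experimental_cov(x, max_lag) - target_cov) ** 2)

def conditional_draw(g):
    """Draw the target parameter given the collocated geophysical value
    (placeholder: a simple linear relation with Gaussian scatter)."""
    return 0.8 * g + rng.normal(0.0, 0.4)

# Initial realization drawn cell by cell from the conditional distribution.
x = np.array([conditional_draw(g) for g in geo])
energy, temperature = objective(x), 1.0

for it in range(20000):
    i = rng.integers(n)                       # pick a cell to perturb
    old = x[i]
    x[i] = conditional_draw(geo[i])           # perturb by a new conditional draw
    new_energy = objective(x)
    # Metropolis acceptance; rejection restores the previous value.
    if new_energy < energy or rng.random() < np.exp((energy - new_energy) / temperature):
        energy = new_energy
    else:
        x[i] = old
    temperature *= 0.9997                     # simple geometric cooling schedule

print(f"final covariance mismatch: {energy:.4f}")
```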

Relevance: 100.00%

Abstract:

We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface-wave-type phenomena. The differential equations of motion are based on Biot's theory of poro-elasticity and solved with a pseudospectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Due to the Chebyshev operator, the grid spacing in the vertical direction is non-uniform and characterized by a denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher-order surface wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, where the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield decomposition method accounts for the discontinuity of variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions based on zero viscosity obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
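
As a small illustration of the two spatial operators mentioned above (and nothing more of the poroelastic solver), the following Python sketch differentiates a 2D field with an FFT along the periodic horizontal direction and with the standard Chebyshev differentiation matrix on Gauss-Lobatto points along the vertical direction; the grid sizes and test field are arbitrary.

```python
import numpy as np

def fourier_derivative(f, length):
    """d/dx along axis 0 (the periodic direction) via FFT."""
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # wavenumbers
    return np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(f, axis=0), axis=0))

def chebyshev_matrix(n):
    """(n+1)x(n+1) Chebyshev differentiation matrix on Gauss-Lobatto points
    x_j = cos(pi j / n), j = 0..n (Trefethen's standard construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Test field f(x, z) = sin(2*pi*x) * z**2 on a periodic x-axis of unit length
# and a Chebyshev z-axis with z in [-1, 1].
nx, nz = 64, 32
x = np.linspace(0.0, 1.0, nx, endpoint=False)
D, z = chebyshev_matrix(nz)
f = np.sin(2 * np.pi * x)[:, None] * z[None, :] ** 2

df_dx = fourier_derivative(f, length=1.0)         # horizontal derivative
df_dz = f @ D.T                                   # vertical derivative

# Analytical checks of both operators (maximum absolute errors).
print(np.max(np.abs(df_dx - 2 * np.pi * np.cos(2 * np.pi * x)[:, None] * z ** 2)))
print(np.max(np.abs(df_dz - np.sin(2 * np.pi * x)[:, None] * (2 * z)[None, :])))
```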

Relevance: 100.00%

Abstract:

OBJECTIVE: The aim of the current study was to investigate the biomechanical stability and fixation strength provided by a posterior approach reconstruction technique to realign the craniovertebral junction. METHODS: We tested seven human cadaver occipito-cervical spines (occiput-C4) by applying pure moments of ±1.5 Nm on a spine tester. Each specimen was tested in the following modes: 1) intact; 2) injured; 3) spacers alone at the C1-C2 articulation (S); 4) spacers plus C1-C2 posterior instrumentation (S+PI); and 5) spacers plus C1-C2 posterior instrumentation plus midline wiring (S+PI+MLW). C1-C2 range of motion for each construct was obtained in flexion-extension, lateral bending, and axial rotation. RESULTS: In all loading modes, the S, S+PI, and S+PI+MLW constructs significantly reduced range of motion compared with the intact and injured conditions (P < 0.05). There was no statistical difference between any of the three instrumentation constructs (P > 0.05). CONCLUSIONS: This study investigated the biomechanics of the posterior approach technique for realignment of the craniovertebral junction and compared it with additional posterior fixations. The stand-alone spacers were stable in all three loading modes. Posterior instrumentation increased the stability compared with stand-alone spacers, and a third point of fixation, achieved with midline wiring, increased the stability further; however, the difference in stability with versus without the midline wiring was small. The present study highlights the biomechanics of this novel concept and reaffirms the view that distraction of the C1-C2 articular facets and direct atlantoaxial articular joint fixation would be an ideal method of management of basilar invagination.

Relevance: 100.00%

Abstract:

BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time at which single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach to study the time-course of single decisions during a task in which subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that, based on this new method, we can accurately decode decisions for the majority of the subjects. The typical time period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500 ms for 'easy' vs. ∼700 ms for 'hard' decisions, well before subjects' responses (∼340 ms earlier). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machine classification, and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making by accumulating patterns of topographic EEG activity at the single-trial level.
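
A heavily simplified Python sketch of the decoding idea, assigning each single-trial EEG sample to its closest prototypical topography and accumulating log-probability evidence for one of two choices until a threshold is crossed, is given below. The prototypes, their class-conditional probabilities, and the synthetic trial are all invented; this is not the published classifier.

```python
import numpy as np

rng = np.random.default_rng(4)

n_channels, n_prototypes, n_samples = 64, 4, 350   # e.g. 350 samples = 700 ms at 500 Hz

# Hypothetical prototypical voltage topographies (template maps) and the
# probability of observing each prototype under "accept" vs "reject" decisions.
prototypes = rng.normal(size=(n_prototypes, n_channels))
p_accept = np.array([0.40, 0.30, 0.20, 0.10])
p_reject = np.array([0.10, 0.20, 0.30, 0.40])

def decode_trial(eeg, threshold=5.0):
    """Accumulate log-likelihood-ratio evidence over time from the sequence of
    best-matching prototypes; return the decision and the sample index at which
    the threshold was crossed (None if it never was)."""
    evidence = 0.0
    for t in range(eeg.shape[1]):
        # Label the instantaneous topography with its closest prototype
        # (spatial correlation would be the more usual EEG similarity measure).
        k = np.argmin(np.linalg.norm(prototypes - eeg[:, t], axis=1))
        evidence += np.log(p_accept[k]) - np.log(p_reject[k])
        if abs(evidence) >= threshold:
            return ("accept" if evidence > 0 else "reject"), t
    return ("accept" if evidence > 0 else "reject"), None

# Synthetic "accept" trial: topographies drawn mostly from accept-typical maps.
labels = rng.choice(n_prototypes, size=n_samples, p=p_accept)
trial = prototypes[labels].T + rng.normal(scale=0.5, size=(n_channels, n_samples))

decision, when = decode_trial(trial)
print(decision, "decoded at sample", when)
```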

Relevance: 100.00%

Abstract:

Building on the instrumental model of group conflict (IMGC), the present experiment investigates support for discriminatory and meritocratic selection methods at university in a sample of local and immigrant students. Results showed that local students supported a selection method that favors them over immigrants to a larger extent than a method consisting of selecting the best applicants regardless of their origin. Supporting the assumption of the IMGC, this effect was stronger for locals who perceived immigrants as competing for resources. Immigrant students supported the meritocratic selection method more strongly than the one that discriminated against them. However, contrasting with the assumption of the IMGC, this effect was only present in students who perceived immigrants as weakly competing for locals' resources. These results demonstrate that selection methods used at university can be perceived differently depending on students' origin. Further, they suggest that the mechanisms underlying the perception of discriminatory and meritocratic selection methods differ between local and immigrant students. Hence, the present experiment makes a theoretical contribution to the IMGC by delimiting its assumptions to the ingroup facing a competitive situation with a relevant outgroup. Practical implications for university recruitment policies are discussed.