990 results for POISSON-DISTRIBUTION


Relevance:

60.00%

Publisher:

Abstract:

The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee, to some extent, the success of an experiment. The theoretical basis of all processes is essentially the following. Knowing the frequency p of the desired events and q of the undesired events, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p+q)^n. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

n!\,p^{n}\left[\frac{1}{n!}\left(\frac{q}{p}\right)^{n} + \frac{1}{1!\,(n-1)!}\left(\frac{q}{p}\right)^{n-1} + \frac{1}{2!\,(n-2)!}\left(\frac{q}{p}\right)^{n-2} + \frac{1}{3!\,(n-3)!}\left(\frac{q}{p}\right)^{n-3} + \dots\right] \le P_{lim} \qquad (1b)

There is no absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1, 56) two relative values, one equal to 1/5n as the lowest limit of probability and the other equal to 1/10n as the highest limit of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since this number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits of P_lim equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1, p. 85), however, is of rather limited applicability owing to the excessive calculation required, and we thus have to use approximations as substitutes. We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least greater than ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the following problems, stated in the introduction, can now be given:

A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may be accidentally always the best, or the best and second best, or the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n:

n = \frac{\log P_{lim}}{\log(m/a)} = \frac{\log P_{lim}}{\log m - \log a} \qquad (5)

B) What is the minimum number of individuals necessary in order that a certain type, expected with frequency p, may appear in at least one, two, three or a = m + 1 individuals? 1) For p between 0.1 and 0.9, and using the Gaussian approximation, we have:

np - \delta\sqrt{n\,p(1-p)} = a - 1 = m, \qquad b = \delta\sqrt{\frac{1-p}{p}}, \qquad c = \frac{m}{p} \qquad (7)

n = \left[\frac{b + \sqrt{b^{2} + 4c}}{2}\right]^{2}, \qquad n' = \frac{1}{p}, \qquad n_{cor} = n + n' \qquad (8)

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter delta represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09, respectively.
If we are only interested in having at least one individual, m becomes equal to zero and the formula reduces to (for a = 1, c = m/p = 0):

n = \left[\frac{b + \sqrt{b^{2}}}{2}\right]^{2} = b^{2} = \delta^{2}\,\frac{1-p}{p}, \qquad n' = \frac{1}{p}, \qquad n_{cor} = n + n' \qquad (9)

2) If p is smaller than 0.1 we may use Table 1 in order to find the mean m of a Poisson distribution and determine n = m/p.

C) What is the minimum number of individuals necessary for distinguishing two frequencies p_1 and p_2? 1) When p_1 and p_2 are values between 0.1 and 0.9 we have:

n = \left[\frac{\delta\left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)}{p_1 - p_2}\right]^{2}, \qquad n' = \frac{1}{p_1 - p_2}, \qquad n_{cor} = n + n' \qquad (13)

We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p_1 or p_2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:

b = \frac{\delta\sqrt{p_1(1-p_1) + p_2(1-p_2)}}{p_1 - p_2}, \qquad c = \frac{m}{p_1 - p_2}, \qquad n = \left[\frac{b + \sqrt{b^{2} + 4c}}{2}\right]^{2}, \qquad n' = \frac{1}{p_1 - p_2} \qquad (14)

2) When both p_1 and p_2 are smaller than 0.1 we determine the quotient p_1/p_2 and look up the corresponding number m_2 of a Poisson distribution in Table 2. The value of n is found from the equation:

n = \frac{m_2}{p_2} \qquad (15)

D) What is the minimum number necessary for distinguishing three or more frequencies p_1, p_2, p_3? 1) If the frequencies p_1, p_2, p_3 are values between 0.1 and 0.9 we have to solve the individual pairwise equations and use the highest value of n thus determined:

n_{i.j} = \left[\frac{\delta\left(\sqrt{p_i(1-p_i)} + \sqrt{p_j(1-p_j)}\right)}{p_i - p_j}\right]^{2} \qquad (16)

Delta now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58 and 3.29. 2) No table was prepared for the relatively rare case of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of an informative nature: a) if a special type appears among n individuals with an observed frequency p(obs), what may be the corresponding ideal value of the expected frequency p(esp); or b) if we study samples of n individuals and expect a certain type with a frequency p(esp), what may be the extreme limits of p(obs) in individual families? 1) If we are dealing with values between 0.1 and 0.9 we may use Table 3. To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(esp) by interpolating between columns. In order to solve the second problem we start with the respective column for p(esp) and find the horizontal line for the given value of n either directly or by approximation and interpolation. 2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions p(esp) and p(obs) into numbers of the Poisson series by multiplying by n. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(esp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(esp) into a value of m and find, in the horizontal line corresponding to m(esp), the extreme values of m, which must then be transformed, by dividing by n, into values of p(obs).
F) Partial and progressive tests may be recommended in all cases where there is a lack of material or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavourable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition, we know that a frequency of p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favourable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above, and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetical experiments with maize.
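As a rough numerical illustration of problem B, the sketch below (Python; function names and the worked numbers are ours, and it assumes the reconstruction of formulas (7), (8) and (1b) above is correct) computes the approximate minimum n needed to obtain at least a = m + 1 individuals of a type with frequency p, and checks the result against the exact binomial tail probability.

```python
from math import sqrt, comb

def min_n_gaussian(p, m, delta):
    """Approximate minimum n so that at least m+1 individuals of frequency p
    are obtained, using the Gaussian approximation of formulas (7)-(8)."""
    b = delta * sqrt((1 - p) / p)        # b = delta * sqrt((1-p)/p)
    c = m / p                            # c = m/p
    n = ((b + sqrt(b * b + 4 * c)) / 2) ** 2
    n_cor = n + 1 / p                    # correction n' = 1/p, used for 0.25 < p < 0.75
    return n, n_cor

def binomial_tail(n, p, m):
    """Exact probability of obtaining m or fewer individuals in n trials,
    i.e. the quantity bounded by P_lim in formula (1b)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))

if __name__ == "__main__":
    p, m, delta = 0.25, 2, 1.64          # want at least a = m+1 = 3 individuals, 5% precision
    n, n_cor = min_n_gaussian(p, m, delta)
    n_use = round(n_cor)
    print(f"approximate minimum n: {n:.1f} (corrected: {n_cor:.1f})")
    print(f"exact P(<= {m} successes | n={n_use}) = {binomial_tail(n_use, p, m):.4f}")
```

With these illustrative inputs the corrected minimum is about 25 repetitions, and the exact binomial check confirms that the probability of ending up with fewer than three individuals stays below the 5% limit.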

Relevance:

60.00%

Publisher:

Abstract:

Aim This study compares the direct, macroecological approach (MEM) for modelling species richness (SR) with the more recent approach of stacking predictions from individual species distributions (S-SDM). We implemented both approaches on the same dataset and discuss their respective theoretical assumptions, strengths and drawbacks. We also tested how both approaches performed in reproducing observed patterns of SR along an elevational gradient. Location Two study areas in the Alps of Switzerland. Methods We implemented MEM by relating the species counts to environmental predictors with statistical models, assuming a Poisson distribution. S-SDM was implemented by modelling each species distribution individually and then stacking the obtained prediction maps in three different ways - summing binary predictions, summing random draws of binomial trials and summing predicted probabilities - to obtain a final species count. Results The direct MEM approach yields nearly unbiased predictions centred around the observed mean values, but with a lower correlation between predictions and observations than that achieved by the S-SDM approaches. This method also cannot provide any information on species identity and, thus, community composition. It does, however, accurately reproduce the hump-shaped pattern of SR observed along the elevational gradient. The S-SDM approach summing binary maps can predict individual species and thus communities, but tends to overpredict SR. The two other S-SDM approaches - the summed binomial trials based on predicted probabilities and the summed predicted probabilities - do not overpredict richness, but they predict many competing end points of assembly or they lose the individual species predictions, respectively. Furthermore, all S-SDM approaches fail to appropriately reproduce the observed hump-shaped patterns of SR along the elevational gradient. Main conclusions The macroecological approach and S-SDM have complementary strengths. We suggest that both could be used in combination to obtain better SR predictions by following the suggestion of constraining S-SDM by MEM predictions.
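A minimal sketch of the three S-SDM stacking rules described in the Methods (Python; the probability matrix and the binarisation threshold are made up for illustration and are not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-species occurrence probabilities predicted by SDMs: sites x species
probs = rng.uniform(0.0, 1.0, size=(100, 30))
threshold = 0.5  # illustrative binarisation threshold

# 1) Sum of binary predictions (keeps species identity, tends to overpredict richness)
sr_binary = (probs >= threshold).sum(axis=1)

# 2) Sum of random draws of binomial (Bernoulli) trials, one draw per species and site
sr_draws = rng.binomial(1, probs).sum(axis=1)

# 3) Sum of predicted probabilities (expected richness, loses species identity)
sr_probs = probs.sum(axis=1)

print(sr_binary[:5], sr_draws[:5], sr_probs[:5].round(2))
```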

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND The effect of the macronutrient composition of the usual diet on long term weight maintenance remains controversial. METHODS 373,803 subjects aged 25-70 years were recruited in 10 European countries (1992-2000) in the PANACEA project of the EPIC cohort. Diet was assessed at baseline using country-specific validated questionnaires and weight and height were measured at baseline and self-reported at follow-up in most centers. The association between weight change after 5 years of follow-up and the iso-energetic replacement of 5% of energy from one macronutrient by 5% of energy from another macronutrient was assessed using multivariate linear mixed-models. The risk of becoming overweight or obese after 5 years was investigated using multivariate Poisson regressions stratified according to initial Body Mass Index. RESULTS A higher proportion of energy from fat at the expense of carbohydrates was not significantly associated with weight change after 5 years. However, a higher proportion of energy from protein at the expense of fat was positively associated with weight gain. A higher proportion of energy from protein at the expense of carbohydrates was also positively associated with weight gain, especially when carbohydrates were rich in fibre. The association between percentage of energy from protein and weight change was slightly stronger in overweight participants, former smokers, participants ≥60 years old, participants underreporting their energy intake and participants with a prudent dietary pattern. Compared to diets with no more than 14% of energy from protein, diets with more than 22% of energy from protein were associated with a 23-24% higher risk of becoming overweight or obese in normal weight and overweight subjects at baseline. CONCLUSION Our results show that participants consuming an amount of protein above the protein intake recommended by the American Diabetes Association may experience a higher risk of becoming overweight or obese during adult life.

Relevance:

60.00%

Publisher:

Abstract:

The aim of this work is to present the multicentric AMCAC project, whose objective is to describe the geographical distribution of mortality from all causes in the census groups of the provincial capitals of Andalusia and Catalonia during 1992-2002 and 1994-2000 respectively, and to study the relationship between the sociodemographic characteristics of the census groups and mortality. This is an ecological study in which the analytical unit is the census group. The data correspond to 298,731 individuals (152,913 men and 145,818 women) who died during the study periods in the towns of Almeria, Barcelona, Cadiz, Cordoba, Girona, Granada, Huelva, Jaen, Lleida, Malaga, Seville and Tarragona. The dependent variable is the number of deaths observed per census group. The independent variables are the percentages of unemployment, illiteracy and manual workers. The smoothed relative risks and the associations between the sociodemographic characteristics of the census groups and mortality will be estimated for each town and each sex using the Besag-York-Mollié model. Dissemination of the results will help to improve and broaden knowledge about the population's health, and will provide an important starting point for establishing the influence of contextual variables on the health of urban populations.
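For readers unfamiliar with the Besag-York-Mollié specification, a schematic of the kind of model described (our notation, not the project protocol; covariates as listed above):

O_i \sim \mathrm{Poisson}(E_i\,\theta_i), \qquad \log\theta_i = \alpha + \beta_1\,\text{unemployment}_i + \beta_2\,\text{illiteracy}_i + \beta_3\,\text{manual}_i + u_i + v_i,

where O_i and E_i are the observed and expected deaths in census group i, u_i is a spatially structured (conditional autoregressive) random effect shared with neighbouring census groups, and v_i is unstructured heterogeneity; \exp(u_i + v_i) acts as the smoothed residual relative-risk component.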

Relevance:

60.00%

Publisher:

Abstract:

We include solvation effects in tight-binding Hamiltonians for hole states in DNA. The corresponding linear-response parameters are derived from accurate estimates of solvation energy calculated for several hole charge distributions in DNA stacks. Two models are considered: (A) the correction to a diagonal Hamiltonian matrix element depends only on the charge localized on the corresponding site, and (B) in addition to this term, the reaction field due to adjacent base pairs is accounted for. We show that both schemes give very similar results. The effects of the polar medium on the hole distribution in DNA are studied. We conclude that the effects of polar surroundings essentially suppress charge delocalization in DNA, and hole states in (GC)n sequences are localized on individual guanines.

Relevance:

60.00%

Publisher:

Abstract:

In order to investigate the determinants of effective population size in the socially monogamous Crocidura russula, the reproductive output of 44 individuals was estimated through genetic assignment methods. The individual variance in breeding success turned out to be surprisingly high, mostly because the males were markedly less monogamous than expected from previous behavioural data. Males paired simultaneously with up to four females and polygynous males had significantly more offspring than monogamous ones. The variance in female reproductive success also exceeded that of a Poisson distribution (though to a lesser extent), partly because females paired with multiply mated males weaned significantly more offspring. Polyandry also occurred occasionally, but only sequentially (i.e. without multiple paternity of litters). Estimates of the effective to census size ratio were ca. 0.60, which excluded the mating system as a potential explanation for the high genetic variance found in this shrew's populations. Our data suggest that gene flow from the neighbourhood (up to one-third of the total recruitment) is the most likely cause of the high levels of genetic diversity observed in this shrew's subpopulations.
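A quick way to see what "variance exceeding that of a Poisson distribution" means in practice is an index-of-dispersion check on the assigned offspring counts; the sketch below uses made-up numbers rather than the study's data (Python):

```python
import numpy as np
from scipy import stats

# Hypothetical offspring counts assigned to parents by genetic parentage analysis
offspring = np.array([0, 0, 1, 2, 2, 3, 3, 4, 5, 7, 8, 10])

mean, var = offspring.mean(), offspring.var(ddof=1)
print(f"mean={mean:.2f} variance={var:.2f} variance/mean={var / mean:.2f}")  # ~1 under Poisson

# Index-of-dispersion test: sum (x - mean)^2 / mean is approximately chi2 with n-1 df
chi2 = ((offspring - mean) ** 2).sum() / mean
pval = stats.chi2.sf(chi2, df=len(offspring) - 1)
print(f"index-of-dispersion chi2={chi2:.1f}, p={pval:.4f}")
```

A variance/mean ratio well above 1, as reported here for male (and, to a lesser extent, female) reproductive success, is what lowers the effective-to-census population size ratio.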

Relevance:

60.00%

Publisher:

Abstract:

Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
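A minimal fixed-step Poisson τ-leap sketch for a toy two-reaction system (Python; the rate constants, stoichiometry and step count are illustrative, and this is the plain τ-leap, not the Runge-Kutta extension proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system on species (A, B):  A -> 0  with propensity c1*A,
#                                A + A -> B  with propensity c2*A*(A-1)/2
c1, c2 = 0.1, 0.002
nu = np.array([[-1, 0],      # state change caused by one firing of reaction 1
               [-2, 1]])     # state change caused by one firing of reaction 2

def propensities(x):
    A, _ = x
    return np.array([c1 * A, c2 * A * (A - 1) / 2.0])

def tau_leap(x0, tau, n_steps):
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        a = propensities(x)
        k = rng.poisson(a * tau)          # firings of each channel during [t, t + tau)
        x = np.maximum(x + k @ nu, 0)     # apply stoichiometry, clamp at zero
    return x

print(tau_leap((1000, 0), tau=0.05, n_steps=200))
```

The accuracy issue discussed in the paper shows up here directly: increasing τ reduces the number of Poisson evaluations but distorts the steady-state variance of the species counts.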

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: The hospital readmission rate has been proposed as an important outcome indicator computable from routine statistics. However, most commonly used measures raise conceptual issues. OBJECTIVES: We sought to evaluate the usefulness of a computerized algorithm for identifying avoidable readmissions on the basis of minimum bias, criterion validity, and measurement precision. RESEARCH DESIGN AND SUBJECTS: A total of 131,809 hospitalizations of patients discharged alive from 49 hospitals were used to compare the predictive performance of risk adjustment methods. A random sample of 570 medical records of discharge/readmission pairs in 12 hospitals was reviewed to estimate the predictive value of the screening of potentially avoidable readmissions. MEASURES: Potentially avoidable readmissions, defined as readmissions related to a condition of the previous hospitalization, not expected as part of a program of care, and occurring within 30 days after the previous discharge, were identified by a computerized algorithm. Unavoidable readmissions were considered as censored events. RESULTS: A total of 5.2% of hospitalizations were followed by a potentially avoidable readmission, 17% of them in a different hospital. The predictive value of the screen was 78%; 27% of screened readmissions were judged clearly avoidable. The correlations between the hospital rate of clearly avoidable readmissions and the all-readmissions rate, the potentially avoidable readmissions rate, and the ratio of observed to expected readmissions were 0.42, 0.56 and 0.66, respectively. Adjustment models using clinical information performed better. CONCLUSION: Adjusted rates of potentially avoidable readmissions are scientifically sound enough to warrant their inclusion in hospital quality surveillance.
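A toy sketch of the screening rule as defined in MEASURES (Python/pandas; the column names and records are hypothetical, and the clinical "relatedness" and "planned" flags stand in for logic the actual algorithm derives from coded diagnoses):

```python
import pandas as pd

# Hypothetical discharge records, one row per hospital stay
stays = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "admit_date": pd.to_datetime(["2020-01-01", "2020-01-20", "2020-02-01", "2020-05-01"]),
    "discharge_date": pd.to_datetime(["2020-01-05", "2020-01-25", "2020-02-10", "2020-05-04"]),
    "related_to_previous": [False, True, False, False],  # related to a condition of the prior stay
    "planned": [False, False, False, False],             # expected as part of a program of care
}).sort_values(["patient_id", "admit_date"])

prev_discharge = stays.groupby("patient_id")["discharge_date"].shift(1)
days_since = (stays["admit_date"] - prev_discharge).dt.days

# Potentially avoidable: within 30 days of the previous discharge, related, and not planned
stays["potentially_avoidable"] = (
    days_since.le(30) & stays["related_to_previous"] & ~stays["planned"]
)
print(stays[["patient_id", "admit_date", "potentially_avoidable"]])
```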

Relevance:

60.00%

Publisher:

Abstract:

We analyze crash data collected by the Iowa Department of Transportation using Bayesian methods. The data set includes monthly crash numbers, estimated monthly traffic volumes, site length and other information collected at 30 paired sites in Iowa over more than 20 years, during which an intervention experiment was set up. The intervention consisted in converting 15 undivided road segments from four lanes to three, while an additional 15 segments, thought to be comparable in terms of traffic safety-related characteristics, were not converted. The main objective of this work is to find out whether the intervention reduces the number of crashes and the crash rates at the treated sites. We fitted a hierarchical Poisson regression model with a change-point to the number of monthly crashes per mile at each of the sites. Explanatory variables in the model included estimated monthly traffic volume, time, an indicator for intervention reflecting whether the site was a “treatment” or a “control” site, and various interactions. We accounted for seasonal effects in the number of crashes at a site by including smooth trigonometric functions with three different periods to reflect the four seasons of the year. A change-point at the month and year in which the intervention was completed for treated sites was also included. The number of crashes at a site can be thought to follow a Poisson distribution. To estimate the association between crashes and the explanatory variables, we used a log link function and added a random effect to account for overdispersion and for autocorrelation among observations obtained at the same site. We used proper but non-informative priors for all parameters in the model, and carried out all calculations using Markov chain Monte Carlo methods implemented in WinBUGS. We evaluated the effect of the four- to three-lane conversion by comparing the expected number of crashes per year per mile during the years preceding and following the conversion for treatment and control sites. We estimated this difference using the observed traffic volumes at each site and also per 100,000,000 vehicles. We also conducted a prospective analysis to forecast the expected number of crashes per mile at each site in the study one year, three years and five years following the four- to three-lane conversion. Posterior predictive distributions of the number of crashes, the crash rate and the percent reduction in crashes per mile were obtained for each site for the months of January and June one, three and five years after completion of the intervention. The model appears to fit the data well. We found that in most sites the intervention was effective and reduced the number of crashes. Overall, and for the observed traffic volumes, the reduction in the expected number of crashes per year and mile at converted sites was 32.3% (31.4% to 33.5% with 95% probability), while at the control sites the reduction was estimated to be 7.1% (5.7% to 8.2% with 95% probability). When the reduction in the expected number of crashes per year, mile and 100,000,000 AADT was computed, the estimates were 44.3% (43.9% to 44.6%) and 25.5% (24.6% to 26.0%) for converted and control sites, respectively. In both cases, the difference in the percent reduction in the expected number of crashes during the years following the conversion was significantly larger at converted sites than at control sites, even though the number of crashes appears to decline over time at all sites.
Results indicate that the reduction in the expected number of crashes per mile has a steeper negative slope at converted than at control sites. Consistent with this, the forecasted reduction in the number of crashes per year and mile during the years after completion of the conversion is more pronounced at converted sites than at control sites. Seasonal effects on the number of crashes have been well documented. In this dataset we found that, as expected, the expected number of monthly crashes per mile tends to be higher during winter months than during the rest of the year. Perhaps more interestingly, we found that there is an interaction between the four- to three-lane conversion and season; the reduction in the number of crashes appears to be more pronounced during months when the weather is nice than during other times of the year, even though a reduction was estimated for the entire year. Thus, it appears that the four- to three-lane conversion, while effective year-round, is particularly effective in reducing the expected number of crashes in nice weather.
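A schematic of the kind of hierarchical change-point model described above (our notation; the exact parameterization used in WinBUGS is not given in this summary):

y_{it} \sim \mathrm{Poisson}(\mu_{it}),

\log\mu_{it} = \log(\text{volume}_{it}) + \beta_0 + \beta_1 t + \beta_2\,\text{treat}_i + \beta_3\,\text{treat}_i\,\mathbb{1}(t > t_0) + \sum_{k=1}^{3}\left[\gamma_k \sin\!\left(\frac{2\pi t}{T_k}\right) + \eta_k \cos\!\left(\frac{2\pi t}{T_k}\right)\right] + b_i, \qquad b_i \sim \mathrm{Normal}(0, \sigma_b^2),

where y_{it} is the number of crashes per mile at site i in month t, t_0 is the month in which the conversion was completed, the T_k are the three seasonal periods of the trigonometric terms, and b_i is the site-level random effect accounting for overdispersion and within-site autocorrelation.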

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVE: High rates of suicide have been described in HIV-infected patients, but it is unclear to what extent the introduction of highly active antiretroviral therapy (HAART) has affected suicide rates. The authors examined time trends and predictors of suicide in the pre-HAART (1988-1995) and HAART (1996-2008) eras in HIV-infected patients and the general population in Switzerland. METHOD: The authors analyzed data from the Swiss HIV Cohort Study and the Swiss National Cohort, a longitudinal study of mortality in the Swiss general population. The authors calculated standardized mortality ratios comparing HIV-infected patients with the general population and used Poisson regression to identify risk factors for suicide. RESULTS: From 1988 to 2008, 15,275 patients were followed in the Swiss HIV Cohort Study for a median duration of 4.7 years. Of these, 150 died by suicide (rate 158.4 per 100,000 person-years). In men, standardized mortality ratios declined from 13.7 (95% CI=11.0-17.0) in the pre-HAART era to 3.5 (95% CI=2.5-4.8) in the late HAART era. In women, ratios declined from 11.6 (95% CI=6.4-20.9) to 5.7 (95% CI=3.2-10.3). In both periods, suicide rates tended to be higher in older patients, in men, in injection drug users, and in patients with advanced clinical stage of HIV illness. An increase in CD4 cell counts was associated with a reduced risk of suicide. CONCLUSIONS: Suicide rates decreased significantly with the introduction of HAART, but they remain above the rate observed in the general population, and risk factors for suicide remain similar. HIV-infected patients remain an important target group for suicide prevention.
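Schematically, the two quantities underlying the analysis are the standardized mortality ratio and a Poisson rate regression with person-years as an offset (our notation):

\mathrm{SMR} = \frac{O}{E} = \frac{\text{observed suicides}}{\sum_i n_i\,\lambda_i^{\text{pop}}}, \qquad d_j \sim \mathrm{Poisson}(\mu_j), \quad \log\mu_j = \log(\text{person-years}_j) + \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_j,

where E is the number of suicides expected if the age-, sex- and calendar-period-specific general-population rates \lambda_i^{\text{pop}} applied to the cohort's person-years n_i, and \mathbf{x}_j collects candidate risk factors such as age, sex, transmission group, clinical stage and CD4 cell count.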

Relevance:

60.00%

Publisher:

Abstract:

Nest plasticity of Cornitermes silvestrii (Isoptera, Termitidae, Syntermitinae) in response to flood pulse in the Pantanal, Mato Grosso, Brazil. The Pantanal is one of the largest wetlands in the world. Since many areas in the Pantanal are flooded during part of the year, it is expected that plants and animals would have mechanisms for their survival during the flooded period. This study investigated the existence of differences in nest shape and inquilines of Cornitermes silvestrii in areas influenced by the flood pulse. We measured the volume, height, width, and height/width ratio of 32 nests in flooded areas and 27 in dry areas, and performed a one-way ANOVA with a quasi-Poisson distribution to determine whether the nest measurements differed between the areas. To analyze the relationship of nest inquilines to flood pulse and nest shape, we performed a Poisson regression of inquiline richness on flood pulse and the above nest measurements. The nests of C. silvestrii in flooded areas were significantly higher than nests in dry areas, and had a larger height/width ratio. Colonies in periodically flooded areas would probably make a larger effort to extend their nests vertically, to maintain at least some portion of the structure out of the water and prevent the entire colony from being submerged. Neither the size of the nest nor the flood pulses influenced the assemblage of 11 species found in nests of C. silvestrii.
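A minimal sketch of the two kinds of fit mentioned (Python with statsmodels; the data frame is simulated for illustration, and the quasi-Poisson-style dispersion adjustment is shown via a Pearson chi-square scale estimate on the same richness response rather than on the nest measurements themselves):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical nest data: flooded (1) vs dry (0) area, nest dimensions, inquiline richness
nests = pd.DataFrame({
    "flooded": np.repeat([1, 0], [32, 27]),
    "height": rng.normal(40, 10, 59),
    "width": rng.normal(35, 8, 59),
})
nests["richness"] = rng.poisson(0.5 + 0.5 * nests["flooded"])

# Poisson regression of inquiline richness on flood pulse and nest measurements
pois = smf.glm("richness ~ flooded + height + width", data=nests,
               family=sm.families.Poisson()).fit()

# Quasi-Poisson-style fit: same mean model, with the dispersion estimated from the
# Pearson chi-square statistic, which inflates standard errors when variance > mean
quasi = smf.glm("richness ~ flooded + height + width", data=nests,
                family=sm.families.Poisson()).fit(scale="X2")
print(pois.bse, quasi.bse)
```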

Relevance:

60.00%

Publisher:

Abstract:

DNA condensation observed in vitro with the addition of polyvalent counterions is due to intermolecular attractive forces. We introduce a quantitative model of these forces in a Brownian dynamics simulation in addition to a standard mean-field Poisson-Boltzmann repulsion. The comparison of a theoretical value of the effective diameter calculated from the second virial coefficient in cylindrical geometry with some experimental results allows a quantitative evaluation of the one-parameter attractive potential. We show afterward that with a sufficient concentration of divalent salt (typically approximately 20 mM MgCl2), supercoiled DNA adopts a collapsed form where opposing segments of interwound regions present zones of lateral contact. However, under the same conditions the same plasmid without torsional stress does not collapse. The condensed molecules present coexisting open and collapsed plectonemic regions. Furthermore, simulations show that circular DNA in 50% methanol solutions with 20 mM MgCl2 aggregates without the requirement of torsional energy. This confirms known experimental results. Finally, a simulated DNA molecule confined in a box of variable size also presents some local collapsed zones in 20 mM MgCl2 above a critical concentration of the DNA. Conformational entropy reduction obtained either by supercoiling or by confinement seems thus to play a crucial role in all forms of condensation of DNA.

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVE: Hypopituitarism is associated with an increased mortality rate but the reasons underlying this have not been fully elucidated. The purpose of this study was to evaluate mortality and associated factors within a large GH-replaced population of hypopituitary patients. DESIGN: In KIMS (Pfizer International Metabolic Database) 13,983 GH-deficient patients with 69,056 patient-years of follow-up were available. METHODS: This study analysed standardised mortality ratios (SMRs) by Poisson regression. IGF1 SDS was used as an indicator of adequacy of GH replacement. Statistical significance was set to P<0.05. RESULTS: All-cause mortality was 13% higher compared with normal population rates (SMR, 1.13; 95% confidence interval, 1.04-1.24). Significant associations were female gender, younger age at follow-up, underlying diagnosis of Cushing's disease, craniopharyngioma and aggressive tumour and presence of diabetes insipidus. After controlling for confounding factors, there were statistically significant negative associations between IGF1 SDS after 1, 2 and 3 years of GH replacement and SMR. For cause-specific mortality there was a negative association between 1-year IGF1 SDS and SMR for deaths from cardiovascular diseases (P=0.017) and malignancies (P=0.044). CONCLUSIONS: GH-replaced patients with hypopituitarism demonstrated a modest increase in mortality rate; this appears lower than that previously published in GH-deficient patients. Factors associated with increased mortality included female gender, younger attained age, aetiology and lower IGF1 SDS during therapy. These data indicate that GH replacement in hypopituitary adults with GH deficiency may be considered a safe treatment.

Relevance:

60.00%

Publisher:

Abstract:

Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which the claims are settled and the variability between the severity of the claims from different accident years. Large changes in these processes will generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which, firstly, identifies and quantifies these two influences and, secondly, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of the stochastic models is that they provide measures of variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model makes it possible to express the variability of the reporting speed and of the development of the claims severity as a function of the two above-mentioned distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested using simulated data and then real data originating from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model implies a positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation will imply high expectations for the future payments, resulting in high claims reserve estimates. The negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation when claims are reported rapidly and fewer claims remain expected subsequently. The extreme case appears when all claims are reported at the same time, leading to expectations for the future payments of zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
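A schematic of the second (NBDM) model structure as described, in our notation (k development periods, X_j the incremental payments of one accident year):

N \sim \mathrm{Poisson}(\lambda), \quad \lambda \sim \mathrm{Gamma}(\alpha, \beta) \;\Rightarrow\; N \text{ is Negative Binomial};

(X_1, \dots, X_k) \mid N \sim \mathrm{Multinomial}(N;\, p_1, \dots, p_k), \quad (p_1, \dots, p_k) \sim \mathrm{Dirichlet}(\nu_1, \dots, \nu_k),

where N is the ultimate claim amount (in suitable units) and the p_j are the proportions settled in each development period. The thesis's criterion then compares the Dirichlet parameter with the Gamma shape \alpha: when the Dirichlet parameter exceeds \alpha the model implies positive correlation between past and future payments (Chain-Ladder appropriate), and when it is smaller the correlation is negative and Chain-Ladder is not recommended.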