942 results for negative binomial distribution
Abstract:
We investigated the Amblyomma fuscum load on a pullulating wild rodent population and the environmental and biological factors influencing the tick load on the hosts. One hundred and three individuals of Thrichomys laurentius were caught in an Atlantic forest fragment in northeastern Brazil, as part of a longitudinal survey on ticks infesting non-volant small mammals. Ticks (n = 342) were found on 45 individuals, and the overall mean intensity of infestation was 7.6 ticks per infested rodent. Ticks were highly aggregated in the host population, and the negative binomial distribution model provided a statistically satisfactory fit. The aggregated distribution was influenced by the sex and age of the host. The microhabitat preference of T. laurentius probably increases contact opportunities between hosts and aggregated infesting stages of the ticks, and provides important clues about the habitat suitability of A. fuscum.
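The aggregation check described above can be sketched in a few lines: for count data, a variance-to-mean ratio well above 1 signals aggregation, and method-of-moments estimates give a quick negative binomial fit. The counts below are simulated with hypothetical parameters, not the rodent data.

```python
import numpy as np

rng = np.random.default_rng(42)
k_true, mean_true = 0.4, 3.3              # hypothetical dispersion and mean load
counts = rng.negative_binomial(k_true, k_true / (k_true + mean_true), size=103)

m, v = counts.mean(), counts.var(ddof=1)
k_hat = m**2 / (v - m)                    # method-of-moments dispersion estimate
print("variance/mean:", v / m)            # >> 1 indicates aggregation
print("estimated k:", k_hat)              # small k = strong aggregation
```

A formal assessment would then compare observed frequencies against the fitted NB probabilities with a chi-square or likelihood-ratio goodness-of-fit test.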
Abstract:
BACKGROUND Estimating the prevalence of comorbidities and their associated costs in patients with diabetes is fundamental to optimizing health care management. This study assesses the prevalence and health care costs of comorbid conditions among patients with diabetes compared with patients without diabetes. Distinguishing potentially diabetes- and nondiabetes-related comorbidities in patients with diabetes, we also determined the most frequent chronic conditions and estimated their effect on costs across different health care settings in Switzerland. METHODS Using health care claims data from 2011, we calculated the prevalence and average health care costs of comorbidities among patients with and without diabetes in inpatient and outpatient settings. Patients with diabetes and comorbid conditions were identified using pharmacy-based cost groups. Generalized linear models with negative binomial distribution were used to analyze the effect of comorbidities on health care costs. RESULTS A total of 932,612 persons, including 50,751 patients with diabetes, were enrolled. The most frequent potentially diabetes- and nondiabetes-related comorbidities in patients older than 64 years were cardiovascular diseases (91%), rheumatologic conditions (55%), and hyperlipidemia (53%). The mean total health care costs for diabetes patients varied substantially by comorbidity status (US$3,203-$14,223). Patients with diabetes and more than two comorbidities incurred US$10,584 higher total costs than patients without comorbidity. Costs were significantly higher in patients with diabetes and comorbid cardiovascular disease (US$4,788), hyperlipidemia (US$2,163), hyperacidity disorders (US$8,753), and pain (US$8,324) compared with those without the given disease. CONCLUSION Comorbidities in patients with diabetes are highly prevalent and have substantial consequences for medical expenditures. Interestingly, hyperacidity disorders and pain were the most costly conditions.
Our findings highlight the importance of developing strategies that meet the needs of patients with diabetes and comorbidities. Integrated diabetes care such as used in the Chronic Care Model may represent a useful strategy.
Abstract:
Investigation into the medical care utilization of elderly Medicare enrollees in an HMO (Kaiser - Portland, Oregon): The specific research topics are: (1) The utilization of medical care by selected determinants such as: place of service, type of service, type of appointment, physician status, physician specialty and number of associated morbidities. (2) The attended prevalence of 3 chronic diseases: hypertension, diabetes and arthritis, in addition to pneumonias as an example of acute diseases. The selection of these examples was based on their importance in morbidity and/or mortality among the elderly. The share of these diseases in outpatient and inpatient contacts was examined as an example of the relation between morbidity and medical care utilization. (3) The tendency of individual utilization patterns to persist in subsequent time periods. The concept of contagion or proneness was studied over a period of 2 years. The negative binomial and Poisson distributions were fitted to the utilization in the 2nd year conditional on that in the 1st year, as regards outpatient and inpatient contacts. The present research is based on a longitudinal study of a 20% random sample of elderly Medicare enrollees. The sample size is 1,683 individuals during the period from August 1980 to December 1982. The results of the research were: (1) The distribution of contacts by selected determinants did not reveal a consistent pattern between sexes and age groups. (2) The attended prevalence of hypertension and arthritis showed excess prevalence among females. For diabetes and pneumonias no female excess was noticed. A consistent increase in prevalence with increasing age was not detected. There were important findings pertaining to the relatively large share of the combined 3 chronic diseases in utilization. They accounted for 20% of male outpatient contacts vs. 25% of female outpatient contacts. For inpatient contacts, they consumed 20% for males vs. 24% for females.
(3) The finding that the negative binomial distribution fit the utilization experience supported the research hypothesis concerning the concept of contagion in utilization. This important finding can be helpful in estimating liability functions needed for forecasting future utilization according to previous experience. Such information is relevant to the organization, administration and planning of medical care in general. (Abstract shortened with permission of author.)
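The proneness idea tested above has a classical formalization: if individual contact rates are gamma-distributed and counts are Poisson given the rate, the marginal distribution of contacts is negative binomial, which then fits better than a single Poisson. A sketch with hypothetical parameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Persistent individual "proneness": gamma-distributed rates mixed with
# Poisson counts yield a negative binomial marginal distribution.
k, mean = 1.2, 4.0                        # hypothetical dispersion and mean
rates = rng.gamma(shape=k, scale=mean / k, size=1683)
contacts = rng.poisson(rates)

# Log-likelihood under a Poisson with the same mean vs. a moment-matched NB.
m, v = contacts.mean(), contacts.var(ddof=1)
ll_pois = stats.poisson.logpmf(contacts, m).sum()
k_hat = m**2 / (v - m)
ll_nb = stats.nbinom.logpmf(contacts, k_hat, k_hat / (k_hat + m)).sum()
print(ll_nb > ll_pois)   # NB should dominate for overdispersed contacts
```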
Abstract:
Cystic echinococcosis, caused by Echinococcus granulosus, is highly endemic in North Africa and the Middle East. This paper examines the abundance and prevalence of infection of E. granulosus in camels in Tunisia. No cysts were found in 103 camels from Kebili, whilst 19 of 188 camels from Benguerden (10.1%) were infected. Of the cysts found, 95% were considered fertile, with protoscolices present, and 80% of protoscolices were considered viable by their ability to exclude aqueous eosin. Molecular techniques applied to cyst material from the camels demonstrated that the study animals were infected with the G1 sheep strain of E. granulosus. Observed data were fitted to a mathematical model by maximum likelihood techniques to define the parameters and their confidence limits, with the negative binomial distribution used to define the error variance in the observed data. The infection pressure on camels was somewhat lower than that reported for sheep in an earlier study. However, because camels are much longer-lived animals, the model fit suggested that older camels have a relatively high prevalence, reaching a most likely value of 32% at age 15 years. This could represent an important source of transmission to dogs, and hence indirectly to man, of this zoonotic strain. In common with similar studies on other species, there was no evidence of parasite-induced immunity in camels.
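The age-prevalence fitting step can be illustrated with a much-simplified catalytic model, prevalence(a) = 1 - exp(-beta*a), fitted by binomial maximum likelihood. The age classes and counts below are invented for illustration; the paper's actual model (with negative binomial error variance) is richer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

ages     = np.array([2.0, 5.0, 8.0, 11.0, 15.0])   # assumed age-class midpoints
n_tested = np.array([40, 45, 40, 35, 28])          # hypothetical camels sampled
n_pos    = np.array([2, 5, 7, 9, 9])               # hypothetical infected counts

def neg_log_lik(beta):
    p = 1.0 - np.exp(-beta * ages)                 # catalytic-model prevalence
    return -np.sum(n_pos * np.log(p) + (n_tested - n_pos) * np.log(1.0 - p))

beta_hat = minimize_scalar(neg_log_lik, bounds=(1e-4, 1.0), method="bounded").x
pred_15 = 1.0 - np.exp(-beta_hat * 15.0)
print("beta:", beta_hat, "predicted prevalence at age 15:", pred_15)
```

With a constant infection pressure beta, prevalence rises monotonically with age, which is why long-lived hosts such as camels can reach high prevalence even under modest pressure.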
Abstract:
Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented from investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments that contain homogeneous traffic and geometric conditions within each segment. Segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely, Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC).
The NBRM was found to be appropriate for only one category, while the ZINB model was more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model gave the best fit when the count data exhibited excess zeros and over-dispersion, which was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not reach statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
Abstract:
This study presents the validation of the observations made by the fisheries observer programme known as the Programa Bitácoras de Pesca (PBP) during 2005-2011 in the distribution area where the industrial purse-seine vessels fishing the north-central stock of the Peruvian anchoveta (Engraulis ringens) operate. In addition, for the same period and area, the magnitudes of discards due to excess catch, discards of juveniles, and incidental catch (bycatch) of this fishery were estimated. A total of 3,768 trips out of 302,859 were observed, a coverage of 1.2%. The data on discards due to excess catch, juvenile discards and bycatch recorded on the observed trips were characterized by a high proportion of zeros. To validate the observations, a simulation study based on Monte Carlo methodology was carried out using a negative binomial distribution model. This makes it possible to infer the optimal coverage level and to determine whether the information obtained by the observer programme is reliable. From this analysis, it is concluded that current observation levels should be increased to a coverage of at least 10% of all trips made each year by the industrial purse-seine vessels fishing the north-central anchoveta stock. Discards due to excess catch, juvenile discards and bycatch were estimated using three methodologies: bootstrap, generalized linear models (GLM) and the delta model. Each methodology produced different magnitudes with similar trends.
The estimated magnitudes were compared using a Bayesian ANOVA, which showed little evidence that the estimates of discards due to excess catch differed across methodologies; the same held for bycatch, whereas for juvenile discards there were substantial differences. The methodology that satisfied the assumptions and explained the most variability in the modelled variables was the delta model, which appears to be the better alternative for estimation given the high proportion of zeros in the data. The mean estimates of discards due to excess catch, juvenile discards and bycatch under the delta model were 252,580, 41,772 and 44,823 tonnes, respectively, together representing 5.74% of landings. In addition, using the estimated magnitude of juvenile discards, a biomass projection exercise was carried out under the hypothetical scenario of no fishing mortality and assuming the discarded juveniles measured only 8 to 11 cm; the biomass that would not be available to the fishery was estimated at between 52,000 and 93,000 tonnes.
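The coverage question can be illustrated with a toy Monte Carlo along the lines described: per-trip discards are simulated as zero-heavy negative binomial counts, and the relative error of the expanded estimate is compared at roughly the observed (~1.2%) and recommended (10%) coverage levels. The fleet size, dispersion and mean tonnage below are scaled-down assumptions, not the programme's values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trips = 302_859 // 60                    # scaled-down fleet for the sketch
k, mean = 0.2, 80.0                        # hypothetical dispersion / mean tonnes
discards = rng.negative_binomial(k, k / (k + mean), size=n_trips)
true_total = discards.sum()

def rel_error(coverage, reps=200):
    m = int(coverage * n_trips)
    errs = []
    for _ in range(reps):
        sample = rng.choice(discards, size=m, replace=False)
        errs.append(abs(sample.mean() * n_trips - true_total) / true_total)
    return float(np.mean(errs))

err_low, err_high = rel_error(0.012), rel_error(0.10)
print("mean relative error at 1.2% vs 10% coverage:", err_low, err_high)
```

The strong overdispersion (small k) is what makes low coverage so imprecise: a handful of very large discard events dominates the total.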
Abstract:
2016
Abstract:
In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time invariant safety. Since the time invariant safety assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempts have been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking on how accurately the modified EB method estimates safety in the presence of the time variant safety and regression-to-the-mean (RTM) effects. This study derives the closed form marginal distribution, and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method by simultaneously accounting for the RTM and time variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of the time variant safety and the RTM effects.
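The closed-form posterior at the heart of the (modified) EB method can be written down directly: with a gamma prior over site safety (shape phi, rate phi/mu) and Poisson yearly counts scaled by known time-variant factors, the posterior mean is a simple ratio. The numbers below are illustrative, not from the paper.

```python
import numpy as np

phi, mu = 2.0, 5.0                           # hypothetical dispersion and SPF mean
counts = np.array([3, 7, 6, 9])              # observed yearly crash counts
trend = np.array([1.0, 1.05, 1.10, 1.20])    # assumed time-variant safety factors

# Gamma-Poisson conjugacy: posterior shape = phi + sum(y),
# posterior rate = phi/mu + sum(trend); the EB estimate is their ratio.
eb_base = (phi + counts.sum()) / (phi / mu + trend.sum())
naive = counts.sum() / trend.sum()           # ignores the prior entirely
print("EB estimate:", eb_base, "naive estimate:", naive)
```

The EB estimate shrinks the naive site-specific rate toward the SPF mean mu, which is exactly the mechanism that counteracts regression-to-the-mean.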
Abstract:
Seasonal population dynamics of the digenean Phyllodistomum pawlovskii in the urinary bladder of the bullhead catfish, Pseudobagrus fulvidraco, were investigated in Liangzi Lake in the flood plain of the Yangtze River in China from February 2001 to July 2002. The overall prevalence of the parasite was high, 41.5% (n = 1,476), while the mean abundance was relatively low, 1.24 ± 2.11. The parasite exhibited evident seasonality in changes of prevalence and abundance. In brief, prevalence and abundance were very low in midwinter (January), but increased and remained relatively high in other seasons and months. The distribution pattern of this parasite in the fish was overdispersed, with a variance-to-mean ratio > 1, but its frequency distribution could not be described by the negative binomial model. There were positive correlations between the number of parasites per fish and the age and length of the fish; a peaked age-parasite abundance curve was not detected in the parasite-host association. It is suggested that P. pawlovskii has little effect on the population structure of the bullhead catfish.
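The two distributional checks reported (overdispersion via the variance-to-mean ratio, and a goodness-of-fit comparison against a fitted negative binomial) can be sketched as follows, on simulated counts with roughly the reported mean abundance; the dispersion value is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
k, mean = 0.6, 1.24                        # assumed dispersion; reported mean
counts = rng.negative_binomial(k, k / (k + mean), size=1476)

m, v = counts.mean(), counts.var(ddof=1)
print("variance/mean:", v / m)             # > 1: overdispersed

k_hat = m**2 / (v - m)                     # moment-fitted NB parameters
obs = np.bincount(counts, minlength=8)[:8]
exp = stats.nbinom.pmf(np.arange(8), k_hat, k_hat / (k_hat + m)) * counts.size
chi2 = float(((obs - exp) ** 2 / exp).sum())
print("chi-square over counts 0-7:", chi2)
```

An overdispersed distribution can still fail the NB goodness-of-fit test, as the abstract reports: overdispersion is necessary but not sufficient for an NB fit.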
Abstract:
Neutron diffraction at 11.4 and 295 K and solid-state 67Zn NMR are used to determine both the local and average structures in the disordered, negative thermal expansion (NTE) material, Zn(CN)2. Solid-state NMR not only confirms that there is head-to-tail disorder of the C≡N groups present in the solid, but yields information about the relative abundances of the different Zn(CN)4-n(NC)n tetrahedral species, which do not follow a simple binomial distribution. The Zn(CN)4 and Zn(NC)4 species occur with much lower probabilities than are predicted by binomial theory, supporting the conclusion that they are of higher energy than the other local arrangements. The lowest energy arrangement is Zn(CN)2(NC)2. The use of total neutron diffraction at 11.4 K, with analysis of both the Bragg diffraction and the derived total correlation function, yields the first experimental determination of the individual Zn−N and Zn−C bond lengths as 1.969(2) and 2.030(2) Å, respectively. The very small difference in bond lengths, of ~0.06 Å, means that it is impossible to obtain these bond lengths using Bragg diffraction in isolation. Total neutron diffraction also provides information on both the average and local atomic displacements responsible for NTE in Zn(CN)2. The principal motions giving rise to NTE are shown to be those in which the carbon and nitrogen atoms within individual Zn−C≡N−Zn linkages are displaced to the same side of the Zn···Zn axis. Displacements of the carbon and nitrogen atoms to opposite sides of the Zn···Zn axis, suggested previously in X-ray studies as being responsible for NTE behavior, in fact make negligible contribution at temperatures up to 295 K.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
It is important to examine the nature of the relationships between roadway, environmental, and traffic factors and motor vehicle crashes, with the aim of improving the collective understanding of the causal mechanisms involved in crashes and better predicting their occurrence. Statistical models of motor vehicle crashes are one path of inquiry often used to gain these initial insights. Recent efforts have focused on the estimation of negative binomial and Poisson regression models (and related deviants) due to their relatively good fit to crash data. Of course, analysts constantly seek methods that offer greater consistency with the data generating mechanism (motor vehicle crashes in this case), provide better statistical fit, and provide insight into data structure that was previously unavailable. One such opportunity exists with some types of crash data, in particular crash-level data that are collected across roadway segments, intersections, etc. It is argued in this paper that some crash data possess hierarchical structure that has not routinely been exploited. This paper describes the application of binomial multilevel models of crash types using 548 motor vehicle crashes collected from 91 two-lane rural intersections in the state of Georgia. Crash prediction models are estimated for angle, rear-end, and sideswipe (both same direction and opposite direction) crashes. The contributions of the paper are the recognition of hierarchical data structure and the application of a theoretically appealing and suitable analysis approach for multilevel data, yielding insights into intersection-related crashes by crash type.
Abstract:
The Poisson distribution has often been used for count data such as accident counts. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, the Poisson and NB distributions are incapable of accounting for unobserved heterogeneities due to spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge with existing traffic accident prediction models is the excess of zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that are the basic motivation for Random Effect models. This paper proposes an effective way of fitting a ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
Abstract:
Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models (the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3]) for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
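For the negative binomial member of this model family, the probability-of-detection calculation is closed-form: a single sample is empty with probability (k/(k+m))**k, so the number of samples needed for a target detection probability follows directly. The parameters below are illustrative; smaller k means stronger clumping.

```python
import numpy as np

def samples_needed(mean, k, confidence=0.95):
    p0 = (k / (k + mean)) ** k             # chance a single sample is empty
    return int(np.ceil(np.log(1.0 - confidence) / np.log(p0)))

# Clumped infestations (small k) need more samples at the same density.
print(samples_needed(0.5, k=2.0), samples_needed(0.5, k=0.2))
```

This is why the choice of underlying distribution matters: a Poisson assumption (k going to infinity) would understate the sampling effort required for clumped infestations.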
Abstract:
Background Detection of outbreaks is an important part of disease surveillance. Although many algorithms have been designed for detecting outbreaks, few have been specifically assessed against diseases that have distinct seasonal incidence patterns, such as those caused by vector-borne pathogens. Methods We applied five previously reported outbreak detection algorithms to Ross River virus (RRV) disease data (1991-2007) for the four local government areas (LGAs) of Brisbane, Emerald, Redland and Townsville in Queensland, Australia. The methods used were the Early Aberration Reporting System (EARS) C1, C2 and C3 methods, negative binomial cusum (NBC), historical limits method (HLM), Poisson outbreak detection (POD) method and the purely temporal SaTScan analysis. Seasonally-adjusted variants of the NBC and SaTScan methods were developed. Some of the algorithms were applied using a range of parameter values, resulting in 17 variants of the five algorithms. Results The 9,188 RRV disease notifications that occurred in the four selected regions over the study period showed marked seasonality, which adversely affected the performance of some of the outbreak detection algorithms. Most of the methods examined were able to detect the same major events. The exception was the seasonally-adjusted NBC methods that detected an excess of short signals. The NBC, POD and temporal SaTScan algorithms were the only methods that consistently had high true positive rates and low false positive and false negative rates across the four study areas. The timeliness of outbreak signals generated by each method was also compared but there was no consistency across outbreaks and LGAs. Conclusions This study has highlighted several issues associated with applying outbreak detection algorithms to seasonal disease data. 
In the absence of a true gold standard, quantitative comparison is difficult, and caution should be taken when interpreting the true positives, false positives, sensitivity and specificity.
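Of the methods compared, the cusum family is the easiest to sketch. Below is a much-simplified count cusum in the spirit of the NBC method, without the negative binomial likelihood weighting or seasonal adjustment; the reference value, threshold and weekly counts are invented.

```python
def cusum_signals(counts, reference, threshold):
    """Flag weeks where the accumulated excess over baseline crosses a threshold."""
    s, flags = 0.0, []
    for y in counts:
        s = max(0.0, s + y - reference)    # accumulate excess over the baseline
        flags.append(s > threshold)
    return flags

weekly = [2, 1, 3, 2, 2, 9, 11, 14, 3, 2]  # hypothetical weekly notifications
flags = cusum_signals(weekly, reference=4.0, threshold=8.0)
print("first flagged week index:", flags.index(True))
```

Seasonal diseases like RRV complicate this picture precisely because the "baseline" itself moves through the year, which is why the seasonally-adjusted variants were tested.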