983 results for Negative binomial


Relevance:

60.00%

Publisher:

Abstract:

Advances in safety research—trying to improve the collective understanding of motor vehicle crash causation—rest upon the pursuit of numerous lines of inquiry. The research community has focused on analytical methods development (negative binomial specifications, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might think of different lines of inquiry in terms of 'low-hanging fruit'—areas that might provide significant improvements in understanding crash causation. It is the contention of this research that omitted variable bias caused by the exclusion of important variables is an important line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models—yet they offer a significant opportunity to better understand the factors contributing to crashes. This study—believed to represent a unique contribution to the safety literature—develops and examines the role of a sizeable set of spatial variables in intersection crash occurrence. In addition to commonly considered traffic and geometric variables, the examined spatial factors include local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools. The results indicate that inclusion of these factors yields a significant improvement in model explanatory power, and the results also generally agree with expectation. The research illuminates the importance of spatial variables in safety research and the negative consequences of their omission.
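
As a rough illustration of the omitted variable bias argument above, the following Python sketch (all variable names and parameter values are hypothetical, not taken from the study) simulates crash counts that depend on both traffic volume and an omitted spatial factor correlated with it, then compares negative binomial fits with and without the spatial term.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Log traffic volume and a correlated, often-unmeasured spatial factor (e.g., proximity to bars)
log_aadt = rng.normal(9.0, 0.5, n)
spatial = 0.6 * (log_aadt - 9.0) + rng.normal(0, 0.5, n)

# Assumed crash-generating process uses both covariates
mu = np.exp(-6.0 + 0.7 * log_aadt + 0.5 * spatial)
alpha = 0.5                                                   # assumed NB dispersion
y = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))

X_full = sm.add_constant(np.column_stack([log_aadt, spatial]))
X_omit = sm.add_constant(log_aadt)

full = sm.NegativeBinomial(y, X_full).fit(disp=0)
omit = sm.NegativeBinomial(y, X_omit).fit(disp=0)

# The AADT coefficient is inflated when the correlated spatial factor is omitted
print("AADT effect, full model:   ", round(full.params[1], 3))
print("AADT effect, omitted model:", round(omit.params[1], 3))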

Relevance:

60.00%

Publisher:

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
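
A minimal simulation in the spirit of the experiment described above (the exposure levels and crash probabilities are illustrative assumptions, not the study's values) shows how Bernoulli trials with small, unequal probabilities produce a large share of zero counts at low exposure, without any dual-state mechanism.

import numpy as np

rng = np.random.default_rng(1)
n_sites = 2000

def simulate_zero_share(mean_exposure):
    """Share of sites with zero crashes when each vehicle passage is an
    independent Bernoulli trial with its own small crash probability."""
    zeros = 0
    for _ in range(n_sites):
        n_vehicles = rng.poisson(mean_exposure)          # exposure at this site
        p = rng.uniform(1e-5, 5e-5, n_vehicles)          # unequal trial probabilities
        crashes = rng.binomial(1, p).sum()               # Poisson trials outcome
        zeros += crashes == 0
    return zeros / n_sites

# Lower exposure (shorter time scale or lower volume) -> many more "excess" zeros
print("zero share, low exposure :", simulate_zero_share(2_000))
print("zero share, high exposure:", simulate_zero_share(50_000))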

Relevance:

60.00%

Publisher:

Abstract:

Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korea data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes.
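
The underdispersion that motivates the gamma count model can be checked with a simple diagnostic: after a Poisson regression fit, a Pearson chi-square statistic per residual degree of freedom well below 1 signals underdispersion, while values above 1 signal overdispersion. A hedged sketch with statsmodels on synthetic placeholder data (not the Korean crossing data):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
log_aadt = rng.normal(8.0, 0.6, n)            # hypothetical highway traffic volume
log_trains = rng.normal(3.0, 0.4, n)          # hypothetical daily train volume
mu = np.exp(-5.0 + 0.5 * log_aadt + 0.3 * log_trains)
y = rng.binomial(3, np.clip(mu / 3, 0, 1))    # binomial thinning gives underdispersed counts

X = sm.add_constant(np.column_stack([log_aadt, log_trains]))
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print(f"Pearson chi2 / df = {dispersion:.2f}")
# well below 1 -> underdispersion (a gamma count model may fit better);
# above 1 -> overdispersion (negative binomial is the usual remedy)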

Relevance:

60.00%

Publisher:

Abstract:

A study was done to develop macrolevel crash prediction models that can be used to understand and identify effective countermeasures for improving signalized highway intersections and multilane stop-controlled highway intersections in rural areas. Poisson and negative binomial regression models were fit to intersection crash data from Georgia, California, and Michigan. To assess the suitability of the models, several goodness-of-fit measures were computed. The statistical models were then used to shed light on the relationships between crash occurrence and traffic and geometric features of the rural signalized intersections. The results revealed that traffic flow variables significantly affected the overall safety performance of the intersections regardless of intersection type and that the geometric features of intersections varied across intersection type and also influenced crash type.
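
A minimal sketch of the kind of model comparison described above, on synthetic intersection data (covariates and values are illustrative, not the study's): fit Poisson and negative binomial regressions with statsmodels and compare log-likelihood and AIC as simple goodness-of-fit measures.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500

log_major_aadt = rng.normal(9.0, 0.5, n)
log_minor_aadt = rng.normal(7.0, 0.6, n)
left_turn_lane = rng.integers(0, 2, n)

mu = np.exp(-7.0 + 0.6 * log_major_aadt + 0.4 * log_minor_aadt - 0.2 * left_turn_lane)
alpha = 0.8                                                   # assumed overdispersion
y = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))

X = sm.add_constant(np.column_stack([log_major_aadt, log_minor_aadt, left_turn_lane]))

poisson_fit = sm.Poisson(y, X).fit(disp=0)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=0)

print("Poisson  llf / AIC:", round(poisson_fit.llf, 1), round(poisson_fit.aic, 1))
print("NegBin   llf / AIC:", round(negbin_fit.llf, 1), round(negbin_fit.aic, 1))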

Relevance:

60.00%

Publisher:

Abstract:

The intent of this note is to succinctly articulate additional points that were not provided in the original paper (Lord et al., 2005) and to help clarify a collective reluctance to adopt zero-inflated (ZI) models for modeling highway safety data. A dialogue on this important issue, just one of many important safety modeling issues, is healthy discourse on the path towards improved safety modeling. This note first provides a summary of prior findings and conclusions of the original paper. It then presents two critical and relevant issues: the maximizing statistical fit fallacy and logic problems with the ZI model in highway safety modeling. Finally, we provide brief conclusions.

Relevance:

60.00%

Publisher:

Abstract:

Large trucks are involved in a disproportionately small fraction of total crashes but a disproportionately large fraction of fatal crashes. Large truck crashes often result in significant congestion due to the trucks' large physical dimensions and the difficulty of clearing crash scenes. Consequently, preventing large truck crashes is critical to improving highway safety and operations. This study identifies high-risk sites (hot spots) for large truck crashes in Arizona and examines potential risk factors related to the design and operation of the high-risk sites. High-risk sites were identified using both state-of-the-practice methods (accident reduction potential using negative binomial regression with long crash histories) and a newly proposed method using Property Damage Only Equivalents (PDOE). The hot spots identified via the count model generally exhibited few fatalities and major injuries but many minor injuries and PDO crashes, while the opposite trend was observed using the PDOE methodology. The hot spots based on the count model exhibited large AADTs, whereas those based on the PDOE showed relatively small AADTs but large fractions of trucks and high posted speed limits. Documented site investigations of hot spots revealed numerous potential risk factors, including weaving activity near freeway junctions and ramps, absence of acceleration lanes near on-ramps, shoulders too narrow to accommodate large trucks, narrow lane widths, inadequate signage, and poor lighting conditions within a tunnel.
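
A sketch of the Property Damage Only Equivalents idea used above: each crash severity is converted to an equivalent number of PDO crashes and sites are ranked by the weighted total. The weights and crash records below are hypothetical placeholders, not the values used in the study.

import pandas as pd

# Hypothetical severity weights (PDO-equivalents per crash)
WEIGHTS = {"fatal": 100.0, "major_injury": 25.0, "minor_injury": 5.0, "pdo": 1.0}

# Hypothetical crash history per site
crashes = pd.DataFrame({
    "site":     ["A", "A", "B", "B", "C", "C", "C"],
    "severity": ["pdo", "minor_injury", "fatal", "pdo", "minor_injury", "pdo", "pdo"],
})

crashes["pdoe"] = crashes["severity"].map(WEIGHTS)
ranking = crashes.groupby("site")["pdoe"].sum().sort_values(ascending=False)

print(ranking)   # sites with severe crashes rise to the top even with few total crashes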

Relevance:

60.00%

Publisher:

Abstract:

PURPOSE: To examine the visual predictors of falls and injurious falls among older adults with glaucoma. METHODS: Prospective falls data were collected for 71 community-dwelling adults with primary open-angle glaucoma, mean age 73.9 ± 5.7 years, for one year using monthly falls diaries. Baseline assessment of central visual function included high-contrast visual acuity and Pelli-Robson contrast sensitivity. Binocular integrated visual fields were derived from monocular Humphrey Field Analyser plots. Rate ratios (RR) for falls and injurious falls with 95% confidence intervals (CIs) were based on negative binomial regression models. RESULTS: During the one year follow-up, 31 (44%) participants experienced at least one fall and 22 (31%) experienced falls that resulted in an injury. Greater visual impairment was associated with increased falls rate, independent of age and gender. In a multivariate model, more extensive field loss in the inferior region was associated with higher rate of falls (RR 1.57, 95%CI 1.06, 2.32) and falls with injury (RR 1.80, 95%CI 1.12, 2.98), adjusted for all other vision measures and potential confounding factors. Visual acuity, contrast sensitivity, and superior field loss were not associated with the rate of falls; topical beta-blocker use was also not associated with increased falls risk. CONCLUSIONS: Falls are common among older adults with glaucoma and occur more frequently in those with greater visual impairment, particularly in the inferior field region. This finding highlights the importance of the inferior visual field region in falls risk and assists in identifying older adults with glaucoma at risk of future falls, for whom potential interventions should be targeted. KEY WORDS: glaucoma, visual field, visual impairment, falls, injury
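
The rate ratios reported above come from negative binomial regression of fall counts on the vision measures, with follow-up time as exposure. A minimal sketch of how such a rate ratio and its 95% CI can be obtained with statsmodels (the data and the dispersion value are placeholders, not the study's):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 71

inferior_field_loss = rng.normal(0.0, 1.0, n)      # standardised field-loss score
followup_years = np.ones(n)                        # one year of falls diaries per person
mu = followup_years * np.exp(-0.6 + 0.4 * inferior_field_loss)
falls = rng.poisson(mu)                            # placeholder outcome counts

X = sm.add_constant(inferior_field_loss)
fit = sm.GLM(falls, X,
             family=sm.families.NegativeBinomial(alpha=0.5),   # assumed dispersion
             exposure=followup_years).fit()

rr = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"rate ratio {rr:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")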

Relevance:

60.00%

Publisher:

Abstract:

Advances in safety research—trying to improve the collective understanding of motor vehicle crash causes and contributing factors—rest upon the pursuit of numerous lines of research inquiry. The research community has focused considerable attention on analytical methods development (negative binomial models, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might logically seek to know which lines of inquiry might provide the most significant improvements in understanding crash causation and/or prediction. It is the contention of this paper that the exclusion of important variables (causal or surrogate measures of causal variables) causes omitted variable bias in model estimation and is an important and neglected line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models—but they offer significant opportunities to better understand contributing factors and/or causes of crashes. This study examines the role of important variables (other than Average Annual Daily Traffic (AADT)) that are generally omitted from intersection crash prediction models. In addition to the geometric and traffic-regulatory information of intersections, the proposed model includes many spatial factors, such as local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools—representing a mix of potential environmental and human factors that are theoretically important but rarely used. Results suggest that these variables, in addition to AADT, have significant explanatory power, and that their exclusion leads to omitted variable bias. Evidence is provided that variable exclusion overstates the effect of minor road AADT by as much as 40% and major road AADT by 14%.

Relevance:

60.00%

Publisher:

Abstract:

The Poisson distribution has often been used for count data such as accident counts. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, Poisson and NB distributions are incapable of accounting for unobserved heterogeneity arising from spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge with existing traffic accident prediction models is the presence of excess zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling a dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that are the basic motivation for Random Effect models. This paper proposes an effective way of fitting a ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
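
A minimal Bayesian sketch of a ZIP model with location-specific random effects, in the spirit of the proposal above. It assumes PyMC version 5, where pm.ZeroInflatedPoisson takes a zero-inflation weight psi and a Poisson mean mu; all priors, variable names, and data are illustrative assumptions, not the paper's specification.

import numpy as np
import pymc as pm

rng = np.random.default_rng(5)
n_sites, n_obs_per_site = 40, 10
site_idx = np.repeat(np.arange(n_sites), n_obs_per_site)
aadt = rng.normal(0.0, 1.0, n_sites * n_obs_per_site)   # standardised exposure proxy
y = rng.poisson(np.exp(0.2 * aadt)) * rng.binomial(1, 0.8, aadt.size)  # toy ZIP counts

with pm.Model() as zip_re_model:
    beta0 = pm.Normal("beta0", 0.0, 5.0)
    beta1 = pm.Normal("beta1", 0.0, 5.0)
    sigma_site = pm.HalfNormal("sigma_site", 1.0)
    site_re = pm.Normal("site_re", 0.0, sigma_site, shape=n_sites)   # location random effects

    mu = pm.math.exp(beta0 + beta1 * aadt + site_re[site_idx])
    psi = pm.Beta("psi", 2.0, 2.0)            # expected proportion of the Poisson state

    pm.ZeroInflatedPoisson("crashes", psi=psi, mu=mu, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)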

Relevance:

60.00%

Publisher:

Abstract:

Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving accounts for a significant proportion of crash occurrence, yet is rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach—with a variety of statistical enhancements—has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that reflect largely behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that such a model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
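
To illustrate the three-process argument, the following toy simulation (all components and magnitudes are hypothetical, not the paper's model) generates counts as the combination of an observed-feature process, an unobserved spatial process, and rare behavioural events, and shows that the aggregate looks overdispersed even though no single "NB process" generated it.

import numpy as np

rng = np.random.default_rng(6)
n_sites = 5000

# 1) Observed network features (e.g., exposure) drive a baseline crash rate
feature_rate = np.exp(rng.normal(0.0, 0.3, n_sites))

# 2) Unobserved spatial effects (weather, land use) scale that rate per site
spatial_effect = np.exp(rng.normal(0.0, 0.5, n_sites))

# 3) 'Apparent' random behavioural events add occasional extra crashes
behavioural = rng.binomial(1, 0.05, n_sites) * rng.poisson(2.0, n_sites)

counts = rng.poisson(feature_rate * spatial_effect) + behavioural

# Variance exceeds the mean: overdispersion a single NB fit would absorb
# without revealing the separate processes behind it
print(f"mean {counts.mean():.2f}, variance {counts.var():.2f}")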

Relevance:

60.00%

Publisher:

Abstract:

Objective: To examine the effects of extremely cold and hot temperatures on ischaemic heart disease (IHD) mortality in five cities (Beijing, Tianjin, Shanghai, Wuhan and Guangzhou) in China; and to examine the time relationships between cold and hot temperatures and IHD mortality for each city. Design: A negative binomial regression model combined with a distributed lag non-linear model was used to examine city-specific temperature effects on IHD mortality up to 20 lag days. A meta-analysis was used to pool the cold effects and hot effects across the five cities. Patients: 16 559 IHD deaths were monitored by a sentinel surveillance system in five cities during 2004–2008. Results: The relationships between temperature and IHD mortality were non-linear in all five cities. The minimum-mortality temperatures in northern cities were lower than in southern cities. In Beijing, Tianjin and Guangzhou, the effects of extremely cold temperatures were delayed, while Shanghai and Wuhan had immediate cold effects. The effects of extremely hot temperatures appeared immediately in all the cities except Wuhan. Meta-analysis showed that IHD mortality increased 48% at the 1st percentile of temperature (extremely cold temperature) compared with the 10th percentile, while IHD mortality increased 18% at the 99th percentile of temperature (extremely hot temperature) compared with the 90th percentile. Conclusions: Results indicate that both extremely cold and hot temperatures increase IHD mortality in China. Each city has its characteristics of heat effects on IHD mortality. The policy for response to climate change should consider local climate–IHD mortality relationships.
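
The study's distributed lag non-linear model is built from spline cross-bases (commonly fitted with the R package dlnm); as a much simplified, hedged illustration of the idea, one can regress daily IHD deaths on current and lagged temperature terms in a negative binomial GLM. The data, lag length, and functional form below are placeholders, not the study's specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
days = 1500
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 3, days)
deaths = rng.poisson(np.exp(2.0 + 0.0005 * (temp - 20) ** 2))   # toy U-shaped risk

df = pd.DataFrame({"deaths": deaths, "temp": temp})
max_lag = 20
for lag in range(1, max_lag + 1):            # simple lag matrix, not a spline cross-basis
    df[f"temp_lag{lag}"] = df["temp"].shift(lag)
df = df.dropna()

X = sm.add_constant(df.filter(like="temp"))
fit = sm.GLM(df["deaths"], X,
             family=sm.families.NegativeBinomial(alpha=0.1)).fit()   # assumed dispersion
print(fit.summary().tables[1])               # per-lag temperature coefficients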

Relevance:

60.00%

Publisher:

Abstract:

Extending recent research on the importance of specific resources and skills for the internationalization of start-ups, this article tests a negative binomial model on a sample of 520 recently created high technology firms from the UK and Germany. The results show that previous international experience of entrepreneurs facilitates the rapid penetration of foreign markets, especially when the company features a clear and deliberate strategic intent of internationalization from the outset. This research provides one of the first empirical studies linking the influence of entrepreneurial teams to a high probability of success in the internationalization of high-technology ventures.

Relevance:

60.00%

Publisher:

Abstract:

Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
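
The core quantity in these probability-of-detection models is the chance that all n samples contain zero insects. A short sketch of the Poisson and negative binomial versions of that calculation (the density, dispersion, and sample-size values are illustrative assumptions):

import numpy as np

def detection_probability(n_samples, mean_per_sample, k=None):
    """Probability of finding at least one insect in n samples.
    k is the negative binomial dispersion; k=None uses the Poisson model."""
    if k is None:
        p_zero = np.exp(-mean_per_sample)                    # Poisson P(0)
    else:
        p_zero = (k / (k + mean_per_sample)) ** k            # negative binomial P(0)
    return 1.0 - p_zero ** n_samples

# Clumped infestations (small k) need more samples for the same detection probability
for n in (5, 10, 20):
    print(n,
          round(detection_probability(n, mean_per_sample=0.3), 3),           # Poisson
          round(detection_probability(n, mean_per_sample=0.3, k=0.5), 3))    # clumped NB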