Abstract:
Diarrhoea is one of the leading causes of morbidity and mortality in populations in developing countries and is a significant health issue throughout the world. Despite the frequency and severity of diarrhoeal disease, the mechanisms of pathogenesis for many of the causative agents have been poorly characterised. Although implicated in a number of intestinal and extra-intestinal infections in humans, Plesiomonas shigelloides has generally been dismissed as an enteropathogen due to the lack of clearly demonstrated virulence-associated properties, such as production of cytotoxins and enterotoxins or invasive abilities. However, evidence from a number of sources has indicated that this species may be the cause of a number of clinical infections. The work described in this thesis seeks to resolve this discrepancy by investigating the pathogenic potential of P. shigelloides using in vitro cell models. The focus of this research centres on how this organism interacts with human host cells in an experimental model. Very little is known about the pathogenic potential of P. shigelloides and its mechanisms in human infections and disease; however, disease manifestations mimic those of other related microorganisms. Chapter 2 reviews microbial pathogenesis in general, with an emphasis on understanding the mechanisms resulting from infection with bacterial pathogens and the alterations in host cell biology. In addition, this review analyses the pathogenic status of a poorly defined enteropathogen, P. shigelloides.

Key stages of pathogenicity must occur for a bacterial pathogen to cause disease. Such stages include bacterial adherence to host tissue, bacterial entry into host tissues (usually required), multiplication within host tissues, evasion of host defence mechanisms and the causation of damage. In this study, these key strategies of infection and disease were investigated to help assess the pathogenic potential of P. shigelloides (Chapter 3). Twelve isolates of P. shigelloides, obtained from clinical cases of gastroenteritis, were used to infect monolayers of human intestinal epithelial cells in vitro. Ultrastructural analysis demonstrated that P. shigelloides was able to adhere to the microvilli at the apical surface of the epithelial cells and also to the plasma membranes of both apical and basal surfaces. Furthermore, it was demonstrated that these isolates were able to enter intestinal epithelial cells. Internalised bacteria were often confined within vacuoles surrounded by single or multiple membranes. Observation of bacteria within membrane-bound vacuoles suggests that uptake of P. shigelloides into intestinal epithelial cells occurs via a process morphologically comparable to phagocytosis. Bacterial cells were also observed free in the host cell cytoplasm, indicating that P. shigelloides is able to escape from the surrounding vacuolar membrane and exist within the cytosol of the host.

Plesiomonas shigelloides has been implicated not only in gastrointestinal infections but also in a range of non-intestinal infections such as cholecystitis, proctitis, septicaemia and meningitis. The mechanisms by which P. shigelloides causes these infections are not understood. Previous research was unable to ascertain the pathogenic potential of P. shigelloides using cells of non-intestinal origin (HEp-2 cells, derived from a human larynx carcinoma, and HeLa cells, derived from a cervical carcinoma). However, given the finding from this study that P. shigelloides can adhere to and enter intestinal cells, it was hypothesised that P. shigelloides would be able to enter HeLa and HEp-2 cells. Six clinical isolates of P. shigelloides, which had previously been shown to be invasive to intestinally derived Caco-2 cells (Chapter 3), were used to study interactions with HeLa and HEp-2 cells (Chapter 4). These isolates were shown to adhere to and enter both non-intestinal host cell lines. Plesiomonas shigelloides cells were observed within vacuoles surrounded by single and multiple membranes, as well as free in the host cell cytosol, similar to infection of Caco-2 cells by P. shigelloides. Comparisons of the numbers of bacteria adhered to and present intracellularly within HeLa, HEp-2 and Caco-2 cells revealed a preference of P. shigelloides for Caco-2 cells. This study conclusively showed for the first time that P. shigelloides is able to enter HEp-2 and HeLa cells, demonstrating its potential ability to cause infection and/or disease at extra-intestinal sites in humans.

Further high-resolution ultrastructural analysis of the mechanisms involved in P. shigelloides adherence to intestinal epithelial cells (Chapter 5) revealed numerous prominent surface features which appeared to be involved in the binding of P. shigelloides to host cells. These surface structures varied in morphology from small bumps across the bacterial cell surface to much longer filaments. Evidence that flagella might play a role in bacterial adherence was also found. The hypothesis that filamentous appendages are morphologically expressed when in contact with host cells was also tested. Observations of bacteria free in the host cell cytosol suggest that P. shigelloides is able to lyse free from the initial vacuolar compartment. The vacuoles containing P. shigelloides within host cells have not been characterised, and the point at which P. shigelloides escapes from the surrounding vacuolar compartment has not been determined. A cytochemical detection assay for acid phosphatase, an enzymatic marker for lysosomes, was used to analyse the co-localisation of bacteria-containing vacuoles and acid phosphatase activity (Chapter 6). Acid phosphatase activity was not detected in these bacteria-containing vacuoles. However, the surface of many intracellular and extracellular bacteria demonstrated high levels of acid phosphatase activity, leading to the proposal of a new virulence factor for P. shigelloides.

For many pathogens, the efficiency with which they adhere to and enter host cells is dependent upon the bacterial phase of growth. Such dependency reflects the timing of expression of particular virulence factors important for bacterial pathogenesis. In the previous studies (Chapters 3 to 6), an overnight culture of P. shigelloides was used to investigate a number of interactions; however, it was unknown whether this allowed expression of the bacterial factors needed for efficient P. shigelloides attachment and entry into human cells. In this study (Chapter 7), a number of clinical and environmental P. shigelloides isolates were investigated to determine whether adherence and entry into host cells in vitro was more efficient during exponential-phase or stationary-phase bacterial growth. An increase in the number of adherent and intracellular bacteria was demonstrated when bacteria in exponential-phase cultures were inoculated into host cell cultures; this was demonstrated clearly for three of the four isolates examined. In addition, an increase in the morphological expression of filamentous appendages, a suggested virulence factor for P. shigelloides, was observed for bacteria in exponential growth phase. These observations suggest that virulence determinants of P. shigelloides may be more efficiently expressed when bacteria are in exponential growth phase. This study also demonstrated, for the first time, that environmental water isolates of P. shigelloides were able to adhere to and enter human intestinal cells in vitro. These isolates were seen to enter Caco-2 host cells through a process comparable to that of the clinical isolates examined. These findings support the hypothesis of a waterborne transmission route for P. shigelloides infections.

The results presented in this thesis contribute significantly to our understanding of the pathogenic mechanisms involved in P. shigelloides infections and disease. Several of the factors involved in P. shigelloides pathogenesis have homologues in other pathogens of the human intestine, namely Vibrio, Aeromonas, Salmonella and Shigella species and diarrhoea-associated strains of Escherichia coli. This study emphasises the relevance of research into Plesiomonas as a means of furthering our understanding of bacterial virulence in general. It also provides tantalising clues about normal and pathogenic host cell mechanisms.
Abstract:
Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides strong incentives for pursuing better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network through the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and track sections correspond to jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics was developed to efficiently solve Parallel-Machine Job-Shop Scheduling (PMJSS) problems without blocking conditions. Due to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints: a track section cannot release a train, and must hold it, until the next section on the route becomes available. As a consequence, the problem has been treated as BPMJSS, with blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies were conducted on non-classical scheduling problems with various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer). An alternative graph, an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS) and blocking job-shop scheduling (BJSS) problems. By exploiting the blocking characteristics captured by the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving these non-classical scheduling problems. To demonstrate the merits of the proposed algorithm, we compare it with two algorithms known in the literature (the Recursive Procedure and the Directed Graph algorithm). Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: classical flow-shop scheduling (FSS) with infinite buffer, blocking FSS (BFSS) with no buffer, no-wait FSS (NWFSS) and limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm, named the LK algorithm, to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. By adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints.

Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used to illustrate and compare the PMJSS and BPMJSS models. Some real-life applications, including considering train length, upgrading track sections, accelerating a tardy train and changing the bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive than blocking constraints, arise for prioritised trains, which should traverse continuously without any interruption or unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the route becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under the different conditions arising in train scheduling environments. To construct a feasible train schedule, the proposed SE algorithm consists of several individual modules, including the feasibility-satisfaction, time-determination, tune-up and conflict-resolution procedures. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (the SE algorithm) with a local-search heuristic (the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints. The computational results show that the proposed methodology is very promising, as it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
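The blocking condition described above is easy to illustrate in code. Below is a minimal, hypothetical sketch (not the FSP or SE algorithm from the thesis) of a fixed-priority dispatcher in which a train holds its current track section until the next section on its route is free; all section names, run times, and the greedy dispatch rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Train:
    name: str
    route: list[str]        # ordered track sections (the "machines")
    run_times: list[float]  # traversal time on each section

def schedule_blocking(trains):
    """Dispatch trains one by one in priority order, honouring blocking."""
    section_free = {}   # section -> time at which it next becomes free
    schedule = {}       # (train, section) -> (entry time, release time)
    for t in trains:
        clock = 0.0
        for i, sec in enumerate(t.route):
            entry = max(clock, section_free.get(sec, 0.0))
            done = entry + t.run_times[i]          # traversal finished
            if i + 1 < len(t.route):
                # Blocking / hold-while-wait: the train keeps occupying this
                # section until the next section on its route is free.
                done = max(done, section_free.get(t.route[i + 1], 0.0))
            schedule[(t.name, sec)] = (entry, done)
            section_free[sec] = done
            clock = done
    return schedule

trains = [
    Train("coal_1", ["S1", "S2", "S3"], [10.0, 15.0, 5.0]),
    Train("coal_2", ["S2", "S3"], [12.0, 6.0]),
]
for (train, sec), (start, end) in schedule_blocking(trains).items():
    print(f"{train} occupies {sec}: {start:5.1f} -> {end:5.1f}")
```

Because each train is scheduled to completion before the next, the resulting occupation intervals on every section are non-overlapping, so the sketch always yields a feasible blocking schedule, though generally not an optimal one.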
Abstract:
The typical daily decision-making process of individuals regarding use of the transport system involves three main types of decisions: mode choice, departure time choice and route choice. This paper focuses on the mode and departure time choice processes and studies different model specifications for a combined mode and departure time choice model. The paper compares different sets of explanatory variables as well as different model structures to capture the correlation among alternatives and the taste variations among commuters. The main hypothesis tested in this paper is that departure time alternatives are also correlated through the amount of delay they involve. Correlation among different alternatives is confirmed by analyzing different nesting structures as well as error component formulations. Random coefficient logit models confirm the presence of random taste heterogeneity across commuters. Mixed nested logit models are estimated to jointly account for the random taste heterogeneity and the correlation among different alternatives. Results indicate that accounting for random taste heterogeneity as well as inter-alternative correlation improves model performance.
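As a baseline for the model structures compared in the paper, the sketch below estimates a plain multinomial logit over joint mode-and-departure-time alternatives by maximum likelihood on synthetic data; the nested and mixed extensions discussed above relax this model's independence and fixed-taste assumptions. All data, dimensions and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_obs, n_alt = 500, 4          # e.g. {car, transit} x {peak, off-peak}
X = rng.normal(size=(n_obs, n_alt, 2))   # attributes: travel time, delay
beta_true = np.array([-1.0, -0.5])       # illustrative taste parameters

# Simulate choices from the MNL data-generating process (Gumbel errors).
util = X @ beta_true + rng.gumbel(size=(n_obs, n_alt))
choice = util.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                               # systematic utilities
    v -= v.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_obs), choice]).sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("estimated taste parameters:", fit.x)    # should approach beta_true
```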
Abstract:
Aim: This paper is a report of a study of variations in the pattern of nurse practitioner work in a range of service fields and geographical locations, across direct patient care, indirect patient care and service-related activities. --------- Background: The nurse practitioner role has been implemented internationally as a service reform model to improve access to and the timeliness of health care. There is a substantial body of research into the nurse practitioner role and service outcomes, but scant information on the pattern of nurse practitioner work and how this is influenced by different service models. --------- Methods: We used work sampling methods. Data were collected between July 2008 and January 2009. Observations were recorded from a random sample of 30 nurse practitioners at 10-minute intervals in 2-hour blocks randomly generated to cover two weeks of work time from a sampling frame of six weeks. --------- Results: A total of 12,189 individual observations were conducted with nurse practitioners across Australia. Thirty individual activities were identified as describing nurse practitioner work, and these were distributed across three categories: direct care accounted for 36.1% of nurse practitioners' time, indirect care for 32.2% and service-related activities for 31.9%. --------- Conclusion: These findings provide useful baseline data for evaluating nurse practitioner positions and the service effect of these positions. However, the study also raises questions about the best use of nurse practitioner time and about the barriers to and facilitators of this model of service innovation.
Abstract:
A national-level safety analysis tool is needed to complement existing analytical tools for assessing the safety impacts of roadway design alternatives. FHWA has sponsored the development of the Interactive Highway Safety Design Model (IHSDM), roadway design and redesign software that estimates the safety effects of alternative designs. Considering the importance of IHSDM in shaping the future of safety-related transportation investment decisions, FHWA justifiably sponsored research with the sole intent of independently validating some of the statistical models and algorithms in IHSDM. Statistical model validation aims to accomplish several important tasks, including (a) assessment of the logical defensibility of proposed models, (b) assessment of the transferability of models over future time periods and across different geographic locations, and (c) identification of areas in which future model improvements should be made. These three activities are reported for five proposed types of rural intersection crash prediction models. The internal validation revealed that the crash models potentially suffer from omitted variables that affect safety, from site selection and countermeasure selection bias, from poorly measured and surrogate variables, and from misspecification of model functional forms. The external validation indicated that the models were unable to perform on par with their estimation performance. Recommendations from this research for improving the state of the practice include the systematic conduct of carefully designed before-and-after studies, improvements in data standardization and collection practices, and the development of analytical methods to combine the results of before-and-after studies with cross-sectional studies in a meaningful and useful way.
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
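To make the generic "one-round GKE from a KEM" idea concrete, here is a hedged toy sketch: each party broadcasts, in a single round, a KEM encapsulation of its random contribution to every other party, and the session key hashes all contributions. The DH-based toy KEM and the key-derivation details below are illustrative stand-ins, not the thesis construction, and the sketch omits the authentication a real protocol needs.

```python
import os
from hashlib import sha256
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)

RAW = dict(encoding=serialization.Encoding.Raw,
           format=serialization.PublicFormat.Raw)

def encap(pk_bytes):
    """Toy DH-based KEM encapsulation: (ciphertext, shared key). Unauthenticated!"""
    eph = X25519PrivateKey.generate()
    shared = eph.exchange(X25519PublicKey.from_public_bytes(pk_bytes))
    return eph.public_key().public_bytes(**RAW), sha256(shared).digest()

def decap(sk, ct):
    """Toy DH-based KEM decapsulation."""
    return sha256(sk.exchange(X25519PublicKey.from_public_bytes(ct))).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

n = 3
sks = [X25519PrivateKey.generate() for _ in range(n)]
pks = [sk.public_key().public_bytes(**RAW) for sk in sks]

# The single round: party i wraps its random contribution k_i for each peer.
contribs = [os.urandom(32) for _ in range(n)]
broadcast = {}                       # (sender, receiver) -> (ct, masked k_i)
for i in range(n):
    for j in range(n):
        if i != j:
            ct, key = encap(pks[j])
            broadcast[(i, j)] = (ct, xor(contribs[i], key))

def session_key(j):
    """Party j recovers every contribution and hashes them in order."""
    ks = [contribs[j] if i == j else
          xor(broadcast[(i, j)][1], decap(sks[j], broadcast[(i, j)][0]))
          for i in range(n)]
    return sha256(b"".join(ks)).hexdigest()

assert len({session_key(j) for j in range(n)}) == 1   # all parties agree
print("agreed session key:", session_key(0)[:16], "...")
```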
Abstract:
This paper reports on a study of passenger experiences and of how passengers interact with services, technology and processes at an airport. As part of our research, we followed people through the airport from check-in to security and from security to boarding. Data were collected by approaching passengers in the departures concourse of the airport and asking for their consent to be videotaped. The data were then coded, and the analysis focused on both discretionary and process-related passenger activities. Our findings show the interdependence between activities and passenger experiences. Within all activities, passengers interact with processes, domain-dependent technology, services, personnel and artifacts. These levels of interaction impact on passenger experiences and are interdependent. The emerging taxonomy of activities consists of (i) ownership-related activities, (ii) group activities, (iii) individual activities (such as activities at the domain interfaces) and (iv) concurrent activities. This classification contributes to the development of descriptive models of passenger experiences and of how these activities affect the facilitation and design of future airports.
Abstract:
This paper reviews the main studies on transit users’ route choice in the context of transit assignment. The studies are categorized into three groups: static transit assignment, within-day dynamic transit assignment, and emerging approaches. The motivations and behavioural assumptions of these approaches are re-examined. The first group includes shortest-path heuristics in all-or-nothing assignment, random utility maximization route-choice models in stochastic assignment, and user equilibrium based assignment. The second group covers within-day dynamics in transit users’ route choice, transit network formulations, and dynamic transit assignment. The third group introduces the emerging studies on behavioural complexities, day-to-day dynamics, and real-time dynamics in transit users’ route choice. Future research directions are also discussed.
Abstract:
Advances in safety research, in the sense of improving the collective understanding of motor vehicle crash causation, rest upon the pursuit of numerous lines of inquiry. The research community has focused on analytical methods development (negative binomial specifications, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might think of different lines of inquiry in terms of ‘low-hanging fruit’: areas of inquiry that might provide significant improvements in understanding crash causation. It is the contention of this research that omitted variable bias caused by the exclusion of important variables is one such line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are frequently omitted from crash models, yet they offer a significant ability to better understand the factors contributing to crashes. This study, believed to represent a unique contribution to the safety literature, develops and examines the role of a sizeable set of spatial variables in intersection crash occurrence. In addition to commonly considered traffic and geometric variables, the spatial factors examined include local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools. The results indicate that inclusion of these factors yields a significant improvement in model explanatory power, and the results also generally agree with expectation. The research illuminates the importance of spatial variables in safety research and the negative consequences of their omission.
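The omitted-variable effect the study describes can be demonstrated on synthetic data. The sketch below (illustrative names and magnitudes, assuming the statsmodels package) fits negative binomial crash models with and without a spatial covariate that is correlated with exposure; omitting it visibly biases the exposure coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
log_aadt = rng.normal(9.0, 0.5, n)        # log traffic exposure
near_bars = rng.binomial(1, 0.3, n)       # spatial covariate (illustrative)
# Correlate exposure with the spatial factor so its omission causes bias.
log_aadt += 0.4 * near_bars

mu = np.exp(-6.0 + 0.8 * log_aadt + 0.5 * near_bars)
crashes = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # NB2 counts, mean mu

X_full = sm.add_constant(np.column_stack([log_aadt, near_bars]))
X_omit = sm.add_constant(log_aadt)

full = sm.NegativeBinomial(crashes, X_full).fit(disp=0)
omit = sm.NegativeBinomial(crashes, X_omit).fit(disp=0)
print("AADT effect, spatial variable included:", full.params[1])
print("AADT effect, spatial variable omitted: ", omit.params[1])  # inflated
```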
Abstract:
Crash prediction models are used for a variety of purposes, including forecasting the expected future performance of various transportation system segments with similar traits. The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes compared with other segments of the transportation system. The effects of left-turn lanes at intersections, in particular, have seen mixed results in the literature: some researchers have found that left-turn lanes are beneficial to safety, while others have reported detrimental effects. This inconsistency is not surprising, given that the installation of left-turn lanes is often endogenous, that is, influenced by crash counts and/or traffic volumes. Endogeneity creates problems in econometric and statistical models and is likely to account for the inconsistencies reported in the literature. This paper reports on a limited-information maximum likelihood (LIML) estimation approach to compensate for endogeneity between left-turn lane presence and angle crashes. The effects of endogeneity are mitigated using the approach, revealing the unbiased effect of left-turn lanes on crash frequency for a dataset of Georgia intersections. The research shows that, without accounting for endogeneity, left-turn lanes ‘appear’ to contribute to crashes; however, when endogeneity is accounted for in the model, left-turn lanes reduce angle crash frequencies, as expected by engineering judgment. Other endogenous variables may lurk in crash models as well, suggesting that the method may be used to correct simultaneity problems with other variables and in other transportation modeling contexts.
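A hedged sketch of the LIML idea on synthetic data follows, using the linearmodels package. A linear outcome stands in for crash frequency, an arbitrary 'policy' instrument stands in for whatever identifying variable the paper used, and all magnitudes are illustrative; the point is only that naive estimation makes the endogenous left-turn-lane indicator look harmful while LIML recovers its true negative effect.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IVLIML

rng = np.random.default_rng(1)
n = 5000
risk = rng.normal(size=n)                # unobserved site riskiness
policy = rng.binomial(1, 0.5, n)         # instrument (illustrative)
# Lane installation depends on both risk (endogeneity) and the instrument.
lane = (0.8 * risk + 1.2 * policy + rng.normal(size=n) > 1.0).astype(float)
crashes = 2.0 - 0.5 * lane + 1.0 * risk + rng.normal(size=n)

df = pd.DataFrame({"crashes": crashes, "lane": lane,
                   "policy": policy, "const": 1.0})

# Naive OLS: lane picks up the effect of unobserved risk and looks harmful.
naive = np.linalg.lstsq(df[["const", "lane"]], df["crashes"], rcond=None)[0]
print("naive lane effect:", naive[1])

# LIML with the instrument: recovers a value near the true -0.5.
liml = IVLIML(df["crashes"], df[["const"]], df[["lane"]], df[["policy"]]).fit()
print("LIML lane effect :", liml.params["lane"])
```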
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts: variation over and above that accounted for by the Poisson density. This extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, which is tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, exploring additional dispersion functions on an independent data set and presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) with the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that the known influences on expected crash counts are likely to be different from the factors that help explain unaccounted-for variation in crashes across sites.
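The covariate-dependent dispersion idea can be sketched directly in a modern MCMC framework. The example below (synthetic data, illustrative names and magnitudes, PyMC with NUTS rather than the Gibbs sampler used in the study) lets both the mean and the dispersion of a negative binomial depend on traffic flow.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n = 500
log_flow = rng.normal(8.0, 0.6, n)            # major-road traffic flow (log)
mu_true = np.exp(-5.0 + 0.7 * log_flow)
alpha_true = np.exp(-2.0 + 0.3 * log_flow)    # dispersion varies with flow
# NB2: Var = mu + alpha * mu^2, i.e. numpy's n = 1/alpha, p = 1/(1 + alpha*mu)
y = rng.negative_binomial(1.0 / alpha_true,
                          1.0 / (1.0 + alpha_true * mu_true))

with pm.Model():
    b = pm.Normal("b", 0, 5, shape=2)         # mean-structure coefficients
    g = pm.Normal("g", 0, 5, shape=2)         # dispersion-structure coefficients
    mu = pm.math.exp(b[0] + b[1] * log_flow)
    alpha = pm.math.exp(g[0] + g[1] * log_flow)
    # PyMC's alpha is the inverse of the crash-model dispersion parameter.
    pm.NegativeBinomial("y", mu=mu, alpha=1.0 / alpha, observed=y)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Posterior means of g should approach the true (-2.0, 0.3).
print(trace.posterior["g"].mean(dim=("chain", "draw")).values)
```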
Abstract:
Predicting safety on roadways is standard practice for road safety professionals and has a correspondingly extensive literature. The majority of safety prediction models are estimated using roadway segment and intersection (microscale) data, while more recently efforts have been undertaken to predict safety at the planning level (macroscale). Safety prediction models typically include roadway, operations, and exposure variables: factors known to affect safety in fundamental ways. Environmental variables, in particular variables attempting to capture the effect of rain on road safety, are difficult to obtain and have rarely been considered. In the few cases where weather variables have been included, historical averages rather than the actual weather conditions under which crashes were observed have been used. Without the inclusion of weather-related variables, researchers have had difficulty explaining regional differences in the safety performance of various entities (e.g. intersections, road segments, highways, etc.). As part of the NCHRP 8-44 research effort, researchers developed PLANSAFE, a set of planning-level safety prediction models. These models make use of socio-economic, demographic, and roadway variables for predicting planning-level safety. Accounting for regional differences, similar to the experience with microscale safety models, has been problematic during the development of planning-level safety prediction models. More specifically, without weather-related variables there is an insufficient set of variables for explaining safety differences across regions and states. Furthermore, omitted variable bias resulting from excluding these important variables may adversely impact the coefficients of the included variables, contributing to difficulty in model interpretation and accuracy. This paper summarizes the results of an effort to include weather-related variables, particularly various measures of rainfall, in models predicting accident frequency and the frequency of fatal and/or injury crashes. The purpose of the study was to determine whether these variables do in fact improve the overall goodness of fit of the models, whether they may explain some or all of the observed regional differences, and to identify the estimated effects of rainfall on safety. The models are based on Traffic Analysis Zone level datasets from Michigan, and from Pima and Maricopa Counties in Arizona. Numerous rain-related variables were found to be statistically significant, selected rain-related variables improved the overall goodness of fit, and inclusion of these variables reduced the portion of the model explained by the constant in the base models without weather variables. Rain tends to diminish safety, as expected, in fairly complex ways that depend on rain frequency and intensity.
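Whether added rainfall measures significantly improve fit is naturally framed as a nested-model comparison. The sketch below (synthetic data, illustrative names and magnitudes) runs a likelihood-ratio test between negative binomial crash models with and without a rain variable using statsmodels.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
n = 1500
log_vmt = rng.normal(10.0, 0.7, n)          # zone-level exposure (log VMT)
rain_days = rng.gamma(4.0, 10.0, n)         # annual days with rain (illustrative)
mu = np.exp(-8.0 + 0.9 * log_vmt + 0.004 * rain_days)
crashes = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

X_base = sm.add_constant(log_vmt)
X_rain = sm.add_constant(np.column_stack([log_vmt, rain_days]))

base = sm.NegativeBinomial(crashes, X_base).fit(disp=0)
rain = sm.NegativeBinomial(crashes, X_rain).fit(disp=0)

# Likelihood-ratio test: one added parameter, so 1 degree of freedom.
lr = 2 * (rain.llf - base.llf)
print(f"LR = {lr:.1f}, p = {stats.chi2.sf(lr, df=1):.2g}")
```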
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
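The simulation argument is short enough to reproduce in a few lines. The sketch below (illustrative exposure and risk ranges) generates crash counts as Poisson trials, i.e. independent Bernoulli trials with small, unequal probabilities, and compares the observed share of zero-crash sites with the share implied by a single Poisson fitted to the pooled mean; the zeros look "excessive" without any dual-state process.

```python
import numpy as np

rng = np.random.default_rng(11)
n_sites = 10_000
exposure = rng.integers(10, 2000, n_sites)   # vehicles passing each site
p = rng.uniform(1e-5, 5e-3, n_sites)         # per-vehicle crash probability

# Crash count at each site: independent Bernoulli trials with unequal
# probabilities (Poisson trials), aggregated over the site's exposure.
crashes = rng.binomial(exposure, p)

lam = crashes.mean()                         # one Poisson fitted to all sites
print(f"observed share of zero-crash sites : {(crashes == 0).mean():.3f}")
print(f"zero share implied by one Poisson  : {np.exp(-lam):.3f}")
```

By Jensen's inequality the mixture's zero share, the average of exp(-lambda_i), always exceeds exp(-mean lambda), so heterogeneous low-exposure sites alone produce the apparent zero inflation.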