914 results for Dissociation probability
Abstract:
BACKGROUND: The presence of insects in stored grains is a significant problem for grain farmers, bulk grain handlers and distributors worldwide. Inspection of bulk grain commodities is essential to detect pests and thereby reduce the risk of their presence in exported goods. It has been well documented that insect pests cluster in response to factors such as microclimatic conditions within bulk grain. Statistical sampling methodologies for grains, however, have typically considered pests and pathogens to be homogeneously distributed throughout grain commodities. In this paper we demonstrate a sampling methodology that accounts for the heterogeneous distribution of insects in bulk grains. RESULTS: We show that failure to account for the heterogeneous distribution of pests may lead to overestimates of the capacity of a sampling program to detect insects in bulk grains. Our results indicate the importance of the proportion of grain that is infested in addition to the density of pests within the infested grain. We also demonstrate that the probability of detecting pests in bulk grains increases as the number of sub-samples increases, even when the total volume or mass of grain sampled remains constant. CONCLUSION: This study demonstrates the importance of considering an appropriate biological model when developing sampling methodologies for insect pests. Accounting for a heterogeneous distribution of pests leads to a considerable improvement in the detection of pests over traditional sampling models.
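To make the detection argument concrete, here is a minimal Python sketch (not from the paper; the densities, volumes and the Poisson/binomial assumptions are all illustrative) comparing detection probability under a homogeneous model with a clustered model of the same overall pest density, and showing how splitting a fixed sampled volume into more sub-samples raises detection probability.

```python
import numpy as np

def p_detect_homogeneous(density, total_volume):
    # Insects ~ Poisson(density * volume): detect if at least one insect.
    return 1.0 - np.exp(-density * total_volume)

def p_detect_clustered(mean_density, infested_fraction, total_volume, n_subsamples):
    # Same overall mean density, but insects are confined to a fraction
    # of the bulk at a correspondingly higher local density.
    local_density = mean_density / infested_fraction
    v = total_volume / n_subsamples          # volume of each sub-sample
    # A sub-sample detects insects only if it lands in infested grain
    # AND contains at least one insect.
    p_sub = infested_fraction * (1.0 - np.exp(-local_density * v))
    return 1.0 - (1.0 - p_sub) ** n_subsamples

if __name__ == "__main__":
    density, volume, fraction = 0.5, 10.0, 0.1   # insects/L, litres sampled, 10% infested
    print(f"homogeneous model:       {p_detect_homogeneous(density, volume):.3f}")
    for n in (1, 5, 20, 100):
        p = p_detect_clustered(density, fraction, volume, n)
        print(f"clustered, {n:3d} samples: {p:.3f}")
```

With one large sample the clustered detection probability collapses to roughly the infested fraction, far below the homogeneous prediction; splitting the same total volume into many sub-samples recovers most of the lost detection power, mirroring both findings in the abstract.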
Abstract:
This paper explores what determines the survival of people in a life-and-death situation. The sinking of the Titanic allows us to inquire whether pro-social behavior matters in such extreme situations. This event can be considered a quasi-natural experiment. The empirical results suggest that social norms such as ‘women and children first’ are preserved during such an event. Women of reproductive age and crew members had a higher probability of survival. Passenger class, fitness, group size, and cultural background also mattered.
Abstract:
A statistical modeling method to accurately determine combustion chamber resonance is proposed and demonstrated. This method utilises Markov chain Monte Carlo (MCMC), via the Metropolis-Hastings (MH) algorithm, to yield a probability density function for the combustion chamber frequency and to find the best estimate of the resonant frequency, along with its uncertainty. The accurate determination of combustion chamber resonance is then used to investigate various engine phenomena, with appropriate uncertainty, for a range of engine cycles. It is shown that, when operating on various ethanol/diesel fuel combinations, a 20% ethanol substitution yields the least inter-cycle variability in combustion chamber resonance.
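As a hedged illustration of the approach described above, the following sketch applies a random-walk Metropolis-Hastings sampler to the posterior of a single resonant frequency. The Gaussian likelihood, flat prior, synthetic data and all numerical values are assumptions for illustration, not the paper's engine model.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(loc=4200.0, scale=50.0, size=200)   # fake per-cycle frequency estimates (Hz)

def log_posterior(f):
    # Flat prior over a plausible band; Gaussian likelihood with known sigma.
    if not (3000.0 < f < 6000.0):
        return -np.inf
    return -0.5 * np.sum((obs - f) ** 2) / 50.0 ** 2

samples, f = [], 3500.0
lp = log_posterior(f)
for _ in range(20000):
    prop = f + rng.normal(scale=10.0)          # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # MH accept/reject step
        f, lp = prop, lp_prop
    samples.append(f)

post = np.array(samples[5000:])                # discard burn-in
print(f"resonant frequency: {post.mean():.1f} +/- {post.std():.1f} Hz")
```

The retained samples approximate the posterior density of the frequency, giving both a best estimate and an uncertainty, which is the role MCMC plays in the abstract.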
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
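The simulation argument can be reproduced in miniature. The sketch below (all parameter values are illustrative assumptions, not the study's experiment) treats crashes at each site as Bernoulli trials with unequal, strictly positive probabilities and shows that low exposure alone produces a preponderance of zero counts, with no "perfectly safe" state involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_vehicles = 1000, 500          # low exposure: few vehicles per observation period

# Every site has a small but strictly positive crash risk per vehicle
# (unequal-probability Bernoulli trials, i.e. Poisson trials).
p = rng.uniform(1e-5, 5e-4, size=n_sites)

# Crash count at each site = number of successes over the vehicle trials.
counts = rng.binomial(n_vehicles, p)

print(f"minimum per-vehicle risk: {p.min():.1e} (no site is perfectly safe)")
print(f"share of sites with zero crashes: {np.mean(counts == 0):.1%}")
```

Most sites record zero crashes purely because exposure is low relative to risk, which is the abstract's point against inferring a dual-state process from excess zeros.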
Abstract:
Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States with the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes.
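For reference, below is a minimal sketch of a gamma count probability model of the kind the abstract invokes for underdispersion, using Winkelmann's gamma count distribution built from regularized incomplete gamma functions. The parameter values are illustrative assumptions, not estimates from the Korean crossing data.

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma G(a, x)

def gamma_count_pmf(n, alpha, beta, t=1.0):
    # Winkelmann's gamma count model: with Gamma(alpha, beta) inter-event times,
    # P(N = n) = G(alpha*n, beta*t) - G(alpha*(n+1), beta*t),
    # with the convention G(0, x) = 1.
    lower = 1.0 if n == 0 else gammainc(alpha * n, beta * t)
    upper = gammainc(alpha * (n + 1), beta * t)
    return lower - upper

alpha, beta = 2.0, 2.0          # alpha > 1 gives underdispersed counts
support = np.arange(20)
pmf = np.array([gamma_count_pmf(n, alpha, beta) for n in support])
mean = np.sum(support * pmf)
var = np.sum((support - mean) ** 2 * pmf)
print(f"mean = {mean:.3f}, variance = {var:.3f}  (variance < mean: underdispersion)")
```

Unlike the Poisson model, whose variance equals its mean, this family can represent the variance-below-mean pattern the crossing data exhibit.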
Abstract:
Now in its second edition, this book describes tools that are commonly used in transportation data analysis. The first part of the text provides statistical fundamentals while the second part presents continuous dependent variable models. With a focus on count and discrete dependent variable models, the third part features new chapters on mixed logit models, logistic regression, and ordered probability models. The last section provides additional coverage of Bayesian statistical modeling, including Bayesian inference and Markov chain Monte Carlo methods. Data sets are available online to use with the modeling techniques discussed.
Abstract:
Survival probability prediction using a covariate-based hazard approach is an established statistical methodology in engineering asset health management. We have previously reported the semi-parametric Explicit Hazard Model (EHM), which incorporates three types of information for hazard prediction: population characteristics, condition indicators, and operating environment indicators. That model assumes the baseline hazard has the form of the Weibull distribution. To avoid this assumption, this paper presents the non-parametric EHM, a distribution-free covariate-based hazard model. An application of the non-parametric EHM is demonstrated via a case study, in which survival probabilities of a set of resistance elements predicted by the non-parametric EHM are compared with the Weibull proportional hazard model and the traditional Weibull model. The results show that the non-parametric EHM can effectively predict asset life using the condition indicator, operating environment indicator, and failure history.
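As context for the comparison, the following sketch evaluates survival probabilities under a Weibull proportional hazards model, the benchmark named in the abstract, where covariates scale a Weibull baseline hazard. The coefficients and covariate values are illustrative assumptions rather than the case-study estimates.

```python
import numpy as np

def weibull_ph_survival(t, shape, scale, beta, z):
    # Baseline cumulative hazard H0(t) = (t / scale)**shape, scaled by the
    # covariate effect exp(beta . z) as in a proportional hazards model:
    # S(t | z) = exp(-H0(t) * exp(beta . z)).
    return np.exp(-((t / scale) ** shape) * np.exp(np.dot(beta, z)))

beta = np.array([0.8, -0.3])        # hypothetical condition and environment coefficients
z = np.array([1.2, 0.5])            # observed covariates for one asset
for t in (100.0, 500.0, 1000.0):
    s = weibull_ph_survival(t, shape=1.5, scale=1200.0, beta=beta, z=z)
    print(f"P(survive beyond {t:6.0f} h) = {s:.3f}")
```

The non-parametric EHM's appeal, per the abstract, is that it drops the parametric Weibull form of the baseline term in this expression while keeping the covariate structure.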
Abstract:
Maintenance activities in a large-scale engineering system are usually scheduled according to the lifetimes of various components in order to ensure the overall reliability of the system. Lifetimes of components can be deduced from the corresponding probability distributions, with parameters estimated from past failure data. When component failure data are not readily available, engineers have to be content with only the basic information from the manufacturers, such as the mean and standard deviation of lifetime, to plan maintenance activities. In this paper, the moment-based piecewise polynomial model (MPPM) is proposed to estimate the parameters of the lifetime probability distribution of a product when only the mean and standard deviation of the product lifetime are known. The method employs a group of polynomial functions to estimate the two parameters of the Weibull distribution, exploiting the mathematical relationship between the shape parameter of the two-parameter Weibull distribution and the ratio of the mean to the standard deviation. Tests are carried out to evaluate the validity and accuracy of the proposed method, with discussion of its suitability for application. The proposed method is particularly useful for reliability-critical systems, such as railway and power systems, in which maintenance activities are scheduled according to the expected lifetimes of the system components.
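The underlying mathematical relationship is easy to exercise directly: for a two-parameter Weibull distribution, the coefficient of variation depends only on the shape parameter, so the shape can be recovered from the mean and standard deviation alone and the scale then follows. The sketch below does this with a numerical root-find instead of the paper's piecewise polynomial approximation; the input mean and standard deviation are illustrative.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def weibull_from_mean_std(mean, std):
    cv2 = (std / mean) ** 2
    # For Weibull(k, lam): CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1,
    # which is monotone in the shape k, so a root-find recovers k.
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.1, 50.0)                 # shape parameter
    lam = mean / gamma(1 + 1 / k)            # scale from mean = lam * Gamma(1 + 1/k)
    return k, lam

k, lam = weibull_from_mean_std(mean=5000.0, std=1500.0)   # e.g. hours to failure
print(f"shape k = {k:.3f}, scale lambda = {lam:.1f}")
```

The paper's polynomial functions serve the same purpose as the root-find here: inverting the CV-to-shape relationship cheaply, without iterative numerics at run time.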
Abstract:
Power load flow analysis is essential for system planning, operation, development and maintenance. Its application to railway supply systems is no exception. A railway power supply system distinguishes itself in terms of load pattern and mobility, as well as feeding system structure. An attempt has been made to apply probabilistic load flow (PLF) techniques to electrified railways in order to examine the loading on the feeding substations and the voltage profiles of the trains. This study formulates a simple and reliable model to support the calculations required for probabilistic load flow analysis in railway systems with an autotransformer (AT) feeding system, and describes the development of a software suite to realise the computation.
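A drastically simplified Monte Carlo sketch of the probabilistic load flow idea follows: sample random train positions and demands, solve a toy scalar feeder circuit, and read off distributions for substation current and train voltage. A real AT feeding system requires a full network model; the circuit and every value here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
V0, R = 25_000.0, 0.4          # no-load substation voltage (V), feeder resistance (ohm/km)
trials = 10_000

# Random operating point: train position (km) and power demand (W).
pos = rng.uniform(1.0, 20.0, trials)
power = rng.uniform(2.0, 8.0, trials) * 1e6

# Simplified scalar circuit: V = V0 - I*r with I = P/V, i.e.
# V**2 - V0*V + r*P = 0; take the physical (higher-voltage) root.
r = R * pos
V = (V0 + np.sqrt(V0 ** 2 - 4 * r * power)) / 2
I = power / V

print(f"train voltage:      mean {V.mean():.0f} V,  5th pct {np.percentile(V, 5):.0f} V")
print(f"substation current: mean {I.mean():.1f} A, 95th pct {np.percentile(I, 95):.1f} A")
```

The output percentiles are the kind of probabilistic quantities PLF delivers: not a single worst-case loading but the distribution of substation loading and train voltage across operating scenarios.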
Abstract:
In this paper, we present a ∑GI_i/D/1/∞ queue with heterogeneous input/output slot times. This queueing model can be regarded as an extension of the ordinary GI/D/1/∞ model. For this ∑GI_i/D/1/∞ queue, we assume that several input streams arrive at the system according to different slot times; in other words, there are different slot times for the different input/output processes in the queueing model. The model can therefore be used for an ATM multiplexer with heterogeneous input/output link capacities. Several cases of the queueing model are discussed to reflect different relationships among the input/output link capacities of an ATM multiplexer. In the queueing analysis, two approaches, the Markov model and the probability generating function technique, are adopted to develop the queue length distributions observed at different epochs. This model is particularly useful in the performance analysis of ATM multiplexers with heterogeneous input/output link capacities.
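A small simulation can illustrate the heterogeneous-slot-time setting. This simulates the queue directly rather than reproducing the paper's Markov or probability-generating-function analysis, and the slot times and arrival probabilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
slot_times = [2, 3]          # input slot times, in units of the output slot
p_arrival = [0.5, 0.4]       # P(one cell arrives in a stream's slot)
horizon = 200_000            # number of output slots to simulate

queue, samples = 0, []
for t in range(horizon):
    # Arrivals: stream i can deliver a cell only at multiples of its own slot time.
    for st, p in zip(slot_times, p_arrival):
        if t % st == 0 and rng.uniform() < p:
            queue += 1
    # Deterministic service: one cell per output slot, if any cells are waiting.
    if queue > 0:
        queue -= 1
    samples.append(queue)

samples = np.array(samples)
print(f"offered load:      {sum(p / st for st, p in zip(slot_times, p_arrival)):.3f}")
print(f"mean queue length: {samples.mean():.2f}")
print(f"P(queue empty):    {np.mean(samples == 0):.3f}")
```

Each stream's slot time plays the role of an input link capacity relative to the output link, which is exactly the heterogeneity the ∑GI_i/D/1/∞ model captures analytically.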
Abstract:
From the business viewpoint, a railway timetable is the list of products that the railway transport operators present to their customers, specifying the schedules of all the train services on a railway line or network. In order to evaluate the quality of the train service schedules, a number of indices are proposed in this paper. These indices primarily take passengers’ needs, such as waiting time, transfer time and transport capacity, into consideration. Delay rate is usually used in post-evaluation; in this study, we instead propose to evaluate the probability that the scheduled train services will be delayed and the recovery ability of the timetable after a delay has occurred. The evaluation identifies possible problems in the services, such as excessive waiting time, non-seamless transfers, and a high probability of delay, and the paper discusses how these problems can be improved through adjustments to the timetable. The indices for evaluation and the timetable adjustment method are then applied to a case study of the Hu-Ning-Hang railway in China, followed by a discussion of the merits of the proposed indices for timetable evaluation and possible improvement.
Abstract:
We review all journal articles based on “PSED-type” research, i.e., longitudinal, empirical studies of large probability samples of ongoing business start-up efforts. We conclude that the research stream has yielded interesting findings, sometimes by confirming prior research with a less bias-prone methodology and at other times by challenging whether prior conclusions are valid for the early stages of venture development. Most importantly, the research has addressed new, process-related research questions that prior research has shunned or been unable to study in a rigorous manner. The research has revealed an enormous and fascinating variability in new venture creation that also makes it challenging to arrive at broadly valid generalizations. An analysis of the findings across studies, as well as an examination of those studies that have been relatively more successful at explaining outcomes, gives good guidance regarding what is required to achieve strong and credible results. We compile and present such advice to users of existing data sets and designers of new projects in the following areas: statistically representative and/or theoretically relevant sampling; level of analysis issues; dealing with process heterogeneity; dealing with other heterogeneity issues; and choice and interpretation of dependent variables.
Abstract:
This paper presents techniques that can be viewed as a pre-processing step towards the diagnosis of faults in a small multi-cylinder diesel engine. Preliminary analysis of the acoustic emission (AE) signals is outlined, including time-frequency analysis and selection of the optimum frequency band. Some results of applying mean field independent component analysis (MFICA) to separate the AE root mean square (RMS) signals are also outlined. The results on separation of RMS signals show that this technique has the potential to increase the probability of successfully identifying the AE events associated with the various mechanical events.
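As a hedged illustration of the separation step, the sketch below applies standard FastICA (as a stand-in for the paper's mean field ICA, which is not in common libraries) to two synthetic mixed channels. The synthetic "sources" and the mixing matrix are assumptions for demonstration only.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 4000)

# Two synthetic AE-RMS-like envelopes: a sparse burst train (e.g. discrete
# mechanical events) and a slower modulated component, plus sensor noise.
s1 = (np.sin(2 * np.pi * 40 * t) > 0.95).astype(float)
s2 = np.abs(np.sin(2 * np.pi * 3 * t))
S = np.c_[s1, s2] + 0.02 * rng.normal(size=(t.size, 2))

A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing across two sensors
X = S @ A.T                                # observed mixed channels

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)           # estimated sources, up to scale and order
print("recovered source array shape:", recovered.shape)
```

ICA recovers the sources only up to permutation and scaling, so in practice the separated components still need to be matched to candidate mechanical events, which is why the abstract frames this as a pre-processing step for diagnosis.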
Abstract:
Objective: Diarrhoea in the enterally tube fed (ETF) intensive care unit (ICU) patient is a multifactorial problem. Diarrhoeal aetiologies in this patient cohort remain debatable; however, the consequences of diarrhoea are well established and include electrolyte imbalance, dehydration, bacterial translocation, perianal wound contamination and sleep deprivation. This study examined the incidence of diarrhoea and explored factors contributing to its development in the ETF, critically ill, adult patient. Method: After institutional ethical review and approval, a single-centre medical chart audit was undertaken to examine the incidence of diarrhoea in ETF, critically ill patients. Retrospective, non-probability sequential sampling of all adult emergency ICU admissions who met the inclusion/exclusion criteria was used. Results: Fifty patients were audited. Faecal frequency, consistency and quantity were considered important criteria in defining ETF diarrhoea. The incidence of diarrhoea was 78%. Total patient diarrhoea days (r = 0.422; p = 0.02) and total diarrhoea frequency (r = 0.313; p = 0.027) increased the longer the patient was ETF. Increased severity of illness, peripheral oxygen saturation (SpO2), glucose control, albumin and white cell count were found to be statistically significant factors for the development of diarrhoea. Conclusion: Diarrhoea in ETF critically ill patients is multifactorial. The early identification of diarrhoea risk factors and the development of a diarrhoea risk management algorithm are recommended.
Abstract:
Aims: To assess the self-reported lifetime prevalence of cardiovascular disease (CVD) among colorectal cancer survivors, and to examine the cross-sectional and prospective associations of lifestyle factors with co-morbid CVD. Methods: Colorectal cancer survivors were recruited (n = 1966). Data were collected at approximately 5, 12, 24 and 36 months post-diagnosis. Cross-sectional outcomes included six CVD categories (hypercholesterolaemia, hypertension, diabetes, heart failure, kidney disease and ischaemic heart disease (IHD)) at 5 months post-diagnosis. Longitudinal outcomes included the probability of developing (de novo) co-morbid CVD by 36 months post-diagnosis. Lifestyle factors included body mass index, physical activity, television (TV) viewing, alcohol consumption and smoking. Results: Co-morbid CVD prevalence at 5 months post-diagnosis was 59%, and 16% of participants with no known CVD at baseline reported de novo CVD by 36 months. Obesity at baseline predicted de novo hypertension (odds ratio [OR] = 2.20, 95% confidence interval [CI] = 1.09, 4.45) and de novo diabetes (OR = 6.55, 95% CI = 2.19, 19.53). Participants watching >4 h of TV/day at baseline (compared with <2 h/day) were more likely to develop IHD by 36 months (OR = 5.51, 95% CI = 1.86, 16.34). Conclusion: Overweight colorectal cancer survivors were more likely to suffer from co-morbid CVD. Interventions focusing on weight management and other modifiable lifestyle factors may reduce functional decline and improve survival.