914 results for Dissociation probability
Abstract:
Road agencies require comprehensive, relevant and quality data describing their road assets to support their investment decisions. An investment decision support system for road maintenance and rehabilitation mainly comprises three important supporting elements, namely: road asset data, decision support tools and criteria for decision-making. Probability-based methods have played a crucial role in helping decision makers understand the relationship among road-related data, asset performance and uncertainties in estimating budgets/costs for road management investment. This paper presents applications of the probability-based method for road asset management.
Abstract:
Now in its sixth edition, the Traffic Engineering Handbook continues to be a must-have publication in the transportation industry, as it has been for the past 60 years. The new edition provides updated information for people entering the practice and for those already practicing. The handbook is a convenient desk reference, as well as an all-in-one source of principles and proven techniques in traffic engineering. Most chapters are presented in a new format, which divides them into four areas: basics, current practice, emerging trends and information sources. Chapter topics include road users, vehicle characteristics, statistics, planning for operations, communications, safety, regulations, traffic calming, access management, geometrics, signs and markings, signals, parking, traffic demand, maintenance and studies. In addition, as the focus in transportation has shifted from project-based to operations-based, two new chapters have been added: "Planning for Operations" and "Managing Traffic Demand to Address Congestion: Providing Travelers with Choices." The Traffic Engineering Handbook continues to be one of the primary reference sources for study to become a certified Professional Traffic Operations Engineer™. Chapters are authored by notable and experienced authors, and reviewed and edited by a distinguished panel of traffic engineering experts.
Abstract:
From a ‘cultural science’ perspective, this paper traces one aspect of a more general shift, from the realist representational regime of modernity to the productive DIY systems of the internet era. It argues that collecting and archiving is transformed by this change. Modern museums – and also broadcast television – were based on determinist or ‘essence’ theory; while internet archives like YouTube (and the internet as an archive) are based on ‘probability’ theory. The paper goes through the differences between modernist ‘essence’ and postmodern ‘probability’; starting from the obvious difference that in a museum each object is selected by experts for its intrinsic properties, while on the internet you don’t know what you will find. The status of individual objects is uncertain, although the productivity of the overall archive is unlimited. The paper links these differences with changes in contemporary culture – from a Newtonian to a quantum universe, progress to risk, institutional structure to evolutionary change, objectivity to uncertainty, identity to performance. Borrowing some of its methodology from science fiction, the paper uses examples from museums and online archives, ranging from the oldest stone tool in the world to the latest tribute vid on the net.
Abstract:
This paper presents a method of voice activity detection (VAD) suitable for high-noise scenarios, based on the fusion of two complementary systems. The first system uses a proposed non-Gaussianity score (NGS) feature based on normal probability testing. The second system employs a histogram distance score (HDS) feature that detects changes in the signal by conducting a template-based similarity measure between adjacent frames. The decision outputs of the two systems are then merged using an open-by-reconstruction fusion stage. The accuracy of the proposed method was compared to that of several baseline VAD methods on a database created using real recordings of a variety of high-noise environments.
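As a rough illustration of the first system's idea, a per-frame non-Gaussianity score can be computed from a normality statistic. The statistic below (absolute excess kurtosis) is a hypothetical stand-in, since the abstract does not specify the paper's exact NGS formulation, and the two test frames are synthetic:

```python
import numpy as np

def non_gaussianity_score(frame):
    """Hypothetical stand-in for the paper's NGS: absolute excess kurtosis,
    which is close to 0 for Gaussian-distributed samples and grows as the
    frame's amplitude distribution departs from normality."""
    x = frame - frame.mean()
    m2 = np.mean(x ** 2)
    m4 = np.mean(x ** 4)
    return abs(m4 / m2 ** 2 - 3.0)

rng = np.random.default_rng(0)
noise_frame = rng.normal(size=256)                          # noise-like, near-Gaussian
voiced_frame = np.sin(np.linspace(0.0, 20.0 * np.pi, 256))  # periodic, strongly non-Gaussian
```

Voiced speech frames are strongly periodic and hence far from Gaussian, while many noise frames are approximately Gaussian, so thresholding such a score yields a crude frame-level activity decision.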
Abstract:
Skid resistance is a condition parameter characterising the contribution that a road makes to the friction between a road surface and a vehicle tyre. Studies of traffic crash histories around the world have consistently found that a disproportionate number of crashes occur where the road surface has a low level of surface friction and/or surface texture, particularly when the road surface is wet. Various research results published over many years have tried to quantify the influence of skid resistance on accident occurrence and to characterise a correlation between skid resistance and accident frequency. Most of these studies used simple statistical correlation methods in analysing skid resistance and crash data.

Preliminary findings of a systematic and extensive literature search conclude that there is rarely a single causation factor in a crash. Findings from research projects do affirm various levels of correlation between skid resistance and accident occurrence. Studies indicate that the level of skid resistance at critical places such as intersections, curves, roundabouts, ramps and approaches to pedestrian crossings needs to be well maintained.

Management of risk is an integral aspect of the Queensland Department of Main Roads (QDMR) strategy for managing its infrastructure assets. The risk-based approach has been used in many areas of infrastructure engineering; however, very limited information has been reported on using a risk-based approach to mitigate crash rates related to the road surface. Low skid resistance and surface texture may increase the risk of traffic crashes. The objectives of this paper are to explore current issues of skid resistance in relation to crashes, to provide a framework for a probability-based approach to be adopted by QDMR in assessing the relationship between crashes and pavement properties, and to explain why the probability-based approach is a suitable tool for QDMR to reduce accident rates due to poor skid resistance.
Abstract:
Road accidents are of great concern for road and transport departments around the world, as they cause tremendous loss and danger to the public. Reducing accident rates and crash severity are imperative goals that governments, road and transport authorities, and researchers aim to achieve. In Australia, road crash trauma costs the nation A$15 billion annually. Five people are killed, and 550 are injured, every day. Each fatality costs the taxpayer A$1.7 million; serious injury cases can cost the taxpayer many times the cost of a fatality. Crashes are in general uncontrolled events and depend on a number of interrelated factors such as driver behaviour, traffic conditions, travel speed, road geometry and condition, and vehicle characteristics (e.g. tyre type, pressure and condition, and suspension type and condition). Skid resistance is considered one of the most important surface characteristics as it has a direct impact on traffic safety. Attempts have been made worldwide to study the relationship between skid resistance and road crashes. Most of these studies used statistical regression and correlation methods in analysing the relationships between skid resistance and road crashes, and their outcomes were mixed and inconclusive. The objective of this paper is to present a probability-based method from an ongoing study in identifying the relationship between skid resistance and road crashes. Historical skid resistance and crash data of a road network located on the tropical east coast of Queensland were analysed using the probability-based method. The analysis methodology and the results on the relationships between skid resistance, road characteristics and crashes are presented.
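A minimal sketch of how such a probability-based analysis might proceed, using synthetic segment records rather than the paper's Queensland data (the skid numbers, band threshold and crash model below are all illustrative assumptions):

```python
import numpy as np

# Synthetic road-segment records: each segment gets a hypothetical skid
# number, and crash occurrence is simulated under the toy assumption that
# crash likelihood falls as skid resistance rises.
rng = np.random.default_rng(1)
skid = rng.uniform(0.3, 0.7, size=1000)     # hypothetical skid numbers per segment
crash = rng.random(1000) < (0.8 - skid)     # toy crash model, not real data

# Conditional crash probabilities by skid-resistance band
low_band = skid < 0.45                      # assumed 'low skid resistance' threshold
p_crash_low = crash[low_band].mean()        # estimate of P(crash | low skid resistance)
p_crash_ok = crash[~low_band].mean()        # estimate of P(crash | adequate skid resistance)
```

Comparing conditional probabilities across bands, rather than fitting a single regression line, is the essence of the probability-based framing: it exposes how crash risk concentrates in the low-friction portion of the network.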
Abstract:
The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation, and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and assessment of machine health. Effective diagnostics and prognostics are important aspects of CBM, enabling maintenance engineers to schedule repairs and to acquire replacement components before the components actually fail. Although a variety of prognostic methodologies have been reported recently, their application in industry is still relatively new and mostly focused on the prediction of specific component degradations. Furthermore, they require a significant and sufficient number of fault indicators to accurately prognose component faults. Hence, sufficient usage of health indicators in prognostics for the effective interpretation of the machine degradation process is still required. Major challenges for accurate long-term prediction of remaining useful life (RUL) remain to be addressed. Therefore, continuous development and improvement of machine health management systems and accurate long-term prediction of machine remnant life are required in real industry applications. This thesis presents an integrated diagnostics and prognostics framework based on health state probability estimation for accurate, long-term prediction of machine remnant life. In the proposed model, prior empirical (historical) knowledge is embedded in the integrated diagnostics and prognostics system for classification of impending faults in the machine system and accurate probability estimation of discrete degradation stages (health states). The methodology assumes that machine degradation consists of a series of degraded states (health states) which effectively represent the dynamic and stochastic process of machine failure.
The estimation of discrete health state probabilities for the prediction of machine remnant life is performed using classification algorithms. To select an appropriate classifier for health state probability estimation in the proposed model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to the progressive fault data of three different faults in a high-pressure liquefied natural gas (HP-LNG) pump. As a result of this comparison study, support vector machines (SVMs) were employed in health state probability estimation for the prediction of machine failure in this research. The proposed prognostic methodology has been successfully tested and validated using a number of case studies, from simulation tests to real industry applications. The results from two actual failure case studies using simulations and experiments indicate that accurate estimation of health states is achievable and that the proposed method provides accurate long-term prediction of machine remnant life. In addition, the results of experimental tests show that the proposed model has the capability of providing early warning of abnormal machine operating conditions by identifying the transitional states of machine fault conditions. Finally, the proposed prognostic model is validated through two industrial case studies. The optimal number of health states, which minimises the model training error without a significant decrease of prediction accuracy, was also examined through several health states of bearing failure. The results were very encouraging and show that the proposed prognostic model based on health state probability estimation has the potential to be used as a generic and scalable asset health estimation tool in industrial machinery.
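The core idea of mapping an observed condition-monitoring feature to a probability distribution over discrete health states can be sketched as follows. The thesis employs SVM classifiers; this toy instead uses an assumed Gaussian class-conditional model with made-up stage means and equal priors, purely to show the probability-estimation step:

```python
import math

# Assumed (illustrative) feature means for three discrete degradation stages
STAGE_MEANS = {"healthy": 1.0, "degraded": 3.0, "failing": 5.0}
SIGMA = 0.5  # assumed common spread of the feature within each stage

def health_state_probabilities(x):
    """Map a scalar condition-monitoring feature to a probability over
    health states via Bayes' rule with equal priors (Gaussian likelihoods
    stand in for the thesis's SVM-based probability estimates)."""
    likelihoods = {s: math.exp(-0.5 * ((x - m) / SIGMA) ** 2)
                   for s, m in STAGE_MEANS.items()}
    total = sum(likelihoods.values())
    return {s: v / total for s, v in likelihoods.items()}

probs = health_state_probabilities(2.9)   # observation near the 'degraded' mean
```

Tracking how this distribution shifts mass from "healthy" toward "failing" over time is what lets a prognostic system flag transitional states and extrapolate remnant life.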
Abstract:
Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly pressing when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially-resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive. Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time-delays. These distributions can be obtained directly from single in silico or a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDE) that can be used in scenarios of high molecular concentrations and low noise propagation. Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially-resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding Master Equations and presented elsewhere.
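A toy version of the delay-SSA idea can be sketched as below. The single-reaction structure, rate and exponential delay law are illustrative assumptions, not the paper's algorithm: a reaction initiates as in Gillespie's SSA, but its product only appears after a random delay representing, for example, diffusion or translocation between compartments.

```python
import random

def delay_ssa(rate=1.0, mean_delay=2.0, t_end=50.0, seed=3):
    """Toy delay stochastic simulation: count products that have appeared
    by t_end when each firing's product is deferred by a random delay."""
    rng = random.Random(seed)
    t = 0.0
    completion_times = []
    while True:
        t += rng.expovariate(rate)          # waiting time to next initiation
        if t >= t_end:
            break
        # product appears only after a random delay (assumed exponential here)
        completion_times.append(t + rng.expovariate(1.0 / mean_delay))
    # products that have actually appeared by the end of the simulation
    return sum(1 for tc in completion_times if tc <= t_end)

products = delay_ssa()
```

The appeal of the approach is visible even in this sketch: the delay distribution absorbs the spatial transport step, so no spatial grid needs to be simulated.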
Abstract:
Starting from a local problem with finding an archival clip on YouTube, this paper expands to consider the nature of archives in general. It considers the technological, communicative and philosophical characteristics of archives over three historical periods: 1) Modern ‘essence archives’ – museums and galleries organised around the concept of objectivity and realism; 2) Postmodern mediation archives – broadcast TV systems, which I argue were also ‘essence archives,’ albeit a transitional form; and 3) Network or ‘probability archives’ – YouTube and the internet, which are organised around the concept of probability. The paper goes on to argue the case for introducing quantum uncertainty and other aspects of probability theory into the humanities, in order to understand the way knowledge is collected, conserved, curated and communicated in the era of the internet. It is illustrated throughout by reference to the original technological 'affordance' – the Olduvai stone chopping tool.
Abstract:
We demonstrate a modification of the algorithm of Dani et al. for the online linear optimization problem in the bandit setting, which allows us to achieve an O(√(T ln T)) regret bound that holds with high probability against an adaptive adversary, as opposed to the in-expectation result against an oblivious adversary of Dani et al. We obtain the same dependence on the dimension as that exhibited by Dani et al. The results of this paper rest firmly on those of Dani et al. and the remarkable technique of Auer et al. for obtaining high-probability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information vs. bandit settings.