200 results for PROBABILISTIC TELEPORTATION
Abstract:
Effective, statistically robust sampling and surveillance strategies form an integral component of large agricultural industries such as the grains industry. Intensive in-storage sampling is essential for pest detection, Integrated Pest Management (IPM), determining grain quality and satisfying importing nations' biosecurity concerns, while surveillance over broad geographic regions ensures that biosecurity risks can be excluded, monitored, eradicated or contained within an area. In the grains industry, a number of qualitative and quantitative methodologies for surveillance and in-storage sampling have been considered. Research has focussed primarily on developing statistical methodologies for in-storage sampling strategies concentrating on the detection of pest insects within a grain bulk; however, the need for effective and statistically defensible surveillance strategies has also been recognised. Interestingly, although surveillance and in-storage sampling have typically been considered independently, many techniques and concepts are common to the two fields of research. This review aims to consider the development of statistically based in-storage sampling and surveillance strategies and to identify methods that may be useful for both. We discuss the utility of new quantitative and qualitative approaches, such as Bayesian statistics, fault trees and more traditional probabilistic methods, and show how these methods may be used in both surveillance and in-storage sampling systems.
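As a small illustration of the more traditional probabilistic methods this review refers to, the sketch below computes how many independent samples are needed to detect an infestation at a given confidence. The uniform infestation rate p and the independence of samples are our simplifying assumptions, not claims from the review.

```python
import math

# Hedged sketch: probability of detecting at least one pest when n independent
# samples are drawn and each is positive with probability p (assumed uniform).
def detection_probability(p: float, n: int) -> float:
    """P(at least one positive sample) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

def samples_needed(p: float, confidence: float = 0.95) -> int:
    """Smallest n achieving the requested detection confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

print(detection_probability(0.01, 300))  # ~0.95
print(samples_needed(0.01))              # 299 samples for 95% confidence
```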
Abstract:
The serviceability and safety of bridges are crucial to people's daily lives and to the national economy. Every effort should be taken to make sure that bridges function safely and properly, as any damage or fault during the service life can lead to transport paralysis, catastrophic loss of property or even casualties. Nonetheless, aggressive environmental conditions, ever-increasing and changing traffic loads and aging can all contribute to bridge deterioration. With often constrained budgets, it is important to identify bridges and bridge elements that should be given higher priority for maintenance, rehabilitation or replacement, and to select the optimal strategy. Bridge health prediction is an essential underpinning science for bridge maintenance optimization, since the effectiveness of an optimal maintenance decision is largely dependent on the forecasting accuracy of bridge health performance. The current approaches to bridge health prediction can be categorised into two groups: condition-ratings based and structural-reliability based. A comprehensive literature review has revealed the following limitations of the current modelling approaches: (1) no integrated approaches exist to date for modelling both serviceability and safety aspects so that both performance criteria can be evaluated coherently; (2) complex-system modelling approaches have not been successfully applied to bridge deterioration modelling, even though a bridge is a complex system composed of many inter-related elements; (3) multiple bridge deterioration factors, such as deterioration dependencies among different bridge elements, observed information, maintenance actions and environmental effects, have not been considered jointly; (4) the existing approaches lack the Bayesian updating ability to incorporate a variety of event information; (5) the assumption of series and/or parallel relationships for bridge-level reliability is held in all structural reliability estimation of bridge systems. To address the deficiencies listed above, this research proposes three novel models based on the Dynamic Object Oriented Bayesian Networks (DOOBNs) approach. Model I addresses bridge deterioration in serviceability using condition ratings as the health index. Bridge deterioration is represented in a hierarchical relationship, in accordance with the physical structure, so that the contribution of each bridge element to overall deterioration can be tracked. A discrete-time Markov process is employed to model the deterioration of bridge elements over time. In Model II, bridge deterioration in terms of safety is addressed. The structural reliability of bridge systems is estimated from the bridge elements up to the entire bridge. By means of conditional probability tables (CPTs), not only series-parallel relationships but also complex probabilistic relationships in bridge systems can be effectively modelled. The structural reliability of each bridge element is evaluated from its limit state functions, considering the probability distributions of resistance and applied load. Both Models I and II are designed in three steps: modelling consideration, DOOBN development and parameter estimation. Model III integrates Models I and II to address bridge health performance in both serviceability and safety aspects jointly. The modelling of bridge ratings is modified so that every basic modelling unit denotes one physical bridge element.
According to the specific materials used, the integration of condition ratings and structural reliability is implemented through critical failure modes. Three case studies have been conducted to validate the proposed models. Carefully selected data and knowledge from bridge experts, the National Bridge Inventory (NBI) and the existing literature were utilised for model validation. In addition, event information was generated using simulation to demonstrate the Bayesian updating ability of the proposed models. The prediction results of condition ratings and structural reliability were presented and interpreted for basic bridge elements and the whole bridge system. The results obtained from Model II were compared with those obtained from traditional structural reliability methods. Overall, the prediction results demonstrate the feasibility of the proposed modelling approach for bridge health prediction and underpin the assertion that the three models can be used separately or in an integrated manner, and are more effective than current bridge deterioration modelling approaches. The primary contribution of this work is to enhance knowledge in the field of bridge health prediction, where more comprehensive health performance in both serviceability and safety aspects is addressed jointly. The proposed models, characterised by probabilistic representation of bridge deterioration in hierarchical ways, demonstrate the effectiveness and promise of the DOOBN approach for bridge health management. Additionally, the proposed models have significant potential for bridge maintenance optimization. Working together with advanced monitoring and inspection techniques, and a comprehensive bridge inventory, the proposed models can be used by bridge practitioners to achieve increased serviceability and safety as well as maintenance cost-effectiveness.
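The discrete-time Markov step used in Model I can be illustrated in a few lines. In the sketch below, the four-state rating scale and the annual transition matrix are our assumed placeholders; the thesis estimates such parameters from expert knowledge and NBI data.

```python
import numpy as np

# Hedged sketch of a discrete-time Markov deterioration model for condition
# ratings. The transition probabilities below are illustrative, not estimated.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],   # state 1 = best condition
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # state 4 = worst, absorbing without maintenance
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # element starts as new
for year in range(1, 11):
    state = state @ P                    # one Markov transition per year
    print(year, np.round(state, 3))      # probability of being in each rating
```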
Abstract:
Much of our understanding of human thinking is based on probabilistic models. This innovative book by Jerome R. Busemeyer and Peter D. Bruza argues that, actually, the underlying mathematical structures from quantum theory provide a much better account of human thinking than traditional models. They introduce the foundations for modelling probabilistic-dynamic systems using two aspects of quantum theory. The first, "contextuality", is a way to understand interference effects found with inferences and decisions under conditions of uncertainty. The second, "entanglement", allows cognitive phenomena to be modelled in non-reductionist ways. Employing these principles drawn from quantum theory allows us to view human cognition and decision in a totally new light...
Abstract:
Recent efforts in mission planning for underwater vehicles have utilised predictive models to aid navigation and optimal path planning and to drive opportunistic sampling. Although these models provide information at unprecedented resolutions and have proven to increase accuracy and effectiveness in multiple campaigns, most are deterministic in nature. Thus, their predictions cannot be incorporated into probabilistic planning frameworks, nor do they provide any metric of the variance or confidence of the output variables. In this paper, we provide an initial investigation into determining the confidence of ocean model predictions based on the results of multiple field deployments of two autonomous underwater vehicles. For multiple missions conducted over a two-month period in 2011, we compare actual vehicle executions to simulations of the same missions through the Regional Ocean Modeling System in an ocean region off the coast of southern California. This comparison provides a qualitative analysis of the current velocity predictions for areas within the selected deployment region. Ultimately, we present a spatial heat-map of the correlation between the ocean model predictions and the actual mission executions. Knowing where the model provides unreliable predictions can be incorporated into planners to increase the utility and applicability of the deterministic estimations.
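A minimal version of the correlation heat-map idea is sketched below with synthetic stand-in data; the cell grid, mission counts and noise levels are our assumptions, not the paper's deployment data.

```python
import numpy as np

# Hedged sketch: per spatial cell, correlate model-predicted current speed
# with the speed inferred from AUV executions. All data here are synthetic.
rng = np.random.default_rng(0)
n_cells, n_missions = 25, 12
predicted = rng.normal(0.3, 0.1, (n_cells, n_missions))       # m/s, synthetic
observed = predicted + rng.normal(0, 0.05, predicted.shape)   # synthetic truth

heat = np.array([np.corrcoef(predicted[c], observed[c])[0, 1]
                 for c in range(n_cells)]).reshape(5, 5)
print(np.round(heat, 2))   # low-correlation cells flag unreliable predictions
```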
Abstract:
The term “vagueness” describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno’s sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgement of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier’s (2011) theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier, and of its probabilistic pendant, is needed. The proposed modification replaces the standard notion of probability by quantum probabilities. The crucial phenomenon of borderline contradictions can then be explained as a quantum interference phenomenon.
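The following is not the authors' Hopfield-based model but a generic quantum-probability illustration of the key point: when "tall" and "not tall" are modelled as projectors onto non-orthogonal rays, the sequential judgment "tall and not tall" acquires nonzero probability, whereas classically disjoint events make it zero. All vectors and angles are our choices.

```python
import numpy as np

# Hedged sketch: sequential projection in a 2-D "cognitive" state space.
def projector(theta):
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)                       # rank-1 projector onto the ray

P_tall = projector(0.0)                         # "tall" ray
P_not_tall = projector(np.pi / 3)               # "not tall", non-orthogonal
psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])  # borderline individual

amp = P_not_tall @ (P_tall @ psi)               # judge "tall", then "not tall"
print(float(amp @ amp))  # ~0.21: nonzero "borderline contradiction" probability
                         # (classically, P(tall and not tall) = 0)
```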
Abstract:
With the advent of large-scale wind farms and their integration into electrical grids, more uncertainties, constraints and objectives must be considered in power system development. It is therefore necessary to introduce risk-control strategies into the planning of transmission systems connected with wind power generators. This paper presents a probability-based multi-objective model equipped with three risk-control strategies. The model is developed to evaluate and enhance the ability of the transmission system to protect against overload risks when wind power is integrated into the power system. The model involves: (i) defining the uncertainties associated with wind power generators with probability measures and calculating the probabilistic power flow with the combined use of cumulants and the Gram-Charlier series; (ii) developing three risk-control strategies by specifying the smallest acceptable non-overload probability for each branch and for the whole system, and specifying the non-overload margin for all branches in the whole system; (iii) formulating an overload risk index based on the non-overload probability and the non-overload margin defined; and (iv) developing a multi-objective transmission system expansion planning (TSEP) model with objective functions composed of transmission investment and the overload risk index. The presented work represents a superior risk-control model for TSEP in terms of security, reliability and economy. The transmission expansion planning model with the three risk-control strategies demonstrates its feasibility in case studies using two typical power systems.
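Step (i) relies on the Gram-Charlier series to recover a density from cumulants. Below is a generic sketch of that expansion for a standardized variable; the cumulant values are arbitrary and the truncation at fourth order is our choice, not necessarily the paper's.

```python
import numpy as np

# Hedged sketch: Gram-Charlier A series around a standard normal, driven by
# the 3rd and 4th cumulants (skewness k3, excess kurtosis k4) of a
# standardized branch flow.
def gram_charlier_pdf(x, k3=0.0, k4=0.0):
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
    he3 = x**3 - 3 * x                             # probabilists' Hermite He3
    he4 = x**4 - 6 * x**2 + 3                      # probabilists' Hermite He4
    return phi * (1 + k3 / 6 * he3 + k4 / 24 * he4)

x = np.linspace(-4, 4, 9)
print(np.round(gram_charlier_pdf(x, k3=0.3, k4=0.2), 4))
```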
Abstract:
Background: Total hip arthroplasty (THA) is a commonly performed procedure and numbers are increasing with ageing populations. One of the most serious complications in THA is surgical site infection (SSI), caused by pathogens entering the wound during the procedure. SSIs are associated with a substantial burden for health services, increased mortality and reduced functional outcomes in patients. Numerous approaches to preventing these infections exist, but there is no gold standard in practice and the cost-effectiveness of alternative strategies is largely unknown.

Objectives: The aim of this project was to evaluate the cost-effectiveness of strategies claiming to reduce deep surgical site infections following total hip arthroplasty in Australia. The objectives were: 1. identification of competing strategies or combinations of strategies that are clinically relevant to the control of SSI related to hip arthroplasty; 2. evidence synthesis and pooling of results to assess the volume and quality of evidence claiming to reduce the risk of SSI following total hip arthroplasty; 3. construction of an economic decision model incorporating cost and health outcomes for each of the identified strategies; 4. quantification of the effect of uncertainty in the model; 5. assessment of the value of perfect information among model parameters to inform future data collection.

Methods: The literature relating to SSI in THA was reviewed, in particular to establish definitions of these concepts, to understand mechanisms of aetiology and microbiology, risk factors, diagnosis and consequences, and to give an overview of existing infection prevention measures. Published economic evaluations on this topic were also reviewed and their limitations for Australian decision-makers identified. A Markov state-transition model was developed for the Australian context and subsequently validated by clinicians. The model was designed to capture key events related to deep SSI occurring within the first 12 months following primary THA. Relevant infection prevention measures were selected by reviewing clinical guideline recommendations combined with expert elicitation. The strategies selected for evaluation were the routine use of pre-operative antibiotic prophylaxis (AP) versus no use of antibiotic prophylaxis (No AP), or AP in combination with antibiotic-impregnated cement (AP & ABC) or laminar air operating rooms (AP & LOR). The best available evidence for clinical effect size and utility parameters was harvested from the medical literature using reproducible methods. Queensland hospital data were extracted to inform patients' transitions between model health states and the related costs captured in assigned treatment codes. Costs related to infection prevention were derived from reliable hospital records and expert opinion. Uncertainty of model input parameters was explored in probabilistic sensitivity analyses and scenario analyses, and the value of perfect information was estimated.

Results: The cost-effectiveness analysis was performed from a health services perspective using a hypothetical cohort of 30,000 THA patients aged 65 years. The baseline rate of deep SSI was 0.96% within one year of a primary THA. The routine use of antibiotic prophylaxis (AP) was highly cost-effective and resulted in cost savings of over $1.6m whilst generating an extra 163 QALYs (without consideration of uncertainty).
Deterministic and probabilistic analysis (considering uncertainty) identified antibiotic prophylaxis combined with antibiotic-impregnated cement (AP & ABC) as the most cost-effective strategy. Using AP & ABC generated the highest net monetary benefit (NMB) and an incremental $3.1m NMB compared to using antibiotic prophylaxis alone. There was a very low error probability (<5%) that this strategy might not have the largest NMB. Not using antibiotic prophylaxis (No AP), or using antibiotic prophylaxis combined with laminar air operating rooms (AP & LOR), resulted in worse health outcomes and higher costs. Sensitivity analyses showed that the model was sensitive to the initial cohort starting age and the additional costs of ABC, but the best strategy did not change, even for extreme values. The cost-effectiveness improved for a higher proportion of cemented primary THAs and for higher baseline rates of deep SSI. The value of perfect information indicated that no additional research is required to support the model conclusions.

Conclusions: Preventing deep SSI with antibiotic prophylaxis and antibiotic-impregnated cement has been shown to improve health outcomes among hospitalised patients, save lives and enhance resource allocation. By implementing a more beneficial infection control strategy, scarce health care resources can be used more efficiently to the benefit of all members of society. The results of this project provide Australian policy makers with key information about how to efficiently manage the risks of infection in THA.
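The decision rule behind these results can be made concrete. The sketch below ranks strategies by net monetary benefit; the willingness-to-pay threshold and the cost/QALY figures are illustrative placeholders of ours, not the thesis's estimates.

```python
# Hedged sketch: net monetary benefit (NMB) comparison. All numbers invented.
WTP = 50_000  # assumed willingness to pay per QALY (A$)

strategies = {            # strategy: (total cost A$, total QALYs) per cohort
    "No AP":    (310e6, 199_000.0),
    "AP":       (308e6, 199_163.0),
    "AP & ABC": (305e6, 199_180.0),
}

def nmb(cost, qalys, wtp=WTP):
    """NMB = value of health gained in dollars minus total cost."""
    return wtp * qalys - cost

for name, (cost, qalys) in strategies.items():
    print(f"{name:9s} NMB = ${nmb(cost, qalys):,.0f}")
best = max(strategies, key=lambda s: nmb(*strategies[s]))
print("highest NMB:", best)
```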
Abstract:
The railway industry has been slow to adopt limit states principles in the structural design of concrete sleepers for its tracks, despite the global take-up of this form of design for almost every other type of structural element. Concrete sleeper design is still based on limiting stresses, but is widely perceived by track engineers to lead to untapped reserves of strength in the sleepers. Limit states design is a more rational philosophy, especially where it is based on the ultimate dynamic capacity of the concrete sleepers. The paper describes the development of equations and factors for a limit states design methodology for concrete sleepers in flexure using a probabilistic evaluation of sleeper loading. The new method will also permit a cogent, defensible means of establishing the true capacity of the billions of concrete sleepers currently in track around the world, leading to better utilisation of track infrastructure. The paper demonstrates how significant cost savings may be achieved by track owners.
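A generic sketch of the kind of probabilistic limit-states check such a methodology formalises is given below; treating flexural capacity and dynamic load effect as independent normal variables, and all the numbers used, are our assumptions rather than the paper's calibrated factors.

```python
from math import sqrt
from statistics import NormalDist

# Hedged sketch: first-order reliability check of a sleeper in flexure.
mu_R, sigma_R = 55.0, 5.0   # kNm, assumed flexural capacity statistics
mu_S, sigma_S = 35.0, 7.0   # kNm, assumed dynamic load-effect statistics

beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)  # reliability index
pf = NormalDist().cdf(-beta)                          # probability of failure
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```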
Abstract:
Knowledge of cable parameters is well established, but knowledge of the environment in which cables are buried lags behind. Research at Queensland University of Technology has been aimed at obtaining and analysing actual daily field values of the thermal resistivity and diffusivity of the soil around power cables. On-line monitoring systems have been developed and installed with a data-logger system and buried spheres that use an improved technique to measure thermal resistivity and diffusivity over a short period. Results based on long-term continuous field data are given. A probabilistic approach is developed to establish the correlation between the measured field thermal resistivity values and rainfall data from weather bureau records. This field data can reduce the risk in cable rating decisions and provide a basis for reliable prediction of "hot spots" in an existing cable circuit.
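One simple way to explore such a resistivity-rainfall relationship is a lagged correlation; the sketch below uses entirely synthetic series and an assumed soil-moisture response, so it illustrates the analysis style rather than the paper's method.

```python
import numpy as np

# Hedged sketch: lagged correlation between daily rainfall and soil thermal
# resistivity. All series below are synthetic.
rng = np.random.default_rng(1)
rain = rng.gamma(0.5, 4.0, 365)                               # daily rainfall, mm
soil_moisture = np.convolve(rain, np.ones(14) / 14, "same")   # slow soil response
resistivity = 3.0 - 0.02 * soil_moisture + rng.normal(0, 0.05, 365)  # K.m/W

for lag in (0, 7, 14):
    r = np.corrcoef(resistivity[lag:], rain[:len(rain) - lag])[0, 1]
    print(f"lag {lag:2d} days: r = {r:+.2f}")   # wetter soil -> lower resistivity
```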
Abstract:
This paper presents a reliability assessment of a substation forming part of the Queensland transmission network in Australia. As part of maintenance considerations, this study utilises the substation reliability assessment package STAREL to quantitatively compare the reliability improvement achieved by two circuit breaker reinforcement alternatives for Swanbank: circuit breaker replacement or refurbishment. Substation reliability is interpreted on the basis of outage frequency and outage duration indices for each individual transmission line terminating in the Swanbank 'B' substation. By considering the reliability indices in this paper together with the associated cost analysis conducted by POWERLINK Queensland, a Swanbank 'B' reinforcement alternative can be selected that optimises both transmission line security and the costs incurred in achieving it.
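The outage indices named above combine additively for components that must all remain in service. The sketch below is our own generic frequency-duration calculation, not STAREL's internals, and assumes a simple series configuration with invented failure data.

```python
# Hedged sketch: outage frequency and expected annual outage duration for a
# series path of substation components. All rates are illustrative.
components = [            # (name, failures per year, mean repair hours)
    ("circuit breaker", 0.05, 24.0),
    ("bus section",     0.02, 12.0),
    ("transformer",     0.03, 72.0),
]

freq = sum(lmbda for _, lmbda, _ in components)        # outages per year
hours = sum(lmbda * r for _, lmbda, r in components)   # outage hours per year
print(f"outage frequency = {freq:.3f}/yr, expected duration = {hours:.2f} h/yr")
```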
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
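Random key predistribution (Eschenauer and Gligor) is one classic probabilistic scheme of the kind this survey covers. Below is a minimal sketch of its key-sharing probability; the pool and ring sizes are illustrative choices of ours.

```python
from math import comb

# Hedged sketch: each node draws `ring` keys from a pool of `pool` keys; two
# neighbours can establish a link iff their key rings intersect.
def share_probability(pool: int, ring: int) -> float:
    """P(two random key rings share at least one key)."""
    return 1.0 - comb(pool - ring, ring) / comb(pool, ring)

print(share_probability(10_000, 75))  # ~0.43 with these illustrative sizes
```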
Abstract:
Collaborative methods are promising tools for solving complex security tasks. In this context, the authors present the security overlay framework CIMD (Collaborative Intrusion and Malware Detection), which enables participants to state objectives and interests for joint intrusion detection and to find groups for the exchange of security-related data, such as monitoring or detection results, accordingly; the authors refer to these groups as detection groups. First, the authors present and discuss a tree-oriented taxonomy for the representation of nodes within the collaboration model. Second, they introduce and evaluate an algorithm for the formation of detection groups. After conducting a vulnerability analysis of the system, the authors demonstrate the validity of CIMD by examining two different scenarios inspired by sociology where the collaboration is advantageous compared to the non-collaborative approach. They evaluate the benefit of CIMD by simulation in a novel packet-level simulation environment called NeSSi (Network Security Simulator) and give a probabilistic analysis for the scenarios.
Abstract:
Power system operation and planning are facing increasing uncertainties, especially with the deregulation process and increasing demand for power. Probabilistic power system stability assessment and probabilistic power system planning have been identified by EPRI as important trends in power system operation and planning. Probabilistic small-signal stability assessment studies the impact of system parameter uncertainties on system small-disturbance stability characteristics. Research in this area has covered many uncertainty factors, such as controller parameter uncertainties and generation uncertainties. One of the most important factors in power system stability assessment is load dynamics. In this paper, a composite load model is used to study the impact of load parameter uncertainties on system small-signal stability characteristics. The results provide useful insight into the significant stability impact brought to the system by load dynamics, and can help system operators in system operation and planning analysis.
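A Monte Carlo flavour of probabilistic small-signal stability assessment can be sketched on a toy system. The two-state swing-equation matrix and the parameter distributions below are our stand-ins; the paper's composite load model is far richer.

```python
import numpy as np

# Hedged sketch: sample uncertain load parameters, build a toy state matrix,
# and estimate the probability that all eigenvalues lie in the left half-plane.
rng = np.random.default_rng(42)
n_samples, stable = 5000, 0

for _ in range(n_samples):
    d = rng.normal(1.0, 0.3)          # uncertain damping coefficient
    k = rng.normal(8.0, 1.0)          # uncertain synchronising coefficient
    A = np.array([[0.0, 1.0],
                  [-k,  -d]])         # toy swing-equation state matrix
    if np.all(np.linalg.eigvals(A).real < 0):
        stable += 1

print(f"P(small-signal stable) ~ {stable / n_samples:.3f}")
```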
Abstract:
Service robots that operate in human environments will accomplish tasks most efficiently and least disruptively if they have the capability to mimic and understand the motion patterns of the people in their workspace. This work demonstrates how a robot can create a human-centric navigational map online, and shows that this map reflects changes in the environment that trigger altered motion patterns of people. An RGB-D sensor mounted on the robot is used to detect and track people moving through the environment. The trajectories are clustered online and organised into a tree-like probabilistic data structure which can be used to detect anomalous trajectories. A costmap is reverse-engineered from the clustered trajectories to inform the robot's onboard planning process. Results show that the resultant paths taken by the robot mimic expected human behaviour and allow the robot to respond to altered human motion behaviours in the environment.
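A minimal stand-in for the reverse-engineered costmap is sketched below: cells people traverse often receive low traversal cost, biasing a planner toward human-preferred routes. The synthetic corridor trajectories are ours, and the tree-like clustering structure of the paper is not reproduced here.

```python
import numpy as np

# Hedged sketch: derive a costmap from visitation counts of tracked people.
rng = np.random.default_rng(3)
grid = np.zeros((20, 20))

for _ in range(200):                       # synthetic tracked trajectories
    y = 10 + rng.integers(-2, 3)           # people mostly follow one corridor
    for x in range(20):
        grid[y + rng.integers(-1, 2), x] += 1

visit_prob = grid / grid.sum()
costmap = 1.0 - visit_prob / visit_prob.max()   # frequent cells -> near-zero cost
print(np.round(costmap[8:13, :5], 2))
```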
Abstract:
In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies in the subspace spanned by the gallery of the same class. Unfortunately, this assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
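A simplified sketch of the patch-encode-pool-concatenate pipeline follows. The GMM-posterior encoding stands in for the paper's implicit probabilistic technique, the region pooling here is a crude split by patch order, and all sizes and the random "face" are our placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hedged sketch: encode local patches probabilistically, average-pool the
# codes per region, concatenate region descriptors into one face descriptor.
rng = np.random.default_rng(0)
face = rng.random((64, 64))                        # stand-in for an aligned face

def patches(img, size=8, step=4):
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, img.shape[0] - size + 1, step)
                     for j in range(0, img.shape[1] - size + 1, step)])

X = patches(face)                                  # (n_patches, 64)
gmm = GaussianMixture(n_components=16, random_state=0).fit(X)
codes = gmm.predict_proba(X)                       # soft, sparse-ish encodings

region_codes = np.array_split(codes, 4)            # crude 4-region spatial split
descriptor = np.concatenate([r.mean(axis=0) for r in region_codes])
print(descriptor.shape)                            # (64,) = 4 regions x 16 comps
```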