927 results for overlap probability


Relevance:

10.00%

Publisher:

Abstract:

How do agents with limited cognitive capacities flourish in informationally impoverished or unexpected circumstances? Aristotle argued that human flourishing emerges from knowing about the world and our place within it. If he is right, then the virtuous processes that produce knowledge best explain flourishing. Influenced by Aristotle, virtue epistemology defends an analysis of knowledge in which beliefs are evaluated for their truth and for the intellectual virtues or competences relied on in their creation. However, human flourishing may emerge from how degrees of ignorance are managed in an uncertain world. Perhaps decision-making in the shadow of knowledge best explains human wellbeing—a Bayesian approach? In this dissertation I argue that a hybrid of virtue and Bayesian epistemologies explains human flourishing—what I term homeostatic epistemology. Homeostatic epistemology supposes that an agent has a rational credence p when p is the product of reliable processes aligned with the norms of probability theory; whereas an agent knows that p when a rational credence p is the product of reliable processes such that: 1) p meets some relevant threshold for belief (such that the agent acts as though p were true, and indeed p is true), 2) p coheres with a satisficing set of relevant beliefs, and 3) the relevant set of beliefs is coordinated appropriately to meet the integrated aims of the agent. Homeostatic epistemology recognizes that justificatory relationships between beliefs are constantly changing to combat uncertainties and to take advantage of predictable circumstances. Contrary to holism, justification is built up and broken down across limited sets, like the anabolic and catabolic processes that maintain homeostasis in the cells, organs and systems of the body. It is the coordination of choristic sets of reliably produced beliefs that creates the greatest flourishing given the limitations inherent in the situated agent.


Bactrocera dorsalis sensu stricto, B. papayae, B. philippinensis and B. carambolae are serious pest fruit fly species of the B. dorsalis complex that predominantly occur in south-east Asia and the Pacific. Identifying molecular diagnostics has proven problematic for these four taxa, a situation that confounds biosecurity and quarantine efforts and which may be the result of at least some of these taxa representing the same biological species. We therefore conducted a phylogenetic study of these four species (and closely related outgroup taxa) based on individuals collected from a wide geographic range, sequencing six loci (cox1, nad4-3′, CAD, period, ITS1, ITS2) for approximately 20 individuals from each of 16 sample sites. Data were analysed within maximum likelihood and Bayesian phylogenetic frameworks for individual loci and for concatenated data sets, to which we applied multiple monophyly and species delimitation tests. Species monophyly was measured by clade support (posterior probability or bootstrap resampling for the Bayesian and likelihood analyses, respectively), Rosenberg's reciprocal monophyly measure P(AB), Rodrigo's P(RD), and the genealogical sorting index, gsi. We specifically tested whether there was phylogenetic support for the four 'ingroup' pest species using a data set of multiple individuals sampled from a number of populations. Based on our combined data set, Bactrocera carambolae emerges as a distinct monophyletic clade, whereas B. dorsalis s.s., B. papayae and B. philippinensis are unresolved. These data add to the growing body of evidence that B. dorsalis s.s., B. papayae and B. philippinensis are the same biological species, which poses consequences for quarantine, trade and pest management.


Migraine is a prevalent neurovascular disease with a significant genetic component. Linkage studies have so far identified migraine susceptibility loci on chromosomes 1, 4, 6, 11, 14, 19 and X. We performed a genome-wide scan of 92 Australian pedigrees phenotyped for migraine with and without aura and for a more heritable form of "severe" migraine. Multipoint non-parametric linkage analysis revealed suggestive linkage for the severe migraine phenotype on chromosome 18p11 (LOD*=2.32, P=0.0006) and chromosome 3q (LOD*=2.28, P=0.0006). Excess allele sharing was also observed at multiple other chromosomal regions, some of which overlap with, or are directly adjacent to, previously implicated migraine susceptibility regions. We have provided evidence for two loci involved in severe migraine susceptibility and conclude that dissection of the "migraine" phenotype may help identify susceptibility genes that influence the more heritable clinical (symptom) profiles in affected pedigrees. We also conclude that the genetic aetiology of the common (International Headache Society) forms of the disease probably comprises a number of low- to moderate-effect susceptibility genes, perhaps acting synergistically, an effect not easily detected by traditional single-locus linkage analyses of large samples of affected pedigrees.


This paper details the participation of the Australian e-Health Research Centre (AEHRC) in the ShARe/CLEF 2013 eHealth Evaluation Lab, Task 3. This task aims to evaluate the use of information retrieval (IR) systems to aid consumers (e.g. patients and their relatives) in seeking health advice on the Web. Our submissions to the ShARe/CLEF challenge are based on language models generated from the web corpus provided by the organisers. Our baseline system is a standard Dirichlet-smoothed language model. We enhance the baseline by identifying and correcting spelling mistakes in queries, as well as expanding acronyms using AEHRC's Medtex medical text analysis platform. We then consider the readability and the authoritativeness of web pages to further enhance the quality of the document ranking. Measures of readability are integrated into the language models used for retrieval via prior probabilities. Prior probabilities are also used to encode authoritativeness information derived from a list of top-100 consumer health websites. Empirical results show that correcting spelling mistakes and expanding acronyms found in queries significantly improves the effectiveness of the language model baseline. Readability priors seem to increase retrieval effectiveness for graded relevance at early ranks (nDCG@5, but not precision), but no improvements are found at later ranks or when considering binary relevance. The authoritativeness prior does not appear to provide retrieval gains over the baseline: this is likely because of the small overlap between websites in the corpus and those in the top-100 consumer-health websites we acquired.
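A minimal sketch of Dirichlet-smoothed query-likelihood scoring with a document prior, as described above. This is illustrative only: the function names, the smoothing parameter value and the way the prior enters the score are assumptions, not details taken from the paper.

```python
import math
from collections import Counter

MU = 2000.0  # Dirichlet smoothing parameter (assumed typical value)

def dirichlet_score(query_terms, doc_terms, collection, readability_prior=1.0):
    """Query-likelihood score with Dirichlet smoothing; a document
    prior (e.g. a readability score) is folded in in log space."""
    doc_tf = Counter(doc_terms)
    coll_tf = Counter(collection)
    coll_len = len(collection)
    doc_len = len(doc_terms)
    score = math.log(readability_prior)
    for w in query_terms:
        p_wc = coll_tf[w] / coll_len          # collection language model
        if p_wc == 0:
            continue                          # unseen term: skip for simplicity
        p_wd = (doc_tf[w] + MU * p_wc) / (doc_len + MU)
        score += math.log(p_wd)
    return score
```

A document containing the query term receives a higher smoothed likelihood, and scaling the prior shifts the score by a constant in log space, which is how a readability or authoritativeness prior can re-rank documents.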


Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
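One of the regression models compared above, K-nearest neighbours, can be sketched in a few lines: calibrated scene features map to a crowd count as the mean count of the nearest training examples. The feature and count values below are purely hypothetical, and the real system would first normalise features between viewpoints via camera calibration.

```python
import math

def knn_predict(train_X, train_y, x, k=3):
    """Predict a crowd count as the mean count of the k nearest
    training feature vectors (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    nearest = dists[:k]
    return sum(y for _, y in nearest) / k
```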


A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs shows great potential for growth, with applications spanning a vast practical domain such as security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images taken by the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path against locally generated descriptions using a probability maximisation process. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced, and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to solve it effectively.
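The ground-plane homography at the heart of the localisation step can be estimated from point correspondences (here, robot positions broadcast in global coordinates paired with their image-plane observations). The following is a generic direct-linear-transform sketch, not the dissertation's own implementation; function names are assumptions.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate the 3x3 homography H mapping image-plane points to
    ground-plane points (direct linear transform, 4+ correspondences)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        A.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    # The null-space direction of A (last right-singular vector) is H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def project(H, pt):
    """Apply H to an image point, returning ground-plane coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

Once each camera holds its own H, an object's image-plane detection can be projected into the shared global frame, which is what makes cross-camera localisation possible.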


Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard which provides an additional layer of abstraction, enabling end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on their cause-and-effect relationships, making the user interaction more intuitive and friendly. In addition to performing various types of inference, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
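The kind of cause-and-effect inference such a dashboard abstracts over can be illustrated with the smallest possible network, a single cause node and one piece of evidence, resolved by enumeration (Bayes' rule). This is a generic sketch, not the paper's tool, and the numbers are purely illustrative.

```python
def posterior(prior, likelihoods):
    """Exact inference in a two-node cause->effect Bayesian network:
    P(cause | evidence) via enumeration and normalisation."""
    joint = {c: prior[c] * likelihoods[c] for c in prior}  # P(cause, evidence)
    z = sum(joint.values())                                # P(evidence)
    return {c: p / z for c, p in joint.items()}
```

A dashboard in the spirit of the paper would let the end-user set the evidence and read off the posterior without ever seeing the nodes and arcs.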


The output harmonic quality of N series connected full-bridge dc-ac inverters is investigated. The inverters are pulse width modulated using a common reference signal but randomly phased carrier signals. Through analysis and simulation, probability distributions for inverter output harmonics and vector representations of N carrier phases are combined and assessed. It is concluded that a low total harmonic distortion is most likely to occur and will decrease further as N increases.
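The intuition behind the result above can be checked with a toy Monte Carlo: at a carrier harmonic, the N randomly phased carriers add like unit phasors with uniform random phases, while the fundamental adds coherently (magnitude N). This sketch is an assumption-laden illustration of that phasor argument, not the paper's analysis.

```python
import cmath
import random

def mean_rel_harmonic(N, trials=2000, seed=0):
    """Monte Carlo estimate of the carrier-harmonic magnitude of N
    series inverters with uniformly random carrier phases, relative
    to the coherently adding fundamental (magnitude N)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Sum of N unit phasors with independent uniform phases.
        s = sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * cmath.pi))
                for _ in range(N))
        total += abs(s) / N
    return total / trials
```

The relative harmonic magnitude shrinks roughly as 1/sqrt(N), consistent with the conclusion that total harmonic distortion tends to decrease as N increases.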


A "self-exciting" market is one in which the probability of observing a crash increases in response to the occurrence of a crash. It essentially describes cases where the initial crash serves to weaken the system to some extent, making subsequent crashes more likely. This thesis investigates if equity markets possess this property. A self-exciting extension of the well-known jump-based Bates (1996) model is used as the workhorse model for this thesis, and a particle-filtering algorithm is used to facilitate estimation by means of maximum likelihood. The estimation method is developed so that option prices are easily included in the dataset, leading to higher quality estimates. Equilibrium arguments are used to price the risks associated with the time-varying crash probability, and in turn to motivate a risk-neutral system for use in option pricing. The option pricing function for the model is obtained via the application of widely-used Fourier techniques. An application to S&P500 index returns and a panel of S&P500 index option prices reveals evidence of self excitation.
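The self-excitation mechanism described above can be sketched as a Hawkes-type intensity: a baseline crash rate plus exponentially decaying kicks from past crashes. This is only the intensity component, not the full Bates-style jump-diffusion of the thesis, and the parameter values are hypothetical.

```python
import math

def intensity(t, events, lam0=0.5, alpha=1.2, beta=3.0):
    """Self-exciting crash intensity at time t: baseline rate lam0
    plus exponentially decaying contributions from past events."""
    return lam0 + sum(alpha * math.exp(-beta * (t - ti))
                      for ti in events if ti < t)
```

Immediately after a crash the intensity jumps by alpha and then decays back toward the baseline at rate beta, which is exactly the "initial crash weakens the system" behaviour the thesis tests for.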


The arena of intellectual property encompasses streams that often interrelate and overlap in protecting different aspects of intellectual property. Australian commentators suggest that 'one of the most troublesome areas in the entire field of intellectual property has been the relationship between copyright protection for artistic works under the Copyright Act 1968 (Cth) and protection for registered designs under the Designs Act 1906 (Cth).' [McKeough, J., Stewart, A., & Griffith, P. (2004). Intellectual property in Australia (3rd ed.). Chatswood, NSW: Butterworths.] [Ricketson, S., Richardson, M., & Davison, M. (2009). Intellectual property: Cases, materials and commentary (4th ed.). Chatswood, NSW: LexisNexis Butterworths.] This overlap has caused much confusion for both creators of artistic works and industrial designs, as there is uncertainty as to whether protection against infringement is afforded under the Copyright Act 1968 (Cth) or whether the Designs Act 2003 (Cth) will apply. In Australia, there is limited precedent examining the crossover between copyright and designs. Essentially, the cases that have tested this issue remain unclear as to whether a design applied industrially will invoke copyright protection. The cases demonstrate that there is inconsistency in this area, despite the aims of the new provisions of the Designs Act 2003 (Cth) to close the loopholes between copyright and designs. This paper will discuss and evaluate the relationship between copyright protection for artistic works and protection for registered designs with respect to the Designs Act 2003 (Cth).


The mean action time is the mean of a probability density function that can be interpreted as a critical time, which is a finite estimate of the time taken for the transient solution of a reaction-diffusion equation to effectively reach steady state. For high-variance distributions, the mean action time under-approximates the critical time since it neglects to account for the spread about the mean. We can improve our estimate of the critical time by calculating the higher moments of the probability density function, called the moments of action, which provide additional information regarding the spread about the mean. Existing methods for calculating the nth moment of action require the solution of n nonhomogeneous boundary value problems which can be difficult and tedious to solve exactly. Here we present a simplified approach using Laplace transforms which allows us to calculate the nth moment of action without solving this family of boundary value problems and also without solving for the transient solution of the underlying reaction-diffusion problem. We demonstrate the generality of our method by calculating exact expressions for the moments of action for three problems from the biophysics literature. While the first problem we consider can be solved using existing methods, the second problem, which is readily solved using our approach, is intractable using previous techniques. The third problem illustrates how the Laplace transform approach can be used to study coupled linear reaction-diffusion equations.
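The quantities described above can be written down explicitly. The following notation is a hedged reconstruction from the abstract (the symbol names are assumptions, not quoted from the paper):

```latex
% Transient solution c(x,t) relaxing to the steady state c_\infty(x):
F(t;x) = 1 - \frac{c(x,t) - c_\infty(x)}{c(x,0) - c_\infty(x)}, \qquad
f(t;x) = \frac{\partial F(t;x)}{\partial t}.

% Mean action time and higher moments of action:
M_n(x) = \int_0^\infty t^n \, f(t;x)\,\mathrm{d}t, \qquad T(x) = M_1(x).

% Laplace-transform shortcut: with
% \bar{f}(s;x) = \int_0^\infty e^{-st} f(t;x)\,\mathrm{d}t,
M_n(x) = (-1)^n \left.\frac{\partial^n \bar{f}(s;x)}{\partial s^n}\right|_{s=0}.
```

The last identity is the standard moment-transform relation: differentiating the Laplace transform at s = 0 yields the moments directly, which is why the boundary value problems for the transient solution never need to be solved.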


This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and Exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have an $R^2$ goodness of fit of 0.9994 and 0.9982 respectively over a 10 hour test period. The utility of the framework is demonstrated on a number of usage scenarios including real time monitoring and `what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
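The stochastic queuing layer that the HQBN framework couples to its Bayesian-network layer rests on standard Poisson-arrival/exponential-service results. As a grounding sketch (generic M/M/1 formulas, not the paper's airport-specific model):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 queue metrics for Poisson arrivals at rate
    lam and exponential service at rate mu."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("unstable queue: arrival rate >= service rate")
    L = rho / (1 - rho)   # mean number in system
    W = 1 / (mu - lam)    # mean time in system
    return {"utilisation": rho, "mean_in_system": L, "mean_wait": W}
```

In an HQBN-style model, BN nodes would modulate lam and mu (e.g. staffing or flight-arrival factors), and these queue outputs would feed back into the network as evidence.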


This article examines manual textual categorisation by human coders, with the hypothesis that the law of total probability may be violated for difficult categories. An empirical evaluation was conducted, using crowdsourcing, to compare a one-step categorisation task with a two-step categorisation task. It was found that the law of total probability was violated. Both quantum and classical probabilistic interpretations of this violation are presented. Further studies are required to resolve whether quantum models are more appropriate for this task.
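The violation being tested is easy to state in code: the law of total probability dictates that a directly elicited P(A) should equal the sum of P(A|B_i)P(B_i) from the two-step task. The numbers below are illustrative, not the study's data.

```python
def total_probability_gap(p_b, p_a_given_b, p_a_direct):
    """Gap between a directly judged P(A) (one-step task) and the
    value implied by the law of total probability from a two-step
    task: sum over partitions B of P(A|B) * P(B)."""
    implied = sum(p_a_given_b[b] * p_b[b] for b in p_b)
    return p_a_direct - implied
```

Under a classical account the gap should be zero up to noise; a systematic nonzero gap on difficult categories is what motivates the quantum-probabilistic interpretation.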


The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye's normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time-consuming and subjective nature of manual image analysis, there is a need for reliable, objective automated methods of image segmentation from which to derive choroidal thickness measures. However, detecting the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) and so obtain the choroid thickness profile from OCT images. Before segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimise the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which had previously been manually segmented by an experienced observer. The results demonstrate that the proposed method provides robust detection of the boundaries of interest and is a useful tool for extracting clinical data.
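The essence of boundary detection by graph search can be shown with a minimal dynamic-programming analogue: trace the cheapest left-to-right path through a 2-D cost image (low cost where the enhanced edge/gradient response is strong), moving at most one row per column. This is a generic stand-in, not the paper's edge-filter or dual brightness probability gradient pipeline.

```python
def min_cost_boundary(cost):
    """Trace a left-to-right boundary through a 2-D cost array using
    dynamic programming: from each column the path may move to an
    adjacent row in the next column. Returns one row index per column."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]            # accumulated path cost
    back = [[0] * cols for _ in range(rows)]  # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            best = min(range(max(0, r - 1), min(rows, r + 2)),
                       key=lambda rr: acc[rr][c - 1])
            acc[r][c] = cost[r][c] + acc[best][c - 1]
            back[r][c] = best
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

Run on a per-boundary cost image, such a path yields the ICB or OCB row position at every A-scan, from which the thickness profile is the per-column difference of the two boundaries.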


Despite a favourable morphology, anodized and ordered TiO2 nanotubes show no electrochromic response, unlike many other metal oxide counterparts. To tackle this issue, MoO3 of 5 to 15 nm thickness was electrodeposited onto TiO2 nanotube arrays. A homogeneous MoO3 coating was obtained, and the crystal phase of the electrodeposited coating was determined to be α-MoO3. The electronic and optical augmentations of the MoO3-coated TiO2 platforms were evaluated through electrochromic measurements. The MoO3/TiO2 system showed a 4-fold increase in optical density over bare TiO2 when the thickness of the MoO3 coating was optimised. The enhancement was ascribed to (a) the α-MoO3 coating reducing the bandgap of the composite material, which shifted the band edge of the TiO2 platform and subsequently increased the charge carrier transfer of the overall system, and (b) the layered morphology of α-MoO3, which increased the intercalation probability and also provided direct pathways for charge carrier transfer.
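The optical-density figure of merit quoted above is conventionally computed from transmittance in the bleached and colored states. As a grounding note (standard electrochromics definition, not a detail taken from the abstract; the transmittance values below are hypothetical):

```python
import math

def delta_od(t_bleached, t_colored):
    """Change in optical density between the bleached and colored
    states of an electrochromic film: dOD = log10(T_bleached / T_colored)."""
    return math.log10(t_bleached / t_colored)
```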