950 results for Probabilistic metrics


Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 60J80.

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 78A50.

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 94A29, 94B70.

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 54H25, 47H10.

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: Primary 60J45, 60J50, 35Cxx; Secondary 31Cxx.

Relevance:

20.00%

Publisher:

Abstract:

The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still can represent observation uncertainties in the visualisation. Accordingly, NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps are modified to incorporate uncertainty in both the observation and visualisation spaces. The resulting mappings are called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space in which each latent visualised point is transformed to a multivariate Gaussian or T-distribution using a feed-forward RBF network. Two types of uncertainty are then characterised, depending on the data and the mapping procedure: data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher information of a visualised distribution, which indicates how well the data have been interpolated and offers a level of 'surprise' for each observation. The new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time-series observations for anomaly detection. To visualise the time-series data, Residual Modelling, a method for analysing observed signals and noise distributions, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM), and a quantitative comparison using existing evaluation measures from the literature allows the mapping functions to be compared. Finally, the mapping-uncertainty measure is combined with NeuroScale to build a deep-learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving record performance whilst avoiding the flaws seen in other deep learning machines.
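As a rough illustration of the mapping idea (not the thesis's implementation; the weights and all names below are hypothetical and untrained), a feed-forward RBF network can send each observation to a Gaussian in a 2-D visualisation space, with the Fisher information of that Gaussian acting as the 'surprise' measure:

```python
import numpy as np

def rbf_features(X, centres, width):
    """Gaussian RBF activations for each observation (rows of X)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def visualise(X, centres, width, W_mean, W_logvar):
    """Map each observation to a 2-D Gaussian: a latent mean plus a
    diagonal covariance, both linear in the RBF features (in the thesis
    the weights are trained; here they are simply inputs)."""
    Phi = rbf_features(X, centres, width)
    means = Phi @ W_mean                  # (n, 2) latent means
    variances = np.exp(Phi @ W_logvar)    # (n, 2) positive diagonal variances
    return means, variances

def mapping_information(variances):
    """Fisher information of a Gaussian w.r.t. its mean is 1/variance;
    low values flag poorly interpolated, 'surprising' observations."""
    return (1.0 / variances).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))              # ten 5-D observations
centres = rng.normal(size=(4, 5))         # four RBF centres
means, variances = visualise(X, centres, 1.0,
                             rng.normal(size=(4, 2)), rng.normal(size=(4, 2)))
print(mapping_information(variances))
```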

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as pay-as-you-go, subscription-based services. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise, as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource-usage verification. Our approach is based on a concise expression of probabilistic resource-usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified, expressed in a probabilistic variant of temporal logic, and calculated to a high degree of precision using quantitative verification techniques. The PPM cost-assessment approach has been implemented as a Java library and validated with a case study and scalability experiments.
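The flavour of such a costing query can be sketched on a single-action MDP, i.e. a Markov chain; the transition matrix and hourly prices below are invented for illustration, and the real approach would pose this kind of query to a probabilistic model checker rather than hand-rolled recursion:

```python
import numpy as np

# Hypothetical three-state usage pattern (idle, busy, peak) with an
# hourly transition matrix and an hourly price per state.
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.5, 0.5]])
price = np.array([0.05, 0.20, 0.60])   # illustrative $/hour in each state

def expected_cost(P, price, horizon, start=0):
    """Expected cumulative cost over `horizon` steps, by backward
    recursion: v_t(s) = price(s) + sum_s' P(s, s') * v_{t+1}(s')."""
    v = np.zeros(len(price))
    for _ in range(horizon):
        v = price + P @ v
    return v[start]

print(expected_cost(P, price, horizon=24 * 30))   # expected monthly bill
```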

Relevance:

20.00%

Publisher:

Abstract:

The traditional use of global and centralised control methods fails for large, complex, noisy and highly connected systems, which typify many real-world industrial and commercial systems. This paper provides an efficient bottom-up design of distributed control in which many simple components communicate and cooperate to achieve a joint system goal. Each component acts individually so as to maximise personal utility whilst obtaining probabilistic information on the global system merely through local message-passing. This leads to a scalable, collective control strategy for complex dynamical systems without the problems of global centralised control. Robustness is addressed by employing a fully probabilistic design, which can cope with inherent uncertainties, can be implemented adaptively, and opens a systematic and rich route to information sharing. The paper sets out this direction and examines the proposed design on a linearised version of a coupled map lattice with spatiotemporal chaos. A version close to linear-quadratic design gives an initial insight into the possible behaviours of such networks.
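A minimal sketch of the setting, assuming a one-dimensional diffusively coupled logistic lattice and a crude local proportional controller standing in for the paper's fully probabilistic, message-passing design:

```python
import numpy as np

def logistic(x, a=1.7):
    """Local map; a = 1.7 keeps the uncontrolled lattice chaotic."""
    return 1.0 - a * x ** 2

def step(x, eps=0.3, gain=0.0, target=0.0):
    """One update of a diffusively coupled map lattice. The control term
    is a local proportional pull towards a reference state: each site
    uses only its own state and its two neighbours."""
    left, right = np.roll(x, 1), np.roll(x, -1)
    coupled = (1 - eps) * logistic(x) + 0.5 * eps * (logistic(left) + logistic(right))
    return coupled - gain * (coupled - target)

x = np.random.default_rng(0).uniform(-1.0, 1.0, 64)
for _ in range(300):
    x = step(x, gain=0.4)
print(x.std())   # spatial disorder collapses once local control is applied
```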

Relevance:

20.00%

Publisher:

Abstract:

Healthy brain functioning depends on efficient communication of information between brain regions, forming complex networks. By quantifying synchronisation between brain regions, a functionally connected brain network can be articulated. In neurodevelopmental disorders, where diagnosis is based on measures of behaviour and task performance, a measure of the underlying biological mechanisms holds promise as a potential clinical tool. Graph theory provides a tool for investigating the neural correlates of neuropsychiatric disorders, where efficient communication within and between brain networks is disrupted. This research aimed to use recent conceptualisations of graph theory, along with measures of behaviour and cognitive functioning, to increase understanding of the neurobiological risk factors of atypical development. Using magnetoencephalography (MEG) to investigate frequency-specific temporal dynamics at rest, the research aimed to identify potential biological markers derived from sensor-level whole-brain functional connectivity. Whilst graph theory has proved valuable for insight into network efficiency, its application is hampered by two limitations: first, its measures have hardly been validated in MEG studies; second, graph measures have been shown to depend on methodological assumptions that restrict direct network comparisons. The first experimental study (Chapter 3) addressed the first limitation by examining the reproducibility of graph-based functional connectivity and network parameters in healthy adult volunteers. Subsequent chapters addressed the second limitation through an adapted minimum spanning tree (a network-analysis approach that allows unbiased group comparisons), along with the graph network tools shown in Chapter 3 to be highly reproducible. Network topologies were modelled in healthy development (Chapter 4) and atypical neurodevelopment (Chapters 5 and 6). The results supported the proposition that measures of network organisation derived from sensor-space MEG data offer insights that help unravel the biological basis of typical brain maturation and neurodevelopmental conditions, with the possibility of future clinical utility.
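A minimum spanning tree analysis of a connectivity matrix can be sketched as follows; the matrix here is random stand-in data, and the leaf fraction is one common unbiased tree measure, not necessarily the ones used in these chapters:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Random stand-in for a symmetric sensor-level connectivity matrix
# (e.g. phase synchronisation in one frequency band; higher = stronger).
rng = np.random.default_rng(0)
C = rng.uniform(0.0, 1.0, (32, 32))
C = (C + C.T) / 2.0

# The MST keeps the strongest backbone: invert weights so strong = short,
# and zero the diagonal so no self-edges enter the graph.
D = 1.0 - C
np.fill_diagonal(D, 0.0)
mst = minimum_spanning_tree(D).toarray()

# Tree topology measures allow unbiased group comparisons: every
# subject's tree has exactly n - 1 edges, with no threshold choice.
edges = np.argwhere(mst > 0)
degrees = np.bincount(edges.ravel(), minlength=C.shape[0])
leaf_fraction = (degrees == 1).mean()
print(leaf_fraction)
```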

Relevance:

20.00%

Publisher:

Abstract:

Rationing occurs when the demand for a certain good exceeds its supply. In such situations a rationing method has to be specified in order to determine the allocation of the scarce good to the agents. Moulin (1999) introduced the notion of probabilistic rationing methods for the discrete framework. In this paper we establish a link between classical and probabilistic rationing methods. In particular, we assign to any given classical rationing method a probabilistic rationing method with minimal variance among those probabilistic rationing methods that result in the same expected distributions as the given classical rationing method.
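The single-agent flavour of matching a classical method in expectation can be sketched by randomised rounding; the coupling across agents that yields the paper's minimal-variance methods (and keeps the integer total equal to the supply) is deliberately omitted here:

```python
import numpy as np

def randomised_round(awards, rng):
    """Give each agent the floor or the ceiling of its classical award,
    taking the ceiling with probability equal to the fractional part, so
    the expected allocation matches the classical one. A sketch only:
    the paper's methods additionally couple the agents."""
    lo = np.floor(awards)
    frac = awards - lo
    return lo + (rng.random(len(awards)) < frac)

rng = np.random.default_rng(1)
awards = np.array([1.4, 2.6, 1.0])
draws = np.stack([randomised_round(awards, rng) for _ in range(20000)])
print(draws.mean(axis=0))   # close to [1.4, 2.6, 1.0]
```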

Relevance:

20.00%

Publisher:

Abstract:

In everyday life we encounter countless situations in which the demand for a good exceeds the available supply: compensation claims, the claims of a bankrupt firm's creditors, the queue of patients waiting for an organ transplant, and so on. In such situations the scarce quantity is allocated among the agents according to some procedure. It is customary to distinguish deterministic from stochastic allocation procedures, although in many cases only deterministic procedures are applied. For fairness reasons, however, stochastic allocation procedures are also used, as the United States Army did when withdrawing its soldiers stationed abroad after the end of the Second World War, and when selecting the persons to be drafted during the Vietnam War. / === / We investigated the minimal-variance methods introduced in Tasnádi [6] against seven popular axioms. We proved that if a deterministic rationing method satisfies demand monotonicity, resource monotonicity, equal treatment of equals and self-duality, then the minimal-variance method associated with it also satisfies these four axioms. Furthermore, we found that consistency, lower composition and upper composition of a deterministic rationing method do not imply the corresponding properties of the associated minimal-variance method.
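As a concrete example of two of these axioms, the classical proportional method is self-dual and treats equals equally, which is easy to verify numerically:

```python
import numpy as np

def proportional(demands, supply):
    """Classical proportional rationing: awards scale with demands."""
    total = demands.sum()
    return demands * (supply / total) if total > 0 else np.zeros_like(demands)

# Self-duality: allocating the supply t is equivalent to allocating the
# shortfall sum(d) - t as losses, i.e. r(d, t) = d - r(d, sum(d) - t).
d = np.array([3.0, 5.0, 2.0])
t = 6.0
assert np.allclose(proportional(d, t), d - proportional(d, d.sum() - t))

# Equal treatment of equals: identical demands get identical awards.
assert np.allclose(*proportional(np.array([4.0, 4.0]), 5.0))
```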

Relevance:

20.00%

Publisher:

Abstract:

As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring high levels of reliability, faulty system modules may increase development and maintenance costs. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous studies show that statistical models can provide reasonable estimates when predicting faulty modules from software metrics. However, as context-specific metrics differ from project to project, predicting across projects is difficult: prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, the software development community has been substantially limited in taking full benefit of existing work. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
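One very simple cross-project adaptation, sketched on fabricated data (this is not the dissertation's method, merely an illustration of why raw transfer fails and how per-project standardisation can help):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: static metrics (e.g. size, complexity, churn) with
# a known fault history for project A, and a shifted metric distribution
# for project B. All data here is fabricated for illustration only.
rng = np.random.default_rng(0)
X_a = rng.normal(size=(200, 3))
y_a = (X_a @ np.array([0.8, 1.2, 0.5]) + rng.normal(size=200)) > 0
X_b = rng.normal(loc=0.7, scale=1.5, size=(80, 3))

def standardise(X):
    """Per-project z-scoring, so the model sees comparable feature
    scales despite context-specific metric distributions."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

model = LogisticRegression().fit(standardise(X_a), y_a)
fault_risk_b = model.predict_proba(standardise(X_b))[:, 1]
print(fault_risk_b[:5])   # predicted fault-proneness of project B modules
```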

Relevance:

20.00%

Publisher:

Abstract:

Many restaurant organizations have committed substantial effort to studying the relationship between a firm's performance and its effort to develop an effective human resources management reward-and-retention system. These studies have produced various metrics for determining the efficacy of restaurant management and human resources management systems. This paper explores the best metrics to use when calculating the overall unit performance of casual-restaurant managers. These metrics were identified through an exploratory qualitative case-study method that included interviews with executives and a Delphi study. Experts proposed several diverse metrics for measuring management value and performance; these factors seem to represent all stakeholders' interests.

Relevance:

20.00%

Publisher:

Abstract:

Until recently, the use of biometrics was restricted to high-security environments and criminal identification applications, for economic and technological reasons. In recent years, however, biometric authentication has become part of people's daily lives. The large-scale use of biometrics has shown that users within a system may exhibit different degrees of accuracy: some people may have trouble authenticating, while others may be particularly vulnerable to imitation. Recent studies have investigated and identified these types of users, giving them the names of animals: Sheep, Goats, Lambs, Wolves, Doves, Chameleons, Worms and Phantoms. The aim of this study is to evaluate the existence of these user types in a fingerprint database and to propose a new way of investigating them, based on the performance of verification between subjects' samples. After introducing some basic concepts of biometrics and fingerprints, we present the biometric menagerie and how to evaluate it.
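A sketch of how such user types might be flagged from per-user match-score statistics, in the spirit of the original menagerie (the quartile thresholds and all names below are assumptions, not this study's procedure):

```python
import numpy as np

def menagerie_flags(genuine, imp_as_target, imp_as_probe, q=0.25):
    """Flag users whose per-user mean match scores fall in the extreme
    quartiles. Inputs are (users x samples) score matrices."""
    g = genuine.mean(axis=1)          # how well users match themselves
    lt = imp_as_target.mean(axis=1)   # how well others match each user
    wp = imp_as_probe.mean(axis=1)    # how well each user matches others
    goats = g <= np.quantile(g, q)            # trouble authenticating
    lambs = lt >= np.quantile(lt, 1.0 - q)    # easy to imitate
    wolves = wp >= np.quantile(wp, 1.0 - q)   # good at imitating
    return goats, lambs, wolves

rng = np.random.default_rng(0)
scores = [rng.uniform(0.0, 1.0, (50, 20)) for _ in range(3)]
goats, lambs, wolves = menagerie_flags(*scores)
print(goats.sum(), lambs.sum(), wolves.sum())
```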
