28 results for information theoretic measures

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a prototype size selection method for a set of sample graphs. Our first contribution is to show how approximate set coding can be extended from the vector domain to the graph domain. With this framework to hand, we show how prototype selection can be posed as optimizing the mutual information between two partitioned sets of sample graphs. We then show how the resulting method can be used for prototype graph size selection. In our experiments, we apply our method to a real-world dataset and investigate its performance on prototype size selection tasks. © 2012 Springer-Verlag Berlin Heidelberg.
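
As an illustration of the idea only (not the authors' approximate set coding framework), the sketch below scores candidate prototype-set sizes by the agreement between the partitions induced by prototypes drawn from two disjoint halves of the sample; the greedy medoid-style selection, the precomputed dissimilarity matrix D and the use of normalised mutual information are all assumptions of the sketch.

    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    def greedy_prototypes(D, candidates, k):
        # Greedily pick k prototype indices from `candidates`, each time adding
        # the graph that most reduces the summed distance of the candidates to
        # their nearest chosen prototype (a simple medoid-style heuristic).
        chosen = []
        for _ in range(k):
            best, best_cost = None, np.inf
            for c in candidates:
                if c in chosen:
                    continue
                cost = D[np.ix_(candidates, chosen + [c])].min(axis=1).sum()
                if cost < best_cost:
                    best, best_cost = c, cost
            chosen.append(best)
        return chosen

    def assignment(D, prototypes):
        # Label every graph in the sample by the index of its nearest prototype.
        return D[:, prototypes].argmin(axis=1)

    def select_prototype_size(D, k_max=10, seed=0):
        # Split the sample into two halves, select k prototypes from each half,
        # and score k by the agreement (normalised mutual information) between
        # the two partitions those prototypes induce on the whole sample.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(D.shape[0])
        half_a, half_b = list(idx[: len(idx) // 2]), list(idx[len(idx) // 2 :])
        scores = {}
        for k in range(2, k_max + 1):
            pa = greedy_prototypes(D, half_a, k)
            pb = greedy_prototypes(D, half_b, k)
            scores[k] = normalized_mutual_info_score(assignment(D, pa),
                                                     assignment(D, pb))
        return max(scores, key=scores.get), scores

    # Demonstration with a synthetic dissimilarity matrix standing in for
    # precomputed pairwise graph distances (e.g. approximate edit distances):
    pts = np.random.default_rng(1).standard_normal((60, 2))
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    print(select_prototype_size(D, k_max=6)[0])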

Relevance:

80.00%

Publisher:

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. We first present a method using maximum likelihood estimation and propose a simple algorithm which is capable of identifying associations between variables. We also adopt an information-theoretic approach and develop a novel procedure for training HMMs to maximise the mutual information between delayed time series. Both methods are successfully applied to real data. We model the oil drilling process with HMMs and estimate a crucial parameter, namely the lag for return.
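
As a simpler, model-free illustration of the same information-theoretic idea (the training of HMMs to maximise mutual information is not reproduced here), one can scan candidate delays and keep the one that maximises a histogram estimate of the mutual information between one series and the delayed other; the function names, bin count and synthetic example below are assumptions of the sketch.

    import numpy as np

    def mutual_information(x, y, bins=16):
        # Histogram (plug-in) estimate of I(X; Y) in nats.
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    def estimate_lag(x, y, max_lag=50):
        # Return the delay d that maximises I(x_t ; y_{t+d}).
        scores = {d: mutual_information(x[:-d], y[d:]) for d in range(1, max_lag + 1)}
        return max(scores, key=scores.get), scores

    # Synthetic check: y depends non-linearly on x delayed by 7 samples, a case
    # where plain cross-correlation carries little information.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    y = np.roll(x, 7) ** 2 + 0.1 * rng.standard_normal(5000)
    print(estimate_lag(x, y)[0])   # expected to be close to 7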

Relevance:

80.00%

Publisher:

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models to identify the lag (or delay) between different variables for such data. Adopting an information-theoretic approach, we develop a procedure for training HMMs to maximise the mutual information (MMI) between delayed time series. The method is used to model the oil drilling process. We show that cross-correlation gives no information and that the MMI approach outperforms maximum likelihood.

Relevance:

80.00%

Publisher:

Abstract:

An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
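
For readers unfamiliar with probability propagation, the following is a minimal log-domain sum-product decoder for a small parity-check code over a binary symmetric channel; the Husimi-cactus analysis itself is not reproduced, and the choice of the (7,4) Hamming parity-check matrix and flip probability is purely illustrative.

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code (illustrative choice).
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def decode_bp(H, received, p, iters=20):
        # Log-domain sum-product ("probability propagation") decoding over a
        # binary symmetric channel with flip probability p.
        m, n = H.shape
        llr = np.where(received == 0, 1.0, -1.0) * np.log((1 - p) / p)
        M = H * llr                      # variable-to-check messages on H's support
        for _ in range(iters):
            # Check-to-variable update: tanh rule over the other neighbours.
            T = np.tanh(M / 2)
            E = np.zeros_like(M, dtype=float)
            for c in range(m):
                vs = np.flatnonzero(H[c])
                for v in vs:
                    others = [u for u in vs if u != v]
                    prod = np.clip(np.prod(T[c, others]), -0.999999, 0.999999)
                    E[c, v] = 2 * np.arctanh(prod)
            # Posterior LLRs and variable-to-check update.
            total = llr + E.sum(axis=0)
            M = H * total - E
            bits = (total < 0).astype(int)
            if not np.any(H @ bits % 2):  # all parity checks satisfied
                break
        return bits

    # A codeword corrupted by a single flip is recovered:
    noisy = np.zeros(7, dtype=int)
    noisy[2] ^= 1
    print(decode_bp(H, noisy, p=0.1))    # expected: all zeros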

Relevance:

80.00%

Publisher:

Abstract:

The profusion of performance measurement models suggested by the Management Accounting literature in the 1990s is one illustration of the substantial changes in Management Accounting teaching materials since the publication of “Relevance Lost” in 1987. At the same time, in the general context of increasing competition and globalisation, it is widely thought that national cultural differences are tending to disappear, meaning that management techniques used in large companies, including performance measurement and management instruments (PMS), tend to be the same irrespective of the company's nationality or location. North American management practice is traditionally described as a contractually based model, mainly focused on financial performance information and measures (FPMs) and more shareholder-focused than that of French companies. Within France, the literature has historically defined performance as broadly multidimensional, driven by the idea that there are no universal rules of management and that efficient management takes into account local culture and traditions. As opposed to their North American brethren, French companies are pressured more by the financial institutions that fund them than by capital markets; they therefore pay greater attention to the long term because they are not subject to quarterly capital market objectives. Hence, management in France should rely on longer-term, more qualitative, less financial and more multidimensional data to assess performance than their North American counterparts. The objective of this research is to investigate whether large French and US companies' practices have changed in the way the textbooks have changed with regard to performance measurement and management, or whether cultural differences are still driving differences in performance measurement and management between them. The research findings support the idea that large US and French companies share the same PMS features, influenced by ‘universal’ PM models.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we use the quantum Jensen-Shannon divergence as a means of measuring the information-theoretic dissimilarity of graphs and thus develop a novel graph kernel. In quantum mechanics, the quantum Jensen-Shannon divergence can be used to measure the dissimilarity of quantum systems specified in terms of their density matrices. We commence by computing the density matrix associated with a continuous-time quantum walk over each graph being compared. In particular, we adopt the closed-form solution of the density matrix introduced in Rossi et al. (2013) [27,28] to reduce the computational complexity and to avoid the cumbersome task of simulating the quantum walk evolution explicitly. With the density matrices for a pair of graphs to hand, the quantum graph kernel between them is then defined in terms of the quantum Jensen-Shannon divergence between those density matrices. We evaluate the performance of our kernel on several standard graph datasets from both bioinformatics and computer vision. The experimental results demonstrate the effectiveness of the proposed quantum graph kernel.
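
A minimal sketch of the kernel construction is given below; it substitutes a normalised Laplacian exponential for the closed-form time-averaged density matrix of Rossi et al., assumes both graphs have the same number of nodes, and turns the divergence into a similarity with a simple exponential, so all of those choices are assumptions of the sketch rather than the paper's construction.

    import numpy as np
    from scipy.linalg import expm

    def density_matrix(adj, beta=1.0):
        # Simplified stand-in for the graph density matrix: the normalised
        # exponential of the graph Laplacian (not the CTQW closed form).
        lap = np.diag(adj.sum(axis=1)) - adj
        rho = expm(-beta * lap)
        return rho / np.trace(rho)

    def von_neumann_entropy(rho):
        # S(rho) = -Tr(rho log rho), computed from the eigenvalues.
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log(w)).sum())

    def qjsd_kernel(adj_a, adj_b, beta=1.0):
        # Quantum Jensen-Shannon divergence based similarity between two
        # graphs of the same order, given as adjacency matrices.
        rho, sigma = density_matrix(adj_a, beta), density_matrix(adj_b, beta)
        qjsd = von_neumann_entropy((rho + sigma) / 2) - \
               (von_neumann_entropy(rho) + von_neumann_entropy(sigma)) / 2
        return np.exp(-qjsd)   # one common way to turn a divergence into a kernel

    # Example: a 4-cycle versus a 4-path.
    cycle = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
    path  = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
    print(qjsd_kernel(cycle, cycle), qjsd_kernel(cycle, path))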

Relevance:

30.00%

Publisher:

Abstract:

Neural networks can be regarded as statistical models and can be analysed in a Bayesian framework. Generalisation is measured by the performance on independent test data drawn from the same distribution as the training data. Such performance can be quantified by the posterior average of the information divergence between the true and the model distributions. Averaging over the Bayesian posterior guarantees internal coherence; using information divergence guarantees invariance with respect to representation. The theory generalises the least mean squares theory for linear Gaussian models to general problems of statistical estimation. The main results are: (1) the ideal optimal estimate is always given by the average over the posterior; (2) the optimal estimate within a computational model is given by the projection of the ideal estimate onto the model. This incidentally shows that some currently popular methods dealing with hyperpriors are in general unnecessary and misleading. The extension of information divergence to positive normalisable measures reveals a remarkable relation between the dual affine geometry of statistical manifolds and the geometry of a dual pair of Banach spaces. It therefore offers a conceptual simplification of information geometry. The general conclusion on the issue of evaluating neural network learning rules and other statistical inference methods is that such evaluations are only meaningful under three assumptions: the prior P(p), describing the environment of all the problems; the divergence D, specifying the requirement of the task; and the model Q, specifying the available computing resources.
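
In symbols, and taking the Kullback-Leibler form of the information divergence for concreteness (the abstract's framework extends this to general divergences on positive measures, which is not reproduced here), the generalisation measure and the two main results can be written as follows; the posterior notation π(p | D) is an assumption of this sketch.

    % Generalisation measured as the posterior-averaged divergence between the
    % true distribution p and a candidate estimate q (KL form shown):
    \[
      D(p \,\|\, q) = \int p(x)\,\ln\frac{p(x)}{q(x)}\,\mathrm{d}x ,
      \qquad
      E(q) = \int D(p \,\|\, q)\,\pi(p \mid \mathcal{D})\,\mathrm{d}p .
    \]
    % Result (1): over all distributions, E is minimised by the posterior
    % average of the true distribution,
    \[
      q^{*} = \arg\min_{q} E(q) = \int p\,\pi(p \mid \mathcal{D})\,\mathrm{d}p .
    \]
    % Result (2): within a restricted model class Q, the optimum is the
    % projection of the ideal estimate onto Q in the same divergence,
    \[
      q^{*}_{Q} = \arg\min_{q \in Q} D(q^{*} \,\|\, q) .
    \]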

Relevance:

30.00%

Publisher:

Abstract:

Neural networks are statistical models and learning rules are estimators. In this paper a theory for measuring generalisation is developed by combining Bayesian decision theory with information geometry. The performance of an estimator is measured by the information divergence between the true distribution and the estimate, averaged over the Bayesian posterior. This unifies the majority of error measures currently in use. The optimal estimators also reveal some intricate interrelationships among information geometry, Banach spaces and sufficient statistics.

Relevance:

30.00%

Publisher:

Abstract:

Outcomes measurement, that is, the measurement of the effectiveness of interventions and services, has been propelled onto the health service agenda since the introduction of the internal market in the 1990s. It arose as a result of the escalating cost of inpatient care, the need to identify which interventions work and in what situations, and the desire of service users for effective information, enabled by the consumerist agenda introduced by the Working for Patients white paper. The research reported in this thesis is an assessment of the readiness of the forensic mental health service to measure the outcomes of interventions. The research examines the type, prevalence and scope of use of outcomes measures, and further seeks a consensus of views among key stakeholders on the priority areas for future development. It discusses the theoretical basis for defining health and advances the argument that the present focus on measuring the effectiveness of care is misdirected without the input of users, particularly patients, in their care, drawing together the views of the many stakeholders who have an interest in the provision of care in the service. The research further draws on the theory of structuration to demonstrate the degree to which a duality of action, which is necessary for the development and use of outcomes measures, is in place within the service. Consequently, it highlights some of the hurdles that need to be surmounted before effective measurement of health gain can be developed in the field of study. It concludes by advancing the view that outcomes research can enable practitioners to better understand the relationship between the illness of the patient and the efficacy of treatment. This understanding, it is argued, would contribute to improving dialogue between the health care practitioner and the patient, and to providing the information necessary for moving away from the numerous untested assumptions in the field about the superiority of one treatment approach over another.

Relevance:

30.00%

Publisher:

Abstract:

There are several studies on managing risks in information technology (IT) projects. Most of these studies identify and prioritise risks through empirical research in order to suggest mitigating measures. Although they are important to clients for future projects, these studies fail to provide any framework for risk management from the IT developers' perspective. The few studies that do introduce a risk management framework for IT projects present it mostly from the clients' perspective, and very little effort has been made to integrate such frameworks with the project management cycle. As IT developers absorb a considerable amount of risk, an integrated framework for managing risks in IT projects from the developers' perspective is needed in order to ensure success. The main objective of this paper is to develop such a risk management framework for IT projects from the developers' perspective. This study uses a combined qualitative and quantitative technique with the active involvement of stakeholders in order to identify, analyse and respond to risks. The entire methodology is explained using a case study of an information technology project in a public sector organisation in Barbados.

Relevance:

30.00%

Publisher:

Abstract:

This paper will outline a research methodology informed by theorists who have contributed to actor network theory (ANT). Research informed by such a perspective recognizes the constitutive role of accounting systems in the achievement of broader social goals. Latour, Knorr Cetina and others argue that the bringing in of non-human actants, through the growth of technology and science, has added immeasurably to the complexity of modern society. The paper ‘sees’ accounting and accounting systems as being constituted by technological ‘black boxes’ and seeks to discuss two questions. One concerns the processes which surround the establishment of ‘facts’, i.e. how ‘black boxes’ are created or accepted (even if temporarily) within society. The second concerns the role of existing ‘black boxes’ within society and organizations. Accounting systems not only promote a particular view of the activities of an organization or a subunit, but in their very implementation and operation ‘mobilize’ other organizational members in a particular direction. The implications of such an interpretation are explored in this paper, firstly through a discussion of some of the theoretical constructs that have been proposed to frame ANT research, and secondly by relating some of these ideas to aspects of the empirics in a qualitative case study. The case site is in the health sector and involves the implementation of a casemix accounting system. Evidence from the case research is used to exemplify aspects of the theoretical constructs.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To investigate current use of the internet and eHealth amongst adults. Design: Focus groups were conducted to explore participants' attitudes to and reasons for health internet use. Main outcome measures: The focus group data were analysed and interpreted using thematic analysis. Results: Three superordinate themes exploring eHealth behaviours were identified: decline in expert authority, pervasiveness of health information on the internet, and empowerment. Results showed participants enjoyed the immediate benefits of eHealth information and felt empowered by increased knowledge, but they would be reluctant to lose face-to-face consultations with their GP. Conclusions: Our findings illustrate changes in patient identity and a decline in expert authority, with ramifications for the practitioner–patient relationship and subsequent implications for health management more generally.

Relevance:

30.00%

Publisher:

Abstract:

We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
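
For reference, the classical Picard iteration that the domain-theoretic solver is built on can be sketched with exact polynomial arithmetic as below; the validated interval enclosures, the arbitrary-precision floating-point arithmetic and the static precision schedule described in the abstract are not reproduced, and the example IVP is an assumption.

    from numpy.polynomial import Polynomial

    def picard(f, y0, iterations):
        # Classical Picard iteration y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds,
        # carried out exactly on polynomial representations of the iterates.
        t = Polynomial([0.0, 1.0])        # the identity polynomial t
        y = Polynomial([float(y0)])
        for _ in range(iterations):
            y = f(t, y).integ() + y0      # integ() integrates from 0
        return y

    # IVP y' = y, y(0) = 1, whose solution is exp(t); the iterates are the
    # Taylor partial sums of exp, so five iterations give ~2.7167 at t = 1.
    y5 = picard(lambda t, y: y, 1.0, iterations=5)
    print(y5(1.0))

In the setting of the paper, each such iteration would instead be carried out on interval enclosures at a floating-point precision fixed in advance by the convergence analysis.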

Relevance:

30.00%

Publisher:

Abstract:

Experiments combining different groups or factors and which use ANOVA are a powerful method of investigation in applied microbiology. ANOVA enables not only the effect of individual factors to be estimated but also their interactions; information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the sample size required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA is a more important indicator of the ‘power’ of the experiment than the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for the error term of the ANOVA testing effects of particular interest. Finally, it is important to always consider the design of the experiment because this determines the appropriate ANOVA to use. Hence, it is necessary to be able to identify the different forms of ANOVA appropriate to different experimental designs and to recognise when a design is a split-plot or incorporates a repeated measure. If there is any doubt about which ANOVA to use in a specific circumstance, the researcher should seek advice from a statistician with experience of research in applied microbiology.
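
To make the replication rule of thumb concrete, the short helper below computes the error degrees of freedom of a fully replicated two-factor (a x b) factorial, ab(n - 1), and the smallest number of replicates per cell that reaches the 15 DF suggested above; the function names and the 3 x 2 example are illustrative.

    def error_df(levels_a, levels_b, replicates):
        # Error DF for a fully replicated a x b factorial ANOVA: ab(n - 1).
        return levels_a * levels_b * (replicates - 1)

    def replicates_needed(levels_a, levels_b, target_df=15):
        # Smallest n per cell giving at least target_df error degrees of freedom.
        n = 2
        while error_df(levels_a, levels_b, n) < target_df:
            n += 1
        return n

    # A 3 x 2 factorial: 3 replicates per cell give only 12 error DF, so 4 are
    # needed to reach the suggested 15.
    print(error_df(3, 2, 3), replicates_needed(3, 2))   # 12 4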

Relevance:

30.00%

Publisher:

Abstract:

This Thesis addresses the problem of automated, false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electro-encephalographic (EEG) and electrocardiographic (ECG) time-series. The approach relies on a biomedical case for the coupling of the Brain and Heart systems through the central autonomic network during temporal lobe epileptic events: neurovegetative manifestations associated with temporal lobe epileptic events consist of alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The probabilistic description of arrhythmias observed during temporal lobe epileptic events and the information-theoretic description of the complexity of the state of the brain are integrated in a fusion-of-information framework towards temporal lobe epileptic seizure detection. The main contributions of the Thesis include: the introduction of a biomedical case for the coupling of the Brain and Heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time-series, towards their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events, towards their integration in a fusion-of-knowledge framework; and the investigation of the different levels of the fusion-of-information architecture at which to perform the combination of information extracted from the EEG and ECG time-series. The method designed in the Thesis for the false-positive-free automated detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
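
As a toy illustration of decision-level fusion only (none of the Thesis's actual features, thresholds or architecture are reproduced here), a window might be flagged only when an EEG-derived complexity measure falls and an ECG-derived heart rate rises at the same time, which is how combining the two channels can reject events that either channel alone would raise as false positives; every name and value below is hypothetical.

    import numpy as np

    def detect_fused(eeg_complexity, heart_rate, c_thresh, hr_thresh):
        # Decision-level fusion: flag a window only when the EEG complexity
        # measure falls below c_thresh AND the heart rate exceeds hr_thresh.
        eeg_flag = np.asarray(eeg_complexity) < c_thresh
        ecg_flag = np.asarray(heart_rate) > hr_thresh
        return eeg_flag & ecg_flag

    # Toy per-window features: window 3 shows both a complexity drop and a
    # tachycardic episode, window 5 shows only an EEG artefact.
    complexity = [0.9, 0.8, 0.9, 0.3, 0.85, 0.2, 0.9]
    heart_rate = [70, 72, 71, 115, 74, 73, 69]
    print(detect_fused(complexity, heart_rate, c_thresh=0.5, hr_thresh=100))
    # -> only window 3 is flagged; the EEG-only event at window 5 is rejected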