48 results for Bayesian inference, Behaviour analysis, Security, Visual surveillance


Relevance:

100.00%

Publisher:

Abstract:

Bayesian algorithms set a limit on the performance that learning algorithms can achieve. Natural selection should guide the evolution of information processing systems towards those limits. What can we learn from this evolution, and what properties do the intermediate stages have? While this question is too general to permit any answer, progress can be made by restricting the class of information processing systems under study. We present analytical and numerical results for the evolution of on-line algorithms for learning from examples for neural network classifiers, which may or may not include a hidden layer. The analytical results are obtained by solving a variational problem to determine the learning algorithm that leads to maximum generalization ability. Simulations using evolutionary programming, for programs that implement learning algorithms, confirm and expand the results. The principal result is not just that the evolution is towards a Bayesian limit, but that this limit is essentially reached. In addition, we find that evolution is driven by the discovery of useful structures or combinations of variables and operators; across different runs, such combinations are discovered in the same temporal order. In particular, combinations that signal the surprise brought by an example always arise before combinations that serve to gauge the performance of the learning algorithm. These latter structures can be used to implement annealing schedules. The temporal ordering can also be understood analytically by carrying out the functional optimization in restricted functional spaces. We also show that there are data suggesting that the appearance of these traits follows the same temporal ordering in biological systems. © 2006 American Institute of Physics.
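To make the on-line learning setting concrete, the following is a minimal teacher-student perceptron sketch in which the weight update is modulated by a function of each example's "surprise" (its stability, or margin, under the current student). The modulation function, dimensions and learning setup are illustrative assumptions, not the variationally optimised algorithm of the paper.

import numpy as np

# Minimal teacher-student perceptron sketch (illustrative only; the modulation
# function below is a hypothetical stand-in for the variationally optimised one).
rng = np.random.default_rng(0)
N = 100                                       # input dimension
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)
student = rng.standard_normal(N) * 0.01

def generalization_error(w, t):
    # Angle between student and teacher gives the generalization error
    cos = w @ t / (np.linalg.norm(w) * np.linalg.norm(t))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

def modulation(stability):
    # "Surprise"-driven modulation: larger updates for examples the student
    # gets wrong or classifies with only a small margin (hypothetical choice).
    return np.exp(-stability)

for _ in range(20000):
    x = rng.standard_normal(N)
    label = np.sign(teacher @ x)                              # example from the teacher
    stability = label * (student @ x) / np.linalg.norm(student)
    student += (modulation(stability) / N) * label * x        # on-line update

print(f"final generalization error ~ {generalization_error(student, teacher):.3f}")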

Relevance:

60.00%

Publisher:

Abstract:

This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
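As a concrete illustration of the simplest benchmark above, here is a minimal sketch (assuming numpy) that simulates an Ornstein-Uhlenbeck path with the Euler-Maruyama scheme and recovers the drift parameter by maximising the exact Gaussian transition likelihood over a grid; the parameter values are illustrative and not taken from the thesis.

import numpy as np

# Simulate an Ornstein-Uhlenbeck path and recover theta from the exact likelihood.
rng = np.random.default_rng(1)
theta_true, sigma, dt, n_steps = 2.0, 0.5, 0.01, 5000

x = np.empty(n_steps + 1)
x[0] = 0.0
for t in range(n_steps):
    x[t + 1] = x[t] - theta_true * x[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def ou_loglik(theta, path, sigma, dt):
    # Exact OU transition density: x_{t+dt} | x_t ~ N(x_t e^{-theta dt}, var)
    mean = path[:-1] * np.exp(-theta * dt)
    var = sigma**2 * (1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta)
    resid = path[1:] - mean
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + resid**2 / var)

thetas = np.linspace(0.1, 5.0, 200)
theta_hat = thetas[np.argmax([ou_loglik(th, x, sigma, dt) for th in thetas])]
print(f"true theta = {theta_true}, estimated theta ~ {theta_hat:.2f}")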

Relevance:

50.00%

Publisher:

Abstract:

The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
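A minimal sketch of the sampling idea, assuming a one-dimensional bimodal posterior as a stand-in for the multimodal wind-direction posterior (for example, the 180-degree direction ambiguity); this is a plain random-walk Metropolis sampler, not the enhanced Markov chain Monte Carlo method of the paper.

import numpy as np

# Random-walk Metropolis on a toy bimodal posterior (illustrative only).
rng = np.random.default_rng(2)

def log_post(x):
    # Mixture of two Gaussians standing in for a multimodal posterior
    return np.log(0.5 * np.exp(-0.5 * (x - 2.0) ** 2) +
                  0.5 * np.exp(-0.5 * (x + 2.0) ** 2))

samples, x = [], 0.0
for _ in range(20000):
    prop = x + rng.normal(scale=2.5)          # wide proposals help jump between modes
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5000:])            # discard burn-in
print(f"fraction of samples in the positive mode: {np.mean(samples > 0):.2f}")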

Relevance:

50.00%

Publisher:

Abstract:

Background and Aims: Consumption of antioxidant nutrients can reduce the risk of progression of age-related macular degeneration (AMD), the leading cause of visual impairment in adults over the age of 50 years in the UK. Lutein and zeaxanthin (L&Z) are of particular interest because they are selectively absorbed by the central retina. The objectives of this study were to analyse the dietary intake of a group of AMD patients, assess their ability to prepare and cook healthy food, and to make comparisons with people not affected by AMD. Methods: 158 participants with AMD were recruited via the UK charity The Macular Society, and fifty participants without AMD were recruited from optometric practice. Trained workers conducted a telephone interview in which participants completed a 24-hour food diary and answered questions about their cooking and shopping capabilities. Results: In the AMD group, the average L&Z intake was low for both males and females. Those able to cook a hot meal consumed significantly more L&Z than those who were not able to. Most participants were not consuming the recommended dietary allowance of fibre, calcium, or vitamins D and E, and calorific intake was also lower than recommendations for their age group. The non-AMD group consumed more kilocalories and more nutrients than the AMD group, but their L&Z intake was similar to that of those with AMD. The main factor that influenced participants' food choices was personal preference. Conclusion: Despite being an 'informed' population, many AMD participants were under-consuming nutrients considered to be useful for their condition. Participants without AMD were more likely to reach recommended daily allowance values for energy and a range of nutrients. It is therefore essential to design more effective dietary education and dissemination methods for people with, and at risk of, AMD.

Relevance:

50.00%

Publisher:

Abstract:

Lifelong surveillance is not cost-effective after endovascular aneurysm repair (EVAR), but is required to detect aortic complications which are fatal if untreated (type 1/3 endoleak, sac expansion, device migration). Aneurysm morphology determines the probability of aortic complications and therefore the need for surveillance, but existing analyses have proven incapable of identifying patients at sufficiently low risk to justify abandoning surveillance. This study aimed to improve the prediction of aortic complications through the application of machine-learning techniques. Patients undergoing EVAR at 2 centres were studied from 2004 to 2010. Aneurysm morphology had previously been studied to derive the SGVI Score for predicting aortic complications. Bayesian Neural Networks were designed using the same data, to dichotomise patients into groups at low or high risk of aortic complications. Network training was performed only on patients treated at centre 1. External validation was performed by assessing network performance independently of network training, on patients treated at centre 2. Discrimination was assessed by Kaplan-Meier analysis to compare aortic complications in predicted low-risk versus predicted high-risk patients. 761 patients aged 75 ± 7 years underwent EVAR in 2 centres. Mean follow-up was 36 ± 20 months. Neural networks were created incorporating neck angulation/length/diameter/volume; AAA diameter/area/volume/length/tortuosity; and common iliac tortuosity/diameter. A 19-feature network predicted aortic complications with excellent discrimination and external validation (5-year freedom from aortic complications in predicted low-risk vs. predicted high-risk patients: 97.9% vs. 63%; p < 0.0001). A Bayesian Neural-Network algorithm can identify patients in whom it may be safe to abandon surveillance after EVAR. This proposal requires prospective study.
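As a much simplified stand-in for the Bayesian neural network, the sketch below fits a Bayesian logistic regression to synthetic "morphology" features with a Laplace approximation and dichotomises a new case into a low- or high-risk group; the feature set, prior precision and 0.5 threshold are hypothetical choices, not those of the study.

import numpy as np

# Bayesian logistic regression with a Laplace approximation (illustrative only).
rng = np.random.default_rng(3)
n, d = 200, 3                                   # e.g. neck angulation, AAA diameter, iliac tortuosity
X = rng.standard_normal((n, d))
true_w = np.array([1.5, -2.0, 0.8])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)

alpha = 1.0                                     # precision of the Gaussian prior on the weights
w = np.zeros(d)
for _ in range(50):                             # Newton iterations to the MAP weights
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (p - y) + alpha * w
    H = X.T @ (X * (p * (1 - p))[:, None]) + alpha * np.eye(d)
    w -= np.linalg.solve(H, grad)

Sigma = np.linalg.inv(H)                        # Laplace posterior covariance

def predictive_prob(x):
    # Probit-style moderation of the logistic output under the Gaussian posterior
    mu, var = x @ w, x @ Sigma @ x
    return 1.0 / (1.0 + np.exp(-mu / np.sqrt(1.0 + np.pi * var / 8.0)))

x_new = rng.standard_normal(d)
risk = predictive_prob(x_new)
print("high-risk" if risk > 0.5 else "low-risk", f"(p = {risk:.2f})")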

Relevance:

40.00%

Publisher:

Abstract:

We are concerned with the problem of image segmentation in which each pixel is assigned to one of a predefined finite number of classes. In Bayesian image analysis, this requires fusing together local predictions for the class labels with a prior model of segmentations. Markov Random Fields (MRFs) have been used to incorporate some of this prior knowledge, but this is not entirely satisfactory as inference in MRFs is NP-hard. The multiscale quadtree model of Bouman and Shapiro (1994) is an attractive alternative, as it is a tree-structured belief network in which inference can be carried out in linear time (Pearl 1988). It is a hierarchical model in which the bottom-level nodes are pixels and higher levels correspond to downsampled versions of the image. The conditional-probability tables (CPTs) in the belief network encode the knowledge of how the levels interact. In this paper we discuss two methods of learning the CPTs given training data, using (a) maximum likelihood and the EM algorithm and (b) conditional maximum likelihood (CML). Segmentations obtained using networks trained by CML show a statistically significant improvement in performance on synthetic images. We also demonstrate the methods on a real-world outdoor-scene segmentation task.
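To illustrate the tree-structured inference, the following toy sketch runs a single upward belief-propagation step on one quadtree fragment (a parent node with four child pixels), combining local class likelihoods with a conditional-probability table; the class count and CPT values are synthetic, and this is not the full Bouman-Shapiro model or the training procedures compared in the paper.

import numpy as np

# Toy two-level quadtree fragment: one parent node with four child pixels.
K = 3                                            # number of classes
rng = np.random.default_rng(4)

# Conditional-probability table P(child class | parent class), one row per parent class
cpt = rng.dirichlet(np.ones(K) * 5.0, size=K)    # shape (K, K)

# Local evidence at each child pixel: P(observation | child class)
child_evidence = rng.dirichlet(np.ones(K), size=4)   # shape (4, K)

# Upward message from each child: lambda_c(parent) = sum_j P(j | parent) * P(obs | j)
lambdas = child_evidence @ cpt.T                 # shape (4, K)

prior_parent = np.full(K, 1.0 / K)
posterior_parent = prior_parent * np.prod(lambdas, axis=0)
posterior_parent /= posterior_parent.sum()
print("posterior over parent class:", np.round(posterior_parent, 3))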

Relevance:

40.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to develop and test a model of the role managers and peers play in shaping salespeople's ethical behaviour. The model specifies that sales manager personal moral philosophies, whether sales managers themselves are rewarded according to the outcomes or behaviours of their salespeople, sales team job security, intra-team cooperation, and sales team tactical performance all influence sales team ethical standards. In turn, ethical standards influence the probability that sales team members will behave (un)ethically when faced with ethical dilemmas. Design/methodology/approach – The model is tested on a sample of 154 Finnish sales managers. Data were collected via mail survey. Analysis was undertaken using structural equation modelling. Findings – Ethical standards appear to be shaped by several factors; behaviour-based management controls increase ethical standards, relativist managers tend to manage less ethically-minded sales teams, job insecurity impedes the development of ethical standards, and sales teams' cooperation activity increases ethical standards. Sales teams are less likely to engage in unethical behaviour when the teams have strong ethical standards. Research limitations/implications – Cross-sectional data limits generalisability; single country data may limit the ability to generalise to different sales environments; additional measure development is needed; identification of additional antecedent factors would be beneficial. Practical implications – Sales managers should consciously develop high ethical standards in sales teams if they wish to reduce unethical behaviour. Ethical standards can be improved if sales managers change their own outward behaviour (exhibit a less relativistic ethical philosophy), foster cooperation amongst salespeople, and develop perceptions of job security. How sales managers are rewarded may shape how they approach the management of ethical behaviour in their sales teams. Originality/value – This paper appears to be the first to simultaneously examine both sales manager-specific and sales team-specific antecedents to sales team ethical standards and behaviours. As such, it provides an important base for research in this critical area.

Relevance:

40.00%

Publisher:

Abstract:

The topography of the visual evoked magnetic response (VEMR) to a pattern onset stimulus was studied in five normal subjects using a single channel BTi magnetometer. Topographic distributions were analysed at regular intervals following stimulus onset (chronotopography). Two distinct field distributions were observed with half-field stimulation: (1) activity corresponding to the CIIm, which remains stable for an average of 34 msec, and (2) activity corresponding to the CIIIm, which remains stable for about 50 msec. However, the full-field topography of the largest peak within the first 130 msec does not have a predictable latency or topography in different subjects. The data suggest that the appearance of this peak is dependent on the amplitude, latency and duration of the half-field CIIm peaks and the efficiency of half-field summation. Hence, topographic mapping is essential to correctly identify the CIIm peak in a full-field response, as waveform morphology, peak latency and polarity are not reliable indicators. © 1993.

Relevance:

40.00%

Publisher:

Abstract:

This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that this extension of the variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz '96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
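For concreteness, the sketch below simulates the univariate stochastic double-well system with the Euler-Maruyama scheme, using the commonly quoted drift 4x(theta - x^2); the parameter values are illustrative assumptions, not those used in the paper.

import numpy as np

# Euler-Maruyama simulation of a stochastic double-well system (illustrative only).
rng = np.random.default_rng(5)
theta, sigma, dt, n_steps = 1.0, 0.7, 0.01, 50000

x = np.empty(n_steps + 1)
x[0] = -1.0
for t in range(n_steps):
    drift = 4.0 * x[t] * (theta - x[t] ** 2)     # wells near +/- sqrt(theta)
    x[t + 1] = x[t] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Count transitions between the two wells, a rough indication of the non-linearity
crossings = np.sum(np.diff(np.sign(x)) != 0)
print(f"well-to-well crossings observed: {crossings}")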

Relevance:

40.00%

Publisher:

Abstract:

Distributed source analyses of half-field pattern onset visual evoked magnetic responses (VEMR) were carried out by the authors with a view to locating the source of the largest of the components, the CIIm. The analyses were performed using a series of realistic source spaces taking into account the anatomy of the visual cortex. Accuracy was enhanced by constraining the source distributions to lie within the visual cortex only. Further constraints on the source space yielded reliable, but possibly less meaningful, solutions.
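A generic sketch of a distributed source estimate, assuming a synthetic lead field and a simple minimum-norm (Tikhonov-regularised) inverse; the realistic, anatomically constrained source spaces of the study are not reproduced here, and the sensor and source counts are arbitrary.

import numpy as np

# Minimum-norm distributed source estimate on synthetic data (illustrative only).
rng = np.random.default_rng(6)
n_sensors, n_sources = 37, 200
L = rng.standard_normal((n_sensors, n_sources))  # synthetic lead-field matrix

q_true = np.zeros(n_sources)
q_true[50:55] = 1.0                              # a small patch of active sources
b = L @ q_true + 0.05 * rng.standard_normal(n_sensors)   # simulated sensor data

lam = 1.0                                        # Tikhonov regularisation parameter
q_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)

inside = np.mean(np.abs(q_hat[50:55]))
outside = np.mean(np.abs(np.concatenate([q_hat[:50], q_hat[55:]])))
print(f"mean |estimate| inside active patch: {inside:.3f}, outside: {outside:.3f}")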

Relevance:

40.00%

Publisher:

Abstract:

The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. We analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly decreasing behaviour of the curves, and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix over the diagonal one. The thesis also presents an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks were trained on the task of labelling segmented outdoor images.
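The following sketch illustrates the general distance matrix idea for Gaussian process regression: the RBF kernel measures distances through a non-diagonal positive semi-definite metric M = A^T A, so the effective input dimension can be lower than the manifest one; the data, metric and noise level are synthetic assumptions.

import numpy as np

# GP regression with a general (non-diagonal) distance matrix in the RBF kernel.
rng = np.random.default_rng(7)

A = np.array([[1.0, 0.5], [0.0, 0.1]])          # hidden linear map: roughly one effective dimension
M = A.T @ A                                      # general positive semi-definite distance matrix

def kernel(X1, X2, M):
    diff = X1[:, None, :] - X2[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)   # squared Mahalanobis distances
    return np.exp(-0.5 * d2)

X = rng.uniform(-3, 3, size=(40, 2))
y = np.sin(X @ A.T[:, 0]) + 0.1 * rng.standard_normal(40)   # target depends on one latent direction

noise = 0.01
K = kernel(X, X, M) + noise * np.eye(len(X))
X_test = rng.uniform(-3, 3, size=(5, 2))
mean = kernel(X_test, X, M) @ np.linalg.solve(K, y)          # GP posterior mean
print("predictive mean at 5 test points:", np.round(mean, 2))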

Relevance:

40.00%

Publisher:

Abstract:

The proliferation of visual display terminals (VDTs) in offices is an international phenomenon. Numerous studies have investigated the health implications, which can be categorised into visual problems, symptoms of musculo-skeletal discomfort, or psychosocial effects. The psychosocial effects are broader, and the evidence in this area is mixed. The inconsistent results from the studies of VDT work so far undertaken may reflect several methodological shortcomings. In an attempt to overcome these deficiencies and to broaden the model of inter-relationships, a model was developed to investigate their interactions and the outputs of job satisfaction, stress and ill health. The study was a two-stage, long-term investigation with measures taken before the VDTs were introduced and the same measures taken 12 months after the 'go-live' date. The research was conducted in four offices of the Department of Social Security. The data were analysed for each individual site and, in addition, the total data were used in a path analysis model. Significant positive relationships were found at the pre-implementation stage between musculo-skeletal discomfort, psychosomatic ailments, visual complaints and stress. Job satisfaction was negatively related to visual complaints and musculo-skeletal discomfort. Direct paths were found for age and job level with variety found in the job, and for age with job satisfaction and a negative relationship with the office environment. The only job characteristic which had a direct path to stress was 'dealing with others'. Similar inter-relationships were found in the post-implementation data. However, in addition, attributes of the computer system, such as screen brightness and glare, were related positively to stress and negatively to job satisfaction. The comparison of the data at the two stages found no significant changes in the users' perceptions of their job characteristics and job satisfaction, but there was a small and significant reduction in the stress measure.

Relevance:

40.00%

Publisher:

Abstract:

The thesis examines Kuhn's (1962, 1970) concept of paradigm, assesses how it is employed for mapping intellectual terrain in the social sciences, and evaluates its use in research based on multiple theory positions. In so doing it rejects both the thesis of total paradigm 'incommensurability' (Kuhn, 1962) and that of liberal 'translation' (Popper, 1970), in favour of a middle ground reached through the 'language-game of everyday life' (Wittgenstein, 1953). The thesis ultimately argues for the possibility of being 'trained into' new paradigms, given the premise that 'unorganised experience cannot order perception' (Phillips, 1977). In conducting multiple paradigm research the analysis uses the Burrell and Morgan (1979) model to examine the work organisation of a large provincial fire service. This analysis comprises, firstly, a 'functionalist' assessment of work design, demonstrating inter alia the decrease in reported motivation with length of service; secondly, an 'interpretive' portrayal of the daily accomplishment of task routines, highlighting the discretionary and negotiated nature of the day's events; thirdly, a 'radical humanist' analysis of workplace ideology, demonstrating the hegemonic role of officer training practices; and finally, a 'radical structuralist' description of the labour process, focusing on the establishment of a 'normal working day'. Although the argument is made for the possibility of conducting multiple paradigm research, the conclusion stresses the many institutional pressures serving to offset such development.