36 results for categorization IT PFC computational neuroscience model HMAX
in Aston University Research Archive
Abstract:
This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. It addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. The topic is driven by a series of problems in neuroscience, which represent the principal motivation behind the material in this work. The underlying system is the human brain, and the data are generated by modern electromagnetic neuroimaging methods. In this thesis, the underlying functional mechanisms of the brain are described using the recent mathematical formalism of dynamical systems on complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis with which to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity patterns developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way neural elements communicate with each other over the brain's anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which refers to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel algorithms for functional connectivity, designed to extract different specific aspects of interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow indirect assessment of synchronisation in a local network from a single time series. This approach is useful for inferring the coupling within a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and of inferring coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The discussion is developed alongside examples using both synthetic and real experimental data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies; they are used to test the techniques developed in this thesis. The real datasets illustrate the problem of brain connectivity in the context of important neurological disorders such as epilepsy and Parkinson's disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterise the underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs.
The methodology is also applied to a MEG dataset containing healthy, Parkinson's and dementia subjects, with the aim of distinguishing pathological from physiological patterns of connectivity.
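The thesis builds on a machine-learning extension of Granger causality; as a point of reference only, the sketch below shows the classical linear formulation, in which a signal y is said to Granger-cause x if including y's past reduces the residual variance of an autoregressive model of x. Function and variable names are illustrative and do not come from the thesis.

import numpy as np

def granger_causality(x, y, order=2):
    """Classical linear Granger causality from y to x.

    Compares the residual variance of an AR model of x on its own past
    with that of a model that also includes the past of y.  A log-ratio
    greater than zero suggests that y helps predict x.
    """
    n = len(x)
    X_own, X_joint, target = [], [], []
    for t in range(order, n):
        past_x = x[t - order:t][::-1]
        past_y = y[t - order:t][::-1]
        X_own.append(past_x)
        X_joint.append(np.concatenate([past_x, past_y]))
        target.append(x[t])
    X_own, X_joint, target = map(np.asarray, (X_own, X_joint, target))

    def residual_var(design):
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return (target - design @ coef).var()

    return np.log(residual_var(X_own) / residual_var(X_joint))

# Toy usage: y drives x with a one-step delay, so the y -> x statistic
# should be clearly positive and the x -> y statistic close to zero.
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
x = 0.8 * np.roll(y, 1) + 0.2 * rng.standard_normal(2000)
print(granger_causality(x, y))   # y -> x
print(granger_causality(y, x))   # x -> y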
Abstract:
Urinary bladder diseases are a common problem throughout the world and are often difficult to diagnose accurately. Furthermore, they pose a heavy financial burden on health services. Urinary bladder tissue from male pigs was measured spectrophotometrically and the resulting data used to calculate the absorption, transmission and reflectance parameters, along with the derived scattering and absorption coefficients. These were employed to create a "generic" computational bladder model based on optical properties, simulating the propagation of photons through the tissue at different wavelengths. Using the Monte Carlo method and fluorescence spectra at UV and blue excitation wavelengths, diagnostically important biomarkers were modeled. Additionally, the multifunctional noninvasive diagnostics system "LAKK-M" was used to gather fluorescence data to provide further essential comparisons. The ultimate goal of the study was to simulate the effects of varying excitation wavelengths on bladder tissue in order to determine the effectiveness of photonics-based diagnostic devices. With increased accuracy, this model could be used to aid reliably in differentiating healthy and pathological tissues within the bladder and, potentially, other hollow organs.
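As a rough illustration of the kind of Monte Carlo photon transport described above (a generic slab model, not the authors' bladder model), the sketch below propagates photon packets through a homogeneous layer using assumed absorption and scattering coefficients and isotropic scattering; all names and parameter values are placeholders.

import numpy as np

rng = np.random.default_rng(1)

def slab_transmission(mu_a, mu_s, thickness_mm, n_photons=5000):
    """Toy Monte Carlo photon transport through a homogeneous slab.

    mu_a and mu_s are absorption and scattering coefficients in mm^-1.
    Photon packets take exponentially distributed steps, lose weight at
    each interaction (absorption) and are re-emitted isotropically
    (scattering).  Returns the weighted fraction exiting the far face.
    """
    mu_t = mu_a + mu_s
    transmitted = 0.0
    for _ in range(n_photons):
        z, cos_theta, weight = 0.0, 1.0, 1.0
        while 0.0 <= z <= thickness_mm and weight > 1e-4:
            step = -np.log(rng.random()) / mu_t   # sample free path length
            z += step * cos_theta
            weight *= mu_s / mu_t                 # fraction surviving absorption
            cos_theta = 2.0 * rng.random() - 1.0  # isotropic re-scattering
        if z > thickness_mm:
            transmitted += weight
    return transmitted / n_photons

# Wavelength dependence enters through mu_a and mu_s, e.g. coefficients
# derived from spectrophotometric measurements (illustrative numbers only).
print(slab_transmission(mu_a=0.05, mu_s=5.0, thickness_mm=1.0))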
Abstract:
The use of the multiple indicators, multiple causes model to operationalize formative variables (the formative MIMIC model) is advocated in the methodological literature. Yet, contrary to popular belief, the formative MIMIC model does not provide a valid method of integrating formative variables into empirical studies and we recommend discarding it from formative models. Our arguments rest on the following observations. First, much formative variable literature appears to conceptualize a causal structure between the formative variable and its indicators which can be tested or estimated. We demonstrate that this assumption is illogical, that a formative variable is simply a researcher-defined composite of sub-dimensions, and that such tests and estimates are unnecessary. Second, despite this, researchers often use the formative MIMIC model as a means to include formative variables in their models and to estimate the magnitude of linkages between formative variables and their indicators. However, the formative MIMIC model cannot provide this information since it is simply a model in which a common factor is predicted by some exogenous variables—the model does not integrate within it a formative variable. Empirical results from such studies need reassessing, since their interpretation may lead to inaccurate theoretical insights and the development of untested recommendations to managers. Finally, the use of the formative MIMIC model can foster fuzzy conceptualizations of variables, particularly since it can erroneously encourage the view that a single focal variable is measured with formative and reflective indicators. We explain these interlinked arguments in more detail and provide a set of recommendations for researchers to consider when dealing with formative variables.
Abstract:
This paper focusses on attracting and retaining young people into technical disciplines. It introduces a new model of technical education from age 14 that the UK Government initiated in 2008: University-led Technical Colleges (UTCs) for 14-19 year olds. These state-supported schools, sponsored by a university, have technical curricula, technologically enabled learning environments and strong engagement with employers. As new schools they have been able to recruit outstanding staff who are conversant with the use of technology to enhance learning, and all students have their own iPads. The Aston University Engineering Academy opened in September 2012, and a recent survey of staff, students and parents has provided both qualitative and quantitative data on the benefits of these embedded iPads for motivation and learning. The devices have also had advantages for the management of data on student achievement from the leadership, teaching staff and parental viewpoints.
Abstract:
Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that well-approximates the true variance.
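A minimal sketch of the construction described above, under stated assumptions: an RBF kernel for both processes and a preconditioned Crank-Nicolson Metropolis sampler as one possible MCMC scheme (not necessarily the one used in the paper). Given latent log noise rates g with a GP prior, the outputs are marginally Gaussian with covariance K_f + diag(exp(g)), so g can be sampled directly; all names are illustrative.

import numpy as np

rng = np.random.default_rng(2)

def rbf(x1, x2, lengthscale, variance):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def log_lik(y, K_f, g):
    """Log-likelihood of y with signal GP f marginalised out and
    input-dependent noise variance exp(g)."""
    C = K_f + np.diag(np.exp(g))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (y @ np.linalg.solve(C, y) + logdet + len(y) * np.log(2 * np.pi))

# Synthetic data: smooth signal with noise standard deviation growing with x.
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + rng.standard_normal(80) * (0.05 + 0.4 * x)

K_f = rbf(x, x, lengthscale=0.2, variance=1.0)   # prior over the signal f
K_g = rbf(x, x, lengthscale=0.3, variance=1.0)   # prior over the log noise rate g
L_g = np.linalg.cholesky(K_g + 1e-8 * np.eye(80))

# Preconditioned Crank-Nicolson Metropolis over the latent log noise rates g:
# the GP prior is built into the proposal, so acceptance uses the likelihood only.
beta = 0.2
g = np.full(80, -2.0)
ll = log_lik(y, K_f, g)
samples = []
for it in range(3000):
    prop = np.sqrt(1 - beta ** 2) * g + beta * (L_g @ rng.standard_normal(80))
    ll_prop = log_lik(y, K_f, prop)
    if np.log(rng.random()) < ll_prop - ll:
        g, ll = prop, ll_prop
    if it > 1000:
        samples.append(np.exp(g))

print("posterior mean noise s.d. at the two ends:",
      np.sqrt(np.mean(samples, axis=0))[[0, -1]])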
Abstract:
We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of classes. In Bayesian image analysis this requires fusing local predictions for the class labels with a prior model of segmentations. Markov Random Fields (MRFs) have been used to incorporate some of this prior knowledge, but this is not entirely satisfactory as inference in MRFs is NP-hard. The multiscale quadtree model of Bouman and Shapiro (1994) is an attractive alternative, as it is a tree-structured belief network in which inference can be carried out in linear time (Pearl 1988). It is a hierarchical model in which the bottom-level nodes are pixels and higher levels correspond to downsampled versions of the image. The conditional-probability tables (CPTs) in the belief network encode the knowledge of how the levels interact. In this paper we discuss two methods of learning the CPTs given training data, using (a) maximum likelihood and the EM algorithm and (b) conditional maximum likelihood (CML). Segmentations obtained using networks trained by CML show a statistically significant improvement in performance on synthetic images. We also demonstrate the methods on a real-world outdoor-scene segmentation task.
Abstract:
This article examines whether UK portfolio returns are time varying, so that expected returns follow an AR(1) process as proposed by Conrad and Kaul for the USA. It explores this hypothesis for four portfolios formed on the basis of market capitalization. The portfolio returns are modelled using a Kalman filter signal-extraction model in which the unobservable expected return is the state variable and is allowed to evolve as a stationary first-order autoregressive process. It finds that this model is a good representation of returns and can account for most of the autocorrelation present in observed portfolio returns. The study concludes that UK portfolio returns are time varying and that the nature of the time variation appears to introduce a substantial amount of autocorrelation into portfolio returns. Like Conrad and Kaul, it finds a link between the extent to which portfolio returns are time varying and the size of firms within a portfolio, but not the monotonic one found for the USA.
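The state-space model described here can be written as r_t = mu_t + v_t with mu_t = phi*mu_{t-1} + w_t, where mu_t is the unobserved expected return. The sketch below runs the corresponding scalar Kalman filter on synthetic returns; parameter values and names are illustrative only, not the paper's estimates.

import numpy as np

def kalman_ar1_expected_return(returns, phi, q, r, mu0=0.0, p0=1.0):
    """Scalar Kalman filter for r_t = mu_t + v_t, mu_t = phi*mu_{t-1} + w_t.

    q and r are the variances of the state noise w_t and the observation
    noise v_t.  Returns the filtered estimates of the expected return mu_t.
    """
    mu, p = mu0, p0
    filtered = []
    for obs in returns:
        # Predict
        mu_pred = phi * mu
        p_pred = phi * phi * p + q
        # Update
        k = p_pred / (p_pred + r)          # Kalman gain
        mu = mu_pred + k * (obs - mu_pred)
        p = (1.0 - k) * p_pred
        filtered.append(mu)
    return np.asarray(filtered)

# Toy example with a slowly varying expected return.
rng = np.random.default_rng(3)
true_mu = np.zeros(500)
for t in range(1, 500):
    true_mu[t] = 0.95 * true_mu[t - 1] + 0.02 * rng.standard_normal()
observed = true_mu + 0.1 * rng.standard_normal(500)
est = kalman_ar1_expected_return(observed, phi=0.95, q=0.02 ** 2, r=0.1 ** 2)
print(np.corrcoef(true_mu, est)[0, 1])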
Abstract:
In 2002, we published a paper [Brock, J., Brown, C., Boucher, J., Rippon, G., 2002. The temporal binding deficit hypothesis of autism. Development and Psychopathology 14 (2), 209-224] highlighting the parallels between the psychological model of 'central coherence' in information processing [Frith, U., 1989. Autism: Explaining the Enigma. Blackwell, Oxford] and the neuroscience model of neural integration or 'temporal binding'. We proposed that autism is associated with abnormalities of information integration caused by a reduction in the connectivity between specialised local neural networks in the brain and possible overconnectivity within isolated individual neural assemblies. The current paper updates this model, providing a summary of theoretical and empirical advances in research implicating disordered connectivity in autism. This is set in the context of changes in the approach to the core psychological deficits in autism, of a greater emphasis on 'interactive specialisation' and the resultant stress on early and/or low-level deficits and their cascading effects on the developing brain [Johnson, M.H., Halit, H., Grice, S.J., Karmiloff-Smith, A., 2002. Neuroimaging of typical and atypical development: a perspective from multiple levels of analysis. Development and Psychopathology 14, 521-536]. We also highlight recent developments in the measurement and modelling of connectivity, particularly the emerging ability to track the temporal dynamics of the brain using electroencephalography (EEG) and magnetoencephalography (MEG) and to investigate the signal characteristics of this activity. This advance could be particularly pertinent in testing an emerging model of effective connectivity based on the balance between excitatory and inhibitory cortical activity [Rubenstein, J.L., Merzenich, M.M., 2003. Model of autism: increased ratio of excitation/inhibition in key neural systems. Genes, Brain and Behavior 2, 255-267; Brown, C., Gruber, T., Rippon, G., Brock, J., Boucher, J., 2005. Gamma abnormalities during perception of illusory figures in autism. Cortex 41, 364-376]. Finally, we note that this convergence of research developments not only enables a greater understanding of autism but also has implications for prevention and remediation.
Abstract:
To investigate the technical feasibility of a novel cooling system for commercial greenhouses, knowledge of the state of the art in greenhouse cooling is required. An extensive literature review was carried out which highlighted the physical processes of greenhouse cooling and showed the limitations of the conventional technology. The proposed cooling system utilises liquid desiccant technology; hence knowledge of liquid desiccant cooling is also a prerequisite for designing such a system. Extensive literature reviews of solar liquid desiccant regenerators and desiccators, which are essential parts of liquid desiccant cooling systems, were carried out to identify their advantages and disadvantages. In response to the findings, a regenerator and a desiccator were designed and constructed in the laboratory. An important factor in liquid desiccant cooling is the choice of the liquid desiccant itself, since its hygroscopicity affects the performance of the system. Bitterns, which are magnesium-rich brines derived from seawater, are proposed as an alternative liquid desiccant for cooling greenhouses. A thorough experimental and theoretical study was carried out in order to determine the properties of concentrated bitterns, and it was concluded that their properties resemble those of pure magnesium chloride solutions. Therefore, magnesium chloride solution was used in the laboratory experiments to assess the performance of the regenerator and the desiccator. To predict the performance of the whole system, the physical processes of heat and mass transfer were modelled using gPROMS® advanced process modelling software. The model was validated against the experimental results and was consequently used to model a commercial-scale greenhouse in several hot coastal areas in the tropics and sub-tropics. These case studies show that the system, when compared to evaporative cooling, achieves a 3°C-5.6°C temperature drop inside the greenhouse in hot and humid places (RH>70%) and a 2°C-4°C temperature drop in hot and dry places (50%
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques ranging from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding while the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between the two formalisms are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri-net-based techniques is the complexity associated with generating the reachability graph. This thesis addresses the problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state; these sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system, which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in the system design specification and the performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysing the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communications. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software and to identify structures which may be prone to deadlock, so that they may be eliminated from the design before the program is ever run. The design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is a Petri net simulator that takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples are presented to show how the tool works in the early design phase for fault prevention, before the program is ever run.
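As a generic illustration of the reachability-based deadlock check described above (a sketch only, not the thesis tool), the fragment below encodes two processes that each wait for the other to communicate first, builds the reachability graph by breadth-first search and reports markings in which no transition is enabled. Place and transition names are invented for the example.

from collections import deque

# A tiny Petri net: two Occam-style processes that each wait for the other
# to send first -- a classic communication deadlock pattern.
transitions = {
    # name: (tokens consumed per place, tokens produced per place)
    "P1_sends_on_A": ({"P1_waiting": 1, "chan_B": 1}, {"chan_A": 1}),
    "P2_sends_on_B": ({"P2_waiting": 1, "chan_A": 1}, {"chan_B": 1}),
}
initial_marking = {"P1_waiting": 1, "P2_waiting": 1, "chan_A": 0, "chan_B": 0}

def enabled(marking, pre):
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def reachability_deadlocks(initial):
    """Breadth-first construction of the reachability graph; returns the
    markings in which no transition is enabled (potential deadlocks)."""
    seen = {tuple(sorted(initial.items()))}
    queue = deque([initial])
    deadlocks = []
    while queue:
        m = queue.popleft()
        successors = [fire(m, pre, post)
                      for pre, post in transitions.values() if enabled(m, pre)]
        if not successors:
            deadlocks.append(m)
        for s in successors:
            key = tuple(sorted(s.items()))
            if key not in seen:
                seen.add(key)
                queue.append(s)
    return deadlocks

print(reachability_deadlocks(initial_marking))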
Abstract:
This thesis explores the innovative capacity of voluntary organizations in the field of the personal social services. It commences with a full literature review, which concludes that the wealth of research upon innovation in the organization studies field has not addressed this topic, whilst the specialist literatures upon voluntary organizations and upon the personal social services have neglected the study of innovation. The research contained in this thesis is intended to right this neglect and to integrate lessons from both fields. It combines a survey of the innovative activity of voluntary organizations in three localities with cross-sectional case studies of innovative, developmental and traditional organizations. The research concludes that innovation is an important, but not integral, characteristic of voluntary organizations. It develops a contingent model of this innovative capacity of voluntary organizations, which stresses the role of external environmental and institutional forces in shaping and releasing this capacity. It concludes by considering the contribution of this model both to organization studies and to the study of voluntary organizations.
Abstract:
Previous research has indicated that schematic eyes incorporating aspheric surfaces but lacking a gradient index are unable to model ocular spherical aberration and peripheral astigmatism simultaneously, which limits their use as wide-angle schematic eyes. This thesis challenges this assumption by investigating the flexibility of schematic eyes comprising aspheric optical surfaces and homogeneous optical media. The full variation of ocular component dimensions found in human eyes was established from the literature, and schematic eye parameter variants were limited to these dimensions. The levels of spherical aberration and peripheral astigmatism modelled by these schematic eyes were compared to the range of measured levels, also established from the literature. To simplify comparison of modelled and measured data, single-value parameters were introduced: the spherical aberration function (SAF) and the peripheral astigmatism function (PAF). Some ocular component variations produced a wide range of aberrations without exceeding the limits of human ocular components. The effect of ocular component variations on coma was also investigated, but no comparison could be made as no empirical data exist. It was demonstrated that, by combined manipulation of a number of parameters in the schematic eyes, it was possible to model all levels of ocular spherical aberration and peripheral astigmatism. However, the unique parameters of a human eye could not be obtained in this way, as a number of models could be used to produce the same spherical aberration and peripheral astigmatism while giving very different coma levels. It was concluded that these schematic eyes are flexible enough to model the monochromatic aberrations tested, the absence of a gradient index being compensated for by altering the asphericity of one or more surfaces.
Abstract:
A methodology is presented which can be used to predict the level of electromagnetic interference, in the form of conducted and radiated emissions, from variable speed drives; the drive modelled was a Eurotherm 583 drive. The conducted emissions are predicted using an accurate circuit model of the drive and its associated equipment. The circuit model was constructed from a number of different parts: the power electronics of the drive; the line impedance stabilising network used during the experimental work to measure the conducted emissions; a model of an induction motor assuming near-zero load; an accurate model of the shielded cable connecting the drive to the motor; and finally the parasitic capacitances present in the drive being modelled. The conducted emissions were predicted with an error of +/-6 dB over the frequency range 150 kHz to 16 MHz, which compares well with the limits set in the standards, which specify a frequency range of 150 kHz to 30 MHz. The conducted emissions model was also used to predict the current and voltage sources which were then used to predict the radiated emissions from the drive. Two methods for predicting the radiated emissions from the drive were investigated, the first being two-dimensional finite element analysis and the second three-dimensional transmission line matrix modelling. The finite element model took account of the features of the drive considered to produce the majority of the radiation, namely the switching of the IGBTs in the inverter, the shielded cable connecting the drive to the motor, and some of the cables present in the drive. The model also took account of the structure of the test rig used to measure the radiated emissions. It was found that the majority of the radiation came from the shielded cable and the common-mode currents flowing in its shield, and that it was feasible to model the radiation from the drive by modelling only the shielded cable. The radiated emissions were correctly predicted in the frequency range 30 MHz to 200 MHz with an error of +10 dB/-6 dB. The transmission line matrix method modelled the shielded cable connecting the drive to the motor and also took account of the architecture of the test rig. Only limited simulations were performed using the transmission line matrix model, as it was found to be a very slow method and not an ideal solution to the problem. However, the limited results obtained were comparable, to within 5%, to the results obtained using the finite element model.
Abstract:
It is conventional wisdom that collusion is more likely the fewer firms there are in a market and the more symmetric they are. This is often theoretically justified in terms of a repeated non-cooperative game. Although that model fits more easily with tacit than overt collusion, the impression sometimes given is that ‘one model fits all’. Moreover, the empirical literature offers few stylized facts on the most simple of questions—how few are few and how symmetric is symmetric? This paper attempts to fill this gap while also exploring the interface of tacit and overt collusion, albeit in an indirect way. First, it identifies the empirical model of tacit collusion that the European Commission appears to have employed in coordinated effects merger cases—apparently only fairly symmetric duopolies fit the bill. Second, it shows that, intriguingly, the same story emerges from the quite different experimental literature on tacit collusion. This offers a stark contrast with the findings for a sample of prosecuted cartels; on average, these involve six members (often more) and size asymmetries among members are often considerable. The indirect nature of this ‘evidence’ cautions against definitive conclusions; nevertheless, the contrast offers little comfort for those who believe that the same model does, more or less, fit all.