942 results for categorization IT PFC computational neuroscience model HMAX
Abstract:
Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that approximates the true variance well.
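A minimal numerical sketch of the approach described above (not the authors' implementation): the latent function and the log noise variance each receive a Gaussian-process prior, the function is marginalised out, and a random-walk Metropolis sampler draws the log noise variances. The kernels, step size and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)

# Synthetic data with input-dependent noise.
X = np.linspace(0.0, 1.0, 40)
true_std = 0.05 + 0.3 * X                      # noise grows with x
y = np.sin(2 * np.pi * X) + rng.normal(0, true_std)

K_f = rbf(X, X, lengthscale=0.2)                            # GP prior over the latent function
K_g = rbf(X, X, lengthscale=0.3) + 1e-6 * np.eye(len(X))    # GP prior over log noise variance
mu_g = np.full(len(X), np.log(0.1 ** 2))                    # prior mean of log noise variance

def log_post(g):
    """Log posterior of the log-noise-variance vector g (latent function marginalised out)."""
    cov_y = K_f + np.diag(np.exp(g)) + 1e-8 * np.eye(len(X))
    lik = multivariate_normal.logpdf(y, mean=np.zeros(len(X)), cov=cov_y)
    prior = multivariate_normal.logpdf(g, mean=mu_g, cov=K_g)
    return lik + prior

# Random-walk Metropolis over the log noise variances.
g, lp, samples = mu_g.copy(), None, []
lp = log_post(g)
for it in range(5000):
    prop = g + 0.05 * rng.multivariate_normal(np.zeros(len(X)), K_g)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        g, lp = prop, lp_prop
    if it >= 1000 and it % 10 == 0:
        samples.append(g.copy())

post_std = np.exp(0.5 * np.mean(samples, axis=0))   # posterior mean noise std per input
print(np.round(post_std[::10], 3), np.round(true_std[::10], 3))
```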
Abstract:
We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of classes. In Bayesian image analysis, this requires fusing together local predictions for the class labels with a prior model of segmentations. Markov Random Fields (MRFs) have been used to incorporate some of this prior knowledge, but this is not entirely satisfactory as inference in MRFs is NP-hard. The multiscale quadtree model of Bouman and Shapiro (1994) is an attractive alternative, as this is a tree-structured belief network in which inference can be carried out in linear time (Pearl 1988). It is a hierarchical model where the bottom-level nodes are pixels, and higher levels correspond to downsampled versions of the image. The conditional-probability tables (CPTs) in the belief network encode the knowledge of how the levels interact. In this paper we discuss two methods of learning the CPTs given training data, using (a) maximum likelihood and the EM algorithm and (b) conditional maximum likelihood (CML). Segmentations obtained using networks trained by CML show a statistically significant improvement in performance on synthetic images. We also demonstrate the methods on a real-world outdoor-scene segmentation task.
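A minimal sketch of the maximum-likelihood part of the CPT learning described above, under the simplifying assumption that parent and child class labels on adjacent quadtree levels are fully observed; in EM, these hard counts would be replaced by expected counts from the E-step. The class count and copy probability are illustrative assumptions.

```python
import numpy as np

def ml_cpt(parent_labels, child_labels, n_classes):
    """Maximum-likelihood estimate of P(child class | parent class) from paired
    (parent, child) labels on adjacent quadtree levels: count and normalise rows."""
    counts = np.zeros((n_classes, n_classes))
    for p, c in zip(parent_labels, child_labels):
        counts[p, c] += 1
    counts += 1e-3                         # small pseudo-count to avoid empty rows
    return counts / counts.sum(axis=1, keepdims=True)

# Toy example: 3 classes, children usually copy their parent's class.
rng = np.random.default_rng(1)
parents = rng.integers(0, 3, size=2000)
children = np.where(rng.uniform(size=2000) < 0.8,
                    parents, rng.integers(0, 3, size=2000))
print(np.round(ml_cpt(parents, children, 3), 2))
```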
Abstract:
This article examines whether UK portfolio returns are time varying so that expected returns follow an AR(1) process, as proposed by Conrad and Kaul for the USA. It explores this hypothesis for four portfolios that have been formed on the basis of market capitalization. The portfolio returns are modelled using a Kalman filter signal extraction model in which the unobservable expected return is the state variable and is allowed to evolve as a stationary first-order autoregressive process. It finds that this model is a good representation of returns and can account for most of the autocorrelation present in observed portfolio returns. This study concludes that UK portfolio returns are time varying and the nature of the time variation appears to introduce a substantial amount of autocorrelation to portfolio returns. Like Conrad and Kaul, it finds a link between the extent to which portfolio returns are time varying and the size of firms within a portfolio, but not the monotonic relationship found for the USA.
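A minimal sketch (not the article's code) of the Kalman filter signal-extraction model described above: the observed return equals a latent AR(1) expected return plus noise, and the filter recursions extract the state. Parameter values and the simulated series are illustrative assumptions.

```python
import numpy as np

def kalman_ar1(returns, phi, c, q, h):
    """Kalman filter for r_t = mu_t + eps_t,  mu_t = c + phi*mu_{t-1} + eta_t,
    with Var(eps)=h, Var(eta)=q.  Returns filtered expected returns and log-likelihood."""
    a = c / (1 - phi)                 # start from the unconditional state mean
    P = q / (1 - phi ** 2)            # and the unconditional state variance
    mu_filt, loglik = [], 0.0
    for r in returns:
        a_pred = c + phi * a          # state prediction
        P_pred = phi ** 2 * P + q
        v = r - a_pred                # prediction error
        F = P_pred + h                # prediction-error variance
        K = P_pred / F                # Kalman gain
        a = a_pred + K * v            # filtered state
        P = (1 - K) * P_pred
        loglik += -0.5 * (np.log(2 * np.pi * F) + v ** 2 / F)
        mu_filt.append(a)
    return np.array(mu_filt), loglik

# Toy returns simulated from the same model.
rng = np.random.default_rng(2)
T, phi, c, q, h = 500, 0.9, 0.0005, 1e-6, 4e-4
mu, r = np.zeros(T), np.zeros(T)
for t in range(T):
    prev = mu[t - 1] if t else c / (1 - phi)
    mu[t] = c + phi * prev + rng.normal(0, np.sqrt(q))
    r[t] = mu[t] + rng.normal(0, np.sqrt(h))
mu_hat, ll = kalman_ar1(r, phi, c, q, h)
print(round(ll, 1), round(np.corrcoef(mu, mu_hat)[0, 1], 2))
```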
Abstract:
In 2002, we published a paper [Brock, J., Brown, C., Boucher, J., Rippon, G., 2002. The temporal binding deficit hypothesis of autism. Development and Psychopathology 14 (2), 209-224] highlighting the parallels between the psychological model of 'central coherence' in information processing [Frith, U., 1989. Autism: Explaining the Enigma. Blackwell, Oxford] and the neuroscience model of neural integration or 'temporal binding'. We proposed that autism is associated with abnormalities of information integration caused by a reduction in the connectivity between specialised local neural networks in the brain and possible overconnectivity within the isolated individual neural assemblies. The current paper updates this model, providing a summary of theoretical and empirical advances in research implicating disordered connectivity in autism. This is in the context of changes in the approach to the core psychological deficits in autism, of greater emphasis on 'interactive specialisation' and the resultant stress on early and/or low-level deficits and their cascading effects on the developing brain [Johnson, M.H., Halit, H., Grice, S.J., Karmiloff-Smith, A., 2002. Neuroimaging of typical and atypical development: a perspective from multiple levels of analysis. Development and Psychopathology 14, 521-536]. We also highlight recent developments in the measurement and modelling of connectivity, particularly in the emerging ability to track the temporal dynamics of the brain using electroencephalography (EEG) and magnetoencephalography (MEG) and to investigate the signal characteristics of this activity. This advance could be particularly pertinent in testing an emerging model of effective connectivity based on the balance between excitatory and inhibitory cortical activity [Rubenstein, J.L., Merzenich, M.M., 2003. Model of autism: increased ratio of excitation/inhibition in key neural systems. Genes, Brain and Behavior 2, 255-267; Brown, C., Gruber, T., Rippon, G., Brock, J., Boucher, J., 2005. Gamma abnormalities during perception of illusory figures in autism. Cortex 41, 364-376]. Finally, we note that this convergence of research developments not only enables a greater understanding of autism but also has implications for prevention and remediation.
Abstract:
To investigate the technical feasibility of a novel cooling system for commercial greenhouses, knowledge of the state of the art in greenhouse cooling is required. An extensive literature review was carried out that highlighted the physical processes of greenhouse cooling and showed the limitations of the conventional technology. The proposed cooling system utilises liquid desiccant technology; hence knowledge of liquid desiccant cooling is also a prerequisite before designing such a system. Extensive literature reviews on solar liquid desiccant regenerators and desiccators, which are essential parts of liquid desiccant cooling systems, were carried out to identify their advantages and disadvantages. In response to the findings, a regenerator and a desiccator were designed and constructed in the laboratory. An important factor of liquid desiccant cooling is the choice of the liquid desiccant itself. The hygroscopicity of the liquid desiccant affects the performance of the system. Bitterns, which are magnesium-rich brines derived from seawater, are proposed as an alternative liquid desiccant for cooling greenhouses. A thorough experimental and theoretical study was carried out to determine the properties of concentrated bitterns. It was concluded that their properties resemble those of pure magnesium chloride solutions. Therefore, magnesium chloride solution was used in laboratory experiments to assess the performance of the regenerator and the desiccator. To predict the whole-system performance, the physical processes of heat and mass transfer were modelled using gPROMS® advanced process modelling software. The model was validated against the experimental results. Consequently, it was used to model a commercial-scale greenhouse in several hot coastal areas in the tropics and sub-tropics. These case studies show that the system, when compared to evaporative cooling, achieves a 3 °C to 5.6 °C temperature drop inside the greenhouse in hot and humid places (RH > 70%) and a 2 °C to 4 °C temperature drop in hot and dry places (50%
Abstract:
Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing 'correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component, because the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. A common problem with Petri-net-based techniques is the complexity associated with generating the reachability graph. This thesis addresses this problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
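A minimal sketch of the reachability-graph construction underlying this kind of analysis (not the thesis technique itself): a small Petri net of two processes sharing one resource, breadth-first generation of the reachable markings, and a check of a simple safety property. The net and property are illustrative assumptions.

```python
from collections import deque

# Places: [idle1, busy1, idle2, busy2, resource]; a marking is a token-count tuple.
# Each transition is (pre, post): tokens consumed / produced per place.
transitions = {
    "start1": ((1, 0, 0, 0, 1), (0, 1, 0, 0, 0)),
    "end1":   ((0, 1, 0, 0, 0), (1, 0, 0, 0, 1)),
    "start2": ((0, 0, 1, 0, 1), (0, 0, 0, 1, 0)),
    "end2":   ((0, 0, 0, 1, 0), (0, 0, 1, 0, 1)),
}

def enabled(m, pre):
    return all(m[i] >= pre[i] for i in range(len(m)))

def fire(m, pre, post):
    return tuple(m[i] - pre[i] + post[i] for i in range(len(m)))

def reachability(m0):
    """Breadth-first generation of the reachability graph from marking m0."""
    graph, frontier, seen = {}, deque([m0]), {m0}
    while frontier:
        m = frontier.popleft()
        graph[m] = []
        for name, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                graph[m].append((name, m2))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return graph

g = reachability((1, 0, 1, 0, 1))
# Safety property: the shared resource is never held by both processes at once.
print(len(g), all(not (m[1] and m[3]) for m in g))
```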
Abstract:
Using current software engineering technology, the robustness required for safety-critical software is not assurable. However, different approaches are possible which can help to assure software robustness to some extent. For achieving high-reliability software, methods should be adopted which avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal). Finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). The verification of correctness in system design specification and performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language where interprocess interaction takes place by communication. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. This design software tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is the Petri net simulator that takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential' which the user can explore further. Finally, the software tool has been applied to a number of Occam programs. Two examples were taken to show how the tool works in the early design phase for fault prevention before the program is ever run.
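A minimal sketch of the kind of deadlock check such a tool performs on the reachability tree (not the tool itself): a small Petri net modelling two Occam-style processes that claim two channels in opposite order, with dead markings other than the intended terminal marking reported as potential deadlocks. The net is an illustrative assumption.

```python
from collections import deque

# Places: [A0, A1, A2, B0, B1, B2, ch1, ch2].  Process A claims channel 1 then
# channel 2; process B claims them in the opposite order -- the classic
# circular-wait pattern that can deadlock communicating processes.
T = {
    "A_take_ch1": ((1,0,0, 0,0,0, 1,0), (0,1,0, 0,0,0, 0,0)),
    "A_take_ch2": ((0,1,0, 0,0,0, 0,1), (0,0,1, 0,0,0, 1,1)),
    "B_take_ch2": ((0,0,0, 1,0,0, 0,1), (0,0,0, 0,1,0, 0,0)),
    "B_take_ch1": ((0,0,0, 0,1,0, 1,0), (0,0,0, 0,0,1, 1,1)),
}
m0 = (1,0,0, 1,0,0, 1,1)
final = (0,0,1, 0,0,1, 1,1)        # intended terminal marking: both processes finished

def dead_markings(m0):
    """BFS over the reachability tree; return markings with no enabled transition."""
    seen, frontier, dead = {m0}, deque([m0]), []
    while frontier:
        m = frontier.popleft()
        succs = []
        for pre, post in T.values():
            if all(m[i] >= pre[i] for i in range(len(m))):
                succs.append(tuple(m[i] - pre[i] + post[i] for i in range(len(m))))
        if not succs:
            dead.append(m)
        for m2 in succs:
            if m2 not in seen:
                seen.add(m2)
                frontier.append(m2)
    return dead

for m in dead_markings(m0):
    if m != final:
        # Expected: (0,1,0, 0,1,0, 0,0) -- each process holds one channel and waits.
        print("potential deadlock at marking", m)
```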
Abstract:
This thesis explores the innovative capacity of voluntary organizations in the field of the personal social services. It commences with a full literature review, which concludes that the wealth of research upon innovation in the organization studies field has not addressed this topic, whilst the specialist literatures upon voluntary organizations and upon the personal social services have neglected the study of innovation. The research contained in this thesis is intended to right this neglect and to integrate lessons from both fields. It combines a survey of the innovative activity of voluntary organizations in three localities with cross-sectional case studies of innovative, developmental and traditional organizations. The research concludes that innovation is an important, but not integral, characteristic of voluntary organizations. It develops a contingent model of this innovative capacity of voluntary organizations, which stresses the role of external environmental and institutional forces in shaping and releasing this capacity. It concludes by considering the contribution of this model both to organization studies and to the study of voluntary organizations.
Abstract:
Previous research has indicated that schematic eyes incorporating aspheric surfaces but lacking gradient index are unable to model ocular spherical aberration and peripheral astigmatism simultaneously. This limits their use as wide-angle schematic eyes. This thesis challenges this assumption by investigating the flexibility of schematic eyes comprising aspheric optical surfaces and homogeneous optical media. The full variation of ocular component dimensions found in human eyes was established from the literature. Schematic eye parameter variants were limited to these dimensions. The levels of spherical aberration and peripheral astigmatism modelled by these schematic eyes were compared to the range of measured levels, which was also established from the literature. To simplify comparison of modelled and measured data, single-value parameters were introduced: the spherical aberration function (SAF) and the peripheral astigmatism function (PAF). Some ocular component variations produced a wide range of aberrations without exceeding the limits of human ocular components. The effect of ocular component variations on coma was also investigated, but no comparison could be made as no empirical data exist. It was demonstrated that by combined manipulation of a number of parameters in the schematic eyes it was possible to model all levels of ocular spherical aberration and peripheral astigmatism. However, the unique parameters of a human eye could not be obtained in this way, as a number of models could be used to produce the same spherical aberration and peripheral astigmatism while giving very different coma levels. It was concluded that these schematic eyes are flexible enough to model the monochromatic aberrations tested, the absence of gradient index being compensated for by altering the asphericity of one or more surfaces.
Abstract:
A methodology is presented which can be used to predict the level of electromagnetic interference, in the form of conducted and radiated emissions, from variable speed drives, the drive modelled being a Eurotherm 583. The conducted emissions are predicted using an accurate circuit model of the drive and its associated equipment. The circuit model was constructed from a number of different areas, these being: the power electronics of the drive, the line impedance stabilising network used during the experimental work to measure the conducted emissions, a model of an induction motor assuming near-zero load, an accurate model of the shielded cable which connected the drive to the motor, and finally the parasitic capacitances present in the drive modelled. The conducted emissions were predicted with an error of ±6 dB over the frequency range 150 kHz to 16 MHz, which compares well with the limits set in the standards, which specify a frequency range of 150 kHz to 30 MHz. The conducted emissions model was also used to predict the current and voltage sources which were used to predict the radiated emissions from the drive. Two methods for the prediction of the radiated emissions from the drive were investigated, the first being two-dimensional finite element analysis and the second three-dimensional transmission line matrix modelling. The finite element model took account of the features of the drive considered to produce the majority of the radiation, these being the switching of the IGBTs in the inverter, the shielded cable which connected the drive to the motor, and some of the cables present in the drive. The model also took account of the structure of the test rig used to measure the radiated emissions. It was found that the majority of the radiation produced came from the shielded cable and the common-mode currents flowing in the shield, and that it was feasible to model the radiation from the drive by modelling only the shielded cable. The radiated emissions were correctly predicted in the frequency range 30 MHz to 200 MHz with an error of +10 dB/-6 dB. The transmission line matrix method modelled the shielded cable which connected the drive to the motor and also took account of the architecture of the test rig. Only limited simulations were performed using the transmission line matrix model as it was found to be a very slow method and not an ideal solution to the problem. However, the limited results obtained were comparable, to within 5%, to the results obtained using the finite element model.
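A minimal sketch of the transmission-line matrix idea mentioned above (not the thesis model, which represented the shielded motor cable and test rig): a two-dimensional shunt-node TLM mesh excited by an impulse, using the standard scatter-and-connect update. Mesh size, excitation and the absorbing boundary treatment are illustrative assumptions.

```python
import numpy as np

N, steps = 41, 18
Vi = np.zeros((4, N, N))           # incident pulses on ports [N, E, S, W] of each node
Vi[:, N // 2, N // 2] = 1.0        # impulse excitation at the mesh centre

for _ in range(steps):
    # Scatter: 2-D shunt node, reflected_k = (sum of incidents)/2 - incident_k.
    Vr = 0.5 * Vi.sum(axis=0) - Vi
    # Connect: each reflected pulse becomes the incident pulse on the facing
    # port of the neighbouring node; pulses leaving the mesh are simply absorbed.
    Vn = np.zeros_like(Vi)
    Vn[0, 1:, :]  = Vr[2, :-1, :]   # southward pulse -> north port of node below
    Vn[2, :-1, :] = Vr[0, 1:, :]    # northward pulse -> south port of node above
    Vn[3, :, 1:]  = Vr[1, :, :-1]   # eastward pulse  -> west port of node to the right
    Vn[1, :, :-1] = Vr[3, :, 1:]    # westward pulse  -> east port of node to the left
    Vi = Vn

V = 0.5 * Vi.sum(axis=0)           # node voltages: an expanding circular wavefront
print(np.round(V[N // 2, N // 2:N // 2 + 20], 3))
```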
Abstract:
It is conventional wisdom that collusion is more likely the fewer firms there are in a market and the more symmetric they are. This is often theoretically justified in terms of a repeated non-cooperative game. Although that model fits more easily with tacit than overt collusion, the impression sometimes given is that ‘one model fits all’. Moreover, the empirical literature offers few stylized facts on the most simple of questions—how few are few and how symmetric is symmetric? This paper attempts to fill this gap while also exploring the interface of tacit and overt collusion, albeit in an indirect way. First, it identifies the empirical model of tacit collusion that the European Commission appears to have employed in coordinated effects merger cases—apparently only fairly symmetric duopolies fit the bill. Second, it shows that, intriguingly, the same story emerges from the quite different experimental literature on tacit collusion. This offers a stark contrast with the findings for a sample of prosecuted cartels; on average, these involve six members (often more) and size asymmetries among members are often considerable. The indirect nature of this ‘evidence’ cautions against definitive conclusions; nevertheless, the contrast offers little comfort for those who believe that the same model does, more or less, fit all.
Abstract:
This dissertation examines whether there exist financial constraints and, if so, their implications for investment in research and development expenditures. It develops a theoretical model of credit rationing and research and development in which both are determined simultaneously and endogenously. The model provides a useful tool to examine different policies that may help alleviate the negative effect of financial constraints faced by firms. The empirical evidence presented deals with two different cases, namely, the motor vehicle industry in Germany (1970-1990) and the electrical machinery industry in Spain (1975-1990). The innovation in the empirical analysis is that it follows a novel approach to identify events that allow us to isolate the effect of financial constraints in the determination of research and development. Further, empirical evidence is presented to show that in the above two cases financial constraints affect investment in physical capital as well. The empirical evidence presented supports the results of the theoretical model developed in this dissertation, showing that financial constraints negatively affect the rate of growth of innovation by reducing the intensity of research and development activity.
Abstract:
Increased pressure to control costs and increased competition have prompted health care managers to look for tools to operate their institutions effectively. This research sought a framework for the development of a Simulation-Based Decision Support System (SB-DSS) to evaluate operating policies. A prototype of this SB-DSS was developed. It incorporates a simulation model that uses real or simulated data. Emergency room (ER) decisions have been categorized and, for each one, an implementation plan has been devised. Several issues of integrating heterogeneous tools have been addressed. The prototype revealed that simulation can truly be used in this environment in a timely fashion because the simulation model has been complemented with a series of decision-making routines. These routines use a hierarchical approach to organize the various scenarios under which the model may run and to partially reconfigure the ARENA model at run time. Hence, the SB-DSS tailors its responses to each node in the hierarchy.
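A minimal open-source sketch of the kind of simulation core such an SB-DSS reconfigures per scenario (the prototype itself used an ARENA model): an emergency-room queue simulated with SimPy, with staffing levels varied across a hypothetical scenario hierarchy. The scenario names, arrival rate and service rate are made-up assumptions.

```python
import random
import simpy

def run_scenario(n_doctors, arrival_rate, service_rate, horizon=8 * 60, seed=0):
    """Simulate an ER as a multi-server queue and return the mean patient wait (minutes)."""
    random.seed(seed)
    env = simpy.Environment()
    doctors = simpy.Resource(env, capacity=n_doctors)
    waits = []

    def patient(env):
        arrived = env.now
        with doctors.request() as req:
            yield req                                   # wait for a free doctor
            waits.append(env.now - arrived)
            yield env.timeout(random.expovariate(service_rate))

    def arrivals(env):
        while True:
            yield env.timeout(random.expovariate(arrival_rate))
            env.process(patient(env))

    env.process(arrivals(env))
    env.run(until=horizon)
    return sum(waits) / len(waits) if waits else 0.0

# A hypothetical scenario hierarchy the decision-making routines might walk through.
scenarios = {"baseline": 3, "add_one_doctor": 4, "weekend_staffing": 2}
for name, docs in scenarios.items():
    print(name, round(run_scenario(docs, arrival_rate=0.25, service_rate=0.1), 1))
```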
Abstract:
This study examined the influence of tourism destination image, satisfaction and motivation on tourists' intention to engage in positive electronic word of mouth (eWOM) through Facebook. In addition, it assessed the sociodemographic profile and the frequency of eWOM publication of the respondents, as well as the adequacy of the manifest variables for composing the following dimensions: Quality, Satisfaction, Image, Motivations and Positive Electronic Word of Mouth (eWOM). Finally, it analysed a relational model in which Quality, Satisfaction, Image and Motivations explain engagement in positive eWOM. To this end, a study based on hypothetical-deductive logic and descriptive with respect to its goals was conducted. The analytical approach was quantitative (a survey). The sampling procedure was non-probabilistic, specifically by convenience, with respondents selected through a systematic method that used time as the systematisation factor in an attempt to randomise the selection of interviewees. The study sample consisted of 355 tourists. The data-collection instrument was a structured questionnaire administered at the main points of entry, exit and tourist excursions at Pipa Beach/RN. Data analysis was carried out using descriptive and multivariate statistics, mainly exploratory and confirmatory factor analysis and structural equation modelling. Among the main results, it was possible to confirm that Motivations, Satisfaction and Image strongly affect the intention to engage in positive eWOM. The motivations show the largest impact in explaining the dependent variable, followed by satisfaction and image; the effect of image, however, is inverse. Among the motivations, the one explaining the highest percentage of variance was the social benefit sought by tourists, with the desire to help other tourists and to vent positive emotions showing the same percentage. The manifest variables proved fully acceptable as reflections of their respective factors.
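A minimal sketch of the confirmatory factor / structural equation model described above, assuming the semopy package and hypothetical indicator names (the study's actual questionnaire items, data file and factor structure are not reproduced here):

```python
import pandas as pd
from semopy import Model

# Hypothetical indicator names (q1..q12, ewom1..ewom3) standing in for the
# questionnaire items; the structural part regresses eWOM on the other factors.
desc = """
Quality      =~ q1 + q2 + q3
Satisfaction =~ q4 + q5 + q6
Image        =~ q7 + q8 + q9
Motivation   =~ q10 + q11 + q12
eWOM         =~ ewom1 + ewom2 + ewom3
eWOM ~ Quality + Satisfaction + Image + Motivation
"""

data = pd.read_csv("pipa_survey.csv")   # placeholder file: one column per item, one row per respondent
model = Model(desc)
model.fit(data)
print(model.inspect())                  # factor loadings and structural path estimates
```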