52 results for spatial information processing theories
Abstract:
In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting this data to transform it into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modelling, empirical data modelling, knowledge discovery, data mining, and data fusion.
Abstract:
One of the most pervasive concepts underlying computational models of information processing in the brain is linear input integration of rate-coded univariate information by neurons. After a suitable learning process this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes a more complex, multivariate dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking neuron connectionist system.
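To make the contrast concrete, the following minimal sketch (not taken from the paper; all parameters are illustrative) compares the classical view, a fixed real-valued weight vector integrating rate-coded inputs, with a leaky integrate-and-fire neuron whose output carries information in the timing of spikes rather than in a static weight vector:

    import numpy as np

    # Classical view: knowledge stored statically as a real-valued weight vector.
    def rate_coded_output(weights, rates):
        # Linear integration of rate-coded, univariate inputs.
        return np.dot(weights, rates)

    # Temporal-coding view: a leaky integrate-and-fire neuron whose interspike
    # intervals, not its weights, carry the information.
    def lif_spike_times(input_current, dt=1e-3, tau=0.02, threshold=1.0):
        v, spikes = 0.0, []
        for step, i_in in enumerate(input_current):
            v += dt * (-v / tau + i_in)   # leaky integration of the input
            if v >= threshold:            # fire and reset
                spikes.append(step * dt)
                v = 0.0
        return spikes

    rng = np.random.default_rng(0)
    print(rate_coded_output(rng.random(5), rng.random(5)))
    print(lif_spike_times(np.full(500, 60.0))[:5])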
Abstract:
Current force-feedback haptic interface devices are generally limited to the display of low-frequency, high-amplitude spatial data. A typical device consists of a low-impedance framework of one or more degrees of freedom (dof), allowing a user to explore a pre-defined workspace via an end effector such as a handle, thimble, probe or stylus. The movement of the device is then constrained using high-gain positional feedback, thus reducing the apparent dof of the device and conveying the illusion of hard contact to the user. Such devices are, however, limited to a narrow bandwidth of frequencies, typically below 30 Hz, and are not well suited to the display of surface properties such as object texture. This paper details a device to augment an existing force-feedback haptic display with a vibrotactile display, thus providing a means of conveying low-amplitude, high-frequency spatial information about object surface properties.

1. Haptics and Haptic Interfaces

Haptics is the study of human touch and interaction with the external environment via touch. Information from the human sense of touch can be classified into two categories: cutaneous and kinesthetic. Cutaneous information is provided via the mechanoreceptive nerve endings in the glabrous skin of the human hand. It is primarily a means of relaying information regarding small-scale details in the form of skin stretch, compression and vibration.
Abstract:
The authors compare the performance of two types of controllers: one based on a multilayered network and the other based on the single-layered CMAC (cerebellar model articulation controller) network. The neurons (information processing units) in the multilayered network use Gaussian activation functions. The control scheme considered is a predictive control algorithm, along the lines used by Willis et al. (1991) and Kambhampati and Warwick (1991). The process selected as a test bed is a continuous stirred tank reactor. The reaction taking place is an irreversible exothermic reaction in a constant-volume reactor cooled by a single coolant stream. This reactor is a simplified version of the first tank in the two-tank system given by Henson and Seborg (1989).
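As a rough illustration of the kind of controller described (a sketch under assumptions, not the authors' implementation; the reactor is replaced by a generic model and every parameter below is invented), a network with Gaussian activation functions can serve as a one-step-ahead predictor inside a simple predictive control loop:

    import numpy as np

    # Gaussian-activation (RBF) network used as a one-step-ahead process model.
    def rbf_predict(x, centres, width, weights):
        # y_hat = sum_i w_i * exp(-||x - c_i||^2 / (2 * width^2))
        act = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * width ** 2))
        return act @ weights

    # Naive one-step predictive control: choose the control input whose
    # predicted output lies closest to the setpoint.
    def predictive_control(y_now, setpoint, model, candidates):
        preds = [model(np.array([y_now, u])) for u in candidates]
        return candidates[int(np.argmin([abs(p - setpoint) for p in preds]))]

    # Invented network parameters; a real application would train them on
    # plant data (e.g. the stirred tank reactor mentioned above).
    rng = np.random.default_rng(1)
    centres = rng.uniform(0.0, 1.0, size=(16, 2))
    weights = rng.normal(size=16)
    model = lambda x: rbf_predict(x, centres, 0.3, weights)
    print(predictive_control(0.5, 0.8, model, np.linspace(0.0, 1.0, 21)))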
Abstract:
We investigated whether attention shifts and eye movement preparation are mediated by shared control mechanisms, as claimed by the premotor theory of attention. ERPs were recorded in three tasks where directional cues presented at the beginning of each trial instructed participants to direct their attention to the cued side without eye movements (Covert task), to prepare an eye movement in the cued direction without attention shifts (Saccade task), or both (Combined task). A peripheral visual Go/Nogo stimulus presented 800 ms after cue onset signalled whether responses had to be executed or withheld. Lateralised ERP components triggered during the cue–target interval, which are assumed to reflect preparatory control mechanisms that mediate attentional orienting, were very similar across tasks. They were also present in the Saccade task, which was designed to discourage any concomitant covert attention shifts. These results support the hypothesis that saccade preparation and attentional orienting are implemented by common control structures. There were, however, systematic differences in the impact of eye movement programming and covert attention on ERPs triggered in response to visual stimuli at cued versus uncued locations. It is concluded that, although the preparatory processes underlying saccade programming and covert attentional orienting may be based on common mechanisms, they nevertheless differ in their spatially specific effects on visual information processing.
Abstract:
Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at the rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing a system: improved representation of surface‐subsurface interactions due to fine‐scale topography and vegetation; improved representation of land‐atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10⁹ unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
Abstract:
Aggression in young people has been associated with a bias towards attributing hostile intent to others; however, little is known about the origin of biased social information processing. The current study explored the potential role of peer contagion in the emergence of hostile attribution in adolescents. 134 adolescents were assigned to one of two manipulated ‘chat-room’ conditions, where they believed they were communicating with online peers (e-confederates) who endorsed either hostile or benign intent attributions. Adolescents showed increased hostile attributions following exposure to hostile e-confederates and reduced hostility in the benign condition. Further analyses demonstrated that social anxiety was associated with a reduced tendency to take on hostile peer attitudes. Neither gender nor levels of aggression influenced individual susceptibility to peer influence, but aggressive adolescents reported greater affinity with hostile e-confederates.
Abstract:
We propose and analyse a class of evolving network models suitable for describing a dynamic topological structure. Applications include telecommunication, on-line social behaviour and information processing in neuroscience. We model the evolving network as a discrete time Markov chain, and study a very general framework where, conditioned on the current state, edges appear or disappear independently at the next timestep. We show how to exploit symmetries in the microscopic, localized rules in order to obtain conjugate classes of random graphs that simplify analysis and calibration of a model. Further, we develop a mean field theory for describing network evolution. For a simple but realistic scenario incorporating the triadic closure effect that has been empirically observed by social scientists (friends of friends tend to become friends), the mean field theory predicts bistable dynamics, and computational results confirm this prediction. We also discuss the calibration issue for a set of real cell phone data, and find support for a stratified model, where individuals are assigned to one of two distinct groups having different within-group and across-group dynamics.
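A minimal simulation of this class of models can be sketched as follows; the parameters alpha (baseline edge appearance), beta (edge disappearance) and gamma (triadic-closure boost) are invented for illustration and are not from the paper. Conditioned on the current graph, each edge appears or disappears independently at the next timestep, with appearance made more likely by common neighbours:

    import numpy as np

    def step(adj, alpha, beta, gamma, rng):
        # One Markov-chain step on an undirected graph (symmetric 0/1 matrix).
        n = adj.shape[0]
        common = adj @ adj                       # common-neighbour counts
        p_appear = np.clip(alpha + gamma * common, 0.0, 1.0)
        coin = rng.random((n, n))
        coin = np.triu(coin, 1); coin += coin.T  # one symmetric coin per pair
        new = np.where(adj == 1,
                       (coin > beta).astype(int),       # edge survives w.p. 1 - beta
                       (coin < p_appear).astype(int))   # edge appears (triadic closure)
        np.fill_diagonal(new, 0)
        return new

    rng = np.random.default_rng(0)
    n = 100
    adj = (rng.random((n, n)) < 0.05).astype(int)
    adj = np.triu(adj, 1); adj += adj.T
    for _ in range(50):
        adj = step(adj, alpha=0.001, beta=0.05, gamma=0.005, rng=rng)
    print("mean degree:", adj.sum() / n)

With a sufficiently strong triadic-closure term, runs of this kind settle into either a sparse or a densely clustered regime depending on the initial graph, which is the bistability the mean field theory predicts.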
Abstract:
We evaluate a number of real estate sentiment indices to ascertain current and forward-looking information content that may be useful for forecasting demand and supply activities. Our focus lies on sector-specific surveys targeting players on the supply side of both residential and non-residential real estate markets. Analyzing the dynamic relationships within a Vector Auto-Regression (VAR) framework, we test the efficacy of these indices by comparing them with other coincident indicators in predicting real estate returns. Overall, our analysis suggests that sentiment indicators convey important information which should be embedded in the modeling exercise to predict real estate market returns. Generally, sentiment indices show better information content than broad economic indicators. The goodness of fit of our models is higher for the residential market than for the non-residential real estate sector. The impulse responses, in general, conform to our theoretical expectations. Variance decompositions and out-of-sample predictions generally show the desired contribution and reasonable improvement, respectively, thus upholding our hypothesis. Quite remarkably, and consistent with the theory, predictability swings when we look through different phases of the cycle. This perhaps suggests that, e.g. during recessions, market players’ expectations may be a more accurate predictor of future performance, conceivably indicating a ‘negative’ information-processing bias and thus conforming to the precautionary motive of consumer behaviour.
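For readers unfamiliar with the machinery, the generic VAR workflow the abstract refers to (impulse responses, variance decompositions, out-of-sample forecasts) looks roughly as follows; the series, column names, and lag order below are hypothetical stand-ins, not the authors' data:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Hypothetical quarterly series: real estate returns and a sentiment index.
    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "returns": rng.normal(size=120),
        "sentiment": rng.normal(size=120),
    })

    res = VAR(data).fit(2)       # fixed lag order of 2 for the sketch
    irf = res.irf(8)             # impulse responses over 8 steps
    fevd = res.fevd(8)           # forecast error variance decomposition
    fc = res.forecast(data.values[-res.k_ar:], steps=4)  # out-of-sample prediction
    print(fc)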
Abstract:
The paper discusses ensemble behaviour in the Spiking Neuron Stochastic Diffusion Network (SNSDN), a novel network exploring biologically plausible information processing based on higher-order temporal coding. SNSDN was proposed as an alternative solution to the binding problem [1]. SNSDN operation resembles Stochastic Diffusion Search (SDS), a non-deterministic search algorithm able to rapidly locate the best instantiation of a target pattern within a noisy search space ([3], [5]). In SNSDN, relevant information is encoded in the length of interspike intervals. Although every neuron operates in its own time, ‘attention’ to a pattern in the search space results in self-synchronised activity of a large population of neurons. When multiple patterns are present in the search space, ‘switching of attention’ results in a change of the synchronous activity. The qualitative effect of attention on the synchronicity of spiking behaviour in both the time and frequency domains will be discussed.
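Standard Stochastic Diffusion Search, which SNSDN operation is said to resemble, can be sketched in a few lines. This is the generic test-and-diffuse scheme on a toy string-matching search space, not the spiking-network formulation of the paper:

    import random

    # Minimal SDS: locate the best alignment of `model` inside `text`.
    def sds(text, model, n_agents=100, iters=200, rng=random.Random(0)):
        positions = list(range(len(text) - len(model) + 1))
        hyps = [rng.choice(positions) for _ in range(n_agents)]
        active = [False] * n_agents
        for _ in range(iters):
            # Test phase: each agent checks one random micro-feature
            # of its current hypothesis.
            for a in range(n_agents):
                i = rng.randrange(len(model))
                active[a] = text[hyps[a] + i] == model[i]
            # Diffusion phase: inactive agents poll a random agent and
            # copy its hypothesis if that agent is active.
            for a in range(n_agents):
                if not active[a]:
                    peer = rng.randrange(n_agents)
                    hyps[a] = hyps[peer] if active[peer] else rng.choice(positions)
        return max(set(hyps), key=hyps.count)   # largest agent cluster = best match

    print(sds("xxxhelloxwxorldxx", "hello"))    # -> 3

The self-synchronised population activity described in the abstract plays the role that the growing cluster of active agents plays here: once most agents share one hypothesis, the ensemble as a whole is ‘attending’ to that pattern.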
Abstract:
Background and objectives: Individuals who score high on positive schizotypy personality traits are vulnerable to more frequent trauma-related intrusive memories after a stressful event. This vulnerability may be the product of a low level of contextual integration of non-stressful material combined with a heightened sensitivity to a further reduction in contextual integration during a stressful event. The current study assessed whether high scoring schizotypes are vulnerable to frequent involuntary autobiographical memories (IAMs) of non-stressful material. Methods: A free-association word task was used. Participants completed three recorded trials which were then replayed to allow the identification of any associations where an involuntary autobiographical memory had come to mind. Self-report measures of schizotypy and anxiety were completed. Results: All participants retrieved at least one IAM from the three free-association word trials, with 70% experiencing two or more IAMs. Individuals scoring high in schizotypy reported more IAMs than those who scored low. Over 75% of the memories retrieved were neutral or positive in content. Limitations: The current study is an improvement on previous methodologies used to assess IAMs. However, bias due to retrospective recall remains a possibility. Conclusions: Individuals scoring high in schizotypy are vulnerable to an increased level of neutral intrusive memories which may be associated with a ‘baseline’ level of information-processing which is low in contextual integration.
Abstract:
Infections involving Salmonella enterica subsp. enterica serovars have serious animal and human health implications, causing gastroenteritis in humans and clinical symptoms, such as diarrhoea and abortion, in livestock. In this study, an optical genetic mapping technique was used to screen 20 field isolate strains from four serovars implicated in disease outbreaks. The technique was able to distinguish between the serovars and the available sequenced strains and group them in agreement with similar data from microarrays and PFGE. The optical maps revealed variation in genome maps associated with antimicrobial resistance and prophage content in S. Typhimurium, and separated the S. Newport strains into two clear geographical lineages defined by the presence of prophage sequences. The technique was also able to detect novel insertions that may have had effects on the central metabolism of some strains. Overall, optical mapping allowed a greater level of differentiation of genomic content and spatial information than more traditional typing methods.
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than the rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
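For reference, the greedy core of a Prism-style algorithm (after Cendrowska's original formulation) can be sketched as follows. This sequential version is illustrative only, not the distributed classifier the paper describes, and the toy dataset is invented:

    # Minimal Prism-style rule induction: for each class, grow a rule by
    # greedily adding the attribute-value test with the highest precision
    # for that class, then remove the instances the finished rule covers.
    def prism(instances, labels):
        rules = []
        for cls in sorted(set(labels)):
            data = list(zip(instances, labels))
            while any(y == cls for _, y in data):
                covered, rule = data, []
                while rule == [] or any(y != cls for _, y in covered):
                    best, best_p = None, -1.0
                    for x, _ in covered:
                        for attr, val in x.items():
                            if (attr, val) in rule:
                                continue
                            sub = [(x2, y2) for x2, y2 in covered
                                   if x2.get(attr) == val]
                            p = sum(y2 == cls for _, y2 in sub) / len(sub)
                            if p > best_p:
                                best, best_p = (attr, val), p
                    if best is None:
                        break                      # no test left to add
                    rule.append(best)
                    covered = [(x, y) for x, y in covered
                               if x.get(best[0]) == best[1]]
                rules.append((rule, cls))
                data = [(x, y) for x, y in data
                        if not all(x.get(a) == v for a, v in rule)]
        return rules

    X = [{"outlook": "sunny", "windy": "no"}, {"outlook": "rainy", "windy": "yes"},
         {"outlook": "sunny", "windy": "yes"}, {"outlook": "rainy", "windy": "no"}]
    y = ["play", "stay", "play", "stay"]
    print(prism(X, y))

Because each rule is induced independently of the others, the per-class loops are a natural unit to distribute across workers, which is what makes Prism attractive for the parallel setting the paper targets.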
Abstract:
A favoured method of assimilating information from state-of-the-art climate models into integrated assessment models of climate impacts is to use the transient climate response (TCR) of the climate models as an input, sometimes accompanied by a pattern matching approach to provide spatial information. More recent approaches to the problem use TCR with another independent piece of climate model output: the land-sea surface warming ratio (φ). In this paper we show why the use of φ in addition to TCR has such utility. Multiple linear regressions of surface temperature change onto TCR and φ in 22 climate models from the CMIP3 multi-model database show that the inclusion of φ explains a much greater fraction of the inter-model variance than using TCR alone. The improvement is particularly pronounced in North America and Eurasia in the boreal summer season, and in the Amazon all year round. The use of φ as the second metric is beneficial for three reasons: firstly it is uncorrelated with TCR in state-of-the-art climate models and can therefore be considered as an independent metric; secondly, because of its projected time-invariance, the magnitude of φ is better constrained than TCR in the immediate future; thirdly, the use of two variables is much simpler than approaches such as pattern scaling from climate models. Finally we show how using the latest estimates of φ from climate models with a mean value of 1.6—as opposed to previously reported values of 1.4—can significantly increase the mean time-integrated discounted damage projections in a state-of-the-art integrated assessment model by about 15 %. When compared to damages calculated without the inclusion of the land-sea warming ratio, this figure rises to 65 %, equivalent to almost 200 trillion dollars over 200 years.
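The regression at the heart of the analysis can be illustrated with synthetic numbers (all values below are invented, not CMIP3 output): local warming is regressed onto TCR and φ across a 22-member ensemble by ordinary least squares:

    import numpy as np

    # Illustrative cross-model regression: local surface warming dT regressed
    # onto TCR and the land-sea warming ratio phi across 22 models.
    rng = np.random.default_rng(0)
    n_models = 22
    tcr = rng.normal(1.8, 0.4, n_models)    # hypothetical TCR values (K)
    phi = rng.normal(1.6, 0.1, n_models)    # hypothetical land-sea warming ratios
    dT = 0.9 * tcr + 1.1 * phi + rng.normal(0, 0.1, n_models)  # synthetic "truth"

    X = np.column_stack([np.ones(n_models), tcr, phi])
    coef, resid, *_ = np.linalg.lstsq(X, dT, rcond=None)
    r2 = 1 - resid[0] / np.sum((dT - dT.mean()) ** 2)
    print("intercept, b_TCR, b_phi:", coef, " R^2:", r2)

Comparing the R² of this two-predictor fit against a fit on TCR alone reproduces, in miniature, the paper's point that adding φ explains a much greater fraction of the inter-model variance.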