932 results for spatial information processing theories
Abstract:
Gaussian Processes provide good prior models for spatial data, but can be too smooth. In many physical situations there are discontinuities along bounding surfaces, for example fronts in near-surface wind fields. We describe a modelling method for such constrained discontinuities and demonstrate how to infer the model parameters in wind fields with MCMC sampling.
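As an illustration of the kind of construction this abstract gestures at, the sketch below (a toy, not the authors' model) builds a Gaussian Process covariance that is cut along a known front location, so that samples are smooth on each side of the boundary but discontinuous across it. In the paper the front itself is part of the model and its parameters are inferred by MCMC; here the boundary is simply fixed.

```python
import numpy as np

def front_kernel(x1, x2, boundary=0.0, length=1.0, var=1.0):
    """Squared-exponential covariance cut along a known front.

    Pairs of points on opposite sides of the boundary get zero covariance,
    so GP samples are smooth on each side but discontinuous at the front.
    """
    k = var * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)
    same_side = (x1[:, None] > boundary) == (x2[None, :] > boundary)
    return k * same_side

x = np.linspace(-3.0, 3.0, 200)
K = front_kernel(x, x) + 1e-8 * np.eye(x.size)    # jitter for stability
sample = np.random.default_rng(0).multivariate_normal(np.zeros(x.size), K)
```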
Abstract:
We propose a novel all-optical signal processor for use at a return-to-zero receiver, utilising loop mirror intensity filtering and nonlinear pulse broadening in normal dispersion fibre. The device offers reamplification and clean-up of the optical signals, and improves the phase margin. The effectiveness of the technique is demonstrated by application to 40 Gbit/s data transmission.
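For readers unfamiliar with the underlying propagation effect, the following toy split-step Fourier loop shows how the interplay of normal dispersion and Kerr nonlinearity broadens a pulse in fibre. All parameter values and the sign conventions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

T = np.linspace(-50e-12, 50e-12, 2048)                 # time grid (s)
w = 2 * np.pi * np.fft.fftfreq(T.size, T[1] - T[0])    # angular frequencies
A = np.exp(-T**2 / (2 * (2e-12) ** 2)).astype(complex) # 2 ps Gaussian, 1 W peak

beta2 = 50e-27        # +50 ps^2/km: normal dispersion (invented value)
gamma = 2e-3          # 2 /(W km) Kerr coefficient (invented value)
dz, steps = 10.0, 100                                  # 1 km in 10 m steps

for _ in range(steps):
    # Linear half: dispersion applied in the frequency domain.
    A = np.fft.ifft(np.fft.fft(A) * np.exp(0.5j * beta2 * w**2 * dz))
    # Nonlinear half: self-phase modulation in the time domain.
    A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)

print("peak dropped as the pulse broadened:", np.abs(A).max() < 1.0)
```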
Abstract:
This thesis initially presents an 'assay' of the literature pertaining to individual differences in human-computer interaction. A series of experiments is then reported, designed to investigate the association between a variety of individual characteristics and various computer task and interface factors. Predictor variables included age, computer expertise, and psychometric tests of spatial visualisation, spatial memory, logical reasoning, associative memory, and verbal ability. These were studied in relation to a variety of computer-based tasks, including: (i) word processing and its component elements; (ii) the location of target words within passages of text; (iii) the navigation of networks and menus; (iv) command generation using menus and command line interfaces; (v) the search and selection of icons and text labels; (vi) information retrieval. A measure of self-report workload was also included in several of these experiments. The main experimental findings included: (i) an interaction between spatial ability and the manipulation of semantic but not spatial interface content; (ii) verbal ability being only predictive of certain task components of word processing; (iii) age differences in word processing and information retrieval speed but not accuracy; (iv) evidence of compensatory strategies being employed by older subjects; (v) evidence of performance strategy differences which disadvantaged high spatial subjects in conditions of low spatial information content; (vi) interactive effects of associative memory, expertise and command strategy; (vii) an association between logical reasoning and word processing but not information retrieval; (viii) an interaction between expertise and cognitive demand; and (ix) a stronger association between cognitive ability and novice performance than expert performance.
Abstract:
Modern managers are under tremendous pressure in attempting to fulfil a profoundly complex managerial task: that of handling information resources. Information management, an intricate process requiring a high measure of human cognition and discernment, involves matching a manager's limited information-processing capacity against his information needs, given the voluminous information at his disposal. The task undoubtedly becomes more complex in the case of a large organisation, and the management of large-scale organisations is therefore an exceedingly challenging prospect for any manager. A system that supports executive information needs will help reduce managerial and informational mismatches. In the context of the Malaysian public sector, the task of overall management lies with the Prime Minister and the Cabinet. The Prime Minister's Office presently supports the Prime Minister's information and managerial needs, although not without various shortcomings. The rigid, formalised structure that predominates in the Malaysian public sector, ill-suited to dynamic treatment of the problematic issues that sector faces, further escalates the managerial and organisational problem of coping with such complexity. The principal features of the research are twofold: the development of a methodology for diagnosing the 'problem organisation', and the design of an office system. The methodological development is done in the context of the Malaysian public sector, and aims at understanding the complexity of its communication and control situation. The outcome is a viable model of the public sector. 'Design', on the other hand, involves developing a syntax, or language, for office systems which provides an alternative to current views on office systems. The design is done with reference to, rather than for, the Prime Minister's Office. The desired outcome is an office model called the Office Communication and Information System (OCIS).
Abstract:
Attention defines our mental ability to select and respond to stimuli, internal or external, on the basis of behavioural goals in the presence of competing, behaviourally irrelevant, stimuli. The frontal and parietal cortices are generally agreed to be involved in attentional processing, in what is termed the 'fronto-parietal' network. The left parietal cortex has been seen as the site for temporal attentional processing, whereas the right parietal cortex has been seen as the site for spatial attentional processing. There is much debate about when the modulation of the primary visual cortex occurs: whether it is modulated in the feedforward sweep of processing or by feedback projections from extrastriate and higher cortical areas. MEG and psychophysical measurements were used to investigate spatially selective covert attention, using dual-task and cue-based paradigms. It was found that the posterior parietal cortex (PPC), in particular the SPL and IPL, was the main site of activation during these experiments, and that the left parietal lobe was activated more strongly than the right parietal lobe throughout. The levels of activation in both parietal and occipital areas were modulated in accordance with attentional demands. It is likely that spatially selective covert attention is dominated by the left parietal lobe, and that this takes the form of the proposed sensory-perceptual lateralization within the parietal lobes. Another form of lateralization is proposed, termed the motor-processing lateralization, the side of dominance being determined by handedness, being reversed in left- relative to right-handers. In terms of the modulation of the primary visual cortex, it was found to be unlikely that V1 is modulated initially; rather, the modulation takes the form of feedback from higher extrastriate and parietal areas. This fits with the idea of preattentive visual processing, a widely accepted idea which, in itself, argues against initial modulation of V1.
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting (1) performance trends over time, and (2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to the sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
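Since the abstract's key analytic move is decision-theoretic, a short worked example may help. The standard signal detection indices separate a monitor's sensitivity (d') from the response criterion (c), and the vigilance pattern described above shows up as a rising criterion with roughly constant sensitivity. The counts below are invented purely for illustration.

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from raw monitoring-task counts."""
    z = NormalDist().inv_cdf
    hr = hits / (hits + misses)                       # hit rate
    far = false_alarms / (false_alarms + correct_rejections)  # false alarm rate
    d_prime = z(hr) - z(far)
    criterion = -0.5 * (z(hr) + z(far))
    return d_prime, criterion

# Hit rate and false alarm rate both fall across the watch period:
# the criterion c rises while d' stays roughly constant.
print(sdt_indices(40, 10, 10, 140))   # early in the watch
print(sdt_indices(30, 20, 4, 146))    # late: stricter criterion
```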
Abstract:
This study was concerned with the computer automation of land evaluation. This is a broad subject with many issues to be resolved, so the study concentrated on three key problems: knowledge based programming; the integration of spatial information from remote sensing and other sources; and the inclusion of socio-economic information in the land evaluation analysis. Land evaluation and land use planning were considered in the context of overseas projects in the developing world. Knowledge based systems were found to provide significant advantages over conventional programming techniques for some aspects of the land evaluation process. Declarative languages, in particular Prolog, were ideally suited to the integration of social information, which changes with every situation. Rule-based expert system shells were also found to be suitable for this role, including knowledge acquisition at the interview stage. All the expert system shells examined suffered from severe constraints on problem size, but new products now overcome this. Inductive expert system shells were useful as a guide to knowledge gaps and possible relationships, but the number of examples required was unrealistic for typical land use planning situations. The accuracy of classified satellite imagery was significantly enhanced by integrating spatial information on soil distribution for Thailand data. Estimates of the rice producing area were substantially improved (30% change in area) by the addition of soil information. Image processing work on Mozambique showed that satellite remote sensing was a useful tool in stratifying vegetation cover at provincial level to identify key development areas, but its full utility could not be realised on typical planning projects without treatment as part of a complete spatial information system.
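The thesis worked in Prolog and expert-system shells; purely to illustrate the flavour of declarative suitability rules in that style, here is a minimal Python analogue. The classes, attributes and thresholds are all invented for illustration, not drawn from the study.

```python
def classify_land_unit(unit):
    """Return an FAO-style suitability class for one mapped land unit."""
    if unit["slope_pct"] > 30:
        return "N (not suitable): slope too steep"
    if unit["soil"] not in {"alluvial", "clay"}:
        return "S3 (marginal): unfavourable soil type"
    if unit["rainfall_mm"] < 1000:
        return "S2 (moderately suitable): rainfall limiting"
    return "S1 (highly suitable)"

unit = {"slope_pct": 2, "soil": "alluvial", "rainfall_mm": 1400}
print(classify_land_unit(unit))   # S1 (highly suitable)
```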
Abstract:
Improving bit error rates in optical communication systems is a difficult and important problem. The error correction must take place at high speed and be extremely accurate. We show the feasibility of using hardware-implementable machine learning techniques. This may enable some error correction at the speed required.
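As a rough sketch of what "hardware-implementable machine learning" can mean in this setting, the toy below fits a linear classifier over a short window of received samples, a model small enough to be realised in fast logic. The channel model and figures are synthetic assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 5000).astype(float)
# Toy channel: inter-symbol interference plus Gaussian noise.
signal = (np.convolve(bits, [0.2, 1.0, 0.2], mode="same")
          + 0.3 * rng.standard_normal(bits.size))

X = np.lib.stride_tricks.sliding_window_view(signal, 3)  # 3-sample windows
y = bits[1:-1]                                           # bit at window centre

# Least-squares fit of a linear decision rule: w . window + b, threshold 0.5.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
ber = np.mean(((A @ w) > 0.5) != y)
print(f"bit error rate: {ber:.4f}")
```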
Abstract:
We propose a novel electroencephalographic application of a recently developed cerebral source extraction method (Functional Source Separation, FSS), which starts from extracranial signals and adds a functional constraint to the cost function of a basic independent component analysis model, without requiring solutions to be independent. Five ad-hoc functional constraints were used to extract the activity reflecting the temporal sequence of sensory information processing along the somatosensory pathway in response to separate left and right median nerve galvanic stimulation. The constraints required only the maximization of responsiveness at specific latencies following sensory stimulation, without taking any frequency or spatial information into account. After source extraction, the reliability of each identified functional source (FS) was assessed based on the position of single dipoles fitted on its retroprojected signals and on a discrepancy measure. The FS positions were consistent with previously reported data (two early subcortical sources localized in the brain stem and thalamus, and three later sources in cortical areas), leaving negligible residual activity at the corresponding latencies. The high-frequency component of the oscillatory activity (HFO) of each extracted source was analyzed, and the integrity of the low-amplitude HFOs was preserved for each FS. On the basis of our data, we suggest that FSS can be an effective tool to investigate the HFO behavior of the different neuronal pools recruited at successive times after median nerve galvanic stimulation. As FSs are reconstructed along the entire experimental session, directional and dynamic HFO synchronization phenomena can be studied.
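A conceptual sketch of the cost-function idea may be useful: an ICA-style contrast is augmented with a term rewarding responsiveness at a chosen post-stimulus latency. This toy objective and its names are assumptions for illustration; the published method's exact cost and optimizer differ.

```python
import numpy as np

def fss_objective(w, X, stim_onsets, latency_ms, lam=1.0, fs=1000):
    """Toy contrast: non-Gaussianity plus a functional responsiveness reward."""
    s = w @ X                            # candidate source time course
    s = s / np.std(s)
    kurt = np.mean(s ** 4) - 3           # basic ICA-style contrast (excess kurtosis)
    idx = stim_onsets + int(latency_ms * fs / 1000)
    evoked = np.abs(s[idx].mean())       # mean response at the target latency
    return kurt + lam * evoked           # maximize over the unmixing vector w

# Synthetic usage: 32 channels, 10 s of data at 1 kHz, stimuli every 500 ms.
X = np.random.default_rng(0).standard_normal((32, 10000))
onsets = np.arange(100, 9000, 500)
print(fss_objective(np.ones(32) / 32, X, onsets, latency_ms=20))
```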
Abstract:
Huge advertising budgets are invested by firms to reach and convince potential consumers to buy their products. To optimize these investments, it is fundamental not only to ensure that the appropriate consumers will be reached, but also that they will be in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on individual and contextual factors that could mediate or moderate an ad's impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, one factor that potentially influences consumers' reactions to advertising has not yet been studied in marketing research: fatigue. Yet fatigue can impact key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004). Fatigue is felt when the body signals that an activity (or inactivity) should be stopped for rest, allowing the individual to compensate for its effects. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue impairs information reception, perception, coordination, attention, concentration and thinking. In addition, for Markle (1984), fatigue reduces memory and communication ability, while increasing reaction time and the number of errors. Thus, fatigue may have large effects on advertising processing. We suggest that fatigue determines the level of available resources. Some research on consumer responses to advertising claims that complexity is a fundamental element to take into consideration, since complexity determines the cognitive effort the consumer must expend to understand the message (Putrevu et al. 2004). Thus, we suggest that complexity determines the level of required resources. To study this complex question of the need for, and provision of, cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the resource matching principle: an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the message is not entirely processed by the consumer, and when available resources far exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources demanded by an ad can be high or low, and is mostly determined by the ad's layout (Peracchio and Myers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low - high - extremely high). On the other hand, the resource availability of an ad viewer is determined by many contextual and individual variables. We manipulate the level of available resources using two levels of fatigue (low - high). Tired viewers want to limit processing effort to minimal resource requirements by relying on heuristics, forming an overall impression at first glance. It will be easier for them to decode the message when ads are very simple. By contrast, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources.
Such viewers will use more analytical strategies, looking at the details of the ad. However, if ads are too complex, they will be too difficult to understand; the viewer will be discouraged from processing the information and will overlook the ad. The objective of our research is to study fatigue as a moderating variable of advertising information processing. We run two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion and memorization. In study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (tiredness level: between subjects) x 3 (ad complexity level: within subjects). Participants were randomly assigned a time slot (morning: 8-10 am or evening: 10-12 pm) for the experiment. We chose to test subjects at different times of day to obtain maximum variance in their fatigue levels, using participants' morningness/eveningness tendency (Horne & Ostberg, 1976) as a control variable. We assess fatigue level using subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to keep aspects other than complexity equal. We ran a pretest using the Resource Demands scale (Keller and Bloch 1997) and rated the ads on complexity following Morrison and Dainoff (1972) to check our complexity manipulation; three significantly different levels were found. After completing the fatigue scales, participants view the ads on a screen while their eye movements are recorded by an eye-tracker. Eye tracking allows us to identify patterns of visual attention (Pieters and Warlop 1999), from which we infer respondents' visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test. We collect measures of attitude change for persuasion, and measures of recall and recognition at various points in time for memorization. Once the effect of fatigue has been established across the student population, it is worth accounting for individual differences in fatigue severity and perception. We therefore run study 2, which is similar to the first except for the design: time of day is now within-subjects and complexity becomes between-subjects.
Abstract:
This paper presents a technique for building complex and adaptive meshes for urban and architectural design. The combination of a self-organizing map and cellular automata algorithms serves as a method for generating meshes that would otherwise remain static. It is intended as an auxiliary tool for the architect or urban planner, improving control over large amounts of spatial information. The traditional grid employed as a design aid is thereby made more general and flexible.
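To make the mechanism concrete, here is a toy self-organizing map that adapts a regular grid of mesh nodes toward an uneven distribution of site points. The cellular-automata component of the paper's method is omitted and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                            # nodes per grid side
nodes = np.stack(np.meshgrid(np.linspace(0, 1, n),
                             np.linspace(0, 1, n)), -1).reshape(-1, 2)
index = np.stack(np.meshgrid(np.arange(n),
                             np.arange(n)), -1).reshape(-1, 2)
sites = rng.random((200, 2)) ** 2                 # uneven "urban" density

for t in range(2000):
    x = sites[rng.integers(len(sites))]           # sample one site point
    bmu = np.linalg.norm(nodes - x, axis=1).argmin()      # best-matching node
    g = np.linalg.norm(index - index[bmu], axis=1)        # grid-index distance
    h = np.exp(-(g / 2.0) ** 2) * (1.0 - t / 2000)        # shrinking neighbourhood
    nodes += 0.1 * h[:, None] * (x - nodes)       # pull nodes toward the sample
```

After training, the mesh nodes concentrate where the site density is highest, which is the sense in which the grid becomes adaptive rather than static.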