914 results for Visual-system Model
Abstract:
This thesis offers a methodology to study and design effective communication mechanisms in human activities. The methodology is focused on the management of complexity. It is argued that complexity is not something objective that can be worked out analytically, but something subjective that depends on the viewpoint. It is also argued that while certain social contexts may inhibit a viewpoint's capabilities to deal with complexity, others may enhance them. Certain organisational structures are more likely than others to allow individuals to release their potential; hence the relevance of studying and designing effective organisations. The first part of the thesis offers a `cybernetic methodology' for problem solving in human activities; the second offers a `method' to study and design organisations. The cybernetic methodology discussed in this work is rooted in second-order cybernetics, or the cybernetics of observing systems (Von Foerster 1979, Maturana and Varela 1980). Its main tenet is that the known properties of the real world reside in the individual and not in the world itself. This view, which emphasises an appreciation of reality that is by nature one-sided and unilateral, triggers the need for dialogue and conversation to construct it. The `method' to study and design organisations is based on Beer's Viable System Model (Beer 1979, 1981, 1985). This model permits us to assess how successful an organisation is in coping with its environmental complexity and, moreover, to establish how to make its responses to this complexity more effective. These features of the model are of great significance in a world where complexity is perceived to be growing at an unthinkable pace. But `seeing' these features of the model assumes an effective appreciation of organisational complexity; hence the need for the methodological discussions offered in the first part of the thesis.
Abstract:
Separate physiological mechanisms which respond to spatial and temporal stimulation have been identified in the visual system. Some pathological conditions may selectively affect these mechanisms, offering a unique opportunity to investigate how psychophysical and electrophysiological tests reflect these visual processes, and thus enhance the use of the tests in clinical diagnosis. Amblyopia and optical blur were studied, representing spatial visual defects of neural and optical origin, respectively. Selective defects of the visual pathways were also studied: optic neuritis, which affects the optic nerve, and dementia of the Alzheimer type, in which the higher association areas are believed to be affected but the primary projections spared. Seventy control subjects from 10 to 79 years of age were investigated, providing material for an additional study of the effect of age on the psychophysical and electrophysiological responses. Spatial processing was measured by visual acuity, the contrast sensitivity function, or spatial modulation transfer function (MTF), and the pattern reversal and pattern onset-offset visual evoked potential (VEP). Temporal, or luminance, processing was measured by the de Lange curve, or temporal MTF, and the flash VEP. The pattern VEP was shown to reflect the integrity of the optic nerve, geniculo-striate pathway and primary projections, and was related to high temporal frequency processing. The individual components of the flash VEP differed in their characteristics. The results suggested that the P2 component reflects the function of the higher association areas and is related to low temporal frequency processing, while the P1 component reflects the primary projection areas. The combination of a delayed flash P2 component and a normal latency pattern VEP appears to be specific to dementia of the Alzheimer type and represents an important diagnostic test for this condition.
Abstract:
Although the role of ophthalmic factors in dyslexia remains the subject of controversy, recent research has indicated that the correlates of dyslexia may include binocular dysfunction, unstable motor ocular dominance, a deficit of the transient visual subsystem, and an anomaly that can be treated with tinted lenses. These features have typically been studied in isolation, and their inter-relationship has received little attention. The aim of the present research was to investigate ophthalmic factors in dyslexia, with a particular emphasis on the interaction between optometric variables. Further aims were to establish the most appropriate investigative techniques for optometric practice and to explore the relationship between optometric and psychometric variables. A pilot study was used to refine the experimental design for a subsequent detailed study of 39 children with a specific reading disability and 43 good readers, who were selected from 240 children. The groups were matched for age, sex, and performance IQ. The following factors emerged as correlates of dyslexia: slightly impaired visual acuity; reduced vergence amplitudes; increased vergence instability; decreased accommodative amplitude; poor performance on tests that were designed to assess the function of the transient visual system; and slightly slower performance on a non-verbal simulated reading visual search task. The `transient system deficit', as measured by reduced flicker sensitivity, was significantly associated with decreased accommodative and vergence amplitudes. This links the motor and sensory visual correlates of dyslexia. Although the binocular dysfunction was correlated with increased symptoms, the difference in the groups' simulated reading visual search task performance was largely attributable to psychometric variables. The results suggest that optometric problems may be a contributory factor in dyslexia, but are unlikely to play a key causative role. Several optometric variables were confounded by psychometric parameters, and this interaction should be a priority for future investigation.
Abstract:
Research in safety management has been inhibited by lack of consensus as to the definitions of the terms with which it is concerned and, in general, the lack of an agreed theoretical framework within which to collate and contrast empirical findings. This thesis sets out definitions of key terms (hazard, risk, accident, incident and safety) and provides a theoretical framework. This framework has been informed by many sources but especially the Management Oversight and Risk Tree (MORT), cybernetics and the Viable System Model (VSM). Fieldwork designs are proposed for the empirical development of an analytical framework and its use to assist study of the development of safety management in organisations.
Abstract:
The transmission of weak signals through the visual system is limited by internal noise. Its level can be estimated by adding external noise, which increases the variance within the detecting mechanism, causing masking. But experiments with white noise fail to meet three predictions: (a) noise has too small an influence on the slope of the psychometric function, (b) masking occurs even when the noise sample is identical in each two-alternative forced-choice (2AFC) interval, and (c) double-pass consistency is too low. We show that much of the energy of 2D white noise masks extends well beyond the pass-band of plausible detecting mechanisms and that this suppresses signal activity. These problems are avoided by restricting the external noise energy to the target mechanisms by introducing a pedestal with a mean contrast of 0% and independent contrast jitter in each 2AFC interval (termed zero-dimensional [0D] noise). We compared the jitter condition to masking from 2D white noise in double-pass masking and (novel) contrast matching experiments. Zero-dimensional noise produced the strongest masking, greatest double-pass consistency, and no suppression of perceived contrast, consistent with a noisy ideal observer. Deviations from this behavior for 2D white noise were explained by cross-channel suppression with no need to appeal to induced internal noise or uncertainty. We conclude that (a) results from previous experiments using white pixel noise should be re-evaluated and (b) 0D noise provides a cleaner method for investigating internal variability than pixel noise. Ironically then, the best external noise stimulus does not look noisy.
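As a rough illustration of the 0D-noise manipulation described above, the following Python sketch builds the two intervals of a 2AFC trial with either 0D noise (a zero-mean contrast pedestal with independent contrast jitter per interval) or 2D white pixel noise. The stimulus, contrast levels and jitter values are illustrative assumptions, not the parameters used in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    def gabor(size=64, sf_cycles=4.0, sigma_frac=0.15):
        """Unit-amplitude vertical Gabor used as the target template (illustrative)."""
        x = np.linspace(-0.5, 0.5, size)
        xx, yy = np.meshgrid(x, x)
        carrier = np.cos(2 * np.pi * sf_cycles * xx)
        envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma_frac**2))
        return carrier * envelope

    def interval_0d_noise(signal_contrast, jitter_sd, template):
        """0D noise: the target template itself is jittered in contrast
        (zero-mean pedestal plus independent contrast jitter per interval)."""
        jitter = rng.normal(0.0, jitter_sd)
        return (signal_contrast + jitter) * template

    def interval_2d_noise(signal_contrast, noise_rms, template):
        """2D white pixel noise: independent noise at every pixel, much of whose
        energy falls outside the band of the detecting mechanism."""
        noise = rng.normal(0.0, noise_rms, template.shape)
        return signal_contrast * template + noise

    # One 2AFC trial with 0D noise: the signal is present in only one interval.
    tpl = gabor()
    target_interval = rng.integers(2)
    intervals = [interval_0d_noise(0.05 if i == target_interval else 0.0, 0.03, tpl)
                 for i in range(2)]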
Abstract:
Richard Armstrong was educated at King’s College London (1968-1971) and subsequently at St. Catherine’s College Oxford (1972-1976). His early research involved the application of statistical methods to problems in botany and ecology. For the last 34 years, he has been a lecturer in Botany, Microbiology, Ecology, Neuroscience, and Optometry at the University of Aston. His current research interests include the application of quantitative methods to the study of neuropathology of neurodegenerative diseases with special reference to vision and the visual system.
Abstract:
Golfers, coaches and researchers alike have keyed in on golf putting as an important aspect of overall golf performance. Of the three principal putting tasks (green reading, alignment and the putting action phase), the putting action phase has attracted the most attention from coaches, players and researchers. This phase includes the alignment of the club with the ball, the swing, and ball contact. A significant amount of research in this area has focused on measuring golfers’ vision strategies with eye tracking equipment. Unfortunately, this research suffers from a number of shortcomings, which limit its usefulness. The purpose of this thesis was to address some of these shortcomings. The primary objective was to re-evaluate golfers’ putting vision strategies using binocular eye tracking equipment and to define a new, optimal putting vision strategy associated with both higher skill and success. In order to facilitate this research, bespoke computer software was developed and validated, and new gaze behaviour criteria were defined. Additionally, the effects of training (habitual) and competition conditions on the putting vision strategy were examined, as was the effect of ocular dominance. Finally, methods for improving golfers’ binocular vision strategies are discussed, and a clinical plan for the optometric management of the golfer’s vision is presented. The clinical management plan includes the correction of fundamental aspects of golfers’ vision, including monocular refractive errors and binocular vision defects, as well as enhancement of their putting vision strategy, with the overall aim of improving performance on the golf course. This research has been undertaken in order to gain a better understanding of the human visual system and how it relates to the sport performance of golfers specifically. Ultimately, the analysis techniques and methods developed are applicable to the assessment of visual performance in all sports.
Abstract:
Background - When a moving stimulus and a briefly flashed static stimulus are physically aligned in space, the static stimulus is perceived as lagging behind the moving stimulus. This widely replicated phenomenon is known as the Flash-Lag Effect (FLE). For the first time we employed biological motion as the moving stimulus, which is important for two reasons. Firstly, biological motion is processed by visual as well as somatosensory brain areas, which makes it a prime candidate for elucidating the interplay between the two systems with respect to the FLE. Secondly, discussions about the mechanisms of the FLE tend to resort to evolutionary arguments, while most studies employ highly artificial stimuli with constant velocities. Methodology/Principal Findings - Since biological motion is ecologically valid, it follows complex patterns with changing velocity. We therefore compared biological to symbolic motion with the same acceleration profile. Our results with 16 observers revealed a qualitatively different pattern for biological compared to symbolic motion, and this pattern was predicted by the characteristics of motor resonance: the amount of anticipatory processing of perceived actions, based on the induced perspective and agency, modulated the FLE. Conclusions/Significance - Our study provides the first evidence for an FLE with non-linear motion in general and with biological motion in particular. Our results suggest that predictive coding within the sensorimotor system alone cannot explain the FLE. Our findings are compatible with visual prediction (Nijhawan, 2008), which assumes that extrapolated motion representations within the visual system generate the FLE. These representations are modulated by sudden visual input (e.g. offset signals) or by input from other systems (e.g. sensorimotor) that can boost or attenuate overshooting representations in accordance with biased neural competition (Desimone & Duncan, 1995).
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. However, providing such a methodology is a challenge because of agent mobility in mobile agent systems. The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach to verify models on different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate the synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture of mobile agent systems, and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency and safety of the medical information processing system. From successfully modeling and analyzing the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool to model mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis, but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, this system demonstrates the high flexibility, efficiency and low cost of mobile agent technologies.
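The following Python sketch conveys, in a highly simplified form, the flavour of a Predicate/Transition-style net: places hold structured tokens, and transitions consume and produce tokens when their guard holds, with an intermediate "channel" place standing in loosely for agent migration between hosts. This is not the thesis' formalism (which uses two-layer PrT nets with dynamic channels for synchronous communication); the places, guards and tokens are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class PrTNet:
        """Toy Predicate/Transition-style net; illustrative only."""
        places: dict = field(default_factory=dict)   # place name -> list of tokens

        def fire(self, inputs, guard, produce):
            """Try to fire a transition: `inputs` maps place -> predicate selecting a token,
            `guard` checks the bound tokens, `produce` maps place -> new tokens."""
            binding = {}
            for place, pred in inputs.items():
                match = next((t for t in self.places.get(place, []) if pred(t)), None)
                if match is None:
                    return False
                binding[place] = match
            if not guard(binding):
                return False
            for place, tok in binding.items():
                self.places[place].remove(tok)
            for place, toks in produce(binding).items():
                self.places.setdefault(place, []).extend(toks)
            return True

    # Example: an agent token migrates from host_A to host_B through a 'channel' place.
    net = PrTNet(places={"host_A": [("agent", "task1")], "channel": [], "host_B": []})
    net.fire(inputs={"host_A": lambda t: t[0] == "agent"},
             guard=lambda b: True,
             produce=lambda b: {"channel": [b["host_A"]]})
    net.fire(inputs={"channel": lambda t: t[0] == "agent"},
             guard=lambda b: True,
             produce=lambda b: {"host_B": [b["channel"]]})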
A framework for transforming, analyzing, and realizing software designs in Unified Modeling Language
Abstract:
Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language due to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and industrial production-quality tool support. However, there is a lack of precise definition of the semantics of individual UML notations as well as of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems into software designs in UML, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML, and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The framework I propose consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules respectively. In the transformation approach, class diagrams, state machine diagrams and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can be adopted to check whether UML designs are complete or consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser generates and weaves runtime monitoring code into system implementations automatically for dependability assurance. The framework in this investigation is creative and flexible since it not only can be used to verify and validate UML designs, but also provides an approach to building models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
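As a loose illustration of weaving runtime monitoring code around an implementation, the sketch below wraps a method with a check of a stated invariant and reports a violation. It is not the code generated by the SAM property parser; the class, invariant and names are invented.

    import functools

    def monitor(invariant, description):
        """Weave a runtime check around a method: after each call the invariant is
        evaluated on the object and a violation is reported (illustrative only)."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(self, *args, **kwargs):
                result = fn(self, *args, **kwargs)
                if not invariant(self):
                    raise AssertionError(f"runtime property violated: {description}")
                return result
            return wrapper
        return decorator

    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        @monitor(lambda self: self.balance >= 0, "balance must never go negative")
        def withdraw(self, amount):
            self.balance -= amount
            return self.balance

    acct = Account(100)
    acct.withdraw(30)   # passes the woven check; withdraw(200) would raise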
Abstract:
In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. IEEE 802.11 series Wireless Local Area Network (WLAN) standards attract a lot of attention due to their low cost and high data rate. Wireless ad hoc networks that use IEEE 802.11 standards are one of the hot spots of recent network research. Designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for wireless ad hoc networks. Existing wireless applications typically use omni-directional antennas. When using an omni-directional antenna, the gain of the antenna is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time. Nodes other than the sender and the receiver must be either in the idle or the listening state, otherwise collisions can occur. The downside of the omni-directionality of antennas is that the spatial reuse ratio is low and the capacity of the network is considerably limited. Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna has the following benefits. It can improve transport capacity, because its directional main lobe decreases interference. It can increase coverage range due to a higher SINR (Signal to Interference plus Noise Ratio), i.e., with the same power consumption, better connectivity can be achieved. And power usage can be reduced, i.e., for the same coverage, a transmitter can reduce its power consumption. To utilize the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and a pipelined data transmission can be achieved by using directional antennas. Throughput can be improved significantly when the relay-enabled MAC protocol is introduced. Besides these strong points, directional antennas also have some clear drawbacks, such as the hidden terminal and deafness problems and the requirement of maintaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a tradeoff between using omni-directional and using directional antennas to obtain better network performance for this configuration. Directly and mathematically establishing the relationship between the network performance and the antenna configurations is extremely difficult, if not intractable. Therefore, in this research, we propose several clustering-based methods to obtain approximate solutions to the heterogeneous antenna configuration problem, which can improve network performance significantly. Our proposed methods consist of two steps. The first step (i.e., clustering links) is to cluster the links into different groups based on the matrix-based system model. After being clustered, the links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (i.e., labeling links) is to decide the type of antenna for each group.
For heterogeneous antennas, some groups of links will use directional antennas and others will adopt omni-directional antennas. Experiments were conducted to compare the proposed methods with existing methods, and the experimental results demonstrate that our clustering-based methods can improve network performance significantly.
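A minimal Python sketch of the two-step idea (clustering links by their neighbourhoods, then labelling each group with an antenna type) is given below. The topology, feature construction and labelling rule are simplified assumptions, not the matrix-based system model or labelling criteria used in the dissertation.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)

    # Step 0 (illustrative): a random topology of n nodes and a set of links.
    n_nodes = 20
    positions = rng.uniform(0, 100, size=(n_nodes, 2))
    links = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
             if np.linalg.norm(positions[i] - positions[j]) < 35]

    # Step 1: cluster links. Each link is described by which nodes lie near it
    # (a crude stand-in for the matrix-based system model).
    def link_features(link):
        i, j = link
        midpoint = (positions[i] + positions[j]) / 2
        return (np.linalg.norm(positions - midpoint, axis=1) < 35).astype(float)

    features = np.array([link_features(l) for l in links])
    groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

    # Step 2: label each group. Here, densely surrounded groups get directional
    # antennas (to cut interference); sparse ones keep omni-directional antennas.
    density = np.array([features[groups == g].sum(axis=1).mean() for g in range(4)])
    labels = {g: ("directional" if density[g] > np.median(density) else "omni")
              for g in range(4)}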
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include multiple stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two auditory stimuli; and integrating stimuli from the sensory organs (i.e. the ears) with stimuli delivered from brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general question of how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and an attractive target for studying stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
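A small analysis sketch of the kind of relationship reported here (correlating each site's best-frequency offset from the reference tone with its stimulation-induced judgment bias) is shown below; the data are simulated placeholders, not the recorded values.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)

    # Placeholder data standing in for the recorded IC sites (values are made up).
    n_sites = 172
    best_freq = rng.uniform(0.3, 16.0, n_sites)            # kHz, best frequency of each site
    reference_freq = rng.choice([1.0, 2.0, 4.0], n_sites)  # kHz, task reference tone

    # Stimulation-induced bias: change in proportion of "probe higher" reports
    # on stimulated vs non-stimulated trials (simulated with noise).
    octave_offset = np.log2(best_freq / reference_freq)
    bias = 0.05 * octave_offset + rng.normal(0, 0.08, n_sites)

    rho, p = spearmanr(octave_offset, bias)
    print(f"rank correlation between site offset and judgment bias: rho={rho:.2f}, p={p:.3g}")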
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
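The frequency-tagging logic can be illustrated with a short Python sketch that computes vector strength (phase locking) of a spike train at each candidate AM tag frequency, attributing the neural activity to whichever sound source the spikes entrain to; the spike times and tag frequencies below are invented for illustration.

    import numpy as np

    def vector_strength(spike_times, mod_freq):
        """Phase locking of spikes to an AM frequency (0 = none, 1 = perfect)."""
        phases = 2 * np.pi * mod_freq * spike_times
        return np.abs(np.mean(np.exp(1j * phases)))

    rng = np.random.default_rng(3)

    # Toy spike train locked to a 20 Hz AM tag, with background spikes (made-up data).
    locked = np.arange(0, 1.0, 1 / 20) + rng.normal(0, 0.002, 20)
    background = rng.uniform(0, 1.0, 30)
    spikes = np.sort(np.concatenate([locked, background]))

    # Compare entrainment to the two candidate sound sources' tag frequencies.
    for f in (20.0, 28.0):
        print(f"vector strength at {f:g} Hz AM: {vector_strength(spikes, f):.2f}")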
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF’s local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.
Abstract:
Our visual system ordinarily extracts information at low spatial frequencies (SFs) before information at high SFs. The global information extracted early can thus activate hypotheses about the identity of the object and subsequently guide the extraction of finer, more specific information. In autism spectrum disorders (ASD), however, SF perception is atypical. Moreover, the perception of individuals with ASD appears to be less influenced by their priors and previous knowledge. In the study described in the body of this thesis, our goal was to verify whether the prior of processing information from low to high SFs is present in individuals with ASD. We compared the time course of SF use in neurotypical subjects and subjects with ASD by sampling the time x SF space randomly and exhaustively. Neurotypical subjects extracted low SFs before higher ones: we were thus able to replicate the result of several previous studies, while characterising it more precisely than ever before. Subjects with ASD, for their part, extracted all useful SFs, low and high, from the outset, indicating that they did not possess the prior present in neurotypicals. It thus appears that individuals with ASD extract SFs in a purely bottom-up manner, the extraction not being guided by the activation of hypotheses.
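A rough Python sketch of randomly sampling the time x SF space is given below: Gaussian "bubbles" placed at random in a frames x SF-bands grid define, frame by frame, which spatial-frequency bands of the stimulus are revealed. The parameters and the band-filtering scheme are illustrative assumptions, not the exact method of the study.

    import numpy as np

    rng = np.random.default_rng(4)

    def random_sf_weights(n_frames, n_bands, n_bubbles=5, sigma=1.5):
        """Random sampling plane over (time, spatial frequency): Gaussian bubbles
        at random positions in the frames x SF-bands grid (parameters invented)."""
        plane = np.zeros((n_frames, n_bands))
        tt, ff = np.meshgrid(np.arange(n_frames), np.arange(n_bands), indexing="ij")
        for _ in range(n_bubbles):
            t0, f0 = rng.uniform(0, n_frames), rng.uniform(0, n_bands)
            plane += np.exp(-((tt - t0) ** 2 + (ff - f0) ** 2) / (2 * sigma ** 2))
        return np.clip(plane, 0, 1)

    def filter_frame(image, band_weights):
        """Reweight the image's spatial-frequency bands with one row of the plane."""
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.indices((h, w))
        radius = np.hypot(yy - h / 2, xx - w / 2)
        band = np.clip((radius / radius.max() * len(band_weights)).astype(int),
                       0, len(band_weights) - 1)
        return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * band_weights[band])))

    face = rng.normal(size=(64, 64))            # placeholder for a face image
    plane = random_sf_weights(n_frames=12, n_bands=8)
    movie = np.stack([filter_frame(face, plane[t]) for t in range(12)])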
Abstract:
The police use both subjective (i.e. police staff) and automated (e.g. face recognition systems) methods for the completion of visual tasks (e.g. person identification). Image quality for police tasks has been defined as image usefulness, or the suitability of the visual material to satisfy a visual task. It is not necessarily affected by artefacts that reduce visual image quality (i.e. decrease fidelity), as long as these artefacts do not affect the useful information relevant to the task. The capture of useful information is affected by the unconstrained conditions commonly encountered by CCTV systems, such as variations in illumination and high compression levels. The main aim of this thesis is to investigate aspects of image quality and video compression that may affect the completion of police visual tasks/applications with respect to CCTV imagery. This is accomplished by investigating three specific police areas/tasks utilising: 1) the human visual system (HVS) for a face recognition task, 2) automated face recognition systems, and 3) automated human detection systems. These systems (HVS and automated) were assessed with defined scene content properties and video compression (H.264/MPEG-4 AVC). The performance of imaging systems and processes (e.g. subjective investigations, performance of compression algorithms) is affected by scene content properties. No other investigation has been identified that takes scene content properties into consideration to the same extent. Results have shown that the HVS is more sensitive to compression effects than the automated systems. In automated face recognition systems, `mixed lightness' scenes were the most affected and `low lightness' scenes were the least affected by compression. In contrast, for the HVS face recognition task, `low lightness' scenes were the most affected and `medium lightness' scenes the least affected. For the automated human detection systems, `close distance' and `run approach' were among the most commonly affected scenes. Findings have the potential to broaden the methods used for testing imaging systems for security applications.