73 results for Human vision system
Abstract:
The timeline imposed by recent worldwide chemical legislation is not amenable to conventional in vivo toxicity testing, requiring the development of rapid, economical in vitro screening strategies with acceptable predictive capacities. When acquiring regulatory neurotoxicity data, it is essential to distinguish whether a toxic agent affects neurons and/or astrocytes. This study evaluated neurofilament (NF) and glial fibrillary acidic protein (GFAP) directed single-cell (S-C) ELISA and flow cytometry as methods for distinguishing cell-specific cytoskeletal responses, using the established human NT2 neuronal/astrocytic (NT2.N/A) co-culture model and a range of neurotoxic (acrylamide, atropine, caffeine, chloroquine, nicotine) and non-neurotoxic (chloramphenicol, rifampicin, verapamil) test chemicals. NF and GFAP directed flow cytometry was able to identify several of the test chemicals as being specifically neurotoxic (chloroquine, nicotine) or astrocytotoxic (atropine, chloramphenicol) via quantification of cell death in the NT2.N/A model at cytotoxic concentrations, using the resazurin cytotoxicity assay. Those neurotoxicants with low associated cytotoxicity are the most significant in terms of potential hazard to the human nervous system. The NF and GFAP directed S-C ELISA data predominantly demonstrated the known neurotoxicants only to affect the neuronal and/or astrocytic cytoskeleton in the NT2.N/A cell model at concentrations below those affecting cell viability. This report concluded that NF and GFAP directed S-C ELISA and flow cytometric methods may prove to be valuable additions to an in vitro screening strategy for differentiating cytotoxicity from specific neuronal and/or astrocytic toxicity. Further work using the NT2.N/A model and a broader array of toxicants is appropriate to confirm the applicability of these methods.
Abstract:
This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6 but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of logarithmic critical areas for the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local spatially windowed matched filter. The model was extended to complex stimuli and was found to describe the data successfully. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2. The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced into the task. The model described the data accurately.
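To make the model pipeline described above concrete, the following is a minimal illustrative sketch (not the thesis's own code): the stimulus is low-pass filtered by an optical MTF, high-pass filtered by a neural MTF up to a sampling-limited cut-off, internal noise is added, and the decision variable is a spatially windowed matched filter (cross-correlation with the target template). The MTF shapes, cut-off frequency and noise level are assumptions chosen only for illustration.

```python
import numpy as np

def observer_response(stimulus, template, pix_per_deg=60.0, cutoff_cpd=30.0, noise_sd=0.01, seed=0):
    """Decision variable for detecting `template` within `stimulus` (square contrast images)."""
    n = stimulus.shape[0]
    f = np.fft.fftfreq(n, d=1.0 / pix_per_deg)             # spatial frequency axis in cycles/deg
    fx, fy = np.meshgrid(f, f)
    radial = np.hypot(fx, fy)

    optical_mtf = np.exp(-radial / 15.0)                    # illustrative low-pass: optics of the eye
    neural_mtf = 1.0 - np.exp(-radial / 2.0)                # illustrative high-pass: neural pathways
    neural_mtf[radial > cutoff_cpd] = 0.0                   # cut-off set by the lowest neural sampling density

    filtered = np.real(np.fft.ifft2(np.fft.fft2(stimulus) * optical_mtf * neural_mtf))
    noisy = filtered + np.random.default_rng(seed).normal(0.0, noise_sd, filtered.shape)  # internal noise

    window = np.outer(np.hanning(n), np.hanning(n))         # local spatial window
    return float(np.sum(noisy * window * template))         # matched filter (cross-correlation at zero lag)

# Example: a vertical 4 cycles/deg cosine grating of 10% contrast in a 2 x 2 deg patch.
n, ppd = 120, 60.0
x_deg = np.arange(n) / ppd
grating = 0.1 * np.cos(2 * np.pi * 4.0 * x_deg)[None, :] * np.ones((n, 1))
print(observer_response(grating, grating, pix_per_deg=ppd))
```

Replacing the final matched-filter line with an auto-correlation of the noisy, windowed image with itself corresponds to the alternative observer strategy the thesis invokes for parameter-randomised stimuli.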
Abstract:
Using a hydraulic equipment manufacturing plant as the case study, this work explores the problems of systems integration in manufacturing systems design, stressing the behavioural aspects of motivation and participation, and the constraints involved in the proper consideration of the human sub-system. The need for a simple, manageable, modular organisation structure is illustrated, where it is shown, by reference to systems theory, how a business can be split into semi-autonomous operating units. The theme is the development of a manufacturing system based on an analysis of the business, its market, product, technology and constraints, coupled with a critical survey of modern management literature to develop an integrated systems design to suit a specific company in the current social environment. Society is currently moving through a socio-technical revolution, with man seeking higher levels of motivation. The transition from an autocratic/paternalistic to a participative operating mode demands systems parameters found only to a limited extent in manufacturing systems today. It is claimed that modern manufacturing systems design needs to be based on group working, job enrichment, delegation of decision making and reduced job monotony. The analysis shows how negative aspects of cellular manufacture, such as lack of flexibility and poor fixed-asset utilisation, are relatively irrelevant and misleading in the broader context of the need to come to terms with the social stresses imposed on a company operating in the industrial environment of the present and the immediate future.
Abstract:
Because of attentional limitations, the human visual system can process for awareness and response only a fraction of the input received. Lesion and functional imaging studies have identified frontal, temporal, and parietal areas as playing a major role in the attentional control of visual processing, but very little is known about how these areas interact to form a dynamic attentional network. We hypothesized that the network communicates by means of neural phase synchronization, and we used magnetoencephalography to study transient long-range interarea phase coupling in a well studied attentionally taxing dual-target task (attentional blink). Our results reveal that communication within the fronto-parieto-temporal attentional network proceeds via transient long-range phase synchronization in the beta band. Changes in synchronization reflect changes in the attentional demands of the task and are directly related to behavioral performance. Thus, we show how attentional limitations arise from the way in which the subsystems of the attentional network interact. The human brain faces an inestimable task of reducing a potentially overloading amount of input into a manageable flow of information that reflects both the current needs of the organism and the external demands placed on it. This task is accomplished via a ubiquitous construct known as “attention,” whose mechanism, although well characterized behaviorally, is far from understood at the neurophysiological level. Whereas attempts to identify particular neural structures involved in the operation of attention have met with considerable success (1-5) and have resulted in the identification of frontal, parietal, and temporal regions, far less is known about the interaction among these structures in a way that can account for the task-dependent successes and failures of attention. The goal of the present research was, thus, to unravel the means by which the subsystems making up the human attentional network communicate and to relate the temporal dynamics of their communication to observed attentional limitations in humans. A prime candidate for communication among distributed systems in the human brain is neural synchronization (for review, see ref. 6). Indeed, a number of studies provide converging evidence that long-range interarea communication is related to synchronized oscillatory activity (refs. 7-14; for review, see ref. 15). To determine whether neural synchronization plays a role in attentional control, we placed humans in an attentionally demanding task and used magnetoencephalography (MEG) to track interarea communication by means of neural synchronization. In particular, we presented 10 healthy subjects with two visual target letters embedded in streams of 13 distractor letters, appearing at a rate of seven per second. The targets were separated in time by a single distractor. This condition leads to the “attentional blink” (AB), a well studied dual-task phenomenon showing the reduced ability to report the second of two targets when an interval <500 ms separates them (16-18). Importantly, the AB does not prevent perceptual processing of missed target stimuli but only their conscious report (19), demonstrating the attentional nature of this effect and making it a good candidate for the purpose of our investigation. 
Although numerous studies have investigated factors, e.g., stimulus and timing parameters, that manipulate the magnitude of a particular AB outcome, few have sought to characterize the neural state under which “standard” AB parameters produce an inability to report the second target on some trials but not others. We hypothesized that the different attentional states leading to different behavioral outcomes (second target reported correctly or not) are characterized by specific patterns of transient long-range synchronization between brain areas involved in target processing. Showing the hypothesized correspondence between states of neural synchronization and human behavior in an attentional task entails two demonstrations. First, it needs to be demonstrated that cortical areas that are suspected to be involved in visual-attention tasks, and the AB in particular, interact by means of neural synchronization. This demonstration is particularly important because previous brain-imaging studies (e.g., ref. 5) only showed that the respective areas are active within a rather large time window in the same task and not that they are concurrently active and actually create an interactive network. Second, it needs to be demonstrated that the pattern of neural synchronization is sensitive to the behavioral outcome; specifically, to the ability to correctly identify the second of two rapidly succeeding visual targets.
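As a concrete illustration of the kind of analysis this approach rests on, the sketch below (an assumption-laden simplification, not the authors' analysis pipeline) computes a phase-locking value between two sensor or source time series: band-pass filter in the beta band, extract instantaneous phase with the Hilbert transform, and measure the consistency of the phase difference across trials at each time point.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(13.0, 30.0)):
    """Phase-locking value per time point; x, y have shape (n_trials, n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
    phase_y = np.angle(hilbert(filtfilt(b, a, y, axis=-1), axis=-1))
    # Mean resultant length of the phase difference across trials: 0 = no locking, 1 = perfect locking.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))

# Synthetic example: two noisy channels sharing a 20 Hz (beta-band) component.
fs, n_trials, n_samples = 600, 50, 600
t = np.arange(n_samples) / fs
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 20.0 * t)
x = shared + 0.5 * rng.standard_normal((n_trials, n_samples))
y = shared + 0.5 * rng.standard_normal((n_trials, n_samples))
print(plv(x, y, fs).mean())
```

Comparing such synchronization measures between trials in which the second target was and was not reported is, in essence, how network dynamics are linked to behavioural outcome.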
Abstract:
A sizeable amount of the testing in eye care requires either the identification of targets such as letters to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images are often unreliable. Recent advances in mobile computing hardware and computer-vision systems can be used to enhance clinical testing in optometry. High resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer vision systems. Three novel apps for assessing reading speed, contrast sensitivity and amplitude of accommodation were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination and automate the testing procedure for reading speed assessment. Meanwhile, the contrast sensitivity app made use of a bit-stealing technique and a swept-frequency target to rapidly assess a patient’s full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone device to allow free-space amplitude of accommodation measurement. A new geometrical model of the tear film and a ray-tracing simulation of a Placido disc topographer were produced to provide insights into the effect of tear film breakdown on ophthalmic images. Furthermore, a new computer vision system, which used a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision systems for the clinical assessment of tear stability. Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison to their paper-based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. There remain questions over the validity of using a swept-frequency sine-wave target to assess patients’ contrast sensitivity functions, as no clinical test provides the range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer vision system found that the author’s tear metric correlated better with existing subjective measures of tear film stability than those of a competing computer-vision system. However, repeatability was poor in comparison to the subjective measures due to eyelash interference. The new mobile apps, computer vision system, and studies outlined in this thesis provide further insight into the potential of applying mobile and image processing technology to enhance clinical testing by eye care professionals.
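The swept-frequency target mentioned above can be pictured with the short sketch below (an illustration of the general idea, not the author's app code): spatial frequency increases logarithmically across the image while contrast decreases logarithmically down it, so the visible envelope of the gratings traces out a contrast sensitivity function. Display-specific steps such as bit stealing and luminance calibration are deliberately omitted.

```python
import numpy as np

def swept_csf_target(width=1024, height=512, f_min=0.5, f_max=40.0,
                     c_min=0.002, c_max=1.0, pix_per_deg=40.0):
    """Campbell-Robson-style chart: log frequency sweep in x, log contrast sweep in y."""
    x_deg = np.arange(width) / pix_per_deg
    k = np.log(f_max / f_min) / x_deg[-1]                      # rate of the log frequency sweep
    phase = 2 * np.pi * f_min * (np.exp(k * x_deg) - 1.0) / k  # integral of the instantaneous frequency
    contrast = np.logspace(np.log10(c_max), np.log10(c_min), height)[:, None]
    return 0.5 + 0.5 * contrast * np.sin(phase)[None, :]       # values around a mean luminance of 0.5

img = swept_csf_target()
print(img.shape, float(img.min()), float(img.max()))
```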
Abstract:
Purpose: The Shin-Nippon SRW-5000 is an open-view autorefractor that superseded the Canon R-1 autorefractor in the mid-1990s and has been used widely in optometry and vision science laboratories. It has been used to measure refractive error, accommodation responses both statically and dynamically, and off-axis refractive error, and has been adapted to measure pupil size. This paper presents an overview of the original 2001 clinical evaluation of the SRW-5000 in adults (Mallen et al., Ophthal Physiol Opt 2001; 21: 101) and provides an update on the use and modification of the instrument since the original publication. Recent findings: The SRW-5000 instrument, and the family of devices which followed, have shown excellent validity, repeatability, and utility in clinical and research settings. The instruments have also shown great potential for increased research functionality following a number of modifications. Summary: The SRW-5000 and its derivatives have been, and continue to be, of significant importance in our drive to understand myopia progression, myopia control techniques, and oculomotor function in human vision.
Abstract:
A model system is presented using human umbilical vein endothelial cells (HUVECs) to investigate the role of homocysteine (Hcy) in atherosclerosis. HUVECs are shown to export Hcy at a rate determined by the flux through the methionine/Hcy pathway. Additional methionine increases intracellular methionine, decreases intracellular folate, and increases Hcy export, whereas additional folate inhibits export. An inverse relationship exists between intracellular folate and Hcy export. Hcy export may be regulated by intracellular S-adenosyl methionine rather than by Hcy. Human LDLs exposed to HUVECs exporting Hcy undergo time-related lipid oxidation, a process inhibited by the thiol trap dithionitrobenzoate. This is likely to be related to the generation of hydroxyl radicals, which we show are associated with Hcy export. Although Hcy is the major oxidant, cysteine also contributes, as shown by the effect of glutamate. Finally, the LDL oxidized in this system showed a time-dependent increase in uptake by human macrophages, implying an upregulation of the scavenger receptor. These results suggest that continuous export of Hcy from endothelial cells contributes to the generation of extracellular hydroxyl radicals, with associated oxidative modification of LDL and incorporation into macrophages, a key step in atherosclerosis. Factors that regulate intracellular Hcy metabolism modulate these effects. Copyright © 2005 by the American Society for Biochemistry and Molecular Biology, Inc.
Abstract:
We sought to determine the extent to which colour (and luminance) signals contribute towards the visuomotor localization of targets. To do so we exploited the movement-related illusory displacement a small stationary window undergoes when it has a continuously moving carrier grating behind it. We used drifting (1.0-4.2 Hz) red/green-modulated isoluminant gratings or yellow/black luminance-modulated gratings as carriers, each curtailed in space by a stationary, two-dimensional window. After each trial, the perceived location of the window was recorded with reference to an on-screen ruler (perceptual task) or the on-screen touch of a ballistic pointing movement made without visual feedback (visuomotor task). Our results showed that the perceptual displacement measures were similar for each stimulus type and weakly dependent on stimulus drift rate. However, while the visuomotor displacement measures were similar for each stimulus type at low drift rates (<4 Hz), they were significantly larger for luminance than colour stimuli at high drift rates (>4 Hz). We show that the latter cannot be attributed to differences in perceived speed between stimulus types. We assume, therefore, that our visuomotor localization judgements were more susceptible to the (carrier) motion of luminance patterns than colour patterns. We suggest that, far from being detrimental, this susceptibility may indicate the operation of mechanisms designed to counter the temporal asynchrony between perceptual experiences and the physical changes in the environment that give rise to them. We propose that perceptual localisation is equally supported by both colour and luminance signals but that visuomotor localisation is predominantly supported by luminance signals. We discuss the neural pathways that may be involved with visuomotor localization. © 2007 Springer-Verlag.
Abstract:
The ability to measure the responses of the oculomotor system, such as ocular accommodation, accurately and in real-world environments is essential. New instruments have been developed over the past 50 years to measure eye focus, including the extensively utilised and well-validated Canon R-1, but in general these have had limitations such as a closed field-of-view, poor temporal resolution and extensive instrumentation bulk that prevents naturalistic performance of environmental tasks. The use of photoretinoscopy, and more specifically the PowerRefractor, was examined in this regard due to its remote nature, its binocular measurement of accommodation, eye movement and pupil size, and its open field-of-view. The accuracy of the PowerRefractor in measuring refractive error was on average similar to, but more variable than, subjective refraction and previously validated instrumentation. The PowerRefractor was found to be tolerant of eye movements away from the visual axis, but could not function with small pupil sizes in brighter illumination. The PowerRefractor underestimated the lead of accommodation and overestimated the slope of the accommodation stimulus-response curve. The PowerRefractor and the SRW-5000 were used to measure oculomotor responses in a variety of real-world environments: spectacles compared to single-vision contact lenses; the use of multifocal contact lenses by pre-presbyopes (relevant to studies on myopia retardation); and ‘accommodating’ intraocular lenses. Due to the accuracy concerns with the PowerRefractor, a purpose-built photoretinoscope was designed to measure the oculomotor response to a monocular head-mounted display. In conclusion, this thesis has shown the ability of photoretinoscopy to quantify changes in the oculomotor system. However, there are some major limitations to the PowerRefractor, such as the need for individual calibration for accurate measures of accommodation and vergence, and the relatively large pupil size necessary for measurement.
Abstract:
The recording of visual acuity using the Snellen letter chart is only a limited measure of the visual performance of an eye wearing a refractive aid. Qualitative in addition to quantitative information is required to establish such a parameter: spatial, temporal and photometric aspects must all be incorporated into the test procedure. The literature relating to the correction of ametropia by refractive aids was reviewed. Selected aspects of a comparison between the correction provided by spectacles and contact lenses were considered. Special attention was directed to soft hydrophilic contact lenses. Despite technological advances which have produced physiologically acceptable soft lenses, unpredictable visual factors remain associated with this recent form of refractive aid. Several techniques for vision assessment were described, and previous studies of visual performance were discussed. To facilitate the investigation of visual performance in a clinical environment, a new semi-automated system was described: this utilized the presentation of broken ring test stimuli on a television screen. The research project comprised two stages. Initial work was concerned with the validation of the television system, including the optimization of its several operational variables. The second phase involved the utilization of the system in an investigation of visual performance aspects of the first month of regular daily soft contact lens wear by experimentally-naive subjects. On the basis of the results of this work, an ‘homoeostatic’ model has been proposed to represent the strategy which an observer adopts in order to optimize his visual performance with soft contact lenses.
Abstract:
This thesis investigates various aspects of peripheral vision, which is known not to be as acute as vision at the point of fixation. Differences between foveal and peripheral vision are generally thought to be of a quantitative rather than a qualitative nature. However, the rate of decline in sensitivity between foveal and peripheral vision is known to be task dependent and the mechanisms underlying the differences are not yet well understood. Several experiments described here have employed a psychophysical technique referred to as 'spatial scaling'. Thresholds are determined at several eccentricities for ranges of stimuli which are magnified versions of one another. Using this methodology a parameter called the E2 value is determined, which defines the eccentricity at which stimulus size must double in order to maintain performance equivalent to that at the fovea. Experiments of this type have evaluated the eccentricity dependencies of detection tasks (kinetic and static presentation of a differential light stimulus), resolution tasks (bar orientation discrimination in the presence of flanking stimuli, word recognition and reading performance), and relative localisation tasks (curvature detection and discrimination). Most tasks could be made equal across the visual field by appropriate magnification. E2 values are found to vary widely dependent on the task, and possible reasons for such variations are discussed. The dependence of positional acuity thresholds on stimulus eccentricity, separation and spatial scale parameters is also examined. The relevance of each factor in producing 'Weber's law' for position can be determined from the results.
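The E2 definition above corresponds to the scaling relation conventionally used in this literature (stated here for orientation; it is the standard form rather than a formula quoted from the thesis):

\[ S(E) = S_0 \left( 1 + \frac{E}{E_2} \right) \]

where S_0 is the threshold stimulus size at the fovea, E is eccentricity in degrees and S(E) is the size required to maintain foveal-equivalent performance; at E = E_2 the required size has exactly doubled.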
Abstract:
The observation that performance in many visual tasks can be made independent of eccentricity by increasing the size of peripheral stimuli according to the cortical magnification factor has dominated studies of peripheral vision for many years. However, it has become evident that the cortical magnification factor cannot be successfully applied to all tasks. To find out why, several tasks were studied using spatial scaling, a method which requires no pre-determined scaling factors (such as those predicted from cortical magnification) to magnify the stimulus at any eccentricity. Instead, thresholds are measured at the fovea and in the periphery using a series of stimuli, all of which are simply magnified versions of one another. Analysis of the data obtained in this way reveals the value of the parameter E2, the eccentricity at which foveal stimulus size must double in order to maintain performance equivalent to that at the fovea. The tasks investigated include hyperacuities (vernier acuity, bisection acuity, spatial interval discrimination, referenced displacement detection, and orientation discrimination), unreferenced instantaneous and gradual movement, flicker sensitivity, and face discrimination. In all cases, the tasks obeyed the principle of spatial scaling since performance in the periphery could be equated to that at the fovea by appropriate magnification. However, E2 values found for different tasks varied over a 200-fold range. In spatial tasks (e.g. bisection acuity and spatial interval discrimination) E2 values were low, reaching about 0.075 deg, whereas in movement tasks the values could be as high as 16 deg. Using a method of spatial scaling, it has been possible to equate foveal and peripheral performance in many diverse visual tasks. The rate at which peripheral stimulus size had to be increased as a function of eccentricity was dependent upon the stimulus conditions and the task itself. Possible reasons for these findings are discussed.
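To give a feel for what the reported 200-fold spread in E2 implies, here is a small worked example (assuming the standard scaling relation S(E) = S0 * (1 + E/E2); the 10 deg eccentricity and unit foveal size are arbitrary illustrative choices, not values from the thesis):

```python
def scaled_size(s_fovea, eccentricity_deg, e2_deg):
    """Peripheral stimulus size needed for foveal-equivalent performance, assuming S(E) = S0 * (1 + E/E2)."""
    return s_fovea * (1.0 + eccentricity_deg / e2_deg)

# E2 values taken from the abstract above: ~0.075 deg (bisection acuity) vs ~16 deg (movement).
for task, e2 in [("bisection acuity", 0.075), ("movement", 16.0)]:
    print(task, round(scaled_size(1.0, 10.0, e2), 1))
# At 10 deg eccentricity the spatial task needs ~134x the foveal size, the movement task only ~1.6x.
```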
Abstract:
A distinct feature of several recent models of contrast masking is that detecting mechanisms are divisively inhibited by a broadly tuned ‘gain pool’ of narrow-band spatial pattern mechanisms. The contrast gain control provided by this ‘cross-channel’ architecture achieves contrast normalisation of early pattern mechanisms, which is important for keeping them within the non-saturating part of their biological operating characteristic. These models superseded earlier ‘within-channel’ models, which had supposed that masking arose from direct stimulation of the detecting mechanism by the mask. To reveal the extent of masking, I measured the levels produced with large ranges of pattern spatial relationships that had not been explored before. Substantial interactions between channels tuned to different orientations and spatial frequencies were found. Differences in the masking levels produced with single and multiple component mask patterns provided insights into the summation rules within the gain pool. A widely used cross-channel masking model was tested on these data and was found to perform poorly. The model was developed further, and a version in which linear summation was allowed between all components within the gain pool, with the exception of the self-suppressing route, typically provided the best account of the data. Subsequently, an adaptation paradigm was used to probe the processes underlying pooled responses in masking. This delivered less insight into the pooling than the other studies, and areas were identified that require investigation for a new unifying model of masking and adaptation. In further experiments, levels of cross-channel masking were found to be greatly influenced by the spatio-temporal tuning of the channels involved. Old masking experiments and ideas relying on within-channel models were re-evaluated in terms of contemporary cross-channel models (e.g. estimations of channel bandwidths from orientation masking functions), and this led to different conclusions from those originally arrived at. The investigation of effects with spatio-temporally superimposed patterns is focussed upon throughout this work, though it is shown how these enquiries might be extended to investigate effects across spatial and temporal position.
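For orientation, the divisive ‘gain pool’ architecture referred to above is usually written in a form like the following (a generic cross-channel gain-control equation of the Foley type; the symbols and exponents are illustrative, not the thesis's fitted model):

\[ r_{det} = \frac{(w_{det}\, C_{det})^{p}}{Z + \sum_{i} (w_{i}\, C_{i})^{q}} \]

where C_det is the contrast of the component exciting the detecting mechanism, the sum in the denominator runs over the mask components feeding the gain pool (in some variants including the mechanism's own self-suppressing route), the w terms are sensitivity weights, Z sets the contrast at which divisive suppression takes hold, and an excitatory exponent p slightly greater than the suppressive exponent q yields the familiar dipper-shaped masking functions.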