447 results for visual sensitivity
Abstract:
This research investigated the prevalence of vision disorders in Queensland Indigenous primary school children, creating the first comprehensive visual profile of Indigenous children. Findings showed reduced convergence ability and reduced visual information processing skills were more common in Indigenous compared to non-Indigenous children. Reduced visual information processing skills were also associated with reduced reading outcomes in both groups of children. As early detection of visual disorders is important, the research also reviewed the delivery of screening programs across Queensland and proposed a model for improved coordination and service delivery of vision screening to Queensland school children.
Abstract:
When radiation therapy centres are equipped with two or more linear accelerators from the same vendor, they are usually beam-matched. This work tested the sensitivity of optically stimulated luminescence dosimeters (OSLDs) across matched linear accelerators. The responses were compared with those of an unshielded diode detector for varying field sizes. Clinical studies are currently done with thermoluminescent dosimeters (TLDs), which absorb radiation and, when heated, emit light at a level determined by the absorbed dose.
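As a rough illustration of the detector comparison described above (not the authors' actual analysis; the readings and field sizes below are made up), each detector's readings can be normalised to the reference field to give output factors, which are then compared point by point:

```python
import numpy as np

def output_factors(readings, field_sizes, ref_field=10.0):
    """Normalise detector readings to the reading at the reference field size (cm)."""
    readings = np.asarray(readings, dtype=float)
    return readings / readings[field_sizes.index(ref_field)]

def percent_deviation(test_of, ref_of):
    """Percentage deviation of one detector's output factors from another's."""
    return 100.0 * (np.asarray(test_of) / np.asarray(ref_of) - 1.0)

# Hypothetical readings for 5, 10 and 20 cm square fields
fields = [5.0, 10.0, 20.0]
osld = output_factors([0.912, 1.000, 1.048], fields)
diode = output_factors([0.905, 1.000, 1.061], fields)
dev = percent_deviation(osld, diode)  # per-field OSLD-vs-diode deviation (%)
```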
Abstract:
This study was undertaken to investigate any relationship between sensory features and neck pain in female office workers using quantitative sensory measures to better understand neck pain in this group. Office workers who used a visual display monitor for more than four hours per day with varying levels of neck pain and disability were eligible for inclusion. There were 85 participants categorized according to their scores on the neck disability index (NDI): 33 with no pain (NDI < 8); 38 with mild levels of pain and disability (NDI 9–29); 14 with moderate levels of pain (NDI ⩾ 30). A fourth group of women without neck pain (n = 22) who did not work formed the control group. Measures included: thermal pain thresholds over the posterior cervical spine; pressure pain thresholds over the posterior neck, trapezius, levator scapulae and tibialis anterior muscles, and the median nerve trunk; sensitivity to vibrotactile stimulus over areas of the hand innervated by the median, ulnar and radial nerves; sympathetic vasoconstrictor response. All tests were conducted bilaterally. ANCOVA models were used to determine group differences between the means for each sensory measure. Office workers with greater self-reported neck pain demonstrated hyperalgesia to thermal stimuli over the neck, hyperalgesia to pressure stimulation over several of the sites tested, and hypoaesthesia to vibration stimulation, but no changes in the sympathetic vasoconstrictor response. There is evidence of multiple peripheral nerve dysfunction with widespread sensitivity, most likely due to altered central nociceptive processing initiated and sustained by nociceptive input from the periphery.
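An ANCOVA group comparison of the kind used above can be sketched in miniature as an extra-sum-of-squares F-test with a single continuous covariate (a generic illustration with synthetic data, not the authors' analysis code):

```python
import numpy as np

def ancova_f(y, groups, covariate):
    """One-way ANCOVA: F-test for a group effect on y after adjusting for a
    single continuous covariate (assuming homogeneous slopes across groups).
    Returns (F, df_between, df_residual)."""
    y = np.asarray(y, dtype=float)
    g = np.asarray(groups)
    labels = np.unique(g)
    n = len(y)
    # Reduced model: intercept + covariate only
    Xr = np.column_stack([np.ones(n), covariate])
    # Full model: adds group indicator columns (first group as baseline)
    dummies = [(g == lab).astype(float) for lab in labels[1:]]
    Xf = np.column_stack([Xr] + dummies)
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    df_b = len(labels) - 1          # between-groups degrees of freedom
    df_e = n - Xf.shape[1]          # residual degrees of freedom
    F = ((rss_r - rss_f) / df_b) / (rss_f / df_e)
    return F, df_b, df_e
```

A large F relative to the F(df_between, df_residual) distribution indicates group mean differences beyond what the covariate explains.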
Abstract:
This paper describes a novel system for automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method to identify the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time- and labour-intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems, which automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which comprises regional histograms of visual words coupled with the Multiple Kernel Learning framework. We present a study of several variations of generating histograms and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset.
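The "regional histograms of visual words" idea can be sketched as a spatial-pyramid histogram over quantised local descriptors (a generic sketch with hypothetical function names; the actual CPM regions and the Multiple Kernel Learning combination differ):

```python
import numpy as np

def region_histograms(word_ids, positions, img_size, n_words, levels=2):
    """Pyramid of regional visual-word histograms for one cell image.

    word_ids  : (N,) visual-word index assigned to each local descriptor
    positions : (N, 2) (x, y) patch centres in pixels
    img_size  : (width, height) of the image
    levels    : pyramid depth; level l splits the image into 2^l x 2^l regions
    """
    feats = []
    w, h = img_size
    for level in range(levels + 1):
        n = 2 ** level
        # Which pyramid region each descriptor falls into at this level
        cx = np.minimum((positions[:, 0] * n // w).astype(int), n - 1)
        cy = np.minimum((positions[:, 1] * n // h).astype(int), n - 1)
        for i in range(n):
            for j in range(n):
                mask = (cx == i) & (cy == j)
                hist = np.bincount(word_ids[mask], minlength=n_words).astype(float)
                hist /= max(hist.sum(), 1.0)   # L1-normalise each region
                feats.append(hist)
    return np.concatenate(feats)               # final image descriptor
```

In a full pipeline, one such descriptor per image (or per pyramid level, as separate kernels) would feed a kernel classifier; MKL then learns the weighting between regions or levels.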
Abstract:
The development of resistance to the antiestrogen tamoxifen occurs in a high percentage of initially responsive patients. We have developed a new model in which to investigate acquired resistance to triphenylethylenes. A stepwise in vitro selection of the hormone-independent human breast cancer variant MCF-7/LCC1 against 4-hydroxytamoxifen produced a stable resistant population designated MCF7/LCC2. MCF7/LCC2 cells retain levels of estrogen receptor expression comparable to the parental MCF7/LCC1 and MCF-7 cells. Progesterone receptor expression remains estrogen inducible in MCF7/LCC2 cells, although to levels significantly lower than observed in MCF-7 and MCF7/LCC1 cells. MCF7/LCC2 cells form tumors in ovariectomized nude mice without estrogen supplementation, and these tumors are tamoxifen resistant but can be estrogen stimulated. Significantly, MCF7/LCC2 cells have retained sensitivity to the steroidal antiestrogen ICI 182,780. These data suggest that some breast cancer patients who acquire resistance to tamoxifen may not develop cross-resistance to treatment with steroidal antiestrogens.
Abstract:
This paper presents Sequence Matching Across Route Traversals (SMART), a generally applicable sequence-based place recognition algorithm. SMART provides invariance to changes in illumination and vehicle speed while also providing moderate pose invariance and robustness to environmental aliasing. We evaluate SMART on vehicles travelling at highly variable speeds in two challenging environments: first, on an all-terrain vehicle on an off-road forest track and, second, using a passenger car traversing an urban environment across day and night. We provide comparative results to the current state-of-the-art SeqSLAM algorithm and investigate the effects of altering SMART's image matching parameters. Additionally, we conduct an extensive study of the relationship between image sequence length and SMART's matching performance. Our results show viable place recognition performance in both environments with short 10-metre sequences, and up to 96% recall at 100% precision across extreme day-night cycles when longer image sequences are used.
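The core of sequence-based place recognition of this kind can be illustrated with a minimal search over a precomputed image-difference matrix (a SeqSLAM-style sketch with made-up parameter values, not SMART's implementation; searching a range of speed ratios is what gives robustness to variable vehicle speed):

```python
import numpy as np

def best_sequence_match(diff, seq_len, v_min=0.8, v_max=1.2, v_steps=5):
    """Find the reference index best matching the latest query frames.

    diff    : (n_ref, n_query) matrix of single-image difference scores
    seq_len : number of trailing query frames used for sequence scoring
    v_*     : range of assumed reference/query speed ratios to search
    Returns (best_ref_index, best_score); lower scores are better.
    """
    n_ref, n_query = diff.shape
    q = np.arange(n_query - seq_len, n_query)  # trailing query frames
    best_ref, best_score = -1, np.inf
    for r_end in range(seq_len, n_ref):
        for v in np.linspace(v_min, v_max, v_steps):
            # Straight-line trajectory through the difference matrix,
            # ending at reference frame r_end with slope v
            r = np.round(r_end - v * (n_query - 1 - q)).astype(int)
            r = np.clip(r, 0, n_ref - 1)
            score = diff[r, q].mean()
            if score < best_score:
                best_ref, best_score = r_end, score
    return best_ref, best_score
```

Averaging differences along a whole trajectory, rather than matching single frames, is what suppresses perceptual aliasing: one accidentally similar frame cannot produce a low sequence score on its own.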
Abstract:
The focus of this research is the creation of a stage-directing training manual on the researcher's site at the National Institute of Dramatic Art. The directing procedures build on the work of Stanislavski's Active Analysis and findings from present-day visual cognition studies. Action research methodology and evidence-based data collection are employed to improve the efficacy of both the directing procedures and the pedagogical manual. The manual serves as a supplement to director training and a toolkit for the more experienced practitioner. The manual and research findings provide a unique and innovative contribution to the field of theatre directing.
Abstract:
We investigated memories of room-sized spatial layouts learned by sequentially or simultaneously viewing objects from a stationary position. In three experiments, sequential viewing (one or two objects at a time) yielded subsequent memory performance that was equivalent or superior to simultaneous viewing of all objects, even though sequential viewing lacked direct access to the entire layout. This finding was replicated by replacing sequential viewing with directed viewing in which all objects were presented simultaneously and participants’ attention was externally focused on each object sequentially, indicating that the advantage of sequential viewing over simultaneous viewing may have originated from focal attention to individual object locations. These results suggest that memory representation of object-to-object relations can be constructed efficiently by encoding each object location separately, when those locations are defined within a single spatial reference system. These findings highlight the importance of considering object presentation procedures when studying spatial learning mechanisms.
Abstract:
Objects in an environment are often encountered sequentially during spatial learning, forming a path along which object locations are experienced. The present study investigated the effect of spatial information conveyed through the path in visual and proprioceptive learning of a room-sized spatial layout, exploring whether different modalities differentially depend on the integrity of the path. Learning object locations along a coherent path was compared with learning them in a spatially random manner. Path integrity had little effect on visual learning, whereas learning with the coherent path produced better memory performance than random order learning for proprioceptive learning. These results suggest that path information has differential effects in visual and proprioceptive spatial learning, perhaps due to a difference in the way one establishes a reference frame for representing relative locations of objects.
Abstract:
It has been shown that spatial information can be acquired from both visual and nonvisual modalities. The present study explored how spatial information from vision and proprioception was represented in memory, investigating orientation dependence of spatial memories acquired through visual and proprioceptive spatial learning. Experiment 1 examined whether visual learning alone and proprioceptive learning alone yielded orientation-dependent spatial memory. Results showed that spatial memories from both types of learning were orientation dependent. Experiment 2 explored how different orientations of the same environment were represented when they were learned visually and proprioceptively. Results showed that both visually and proprioceptively learned orientations were represented in spatial memory, suggesting that participants established two different reference systems based on each type of learning experience and interpreted the environment in terms of these two reference systems. The results provide some initial clues to how different modalities make unique contributions to spatial representations.
Abstract:
Sensing the mental, physical and emotional demands of a driving task is of primary importance in road safety research and for effectively designing in-vehicle information systems (IVIS). In particular, the need for cars capable of sensing and reacting to the emotional state of the driver has been repeatedly advocated in the literature. Algorithms and sensors to identify patterns of human behavior, such as gestures, speech, eye gaze and facial expression, are becoming available using low-cost hardware. This paper presents a new system which uses surrogate measures such as facial expression (emotion) and head pose and movements (intention) to infer task difficulty in a driving situation. Eleven drivers were recruited and observed in a simulated driving task that involved several pre-programmed events aimed at eliciting emotive reactions, such as being stuck behind slower vehicles, intersections and roundabouts, and potentially dangerous situations. The resulting system, combining facial expression and head pose classification, is capable of recognizing dangerous events (such as crashes and near misses) and stressful situations (e.g. intersections and giving way) that occur during the simulated drive.
Abstract:
Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked within semantic category, compared to items blocked between category. This semantic context effect has been attributed to various mechanisms including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results provide support for the view that the semantic context effect in object naming occurs due to an incremental learning mechanism, and involves increased demands on verbal self-monitoring.
Abstract:
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Abstract:
By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal, visual-tactile, integration and, thus, indicate a functional homology between human and monkey perirhinal cortices.