693 results for VISUAL DETECTION
Abstract:
We investigated memories of room-sized spatial layouts learned by sequentially or simultaneously viewing objects from a stationary position. In three experiments, sequential viewing (one or two objects at a time) yielded subsequent memory performance that was equivalent or superior to simultaneous viewing of all objects, even though sequential viewing lacked direct access to the entire layout. This finding was replicated by replacing sequential viewing with directed viewing in which all objects were presented simultaneously and participants’ attention was externally focused on each object sequentially, indicating that the advantage of sequential viewing over simultaneous viewing may have originated from focal attention to individual object locations. These results suggest that memory representation of object-to-object relations can be constructed efficiently by encoding each object location separately, when those locations are defined within a single spatial reference system. These findings highlight the importance of considering object presentation procedures when studying spatial learning mechanisms.
Abstract:
Objects in an environment are often encountered sequentially during spatial learning, forming a path along which object locations are experienced. The present study investigated the effect of spatial information conveyed through the path in visual and proprioceptive learning of a room-sized spatial layout, exploring whether different modalities differentially depend on the integrity of the path. Learning object locations along a coherent path was compared with learning them in a spatially random manner. Path integrity had little effect on visual learning, whereas learning with the coherent path produced better memory performance than random order learning for proprioceptive learning. These results suggest that path information has differential effects in visual and proprioceptive spatial learning, perhaps due to a difference in the way one establishes a reference frame for representing relative locations of objects.
Abstract:
It has been shown that spatial information can be acquired from both visual and nonvisual modalities. The present study explored how spatial information from vision and proprioception was represented in memory, investigating orientation dependence of spatial memories acquired through visual and proprioceptive spatial learning. Experiment 1 examined whether visual learning alone and proprioceptive learning alone yielded orientation-dependent spatial memory. Results showed that spatial memories from both types of learning were orientation dependent. Experiment 2 explored how different orientations of the same environment were represented when they were learned visually and proprioceptively. Results showed that both visually and proprioceptively learned orientations were represented in spatial memory, suggesting that participants established two different reference systems based on each type of learning experience and interpreted the environment in terms of these two reference systems. The results provide some initial clues to how different modalities make unique contributions to spatial representations.
Abstract:
Sensing the mental, physical, and emotional demands of a driving task is of primary importance in road safety research and for effectively designing in-vehicle information systems (IVIS). In particular, the need for cars capable of sensing and reacting to the emotional state of the driver has been repeatedly advocated in the literature. Algorithms and sensors that identify patterns of human behavior, such as gestures, speech, eye gaze, and facial expression, are becoming available using low-cost hardware. This paper presents a new system that uses surrogate measures such as facial expression (emotion) and head pose and movement (intention) to infer task difficulty in a driving situation. Eleven drivers were recruited and observed in a simulated driving task that involved several pre-programmed events aimed at eliciting emotive reactions, such as being stuck behind slower vehicles, negotiating intersections and roundabouts, and potentially dangerous situations. The resulting system, combining facial expression and head pose classification, is capable of recognizing dangerous events (such as crashes and near misses) and stressful situations (e.g., intersections and giving way) that occur during the simulated drive.
Abstract:
Banana bunchy top disease (BBTD), caused by banana bunchy top virus (BBTV), was detected by nucleic acid hybridization techniques using a radioactively labelled probe. The 32P-labelled insert of pBT338 hybridized with nucleic acid extracts from BBTV-infected plants from Egypt and Australia, but not with those from CMV-infected plants from Egypt. BBTV was detected most strongly in the midrib, followed by the roots, meristem, corm, leaves, and pseudostem. BBTV was also detected in symptomless young plants propagated from diseased plant material under tissue culture conditions, but was not present in those propagated from healthy plant material. The sensitivities of dot blot and Southern blot hybridization for the detection of BBTV were also evaluated.
Abstract:
We present a proof of concept for a novel nanosensor for the detection of ultra-trace amounts of bio-active molecules in complex matrices. The nanosensor comprises gold nanoparticles with an ultra-thin silica shell and antibody surface attachment, which allows for the immobilization and direct detection of bio-active molecules by surface enhanced Raman spectroscopy (SERS) without requiring a Raman label. The ultra-thin passive layer (~1.3 nm thickness) prevents competing molecules from binding non-selectively to the gold surface without compromising the signal enhancement. The antibodies attached on the surface of the nanoparticles selectively bind to the target molecule with high affinity. The interaction between the nanosensor and the target analyte results in conformational rearrangements of the antibody binding sites, leading to significant changes in the surface enhanced Raman spectra of the nanoparticles when compared to the spectra of the un-reacted nanoparticles. Nanosensors of this design targeting the bio-active compounds erythropoietin and caffeine were able to detect ultra-trace amounts of the analyte, with lower quantification limits of 3.5×10−13 M and 1×10−9 M, respectively.
Abstract:
We report a tunable alternating current electrohydrodynamic (ac-EHD) force which drives lateral fluid motion within a few nanometers of an electrode surface. Because the magnitude of this fluid shear force can be tuned externally (e.g., via the application of an ac electric field), it provides a new capability to physically displace weakly (nonspecifically) bound cellular analytes. To demonstrate the utility of the tunable nanoshearing phenomenon, we present data on purpose-built microfluidic devices that employ ac-EHD force to remove nonspecific adsorption of molecular and cellular species. Here, we show that an ac-EHD device containing asymmetric planar and microtip electrode pairs resulted in a 4-fold reduction in nonspecific adsorption of blood cells and also captured breast cancer cells in blood with high efficiency (approximately 87%) and specificity. We therefore believe that this new capability of externally tuning and manipulating fluid flow could have wide applications as an innovative approach to enhance the specific capture of rare cells such as cancer cells in blood.
Abstract:
This project improved the detection and classification of very weakly expressed RhD variants in the Australian blood donor panel and contributed to the knowledge of anti-D reactivity patterns of previously undescribed RHD alleles. As such, the management of donations possessing these RHD alleles can be improved, and the overall safety of transfusion medicine pertaining to the Rh blood group system will be increased. Future projects at ARCBS will be able to utilise the procedures developed in this project, thereby decreasing throughput time. The specificity of current testing will be improved and the need for outsourced RHD testing diminished.
Abstract:
Low-temperature plasmas in direct contact with arbitrary, written linear features on a Si wafer enable catalyst-free integration of carbon nanotubes into a Si-based nanodevice platform and in situ resolution of individual nucleation events. The graded nanotube arrays show reliable, reproducible, and competitive performance in electron field emission and biosensing nanodevices.
Abstract:
The quick detection of an abrupt unknown change in the conditional distribution of a dependent stochastic process has numerous applications. In this paper, we pose a minimax robust quickest change detection problem for cases where there is uncertainty about the post-change conditional distribution. Our minimax robust formulation is based on the popular Lorden criterion of optimal quickest change detection. Under a condition on the set of possible post-change distributions, we show that the widely known cumulative sum (CUSUM) rule is asymptotically minimax robust under our Lorden minimax robust formulation as the false alarm constraint becomes more strict. We also establish general asymptotic bounds on the detection delay of misspecified CUSUM rules (i.e., CUSUM rules that are designed with post-change distributions that differ from those of the observed sequence). We exploit these bounds to compare the delay performance of asymptotically minimax robust, asymptotically optimal, and other misspecified CUSUM rules. In simulation examples, we illustrate that asymptotically minimax robust CUSUM rules can provide better detection delay performance at greatly reduced computational effort compared to competing generalised likelihood ratio procedures.
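The CUSUM rule referenced in this abstract can be illustrated in a few lines. The following is a minimal sketch, not the paper's implementation: it assumes known pre- and post-change Gaussian means with hypothetical parameters and data, and raises an alarm the first time the running CUSUM statistic crosses a threshold.

```python
def cusum_statistics(xs, llr):
    """Running CUSUM statistic W_n = max(0, W_{n-1} + llr(x_n)),
    returned after each observation."""
    w, stats = 0.0, []
    for x in xs:
        w = max(0.0, w + llr(x))
        stats.append(w)
    return stats

def gaussian_llr(mu0, mu1, sigma):
    """Log-likelihood ratio for a Gaussian mean shift mu0 -> mu1
    with common standard deviation sigma."""
    def llr(x):
        return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    return llr

def first_alarm(stats, threshold):
    """Index of the first sample at which the statistic crosses the threshold,
    or None if it never does."""
    for n, w in enumerate(stats):
        if w >= threshold:
            return n
    return None

# Hypothetical data: pre-change samples near 0, post-change samples near 2.
xs = [0.1, -0.2, 0.3, 0.0, 2.1, 1.8, 2.3, 1.9, 2.2]
stats = cusum_statistics(xs, gaussian_llr(mu0=0.0, mu1=2.0, sigma=1.0))
alarm = first_alarm(stats, threshold=4.0)
```

A misspecified CUSUM rule, in the abstract's sense, would simply use a `gaussian_llr` built from a post-change mean that differs from the one generating the observed sequence.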
Abstract:
Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked within semantic category, compared to items blocked between categories. This semantic context effect has been attributed to various mechanisms, including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results support the view that the semantic context effect in object naming arises from an incremental learning mechanism and involves increased demands on verbal self-monitoring.
Abstract:
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Abstract:
By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal, visual-tactile, integration and, thus, indicate a functional homology between human and monkey perirhinal cortices.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).