Abstract:
The dual-stream model of auditory processing postulates separate processing streams for sound meaning and for sound location. The present review draws on evidence from human behavioral and activation studies as well as from lesion studies to argue for a position-linked representation of sound objects that is distinct both from the position-independent representation within the ventral/What stream and from the explicit sound localization processing within the dorsal/Where stream.
Abstract:
The purpose of this paper is to review the scientific literature from August 2007 to July 2010. The review covers more than 420 published papers; it does not cover information from international meetings available only in abstract form. Fingermarks constitute an important chapter, with coverage of the identification process as well as detection techniques on various surfaces. We note that research has been very active, both in exploring and understanding current detection methods and in introducing groundbreaking techniques to increase the number of marks detected on various objects. The recent report from the US National Research Council (NRC) is a milestone that has prompted a critical discussion of the state of forensic science and its associated research. We can expect a surge of interest in research on the cognitive aspects of mark and print comparison, the establishment of relevant forensic error rates, and the statistical modelling of the selectivity of marks' attributes. Other biometric means of forensic identification, such as footmarks or earmarks, are also covered in this review. Compared to previous years, we noted a decrease in the number of submissions in these areas. No doubt the NRC report has sown the seeds for further investigation of these fields as well.
Abstract:
Gestures are the first forms of conventional communication that young children develop in order to intentionally convey a specific message. However, at first, infants rarely communicate successfully with their gestures, prompting caregivers to interpret them. Although the role of caregivers in early communication development has been examined, little is known about how caregivers attribute a specific communicative function to infants' gestures. In this study, we argue that caregivers rely on the knowledge about the referent that is shared with infants in order to interpret what communicative function infants wish to convey with their gestures. We videotaped interactions from six caregiver-infant dyads playing with toys when infants were 8, 10, 12, 14, and 16 months old. We coded infants' gesture production and we determined whether caregivers interpreted those gestures as conveying a clear communicative function or not; we also coded whether infants used objects according to their conventions of use as a measure of shared knowledge about the referent. Results revealed an association between infants' increasing knowledge of object use and maternal interpretations of infants' gestures as conveying a clear communicative function. Our findings emphasize the importance of shared knowledge in shaping infants' emergent communicative skills.
Abstract:
Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obfuscating distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. So far, single-trial analysis has typically been based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at the average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Meaningful sounds of common objects were used in a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising the single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture-of-Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: Results show that the occurrence of each map is structured in time and consistent across trials, both at the single-subject and at the group level. Conducting separate analyses of ERPs at the single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analyses at the average ERP level. Conclusions: This novel approach to single-trial analysis promises to have an impact on several domains. In clinical research, it makes it possible to statistically evaluate single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding the interdependencies of behaviour and brain activity at both the single-subject and group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and on inter-individual variability.
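The two analysis steps lend themselves to a compact illustration. Below is a minimal Python sketch, with the caveat that scikit-learn's GaussianMixture merely stands in for the mixture-of-Gaussians algorithm cited as [3], and that all array and function names are illustrative:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_template_maps(trials, n_maps=4):
        """Step 1: model instantaneous scalp topographies as a mixture of
        Gaussians; each component mean acts as a representative map.

        trials : array of shape (n_trials, n_timepoints, n_electrodes)
        """
        maps = trials.reshape(-1, trials.shape[-1])  # pool maps over trials/time
        return GaussianMixture(n_components=n_maps, covariance_type='diag',
                               random_state=0).fit(maps)

    def map_posteriors(gmm, trials):
        """Step 2: posterior probability of each template map at every time
        point of every trial, the basis for inference on the structure of
        topographies across trials, time, and experimental conditions."""
        n_trials, n_time, n_el = trials.shape
        post = gmm.predict_proba(trials.reshape(-1, n_el))
        return post.reshape(n_trials, n_time, -1)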
Abstract:
Normal visual perception requires differentiating foreground from background objects. Differences in physical attributes sometimes determine this relationship; often such differences must instead be inferred, as when two objects or their parts have the same luminance. Modal completion refers to the perceptual "filling-in" of object borders accompanied by concurrent brightness enhancement, in turn termed illusory contours (ICs); amodal completion is filling-in without concurrent brightness enhancement. Presently there are controversies regarding whether both completion processes use a common neural mechanism and whether perceptual filling-in is a bottom-up, feedforward process that initiates at the lowest levels of the cortical visual pathway or instead commences at higher-tier regions. We previously examined modal completion (Murray et al., 2002) and provided evidence that the earliest modal IC sensitivity occurs within higher-tier object recognition areas of the lateral occipital complex (LOC); we further proposed that previous observations of IC sensitivity in lower-tier regions likely reflect feedback modulation from the LOC. The present study tested these proposals, examining the commonality between modal and amodal completion mechanisms with high-density electrical mapping, spatiotemporal topographic analyses, and the local autoregressive average distributed linear inverse source estimation. We demonstrate a common initial mechanism for both types of completion (140 msec), manifesting as a modulation in response strength within higher-tier visual areas, including the LOC and parietal structures, whereas differential mechanisms were evident only at a subsequent time period (240 msec), with amodal completion relying on continued strong responses in these structures.
Abstract:
Knockout mice lacking the alpha-1b adrenergic receptor were tested in behavioral experiments. Reaction to novelty was first assessed in a simple test in which the time taken by the knockout mice and their littermate controls to enter a second compartment was compared. The mice were then tested in an open field to which unknown objects were subsequently added. Spatial novelty was introduced by moving one of the familiar objects to another location in the open field. Spatial behavior and memory were further studied in a homing board test and in the water maze. The alpha-1b knockout mice showed an enhanced reactivity to new situations: they were faster to enter the new environment, covered longer paths in the open field, and spent more time exploring the new objects. They reacted like controls to the modification inducing spatial novelty. In the homing board test, both the knockout and the control mice seemed to use a combination of distant visual and proximal olfactory cues, showing place preference only if the two types of cues were redundant. In the water maze, the alpha-1b knockout mice were unable to learn the task, which was confirmed in a probe trial without platform; they were perfectly able, however, to escape in a visible-platform procedure. These results confirm previous findings showing that the noradrenergic pathway is important for the modulation of behaviors such as reaction to novelty and exploration, and suggest that this is mediated, at least partly, through the alpha-1b adrenergic receptors. The lack of alpha-1b adrenergic receptors does not seem important for spatial orientation in cue-rich tasks but may interfere with orientation in situations providing distant cues only.
Abstract:
Knots are usually categorized in terms of topological properties that are invariant under changes in a knot's spatial configuration(1-4). Here we approach knot identification from a different angle, by considering the properties of particular geometrical forms which we define as 'ideal'. For a knot with a given topology and assembled from a tube of uniform diameter, the ideal form is the geometrical configuration having the highest ratio of volume to surface area. Practically, this is equivalent to determining the shortest piece of tube that can be closed to form the knot. Because the notion of an ideal form is independent of absolute spatial scale, the length-to-diameter ratio of a tube providing an ideal representation is constant, irrespective of the tube's actual dimensions. We report the results of computer simulations which show that these ideal representations of knots have surprisingly simple geometrical properties. In particular, there is a simple linear relationship between the length-to-diameter ratio and the crossing number, that is, the number of intersections in a two-dimensional projection of the knot averaged over all directions. We have also found that the average shape of knotted polymeric chains in thermal equilibrium is closely related to the ideal representation of the corresponding knot type. Our observations provide a link between ideal geometrical objects and the behaviour of seemingly disordered systems, and allow the prediction of properties of knotted polymers such as their electrophoretic mobility(5).
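The stated equivalence between "highest volume-to-surface ratio" and "shortest tube" follows from a one-line calculation, sketched here in LaTeX under the simplifying assumption of an ideal cylindrical tube of diameter D and centreline length L (end effects ignored):

    \[
    V = \tfrac{\pi}{4} D^{2} L, \qquad A = \pi D L, \qquad \Lambda = L/D .
    \]
    % Holding the volume V fixed gives D = (4V/(\pi\Lambda))^{1/3}, hence
    \[
    A = \pi \Lambda D^{2}
      = \pi^{1/3} (4V)^{2/3}\,\Lambda^{1/3}
      \;\propto\; \Lambda^{1/3},
    \]
    % so minimizing the surface area (equivalently, maximizing V/A) at
    % fixed volume is the same as minimizing the scale-invariant
    % length-to-diameter ratio \Lambda = L/D.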
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance, or spectral properties); otherwise, interpretation of the results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed solutions that also prolong experiments. Here, we present new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level aims at equalizing individual stimuli in terms of their mean luminance: each data point in the stimulus is adjusted so that the stimulus mean matches a standard value across the stimulus battery. The second level compares two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric equal to the root mean square of the distance between the two populations as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to minimize this dissimilarity, in a completely data-driven manner, and thereby the spectral differences between image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
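A minimal Python sketch of the two levels, assuming grayscale images stored as 2-D numpy arrays (all function names are illustrative, and only numpy is required):

    import numpy as np

    def equalize_mean_luminance(images, standard=None):
        """Level 1: shift each image so that its mean luminance equals a
        common standard value across the whole stimulus battery."""
        if standard is None:
            standard = np.mean([img.mean() for img in images])
        return [img - img.mean() + standard for img in images]

    def spectral_dissimilarity(pop_a, pop_b):
        """Level 2: RMS distance between the average amplitude spectra of
        two stimulus populations, over x/y spatial frequencies."""
        spec_a = np.mean([np.abs(np.fft.fft2(img)) for img in pop_a], axis=0)
        spec_b = np.mean([np.abs(np.fft.fft2(img)) for img in pop_b], axis=0)
        return np.sqrt(np.mean((spec_a - spec_b) ** 2))

    def best_split(images, n_iter=1000, seed=0):
        """Data-driven step: randomly permute the stimuli into two sets many
        times and keep the split with the smallest spectral dissimilarity."""
        rng = np.random.default_rng(seed)
        n = len(images)
        best_d, best_sets = np.inf, None
        for _ in range(n_iter):
            idx = rng.permutation(n)
            a, b = idx[: n // 2], idx[n // 2:]
            d = spectral_dissimilarity([images[i] for i in a],
                                       [images[i] for i in b])
            if d < best_d:
                best_d, best_sets = d, (a, b)
        return best_d, best_sets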
Abstract:
Real-world objects are often endowed with features that violate Gestalt principles. In our experiment, we examined the neural correlates of binding under conflict conditions in terms of the binding-by-synchronization hypothesis. We presented an ambiguous stimulus ("diamond illusion") to 12 observers. The display consisted of four oblique gratings drifting within circular apertures. Its interpretation fluctuates between bound ("diamond") and unbound (component gratings) percepts. To model a situation in which Gestalt-driven analysis contradicts the perceptually explicit bound interpretation, we modified the original diamond (OD) stimulus by speeding up one grating. Using OD and modified diamond (MD) stimuli, we managed to dissociate the neural correlates of Gestalt-related (OD vs. MD) and perception-related (bound vs. unbound) factors. Their interaction was expected to reveal the neural networks synchronized specifically in the conflict situation. The synchronization topography of EEG was analyzed with the multivariate S-estimator technique. We found that good Gestalt (OD vs. MD) was associated with a higher posterior synchronization in the beta-gamma band. The effect of perception manifested itself as reciprocal modulations over the posterior and anterior regions (theta/beta-gamma bands). Specifically, higher posterior and lower anterior synchronization supported the bound percept, and the opposite was true for the unbound percept. The interaction showed that binding under challenging perceptual conditions is sustained by enhanced parietal synchronization. We argue that this distributed pattern of synchronization relates to the processes of multistage integration ranging from early grouping operations in the visual areas to maintaining representations in the frontal networks of sensory memory.
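For reference, the multivariate S-estimator admits a compact formulation: one minus the normalized entropy of the eigenvalue spectrum of the between-channel correlation matrix. The Python sketch below follows that formulation (a sketch only; the study's exact estimator, applied per frequency band and electrode cluster, may differ in detail):

    import numpy as np

    def s_estimator(epoch):
        """Synchronization S-estimator for one multichannel EEG epoch.

        epoch : array of shape (n_channels, n_samples)
        Returns S in [0, 1]: 0 for independent channels (flat eigenvalue
        spectrum), 1 for full synchronization (one dominant eigenvalue).
        """
        n = epoch.shape[0]
        corr = np.corrcoef(epoch)              # channel-by-channel correlation
        lam = np.linalg.eigvalsh(corr) / n     # normalized eigenvalues, sum to 1
        lam = lam[lam > 1e-12]                 # guard against log(0)
        entropy = -np.sum(lam * np.log(lam))   # entropy of the spectrum
        return 1.0 - entropy / np.log(n)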
Abstract:
Detection and discrimination of visuospatial input involve at least extracting, selecting, and encoding relevant information, together with decision-making processes that allow a response to be selected. These two operations are altered, respectively, by attentional mechanisms that change discrimination capacities and by beliefs concerning the likelihood of uncertain events. Information processing is tuned by the attentional level, which acts like a filter on perception, while decision-making processes are weighted by the subjective probability of risk. In addition, it has been shown that anxiety can affect the detection of unexpected events through a modification of the level of arousal. The purpose of this study was therefore to determine whether and how decision-making and brain dynamics are affected by anxiety. To investigate these questions, the performance of women with either a high (n = 12) or a low (n = 12) score on the STAI-T (State-Trait Anxiety Inventory; Spielberger, 1983) was examined in a visuospatial decision-making task in which subjects had to distinguish a target visual pattern from non-target patterns. The target pattern was a schematic image of furniture arranged in such a way as to give the impression of a living room. Non-target patterns were created by either compressing or dilating the distances between objects. Target and non-target patterns were always presented in the same configuration. Preliminary behavioral results show no group difference in reaction time. In addition, visuospatial abilities were analyzed through signal detection theory, which quantifies perceptual decisions in the presence of uncertainty (Green and Swets, 1966) by treating the detection of a stimulus as a decision-making process determined by the nature of the stimulus and by cognitive factors. Surprisingly, no difference was observed in either the d' index (the distance between the means of the two distributions) or the c index (related to the likelihood ratio). Comparison of event-related potentials (ERPs) reveals that brain dynamics differ according to anxiety: there are differences in component latencies, particularly a delay in anxious subjects over posterior electrode sites, but these differences are compensated during later components by shorter latencies in anxious subjects compared to non-anxious ones. These inverted effects seem to indicate that the absence of a difference in reaction time relies on a compensation of attentional level that tunes cortical activation in anxious subjects, who must work hard to maintain performance.
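The d' and c indexes mentioned above have standard closed forms based on z-transformed hit and false-alarm rates. A minimal Python sketch (function and variable names are illustrative; a common correction keeps the rates strictly inside (0, 1)):

    from scipy.stats import norm

    def sdt_indexes(hits, misses, false_alarms, correct_rejections):
        """Sensitivity d' and criterion c from raw trial counts."""
        # Log-linear correction keeps rates away from 0 and 1 so that
        # the inverse-normal (z) transform stays finite.
        h = (hits + 0.5) / (hits + misses + 1.0)
        fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_h, z_fa = norm.ppf(h), norm.ppf(fa)
        d_prime = z_h - z_fa             # separation of the two distributions
        c = -0.5 * (z_h + z_fa)          # placement of the response criterion
        return d_prime, c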
Abstract:
Humans can recognize categories of environmental sounds, including vocalizations produced by humans and animals and the sounds of man-made objects. Most neuroimaging investigations of environmental sound discrimination have studied subjects while they consciously perceived, and often explicitly recognized, the stimuli. Consequently, it remains unclear to what extent auditory object processing occurs independently of task demands and consciousness. Studies in animal models have shown that environmental sound discrimination at a neural level persists even in anesthetized preparations, whereas data from anesthetized humans have thus far provided null results. Here, we studied comatose patients as a model of environmental sound discrimination capacities during unconsciousness. We included 19 comatose patients treated with therapeutic hypothermia (TH) and recorded 19-channel electroencephalography (EEG) during the first 2 days of coma. At the level of each individual patient, we applied a decoding algorithm to quantify the differential EEG responses to human vs. animal vocalizations as well as to sounds from living sources vs. man-made objects. Discrimination between vocalization types was accurate in 11 patients, and discrimination between sounds from living and man-made sources in 10 patients. At the group level, the results were significant only for the comparison between vocalization types. These results lay the groundwork for disentangling truly preferential activations in response to auditory categories from the contribution of awareness to auditory category discrimination.
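The per-patient decoding step can be illustrated generically. The sketch below uses a cross-validated linear classifier as a stand-in (scikit-learn names; the study's actual algorithm is not reproduced here):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def decoding_accuracy(epochs, labels, n_folds=5):
        """Cross-validated decoding of two sound categories for one patient.

        epochs : array of shape (n_trials, n_channels, n_samples)
        labels : array of shape (n_trials,) with values 0 or 1
        Returns mean accuracy; chance level is ~0.5 for balanced classes.
        """
        X = epochs.reshape(len(epochs), -1)        # flatten channels x time
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X, labels, cv=n_folds).mean()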
Abstract:
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.
Abstract:
This thesis investigated the spatio-temporal brain mechanisms of three processes involved in recognizing environmental sounds produced by living (animal vocalizations) and man-made (manufactured) objects: their discrimination, their plasticity, and the involvement of action representations. Results showed rapid brain discrimination between these categories beginning at ~70 ms. Then, beginning at ~150 ms, effects of plasticity are observed, without any influence of sound category. Both of these processes, discrimination and repetition priming, involved brain structures located in the temporal and frontal lobes. Activation of brain areas BA21 and BA22 suggests access to semantic representations and/or representations linked to object manipulation. To investigate the involvement of action representations in sound recognition, analyses were restricted to sounds produced by man-made objects. Results suggest access to representations of actions functionally related to the sound rather than of the actions that produced the sound. These effects occurred at ~300 ms post-stimulus onset and involved differential activity in brain regions attributed to the mirror-neuron system. These data are discussed with regard to the motor preparation of actions functionally linked to sounds. Collectively, these data show a sequential progression of cerebral activity underlying the recognition of environmental sounds: the processes occur first in a shared network of brain areas before propagating elsewhere and/or leading to differential activity within these structures. The cerebral responses observed in this work allowed us to establish a dynamic model of the discrimination of sounds produced by living and man-made objects.
Abstract:
We aimed to determine whether human subjects' reliance on different sources of spatial information encoded in different frames of reference (i.e., egocentric versus allocentric) affects their performance, decision time, and memory capacity in a short-term spatial memory task performed in the real world. Subjects were asked to play the Memory game (a.k.a. the Concentration game) without an opponent, in four different conditions that controlled for the subjects' reliance on egocentric and/or allocentric frames of reference for the elaboration of a spatial representation of the image locations enabling maximal efficiency. We report experimental data from young adult men and women, and describe a mathematical model to estimate human short-term spatial memory capacity. We found that short-term spatial memory capacity was greatest when an egocentric spatial frame of reference enabled subjects to encode and remember the image locations. However, when egocentric information was not reliable, short-term spatial memory capacity was greater and decision time shorter when an allocentric representation of the image locations with respect to distant objects in the surrounding environment was available, as compared to when only a spatial representation encoding the relationships between the individual images, independent of the surrounding environment, was available. Our findings thus further demonstrate that changes in viewpoint produced by the movement of images placed in front of a stationary subject are not equivalent to the movement of the subject around stationary images. We discuss possible limitations of classical neuropsychological and virtual reality experiments on spatial memory, which typically restrict the sensory information normally available to human subjects in the real world.
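To illustrate how a capacity parameter can in principle be estimated from Memory-game performance, here is a toy Python simulation (purely illustrative, not the authors' model): a simulated player perfectly remembers only the k most recently seen card locations, so the number of flips needed to clear the board shrinks as k grows, and fitting observed flip counts against such curves yields a capacity estimate.

    import random
    from collections import OrderedDict

    def play_memory(n_pairs, k, rng):
        """One solitaire Concentration game with a memory span of k
        locations; returns the number of card flips to clear the board."""
        board = list(range(n_pairs)) * 2
        rng.shuffle(board)
        unmatched = set(range(len(board)))
        memory = OrderedDict()                 # location -> card, size <= k
        flips = 0

        def remember(loc):
            memory[loc] = board[loc]
            memory.move_to_end(loc)
            while len(memory) > k:
                memory.popitem(last=False)     # forget the oldest location

        while unmatched:
            seen, pair = {}, None
            for loc, card in memory.items():   # any known matching pair?
                if card in seen:
                    pair = (seen[card], loc)
                    break
                seen[card] = loc
            if pair is not None:
                a, b = pair
            else:
                a = rng.choice([l for l in unmatched if l not in memory])
                remember(a)
                match = next((l for l, c in memory.items()
                              if c == board[a] and l != a), None)
                b = match if match is not None else rng.choice(
                    [l for l in unmatched if l not in memory and l != a])
                remember(b)
            flips += 2
            if board[a] == board[b]:           # pair found: remove from play
                unmatched -= {a, b}
                memory.pop(a, None)
                memory.pop(b, None)
        return flips

    rng = random.Random(0)
    for k in (0, 2, 4, 8, 16):
        games = [play_memory(8, k, rng) for _ in range(200)]
        print(f"span {k:2d}: {sum(games) / len(games):.1f} flips on average")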