982 results for visual object categorization
Cognitive disorganisation in schizotypy is associated with deterioration in visual backward masking.
Abstract:
To understand the causes of schizophrenia, a search for stable markers (endophenotypes) is ongoing. In previous years, we have shown that the shine-through visual backward masking paradigm meets the most important characteristics of an endophenotype. Here, we tested masking performance differences between healthy students with low and high schizotypy scores as determined by the self-report O-Life questionnaire assessing schizotypy along three dimensions, i.e. positive schizotypy (unusual experiences), cognitive disorganisation, and negative schizotypy (introvertive anhedonia). Forty participants performed the shine-through backward masking task and a classical cognitive test, the Wisconsin Card Sorting Task (WCST). We found that visual backward masking was impaired for students scoring high as compared to low on the cognitive disorganisation dimension, whereas the positive and negative schizotypy dimensions showed no link to masking performance. We also found group differences for students scoring high and low on the cognitive disorganisation factor for the WCST. These findings indicate that the shine-through paradigm is sensitive to differences in schizotypy which are closely linked with the pathological expression in schizophrenia.
Abstract:
Background and Aims: The international EEsAI study group is currently developing the first activity index specific for Eosinophilic Esophagitis (EoE). None of the existing dysphagia questionnaires take into account the consistency of the ingested food, which considerably impacts the symptom presentation. Goal: To develop and evaluate an EoE-specific questionnaire assessing dysphagia caused by foods of different consistencies. Methods: Based on patient interviews and chart reviews, an expert panel (EEsAI study group) identified internationally standardized food prototypes typically associated with EoE-related dysphagia. Food consistencies were correlated with EoE-related dysphagia, taking into account potential food avoidance and food processing. This Visual Dysphagia Questionnaire (VDQ) was piloted in 20 patients and is currently being evaluated in a cohort of 150 adult EoE patients. Results: The following 8 food consistency prototypes were identified: soft foods (pudding, jelly), grits, toast bread, French fries, dry rice, ground meat, raw fibrous foods (e.g. apple, carrot), and solid meat. Dysphagia was ranked on a 4-point Likert scale (0 = no difficulties; 3 = severe difficulties, food will not pass). First analysis demonstrated that severity of dysphagia is related to the eosinophil load and the presence of esophageal strictures. Conclusions: The VDQ is the first EoE-specific tool for assessing dysphagia caused by internationally standardized foods of different consistencies. This instrument also addresses food avoidance behaviour and food processing habits. The tool performed well in a pilot study and is currently being evaluated in a cohort of 150 adult EoE patients.
Abstract:
As a continuation of Jordi Romero's final-year project "Development of a virtual laboratory for Molecular Biology practicals", a complementary molecule-visualization tool has been built and integrated into the virtual laboratory itself. It is a tool for the graphical visualization of genes, ORFs, markers and restriction sequences of real or fictitious molecules. The ability to work with fictitious molecules is its great advantage over solutions such as GENBANK, which only allows working with existing molecules. Working with fictitious molecules makes it an ideal solution for teaching, since it gives teachers the possibility of preparing exercises or demonstrations with molecules that are real or designed expressly for the exercise being demonstrated. In addition, it can display the different parts visually, simultaneously or separately, offering a first approach to the interpretation of the results. It also allows marking genes, creating markers, locating restriction sequences and generating the ORFs of a molecule we create, or modifying an existing one. For the implementation, the idea of separating code from design in Flash applications has been maintained. To do so, the open-source platform Ariware ARPv2.02 was used, which provides a framework for developing object-oriented Flash applications with the code (ActionScript 2.0 classes) separated from the movieclip. Perl was used for data processing, as it is widely used in Bioinformatics and for its computation speed. The generated data are stored in a MySQL database (freely distributed), from which the data are extracted to generate XML files, using both PHP and the AMFPHP platform as the link between Flash and the remaining components.
Abstract:
Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in processing of either auditory feature alone or of both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contra-laterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex AI is modulated by the position of the sound, while that within anterior areas is not.
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals produced by algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope. The method can also be applied to ground-based images.
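The core idea of a space-variant hyperparameter can be sketched in a toy form: a 1-D Gaussian-likelihood reconstruction with an entropy prior, where each pixel's regularization weight comes from its segmented region (weak near a "star", stronger over extended areas). This is an illustrative gradient-ascent sketch under simplifying assumptions, not the authors' actual FMAPE algorithm; all names (`fmape_step`, `entropy_grad`) and the simple update scheme are hypothetical.

```python
import numpy as np

def entropy_grad(f, m):
    # Gradient of the entropy prior S(f) = -sum_i f_i * log(f_i / m):
    # dS/df_i = -(log(f_i / m) + 1)
    return -(np.log(f / m) + 1.0)

def fmape_step(f, g, blur, mu, m, step=0.05):
    # One gradient-ascent step on a MAP objective
    #   J(f) = -0.5 * ||g - blur(f)||^2 + sum_i mu_i * S_i(f):
    # Gaussian likelihood plus an entropy prior whose weight mu_i
    # varies per pixel (one value per segmented region).
    residual = g - blur(f)
    # The symmetric kernel makes blur approximately its own adjoint.
    grad = blur(residual) + mu * entropy_grad(f, m)
    return np.clip(f + step * grad, 1e-8, None)   # keep the image positive

kernel = np.array([0.25, 0.5, 0.25])              # toy 3-tap PSF
blur = lambda x: np.convolve(x, kernel, mode="same")

truth = np.zeros(64)
truth[20] = 10.0          # a "star"
truth[40:50] = 2.0        # an extended region
g = blur(truth)           # observed data (noise-free here, for brevity)

# Space-variant hyperparameter: weak regularization around the star,
# stronger smoothing over the extended region and background.
mu = np.full(64, 0.5)
mu[18:23] = 0.01

f = np.full(64, g.mean())                 # flat positive starting image
for _ in range(200):
    f = fmape_step(f, g, blur, mu, m=g.mean())
```

With a constant hyperparameter, the weight that suppresses background noise would also flatten the star; letting `mu` vary by region is what allows sharp point sources and smooth extended areas to coexist in one reconstruction.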
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance, given their potential hazards, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground lightning data and intra-cloud lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the IC and CG flash activity is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase somewhat more CG flashes are observed (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of thunderstorm parameters normalized with respect to thunderstorm total duration and the maximum value of the variables considered.
Among other findings, the study indicates that the normalized duration of the three stages of thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
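The object-oriented framework described above, with four observation-derived object types composing a higher-level thunderstorm object, might be organized along the following lines. The class and field names are hypothetical illustrations, not the study's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CappiComposite:      # radar 1-km CAPPI reflectivity composite
    time_min: float
    max_dbz: float

@dataclass
class RadarVolume:         # radar reflectivity volumetric scan
    time_min: float
    echo_top_km: float

@dataclass
class CGFlash:             # cloud-to-ground lightning flash
    time_min: float

@dataclass
class ICFlash:             # intra-cloud lightning flash
    time_min: float

@dataclass
class Thunderstorm:
    """Higher-level object built from the four observation object types."""
    cappis: List[CappiComposite] = field(default_factory=list)
    volumes: List[RadarVolume] = field(default_factory=list)
    cg: List[CGFlash] = field(default_factory=list)
    ic: List[ICFlash] = field(default_factory=list)

    def flash_rate(self, t0: float, t1: float) -> float:
        """Total (IC + CG) flashes per minute over the window [t0, t1)."""
        n = sum(t0 <= f.time_min < t1 for f in self.cg + self.ic)
        return n / (t1 - t0)

    def cg_fraction(self, t0: float, t1: float) -> float:
        """Fraction of flashes in [t0, t1) that are cloud-to-ground,
        e.g. over the development, maturity or dissipation stage."""
        cg = sum(t0 <= f.time_min < t1 for f in self.cg)
        ic = sum(t0 <= f.time_min < t1 for f in self.ic)
        return cg / (cg + ic) if cg + ic else 0.0
```

Aggregating per-stage statistics such as `cg_fraction` over the three life-cycle windows is one way the stage-dependent CG percentages reported above could be computed.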
Abstract:
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Abstract:
The kitten's auditory cortex (including the first and second auditory fields, AI and AII) is known to send transient axons to either ipsi- or contralateral visual areas 17 and 18. By the end of the first postnatal month the transitory axons, but not their neurons of origin, are eliminated. Here we investigated where these neurons project after the elimination of the transitory axon. Eighteen kittens received early (postnatal day (pd) 2 - 5) injections of long-lasting retrograde fluorescent tracers in visual areas 17 and 18 and late (pd 35 - 64) injections of other retrograde fluorescent tracers in either hemisphere, mostly in areas known to receive projections from AI and AII in the adult cat. The middle ectosylvian gyrus was analysed for double-labelled neurons in the region corresponding approximately to AI and AII. Late injections in the hemisphere contralateral to the analysed AI and AII, including all of the known auditory areas as well as some visual and 'association' areas, did not relabel neurons which had had transient projections to either ipsi- or contralateral visual areas 17 - 18. Thus, after eliminating their transient juvenile projections to visual areas 17 and 18, AI and AII neurons do not project to the other hemisphere. In contrast, relabelling was obtained with late injections in several locations in the ipsilateral hemisphere; it was expressed as per cent of the population labelled by the early injections. Few neurons (0 - 2.5%) were relabelled by large injections in the caudal part of the posterior ectosylvian gyrus and the adjacent posterior suprasylvian sulcus (areas DP, P, VP). Multiple injections in the middle ectosylvian gyrus relabelled a considerably larger percentage of neurons (13%). Single small injections in the middle ectosylvian gyrus (areas AI, AII), the caudal part of the anterior ectosylvian gyrus and the rostral part of the posterior ectosylvian gyrus relabelled 3.1 - 7.0% of neurons.
These neurons were generally near (<2.0 mm) the outer border of the late injection sites. Neurons with transient projections to ipsi- or contralateral visual areas 17 and 18 were relabelled in similar proportions by late injections at any given location. Thus, AI or AII neurons which send a transitory axon to ipsi- or contralateral visual areas 17 and 18 are most likely to form short permanent cortical connections. In that respect, they are similar to medial area 17 neurons that form transitory callosal axons and short permanent axons to ipsilateral visual areas 17 and 18.
Abstract:
Animal dispersal in a fragmented landscape depends on the complex interaction between landscape structure and animal behavior. To better understand how individuals disperse, it is important to explicitly represent the properties of organisms and the landscape in which they move. A common approach to modelling dispersal is to represent the landscape as a grid of equal-sized cells and then simulate individual movement as a correlated random walk. This approach imposes an a priori scale of resolution, which limits the representation of all landscape features and of how different dispersal abilities are modelled. We develop a vector-based landscape model coupled with an object-oriented model for animal dispersal. In this spatially explicit dispersal model, landscape features are defined based on their geographic and thematic properties, and dispersal is modelled through consideration of an organism's behavior, movement rules and searching strategies (such as visual cues). We present the model's underlying concepts and its ability to adequately represent landscape features, and provide simulations of dispersal according to different dispersal abilities. We demonstrate the potential of the model by simulating two virtual species in a real Swiss landscape. This illustrates the model's ability to simulate complex dispersal processes and provides information about dispersal, such as colonization probability and the spatial distribution of the organism's path.
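The correlated random walk that grid-based dispersal models typically use can be sketched in a few lines: each new heading equals the previous heading plus a random turning angle, so successive steps are directionally correlated. The function name and parameter values here are illustrative, not taken from the study.

```python
import math
import random

def correlated_random_walk(n_steps, step_len=1.0, turn_sd=0.3, seed=None):
    """Simulate a 2-D correlated random walk: each heading is the previous
    heading plus a Gaussian turning angle of standard deviation turn_sd
    (radians). Small turn_sd gives straighter, more persistent paths."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    heading = rng.uniform(0.0, 2.0 * math.pi)   # random initial direction
    path = [(x, y)]
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turn_sd)      # correlated turning
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

def net_displacement(path):
    """Straight-line distance between the first and last positions."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    return math.hypot(x1 - x0, y1 - y0)
```

For the same number of steps, a walker with low directional persistence (large `turn_sd`) covers far less net distance than a persistent one, which is precisely the kind of dispersal-ability difference a fixed grid resolution struggles to represent.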
Abstract:
The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses to human versus animal vocalizations were significantly stronger but topographically indistinguishable, starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories.
Vocalization discrimination transpires at times synchronous with that of face discrimination but is not functionally specialized.
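The distinction drawn above between response-strength and topographic modulations is conventionally quantified with two standard electrical-neuroimaging measures: global field power (GFP) for map strength and global map dissimilarity (DISS) for topography. A minimal sketch of both measures follows; the function names are ours, and the abstract does not state that these exact formulas were used.

```python
import numpy as np

def gfp(v):
    """Global field power: the spatial standard deviation across
    electrodes of the average-referenced potential map v
    (shape: n_electrodes,). Indexes map strength."""
    v = v - v.mean()                 # re-reference to the average
    return float(np.sqrt(np.mean(v ** 2)))

def diss(v1, v2):
    """Global map dissimilarity: GFP of the difference between two
    GFP-normalized, average-referenced maps. 0 means identical
    topographies (up to scaling); 2 means inverted topographies.
    Indexes topographic (configuration) differences independent
    of strength."""
    a = (v1 - v1.mean()) / gfp(v1)
    b = (v2 - v2.mean()) / gfp(v2)
    return gfp(a - b)
```

Because `diss` normalizes each map by its own GFP, two conditions can differ strongly in `gfp` while yielding `diss` near zero, which is the "stronger but topographically indistinguishable" response pattern the abstract reports.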