921 results for semantic segmentation
Abstract:
We used event-related functional magnetic resonance imaging (fMRI) to investigate neural responses associated with the semantic interference (SI) effect in the picture-word task. Independent stage models of word production assume that the locus of the SI effect is at the conceptual processing level (Levelt et al. [1999]: Behav Brain Sci 22:1-75), whereas interactive models postulate that it occurs at phonological retrieval (Starreveld and La Heij [1996]: J Exp Psychol Learn Mem Cogn 22:896-918). In both types of model, resolution of the SI effect occurs as a result of competitive, spreading activation without the involvement of inhibitory links. These assumptions were tested by randomly presenting participants with trials from semantically related and lexical control distractor conditions and acquiring image volumes coincident with the estimated peak hemodynamic response for each trial. Overt vocalization of picture names occurred in the absence of scanner noise, allowing reaction time (RT) data to be collected. Analysis of the RT data confirmed the SI effect. Regions showing differential hemodynamic responses during the SI effect included the left mid section of the middle temporal gyrus, left posterior superior temporal gyrus, left anterior cingulate cortex, and bilateral orbitomedial prefrontal cortex. Additional responses were observed in the frontal eye fields, left inferior parietal lobule, and right anterior temporal and occipital cortex. The results are interpreted as indirectly supporting interactive models that allow spreading activation between both the conceptual processing and phonological retrieval levels of word production. In addition, the data confirm that selective attention/response suppression has a role in resolving the SI effect similar to the way in which Stroop interference is resolved.
We conclude that neuroimaging studies can provide information about the neuroanatomical organization of the lexical system that may prove useful for constraining theoretical models of word production. (C) 2001 Wiley-Liss, Inc.
Abstract:
Lateral ventricular volumes based on segmented brain MR images can be significantly underestimated if partial volume effects are not considered. This is because a group of voxels in the neighborhood of the lateral ventricles is often mis-classified as gray matter voxels due to partial volume effects. This group of voxels is actually a mixture of ventricular cerebrospinal fluid and white matter and, therefore, a portion of it should be included as part of the lateral ventricular structure. In this note, we describe an automated method for the measurement of lateral ventricular volumes on segmented brain MR images. Image segmentation was carried out using a combination of intensity correction and thresholding. The method features a procedure for addressing mis-classified voxels in the neighborhood of the lateral ventricles. A detailed analysis showed that lateral ventricular volumes could be underestimated by 10 to 30%, depending upon the size of the lateral ventricular structure, if mis-classified voxels were not included. Validation of the method was done through comparison with averaged manually traced volumes. Finally, the merit of the method is demonstrated in the evaluation of the rate of lateral ventricular enlargement. (C) 2001 Elsevier Science Inc. All rights reserved.
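The partial-volume correction described above can be sketched as a linear intensity mixture model: voxels in the ambiguous intensity band between pure CSF and white matter contribute a fractional CSF volume instead of being discarded as gray matter. A minimal illustration in Python; the function name, thresholds, and linear mixture assumption are illustrative, not the paper's actual implementation:

```python
import numpy as np

def ventricle_volume(image, csf_thresh, gm_thresh, voxel_vol=1.0):
    """Estimate lateral ventricular volume with a partial-volume correction.

    Voxels darker than csf_thresh count fully as ventricular CSF.  Voxels in
    the ambiguous band [csf_thresh, gm_thresh) -- the ones otherwise
    mis-classified as gray matter -- are treated as CSF/white-matter mixtures
    and contribute a fractional volume proportional to how close their
    intensity is to pure CSF (linear mixture assumption).
    """
    image = np.asarray(image, dtype=float)
    pure_csf = image < csf_thresh
    mixed = (image >= csf_thresh) & (image < gm_thresh)
    frac = (gm_thresh - image[mixed]) / (gm_thresh - csf_thresh)
    return voxel_vol * (pure_csf.sum() + frac.sum())
```

Dropping the `frac` term reproduces the underestimation the note reports, since only the pure CSF voxels would then be counted.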
Abstract:
Given the importance of syllables in the development of reading, spelling, and phonological awareness, information is needed about how children syllabify spoken words. To what extent is syllabification affected by knowledge of spelling, to what extent by phonology, and which phonological factors are influential? In Experiment 1, six- and seven-year-old children did not show effects of spelling on oral syllabification, performing similarly on words such as habit and rabbit. Spelling influenced the syllabification of older children and adults, with the results suggesting that knowledge of spelling must be well entrenched before it begins to affect oral syllabification. Experiment 2 revealed influences of phonological factors on syllabification that were similar across age groups. Young children, like older children and adults, showed differences between words with short and long vowels (e.g., lemon vs. demon) and words with sonorant and obstruent intervocalic consonants (e.g., melon vs. wagon). (C) 2002 Elsevier Science (USA). All rights reserved.
Abstract:
The impact of basal ganglia dysfunction on semantic processing was investigated by comparing the performance of individuals with nonthalamic subcortical (NS) vascular lesions, Parkinson's disease (PD), cortical lesions, and matched controls on a semantic priming task. Nonequibiased lexical ambiguity primes were used in auditory prime-target pairs comprising four critical conditions: dominant related (e.g., bank-money), subordinate related (e.g., bank-river), dominant unrelated (e.g., foot-money), and subordinate unrelated (e.g., bat-river). Participants made speeded lexical decisions (word/nonword) on targets using a go/no-go response. When a short prime-target interstimulus interval (ISI) of 200 ms was employed, all groups demonstrated priming for dominant and subordinate conditions, indicating nonselective meaning facilitation and intact automatic lexical processing. Differences emerged at the long ISI (1250 ms), where control and cortical lesion participants evidenced selective facilitation of the dominant meaning, whereas the NS and PD groups demonstrated a protracted period of nonselective meaning facilitation. This finding suggests a circumscribed deficit in the selective attentional engagement of the semantic network on the basis of meaning frequency, possibly implicating a disturbance of frontal-subcortical systems influencing inhibitory semantic mechanisms.
Abstract:
In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a disease progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify, and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The median of the resulting image histogram was used to dynamically determine a threshold level, which allows the determination of a smoothed exterior contour of the worm; the medial axis of the worm body was obtained by thinning its skeleton. Based on the exterior contour diameter and the medial axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing non-biased, reliable, and high-throughput quantification of protein aggregates. This may lead to significant improvements in treatment planning and preventive interventions for neurodegenerative diseases.
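The diffusion step described above, with Tukey's biweight as the edge-stopping function, can be sketched as a Perona-Malik-style update in which flow toward each neighbour is weighted down near strong edges. A minimal 2D illustration; the parameter values and the wrap-around border handling (`np.roll`) are simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def tukey_g(grad, sigma):
    """Tukey's biweight edge-stopping function: diffusion is suppressed
    entirely across gradients stronger than sigma, preserving edges."""
    return np.where(np.abs(grad) <= sigma,
                    (1.0 - (grad / sigma) ** 2) ** 2, 0.0)

def anisotropic_diffusion(img, n_iter=20, sigma=30.0, lam=0.2):
    """Perona-Malik-style diffusion using Tukey's biweight (sketch)."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbours (wrap-around borders).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Each neighbour flow is weighted by the edge-stopping function.
        u = u + lam * (tukey_g(dn, sigma) * dn + tukey_g(ds, sigma) * ds +
                       tukey_g(de, sigma) * de + tukey_g(dw, sigma) * dw)
    return u
```

Compared with the classic exponential or rational stopping functions, Tukey's biweight cuts diffusion to exactly zero beyond the scale parameter, which is what makes it attractive for preserving the worm's outline.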
Abstract:
Image segmentation is a ubiquitous task in medical image analysis, which is required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains to date a supervised process in daily clinical practice. Indeed, challenging data often requires user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to efficiently interact with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application where user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, thus contributing to higher segmentation accuracy, while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, where the user input is mapped to a non-Cartesian space while this information is used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term which improves the interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance both in terms of segmentation accuracy, as well as in terms of total analysis time. This contributes to a more efficient use of the existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire-based algorithm.
Abstract:
While fluoroscopy is still the most widely used imaging modality to guide cardiac interventions, the fusion of pre-operative Magnetic Resonance Imaging (MRI) with real-time intra-operative ultrasound (US) is rapidly gaining clinical acceptance as a viable, radiation-free alternative. In order to improve the detection of the left ventricular (LV) surface in 4D ultrasound, we propose to take advantage of the pre-operative MRI scans to extract a realistic geometrical model representing the patient's cardiac anatomy. This could serve as prior information in the interventional setting, making it possible to increase the accuracy of the anatomy extraction step in US data. We have made use of a real-time 3D segmentation framework used in the recent past to solve the LV segmentation problem in MR and US data independently, and we take advantage of this common link to introduce the prior information as a soft penalty term in the ultrasound segmentation algorithm. We tested the proposed algorithm on a clinical dataset of 38 patients undergoing both MR and US scans. The introduction of the personalized shape prior improves the accuracy and robustness of the LV segmentation, as supported by the error reduction when compared to core-lab manual segmentation of the same US sequences.
Abstract:
One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the costal cartilage's tubular structure to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm on 6 non-contrast CT datasets from PE patients. A good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75±0.04 and an average mean surface distance of 1.69±0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to an extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
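The multi-scale vesselness filtering mentioned above is typically a Frangi-style measure built from the eigenvalues of the image Hessian: tubular structures have one small and one large (negative, for bright tubes) eigenvalue. A single-scale 2D sketch for intuition (the paper works in 3D and across scales); the finite-difference Hessian, parameter values, and function name are illustrative assumptions:

```python
import numpy as np

def vesselness_2d(img, beta=0.5, c=15.0):
    """Single-scale, 2D Frangi-style vesselness measure (sketch).

    With Hessian eigenvalues l1, l2 ordered so that |l1| <= |l2|, a bright
    tubular structure has small |l1| and a large negative l2.
    """
    u = np.asarray(img, dtype=float)
    # Second-order central differences as a crude Hessian (no smoothing).
    uxx = np.roll(u, -1, 1) - 2 * u + np.roll(u, 1, 1)
    uyy = np.roll(u, -1, 0) - 2 * u + np.roll(u, 1, 0)
    uxy = (np.roll(np.roll(u, -1, 0), -1, 1) - np.roll(np.roll(u, -1, 0), 1, 1)
           - np.roll(np.roll(u, 1, 0), -1, 1)
           + np.roll(np.roll(u, 1, 0), 1, 1)) / 4.0
    # Closed-form eigenvalues of the symmetric 2x2 Hessian.
    mean = (uxx + uyy) / 2.0
    delta = np.sqrt(((uxx - uyy) / 2.0) ** 2 + uxy ** 2)
    mu1, mu2 = mean + delta, mean - delta
    swap = np.abs(mu1) > np.abs(mu2)
    l1 = np.where(swap, mu2, mu1)                     # smaller magnitude
    l2 = np.where(swap, mu1, mu2)                     # larger magnitude
    rb2 = (l1 / np.where(l2 == 0.0, 1.0, l2)) ** 2    # blob-vs-tube ratio
    s2 = l1 ** 2 + l2 ** 2                            # structure strength
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(l2 < 0.0, v, 0.0)                 # bright-on-dark tubes only
```

Running this across several Gaussian scales and keeping the per-pixel maximum is what makes the filter multi-scale in practice.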
Abstract:
Quantitative analysis of cine cardiac magnetic resonance (CMR) images for the assessment of global left ventricular morphology and function remains a routine task in clinical cardiology practice. To date, this process requires user interaction and therefore prolongs the examination (i.e. cost) and introduces observer variability. In this study, we sought to validate the feasibility, accuracy, and time efficiency of a novel framework for automatic quantification of left ventricular global function in a clinical setting.
Abstract:
We provide an agent with the capability to infer the relations (assertions) entailed by the rules that describe the formal semantics of an RDFS knowledge base. The proposed inferencing process formulates each semantic restriction as a rule implemented within a SPARQL query statement. The process expands the original RDF graph into a fuller graph that explicitly captures the rules' described semantics. The approach is currently being explored in order to support descriptions that follow the generic Semantic Web Rule Language. An experiment using the Fire-Brigade domain, a small-scale knowledge base, is adopted to illustrate the agent modeling method and the inferencing process.
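The rule-as-SPARQL-query idea can be illustrated with the standard RDFS entailment rules. The sketch below shows one rule's SPARQL CONSTRUCT form as a comment and computes the same graph expansion in plain Python over a toy Fire-Brigade-style triple set; the class names and the fixpoint loop are illustrative, not the authors' actual implementation:

```python
# rdfs9 (subclass inheritance) phrased as a SPARQL CONSTRUCT rule:
#   CONSTRUCT { ?x rdf:type ?d }
#   WHERE     { ?x rdf:type ?c . ?c rdfs:subClassOf ?d }
# The fixpoint below applies rdfs9 and rdfs11 (subClassOf transitivity)
# to a set of triples until no new assertions appear.

RDF_TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def rdfs_closure(triples):
    """Expand an RDF graph with the assertions entailed by rdfs9/rdfs11."""
    graph = set(triples)
    while True:
        sub = {(s, o) for s, p, o in graph if p == SUBCLASS}
        new = set()
        for s, p, o in graph:
            if p == RDF_TYPE:
                new |= {(s, RDF_TYPE, d) for c, d in sub if c == o}   # rdfs9
            elif p == SUBCLASS:
                new |= {(s, SUBCLASS, d) for c, d in sub if c == o}   # rdfs11
        if new <= graph:   # fixpoint reached: graph fully expanded
            return graph
        graph |= new
```

For instance, given (truck, rdf:type, FireEngine) plus FireEngine subClassOf Vehicle subClassOf Asset, the closure makes (truck, rdf:type, Asset) explicit.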
Abstract:
Master's degree in Informatics Engineering (Mestrado em Engenharia Informática)
Abstract:
The first and second authors would like to thank the support of the PhD grants with references SFRH/BD/28817/2006 and SFRH/PROTEC/49517/2009, respectively, from Fundação para a Ciência e Tecnologia (FCT). This work was partially done in the scope of the project “Methodologies to Analyze Organs from Complex Medical Images – Applications to Female Pelvic Cavity”, with reference PTDC/EEA-CRO/103320/2008, financially supported by FCT.
Abstract:
In this paper we discuss how the inclusion of semantic functionalities in a Learning Objects Repository allows a better characterization of the enclosed learning materials and improves their retrieval through the adoption of query expansion strategies. We therefore considered the use of ontologies to automatically suggest additional concepts when users are filling in metadata fields, and to add new terms to the ones initially provided when users specify the keywords of interest in a query. Since we deal with different domain areas and considered the development of many different ontologies impractical, we adopted strategies for reusing existing ontologies in order to obtain the knowledge necessary for our institutional repository. In this paper we review the area of knowledge reuse and discuss our approach.
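The query expansion strategy described above can be sketched as a lookup of related concepts in an ontology-derived map, appending them to the user's original keywords. A minimal illustration; the flat-dictionary "ontology" and the function name are hypothetical stand-ins, not the repository's actual implementation:

```python
def expand_query(keywords, ontology):
    """Expand user keywords with related ontology concepts (sketch).

    `ontology` maps a lower-cased term to concepts related to it
    (synonyms, broader/narrower terms); this flat dictionary is a
    hypothetical stand-in for a real ontology traversal.
    """
    expanded = list(keywords)
    for term in keywords:
        for related in ontology.get(term.lower(), []):
            if related not in expanded:
                expanded.append(related)   # add each new related concept once
    return expanded
```

The same lookup can drive both use cases the abstract mentions: suggesting extra metadata concepts at deposit time and enlarging the term set of a search query.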