143 results for facial images
Abstract:
The along-track stereo images of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor, with 15 m resolution, were used to generate a Digital Elevation Model (DEM) over an area of low elevation near Mean Sea Level (MSL) in Johor, Malaysia. The absolute DEM was generated using the Rational Polynomial Coefficient (RPC) model implemented in ENVI 4.8 software. To generate the absolute DEM, 60 Ground Control Points (GCPs) with vertical accuracy of less than 10 meters were extracted from a topographic map of the study area. The assessment was carried out on the uncorrected and corrected DEMs using dozens of Independent Check Points (ICPs). The uncorrected DEM showed an RMSEz of ±26.43 meters, which decreased to ±16.49 meters for the corrected DEM after post-processing. Overall, the corrected DEM from the ASTER stereo images met expectations.
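The reported accuracy figures correspond to a standard vertical root-mean-square error over check points. A minimal sketch of that computation; the function name and the sample values are illustrative, not the study's data:

```python
import numpy as np

def rmse_z(dem_elevations, icp_elevations):
    """Vertical root-mean-square error (RMSEz) between DEM-derived
    elevations and independent check point (ICP) elevations."""
    diff = np.asarray(dem_elevations, dtype=float) - np.asarray(icp_elevations, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative values only (not the study's data):
dem = [12.0, 8.5, 30.1, 4.2]
icp = [10.0, 9.0, 28.0, 5.0]
print(rmse_z(dem, icp))
```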
Abstract:
Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. Three matching tasks (facial emotion matching, facial identity matching, and butterfly wing matching) were developed to include stimuli of a similar level of discriminability and were equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5–15 years and a new group of 24 young adults completed the three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, the development of facial emotion discrimination ability lagged behind that of facial identity discrimination.
Abstract:
Schizophrenia patients have been shown to be compromised in their ability to recognize facial emotion. This deficit has been shown to be related to the severity of negative symptoms. However, to date, most studies have used static rather than dynamic depictions of faces. Nineteen patients with schizophrenia were compared with seventeen controls on two tasks: the first involving the discrimination of facial identity, emotion, and butterfly wings; the second testing emotion recognition using both static and dynamic stimuli. In the first task, the patients performed more poorly than controls for emotion discrimination only, confirming a specific deficit in facial emotion recognition. In the second task, patients performed more poorly in both static and dynamic facial emotion processing. An interesting pattern of associations suggestive of a possible double dissociation emerged in relation to correlations with symptom ratings: high negative symptom ratings were associated with poorer recognition of static displays of emotion, whereas high positive symptom ratings were associated with poorer recognition of dynamic displays of emotion. However, while the strength of associations between negative symptom ratings and accuracy during static and dynamic facial emotion processing was significantly different, those between positive symptom ratings and task performance were not. The results confirm a facial emotion-processing deficit in schizophrenia using more ecologically valid dynamic expressions of emotion. The pattern of findings may reflect differential patterns of cortical dysfunction associated with negative and positive symptoms of schizophrenia, in the context of differential neural mechanisms for the processing of static and dynamic displays of facial emotion.
Abstract:
While the neural regions associated with facial identity recognition are considered to be well defined, the neural correlates of non-moving and moving images in facial emotion processing are less clear. This study examined brain electrical activity changes in 26 participants (14 males, M = 21.64, SD = 3.99; 12 females, M = 24.42, SD = 4.36) during a passive face viewing task, a scrambled face task, and separate emotion and gender face discrimination tasks. The steady-state visual evoked potential (SSVEP) was recorded from 64 electrode sites. Consistent with previous research, face-related activity was evident at scalp regions over the parieto-temporal region approximately 170 ms after stimulus presentation. Results also identified different SSVEP spatio-temporal changes associated with the processing of static and dynamic facial emotions with respect to gender, with static stimuli predominantly associated with an increase in inhibitory processing within the frontal region. Dynamic facial emotions were associated with changes in the SSVEP response within the temporal region, which are proposed to index inhibitory processing. It is suggested that static images represent non-canonical stimuli which are processed via different mechanisms to their more ecologically valid dynamic counterparts.
Abstract:
Facial identity and facial expression matching tasks were completed by 5–12-year-old children and adults using stimuli extracted from the same set of normalized faces. Configural and feature processing were examined using speed and accuracy of responding and facial feature selection, respectively. Facial identity matching was slower than facial expression matching for all age groups. Large age effects were found on both speed and accuracy of responding and on feature use in both identity and expression matching tasks. An eye-region preference was found on the facial identity task and a mouth-region preference on the facial expression task. Use of mouth-region information for facial expression matching increased with age, whereas use of eye-region information for facial identity matching peaked early. The feature use information suggests that the specific use of primary facial features to arrive at identity and emotion matching judgments matures across middle childhood.
Abstract:
This paper presents an online, unsupervised training algorithm enabling vision-based place recognition across a wide range of changing environmental conditions such as those caused by weather, seasons, and day-night cycles. The technique applies principal component analysis to distinguish between aspects of a location’s appearance that are condition-dependent and those that are condition-invariant. Removing the dimensions associated with environmental conditions produces condition-invariant images that can be used by appearance-based place recognition methods. This approach has a unique benefit – it requires training images from only one type of environmental condition, unlike existing data-driven methods that require training images with labelled frame correspondences from two or more environmental conditions. The method is applied to two benchmark variable condition datasets. Performance is equivalent or superior to the current state of the art despite the lesser training requirements, and is demonstrated to generalise to previously unseen locations.
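The core idea, removing the leading principal components as condition-dependent appearance, can be sketched as below. The function name, the SVD-based PCA, and the choice of `k` removed components are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def condition_invariant(images, k):
    """Remove the top-k principal components (assumed here to capture
    condition-dependent appearance) from a stack of vectorised images.
    `images` is an (n_images, n_pixels) array from a single condition."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD; rows of Vt are principal directions in pixel space
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k]                       # assumed condition-dependent subspace
    # project out those dimensions, then restore the mean image
    return Xc - Xc @ V.T @ V + mean
```

With `k = 0` the images are returned unchanged; larger `k` removes more of the dominant appearance variation.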
Abstract:
In studies of germ cell transplantation, measuring tubule diameters and counting cells from different populations using antibodies as markers are very important. Manual measurement of tubule sizes and cell counts is tedious and laborious work. In this paper, we propose a new boundary-weighting-based tubule detection method. We first enhance the linear features of the input image and detect the approximate centers of tubules. Next, a boundary weighting transform is applied to the polar-transformed image of each tubule region, and a circular shortest path is used for boundary detection. Then, ellipse fitting is carried out for tubule selection and measurement. The algorithm has been tested on a dataset consisting of 20 images, each containing about 20 tubules. Experiments show that the detection results of our algorithm are very close to those obtained manually. © 2013 IEEE.
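The circular-shortest-path boundary search operates on a polar transform of each tubule region. A minimal nearest-neighbour polar resampling sketch; the function name and sampling scheme are illustrative assumptions, not the authors' code:

```python
import numpy as np

def polar_transform(img, center, n_radii, n_angles):
    """Resample an image into polar coordinates around `center`
    (nearest-neighbour sampling; rows = radius, columns = angle)."""
    cy, cx = center
    radii = np.arange(n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    # map each (radius, angle) cell back to the nearest source pixel
    ys = np.clip(np.round(cy + r * np.sin(a)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(a)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]
```

In this representation a roughly circular tubule boundary becomes a roughly horizontal curve, which is what makes a shortest-path search across the angle axis practical.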
Abstract:
Intensity Modulated Radiotherapy (IMRT) is a well-established technique for delivering highly conformal radiation dose distributions. The complexity of the delivery techniques and the high dose gradients around the target volume make verification of the patient treatment crucial to its success. Conventional treatment protocols involve imaging the patient prior to treatment, comparing the patient set-up to the planned set-up, and then making any necessary shifts in the patient position to ensure target volume coverage. This paper presents a method for calibrating electronic portal imaging device (EPID) images acquired during IMRT delivery so that they can be used for verifying the patient set-up.
Abstract:
With the increasing availability of high-quality digital cameras that are easily operated by non-professional photographers, the use of digital images to assess endpoints in clinical research on skin lesions has gained growing acceptance. However, rigorous protocols and descriptions of experience with digital image collection and assessment are not readily available, particularly for research conducted in remote settings. We describe the development and evaluation of a protocol for digital image collection by non-professional photographers in a remote-setting research trial, together with a novel methodology for the assessment of clinical outcomes by an expert panel blinded to treatment allocation.
Abstract:
The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison between these two results and the first scan of the gel showed that the averaged and zero-scan images provided better contrast, higher contrast-to-noise and higher signal-to-noise than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated, compared to the zero-scan result and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small, additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means to verify the dosimetric accuracy of gel images derived from averaged x-ray CT data.
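The zero-scan extrapolation can be sketched as a per-pixel linear fit of Hounsfield units against scan number, evaluated at zero scans. This is a hedged illustration of the general idea, not the study's exact procedure:

```python
import numpy as np

def zero_scan_image(scans):
    """Per-pixel linear fit of Hounsfield units against scan number,
    extrapolated back to scan zero ('zero-scan'), plus the fitted
    gradients. `scans` is an (n_scans, H, W) array of repeated CT images."""
    scans = np.asarray(scans, dtype=float)
    n = scans.shape[0]
    x = np.arange(1, n + 1)
    flat = scans.reshape(n, -1)          # (n_scans, H*W)
    # least-squares line HU = slope * scan_index + intercept, per pixel
    A = np.vstack([x, np.ones(n)]).T     # (n_scans, 2) design matrix
    coeffs, *_ = np.linalg.lstsq(A, flat, rcond=None)
    slopes, intercepts = coeffs          # each of shape (H*W,)
    zero = intercepts.reshape(scans.shape[1:])   # value at scan index 0
    return zero, slopes.reshape(scans.shape[1:])
```

The returned gradients correspond to the per-pixel slopes whose distribution around zero is used in the abstract as evidence that the averaged image was not dose-inflated.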
Abstract:
People with schizophrenia perform poorly when recognising facial expressions of emotion, particularly negative emotions such as fear. This finding has been taken as evidence of a “negative emotion specific deficit”, putatively associated with a dysfunction in the limbic system, particularly the amygdala. An alternative explanation is that greater difficulty in recognising negative emotions may reflect a priori differences in task difficulty. The present study uses a differential deficit design to test the above argument. Facial emotion recognition accuracy for seven emotion categories was compared across three groups. Eighteen schizophrenia patients and one group of healthy age- and gender-matched controls viewed identical sets of stimuli. A second group of 18 age- and gender-matched controls viewed a degraded version of the same stimuli. The level of stimulus degradation was chosen so as to equate overall level of accuracy to the schizophrenia patients. Both the schizophrenia group and the degraded image control group showed reduced overall recognition accuracy and reduced recognition accuracy for fearful and sad facial stimuli compared with the intact-image control group. There were no differences in recognition accuracy for any emotion category between the schizophrenia group and the degraded image control group. These findings argue against a negative emotion specific deficit in schizophrenia.
Abstract:
Empirical evidence suggests impaired facial emotion recognition in schizophrenia. However, the nature of this deficit is the subject of ongoing research. The current study tested the hypothesis that a generalized deficit at an early stage of face-specific processing (i.e. putatively subserved by the fusiform gyrus) accounts for impaired facial emotion recognition in schizophrenia as opposed to the Negative Emotion-specific Deficit Model, which suggests impaired facial information processing at subsequent stages. Event-related potentials (ERPs) were recorded from 11 schizophrenia patients and 15 matched controls while performing a gender discrimination and a facial emotion recognition task. Significant reduction of the face-specific vertex positive potential (VPP) at a peak latency of 165 ms was confirmed in schizophrenia subjects whereas their early visual processing, as indexed by P1, was found to be intact. Attenuated VPP was found to correlate with subsequent P3 amplitude reduction and to predict accuracy when performing a facial emotion discrimination task. A subset of ten schizophrenia patients and ten matched healthy control subjects also performed similar tasks in the magnetic resonance imaging scanner. Patients showed reduced blood oxygenation level-dependent (BOLD) activation in the fusiform, inferior frontal, middle temporal and middle occipital gyrus as well as in the amygdala. Correlation analyses revealed that VPP and the subsequent P3a ERP components predict fusiform gyrus BOLD activation. These results suggest that problems in facial affect recognition in schizophrenia may represent flow-on effects of a generalized deficit in early visual processing.
Abstract:
Patients with a number of psychiatric and neuropathological conditions demonstrate problems in recognising facial expressions of emotion. Research indicating that patients with schizophrenia perform more poorly in the recognition of negative-valence facial stimuli than positive-valence stimuli has been interpreted as evidence of a negative emotion specific deficit. An alternative explanation lies in the psychometric properties of the stimulus materials. This model suggests that the pattern of impairment observed in schizophrenia may reflect initial discrepancies in task difficulty between stimulus categories, which are not apparent in healthy subjects because of ceiling effects. This hypothesis is tested by examining the performance of healthy subjects in a facial emotion categorisation task with three levels of stimulus resolution. Results confirm the predictions of the model, showing that performance degrades differentially across emotion categories, with the greatest deterioration for negative-valence stimuli. In the light of these results, a possible methodology for detecting emotion-specific deficits in clinical samples is discussed.
Abstract:
Neuroimaging research has shown localised brain activation to different facial expressions. This, along with the finding that schizophrenia patients perform poorly in their recognition of negative emotions, has raised the suggestion that patients display an emotion specific impairment. We propose that this asymmetry in performance reflects task difficulty gradations, rather than aberrant processing in neural pathways subserving recognition of specific emotions. A neural network model is presented, which classifies facial expressions on the basis of measurements derived from human faces. After training, the network showed an accuracy pattern closely resembling that of healthy subjects. Lesioning of the network led to an overall decrease in the network’s discriminant capacity, with the greatest accuracy decrease to fear, disgust and anger stimuli. This implies that the differential pattern of impairment in schizophrenia patients can be explained without having to postulate impairment of specific processing modules for negative emotion recognition.
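The lesioning step can be illustrated as zeroing a random fraction of a network's connection weights. This is a hypothetical sketch of one common lesioning scheme, not the authors' implementation:

```python
import numpy as np

def lesion_weights(weights, fraction, rng=None):
    """Simulate a network 'lesion' by zeroing a random fraction of
    connection weights (illustrative; the masking scheme is an
    assumption, not taken from the paper)."""
    rng = np.random.default_rng(rng)
    w = np.asarray(weights, dtype=float).copy()
    mask = rng.random(w.shape) < fraction   # True = connection removed
    w[mask] = 0.0
    return w
```

Re-evaluating classification accuracy per emotion category after such a lesion is what reveals whether negative-valence categories degrade fastest, as the abstract reports.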