921 results for semantic segmentation
Abstract:
There is something peculiar about aesthetic testimony. It seems more difficult to gain knowledge of aesthetic properties based solely upon testimony than it is in the case of other types of property. In this paper, I argue that we can provide an adequate explanation at the level of the semantics of aesthetic language, without defending any substantive thesis in epistemology or about aesthetic value/judgement. If aesthetic predicates are given a non-invariantist semantics, we can explain the supposed peculiar difficulty with aesthetic testimony.
Abstract:
Comprehension deficits are common in stroke aphasia, including in cases with (i) semantic aphasia (SA), characterised by poor executive control of semantic processing across verbal and nonverbal modalities, and (ii) Wernicke’s aphasia (WA), associated with poor auditory-verbal comprehension and repetition, plus fluent speech with jargon. However, the varieties of these comprehension problems, and their underlying causes, are not well-understood. Both patient groups exhibit some type of semantic ‘access’ deficit, as opposed to the ‘storage’ deficits observed in semantic dementia. Nevertheless, existing descriptions suggest these patients might have different varieties of ‘access’ impairment – related to difficulty resolving competition (in SA) vs. initial activation of concepts from sensory inputs (in WA). We used a case-series design to compare WA and SA patients on Warrington’s paradigmatic assessment of semantic ‘access’ deficits. In these verbal and non-verbal matching tasks, a small set of semantically-related items are repeatedly presented over several cycles so that the target on one trial becomes a distractor on another (building up interference and eliciting semantic ‘blocking’ effects). WA and SA patients were distinguished according to lesion location in the temporal cortex, but in each group, some individuals had additional prefrontal damage. Both of these aspects of lesion variability – one that mapped onto classical ‘syndromes’ and one that did not – predicted aspects of the semantic ‘access’ deficit. Both SA and WA cases showed multimodal semantic impairment, although as expected the WA group showed greater deficits on auditory-verbal than picture judgements. 
Distribution of damage in the temporal lobe was crucial for predicting the initially beneficial effects of stimulus repetition: WA cases showed initial improvement with repetition of words and pictures, while in SA, semantic access was initially good but declined in the face of competition from previous targets. Prefrontal damage predicted the harmful effects of repetition: the ability to re-select both word and picture targets in the face of mounting competition was linked to left prefrontal damage in both groups. Therefore, SA and WA patients have partially distinct impairment of semantic ‘access’ but, across these syndromes, prefrontal lesions produce declining comprehension with repetition in both verbal and non-verbal tasks.
Abstract:
In this paper we present a novel approach to detecting meetings between people. The proposed approach works by translating people's behaviour from trajectory information into semantic terms. Given a semantic model of meeting behaviour, event detection is performed in the semantic domain. The model is learnt using a soft-computing clustering algorithm that combines trajectory information with motion semantic terms. A stable representation can be obtained from a series of examples. Results obtained on a series of videos with different types of meeting situations show that the proposed approach can learn a generic model that can be applied effectively to the recognition of meeting behaviour.
Abstract:
In this paper we propose an innovative approach for behaviour recognition, from a multicamera environment, based on translating video activity into semantics. First, we fuse tracks from individual cameras through clustering employing soft computing techniques. Then, we introduce a higher-level module able to translate fused tracks into semantic information. With our proposed approach, we address the challenge set in PETS 2014 on recognising behaviours of interest around a parked vehicle, namely the abnormal behaviour of someone walking around the vehicle.
Abstract:
Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, sclera segmentation has not been extensively researched as a separate topic, but mainly summarized as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at pixel level. Exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier. At the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method ranked first in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a corresponding recall of 94.56%.
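The two-stage scheme described above could be sketched roughly as follows, using synthetic per-pixel features, logistic regressions standing in for the stage-1 "simple classifiers", and a small scikit-learn MLP as the stage-2 neural network. The data, classifier choices, and parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic per-pixel features in three hypothetical colour spaces
n = 600
y = rng.integers(0, 2, n)  # 1 = sclera pixel, 0 = background
feats = [y[:, None] * 0.8 + rng.normal(0, 0.5, (n, 3)) for _ in range(3)]

# Stage 1: one simple classifier per colour space, each emitting a probability
stage1 = [LogisticRegression().fit(f, y) for f in feats]
probs = np.column_stack([c.predict_proba(f)[:, 1]
                         for c, f in zip(stage1, feats)])

# Stage 2: a small neural network operating on the stage-1 probability space
stage2 = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(probs, y)
pred = stage2.predict(probs)
print("training accuracy:", (pred == y).mean())
```

In a real pixel-level pipeline, stage 1 would run over every pixel of the image and stage 2 would consume the resulting per-pixel probability vectors.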
Abstract:
While a multitude of motion segmentation algorithms have been presented in the literature, there has not been an objective assessment of different approaches to fusing their outputs. This paper investigates the application of 4 different fusion schemes to the outputs of 3 probabilistic pixel-level segmentation algorithms. We performed extensive experiments using 6 challenge categories from the changedetection.net dataset, demonstrating that, in general, a simple majority vote is more effective than more complex fusion schemes.
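The majority-vote fusion found most effective above can be illustrated with a minimal NumPy sketch. The three probabilistic foreground masks and the 0.5 binarisation threshold are assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical probabilistic foreground masks (H x W), e.g. the
# outputs of three different pixel-level segmentation algorithms
h, w = 4, 5
masks = rng.random((3, h, w))

# Majority vote: binarise each mask, then mark a pixel as foreground
# where at least 2 of the 3 algorithms agree
votes = (masks > 0.5).sum(axis=0)
fused = votes >= 2

print(fused.astype(int))
```

The same pattern generalises to any odd number of masks by raising the vote threshold accordingly.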
Abstract:
This paper investigates the potential of fusion at the normalisation/segmentation level, prior to feature extraction. While there are several biometric fusion methods at data/feature level, score level and rank/decision level combining raw biometric signals, scores, or ranks/decisions, this type of fusion is still in its infancy. However, the increasing demand for more relaxed and less invasive recording conditions, especially for on-the-move iris recognition, suggests further investigating fusion at this very low level. This paper focuses on multi-segmentation fusion for iris biometric systems, investigating the benefit of combining the segmentation results of multiple normalisation algorithms, using four methods from two different public iris toolkits (USIT, OSIRIS) on the public CASIA and IITD iris datasets. Evaluations based on recognition accuracy and ground-truth segmentation data indicate high sensitivity to the type of errors made by segmentation algorithms.
Abstract:
Complete information dispositional metasemantics says that our expressions get their meaning in virtue of what our dispositions to apply those terms would be given complete information. The view has recently been advanced and argued to have a number of attractive features. I argue that it threatens to make the meanings of our words indeterminate and fails to deliver what made a dispositional view attractive in the first place.
Abstract:
We present an account of semantic representation that focuses on distinct types of information from which word meanings can be learned. In particular, we argue that there are at least two major types of information from which we learn word meanings. The first is what we call experiential information. This is data derived both from our sensory-motor interactions with the outside world and from our experience of our own inner states, particularly our emotions. The second type of information is language-based. In particular, it is derived from the general linguistic context in which words appear. The paper spells out this proposal, summarizes research supporting this view and presents new predictions emerging from this framework.
Abstract:
This study uses the Deese-Roediger-McDermott paradigm to investigate how deaf children with cochlear implants organize their semantic networks as compared to their hearing age-mates.
Abstract:
This paper is about the use of natural language to communicate with computers. Most research pursuing this goal has considered only requests expressed in English. A way to facilitate the use of several languages in natural language systems is to use an interlingua. An interlingua is an intermediary representation for natural language information that can be processed by machines. We propose to convert natural language requests into an interlingua [universal networking language (UNL)] and to execute these requests using software components. In order to achieve this goal, we propose OntoMap, an ontology-based architecture to perform the semantic mapping between UNL sentences and software components. OntoMap also performs component search and retrieval based on semantic information formalized in ontologies and rules.
Abstract:
Robotic mapping is the process of automatically constructing an environment representation using mobile robots. We address the problem of semantic mapping, which consists of using mobile robots to create maps that represent not only metric occupancy but also other properties of the environment. Specifically, we develop techniques to build maps that represent activity and navigability of the environment. Our approach to semantic mapping is to combine machine learning techniques with standard mapping algorithms. Supervised learning methods are used to automatically associate properties of space with the desired classification patterns. We present two methods, the first based on hidden Markov models and the second on support vector machines. Both approaches have been tested and experimentally validated in two problem domains: terrain mapping and activity-based mapping.
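As a rough illustration of the supervised-learning side of this approach, the sketch below trains a scikit-learn SVM to label synthetic grid cells as navigable or not. The two cell features and the toy labelling rule are invented for illustration, not taken from the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical map cells with two simple features per cell
# (e.g. mean height and surface roughness from range data)
n = 400
X = rng.normal(0, 1, (n, 2))
# Toy ground-truth rule: 1 = not navigable, 0 = navigable
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Fit an RBF-kernel SVM mapping cell features to semantic labels
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```

In a full semantic-mapping pipeline, the learned classifier would be applied to each cell of a map produced by a standard mapping algorithm.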
Abstract:
Chaotic synchronization has been discovered to be an important property of neural activities, which in turn has encouraged many researchers to develop chaotic neural networks for scene and data analysis. In this paper, we study the synchronization role of coupled chaotic oscillators in networks of general topology. Specifically, a rigorous proof is presented to show that a large number of oscillators with arbitrary geometrical connections can be synchronized by providing a sufficiently strong coupling strength. Moreover, the results presented in this paper not only are valid to a wide class of chaotic oscillators, but also cover the parameter mismatch case. Finally, we show how the obtained result can be applied to construct an oscillatory network for scene segmentation.
Abstract:
Synchronization and chaos play important roles in neural activities and have been applied in oscillatory correlation modeling for scene and data analysis. Although it is an extensively studied topic, there are still few results regarding synchrony in locally coupled systems. In this paper we give a rigorous proof to show that large numbers of coupled chaotic oscillators with parameter mismatch in a 2D lattice can be synchronized by providing a sufficiently large coupling strength. We demonstrate how the obtained result can be applied to construct an oscillatory network for scene segmentation. (C) 2007 Elsevier B.V. All rights reserved.
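The synchronization-by-coupling idea in the two abstracts above can be illustrated with a toy coupled map lattice: chaotic logistic maps on a small periodic 2D grid, diffusively coupled to their four nearest neighbours. The logistic map, the 3x3 grid, and the coupling strength are illustrative assumptions, not the oscillator model analysed in the papers:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    # Logistic map in its chaotic regime
    return 4.0 * x * (1.0 - x)

# Small 2D lattice of chaotic maps with periodic boundary conditions;
# eps is the coupling strength to the 4 nearest neighbours
x = rng.random((3, 3))
eps = 0.85

for _ in range(500):
    fx = f(x)
    neigh = (np.roll(fx, 1, 0) + np.roll(fx, -1, 0) +
             np.roll(fx, 1, 1) + np.roll(fx, -1, 1)) / 4.0
    x = (1.0 - eps) * fx + eps * neigh

spread = x.max() - x.min()
print("spread across the lattice:", spread)
```

With a sufficiently strong coupling, the spread across the lattice collapses towards zero while each unit still follows a chaotic trajectory, which is the qualitative phenomenon the papers prove rigorously for their oscillator networks.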