854 results for Semantic annotations
Abstract:
Discussion tools in existing LEs offer few or no integrated means of analysing student learning. This paper not only proposes tools for integrating social network analytics, but also argues that key concepts within posts must be semantically tagged and tracked in order to make student learning in discussions visible. Using screenshots of existing LEs and UI mockups of semantically aware discussion tools, the paper makes the case for semantic markup as an element of next-generation LEs.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
The Semantic Annotation component is a software application that provides support for automated text classification, a process grounded in a cohesion-centered representation of discourse that facilitates topic extraction. The component enables the semantic meta-annotation of text resources, including automated classification, thus facilitating information retrieval within the RAGE ecosystem. It is available in the ReaderBench framework (http://readerbench.com/), which integrates advanced Natural Language Processing (NLP) techniques. The component makes use of Cohesion Network Analysis (CNA) to ensure an in-depth representation of discourse, useful for mining keywords and performing automated text categorization. Our component automatically classifies documents not only into the categories provided by the ACM Computing Classification System (http://dl.acm.org/ccs_flat.cfm), but also into the categories of a high-level serious-games categorization provisionally developed by RAGE. English and French are already covered by the provided web service, and the framework can be extended to support additional languages.
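A minimal sketch of how such a classification web service might be called over HTTP. The endpoint URL, request parameters, and response fields below are assumptions made for illustration; they are not the documented ReaderBench API.

```python
import requests

# Hypothetical endpoint and payload shape -- the actual ReaderBench service may differ.
SERVICE_URL = "http://readerbench.com/api/semantic-annotation"  # assumed URL

def classify_text(text: str, lang: str = "en") -> dict:
    """Send a text resource for automated categorization and return the
    predicted categories (e.g., ACM CCS and serious-games labels)."""
    response = requests.post(SERVICE_URL, json={"text": text, "language": lang})
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = classify_text("A serious game for teaching network security concepts.")
    print(result)  # assumed shape: {"categories": [...], "keywords": [...]}
```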
Abstract:
Objective
Pedestrian detection under video surveillance systems has long been a hot topic in computer vision research. These systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. Given its development in recent years, the visual attention mechanism has attracted increasing interest in object detection and tracking research, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: the static visual attention model and the motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection. The regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attentions are linearly combined using weights obtained from experiments to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via the motion features in the temporal domain. Building on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors. Filtering is applied to process the field of motion vectors. The saliency of motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
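A minimal sketch of the weighted linear combination described above. The weight values and the three input maps (bottom-up, skin-color top-down, motion) are placeholders; the paper determines its weights experimentally and builds the individual maps with its own models.

```python
import numpy as np

def combine_saliency(bottom_up: np.ndarray,
                     top_down: np.ndarray,
                     motion: np.ndarray,
                     w_bu: float = 0.4, w_td: float = 0.3, w_m: float = 0.3) -> np.ndarray:
    """Linearly combine bottom-up, top-down (skin-color), and motion saliency
    maps into a spatial-temporal saliency map. Weights are illustrative only."""
    def normalize(m: np.ndarray) -> np.ndarray:
        m = m.astype(np.float32)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    combined = (w_bu * normalize(bottom_up)
                + w_td * normalize(top_down)
                + w_m * normalize(motion))
    return normalize(combined)  # final saliency map in [0, 1]
```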
Result
Standard datasets and practical videos are selected for the experiments. The experiments are performed on a MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model demonstrates favorable robustness under various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. Our proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection. The proposed model achieves a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through focus-of-attention shifts. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.
Abstract:
Stimuli that cannot be perceived (i.e., that are subliminal) can still elicit neural responses in an observer, but can such stimuli influence behavior and higher-order cognition? Empirical evidence for such effects has periodically been accepted and rejected over the last six decades. Today, many psychologists seem to consider such effects well-established and recent studies have extended the power of subliminal processing to new limits. In this thesis, I examine whether this shift in zeitgeist is matched by a shift in evidential strength for the phenomenon. This thesis consists of three empirical studies involving more than 250 participants, a simulation study, and a quantitative review. The conclusion based on these efforts is that several methodological, statistical, and theoretical issues remain in studies of subliminal processing. These issues mean that claimed subliminal effects might be caused by occasional or weak percepts (given the experimenters’ own definitions of perception) and that it is still unclear what evidence there is for the cognitive processing of subliminal stimuli. New data are presented suggesting that even in conditions traditionally claimed as “subliminal”, occasional or weak percepts may in fact influence cognitive processing more strongly than do the physical stimuli, possibly leading to reversed priming effects. I also summarize and provide methodological, statistical, and theoretical recommendations that could benefit future research aspiring to provide solid evidence for subliminal cognitive processing.
Abstract:
Recent empirical studies of the neurological executive nature of reading in bilinguals differ in their evaluations of the degree of selective manifestation in lexical access, as implicated by data from early and late reading measures in the eye-tracking paradigm. Currently two scenarios are plausible: (1) lexical access in reading is fundamentally language non-selective, and top-down effects from semantic context can influence the degree of selectivity in lexical access; (2) cross-lingual lexical activation is actuated via bottom-up processes without being affected by top-down effects from sentence context. In an attempt to test these hypotheses empirically, this study analyzed reader-text events arising when cognate facilitation and semantic constraint interact in a 2 × 2 factorially designed experiment tracking the eye movements of 26 Swedish-English bilinguals reading in their L2. Stimulus conditions consisted of high- and low-constraint sentences embedded with either a cognate or a non-cognate control word. The results showed clear signs of cognate facilitation in both early and late reading measures and in both sentence conditions. This evidence in favour of the non-selective hypothesis indicates that the manifestation of non-selective lexical access in reading is not constrained by top-down effects from semantic context.
Abstract:
This research studies knowledge bases, with the aim of facilitating the collection, organization, and distribution of knowledge. The choice of subject is due to the ever-growing importance of this research area and the innovation it can bring to the Semantic Web. The YAGO knowledge base is analyzed: its state of the art, its applications, and the plans for its future development are described. The work was carried out by examining the publications on the topic and constitutes an Italian-language resource on the subject.
Abstract:
This study provides evidence for a Stroop-like interference effect in word recognition. Based on phonologic and semantic properties of simple words, participants who performed a same/different word-recognition task exhibited a significant response latency increase when word pairs (e.g., POLL, ROD) featured a comparison word (POLL) that was a homonym of a synonym (pole) of the target word (ROD). These results support a parallel-processing framework of lexical decision making, in which activation of the pathways to word recognition may occur at different levels automatically and in parallel. A subset of simple words that are also brand names was examined and exhibited this same interference. Implications for word recognition theory and practical implications for strategic marketing are discussed.
Abstract:
In this paper we use concepts from graph theory and cellular biology, represented as ontologies, to carry out semantic mining tasks on signaling pathway networks. Specifically, the paper describes the semantic enrichment of signaling pathway networks. A cell signaling network describes the basic cellular activities and their interactions. The main contribution of this paper lies in the signaling pathway research area: it proposes a new technique to analyze and understand how changes in these networks may affect the transmission and flow of information, producing diseases such as cancer and diabetes. Our approach is based on three concepts from graph theory (modularity, clustering, and centrality) frequently used in social network analysis. The approach consists of two phases: the first uses the graph theory concepts to determine the cellular groups in the network, which we call communities; the second uses ontologies for the semantic enrichment of the cellular communities. The graph-theoretic measures allow us to determine the set of cells that are close (for example, in a disease) and the main cells in each community. We analyze our approach in two cases: TGF-β and Alzheimer's disease.
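A minimal sketch, assuming the networkx library, of the graph-theoretic phase described above (modularity-based community detection plus centrality) on a toy interaction network. The node names and edges are hypothetical, and the ontology-based enrichment phase is not shown.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy signaling network; edges are hypothetical protein interactions.
G = nx.Graph()
G.add_edges_from([
    ("TGFB1", "TGFBR2"), ("TGFBR2", "TGFBR1"), ("TGFBR1", "SMAD2"),
    ("SMAD2", "SMAD4"), ("SMAD3", "SMAD4"), ("SMAD4", "MYC"),
])

# Phase 1a: modularity-based community detection ("cellular groups").
communities = greedy_modularity_communities(G)

# Phase 1b: centrality identifies the main node (hub) in each community.
centrality = nx.betweenness_centrality(G)

for i, community in enumerate(communities):
    hub = max(community, key=centrality.get)
    print(f"community {i}: {sorted(community)}, hub: {hub}")
```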
Abstract:
The number of connected devices collecting and distributing real-world information through various systems is expected to soar in the coming years. As the number of such connected devices grows, it becomes increasingly difficult to store and share all these new sources of information. Several context representation schemes try to standardize this information, but none of them has been widely adopted. In previous work we addressed this challenge; however, our solution had some drawbacks: poor semantic extraction and limited scalability. In this paper we discuss ways to efficiently deal with the diversity of representation schemes and propose a novel d-dimension organization model. Our evaluation shows that the d-dimension model improves scalability and semantic extraction.
Abstract:
In recent years the technological world has grown by incorporating billions of small sensing devices that collect and share real-world information. As the number of such devices grows, it becomes increasingly difficult to manage all these new information sources. There is no uniform way to share, process, and understand context information. In previous publications we discussed efficient ways to organize context information independently of structure and representation. However, our previous solution suffers from semantic sensitivity. In this paper we review semantic methods that can be used to minimize this issue and propose an unsupervised semantic similarity solution that combines distributional profiles with public web services. Our solution was evaluated against the Miller-Charles dataset, achieving a correlation of 0.6.
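A minimal sketch of the kind of evaluation described above: similarity scores derived from distributional (co-occurrence) profiles are correlated with human judgments. The word-pair ratings and system scores below are placeholders, not the Miller-Charles values or the paper's results.

```python
import numpy as np
from scipy.stats import pearsonr

def cosine_similarity(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Cosine similarity between two distributional (co-occurrence) profiles."""
    denom = np.linalg.norm(profile_a) * np.linalg.norm(profile_b)
    return float(profile_a @ profile_b / denom) if denom else 0.0

# Placeholder human ratings (Miller-Charles style) and placeholder system
# scores; a real evaluation would use the full set of word pairs.
human_ratings = [3.92, 3.84, 0.42, 1.10]
system_scores = [0.81, 0.77, 0.12, 0.35]

correlation, _ = pearsonr(human_ratings, system_scores)
print(f"Correlation with human judgments: {correlation:.2f}")
```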
Abstract:
In this thesis, we propose to infer pixel-level labelling in video by utilising only object category information, exploiting the intrinsic structure of video data. Our motivation is the observation that image-level labels are much easier to acquire than pixel-level labels, and it is natural to seek a link between image-level recognition and pixel-level classification in video data, which would allow learned recognition models to be transferred from one domain to the other. To this end, this thesis proposes two domain adaptation approaches that adapt a deep convolutional neural network (CNN) image recognition model trained on labelled image data to the target domain, exploiting both the semantic evidence learned by the CNN and the intrinsic structures of unlabelled video data. Our proposed approaches explicitly model and compensate for the domain shift from the source domain to the target domain, which in turn underpins a robust semantic object segmentation method for natural videos. We demonstrate the superior performance of our methods through extensive evaluations on challenging datasets, comparing with state-of-the-art methods.
Abstract:
The current study investigated the cognitive workload of sentence and clause wrap-up in younger and older readers. A large number of studies have demonstrated the presence of wrap-up effects, peaks in processing time at clause and sentence boundaries that some argue reflect attention to organizational and integrative semantic processes. However, the exact nature of these wrap-up effects is still not entirely clear, with some arguing that wrap-up is not related to processing difficulty, but rather is triggered by a low-level oculomotor response or the implicit monitoring of intonational contour. The notion that wrap-up effects are resource-demanding was directly tested by examining the degree to which sentence and clause wrap-up affects the parafoveal preview benefit. Older and younger adults read passages in which a target word N occurred in a sentence-internal, clause-final, or sentence-final position. A gaze-contingent boundary change paradigm was used in which, on some trials, a non-word preview of word N+1 was replaced by a target word once the eyes crossed an invisible boundary located between words N and N+1. All measures of reading time on word N were longer at clause and sentence boundaries than in the sentence-internal position. In the earliest measures of reading time, sentence and clause wrap-up showed evidence of reducing the magnitude of the preview benefit similarly for younger and older adults. However, this effect was moderated by age in gaze duration, such that older adults showed a complete reduction in the preview benefit in the sentence-final condition. Additionally, sentence and clause wrap-up were negatively associated with the preview benefit. Collectively, the findings from the current study suggest that wrap-up is cognitively demanding and may be less efficient with age, thus, resulting in a reduction of the parafoveal preview during normal reading.
Abstract:
Humans have a high ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes, and textures, and associating them with high-level meanings. In this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioural characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but they do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow high-level information extraction from images in the form of soft biometrics. The problem is approached in two ways: unsupervised and supervised learning. The first approach groups images by automatically learning feature extraction, combining convolution techniques, evolutionary computing, and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and classification processes; here, images are classified according to gender and clothing, divided into the upper and lower parts of the human body. The first approach, when tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for persons versus non-persons. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing, and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level image annotation. This opens possibilities for applications in diverse areas such as content-based image and video search and automatic video surveillance, reducing the human effort required for manual annotation and monitoring.
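A minimal sketch, using PyTorch as an assumed framework, of the kind of convolutional classifier described in the second approach (e.g., predicting gender from a raw person crop). The architecture, input size, and class count are illustrative placeholders, not the thesis's actual network.

```python
import torch
import torch.nn as nn

class SoftBiometricCNN(nn.Module):
    """Small CNN mapping a raw RGB crop of a person to soft-biometric classes
    (e.g., gender); layer sizes are illustrative only."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # assumes 64x64 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Usage: a single 64x64 RGB crop produces logits over the soft-biometric classes.
model = SoftBiometricCNN(num_classes=2)
logits = model(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```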