960 results for Semantic Repository
Abstract:
The objective of D6.1 is to make the Ecosystem software platform, with its underlying Software Repository, Digital Library and Media Archive, available to the degree that the RAGE project can start collecting content in the form of software assets and documents of various media types. This paper describes the current state of the Ecosystem as of month 12 of the project and documents its structure, individual components, integration strategies, and overall approach. The deliverable itself is the deployment of the described components, which is now available to collect and curate content. Whilst this version is not yet feature complete, full realization is expected within the next few months. Following this development, WP6 will continue to add features driven by the business models to be defined by WP7 later in the project.
Abstract:
The Semantic Annotation component is a software application that provides support for automated text classification, a process grounded in a cohesion-centered representation of discourse that facilitates topic extraction. The component enables the semantic meta-annotation of text resources, including automated classification, thus facilitating information retrieval within the RAGE ecosystem. It is available in the ReaderBench framework (http://readerbench.com/), which integrates advanced Natural Language Processing (NLP) techniques. The component makes use of Cohesion Network Analysis (CNA) to ensure an in-depth representation of discourse, useful for mining keywords and performing automated text categorization. Our component automatically classifies documents into the categories provided by the ACM Computing Classification System (http://dl.acm.org/ccs_flat.cfm), as well as into the categories of a high-level serious games categorization provisionally developed by RAGE. English and French are already covered by the provided web service, and the framework can be extended to support additional languages.
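To make the workflow concrete, here is a minimal sketch of how a client might call such a classification web service. The endpoint URL, payload fields, and response shape are illustrative assumptions, not ReaderBench's documented API.

```python
# Hypothetical client for a text-categorization web service of this kind.
# The URL, payload fields, and response shape are assumptions for the
# example, not ReaderBench's documented API.
import requests

payload = {
    "text": "A serious game for training emergency-response coordination.",
    "language": "en",       # the service covers English and French
    "taxonomy": "ACM-CCS",  # or the provisional RAGE serious-games categories
}
resp = requests.post("https://readerbench.example/api/classify",  # placeholder URL
                     json=payload, timeout=30)
resp.raise_for_status()
for category in resp.json().get("categories", []):
    print(category)
```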
Abstract:
Objective
Pedestrian detection in video surveillance systems has long been a hot topic in computer vision research. These systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. Given its development in recent years, the visual attention mechanism has attracted increasing interest in object detection and tracking research, and previous studies have achieved substantial progress and breakthroughs. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: the static visual attention model and the motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features to make the visual saliency map suitable for pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection. The regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attentions are linearly combined using weights obtained from experiments to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via the motion features in the temporal domain. Based on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors. Filtering is applied to process the field of motion vectors. The saliency of motion vectors can be evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
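As a rough illustration of the described combination, the sketch below fuses a precomputed bottom-up saliency map with a skin-color top-down cue and a motion cue derived from frame differencing gated by optical flow. The combination weights and the skin-color bounds are illustrative assumptions, not the values used in the paper.

```python
# Sketch of a spatial-temporal saliency combination in the spirit of the
# described model. Weights and skin-colour bounds are illustrative only.
import cv2
import numpy as np

def skin_map(frame_bgr):
    """Top-down cue: rough skin-colour likelihood in YCrCb space."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # heuristic bounds
    return mask.astype(np.float32) / 255.0

def motion_map(prev_gray, gray):
    """Temporal cue: frame difference gated by optical-flow magnitude."""
    diff = cv2.absdiff(prev_gray, gray).astype(np.float32) / 255.0
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mag = mag / (mag.max() + 1e-6)
    return cv2.GaussianBlur(diff * mag, (9, 9), 0)  # filter the motion field

def spatiotemporal_saliency(prev_bgr, frame_bgr, bottom_up,
                            w_bu=0.4, w_td=0.3, w_motion=0.3):
    """Linear combination of bottom-up, top-down (skin) and motion cues.
    bottom_up is an externally computed Itti-style map scaled to [0, 1]."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    static = w_bu * bottom_up + w_td * skin_map(frame_bgr)
    return static + w_motion * motion_map(prev_gray, gray)
```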
Result
Standard datasets and practical videos are selected for the experiments. The experiments are performed on a MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model demonstrates favorable robustness under various scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. Our proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection. The proposed model achieves a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through focus-of-attention shifts. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.
Abstract:
Repositories containing high quality human biospecimens linked with robust and relevant clinical and pathological information are required for the discovery and validation of biomarkers for disease diagnosis, progression and response to treatment. Current molecular based discovery projects using either low or high throughput technologies rely heavily on ready access to such sample collections. It is imperative that modern biobanks align with molecular diagnostic pathology practices not only to provide the type of samples needed for discovery projects but also to ensure requirements for ongoing sample collections and the future needs of researchers are adequately addressed. Biobanks within comprehensive molecular pathology programmes are perfectly positioned to offer more than just tumour derived biospecimens; for example, they have the ability to facilitate researchers gaining access to sample metadata such as digitised scans of tissue samples annotated prior to macrodissection for molecular diagnostics or pseudoanonymised clinical outcome data or research results retrieved from other users utilising the same or overlapping cohorts of samples. Furthermore, biobanks can work with molecular diagnostic laboratories to develop standardized methodologies for the acquisition and storage of samples required for new approaches to research such as ‘liquid biopsies’ which will ultimately feed into the test validations required in large prospective clinical studies in order to implement liquid biopsy approaches for routine clinical practice. We draw on our experience in Northern Ireland to discuss how this harmonised approach of biobanks working synergistically with molecular pathology programmes is key for the future success of precision medicine.
Abstract:
Stimuli that cannot be perceived (i.e., that are subliminal) can still elicit neural responses in an observer, but can such stimuli influence behavior and higher-order cognition? Empirical evidence for such effects has periodically been accepted and rejected over the last six decades. Today, many psychologists seem to consider such effects well-established and recent studies have extended the power of subliminal processing to new limits. In this thesis, I examine whether this shift in zeitgeist is matched by a shift in evidential strength for the phenomenon. This thesis consists of three empirical studies involving more than 250 participants, a simulation study, and a quantitative review. The conclusion based on these efforts is that several methodological, statistical, and theoretical issues remain in studies of subliminal processing. These issues mean that claimed subliminal effects might be caused by occasional or weak percepts (given the experimenters’ own definitions of perception) and that it is still unclear what evidence there is for the cognitive processing of subliminal stimuli. New data are presented suggesting that even in conditions traditionally claimed as “subliminal”, occasional or weak percepts may in fact influence cognitive processing more strongly than do the physical stimuli, possibly leading to reversed priming effects. I also summarize and provide methodological, statistical, and theoretical recommendations that could benefit future research aspiring to provide solid evidence for subliminal cognitive processing.
Abstract:
Recent empirical studies about the neurological executive nature of reading in bilinguals differ in their evaluations of the degree of selective manifestation in lexical access, as implicated by data from early and late reading measures in the eye-tracking paradigm. Currently, two scenarios are plausible: (1) lexical access in reading is fundamentally language non-selective, and top-down effects from semantic context can influence the degree of selectivity in lexical access; (2) cross-lingual lexical activation is actuated via bottom-up processes without being affected by top-down effects from sentence context. In an attempt to test these hypotheses empirically, this study analyzed reader-text events arising when cognate facilitation and semantic constraint interact in a 2×2 factorially designed experiment tracking the eye movements of 26 Swedish-English bilinguals reading in their L2. Stimulus conditions consisted of high- and low-constraint sentences embedded with either a cognate or a non-cognate control word. The results showed clear signs of cognate facilitation in both early and late reading measures and in both sentence conditions. This evidence in favour of the non-selective hypothesis indicates that the manifestation of non-selective lexical access in reading is not constrained by top-down effects from semantic context.
Abstract:
Resources created at University of Southampton for the module GIS for Environmental Management (GEM)
Abstract:
This research studies knowledge bases, which aim to facilitate the collection, organization and distribution of knowledge. The topic was chosen because of the ever-growing importance of this research area and the innovation it can bring to the Semantic Web. The YAGO knowledge base is analyzed: its state of the art, its applications and the plans for future development are described. The work was carried out by examining the publications on the topic and constitutes an Italian-language resource on the subject.
Abstract:
This study provides evidence for a Stroop-like interference effect in word recognition. Based on phonologic and semantic properties of simple words, participants who performed a same/different word-recognition task exhibited a significant response latency increase when word pairs (e.g., POLL, ROD) featured a comparison word (POLL) that was a homonym of a synonym (pole) of the target word (ROD). These results support a parallel-processing framework of lexical decision making, in which activation of the pathways to word recognition may occur at different levels automatically and in parallel. A subset of simple words that are also brand names was examined and exhibited this same interference. Implications for word recognition theory and practical implications for strategic marketing are discussed.
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users, in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act without an explicit request. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last 6 years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to the users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure, within the Rover framework, and implemented several novel approaches and algorithms that employ various contextual features and state of the art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on HTN planning. Together, these works provide a major push in the direction of proactive context-aware computing.
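As an illustration of the tensor-factorization idea mentioned above, the sketch below fits a CP-style decomposition to observed (user, item, context) ratings with stochastic gradient descent. The rank, learning rate, regularization, and data layout are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal CP-style tensor factorization over observed (user, item, context)
# ratings, fitted with SGD. Hyperparameters are illustrative assumptions.
import numpy as np

def fit_cp(triples, ratings, n_users, n_items, n_ctx,
           rank=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, rank))
    V = rng.normal(scale=0.1, size=(n_items, rank))
    C = rng.normal(scale=0.1, size=(n_ctx, rank))
    for _ in range(epochs):
        for (u, i, c), r in zip(triples, ratings):
            err = r - np.sum(U[u] * V[i] * C[c])   # residual on this entry
            # Gradient step on each factor, with L2 regularisation.
            gu = err * V[i] * C[c] - reg * U[u]
            gv = err * U[u] * C[c] - reg * V[i]
            gc = err * U[u] * V[i] - reg * C[c]
            U[u] += lr * gu
            V[i] += lr * gv
            C[c] += lr * gc
    return U, V, C

def score(U, V, C, u, i, c):
    """Predicted preference of user u for item i in context c."""
    return float(np.sum(U[u] * V[i] * C[c]))
```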
Abstract:
While news stories are an important traditional medium to broadcast and consume news, microblogging has recently emerged as a place where people can discuss, disseminate, collect or report information about news. However, the massive information in the microblogosphere makes it hard for readers to keep up with these real-time updates. This is especially a problem when it comes to breaking news, where people are more eager to know “what is happening”. Therefore, this dissertation is intended as an exploratory effort to investigate computational methods to augment human effort when monitoring the development of breaking news on a given topic from a microblog stream by extractively summarizing the updates in a timely manner. More specifically, given an interest in a topic, either entered as a query or presented as an initial news report, a microblog temporal summarization system is proposed to filter microblog posts from a stream with three primary concerns: topical relevance, novelty, and salience. Considering the relatively high arrival rate of microblog streams, a cascade framework consisting of three stages is proposed to progressively reduce the quantity of posts. For each step in the cascade, this dissertation studies methods that improve over current baselines. In the relevance filtering stage, query and document expansion techniques are applied to mitigate sparsity and vocabulary mismatch issues. The use of word embedding as a basis for filtering is also explored, using unsupervised and supervised modeling to characterize lexical and semantic similarity. In the novelty filtering stage, several statistical ways of characterizing novelty are investigated and ensemble learning techniques are used to integrate results from these diverse techniques. These results are compared with a baseline clustering approach using both standard and delay-discounted measures. In the salience filtering stage, because of the real-time prediction requirement, a method of learning verb phrase usage from past relevant news reports is used in conjunction with some standard measures for characterizing writing quality. Following a Cranfield-like evaluation paradigm, this dissertation includes a series of experiments to evaluate the proposed methods for each step, and for the end-to-end system. New microblog novelty and salience judgments are created, building on existing relevance judgments from the TREC Microblog track. The results point to future research directions at the intersection of social media, computational journalism, information retrieval, automatic summarization, and machine learning.
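A compact sketch of such a three-stage cascade over an incoming stream is given below: each post must pass topical relevance, novelty against already-emitted updates, and salience before being pushed to the reader. The embedding function, salience scorer, and thresholds are placeholders, not the dissertation's trained models.

```python
# Sketch of a relevance -> novelty -> salience cascade over a microblog
# stream. embed(), salience() and the thresholds are placeholders.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def cascade(posts, query_vec, embed, salience,
            rel_t=0.5, nov_t=0.8, sal_t=0.4):
    emitted = []          # embeddings of updates already pushed to the user
    for post in posts:    # posts arrive in timestamp order
        v = embed(post)
        if cosine(v, query_vec) < rel_t:           # stage 1: topical relevance
            continue
        if any(cosine(v, e) > nov_t for e in emitted):
            continue                               # stage 2: novelty
        if salience(post) < sal_t:                 # stage 3: salience
            continue
        emitted.append(v)
        yield post                                 # push update to the reader
```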
Abstract:
The workshop took place on 16-17 January in Utrecht, with seventy experts from eight European countries in attendance. The workshop was structured in six sessions: usage statistics; research paper metadata; exchanging information; author identification; Open Archives Initiative; and eTheses. Following the workshop, the discussion groups were asked to continue their collaboration and to produce a report for circulation to all participants. The recommendations contained in these reports were reviewed by the Knowledge Exchange partner organisations and formed the basis for new proposals and the next steps in Knowledge Exchange work with institutional repositories.

Institutional Repository Workshop - next steps
During April and May 2007, Knowledge Exchange had expert reviewers from the partner organisations go through the workshop strand reports and make recommendations about the best way to move forward, to set priorities, and to find possibilities for furthering the institutional repository cause. The KE partner representatives reviewed these reviews and consulted with their partner organisation management to get an indication of support and funding for the latest ideas and proposals, as follows.

Pragmatic interoperability
During a review meeting at the JISC offices in London on 31 May, the expert reviewers and the KE partner representatives agreed that ‘pragmatic interoperability’ is the primary area of interest. It was also agreed that the most relevant and beneficial choice for a Knowledge Exchange approach would be to aim for CRIS-OAR interoperability as a step towards integrated services. Within this context, interlinked joint projects could be undertaken by the partner organisations in the areas that most interested them.

Interlinked projects
The proposed Knowledge Exchange activities involve interlinked joint projects on metadata, persistent author identifiers, and eTheses, which are intended to connect to and build on projects such as ISPI, Jisc NAMES and the Digital Author Identifier (DAI) developed by SURF. It is important to stress that the projects are not intended to overlap with, but rather to supplement, the DRIVER 2 (EU project) approaches.

Focus on CRIS and OAR
It is believed that the focus on practical interoperability between Current Research Information Systems and Open Access Repository systems will be of genuine benefit to the research scientist, research administrator and librarian communities in the Knowledge Exchange countries, accommodating the specific needs of each group.

Timing
June 2007: draft proposal written by KE Working Group members. July 2007: final proposal sent to partner organisations by the KE Group. August 2007: decision by Knowledge Exchange partner organisations.
Abstract:
Poster presented at Open Repositories 2016 in Dublin, Ireland, June 14, 2016
Abstract:
Increasing the size of the training data in many computer vision tasks has proven very effective. Using large-scale image datasets (e.g. ImageNet) with simple learning techniques (e.g. linear classifiers), one can achieve state-of-the-art performance in object recognition compared to sophisticated learning techniques on smaller image sets. Semantic search on visual data has become very popular. There are billions of images on the internet, and the number is increasing every day. Dealing with large-scale image sets is demanding in itself: they take a significant amount of memory, which makes it impossible to process the images with complex algorithms on single-CPU machines. Finding an efficient image representation can be key to attacking this problem. But efficiency alone is not enough for image understanding: the representation should also be comprehensive and rich in the semantic information it carries. In this proposal we develop an approach to computing binary codes that provide a rich and efficient image representation. We demonstrate several tasks in which binary features can be very effective. We show how binary features can speed up large-scale image classification. We present techniques for learning the binary features from supervised image sets (with different types of semantic supervision: class labels, textual descriptions). We also propose several problems that are important in finding and using efficient image representations.
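As a rough illustration of binary codes for large-scale search, the sketch below uses classical random-hyperplane hashing (the sign of random projections) with Hamming-distance lookup; the learned, semantically supervised codes the proposal describes would replace the random projections. The feature dimensionality and code length are assumptions for the example.

```python
# Illustrative binary image codes via random hyperplanes (LSH), standing in
# for the learned codes the proposal describes. Dimensions are assumptions.
import numpy as np

def make_encoder(dim, n_bits=64, seed=0):
    """Return a function mapping real-valued features to {0,1} codes."""
    rng = np.random.default_rng(seed)
    hyperplanes = rng.normal(size=(n_bits, dim))
    def encode(features):                      # features: (n, dim)
        return (features @ hyperplanes.T > 0).astype(np.uint8)
    return encode

def hamming_search(query_code, db_codes, k=5):
    """Nearest neighbours of one code among db_codes by Hamming distance."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)[:k]

# Usage: encode a database of (synthetic) image features, then search.
encode = make_encoder(dim=512)
db = encode(np.random.randn(10_000, 512))
q = encode(np.random.randn(1, 512))[0]
print(hamming_search(q, db))
```

Because the codes are compact bit vectors, a 10,000-image database here fits in well under a megabyte, and Hamming distances reduce to fast bitwise operations, which is what makes this representation attractive at web scale.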