37 results for: Ontology, personalization, semantic relations, world knowledge, local instance repository, user profiles, web information gathering


Relevance: 100.00%

Abstract:

Increasingly, people's digital identities are attached to, and expressed through, their mobile devices. At the same time, digital sensors pervade the smart environments in which people are immersed. This paper explores different perspectives in which users' modelling features can be expressed through the information obtained by their attached personal sensors. We introduce the PreSense Ontology, which is designed to assign meaning to sensors' observations in terms of user modelling features. We believe that the Sensing Presence (PreSense) Ontology is a first step toward the integration of user modelling and "smart environments". To motivate our work we present a scenario and demonstrate how the ontology could be applied to enable context-sensitive services. © 2012 Springer-Verlag.
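The published PreSense vocabulary is not reproduced in the abstract; as a loose, hypothetical sketch of the core idea (assigning meaning to raw sensor observations in terms of user-modelling features), a mapping plus simple interpretation rules might look like the following. All terms, thresholds and function names here are invented for illustration.

```python
# Hypothetical sketch of the idea behind PreSense: mapping raw sensor
# observations to user-modelling features. The terms and thresholds below
# are illustrative assumptions, not the published PreSense vocabulary.

# A tiny "ontology": each observation type maps to a user-model feature.
OBSERVATION_TO_FEATURE = {
    "gps_speed": "mobility_state",
    "ambient_noise": "social_context",
    "heart_rate": "physiological_state",
}

def interpret(observation_type, value):
    """Assign meaning to a sensor observation as a user-modelling feature."""
    feature = OBSERVATION_TO_FEATURE.get(observation_type)
    if feature is None:
        return None
    # Illustrative interpretation rules.
    if observation_type == "gps_speed":
        state = "moving" if value > 1.0 else "stationary"  # m/s threshold
    elif observation_type == "ambient_noise":
        state = "crowded" if value > 70 else "quiet"       # dB threshold
    else:
        state = "elevated" if value > 100 else "resting"   # bpm threshold
    return {feature: state}
```

A context-sensitive service could then query the resulting feature (e.g. suppress notifications while `mobility_state` is `"moving"`).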

Relevance: 100.00%

Abstract:

Whereas the competitive advantage of firms can arise from size and position within their industry as well as physical assets, the pattern of competition in advanced economies has increasingly come to favour those firms that can mobilise knowledge and technological skills to create novelty in their products. At the same time, regions are attracting growing attention as an economic unit of analysis, with firms increasingly locating their functions in select regions within the global space. This article introduces the concept of knowledge competitiveness, defined as an economy's knowledge capacity, capability and sustainability, and the extent to which this knowledge is translated into economic value and transferred into the wealth of the citizens. The article discusses the way in which the knowledge competitiveness of regions is measured and further introduces the World Knowledge Competitiveness Index, which is the first composite and relative measure of the knowledge competitiveness of the globe's best-performing regions.
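The WKCI's actual indicators and weights are not given in the abstract; the following is a minimal sketch, under invented indicator names, values and weights, of how a composite regional index of this kind is typically computed: min-max normalisation of each indicator across regions, followed by a weighted sum.

```python
# Minimal sketch of a composite knowledge-competitiveness index.
# Indicator names, values and weights are invented for illustration;
# they are not the WKCI's actual components.

def min_max_normalise(values):
    """Rescale a list of raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(regions, weights):
    """regions: {name: [indicator values]}; weights should sum to 1."""
    names = list(regions)
    # Transpose region rows into one column per indicator.
    columns = list(zip(*(regions[n] for n in names)))
    normalised = [min_max_normalise(list(col)) for col in columns]
    return {name: sum(w * normalised[j][i] for j, w in enumerate(weights))
            for i, name in enumerate(names)}

regions = {
    "Region A": [120.0, 3.1, 45.0],  # e.g. patents, R&D spend %, graduates %
    "Region B": [80.0, 4.0, 30.0],
    "Region C": [150.0, 2.0, 50.0],
}
scores = composite_index(regions, weights=[0.5, 0.3, 0.2])
```

Normalising first makes indicators measured in different units commensurable before weighting, which is the standard construction for composite indices of this kind.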

Relevance: 100.00%

Abstract:

Benchmarking exercises have become increasingly popular within the sphere of regional policy making. However, most exercises are restricted to comparing regions within a particular continental bloc or nation. This article introduces the World Knowledge Competitiveness Index (WKCI), which is one of the very few benchmarking exercises established to compare regions across continents. The article discusses the formulation of the WKCI and analyzes the results of the most recent editions. The results suggest that there are significant variations in the knowledge-based regional economic development models at work across the globe. Further analysis also indicates that Silicon Valley, as the highest-ranked WKCI region, holds a unique economic position among the globe's leading regions. However, significant changes in the sources of regional competitiveness are evolving as a result of the emergence of new regional hot spots in Asia. It is concluded that benchmarking is imperative to the learning process of regional policy making.

Relevance: 100.00%

Abstract:

This thesis presents a new approach to designing large organizational databases. The approach emphasizes the need for a holistic approach to the design process. The development of the proposed approach was based on a comprehensive examination of the issues of relevance to the design and utilization of databases. Such issues include conceptual modelling, organization theory, and semantic theory. The conceptual modelling approach presented in this thesis is developed over three design stages, or model perspectives. In the semantic perspective, concept definitions are developed based on established semantic principles. Such definitions rely on meaning - provided by intension and extension - to determine intrinsic conceptual definitions. A tool, called meaning-based classification (MBC), is devised to classify concepts based on meaning. Concept classes are then integrated using concept definitions and a set of semantic relations which rely on concept content and form. In the application perspective, relationships are semantically defined according to the application environment. Relationship definitions include explicit relationship properties and constraints. The organization perspective introduces a new set of relations specifically developed to maintain conformity of conceptual abstractions with the nature of information abstractions implied by user requirements throughout the organization. Such relations are based on the stratification of work hierarchies, defined elsewhere in the thesis. Finally, an example of an application of the proposed approach is presented to illustrate the applicability and practicality of the modelling approach.

Relevance: 100.00%

Abstract:

One of the key challenges that organizations face when trying to integrate knowledge across different functions is the need to overcome knowledge boundaries between team members. In cross-functional teams, these boundaries, associated with the different knowledge backgrounds of people from various disciplines, create communication problems, necessitating team members to engage in complex cognitive processes when integrating knowledge toward a joint outcome. This research investigates the impact of syntactic, semantic, and pragmatic knowledge boundaries on a team's ability to develop a transactive memory system (TMS), a collective memory system for knowledge coordination in groups. Results from our survey show that syntactic and pragmatic knowledge boundaries negatively affect TMS development. These findings extend TMS theory beyond the information-processing view, which treats knowledge as an object that can be stored and retrieved, to the interpretive and practice-based views of knowledge, which recognize that knowledge (in particular specialized knowledge) is localized, situated, and embedded in practice.

Relevance: 100.00%

Abstract:

While semantic search technologies have been proven to work well in specific domains, they still have to confront two main challenges to scale up to the Web in its entirety. In this work we address this issue with a novel semantic search system that a) provides the user with the capability to query Semantic Web information using natural language, by means of an ontology-based Question Answering (QA) system [14] and b) complements the specific answers retrieved during the QA process with a ranked list of documents from the Web [3]. Our results show that ontology-based semantic search capabilities can be used to complement and enhance keyword search technologies.

Relevance: 100.00%

Abstract:

Most existing approaches to Twitter sentiment analysis assume that sentiment is explicitly expressed through affective words. Nevertheless, sentiment is often implicitly expressed via latent semantic relations, patterns and dependencies among words in tweets. In this paper, we propose a novel approach that automatically captures patterns of words of similar contextual semantics and sentiment in tweets. Unlike previous work on sentiment pattern extraction, our proposed approach does not rely on external and fixed sets of syntactical templates/patterns, nor does it require deep analysis of the syntactic structure of sentences in tweets. We evaluate our approach on tweet- and entity-level sentiment analysis tasks by using the extracted semantic patterns as classification features in both tasks. We use 9 Twitter datasets in our evaluation and compare the performance of our patterns against 6 state-of-the-art baselines. Results show that our patterns consistently outperform all other baselines on all datasets, with average F-measure gains of 2.19% at the tweet level and 7.5% at the entity level.
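The paper's actual pattern-extraction algorithm is not reproduced in the abstract; as a loose, hypothetical sketch of the underlying intuition (words are contextually similar when they co-occur with similar neighbours), one can compare words by the overlap of their co-occurrence sets. The function names, example tweets and threshold are assumptions.

```python
# Hypothetical sketch: group words that share contextual neighbours in tweets.
# This is a simplification for illustration, not the paper's algorithm.
from collections import defaultdict

def context_sets(tweets):
    """Map each word to the set of words it co-occurs with in any tweet."""
    contexts = defaultdict(set)
    for tweet in tweets:
        words = tweet.lower().split()
        for w in words:
            contexts[w].update(x for x in words if x != w)
    return contexts

def jaccard(a, b):
    """Overlap of two sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similar_words(word, contexts, threshold=0.3):
    """Words whose co-occurrence contexts overlap `word`'s above a threshold."""
    base = contexts[word]
    return sorted(w for w, ctx in contexts.items()
                  if w != word and jaccard(base, ctx) >= threshold)

tweets = [
    "great phone love it",
    "great camera love it",
    "terrible battery hate it",
]
contexts = context_sets(tweets)
```

Here "phone" and "camera" end up contextually similar (both appear with "great" and "love"), while "battery" does not, which is the kind of latent relation the paper exploits as a classification feature.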

Relevance: 100.00%

Abstract:

This study was concerned with the computer automation of land evaluation. This is a broad subject with many issues to be resolved, so the study concentrated on three key problems: knowledge-based programming; the integration of spatial information from remote sensing and other sources; and the inclusion of socio-economic information in the land evaluation analysis. Land evaluation and land use planning were considered in the context of overseas projects in the developing world. Knowledge-based systems were found to provide significant advantages over conventional programming techniques for some aspects of the land evaluation process. Declarative languages, in particular Prolog, were ideally suited to the integration of social information, which changes with every situation. Rule-based expert system shells were also found to be suitable for this role, including knowledge acquisition at the interview stage. All the expert system shells examined imposed severe constraints on problem size, but new products now overcome this. Inductive expert system shells were useful as a guide to knowledge gaps and possible relationships, but the number of examples required was unrealistic for typical land use planning situations. The accuracy of classified satellite imagery was significantly enhanced by integrating spatial information on soil distribution for the Thailand data. Estimates of the rice-producing area were substantially improved (30% change in area) by the addition of soil information. Image processing work on Mozambique showed that satellite remote sensing was a useful tool in stratifying vegetation cover at provincial level to identify key development areas, but its full utility could not be realised on typical planning projects without treatment as part of a complete spatial information system.
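The thesis implemented its declarative rules in Prolog; as an illustrative stand-in, a minimal rule-based land-suitability classifier might look like the following. The rules, attributes and thresholds are invented for demonstration and are not taken from the study.

```python
# Illustrative rule-based land-evaluation sketch. The rules and thresholds
# are invented for demonstration; the thesis expressed such rules in Prolog.

def classify_land(soil_depth_cm, rainfall_mm, slope_pct):
    """Return a land-suitability class from simple declarative rules."""
    rules = [
        # (condition, suitability class), evaluated in order of strictness.
        (lambda: soil_depth_cm >= 100 and rainfall_mm >= 1000 and slope_pct < 8,
         "highly suitable"),
        (lambda: soil_depth_cm >= 50 and rainfall_mm >= 600 and slope_pct < 16,
         "moderately suitable"),
        (lambda: soil_depth_cm >= 25 and slope_pct < 30,
         "marginally suitable"),
    ]
    for condition, suitability in rules:
        if condition():
            return suitability
    return "not suitable"
```

Keeping the rules as data (a list of condition/conclusion pairs) rather than hard-coded branches mirrors the declarative style the study found advantageous: new rules can be added without restructuring the program.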

Relevance: 100.00%

Abstract:

Multi-agent systems are complex systems comprised of multiple intelligent agents that act either independently or in cooperation with one another. Agent-based modelling is a method for studying complex systems like economies, societies, ecologies etc. Due to their complexity, very often mathematical analysis is limited in its ability to analyse such systems. In this case, agent-based modelling offers a practical, constructive method of analysis. The objective of this book is to shed light on some emergent properties of multi-agent systems. The authors focus their investigation on the effect of knowledge exchange on the convergence of complex, multi-agent systems.
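The book's models are not detailed in the abstract; as a toy illustration of the phenomenon it studies (knowledge exchange driving a multi-agent system toward convergence), the following sketch lets randomly chosen pairs of agents average their "knowledge" values. All names and parameters are invented.

```python
# Toy sketch of knowledge exchange in a multi-agent system: at each step two
# randomly chosen agents average their "knowledge" values, so the population
# converges toward consensus. An illustration only, not the book's models.
import random

def run_exchange(knowledge, steps, seed=0):
    """Simulate pairwise knowledge exchange; returns the final values."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    knowledge = list(knowledge)
    for _ in range(steps):
        i, j = rng.sample(range(len(knowledge)), 2)
        mean = (knowledge[i] + knowledge[j]) / 2.0
        knowledge[i] = knowledge[j] = mean  # both agents now share the mean
    return knowledge

def spread(values):
    """Gap between the best- and worst-informed agent."""
    return max(values) - min(values)

before = [0.0, 0.2, 0.9, 1.0]
after = run_exchange(before, steps=200)
```

Each exchange preserves the total knowledge in the system while shrinking the spread between agents, which is the emergent convergence property the authors investigate.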

Relevance: 100.00%

Abstract:

In product reviews, it is observed that the distribution of polarity ratings over reviews written by different users, or of different products, is often skewed in the real world. As such, incorporating user and product information would be helpful for the task of sentiment classification of reviews. However, existing approaches have ignored the temporal nature of reviews posted by the same user or of the same product. We argue that the temporal relations of reviews might be potentially useful for learning user and product embeddings, and thus propose employing a sequence model to embed these temporal relations into user and product representations so as to improve the performance of document-level sentiment analysis. Specifically, we first learn a distributed representation of each review by a one-dimensional convolutional neural network. Then, taking these representations as pretrained vectors, we use a recurrent neural network with gated recurrent units to learn distributed representations of users and products. Finally, we feed the user, product and review representations into a machine learning classifier for sentiment classification. Our approach has been evaluated on three large-scale review datasets from IMDB and Yelp. Experimental results show that: (1) sequence modeling for the purposes of distributed user and product representation learning can improve the performance of document-level sentiment classification; (2) the proposed approach achieves state-of-the-art results on these benchmark datasets.
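The paper uses a CNN for review vectors and a GRU for the temporal aggregation; as a dependency-free stand-in, the sketch below folds a user's time-ordered review vectors into one representation with an exponential moving average. It is only meant to show why temporal order matters for the resulting user embedding; the function name and parameter are invented.

```python
# Toy sketch of temporal user-representation learning. The paper uses a GRU;
# here an exponential moving average stands in for the recurrent update, to
# show that the representation depends on the order of a user's reviews.

def temporal_user_vector(review_vectors, alpha=0.5):
    """Fold a user's time-ordered review vectors into one representation.

    alpha controls how strongly recent reviews dominate the state.
    """
    state = [0.0] * len(review_vectors[0])
    for vec in review_vectors:  # oldest to newest
        state = [(1 - alpha) * s + alpha * v for s, v in zip(state, vec)]
    return state

# The same two reviews in opposite temporal order yield different users.
early_negative = [[-1.0, 0.0], [1.0, 1.0]]  # negative review, then positive
early_positive = [[1.0, 1.0], [-1.0, 0.0]]  # same reviews, reversed order
```

A bag-of-reviews average would give both orderings the same vector; any recurrent update (GRU or this EMA stand-in) distinguishes them, which is the temporal signal the paper exploits.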

Relevance: 100.00%

Abstract:

This thesis addressed the problem of risk analysis in mental healthcare, with respect to the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content, and is not considered to formally represent knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement; that process drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems, by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. That algorithm further identified stems, the shortest text strings found in related word-forms. As opposed to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine likely intended meanings for ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps, to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach here is XML-centric, in that all stages rely on XML as far as possible. An XML-based query language allows users to retrieve information from the mind map knowledge base.
The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
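The Edit Distance referred to above is the standard Levenshtein distance. A minimal implementation, together with a spelling-correction helper of the kind the thesis describes (the helper's name and the example vocabulary are illustrative), could look like this:

```python
# Standard Levenshtein edit distance, of the kind used to match misspelled
# mind-map terms against candidate corrections.

def edit_distance(a, b):
    """Minimum insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution / match
        prev = curr
    return prev[-1]

def best_correction(word, vocabulary):
    """Pick the vocabulary term closest to a (possibly misspelled) word."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))
```

The same distance, applied between a word and its related word-forms, also supports the stem identification mentioned above: the common prefix shared by the lowest-distance forms is a candidate stem.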

Relevance: 100.00%

Abstract:

During group meetings it is often difficult for participants to effectively: share their knowledge to inform the outcome; acquire new knowledge from others to broaden and/or deepen their understanding; utilise all available knowledge to design an outcome; and record (to retain) the rationale behind the outcome to inform future activities. These are difficult because, for example: only one person can share knowledge at once which challenges effective sharing; information overload makes acquisition problematic and can marginalize important knowledge; and intense dialog of conflicting views makes recording more complex.

Relevance: 100.00%

Abstract:

The primary objective of this research was to understand what kinds of knowledge and skills people use in 'extracting' relevant information from text and to assess the extent to which expert systems techniques could be applied to automate the process of abstracting. The approach adopted in this thesis is based on research in cognitive science, information science, psycholinguistics and textlinguistics. The study addressed the significance of domain knowledge and heuristic rules by developing an information extraction system, called INFORMEX. This system, which was implemented partly in SPITBOL, and partly in PROLOG, used a set of heuristic rules to analyse five scientific papers of expository type, to interpret the content in relation to the key abstract elements and to extract a set of sentences recognised as relevant for abstracting purposes. The analysis of these extracts revealed that an adequate abstract could be generated. Furthermore, INFORMEX showed that a rule based system was a suitable computational model to represent experts' knowledge and strategies. This computational technique provided the basis for a new approach to the modelling of cognition. It showed how experts tackle the task of abstracting by integrating formal knowledge as well as experiential learning. This thesis demonstrated that empirical and theoretical knowledge can be effectively combined in expert systems technology to provide a valuable starting approach to automatic abstracting.
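INFORMEX itself was written in SPITBOL and Prolog; as a toy re-expression of the heuristic-rule idea (cue phrases signalling abstract-worthy sentences), the following sketch scores sentences against a small cue table and keeps the top-scoring ones. The cues, weights and example document are invented.

```python
# Toy sketch of heuristic sentence extraction for abstracting. Cue phrases
# and weights are invented; INFORMEX used a richer rule set in SPITBOL/Prolog.

CUE_WEIGHTS = {
    "in conclusion": 3,
    "we show": 2,
    "this paper": 2,
    "results": 1,
}

def score_sentence(sentence):
    """Sum the weights of all cue phrases present in the sentence."""
    text = sentence.lower()
    return sum(w for cue, w in CUE_WEIGHTS.items() if cue in text)

def extract(sentences, top_k=2):
    """Return the top-k sentences by heuristic cue score, in original order."""
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score_sentence(sentences[i]),
                    reverse=True)[:top_k]
    return [sentences[i] for i in sorted(ranked)]

doc = [
    "Cells were cultured for ten days.",
    "This paper reports a new assay.",
    "The buffer was replaced daily.",
    "In conclusion, the assay improves detection results.",
]
```

Preserving original order in the output matters for abstracting: the selected sentences should still read as a coherent mini-summary of the source.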

Relevance: 100.00%

Abstract:

In less than a decade, personal computers have become part of our daily lives. Many of us come into contact with computers every day, whether at work, school or home. As useful as the new technologies are, they also have a darker side. By making computers part of our daily lives, we run the risk of allowing thieves, swindlers, and all kinds of deviants directly into our homes. Armed with a personal computer, a modem and just a little knowledge, a thief can easily access confidential information, such as details of bank accounts and credit cards. This book helps people avoid harm at the hands of Internet criminals. It offers a tour of the more dangerous parts of the Internet, as the author explains who the predators are, their motivations, how they operate and how to protect against them. Behind the doors of our own homes, we assume we are safe from predators, con artists, and other criminals wishing us harm. But the proliferation of personal computers and the growth of the Internet have invited these unsavory types right into our family rooms. With a little psychological knowledge a con man can start to manipulate us in different ways. A terrorist can recruit new members and raise money over the Internet. Identity thieves can gather personal information and exploit it for criminal purposes. Spammers can wreak havoc on businesses and individuals. Here, an expert helps readers recognize the signs of a would-be criminal in their midst. Focusing on the perpetrators, the author provides information about how they operate, why they do it, what they hope to do, and how to protect yourself from becoming a victim.

Relevance: 100.00%

Abstract:

Background: DNA-binding proteins play a pivotal role in various intra- and extra-cellular activities ranging from DNA replication to gene expression control. Identification of DNA-binding proteins is one of the major challenges in the field of genome annotation. Several computational methods have been proposed in the literature for DNA-binding protein identification. However, most of them cannot provide a valuable knowledge base for our understanding of DNA-protein interactions. Results: We first present a new protein sequence encoding method called PSSM Distance Transformation, and then construct a DNA-binding protein identification method (SVM-PSSM-DT) by combining PSSM Distance Transformation with a support vector machine (SVM). First, the PSSM profiles are generated by using the PSI-BLAST program to search the non-redundant (NR) database. Next, the PSSM profiles are transformed into uniform numeric representations by the distance transformation scheme. Lastly, the resulting uniform numeric representations are input into an SVM classifier for prediction. Thus, whether or not a sequence binds DNA can be determined. In a benchmark test on 525 DNA-binding and 550 non-DNA-binding proteins using jackknife validation, the present model achieved an ACC of 79.96%, MCC of 0.622 and AUC of 86.50%. This performance is considerably better than that of most existing state-of-the-art predictive methods. When tested on a recently constructed independent dataset, PDB186, SVM-PSSM-DT also achieved the best performance, with ACC of 80.00%, MCC of 0.647 and AUC of 87.40%, outperforming existing state-of-the-art methods. Conclusions: The experimental results demonstrate that PSSM Distance Transformation is an effective protein sequence encoding method and SVM-PSSM-DT is a useful tool for identifying DNA-binding proteins.
A user-friendly web server for SVM-PSSM-DT was constructed, which is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/PSSM-DT/.