439 results for Ontologies


Relevance:

10.00%

Publisher:

Abstract:

Abstract This seminar is a research discussion around a very interesting problem, which may be a good basis for a WAISfest theme. A little over a year ago Professor Alan Dix came to tell us of his plans for a magnificent adventure: to walk all of the way round Wales - 1000 miles - 'Alan Walks Wales'. The walk was a personal journey, but also a technological and community one, exploring the needs of the walker and the people along the way. While walking he recorded his thoughts in an audio diary, took lots of photos, wrote a blog and collected data from the technical instruments he was wearing. As a result Alan has extensive quantitative data (bio-sensing and location) and qualitative data (text, images and some audio). There are challenges in analysing the individual kinds of data, including merging similar data streams, entity identification, time-series and textual data mining, dealing with provenance, and ontologies for paths and journeys. There are also challenges for author and third-party annotation, and for linking the data sets and visualising the merged narrative or facets of it.
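
One of the challenges listed above, merging similar data streams, can be illustrated with a minimal sketch that aligns a location time-series with timestamped diary notes. The data, field names and tolerance below are invented for illustration; this is not the project's actual pipeline.

```python
# Align a location time-series with timestamped diary notes by nearest timestamp.
import pandas as pd

# Hypothetical data standing in for the walk's quantitative and qualitative streams.
locations = pd.DataFrame({
    "timestamp": pd.to_datetime(["2013-04-01 09:00", "2013-04-01 09:30", "2013-04-01 10:00"]),
    "lat": [51.48, 51.50, 51.52],
    "lon": [-3.18, -3.17, -3.15],
})
notes = pd.DataFrame({
    "timestamp": pd.to_datetime(["2013-04-01 09:28", "2013-04-01 10:05"]),
    "note": ["steep climb out of the valley", "short rest, photo of the coast path"],
})

# merge_asof pairs each note with the nearest location fix within 15 minutes.
merged = pd.merge_asof(
    notes.sort_values("timestamp"),
    locations.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("15min"),
)
print(merged)
```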

Relevance:

10.00%

Publisher:

Abstract:

ABSTRACT In the first two seminars we looked at the evolution of ontologies from the current OWL level towards more powerful and expressive models, and at the corresponding hierarchy of logics that underpins every stage of this evolution. We examined this in the more general context of the evolution of the Web as a mathematical (directed and weighted) graph and the archetypical "living network". In the third seminar we will analyse further some of the startling properties that the Web has as a graph/network, which it shares with an array of "real-life" networks, as well as some key elements of the mathematics (probability, statistics and graph theory) that underpin all of this. No mathematical prerequisites are assumed or required. We will outline some directions that current (2005-now) research is taking and conclude with some illustrations/examples from ongoing research and applications that show great promise.
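
As a small illustration of the seminar's theme rather than material from the seminar itself, the sketch below treats a handful of pages as a directed graph and inspects its in-degree distribution and PageRank; the link data is invented.

```python
# Treat a set of web pages as a directed graph and inspect simple network properties.
from collections import Counter
import networkx as nx

# Hypothetical link structure: each edge points from a page to a page it links to.
links = [
    ("a", "b"), ("a", "c"), ("b", "c"), ("d", "c"),
    ("e", "c"), ("e", "b"), ("f", "a"),
]
web = nx.DiGraph(links)

# The in-degree of a page counts how many other pages point to it; on the real
# Web this distribution is heavy-tailed rather than bell-shaped.
in_degrees = [deg for _, deg in web.in_degree()]
print(Counter(in_degrees))   # how many pages have 0, 1, 2, ... incoming links
print(nx.pagerank(web))      # a simple link-analysis measure computed on the same graph
```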

Relevance:

10.00%

Publisher:

Abstract:

Abstract A frequent assumption about Social Media is that its open nature leads to a representative view of the world. In this talk we consider bias occurring in the Social Web. We will look at a case study of LiquidFeedback, a direct-democracy platform of the German Pirate Party, as well as models of (non-)discriminating systems. As a conclusion of this talk we stipulate the need for Social Media systems to bias their workings according to social norms and to publish the bias they introduce. Speaker Biography: Prof Steffen Staab. Steffen studied computer science and computational linguistics in Erlangen (Germany), Philadelphia (USA) and Freiburg (Germany). Afterwards he worked as a researcher at Uni. Stuttgart/Fraunhofer and Univ. Karlsruhe, before becoming a professor in Koblenz (Germany). Since March 2015 he has also held a chair for Web and Computer Science at the Univ. of Southampton, sharing his time between here and Koblenz. In his research career he has managed to avoid almost all of the good advice that he now gives to his team members. Such advice includes focusing on research (vs. a company) and concentrating on only one or two research areas (vs. considering ontologies, semantic web, social web, data engineering, text mining, peer-to-peer, multimedia, HCI, services, software modelling and programming, and some more). Though, actually, improving how we understand and use text and data is a good common denominator for a lot of Steffen's professional activities.

Relevance:

10.00%

Publisher:

Abstract:

The designation of places as environmental protection areas can be seen as a technical and objective process, in which public policies are created that define appropriate and inappropriate practices for the place. But this designation is a historical and negotiated process. It is built through constant dialogue between different actors concerned with defining what nature and environmental care are, and the perceptions that individuals who live in or near these places construct in their daily lives. Thus, the socio-environmental configuration of places as protection areas comes about through transformations in the way a place is perceived, in the relations with it and, above all, in practices and relations that translate into ways of negotiating notions of nature and environmental care. This negotiation has major implications for individuals, particularly for their subjectivity. It shows in things such as how they name the place, walk it and observe its species, in starting organic farming projects, changing productive practices, or fencing off areas to protect water sources or patches of vegetation; and it shows in their subjectivity, in how they feel about the place, how they judge their own actions and those of others, and how they construct personal goals around the idea of environmental care.

Relevance:

10.00%

Publisher:

Abstract:

The Anomalous Monism that Donald Davidson proposed as a possible way through the problems latent in contemporary philosophy of mind has been the target of serious criticism, especially the objections raised by Jaegwon Kim. However, a reading that takes into account the references Davidson made to the project Kant developed in the Third Antinomy of Pure Reason can serve to offer a reading in favour of his project.

Relevance:

10.00%

Publisher:

Abstract:

The work developed in this thesis presents an in-depth study and provides innovative solutions in the field of recommender systems. The methods these systems use to generate recommendations, such as Content-Based Filtering (CBF), Collaborative Filtering (CF) and Knowledge-Based Filtering (KBF), require information about users in order to predict their preferences for certain products. This information may be demographic (gender, age, address, etc.), ratings given to products bought in the past, or information about their interests. There are two ways of obtaining this information: users provide it explicitly, or the system acquires the implicit information available in users' transactions or search history. For example, the MovieLens film recommender (http://movielens.umn.edu/login) asks users to rate at least 15 films on a scale from * to ***** (awful, ..., must be seen). The system generates recommendations based on these ratings. When users are not registered in the system and it has no information about them, some systems make recommendations based on browsing history. Amazon.com (http://www.amazon.com) makes recommendations based on the searches a user has made, or recommends the best-selling product. Nevertheless, these systems suffer from a certain lack of information. This problem is generally solved by acquiring additional information, either by asking users about their interests or by looking for the information in additional sources. The solution proposed in this thesis is to look for this information in several sources, specifically those that contain implicit information about users' preferences. These sources may be structured, such as databases with purchase information, or unstructured, such as web pages where users leave their opinion about a product they bought or own. We identify three fundamental problems in achieving this goal: 1. The identification of sources with information suitable for recommender systems. 2. The definition of criteria that allow the comparison and selection of the most suitable sources. 3. The retrieval of information from unstructured sources. Accordingly, the thesis develops: 1. A methodology for identifying and selecting the most suitable sources; criteria based on the characteristics of the sources and a trust measure are used to solve the identification and selection problem. 2. A mechanism for retrieving the unstructured user information available on the web; text-mining techniques and ontologies are used to extract the information and structure it appropriately so that recommenders can use it. The contributions of the work developed in this doctoral thesis are: 1. The definition of a set of characteristics for classifying sources relevant to recommender systems. 2. The development of a measure of source relevance computed from the defined characteristics. 3. The application of a trust measure to obtain the most reliable sources; trust is defined from the perspective of improving the recommendations, so a reliable source is one that makes it possible to improve the recommendations. 4. The development of an algorithm for selecting, from a set of possible sources, the most relevant and reliable ones, using the measures mentioned in the previous points. 5. The definition of an ontology for structuring the information about users' preferences that is available on the Internet. 6. The creation of a mapping process that automatically extracts information about users' preferences available on the web and places it in the ontology. These contributions make it possible to achieve two important goals: 1. Improving recommendations by using alternative information sources that are relevant and reliable. 2. Obtaining implicit information about users that is available on the Internet.
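
A minimal sketch of the source-selection idea described above, assuming invented source characteristics, weights and thresholds rather than the thesis's actual measures:

```python
# Score candidate information sources by relevance (from their characteristics)
# and trust (how much they improve recommendations), then keep the best ones.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    coverage: float    # fraction of target users/items the source covers
    freshness: float   # how up to date the information is (0..1)
    structured: float  # 1.0 for a database, lower for free-text web pages
    trust: float       # measured offline as improvement in recommendation quality

def relevance(src: Source) -> float:
    # Weighted combination of source characteristics; the weights are illustrative.
    return 0.5 * src.coverage + 0.3 * src.freshness + 0.2 * src.structured

def select_sources(sources, min_trust=0.6, top_k=2):
    trusted = [s for s in sources if s.trust >= min_trust]
    return sorted(trusted, key=relevance, reverse=True)[:top_k]

candidates = [
    Source("purchase_db", coverage=0.9, freshness=0.7, structured=1.0, trust=0.8),
    Source("review_site", coverage=0.6, freshness=0.9, structured=0.3, trust=0.7),
    Source("old_survey", coverage=0.4, freshness=0.2, structured=1.0, trust=0.4),
]
for s in select_sources(candidates):
    print(s.name, round(relevance(s), 2))
```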

Relevance:

10.00%

Publisher:

Abstract:

Competency management is a very important part of a well-functioning organisation. Unfortunately, competency descriptions are not uniformly specified or defined across national, sectoral or organisational borders, leading to an opaque competency description market with a multitude of competency frameworks and competency benchmarks. An ontology is a formalised description of a domain, which enables automated reasoning engines to be built that, by utilising the interrelations between entities, can make "intelligent" choices in different situations within the domain. By introducing formalised competency ontologies, automated tools such as skill gap analysis, training suggestion generation, and job search and recruitment can be developed which compare and contrast different competency descriptions on the semantic level. The major problem with defining a common formalised ontology for competencies is that there are so many viewpoints on competencies and competency frameworks. Work within the TRACE project has focused on finding common trends within different competency frameworks in order to allow an intermediate competency description to be made, which other frameworks can reference. This research has shown that competencies can be divided into "knowledge", "skills" and what we call "others". An ontology has been created based on this, with a simple structure of different "kinds" of "knowledge" and "skills" using semantic interrelations to define the basic semantic structure of the ontology. A prototype tool for performing a skill gap analysis has been developed: personal profiles can be produced using the tool, and a skill gap analysis is performed against a desired competency profile by using an ontologically based inference engine, which is able to list the closest fit and possible proficiency gaps.
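
A toy sketch of an ontology-style skill gap analysis in the spirit described above; the competency hierarchy, profiles and helper names are invented and stand in for the TRACE ontology and the prototype tool:

```python
# Competencies arranged in a small "kind-of" hierarchy; a personal profile is
# compared against a desired profile, crediting specific skills towards the
# more general competencies they are a kind of.
BROADER = {               # hypothetical "kind of" relations
    "python": "programming",
    "java": "programming",
    "sparql": "querying",
    "sql": "querying",
}

def generalise(skill):
    """Return the skill together with all of its broader ancestors."""
    out = {skill}
    while skill in BROADER:
        skill = BROADER[skill]
        out.add(skill)
    return out

def skill_gap(personal, desired):
    covered = set().union(*(generalise(s) for s in personal))
    return {s for s in desired if s not in covered}

profile = {"python", "sql"}
target = {"programming", "querying", "sparql"}
print(skill_gap(profile, target))   # {'sparql'}: the only competency left uncovered
```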

Relevance:

10.00%

Publisher:

Abstract:

The storage and processing capacity realised by computing has led to an explosion of data retention. We have now reached the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but it has yet to be realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method, most ontologies will have to be encoded by hand. In this paper we discuss the current methods for semi- and fully automatic construction and their current shortcomings. In particular, we pay attention to the application of ontologies to products and to the practical application of the ontologies.
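
As an illustration of the kind of heuristic that semi-automatic construction methods rely on, and not a method taken from the paper, the sketch below extracts candidate product terms from text and proposes "is-a" links from their head nouns; the corpus and names are invented.

```python
# Extract multi-word product terms and induce "is-a" candidates from head nouns
# ("digital camera" is proposed as a subclass of "camera").
import re
from collections import Counter

corpus = [
    "The digital camera and the compact camera share a lens mount.",
    "A reflex camera needs a larger battery than a compact camera.",
]

# Naive term extraction: adjacent word pairs, counted across the corpus.
pairs = Counter()
for sentence in corpus:
    words = re.findall(r"[a-z]+", sentence.lower())
    pairs.update(zip(words, words[1:]))

# Keep pairs whose head noun is "camera" and propose them as subclasses of it.
candidate_terms = [" ".join(p) for p in pairs if p[1] == "camera"]
proposed_isa = {term: term.split()[-1] for term in candidate_terms}
print(proposed_isa)   # {'digital camera': 'camera', 'compact camera': 'camera', 'reflex camera': 'camera'}
```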

Relevance:

10.00%

Publisher:

Abstract:

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the process of storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.
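
A minimal sketch of ontology-driven retrieval in the spirit of the system described above, with invented clip names, concepts and subclass relations rather than DREAM's actual ontologies:

```python
# Clips are annotated with specific concepts; a query for a broader concept is
# expanded through the subclass hierarchy before matching.
SUBCLASSES = {                      # tiny stand-in for a network of ontologies
    "explosion": {"fireball", "debris_cloud"},
    "weather_effect": {"rain", "snow", "fog"},
}

clips = {
    "clip_001.mov": {"fireball", "slow_motion"},
    "clip_002.mov": {"rain", "night"},
    "clip_003.mov": {"debris_cloud"},
}

def expand(concept):
    """Return the concept plus all of its (transitive) subclasses."""
    found = {concept}
    for child in SUBCLASSES.get(concept, ()):
        found |= expand(child)
    return found

def retrieve(concept):
    wanted = expand(concept)
    return [name for name, tags in clips.items() if tags & wanted]

print(retrieve("explosion"))        # ['clip_001.mov', 'clip_003.mov']
```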

Relevance:

10.00%

Publisher:

Abstract:

It is problematic to use standard ontology tools when describing vague domains. Standard ontologies are designed to formally define one view of a domain, and although it is possible to define disagreeing statements, it is not advisable, as the resulting inferences could be incorrect. Two different solutions to this problem, in two different vague domains, have been developed and are presented. The first domain is the knowledge base of conversational agents (chatbots). An ontological scripting language has been designed to access ontology data from within chatbot code. The solution developed is based on reifications of user statements. It enables a new layer of logic based on the different views of the users, enabling the body of knowledge to grow automatically. The second domain is competencies and competency frameworks. An ontological framework has been developed to model different competencies using the emergent standards. It enables comparison of competencies using a mix of linguistic logics and description logics. The comparison results are non-binary, rather than simple yes and no answers, highlighting the vague nature of the comparisons. The solution has been developed with small ontologies which can be added to and modified so that the competency user can build a total picture that fits the user's purpose. Finally, these two approaches are viewed in the light of how they could aid future work in vague domains; further work is described in both domains and also in others such as the Semantic Web. This demonstrates two different approaches to achieving inferences using standard ontology tools in vague domains.
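
A small illustration, with invented statements and class names, of the reification idea described above: each user statement is stored as an object in its own right, so contradictory views can coexist without making the base knowledge inconsistent.

```python
# Store each user's statement as a first-class object rather than a bare fact,
# so different (even contradictory) views can be queried side by side.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReifiedStatement:
    subject: str
    predicate: str
    obj: str
    asserted_by: str    # the user whose view this is

kb = [
    ReifiedStatement("coffee", "tastes", "good", asserted_by="alice"),
    ReifiedStatement("coffee", "tastes", "bad", asserted_by="bob"),
    ReifiedStatement("tea", "tastes", "good", asserted_by="bob"),
]

def views(subject, predicate):
    """All user-specific values recorded for a (subject, predicate) pair."""
    return {s.asserted_by: s.obj
            for s in kb if s.subject == subject and s.predicate == predicate}

print(views("coffee", "tastes"))    # {'alice': 'good', 'bob': 'bad'}
```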