807 results for Ontology, personalization, semantic relations, world knowledge, local instance repository, user profiles, web information gathering
Abstract:
Web-scale knowledge retrieval can be enabled by distributed information retrieval, clustering Web clients into a large-scale computing infrastructure for knowledge discovery from Web documents. Based on this infrastructure, we propose to apply semiotic (i.e., sub-syntactical) and inductive (i.e., probabilistic) methods for inferring concept associations in human knowledge. These associations can be combined to form a fuzzy (i.e., gradual) semantic net representing a map of the knowledge in the Web. We thus propose to provide interactive visualizations of these cognitive concept maps to end users, who can browse and search the Web through a human-oriented, visual, and associative interface.
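A minimal sketch of such a fuzzy semantic net, assuming simple co-occurrence counts as the inductive (probabilistic) signal; the function name and toy documents below are hypothetical, not taken from the abstract:

```python
from collections import Counter
from itertools import combinations

def fuzzy_concept_net(documents):
    """Build a fuzzy (gradual) semantic net: each edge carries a degree in
    [0, 1] grading how strongly two concepts are associated, estimated
    inductively from co-occurrence counts."""
    pair_counts = Counter()
    term_counts = Counter()
    for terms in documents:
        unique = set(terms)
        term_counts.update(unique)
        pair_counts.update(frozenset(p) for p in combinations(sorted(unique), 2))
    net = {}
    for pair, n_both in pair_counts.items():
        a, b = tuple(pair)
        # Gradual association: co-occurrences relative to the rarer concept
        net[pair] = n_both / min(term_counts[a], term_counts[b])
    return net

# Toy "Web documents" as bags of concepts
docs = [["web", "knowledge", "ontology"],
        ["web", "knowledge"],
        ["ontology", "reasoning"]]
net = fuzzy_concept_net(docs)
```

Edges with degree near 1 would be drawn as strong links in the interactive concept map, weaker ones as faint associations.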
Abstract:
For the main part, electronic government (or e-government for short) aims to put digital public services at the disposal of citizens, companies, and organizations. To that end, e-government comprises the application of Information and Communications Technology (ICT) to support government operations and provide better governmental services than are possible with traditional means (Fraga, 2002). Accordingly, e-government services go further than traditional governmental services and aim to fundamentally alter the processes by which public services are generated and delivered, in this manner transforming the entire spectrum of relationships of public bodies with their citizens, businesses, and other government agencies (Leitner, 2003). To implement this transformation, one of the most important points is to inform citizens, businesses, and/or other government agencies faithfully and in an accessible way. This allows all participants in governmental affairs to move from passive information access to active participation (Palvia and Sharma, 2007). In addition, by appropriate handling of the participants' data, personalization towards these participants may even be accomplished. For instance, by creating meaningful user profiles as a kind of tailored knowledge structure for participants, better-quality governmental services may be provided (i.e., individualized governmental services). To create such knowledge structures, known information (e.g., a social security number) can be enriched by vague information that may be accurate only to a certain degree. Hence, fuzzy knowledge structures can be generated, which help improve the relationship between government and participants.
The Web KnowARR framework (Portmann and Thiessen, 2013; Portmann and Pedrycz, 2014; Portmann and Kaltenrieder, 2014), which I introduce in my presentation, allows all these participants to be automatically informed about changes of Web content regarding a respective governmental action. The name Web KnowARR thereby stands for a self-acting entity (i.e., instantiated from the conceptual framework) that knows or apprehends the Web. In this talk, the framework's three main components from artificial intelligence research (i.e., knowledge aggregation, representation, and reasoning), as well as its specific use in electronic government, are briefly introduced and discussed.
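The fuzzy knowledge structures sketched above — crisp facts enriched with graded, vague information — can be illustrated as follows; the profile attributes, services, and min-based matching are hypothetical, not taken from Web KnowARR:

```python
def fuzzy_profile(crisp_facts, fuzzy_facts):
    """A participant profile mixing crisp facts (degree 1.0) with vague
    knowledge that holds only to a degree in [0, 1]."""
    profile = {k: 1.0 for k in crisp_facts}
    profile.update(fuzzy_facts)
    return profile

def personalize(profile, services):
    """Rank services by how strongly the profile supports each one:
    the min over the degrees of the attributes a service requires."""
    scored = []
    for name, required in services.items():
        degree = min((profile.get(a, 0.0) for a in required), default=0.0)
        scored.append((degree, name))
    return [name for degree, name in sorted(scored, reverse=True) if degree > 0]

profile = fuzzy_profile(
    {"resident"},                    # known (crisp) information
    {"interested_in_permits": 0.8,   # vague, graded knowledge
     "frequent_mover": 0.3})
services = {"building_permit_alerts": ["resident", "interested_in_permits"],
            "relocation_guide": ["resident", "frequent_mover"]}
```

Calling `personalize(profile, services)` orders the individualized services by their support degree.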
Abstract:
The thesis that entities exist in, at, or in relation to logically possible worlds is criticized. The suggestion that actually nonexistent fictional characters might nevertheless exist in nonactual merely logically possible worlds runs afoul of the most general transworld identity requirements. An influential philosophical argument for the concept of world-relativized existence is examined in Alvin Plantinga’s formal development and explanation of modal semantic relations. Despite proposing an attractive unified semantics of alethic modality, Plantinga’s argument is rejected on formal grounds as supporting materially false actual existence assertions in the case of actually nonexistent objects in the framework of Plantinga’s own underlying classical predicate-quantificational logic.
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: the content, service, and agent levels. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level.
This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web. Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web.
The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks. Also, a scraping ontology has been defined for the construction of mappings for scraping web resources. A novel first-order-logic rule induction algorithm is defined for the automated construction and maintenance of these mappings out of the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
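The content-discovery idea — rules that map pieces of an HTML representation onto semantic entities — can be sketched as follows; the rule format and property names are hypothetical illustrations, not the thesis's actual scraping ontology:

```python
from html.parser import HTMLParser

class RuleScraper(HTMLParser):
    """Apply content-discovery rules: each rule maps an (element, css class)
    pattern in a REST representation onto a semantic property."""
    def __init__(self, rules):
        super().__init__()
        self.rules = rules    # {(tag, class): semantic_property}
        self.active = []      # property of each currently open element
        self.entity = {}      # the semantically described content
    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        self.active.append(self.rules.get((tag, cls)))
    def handle_endtag(self, tag):
        if self.active:
            self.active.pop()
    def handle_data(self, data):
        if self.active and self.active[-1]:
            self.entity[self.active[-1]] = data.strip()

# Hypothetical rules mapping markup onto schema.org-style terms
rules = {("h1", "title"): "headline", ("span", "author"): "creator"}
html = '<div><h1 class="title">REST in Practice</h1><span class="author">Jane Doe</span></div>'
scraper = RuleScraper(rules)
scraper.feed(html)
```

After `feed`, `scraper.entity` holds the extracted data keyed by semantic property, ready to be serialized as RDF.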
Abstract:
The Answer Validation Exercise (AVE) is a pilot track within the Cross-Language Evaluation Forum (CLEF) 2006. The AVE competition provides an evaluation framework for answer validation in Question Answering (QA). For our participation in AVE, we propose a system that was initially used for another task, Recognising Textual Entailment (RTE). The aim of our participation is to evaluate the improvement our system brings to QA. Moreover, because these two tasks (AVE and RTE) share the same main idea, which is to find semantic implications between two fragments of text, our system could be applied directly to the AVE competition. Our system is based on the representation of the texts by means of logic forms and the computation of semantic comparisons between them. This comparison is carried out using two different approaches: the first relies on a deeper study of the WordNet relations, and the second uses the measure defined by Lin to compute the semantic similarity between the logic-form predicates. Moreover, we have also designed a voting strategy between our system and the MLEnt system, also presented by the University of Alicante, with the aim of obtaining a joint execution of the two systems developed at the University of Alicante. Although the results obtained have not been very high, we consider them quite promising, which supports the view that there is still much research to be done on all kinds of textual entailment.
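Lin's measure, used in the second approach, can be computed directly from information-content (IC) values: sim(c1, c2) = 2·IC(lcs) / (IC(c1) + IC(c2)), where lcs is the lowest common subsumer. A self-contained sketch with a hypothetical toy taxonomy and IC values (not the paper's WordNet setup):

```python
def lin_similarity(ic, hypernyms, c1, c2):
    """Lin's similarity: 2*IC(lcs) / (IC(c1) + IC(c2)), where the lcs is
    the most informative concept subsuming both c1 and c2."""
    common = [c for c in hypernyms[c1] if c in hypernyms[c2]]
    lcs = max(common, key=lambda c: ic[c])   # most informative shared ancestor
    return 2 * ic[lcs] / (ic[c1] + ic[c2])

# Hypothetical information-content values (IC = -log p(concept)) and
# ancestor chains for a toy taxonomy
ic = {"entity": 0.0, "animal": 1.5, "dog": 4.0, "cat": 3.8}
hypernyms = {"dog": ["dog", "animal", "entity"],
             "cat": ["cat", "animal", "entity"]}
```

In practice the IC values would be estimated from corpus frequencies over WordNet synsets rather than supplied by hand.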
Abstract:
"September 1985."
Abstract:
The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge. We consider a number of methods for measuring this ‘fit’ and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology.
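One simple corpus-driven notion of 'fit' can be sketched as lexical coverage plus explained probability mass; the function and toy data below are illustrative only, not the paper's actual measures:

```python
from collections import Counter

def corpus_fit(ontology_terms, corpus_tokens):
    """Data-driven 'fit' of an ontology against a corpus: (a) the fraction
    of ontology concepts attested in the corpus, and (b) the probability
    mass of the corpus that those concepts explain."""
    freq = Counter(corpus_tokens)
    covered = [t for t in ontology_terms if freq[t] > 0]
    coverage = len(covered) / len(ontology_terms)
    mass = sum(freq[t] for t in covered) / sum(freq.values())
    return coverage, mass

# Hypothetical ontology vocabulary and a toy domain corpus
ontology = ["ontology", "corpus", "semantics"]
tokens = "the ontology fits the corpus the corpus well".split()
coverage, mass = corpus_fit(ontology, tokens)
```

Comparing these scores across candidate ontologies gives a crude probabilistic basis for identifying the one that best fits the domain.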
Abstract:
In the context of the needs of the Semantic Web and Knowledge Management, we consider what the requirements are of ontologies. The ontology as an artifact of knowledge representation is in danger of becoming a Chimera. We present a series of facts concerning the foundations on which automated ontology construction must build. We discuss a number of different functions that an ontology seeks to fulfill, and also a wish list of ideal functions. Our objective is to stimulate discussion as to the real requirements of ontology engineering and take the view that only a selective and restricted set of requirements will enable the beast to fly.
Abstract:
The thrust of the argument presented in this chapter is that inter-municipal cooperation (IMC) in the United Kingdom reflects local government's constitutional position and its exposure to the exigencies of Westminster (elected central government) and Whitehall (centre of the professional civil service that services central government). For the most part, councils are without general powers of competence and are restricted in what they can do by Parliament. This suggests that the capacity for locally driven IMC is restricted and operates principally within a framework constructed by central government's policy objectives and legislation and the political expediencies of the governing political party. In practice, however, recent examples of IMC demonstrate that the practices are more complex than this initial analysis suggests. Central government may exert top-down pressures and impose hierarchical directives, but there are important countervailing forces. Constitutional changes in Scotland and Wales have shifted the locus of central-local relations away from Westminster and Whitehall. In England, the seeding of English government regional offices in 1994 has evolved into an important structural arrangement that encourages councils to work together. Within the local government community there is now widespread acknowledgement that to achieve the ambitious targets set by central government, councils are, by necessity, bound to cooperate and work with other agencies. In recent years, the fragmentation of public service delivery has affected the scope of IMC. Elected local government in the UK is now only one piece of a complex jigsaw of agencies that provides services to the public; whether it is with non-elected bodies, such as health authorities, public protection authorities (police and fire), voluntary nonprofit organisations or for-profit bodies, councils are expected to cooperate widely with agencies in their localities.
Indeed, for projects such as regeneration and community renewal, councils may act as the coordinating agency, but the success of such projects is measured by collaboration and partnership working (Davies 2002). To place these developments in context, IMC is an example of how, in spite of the fragmentation of traditional forms of government, councils work with other public service agencies and other councils through the medium of interagency partnerships, collaboration between organisations and a mixed economy of service providers. Such an analysis suggests that, following changes to the system of local government, contemporary forms of IMC are less dependent on vertical arrangements (top-down direction from central government) as these are replaced by horizontal modes (expansion of networks and partnership arrangements). Evidence suggests, however, that central government continues to steer local authorities through the agency of inspectorates and regulatory bodies, and through policy initiatives, such as local strategic partnerships and local area agreements (Kelly 2006), thus questioning whether, in the case of UK local government, the shift from hierarchy to network and market solutions is less differentiated and transformation less complete than some literature suggests. Vertical or horizontal pressures may promote IMC, yet similar drivers may deter collaboration between local authorities. An example of negative vertical pressure was central government's change of the systems of local taxation during the 1980s. The new taxation regime replaced a tax on property with a tax on individual residency. Although the community charge lasted only a few years, it was a high point of the then Conservative government's policy of encouraging councils to compete with each other on the basis of the level of local taxation.
In practice, however, the complexity of local government funding in the UK rendered worthless any meaningful ambition of councils competing with each other, especially as central government granting to local authorities is predicated (however imperfectly) on at least notional equalisation between those areas with lower tax yields and the more prosperous locations. Horizontal pressures comprise factors such as planning decisions. Over the last quarter century, councils have competed on the granting of permission to out-of-town retail and leisure complexes, now recognised as detrimental to neighbouring authorities because economic forces prevail and local, independent shops are unable to compete with multiple companies. These examples illustrate tensions at the core of the UK polity over whether IMC is feasible when competition between local authorities, heightened by local differences, reduces opportunities for collaboration. An alternative perspective on IMC is to explore whether specific purposes or functions promote or restrict it. Whether in the principal areas of local government responsibilities relating to social welfare, development and maintenance of the local infrastructure or environmental matters, there are examples of IMC. But opportunities have diminished considerably as councils lost responsibility for service provision as a result of privatisation and transfer of powers to new government agencies or to central government. Over the last twenty years councils have lost their role in the provision of further or higher education, public transport and water/sewage. Councils have commissioning power but only a limited presence in providing housing needs, social care and waste management. In other words, as a result of central government policy, there are, in practice, currently far fewer opportunities for councils to cooperate.
Since 1997, the New Labour government has promoted IMC through vertical drivers and policy development; the operation of these policy initiatives is discussed following the framework of the editors. Current examples of IMC are notable for being driven by higher tiers of government, working with subordinate authorities in principal-agent relations. Collaboration between local authorities and intra-, inter- and cross-sectoral partnerships are initiated by central government. In other words, IMC is shaped by hierarchical drivers from higher levels of government but, in practice, is locally varied and determined less by formula than by necessity and function. © 2007 Springer.
Abstract:
Increasingly, people's digital identities are attached to, and expressed through, their mobile devices. At the same time, digital sensors pervade smart environments in which people are immersed. This paper explores different perspectives in which users' modelling features can be expressed through the information obtained by their attached personal sensors. We introduce the PreSense Ontology, which is designed to assign meaning to sensors' observations in terms of user modelling features. We believe that the Sensing Presence (PreSense) Ontology is a first step toward the integration of user modelling and "smart environments". In order to motivate our work we present a scenario and demonstrate how the ontology could be applied in order to enable context-sensitive services. © 2012 Springer-Verlag.
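The core idea — assigning user-modelling meaning to raw sensor observations — can be sketched as a declarative mapping; the sensors, feature names, and thresholds below are hypothetical, not taken from the PreSense Ontology:

```python
def observations_to_profile(observations, mappings):
    """Interpret raw sensor observations as user-modelling features,
    via declarative sensor -> (feature, interpretation) mappings."""
    profile = []
    for sensor, value in observations:
        if sensor in mappings:
            feature, interpret = mappings[sensor]
            profile.append((feature, interpret(value)))
    return profile

# Hypothetical mappings from personal sensors to user-model features
mappings = {
    "gps": ("user:location",
            lambda v: "at_work" if v == (52.52, 13.41) else "elsewhere"),
    "accelerometer": ("user:activity",
                      lambda v: "walking" if v > 1.2 else "still"),
}
observations = [("gps", (52.52, 13.41)), ("accelerometer", 0.4)]
profile = observations_to_profile(observations, mappings)
```

A context-sensitive service would then query the derived features (location, activity) rather than the raw sensor streams.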
Abstract:
This paper proposes an ontology-based approach to the representation of courseware knowledge in different domains. The focus is on a three-level semantic graph, modeling respectively the course as a whole, its structure, and the domain contents itself. The authors plan to use this representation for flexible e-learning and the generation of different study plans for learners.
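The three-level graph can be sketched as nested dictionaries, together with a toy study-plan generator; all identifiers below are hypothetical:

```python
# Level 1: the course as a whole; level 2: its structure (modules);
# level 3: the domain concepts each module covers
courseware = {
    "course": {"id": "SW101", "title": "Semantic Web Basics"},
    "structure": {"SW101": ["m1", "m2"]},
    "domain": {"m1": ["RDF", "triples"], "m2": ["OWL", "reasoning"]},
}

def study_plan(graph, wanted_concepts):
    """A toy study-plan generator: keep the modules, in course order,
    that cover at least one of the learner's target concepts."""
    modules = graph["structure"][graph["course"]["id"]]
    return [m for m in modules
            if set(graph["domain"][m]) & set(wanted_concepts)]
```

Different learners supply different concept sets and thus obtain different plans over the same three-level graph.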
Abstract:
The paper presents an approach to the extraction of facts from the texts of documents. The approach is based on using knowledge about the subject domain, a specialized dictionary, and fact schemes that describe fact structures, taking into consideration both the semantic and syntactic compatibility of the elements of facts. The extracted facts combine into one structure the dictionary's lexical objects found in the text and match them against concepts of the subject-domain ontology.
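A minimal sketch of scheme-based fact extraction: lexical objects from the dictionary found in the text fill the slots of a fact scheme whose semantic classes they match. The dictionary, scheme, and matching policy are illustrative only, not the paper's method:

```python
def extract_facts(text, dictionary, schemes):
    """Find dictionary lexical objects in the text, then fill each fact
    scheme whose slot classes the found objects satisfy."""
    found = [(obj, cls) for obj, cls in dictionary.items() if obj in text]
    facts = []
    for scheme, slots in schemes.items():
        binding = {}
        for slot, required in slots.items():
            for obj, cls in found:
                if cls == required and obj not in binding.values():
                    binding[slot] = obj
                    break
        if len(binding) == len(slots):   # every slot filled -> a fact
            facts.append((scheme, binding))
    return facts

# Hypothetical dictionary entries (lexical object -> ontology concept)
# and one fact scheme describing a fact structure
dictionary = {"Acme": "Company", "Berlin": "City"}
schemes = {"LocatedIn": {"subject": "Company", "place": "City"}}
text = "Acme opened a new office in Berlin."
```

A full implementation would also check syntactic compatibility (e.g. word order and case) before accepting a binding.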
Abstract:
Policies and actions that come from higher scale structures, such as international bodies and national governments, are not always compatible with the realities and perspectives of smaller scale units including indigenous communities. Yet, it is at this local social-ecological scale that mechanisms and solutions for dealing with unpredictability and change can be increasingly seen emerging from across the world. Although there is a large body of knowledge specifying the conditions necessary to promote local governance of natural resources, there is a parallel need to develop practical methods for operationalizing the evaluation of local social-ecological systems. In this paper, we report on a systemic, participatory, and visual approach for engaging local communities in an exploration of their own social-ecological system. Working with indigenous communities of the North Rupununi, Guyana, this involved using participatory video and photography within a system viability framework to enable local participants to analyze their own situation by defining indicators of successful strategies that were meaningful to them. Participatory multicriteria analysis was then used to arrive at a short list of best practice strategies. We present six best practices and show how they are intimately linked through the themes of indigenous knowledge, local governance and values, and partnerships and networks. We highlight how developing shared narratives of community owned solutions can help communities to plan governance and management of land and resource systems, while reinforcing sustainable practices by discussing and showcasing them within communities, and by engendering a sense of pride in local solutions.
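The participatory multicriteria step can be illustrated as a weighted aggregation of community-chosen indicator scores; the criteria, weights, and strategies below are hypothetical, not the study's data:

```python
def multicriteria_rank(strategies, weights):
    """Multicriteria analysis as a weighted sum: indicator scores chosen
    by the community are aggregated to short-list best-practice strategies."""
    ranked = sorted(
        strategies.items(),
        key=lambda kv: sum(weights[c] * s for c, s in kv[1].items()),
        reverse=True)
    return [name for name, _ in ranked]

# Hypothetical indicator scores (0-5) elicited in community workshops
weights = {"governance": 0.4, "knowledge": 0.35, "partnerships": 0.25}
strategies = {
    "community_fisheries_plan": {"governance": 5, "knowledge": 4, "partnerships": 3},
    "eco_tourism_network": {"governance": 3, "knowledge": 3, "partnerships": 5},
}
```

In the participatory setting, the weights themselves would be negotiated with the community rather than fixed by analysts.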