964 results for semantic interoperability
Abstract:
Models at runtime can be defined as abstract representations of a system, including its structure and behaviour, which exist in tandem with that system during its execution. Furthermore, these models should be causally connected to the system being modelled, offering a reflective capability. Significant advances have been made in recent years in applying this concept, most notably in adaptive systems. In this paper, we argue that a similar approach can also be used to support the dynamic generation of software artefacts at execution time. An important area where this is relevant is the generation of software mediators to tackle the crucial problem of interoperability in distributed systems. We refer to this approach as emergent middleware, a fundamentally new way of resolving interoperability problems in today's complex distributed systems. In this context, the runtime models capture meta-information about the underlying networked systems that need to interoperate, including their interfaces and additional knowledge about their associated behaviour, supplemented by ontological information to enable semantic reasoning. This paper focuses on this novel use of models at runtime, examining in detail the nature of such runtime models together with the supporting algorithms and tools that extract this knowledge and use it to synthesise the appropriate emergent middleware.
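As an illustration of the kind of meta-information such a runtime model might carry, the sketch below pairs the interface operations of two networked systems by comparing their ontology annotations. All names (RuntimeModel, Operation, the concept labels) are hypothetical assumptions standing in for the paper's actual model format and reasoning machinery, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class Operation:
    """One interface operation of a networked system, annotated with ontology concepts."""
    name: str
    input_concepts: frozenset   # ontology concepts describing the inputs
    output_concepts: frozenset  # ontology concepts describing the outputs

@dataclass
class RuntimeModel:
    """Runtime model of a networked system: its interface plus behavioural metadata."""
    system_name: str
    operations: list = field(default_factory=list)

def candidate_mappings(client: RuntimeModel, service: RuntimeModel):
    """Pair client operations with service operations whose ontology annotations
    are compatible (the client supplies what the service needs, and the service
    produces what the client expects). This stands in for the semantic-reasoning
    step that would drive mediator synthesis."""
    pairs = []
    for c in client.operations:
        for s in service.operations:
            if c.input_concepts >= s.input_concepts and s.output_concepts >= c.output_concepts:
                pairs.append((c.name, s.name))
    return pairs

# Hypothetical example: two photo-sharing systems described against a shared ontology.
client = RuntimeModel("ClientA", [Operation("getPhoto", frozenset({"PhotoID"}), frozenset({"Image"}))])
service = RuntimeModel("ServiceB", [Operation("fetchImage", frozenset({"PhotoID"}), frozenset({"Image", "Metadata"}))])
print(candidate_mappings(client, service))   # [('getPhoto', 'fetchImage')]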
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, the Semantic Web has no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information impractical, particularly where error must be considered and captured as it propagates through a processing sequence. We adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
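To make the soft-typing idea concrete, here is a minimal sketch that builds a Gaussian-distribution element whose meaning comes from a 'definition' URI rather than from a dedicated XML type. The namespace, element names and dictionary URI are placeholders invented for illustration and are not the normative UncertML schema.

import xml.etree.ElementTree as ET

# Illustrative namespace and dictionary URI (placeholders, not the real UncertML definitions).
UN_NS = "http://example.org/uncertml"
GAUSSIAN_DEF = "http://example.org/uncertml/dictionary/distributions/gaussian"

ET.register_namespace("un", UN_NS)

# A soft-typed element: the generic <Distribution> carries a 'definition' URI
# that points at a dictionary entry describing what the parameters mean.
dist = ET.Element(f"{{{UN_NS}}}Distribution", attrib={"definition": GAUSSIAN_DEF})

for name, value in (("mean", "12.4"), ("variance", "2.25")):
    p = ET.SubElement(dist, f"{{{UN_NS}}}parameter", attrib={"name": name})
    p.text = value

print(ET.tostring(dist, encoding="unicode"))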
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of the Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared to a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. The application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritance and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed. A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and it has several advantages over existing interfaces. It is optimizable and parallelizable, supports the definition of semantic user views, and enables the interoperability of semantic databases with other data sources such as the World Wide Web, relational, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases. Finally, the analysis and high-level design of a system is presented that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use, allowing uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
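A minimal sketch of the graph-structured query idea follows: a query is a small schema graph whose nodes carry conditions and whose edges are relations, and its result is itself a mini-database of facts. The store layout, variable names and helper functions below are hypothetical and only illustrate the concept; they are not Sem-ODB's actual interface.

# A toy semantic store: facts are (subject, relation, value) triples.
facts = [
    ("s1", "isa", "Student"), ("s1", "name", "Ana"), ("s1", "enrolledIn", "c1"),
    ("c1", "isa", "Course"),  ("c1", "title", "Databases"), ("c1", "credits", 4),
    ("s2", "isa", "Student"), ("s2", "name", "Bruno"), ("s2", "enrolledIn", "c2"),
    ("c2", "isa", "Course"),  ("c2", "title", "Logic"), ("c2", "credits", 2),
]

def attrs(subject):
    """Collect all relation/value pairs of one object."""
    return {r: v for s, r, v in facts if s == subject}

# Query expressed as a schema graph: nodes are categories with conditions,
# edges are relations that must link the matching objects.
query = {
    "nodes": {"x": ("Student", lambda a: True),
              "y": ("Course", lambda a: a.get("credits", 0) >= 3)},
    "edges": [("x", "enrolledIn", "y")],
}

def run(query):
    """Return the matching sub-database (a mini-database of facts)."""
    members = {var: [s for s, r, v in facts if r == "isa" and v == cat and cond(attrs(s))]
               for var, (cat, cond) in query["nodes"].items()}
    result = []
    for a, rel, b in query["edges"]:
        for s in members[a]:
            for t in members[b]:
                if (s, rel, t) in facts:
                    result.extend((o, r, v) for o, r, v in facts if o in (s, t))
    return result

print(run(query))  # facts about Ana and the Databases course only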
Abstract:
The Semantic Binary Data Model (SBM) is a viable alternative to the now-dominant relational data model. SBM would be especially advantageous for applications dealing with complex interrelated networks of objects, provided that a robust, efficient implementation can be achieved. This dissertation presents an implementation design method for SBM, along with algorithms and their analytical and empirical evaluation. Our method allows building a robust and flexible database engine with a wider applicability range and improved performance. Extensions to SBM are introduced, and an implementation of these extensions is proposed that allows the database engine to efficiently support applications with a predefined set of queries. A new Record data structure is proposed. Trade-offs of employing Fact, Record and Bitmap data structures for storing information in a semantic database are analyzed. A clustering ID distribution algorithm and an efficient algorithm for object ID encoding are proposed. Mapping to an XML data model is analyzed, and a new XML-based XSDL language facilitating interoperability of the system is defined. Solutions to issues associated with making the database engine multi-platform are presented. An improvement to the atomic update algorithm, suitable for certain database recovery scenarios, is proposed. Specific guidelines are devised for implementing a robust and well-performing database engine based on the extended Semantic Data Model.
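As a rough illustration of what object ID encoding with clustering might look like, the sketch below packs a category code into the high bits of each ID so that objects of one category occupy a contiguous ID range. The bit layout and function names are assumptions made for illustration and do not reproduce the dissertation's algorithms.

SEQ_BITS = 48  # hypothetical split: high bits hold the category code, low bits a sequence number

def encode_id(category_code: int, seq: int) -> int:
    """Pack a category code and a per-category sequence number into one object ID,
    so IDs of the same category form a contiguous range (useful for clustering
    related objects and for range scans over an ordered index of facts)."""
    return (category_code << SEQ_BITS) | (seq & ((1 << SEQ_BITS) - 1))

def category_of(object_id: int) -> int:
    """Recover the category code from an encoded object ID."""
    return object_id >> SEQ_BITS

# All objects of category 7 fall into one ID range, so a range scan over
# [7 << SEQ_BITS, 8 << SEQ_BITS) touches exactly that category.
ids = [encode_id(7, n) for n in range(3)]
assert all(category_of(i) == 7 for i in ids)
print([hex(i) for i in ids])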
Abstract:
To facilitate subject access interoperability, a mechanism must be built that allows different controlled vocabularies to communicate meaning, relationships, and levels of extension and intension, so that different user groups using different controlled vocabularies can access collections across the network. Switching languages, the traditional tools of controlled vocabulary compatibility, consist of a single layer that does not allow flexible control of the semantic levels of meaning, relationships, and extension or intension. This paper proposes a multilayered conceptual framework wherein the levels of meaning, relationships, and extension and intension are each controlled as individual parameters, rather than within a single switching language.
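A minimal sketch of the multilayered idea, assuming a hypothetical CrossVocabularyLink structure: each semantic level (meaning, relationship type, extension, intension) is an independent parameter on a cross-vocabulary link, so a retrieval application can tighten or relax each layer separately instead of relying on one monolithic switching language. Names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class CrossVocabularyLink:
    """A link between terms in two controlled vocabularies, with each semantic
    level controlled as its own parameter rather than by one switching layer."""
    source_term: str          # term in vocabulary A
    target_term: str          # term in vocabulary B
    meaning_overlap: float    # 0..1, how much of the concept's meaning is shared
    relationship: str         # e.g. "equivalent", "broader", "narrower", "related"
    extension_ratio: float    # relative size of the class of things denoted
    intension_depth: int      # number of defining characteristics carried over

def usable_for_retrieval(link: CrossVocabularyLink,
                         min_meaning: float = 0.8,
                         allowed: tuple = ("equivalent", "narrower")) -> bool:
    """Decide whether a link is precise enough for a given retrieval task;
    each layer can be tightened or relaxed independently."""
    return link.meaning_overlap >= min_meaning and link.relationship in allowed

link = CrossVocabularyLink("Myocardial Infarction", "Heart Attack",
                           meaning_overlap=0.95, relationship="equivalent",
                           extension_ratio=1.0, intension_depth=3)
print(usable_for_retrieval(link))  # True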
Abstract:
Due to both the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems are increasingly in demand. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was tested on document image repositories, and its performance was evaluated by comparing the quality of the relationships created among textual documents with those created among their respective document images. Considering those same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of common OCR misrecognition, which reinforces the feasibility of LinkDI even when relating highly degraded OCR output.
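For readers unfamiliar with the LSI step, the sketch below shows one conventional way to compute latent-semantic similarities over OCR output with scikit-learn and to turn high-similarity pairs into hyperlinks. The corpus, component count and threshold are illustrative assumptions; this is not LinkDI's actual implementation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# OCR output of each document image (tiny illustrative corpus).
ocr_texts = [
    "semantic indexing of scanned contracts and invoices",
    "invoices and contracts scanned for the archive",
    "weather report for the coastal region",
    "coastal weather and tide report",
]

# Latent Semantic Indexing: TF-IDF followed by a truncated SVD projection.
tfidf = TfidfVectorizer().fit_transform(ocr_texts)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Link any two document images whose LSI vectors are sufficiently similar.
sim = cosine_similarity(lsi)
THRESHOLD = 0.9  # illustrative cut-off
links = [(i, j) for i in range(len(ocr_texts))
                for j in range(i + 1, len(ocr_texts))
                if sim[i, j] >= THRESHOLD]
print(links)  # expected to pair the two "contracts" documents and the two "weather" documents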
Abstract:
This paper presents a new proposal for the definition of the fundamental query operation under the Adaptive Formalism, one capable of locating functional nuclei from descriptions of their semantics. To demonstrate the method's applicability, an implementation of the query procedure constrained to a specific class of devices is shown, and its asymptotic computational complexity is discussed.
Abstract:
This paper explains and explores the concept of "semantic molecules" in the NSM methodology of semantic analysis. A semantic molecule is a complex lexical meaning which functions as an intermediate unit in the structure of other, more complex concepts. The paper undertakes an overview of different kinds of semantic molecule, showing how they enter into more complex meanings and how they themselves can be explicated. It shows that four levels of "nesting" of molecules within molecules are attested, and it argues that while some molecules, such as 'hands' and 'make', may well be language-universal, many others are language-specific.
Abstract:
Ecological interface design (EID) is proving to be a promising approach to the design of interfaces for complex dynamic systems. Although the principles of EID and examples of its effective use are widely available, few readily available examples exist of how the individual displays that constitute an ecological interface are developed. This paper presents the semantic mapping process within EID in the context of prior theoretical work in this area. The semantic mapping process that was used in developing an ecological interface for the Pasteurizer II microworld is outlined, and the results of an evaluation of the ecological interface against a more conventional interface are briefly presented. Subjective reports indicate features of the ecological interface that made it particularly valuable for participants. Finally, we outline the steps of an analytic process for using EID. The findings presented here can be applied in the design of ecological interfaces or of configural displays for dynamic processes.
Abstract:
The aim of this study was to analyze semantic and episodic memory deficits in children with mesial temporal sclerosis (MTS) and their correlation with clinical epilepsy variables. For this purpose, 19 consecutive children and adolescents with MTS (8 to 16 years old) were evaluated and their performance on five episodic memory tests (short- and long-term memory and learning) and four semantic memory tests was compared with that of 28 healthy volunteers. Patients performed worse on tests of immediate and delayed verbal episodic memory, visual episodic memory, verbal and visual learning, mental scanning for semantic clues, object naming, word definition, and repetition of sentences. Clinical variables such as early age at seizure onset, severity of epilepsy, and polytherapy impaired distinct types of memory. These data confirm that children with MTS have episodic memory deficits and add new information on semantic memory. The data also demonstrate that clinical variables contribute differently to episodic and semantic memory performance.
Abstract:
Episodic memory impairment is a well-recognized feature of mesial temporal lobe epilepsy. Semantic memory has received much less attention in this patient population. In this study, semantic memory aspects (word-picture matching, word definition, confrontation and responsive naming, and word list generation) in 19 patients with left or right temporal lobe epilepsy secondary to mesial temporal sclerosis (LMTS or RMTS) were compared with those of normal controls. Patients with LMTS showed impaired performance in word definition (compared to controls and RMTS patients) and in responsive naming (compared to controls). RMTS and LMTS patients performed worse than controls in word-picture matching. Both patients with left and right mesial temporal lobe epilepsy performed worse than controls in word list generation and in confrontation naming tests. Attentional-executive dysfunction may have contributed to these deficits. We conclude that patients with left and right MTS display impaired aspects of semantic knowledge. A better understanding of semantic processing difficulties in these patients will provide better insight into their difficulties with activities of daily living.
From medical librarian to informationist: semantic traces of their profiles and areas of practice
Abstract:
This literature review revisits the discussion of the profiles and competences of information professionals, specifically in the health field. The aim is to outline the new fields of practice for the informationist and the profiles required in the health context.