61 results for Semantic package


Relevance: 20.00%

Abstract:

The W3C Semantic Sensor Network Incubator group (the SSN-XG) produced an OWL 2 ontology to describe sensors and observations: the SSN ontology, available at http://purl.oclc.org/NET/ssnx/ssn. The SSN ontology can describe sensors in terms of capabilities, measurement processes, observations and deployments. This article describes the SSN ontology. It further gives an example and describes the use of the ontology in recent research projects.
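As an illustration of the kind of description the SSN ontology enables, the following sketch (not taken from the article; the sensor and observation URIs are hypothetical, and rdflib is assumed to be installed) builds a tiny graph in which a sensor and one of its observations are typed against the SSN namespace:

from rdflib import Graph, Namespace, RDF

SSN = Namespace("http://purl.oclc.org/NET/ssnx/ssn#")
EX = Namespace("http://example.org/deployment#")

g = Graph()
g.bind("ssn", SSN)

# Hypothetical sensor instance and one observation it produced
g.add((EX.thermometer1, RDF.type, SSN.Sensor))
g.add((EX.obs1, RDF.type, SSN.Observation))
g.add((EX.obs1, SSN.observedBy, EX.thermometer1))
g.add((EX.obs1, SSN.observedProperty, EX.airTemperature))

print(g.serialize(format="turtle"))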

Relevance: 20.00%

Abstract:

The LifeWear-Mobilized Lifestyle with Wearables (LifeWear) project attempts to create Ambient Intelligence (AmI) ecosystems by composing personalized services based on user information, environmental conditions and reasoning outputs. Two of the most important benefits over traditional environments are 1) taking advantage of wearable devices to obtain user information in a non-intrusive way and 2) integrating this information with other intelligent services and environmental sensors. This paper proposes a new ontology, built by integrating user and service information, to represent this information semantically. Using an Enterprise Service Bus, this ontology is integrated in a semantic middleware to provide context-aware, personalized and semantically annotated services, with discovery, composition and orchestration tasks. We show how these services support a real scenario proposed in the LifeWear project.

Relevance: 20.00%

Abstract:

This work describes a semantic extension for a user-smart object interaction model based on the ECA (Event-Condition-Action) paradigm. In this approach, smart objects publish their sensing (event) and action capabilities in the cloud, and mobile devices retrieve them and act as mediators to configure personalized behaviours for the objects. In this paper, the information handled by this interaction system has been shaped according to several semantic models that, together with the integration of an embedded ontological and rule-based reasoner, are exploited in order to (i) automatically detect incompatible ECA rule configurations and (ii) support the definition and execution of complex ECA rules. This semantic extension may significantly improve the management of smart spaces populated with numerous smart objects from mobile personal devices, as it facilitates the configuration of coherent ECA rules.
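To make the incompatibility-detection idea concrete, here is a minimal sketch, not the paper's ontological reasoner, in which two ECA rules are flagged as conflicting because the same event and condition would trigger mutually exclusive actions on the same smart object (the rule fields and action names are hypothetical):

from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class ECARule:
    event: str       # e.g. "presenceDetected"
    condition: str   # e.g. "luminosity < 100"
    target: str      # smart object acted upon
    action: str      # e.g. "turnOn" / "turnOff"

# Hypothetical pairs of mutually exclusive actions
CONFLICTING = {frozenset({"turnOn", "turnOff"}), frozenset({"open", "close"})}

def incompatible(r1: ECARule, r2: ECARule) -> bool:
    # Two rules conflict if identical triggers request opposite actions on one object
    return (r1.event == r2.event and r1.condition == r2.condition
            and r1.target == r2.target
            and frozenset({r1.action, r2.action}) in CONFLICTING)

rules = [
    ECARule("presenceDetected", "luminosity < 100", "lamp1", "turnOn"),
    ECARule("presenceDetected", "luminosity < 100", "lamp1", "turnOff"),
]
for a, b in combinations(rules, 2):
    if incompatible(a, b):
        print("Conflict:", a, "vs", b)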

Relevance: 20.00%

Abstract:

Many attempts have been made to provide multilinguality to the Semantic Web by means of annotation properties in Natural Language (NL), such as RDFS or SKOS labels, and other lexicon-ontology models, such as lemon, but there are still many issues to be solved if we want to have a truly accessible Multilingual Semantic Web (MSW). Reusability of monolingual resources (ontologies, lexicons, etc.), accessibility of multilingual resources hindered by many formats, reliability of ontological sources, disambiguation problems, and the multilingual presentation of all this information to the end user in NL can be mentioned as some of the most relevant problems. Unless this NL presentation is achieved, the MSW will remain restricted to IT experts, and even then with great dissatisfaction and disenchantment.
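For reference, attaching NL annotation labels in several languages to an ontology term, one of the annotation-based approaches mentioned above, can be as simple as the following sketch (the concept URI is hypothetical; rdflib is assumed to be installed):

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("skos", SKOS)

# Language-tagged preferred labels for one hypothetical concept
g.add((EX.River, SKOS.prefLabel, Literal("river", lang="en")))
g.add((EX.River, SKOS.prefLabel, Literal("río", lang="es")))
g.add((EX.River, SKOS.prefLabel, Literal("Fluss", lang="de")))

print(g.serialize(format="turtle"))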

Relevance: 20.00%

Abstract:

This paper introduces a semantic language developed with the objective of being used in a semantic analyzer based on linguistic and world knowledge. Linguistic knowledge is provided by a Combinatorial Dictionary and several sets of rules. Extra-linguistic information is stored in an Ontology. The meaning of the text is represented by means of a series of RDF-type triples of the form predicate (subject, object). The semantic analyzer is one of the options of the multifunctional ETAP-3 linguistic processor and can be used for Information Extraction and Question Answering. We describe the semantic representation of expressions that provide an assessment of the number of objects involved and/or give a quantitative evaluation of different types of attributes. We focus on the following aspects: 1) parametric and non-parametric attributes; 2) gradable and non-gradable attributes; 3) ontological representation of different classes of attributes; 4) absolute and relative quantitative assessment; 5) punctual and interval quantitative assessment; 6) intervals with precise and fuzzy boundaries.
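As a purely illustrative sketch of the triple notation (the predicate inventory below is hypothetical and does not reproduce ETAP-3's actual one), a sentence such as "John bought three books for 10 to 15 euros" could be rendered as predicate (subject, object) triples that include a punctual cardinality assessment and an interval price assessment:

# Each entry is (predicate, subject, object); values are illustrative only
triples = [
    ("agent",          "buy_situation_1", "John"),
    ("object",         "buy_situation_1", "book_set_1"),
    ("cardinality",    "book_set_1",      3),          # punctual assessment
    ("price_interval", "book_set_1",      (10, 15)),   # interval assessment
]

for predicate, subject, obj in triples:
    print(f"{predicate}({subject}, {obj})")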

Relevance: 20.00%

Abstract:

Sensor network deployments have become a primary source of big data about the real world that surrounds us, measuring a wide range of physical properties in real time. With such large amounts of heterogeneous data, a key challenge is to describe and annotate sensor data with high-level metadata, using and extending models such as ontologies. To automate this task, however, the sensor metadata needs to be enriched from the actual observed measurements, extracting useful meta-information from them. This paper proposes a novel approach to the characterization and extraction of semantic metadata through the analysis of raw sensor observations. The approach consists in using approximations of the raw sensor measurements, based on distributions of the observation slopes, to build a classification scheme that automatically infers sensor metadata such as the type of observed property, and in integrating the semantic analysis results with existing sensor network metadata.
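The following sketch illustrates the general idea, though not the paper's exact features or classifier: summarize a raw observation series by the distribution of its slopes and assign the nearest known signature as the inferred observed property (the reference signatures and property names are hypothetical):

import numpy as np

def slope_features(values, timestamps):
    # Summarize the distribution of first-difference slopes of the series
    slopes = np.diff(values) / np.diff(timestamps)
    return np.array([slopes.mean(), slopes.std(), slopes.min(), slopes.max()])

# Hypothetical reference signatures for two observed property types
t = np.arange(5, dtype=float)
signatures = {
    "air_temperature": slope_features(np.array([20.0, 20.1, 20.3, 20.2, 20.4]), t),
    "wind_speed":      slope_features(np.array([3.0, 5.5, 1.2, 6.8, 2.1]), t),
}

def infer_property(values, timestamps):
    # Nearest-signature classification of an unlabeled series
    feats = slope_features(values, timestamps)
    return min(signatures, key=lambda name: np.linalg.norm(signatures[name] - feats))

print(infer_property(np.array([15.0, 15.2, 15.1, 15.3, 15.2]), t))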

Relevance: 20.00%

Abstract:

When users face a problem that requires a product, service, or action to solve it, selecting the best alternative among them can be a difficult task due to the uncertainty about their quality. This is especially the case in domains where users lack expertise, such as Software Engineering. Multiple criteria decision making (MCDM) methods help make better decisions when facing the complex problem of selecting the best solution among a group of alternatives that can be compared according to different conflicting criteria. In MCDM problems, alternatives represent concrete products, services or actions that will help in achieving a goal, while criteria represent the characteristics of these alternatives that are important for making a decision.
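A weighted-sum ranking is one of the simplest MCDM methods; the sketch below (with hypothetical alternatives, criteria, scores and weights) shows how normalized criterion scores and weights combine into a single ranking:

import numpy as np

alternatives = ["tool A", "tool B", "tool C"]
criteria =     ["quality", "cost", "support"]   # cost is a "lower is better" criterion
weights =      np.array([0.5, 0.3, 0.2])

# Raw scores: rows = alternatives, columns = criteria (hypothetical values)
scores = np.array([
    [8.0, 200.0, 6.0],
    [6.0, 120.0, 9.0],
    [9.0, 300.0, 5.0],
])

# Normalize each criterion to [0, 1]; invert the cost column so higher is better
norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
norm[:, 1] = 1.0 - norm[:, 1]

ranking = norm @ weights
best = alternatives[int(np.argmax(ranking))]
print(dict(zip(alternatives, ranking.round(3))), "->", best)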

Relevance: 20.00%

Abstract:

This paper describes the main goals and outcomes of the EU-funded Framework 7 project entitled Semantic Evaluation at Large Scale (SEALS). The growth and success of the Semantic Web are built upon a wide range of semantic technologies, from ontology engineering tools through to semantic web service discovery and semantic search. The evaluation of such technologies, and indeed the assessment of their mutual compatibility, is critical for their sustained improvement and adoption. The SEALS project is creating an open and sustainable platform on which all aspects of an evaluation can be hosted and executed, and has been designed to accommodate most technology types. It is envisaged that the platform will become the de facto repository of test datasets and will allow anyone to organise, execute and store the results of technology evaluations free of charge and without corporate bias. The demonstration will show how individual tools can be prepared for evaluation, uploaded to the platform, evaluated according to some criteria and the subsequent results viewed. In addition, the demonstration will show the flexibility and power of the SEALS Platform for evaluation organisers by highlighting some of the key technologies used.

Relevance: 20.00%

Abstract:

This paper shows the influence of the semantic content of urban sounds on the subjective evaluation of outdoor spaces. The study is based on an analysis conducted in three neighboring and integrated urban spaces, each with a different form of social ownership, in the city of Cordoba, Argentina. It shows that the type of sound source present at each site influences, through its semantic content, the users' identification with and permanence in the place. The noise present in a soundscape can carry a high semantic content, so that a sound has a particular meaning for the perceiver. Each social group influences the production of its own sounds and how they are perceived. This allows sound to be considered one of the factors that define the sense of "place" or "no place" of a certain urban space. Evidently, sounds, and their ability to evoke and characterize the environment, cannot be ignored in the construction and recovery of anthropological sites. This urban culture is unique and specific to every society. Public spaces, with their soundscapes, are part of the construction of the urban identity of a city. It is shown that, for identical overall sound levels in each of the spaces, the level of annoyance or discomfort, in relation to the subjective acoustic quality, is different. This is the result of the influence of the semantic content of the sounds present in each urban space. In line with other similar research, the level of discomfort or annoyance decreases as the presence of natural sounds such as water, the wind in the trees or birdsong increases, even when the objective noise levels of the natural sounds are higher.

Relevance: 20.00%

Abstract:

The overall objective of this research project is to enrich geographic data with temporal and semantic components in order to significantly improve the spatio-temporal analysis of geographic phenomena. To achieve this goal, we intend to establish and incorporate three new layers (structures) into the core of Geographic Information by using mark-up languages, as well as to define a set of methods and tools that enable the system to retrieve and exploit such layers (semantic-temporal, geosemantic, and incremental spatio-temporal). Besides these layers, we also propose a set of models (temporal and spatial) and two semantic engines that make the most of the enriched geographic data. The roots of the project and its definition were previously presented in Siabato & Manso-Callejo 2011. In this new position paper, we extend that work by clearly delineating the methodology and the foundations on which we will build to define the main components of this research: the spatial model, the temporal model, the semantic layers, and the semantic engines. By putting together the former paper and this new work, we aim to present a comprehensive description of the whole process, from pinpointing the basic problem to describing and assessing the solution. In this new article we only outline the methods and the background needed to describe how we intend to define the components and integrate them into the GI.

Relevance: 20.00%

Abstract:

The creation of language resources is a time-consuming process requiring the efforts of many people. The use of resources collaboratively created by non-linguists can potentially ameliorate this situation. However, such resources often contain more errors than resources created by experts. For the particular case of lexica, we analyse Wiktionary, a resource created along wiki principles, and argue that through the use of a principled lexicon model, namely lemon, the resulting data could be made more understandable to machines. We then present a platform called lemon source that supports the creation of linked lexical data along the lemon model. This tool builds on the concept of a semantic wiki to enable collaborative editing of the resources by many users concurrently. In this paper, we describe the model, present the tool and report an evaluation of its usability based on a small group of users.
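As an illustration of the kind of linked lexical data the lemon model describes (the lexicon URIs below are hypothetical; rdflib is assumed to be installed), a single entry linking a written form to an ontology reference can be sketched as follows:

from rdflib import Graph, Namespace, Literal, RDF, URIRef

LEMON = Namespace("http://lemon-model.net/lemon#")
EX = Namespace("http://example.org/lexicon#")
g = Graph()
g.bind("lemon", LEMON)

# Hypothetical entry "cat" with a canonical form and a sense pointing to DBpedia
g.add((EX.cat, RDF.type, LEMON.LexicalEntry))
g.add((EX.cat, LEMON.canonicalForm, EX.cat_form))
g.add((EX.cat_form, LEMON.writtenRep, Literal("cat", lang="en")))
g.add((EX.cat, LEMON.sense, EX.cat_sense))
g.add((EX.cat_sense, LEMON.reference, URIRef("http://dbpedia.org/resource/Cat")))

print(g.serialize(format="turtle"))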

Relevance: 20.00%

Abstract:

This paper proposes a methodology for developing a speech into sign language translation system considering a user-centered strategy. This methodology consists of four main steps: analysis of technical and user requirements, data collection, technology adaptation to the new domain, and finally, evaluation of the system. The two most demanding tasks are the sign generation and the translation rules generation. Many other aspects can be updated automatically from a parallel corpus that includes sentences (in Spanish and LSE: Lengua de Signos Española) related to the application domain. In this paper, we explain how to apply this methodology in order to develop two translation systems in two specific domains: bus transport information and hotel reception.

Relevance: 20.00%

Abstract:

Interoperability between semantic technologies is a must because they need to communicate in order to interchange ontologies and use them in the distributed and open environment of the Semantic Web. However, such interoperability is not straightforward due to the high heterogeneity of these technologies. This chapter describes the problem of semantic technology interoperability from two different perspectives: first, from a theoretical perspective, by presenting an overview of the different factors that affect interoperability and, second, from a practical perspective, by reusing evaluation methods and applying them to six current semantic technologies in order to assess their interoperability.

Relevance: 20.00%

Abstract:

Wireless Sensor Networks (WSNs) are spearheading the efforts to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors that WSN nodes are provided with, and to their ubiquity and pervasive capabilities, these networks are extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application with regard to age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or while performing a sport-related indoor activity. Sensors have been deployed on several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, compose the basis of the services that can be obtained.

Relevance: 20.00%

Abstract:

A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps, and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program from the command line, a user-friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text-format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is now about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example using experimental data from a biological sample is given, showing the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which opens new perspectives for tomography with a low number of projections or a limited angle.
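As background on the newly implemented algorithms, the sketch below shows the MLEM update on a toy system, in which the current image estimate is multiplied element-wise by A^T(measured / A·image) / A^T·1, with A the projection (system) matrix; it is an illustrative numpy version, not TomoRebuild's C++ implementation:

import numpy as np

def mlem(system_matrix, measured, n_iter=100, eps=1e-12):
    # system_matrix: (n_rays, n_pixels); measured: (n_rays,) sinogram counts
    image = np.ones(system_matrix.shape[1])
    sensitivity = system_matrix.sum(axis=0) + eps    # A^T * 1
    for _ in range(n_iter):
        forward = system_matrix @ image + eps        # A * image
        ratio = measured / forward
        image *= (system_matrix.T @ ratio) / sensitivity
    return image

# Tiny synthetic example: 2-pixel object, 3 measurement rays
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
true_image = np.array([2.0, 5.0])
sinogram = A @ true_image
print(mlem(A, sinogram))   # converges towards [2, 5]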