187 results for Reusing


Relevance:

10.00%

Publisher:

Abstract:

Apart from providing semantics and reasoning power to data, ontologies enable and facilitate interoperability across heterogeneous systems and environments. A good practice when developing ontologies is to reuse as much knowledge as possible, both to increase interoperability by reducing heterogeneity across models and to reduce development effort. Ontology registries, indexes and catalogues facilitate the task of finding, exploring and reusing ontologies by collecting them from different sources. This paper presents an ontology catalogue for smart cities and related domains. The catalogue is based on curated metadata and incorporates ontology evaluation features. It is the first such catalogue within this community and should be highly useful both for new ontology developments and for describing and annotating existing ontologies.

Relevance:

10.00%

Publisher:

Abstract:

Scientific workflows provide the means to define, execute and reproduce computational experiments. However, reusing existing workflows still poses challenges for workflow designers. Workflows are often too large and too specific to reuse in their entirety, so reuse is more likely to happen for fragments of workflows. These fragments may be identified manually by users as sub-workflows, or detected automatically. In this paper we present the FragFlow approach, which detects workflow fragments automatically by analyzing existing workflow corpora with graph mining algorithms. FragFlow detects the most common workflow fragments, links them to the original workflows and visualizes them. We evaluate our approach by comparing FragFlow results against user-defined sub-workflows from three different corpora of the LONI Pipeline system. Based on this evaluation, we discuss how automated workflow fragment detection could facilitate workflow reuse.
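As a rough illustration of what fragment detection over a workflow corpus involves, the sketch below counts two-step fragments (a → b → c) shared across workflows. The corpus, the step names, and the restriction to linear three-node fragments are all illustrative simplifications, not FragFlow's actual graph-mining algorithms.

```python
from collections import Counter

def fragment_counts(workflows, min_support=2):
    """Count connected two-step fragments (a -> b -> c) across a corpus
    of workflows, each given as a list of directed edges (step names).
    Each fragment is counted at most once per workflow."""
    counts = Counter()
    for edges in workflows:
        succ = {}
        for a, b in edges:
            succ.setdefault(a, set()).add(b)
        seen = set()
        for a, b in edges:
            for c in succ.get(b, ()):
                seen.add((a, b, c))
        counts.update(seen)
    # keep only fragments appearing in at least min_support workflows
    return {frag: n for frag, n in counts.items() if n >= min_support}

corpus = [
    [("load", "normalize"), ("normalize", "align"), ("align", "report")],
    [("load", "normalize"), ("normalize", "align"), ("align", "plot")],
    [("fetch", "normalize"), ("normalize", "align")],
]
common = fragment_counts(corpus)
# ("load", "normalize", "align") appears in two workflows, so it survives
```

Linking each surviving fragment back to the workflows it came from, as FragFlow does, would only require recording workflow identifiers alongside the counts.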

Relevance:

10.00%

Publisher:

Abstract:

Stream mining is defined as a set of cutting-edge techniques designed to process streams of data in real time in order to extract knowledge. In the particular case of classification, stream mining has to adapt its behaviour to volatile underlying data distributions, a phenomenon known as concept drift. Concept drift may render predictive models invalid, so they have to be updated to represent the concepts currently present in the data. In this context, there is a specific type of concept drift, known as recurrent concept drift, in which the concepts represented by the data have already appeared in the past. In those cases the learning process can be avoided, or at least minimized, by applying a previously trained model. This is extremely useful in ubiquitous environments characterized by resource-constrained devices. To deal with this scenario, meta-models can be used to enhance the drift detection mechanisms of data stream algorithms by representing drift patterns and predicting when a change will occur. There are real-world situations where a concept reappears, as in intrusion detection systems (IDS), where the same incidents, or adaptations of them, usually reappear over time. In these environments, predicting drift early by exploiting knowledge of past models helps anticipate the change, improving the model's efficiency in terms of the training instances needed. Using meta-models as a recurrent drift detection mechanism also opens up the possibility of sharing concept representations among different data-mining processes. Such exchanges can improve the accuracy of the resulting local model, which may benefit from patterns similar to the local concept that were observed in other scenarios but not yet locally.
Exchanging models also improves the efficiency of the training instances used during classification, since it allows the application of already trained recurrent models previously seen by any of the collaborating devices; that is, the scope of recurrence detection and representation is broadened. In fact, the detection, representation and exchange of concept drift patterns would be extremely useful for law enforcement activities fighting cyber crime. Since information exchange is one of the main pillars of cooperation, national units would benefit from the experience and knowledge gained by third parties. Moreover, in the specific scope of critical infrastructure protection, it is crucial to have information exchange mechanisms from both a strategic and a technical standpoint. Exchanging concept drift detection schemes in cyber security environments would aid in preventing, detecting and effectively responding to threats in cyberspace. Furthermore, as a complement to meta-models, a mechanism to assess the similarity between classification models is also needed when dealing with recurrent concepts. When reusing a previously trained model, a rough comparison between concepts is usually made by applying boolean logic. Introducing fuzzy-logic comparisons between models can lead to more efficient reuse of previously seen concepts, applying not just identical models but also similar ones. This work addresses the aforementioned open issues by means of: the MMPRec system, which integrates a meta-model mechanism and a fuzzy similarity function; a collaborative environment to share meta-models between different devices; and a recurrent drift generator that allows testing the usefulness of recurrent drift systems such as MMPRec. Finally, the thesis presents an experimental validation of the proposed contributions using synthetic and real datasets.
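A minimal sketch of the fuzzy model comparison idea, assuming models are summarized as per-class feature centroids. This representation, the distance-based membership degree, and the threshold are all illustrative assumptions; MMPRec's actual meta-model format and similarity function are not described here.

```python
def fuzzy_similarity(model_a, model_b):
    """Fuzzy similarity between two classification models summarized as
    per-class feature centroids (hypothetical representation)."""
    classes = set(model_a) & set(model_b)
    if not classes:
        return 0.0
    sims = []
    for c in classes:
        va, vb = model_a[c], model_b[c]
        # 1 / (1 + Euclidean distance) as a simple fuzzy membership degree
        dist = sum((x - y) ** 2 for x, y in zip(va, vb)) ** 0.5
        sims.append(1.0 / (1.0 + dist))
    return sum(sims) / len(sims)

def recall_model(stored_models, current_model, threshold=0.8):
    """Reuse the most similar previously trained model if its fuzzy
    similarity exceeds the threshold, instead of retraining from scratch."""
    best = max(stored_models,
               key=lambda m: fuzzy_similarity(m, current_model),
               default=None)
    if best is not None and fuzzy_similarity(best, current_model) >= threshold:
        return best
    return None
```

With crisp (boolean) comparison, only an identical stored model would be reused; the fuzzy degree lets a merely similar model qualify, which is the efficiency gain the abstract argues for.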

Relevance:

10.00%

Publisher:

Abstract:

During the 21st century we have witnessed far-reaching technological changes at both the hardware and the software level. One of the most notable has been the shift in the software distribution paradigm: the installation of desktop tools has been relegated to the background in favour of applications that consume web services or are simply web applications, which require no installation process and which, given an active internet connection, give access to our application and data from wherever we connect. This shift has driven the proliferation of technologies for building web applications, among them web components based on Polymer technology, a tool for developing modular applications and components that can be reused across different websites by modifying and extending HTML tags. Once a component has been developed, reusing it takes only a few seconds of work: adding the required tag to our HTML code. This advantage is Polymer's main characteristic. In parallel with the development of web technologies, and thanks to their widespread adoption, tools and frameworks have emerged for developing mobile applications with web technologies. This directly benefits the ecosystems of developers, tools, frameworks and applications, making them broader and accessible to anyone able to program a web application based on HTML, CSS and JavaScript.
The goal of this work is to create a mobility channel by defining an effective methodology for porting the advantages of Polymer web components to mobile environments, preserving their ease of reuse and, as far as possible, their usability, taking into account the particular characteristics of mobile devices. The methodology will be evaluated through usability tests and then validated by applying it to a real case.

Relevance:

10.00%

Publisher:

Abstract:

The practice of reusing single-use medical devices has been applied since the mid-1970s. The main reason this practice has spread through hospitals, in developing and wealthy countries alike, is the apparent cost saving. The risks associated with reuse, such as pyrogenic reactions, harm caused by bacteria that are pathogenic in immunocompromised patients, damage to the physical integrity of the devices, and longer hospital stays, have motivated interest in evaluating the physical and biological aspects of reused medical devices. On this basis, challenge tests were performed with spores of Bacillus subtilis var. niger ATCC 9372 and bacterial endotoxin from E. coli O55:B5. The challenged products were intravenous catheters, three-way stopcocks and tracheostomy tubes. Possible microbial presence was investigated after intentional contamination with B. subtilis spores (10⁷ CFU/unit), with the contaminated units then subjected to cleaning and subsequent sterilization using ethylene oxide/CFC at a 12:88 ratio. The simulated reprocessing cycles consisted of contaminating each test unit with a microbial load, washing with enzymatic detergent, drying and sterilization. At the end of each reprocessing cycle, representative units were set aside for evaluation by microbial count (pour plate), sterility tests by direct and indirect inoculation, cytotoxicity by cell culture, and scanning electron microscopy. Sterilization efficiency was evaluated both by microbial count and by sterility tests, which showed microbial levels of 10³ CFU/unit
and detectable contamination up to the 6th reprocessing cycle in the intravenous catheters, tracheostomy tubes and three-way stopcocks. The safety of reprocessing was evaluated with mouse fibroblast cell cultures (NCTC clone 929), which showed no toxicity. However, scanning electron microscopy revealed the presence of a microbial load after the 10th reprocessing cycle, as well as damage to the polymer surface. In the bacterial endotoxin challenge, which consisted of contaminating the units with 200 EU, drying, and exposure to the ethylene oxide/CFC (12:88) sterilization cycle, endotoxin recovery of around 100% was detected after the simulated reprocessing cycles, ten in total. Guide catheters obtained from a hospital after four reuses showed contamination levels of 10⁵ CFU/unit, as well as the presence of bacteria that are pathogenic in immunocompromised patients; the bacterial endotoxin detected in these catheters, by contrast, was not considered significant. The evaluations applied to the units subjected to simulated reprocessing cycles, as well as to the guide catheters reprocessed and reused four times, thus reflect the reality of institutions, both national and international, that practice the reuse of single-use medical devices. The results reinforce objections to this practice, given that the lack of safety may result in harm to patients.

Relevance:

10.00%

Publisher:

Abstract:

One of the current problems in the health domain is reusing and sharing clinical information among professionals, since this information is written using specific terminologies. One possible solution is to use a common knowledge resource onto which the existing information can be mapped. Our objective is to test whether adding shallow semantic knowledge can improve the established mappings. To this end, we experimented with a set of NANDA-I labels and a set of SNOMED CT descriptions in Spanish. The experimental results show that including shallow semantic knowledge significantly improves the lexical mapping between the two resources studied.
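To illustrate the kind of lexical mapping being improved, here is a minimal sketch that maps labels to descriptions by normalized token overlap. The example terms and the Jaccard scoring are illustrative assumptions; the "shallow semantic knowledge" step the abstract refers to (e.g., synonym expansion) would plug in where noted.

```python
import unicodedata

def normalize(term):
    """Lowercase, strip accents, and tokenize a clinical term
    (a minimal stand-in for lexical preprocessing)."""
    text = unicodedata.normalize("NFD", term.lower())
    text = "".join(ch for ch in text if unicodedata.category(ch) != "Mn")
    return set(text.split())

def best_mapping(source_labels, target_descriptions):
    """Map each source label to the target description with the highest
    Jaccard token overlap. Expanding tokens with synonyms before scoring
    would be the shallow-semantic-knowledge step."""
    mapping = {}
    for label in source_labels:
        ls = normalize(label)
        def score(desc):
            ds = normalize(desc)
            return len(ls & ds) / len(ls | ds) if ls | ds else 0.0
        mapping[label] = max(target_descriptions, key=score)
    return mapping
```

For instance, the (hypothetical) label "dolor agudo" would map to "dolor agudo postoperatorio" rather than "dolor crónico", because two of its tokens overlap instead of one.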

Relevance:

10.00%

Publisher:

Abstract:

The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is growing concern about the mitigation of SEU and SET effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On one hand, the approach is based on software redundancy techniques for correcting errors produced in the data. On the other hand, control-flow errors can be detected by reusing the on-chip debug interface present in most modern microprocessors. Experimental results show a significant increase in system reliability, exceeding two orders of magnitude in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly affordable in low-cost systems.
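The software-redundancy side of such hybrid techniques can be illustrated with a majority-voting sketch. Real implementations replicate variables in the compiled program running on the target microprocessor, so the Python below is only a conceptual model of the voting logic, not the paper's actual technique.

```python
def tmr_write(value):
    """Store a value as three replicas so a single upset can be outvoted."""
    return [value, value, value]

def tmr_read(replicas):
    """Majority vote over three redundant copies of a value: a single
    corrupted replica is silently corrected; total disagreement is fatal."""
    a, b, c = replicas
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("uncorrectable error: all replicas disagree")

regs = tmr_write(42)
regs[1] = 99                 # simulate a single-event upset flipping one copy
assert tmr_read(regs) == 42  # majority vote corrects the error
```

The control-flow half of the approach (watching the instruction stream through the on-chip debug interface) has no software analogue this simple, which is precisely why the paper pairs the two mechanisms.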

Relevance:

10.00%

Publisher:

Abstract:

A novel procedure for the preparation of solid Pd(II)-based catalysts, consisting of anchoring designed Pd(II) complexes on an activated carbon (AC) surface, is reported. Two molecules of the Ar–S–F type (where Ar is a planar pyrimidine moiety, F a Pd(II) ligand and S an aliphatic linker), differing in F, were grafted on AC by π–π stacking between the Ar moiety and the graphene planes of the AC, thus favouring retention of the metal-complexing ability of F. Adsorption of Pd(II) by the AC/Ar–S–F hybrids occurs via Pd(II) complexation by F. After thorough characterization, the catalytic activity of the AC/Ar–S–F/Pd(II) hybrids was evaluated using the hydrogenation of 1-octene in methanol as a catalytic test. Full (100%) conversion to n-octane at T = 323.1 K and P = 15 bar was obtained with both catalysts, and most of the Pd(II) was reduced to Pd(0) nanoparticles, which remained on the AC surface. Reusing the catalysts in three additional cycles revealed that the catalyst bearing the F ligand with the larger Pd-complexing ability showed no loss of activity (100% conversion to n-octane), which is attributed to its greater structural stability. The catalyst with the weaker F ligand underwent a progressive loss of activity (from 100% to 79% over four cycles), due to progressive aggregation of the Pd(0) nanoparticles. Milder conditions, T = 303.1 K and P = 1.5 bar, prevent aggregation of the Pd(0) nanoparticles in this catalyst, allowing retention of the high catalytic efficiency (100% conversion) over four reaction cycles.

Relevance:

10.00%

Publisher:

Abstract:

The Industrial Material Exchange Service (IMES) program is a free service designed to provide a mechanism for recycling and reusing unwanted materials. The exchange program maintains and distributes listings of materials both wanted and available, provided by our participants. Through IMES, waste generators can be matched with waste users. Any material, hazardous or non-hazardous, that is available from one business yet has potential for reuse by another can be part of the exchange. IMES functions as an information clearinghouse for industrial by-products, surplus materials, waste and other forms of unwanted industrial materials. The goal of the IMES program is to conserve energy, resources and landfill space by helping to find alternatives to disposing of what might be a valuable material.

Relevance:

10.00%

Publisher:

Abstract:

Proof reuse, or analogical reasoning, involves reusing the proof of a source theorem in the proof of a target conjecture. We have developed a method for proof reuse that is based on the generalisation replay paradigm described in the literature, in which a generalisation of the source proof is replayed to construct the target proof. In this paper, we describe the novel aspects of our method, which include a technique for producing more accurate source proof generalisations (using knowledge of the target goal), as well as a flexible replay strategy that allows the user to set various parameters to control the size and the shape of the search space. Finally, we report on the results of applying this method to a case study from the realm of software verification.
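A toy sketch of the generalisation/replay idea, with proofs modelled as lists of rewrite steps. The representation, the step names, and the binding mechanism are illustrative assumptions, not the method's actual proof calculus.

```python
def generalise(proof_steps, constants):
    """Replace the source theorem's constants with schematic variables,
    yielding a reusable proof template."""
    mapping = {c: f"?x{i}" for i, c in enumerate(constants)}
    return [tuple(mapping.get(t, t) for t in step) for step in proof_steps]

def replay(template, bindings):
    """Instantiate the template's schematic variables for the target goal;
    each instantiated step would then be re-checked by the prover."""
    return [tuple(bindings.get(t, t) for t in step) for step in template]

# Source proof of `0 + a = a`, generalised, then replayed for `0 + b = b`
source = [("rewrite", "add_zero_left", "a")]
template = generalise(source, ["a"])
target_proof = replay(template, {"?x0": "b"})
assert target_proof == [("rewrite", "add_zero_left", "b")]
```

The paper's contribution of generalising with knowledge of the target goal would correspond, in this toy model, to choosing which constants to abstract by first inspecting the target conjecture.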

Relevance:

10.00%

Publisher:

Abstract:

Achieving consistency between a specification and its implementation is an important part of software development. In this paper, we present a method for generating passive test oracles that act as self-checking implementations. The implementation is verified using an animation tool to check that the behavior of the implementation matches the behavior of the specification. We discuss how to integrate this method into a framework developed for systematically animating specifications, which means a tester can significantly reduce testing time and effort by reusing work products from the animation. One such work product is a testgraph: a directed graph that partially models the states and transitions of the specification. Testgraphs are used to generate sequences for animation, and during testing, to execute these same sequences on the implementation.

Relevance:

10.00%

Publisher:

Abstract:

Representing knowledge using domain ontologies has proven to be a useful mechanism and format for managing and exchanging information. Owing to the difficulty and cost of building ontologies, a number of ontology libraries and search engines have come into existence to facilitate reusing such knowledge structures. The need for ontology ranking techniques is becoming crucial as the number of ontologies available for reuse continues to grow. In this paper we present AKTiveRank, a prototype system for ranking ontologies based on the analysis of their structures. We describe the metrics used in the ranking system and present an experiment on ranking the ontologies returned by a popular search engine for an example query.

Relevance:

10.00%

Publisher:

Abstract:

The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models, and it supports more effective software design in terms of understanding, sharing and reuse in a distributed manner. To realise the full potential of the Semantic Web in formal software development, it is crucial to effectively create proper semantic metadata for formal software models and their related software artefacts. In this paper, a methodology with tool support is proposed to automatically derive ontological metadata from formal software models and semantically describe them.

Relevance:

10.00%

Publisher:

Abstract:

With the recent rapid growth of the Semantic Web (SW), the processes of searching and querying content that is both massive in scale and heterogeneous have become increasingly challenging. User-friendly interfaces, which can support end users in querying and exploring this novel and diverse, structured information space, are needed to make the vision of the SW a reality. We present a survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web. First, we provide a comprehensive perspective by analyzing the general background and history of the QA research field, from influential works from the artificial intelligence and database communities developed in the 70s and later decades, through open-domain QA stimulated by the QA track in TREC since 1999, to the latest commercial semantic QA solutions, before tackling the current state of the art in open user-friendly interfaces for the SW. Second, we examine the potential of this technology to go beyond the current state of the art to support end users in reusing and querying SW content. We conclude our review with an outlook for this novel research area, focusing in particular on the R&D directions that need to be pursued to realize the goal of efficient and competent retrieval and integration of answers from large-scale, heterogeneous, and continuously evolving semantic sources.

Relevance:

10.00%

Publisher:

Abstract:

Most of the existing work on information integration in the Semantic Web concentrates on resolving schema-level problems. Specific issues of data-level integration (instance coreferencing, conflict resolution, handling uncertainty) are usually tackled by applying the same techniques as for ontology schema matching or by reusing solutions produced in the database domain. However, data structured according to OWL ontologies has its own specific features: e.g., classes are organized into a hierarchy, properties are inherited, and data constraints differ from those defined by database schemas. This paper describes how these features are exploited in our architecture KnoFuss, designed to support data-level integration of semantic annotations.
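A minimal sketch of data-level instance matching, assuming instances are property dictionaries and using the fraction of shared key-property values as the decision rule. KnoFuss's actual architecture also exploits class hierarchies and handles uncertainty, both omitted here; all names below are hypothetical.

```python
def coreference(instances_a, instances_b, key_props, threshold=0.5):
    """Link instances from two datasets when the fraction of key
    properties with identical values reaches the threshold."""
    links = []
    for ia, props_a in instances_a.items():
        for ib, props_b in instances_b.items():
            shared = sum(1 for p in key_props
                         if p in props_a and props_a.get(p) == props_b.get(p))
            if shared / len(key_props) >= threshold:
                links.append((ia, ib))
    return links

src = {"a1": {"name": "J. Smith", "year": 2004}}
tgt = {"b1": {"name": "J. Smith", "year": 2005},
       "b2": {"name": "K. Lee"}}
links = coreference(src, tgt, ["name", "year"])
# a1 and b1 share one of two key properties, meeting the 0.5 threshold
```

Exploiting OWL-specific features would refine this: e.g., only comparing instances whose classes are compatible in the hierarchy, or treating inherited properties as evidence.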