483 results for grafana, SEPA, Plugin, RDF, SPARQL
Abstract:
Maintaining object-oriented systems that use inheritance and polymorphism is difficult, since runtime information, such as which methods are actually invoked at a call site, is not visible in the static source code. We have implemented Senseo, an Eclipse plugin enhancing Eclipse's static source views with various dynamic metrics, such as runtime types, the number of objects created, or the amount of memory allocated in particular methods.
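Senseo itself instruments Java programs and integrates with Eclipse's source views; its internals are not described in this abstract. Purely as a rough, language-agnostic analogue of one such dynamic metric, the following Python sketch records the concrete receiver types observed at each call site via the interpreter's tracing hook (all names are illustrative):

```python
# Hypothetical analogue of one Senseo metric: which runtime types actually
# show up at each (polymorphic) call site, gathered with sys.settrace.
import sys
from collections import defaultdict

observed = defaultdict(set)   # (file, line) of call site -> receiver type names

def tracer(frame, event, arg):
    if event == "call":
        caller = frame.f_back
        receiver = frame.f_locals.get("self")   # bound method receiver, if any
        if caller is not None and receiver is not None:
            site = (caller.f_code.co_filename, caller.f_lineno)
            observed[site].add(type(receiver).__name__)
    return None

class A:
    def run(self): pass

class B(A):
    def run(self): pass

sys.settrace(tracer)
for obj in [A(), B(), B()]:
    obj.run()              # one static call site, two runtime receiver types
sys.settrace(None)

for site, types in observed.items():
    print(site, sorted(types))   # e.g. (..., 24) ['A', 'B']
```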
Abstract:
Type III secretion systems of Gram-negative bacteria are specific export machineries for virulence factors which allow their translocation into eukaryotic cells. Since they correlate with bacterial pathogenicity, their presence is used as a general indicator of bacterial virulence. By comparing the genetic relationships of the major type III secretion systems, we found that the family of genes encoding the inner-membrane channel proteins, represented by Yersinia enterocolitica lcrD (synonym yscV) and its homologous genes from other species, is an ideal component for establishing a general detection approach for type III secretion systems. Based on the genes of the lcrD family, we developed gene probes for Gram-negative human, animal and plant pathogens. The probes comprise lcrD from Y. enterocolitica, sepA from enteropathogenic Escherichia coli, invA from Salmonella typhimurium, mxiA from Shigella sonnei, as well as hrcV from Erwinia amylovora. In addition, we included the flhA gene from E. coli K-12 as a control probe to validate our approach. FlhA is part of the flagellar export apparatus, which shows a high degree of similarity with type III secretion systems but is not involved in pathogenicity. The probes were evaluated by screening a series of pathogenic as well as non-pathogenic bacteria. They detected type III secretion in pathogens where such systems were either known or expected to be present, whereas no positive hybridization signals were found in non-pathogenic Gram-negative bacteria. Gram-positive bacteria were devoid of known type III secretion systems. No interference due to the genetic similarity between the type III secretion system and the flagellar export apparatus was observed. However, potential type III secretion systems could be detected in bacteria for which no such systems have yet been described. The presented approach therefore provides a useful tool for assessing the virulence potential of bacterial isolates of human, animal and plant origin. Moreover, it is a powerful means for a first safety assessment of poorly characterized strains intended for use in biotechnological applications.
Abstract:
Video-based learning is particularly effective where skills and behaviour are concerned. Video recordings of conversations, teaching situations, or the performance of practical tasks such as suturing a wound allow the performers, their peers, and their tutors to assess the quality of the performance and to formulate suggestions for improvement. Aware of the great didactic value of video recordings, four universities of teacher education (Zurich, Fribourg, Thurgau, Lucerne) and two medical faculties (Bern, Lausanne) joined forces to initiate a national infrastructure for video-supported learning. The goal was to develop a system that is easy to use, automates many work steps, and makes the videos available on the Internet. Together with SWITCH, the national IT support organization of the Swiss universities, the program iVT (Individual Video Training) was developed on the basis of the pre-existing technologies AAI and SWITCHcast. The integration of the national single-sign-on system AAI (Authentication and Authorization Infrastructure) makes it possible to link each video unambiguously to the respective user, so that the videos are accessible on the Internet only to that user. With the podcast system SWITCHcast, videos can be uploaded and published on the Internet automatically. One plugin each was developed for the learning management systems ILIAS (PH Zürich, University of Bern) and Moodle (University of Lausanne). Thanks to these plugins, the videos are made available within the respective LMS. The use of iVT has since become standard in the communication training of our medical students in Bern. The login simultaneously serves as proof of attendance (Testat). Students who do not wish to be recorded can stop the recording after logging in. So far, viewing the videos is voluntary. Scenarios with peer feedback are planned; a corresponding extension of the system with mutual annotation capabilities already exists and is being continuously developed.
Abstract:
MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified every time in the same way by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine. The method presented in this article was implemented in jMRUI's SpectrIm plugin. Copyright © 2016 John Wiley & Sons, Ltd.
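The abstract names the most important features but not the full pipeline. A minimal sketch of this kind of random-forest quality classifier, under stated assumptions (`spectra` is a list of complex time-domain FIDs, `labels` the expert accept/reject ratings, and the dwell time and feature extraction are simplified placeholders, not SpectrIm's actual implementation), might look as follows:

```python
# Illustrative sketch of a random-forest spectral quality classifier using
# a few of the features named in the abstract. Not the SpectrIm code.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def extract_features(fid, dwell_time=0.5e-3):
    """Frequency-domain skewness/kurtosis and two time-domain SNR windows."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(fid)))
    t = np.arange(len(fid)) * dwell_time            # time axis in seconds
    noise = np.std(fid[-len(fid) // 8:].real)       # noise from the FID tail
    def snr(lo_ms, hi_ms):                          # peak/noise in a time window
        seg = fid[(t >= lo_ms * 1e-3) & (t < hi_ms * 1e-3)].real
        return np.max(np.abs(seg)) / noise
    return [skew(spectrum), kurtosis(spectrum), snr(50, 75), snr(75, 100)]

X = np.array([extract_features(s) for s in spectra])   # `spectra`, `labels`: assumed inputs
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("feature importances:", clf.feature_importances_)
```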
Abstract:
Text in Hebrew and Yiddish, in Hebrew script.
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of the language and the grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of an evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. At the end, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
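The paper's actual representation formalism is not reproduced here. Purely as a hypothetical illustration of why term-level lookup degrades more gracefully on ungrammatical input than syntax-driven parsing, a chief complaint can be encoded against a toy terminology (the lexicon and concept codes below are illustrative, not the authors' method):

```python
# Hypothetical illustration: map free-text chief complaints to formal concept
# codes by normalized term lookup, with no dependence on sentence grammar.
import re

# Toy terminology (codes illustrative): surface form -> (concept_id, label).
LEXICON = {
    "sob": ("C-0001", "Dyspnea"),
    "short of breath": ("C-0001", "Dyspnea"),
    "chest pain": ("C-0002", "Chest pain"),
    "cp": ("C-0002", "Chest pain"),
    "n/v": ("C-0003", "Nausea and vomiting"),
}

def encode_chief_complaint(text):
    """Return the set of concept codes whose surface forms occur in the text."""
    norm = re.sub(r"\s+", " ", text.lower().strip())
    hits = set()
    for surface, concept in LEXICON.items():
        if re.search(r"\b" + re.escape(surface) + r"\b", norm):
            hits.add(concept)
    return sorted(hits)

print(encode_chief_complaint("Pt c/o CP and SOB x2 days"))
# [('C-0001', 'Dyspnea'), ('C-0002', 'Chest pain')]
```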
Abstract:
This book aims to enable the reader to identify and interpret knowledge about real numbers and their operations, recognizing the related algorithms and procedures and linking them to the calculation of different measures; to understand and solve problems, selecting the type of reasoning or argumentation the situation requires; to use systems of equations and inequalities to model and solve real situations from everyday life; and to identify, define, graph, describe and interpret different types of functions, associating them with real situations. It was published as learning material for the public security personnel of the Province of Mendoza within the framework of the distance-education pedagogical project for the completion of EGB3 and Polimodal studies (EDITEP), implemented following the signing of the agreement between the Universidad Nacional de Cuyo and the Government of the Province of Mendoza in October 2003.
Abstract:
This work stems from the experience gained during the pre-professional internships carried out at OSEP (Obra Social de Empleados Públicos) within the program "Cuidar, Atención del dolor y Cuidados paliativos" (Caring: Pain Management and Palliative Care), which, through an interdisciplinary team specialized and trained in palliative care, provides attention to adult patients with chronic pain of any origin and with oncological, or progressive or degenerative neurological, diseases in advanced stages. The great development and technical sophistication achieved by medicine in recent years, and its obsession with curing, have left it unable to deal with patients for whom there is no possibility of a cure. To begin with, and in line with the bibliographic survey carried out over the course of the internships, it was observed that although homogeneous criteria coexist regarding the treatment and professional management of institutionalized patients with terminal illnesses, there are few documented experiences of professional intervention in the home with families in crisis due to the presence of a terminal illness in one of their members.
Abstract:
Interoperability between different knowledge organization systems (KOS) has gained great importance in recent times, with the aim of facilitating simultaneous searching across several databases or merging different databases into one. The new standards for the design and development of KOS, the American Z39.19:2005 and the British BS 8723-4:2007, include detailed recommendations for interoperability. A new standard on thesauri and interoperability, ISO 25964-1, is also in preparation and will join the previous ones. The available technology provides tools for this purpose, such as the formats and functional requirements for authority data and the Semantic Web tools RDF/OWL, SKOS Core and XML. At present it is difficult to design and develop new KOS owing to economic constraints, so interoperability makes it possible to take advantage of existing KOS. This paper reviews the concepts, models and methods recommended by the standards, as well as numerous documented experiences of interoperability between KOS.
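As a minimal sketch of the SKOS Core machinery the article mentions (the vocabulary URIs below are invented), a concept, its multilingual labels, and a cross-vocabulary mapping can be expressed in RDF with the Python rdflib library:

```python
# Minimal SKOS sketch: one concept with labels, a hierarchy link, and an
# exactMatch mapping to a second KOS -- the basic building block of the
# KOS interoperability the article surveys. URIs are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

MY = Namespace("http://example.org/thesaurus/")       # hypothetical vocabulary
OTHER = Namespace("http://example.org/other-kos/")    # hypothetical second KOS

g = Graph()
g.bind("skos", SKOS)

concept = MY["knowledgeOrganizationSystem"]
g.add((concept, SKOS.prefLabel, Literal("knowledge organization system", lang="en")))
g.add((concept, SKOS.prefLabel, Literal("sistema de organización del conocimiento", lang="es")))
g.add((concept, SKOS.broader, MY["informationSystem"]))
# Equivalence across vocabularies enables simultaneous search and merging:
g.add((concept, SKOS.exactMatch, OTHER["KOS"]))

print(g.serialize(format="turtle"))
```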
Abstract:
The Spanish National Library (Biblioteca Nacional de España, BNE) and the Ontology Engineering Group of Universidad Politécnica de Madrid are working on the joint project "Preliminary Study of Linked Data", whose aim is to enrich the Web of Data with the BNE authority and bibliographic records. To this end, they are transforming the BNE information to RDF following the Linked Data principles proposed by Tim Berners-Lee.
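The project's concrete vocabulary choices are not detailed in this abstract. A toy sketch of the general record-to-RDF step, using Dublin Core and FOAF purely for illustration (the record IDs below are invented), could look like this:

```python
# Illustrative sketch (not the project's actual mapping): turn a simplified
# bibliographic record into RDF triples, in the spirit of Linked Data --
# resources get URIs, and relations are typed links rather than strings.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, FOAF, RDF

BNE = Namespace("http://datos.bne.es/resource/")   # BNE base URI; IDs below invented

record = {"id": "XX0000000", "title": "El ingenioso hidalgo don Quijote de la Mancha",
          "creator_id": "XX0000001", "creator": "Cervantes Saavedra, Miguel de"}

g = Graph()
work, author = BNE[record["id"]], BNE[record["creator_id"]]
g.add((work, DC.title, Literal(record["title"], lang="es")))
g.add((work, DC.creator, author))          # a link to the authority, not a string
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal(record["creator"])))

print(g.serialize(format="turtle"))
```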
Abstract:
In spite of the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web provide semantic access to existing data by porting existing resources to the Semantic Web using different technologies, such as database-to-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc, complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. This framework is structured in three levels: scraping services, a semantic scraping model, and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
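The article's own framework is not reproduced here; the following hypothetical sketch only illustrates the layering it describes: a syntactic scraper pulls raw fields out of HTML, and a semantic level maps them onto an RDF model that higher-level services or agents can consume (the vocabulary and markup are invented):

```python
# Hypothetical two-layer scraper: syntactic extraction feeding an RDF model.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF
from bs4 import BeautifulSoup   # third-party package: beautifulsoup4

EX = Namespace("http://example.org/scraping#")   # invented vocabulary

HTML = """<div class="post"><h2>Semantic scraping</h2>
<span class="author">alice</span></div>"""

def scrape(html):
    """Syntactic level: pull raw fields out of the markup."""
    soup = BeautifulSoup(html, "html.parser")
    for post in soup.select("div.post"):
        yield {"title": post.h2.get_text(),
               "author": post.select_one(".author").get_text()}

def to_rdf(items):
    """Semantic level: map the extracted fields onto a declarative RDF model."""
    g = Graph()
    for i, item in enumerate(items):
        node = EX[f"post{i}"]
        g.add((node, RDF.type, EX.Post))
        g.add((node, EX.title, Literal(item["title"])))
        g.add((node, EX.author, Literal(item["author"])))
    return g

print(to_rdf(scrape(HTML)).serialize(format="turtle"))
```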
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are possibly the various tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
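For readers unfamiliar with such modules, this is the kind of output a POS-tagging module produces; the snippet uses NLTK's off-the-shelf tagger, though any equivalent tool would serve:

```python
# Tiny POS-tagging example with NLTK.
import nltk

# One-time model downloads (resource names vary slightly across NLTK versions):
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("Linguistic annotation tools are important assets.")
print(nltk.pos_tag(tokens))
# [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'), ('are', 'VBP'), ...]
```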
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 to 50 percent of the annotated units in unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, i.e. morphosyntactic, level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
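As a toy illustration of the schema-unification idea in (ii) (the tag mappings below are invented and drastically simplified), the ad hoc tagsets of two taggers can be normalized to a shared set of categories so that their outputs become comparable, and combinable to address (i):

```python
# Hypothetical, drastically simplified tagset normalization: map each tool's
# ad hoc tags onto one shared category set so annotations can interoperate.
PENN_TO_COMMON = {"DT": "DET", "NN": "NOUN", "NNS": "NOUN", "VBZ": "VERB", "JJ": "ADJ"}
TOOLB_TO_COMMON = {"art": "DET", "noun": "NOUN", "verb": "VERB", "adj": "ADJ"}

def normalize(tagged, mapping):
    """Rewrite (token, tool-specific tag) pairs into the common tagset."""
    return [(tok, mapping.get(tag, "OTHER")) for tok, tag in tagged]

a = normalize([("the", "DT"), ("tagger", "NN"), ("works", "VBZ")], PENN_TO_COMMON)
b = normalize([("the", "art"), ("tagger", "noun"), ("works", "verb")], TOOLB_TO_COMMON)

# Once unified, disagreements can be detected (and, with more than two tools,
# resolved by voting) to reduce each tool's individual error rate.
print([(ta, tb, ta == tb) for (_, ta), (_, tb) in zip(a, b)])
```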
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based