939 results for Open Information Extraction
Abstract:
This research aims to diachronically analyze the worldwide scientific production on open access, in the academic and scientific context, in order to contribute to the knowledge and visualization of its main actors. As a method, bibliographical, descriptive and analytical research was used, drawing on bibliometric studies, especially indicators of production, of scientific collaboration and of thematic co-occurrence. The Scopus database was used as the source for retrieving articles on the subject, yielding a corpus of 1179 articles. Frequency tables for the variables were constructed with the Bibexcel software, the collaboration network was visualized with Pajek, and the keyword network was built with VOSviewer. As for the results, the most productive researchers come from countries such as the United States, Canada, France and Spain, and the journals with the highest impact in the academic community have disseminated the newly constructed knowledge. A collaboration network with a few subnets whose co-authors come from different countries was observed. In conclusion, this study identifies the themes of debate that mark the development of open access at the international level and makes it possible to state that open access is one of the new emerging and frontier fields of library and information science.
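To make the co-occurrence indicator concrete, here is a minimal sketch in Python of building a keyword co-occurrence network; the abstract's actual pipeline uses Bibexcel and VOSviewer, and the keyword lists below are invented placeholders, not data from the study:

```python
from itertools import combinations
from collections import Counter

import networkx as nx

# Hypothetical keyword lists, one per article (placeholders, not study data).
article_keywords = [
    ["open access", "bibliometrics", "scopus"],
    ["open access", "scientific collaboration"],
    ["bibliometrics", "scientific collaboration", "scopus"],
]

# Count how often each pair of keywords co-occurs in the same article.
pair_counts = Counter()
for keywords in article_keywords:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# Build the co-occurrence network; edge weight = co-occurrence frequency.
graph = nx.Graph()
for (a, b), weight in pair_counts.items():
    graph.add_edge(a, b, weight=weight)

print(list(graph.edges(data=True)))
```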
Abstract:
The electroencephalogram (EEG) signal is one of the most widely used signals in the biomedical field due to the rich information it carries about human tasks. This research study describes a new approach based on: i) building reference models from a set of time series through the analysis of the events they contain, which is suitable for domains where the relevant information is concentrated in specific regions of the time series, known as events; to deal with them, each event is characterized by a set of attributes; and ii) applying the discrete wavelet transform to the EEG data in order to extract temporal information in the form of changes in the frequency domain over time, that is, to extract the non-stationary signals embedded in the noisy background of the human brain. The performance of the model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed scheme has potential for classifying EEG signals.
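As a hedged illustration of step ii), a minimal Python sketch of discrete-wavelet-transform feature extraction, assuming the PyWavelets library; the wavelet ("db4"), the decomposition level and the per-band statistics are assumptions for illustration, not values taken from the study:

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(signal, wavelet="db4", level=4):
    """Decompose a 1-D EEG epoch with the discrete wavelet transform and
    summarise each sub-band with simple statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    features = []
    for band in coeffs:  # approximation band + one detail band per level
        features.extend([np.mean(np.abs(band)), np.std(band), np.sum(band**2)])
    return np.array(features)

# Toy usage: a noisy sine standing in for one EEG epoch.
rng = np.random.default_rng(0)
epoch = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.5 * rng.standard_normal(512)
print(dwt_features(epoch).shape)  # one fixed-length feature vector per epoch
```

Each epoch is thus reduced to a fixed-length vector that a downstream classifier can consume.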
Abstract:
The focus of this chapter is to study feature extraction and pattern classification methods from two medical areas, Stabilometry and Electroencephalography (EEG). Stabilometry is the branch of medicine responsible for examining balance in human beings. Balance and dizziness disorders are probably two of the most common complaints that physicians have to deal with. In Stabilometry, the key nuggets of information in a time series signal are concentrated within definite time periods known as events. In this chapter, two feature extraction schemes have been developed to identify and characterise the events in Stabilometry and EEG signals. Based on these extracted features, an Adaptive Fuzzy Inference Neural Network has been applied to classify Stabilometry and EEG signals.
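A minimal Python sketch of the generic idea of identifying events in a time series and characterising each one by a set of attributes; the threshold rule and the particular attributes (onset, duration, peak, mean) are assumptions for illustration, not the chapter's actual scheme:

```python
import numpy as np

def extract_events(signal, threshold, fs=50.0):
    """Locate regions where the signal exceeds a threshold ('events') and
    characterise each one by a small set of attributes. Assumes the trace
    starts and ends below the threshold."""
    above = np.abs(signal) > threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))  # rising/falling edges
    starts, ends = edges[::2] + 1, edges[1::2] + 1
    events = []
    for start, end in zip(starts, ends):
        segment = signal[start:end]
        events.append({
            "onset_s": start / fs,             # when the event begins
            "duration_s": (end - start) / fs,  # how long it lasts
            "peak": float(np.max(np.abs(segment))),
            "mean": float(np.mean(segment)),
        })
    return events

# Toy usage: one synthetic event embedded in low-amplitude noise.
rng = np.random.default_rng(1)
trace = 0.1 * rng.standard_normal(500)
trace[100:140] += 1.5
print(extract_events(trace, threshold=0.8))
```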
Abstract:
Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered valuable sources from which to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems arise, mainly related to the appearance of synonymous and ambiguous tags, particularly in multilingual contexts. The authors aim to turn folksonomies into knowledge structures in which tag meanings are identified and relations between them are asserted. For this purpose, they use DBpedia as a general knowledge base whose multilingual capabilities they leverage.
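As a hedged sketch of grounding a tag in DBpedia across languages, here is a minimal Python example using the SPARQLWrapper library against the public DBpedia endpoint; the tag "París" and the label-matching strategy are illustrative assumptions, not the authors' actual method:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

tag = "París"  # hypothetical Spanish folksonomy tag
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(f"""
    SELECT DISTINCT ?resource ?enLabel WHERE {{
      ?resource rdfs:label "{tag}"@es ;
                rdfs:label ?enLabel .
      FILTER (lang(?enLabel) = "en")
    }} LIMIT 5
""")
sparql.setReturnFormat(JSON)

# Each match grounds the tag in a DBpedia resource with an English label,
# linking the Spanish and English variants of the same concept.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["resource"]["value"], "->", row["enLabel"]["value"])
```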
Abstract:
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provide relevant information on the functional organization of the neural networks involved in the control of sensory information and allow the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure: we first remove the noise from the CDPs recorded in each given spinal segment by convolution; then we assign a coefficient to each main local maximum of the signal using its amplitude and its distance to the most important maximum of the signal. These coefficients are the input for the subsequent classification algorithm; in particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
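A minimal Python sketch of the pipeline just described, assuming NumPy, SciPy and scikit-learn; the moving-average kernel and the exact coefficient formula (here, amplitude damped by a decaying function of the distance to the main maximum) are assumptions, since the abstract does not specify them, and the training data below are synthetic placeholders:

```python
import numpy as np
from scipy.signal import argrelextrema
from sklearn.ensemble import GradientBoostingClassifier

def cdp_features(trace, kernel_width=11, n_peaks=5):
    """Denoise a trace by convolution, then describe its main local maxima
    by amplitude and distance to the most important maximum."""
    kernel = np.ones(kernel_width) / kernel_width  # moving-average kernel (assumed)
    smooth = np.convolve(trace, kernel, mode="same")
    peaks = argrelextrema(smooth, np.greater)[0]
    if peaks.size == 0:
        return np.zeros(n_peaks)
    top = peaks[np.argsort(smooth[peaks])][::-1][:n_peaks]  # largest maxima first
    main = top[0]                                   # the most important maximum
    # Assumed coefficient: amplitude damped by distance to the main maximum.
    feats = [smooth[p] * np.exp(-abs(p - main) / len(trace)) for p in top]
    feats += [0.0] * (n_peaks - len(feats))         # pad to a fixed length
    return np.array(feats)

# Synthetic placeholder data: 40 random traces with two arbitrary classes.
rng = np.random.default_rng(0)
X = np.array([cdp_features(rng.standard_normal(300)) for _ in range(40)])
y = np.array([0, 1] * 20)
clf = GradientBoostingClassifier(n_estimators=20).fit(X, y)
```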
Abstract:
The present work aims to assess Laser-Induced Plasma Spectrometry (LIPS) as a tool for the characterization of photovoltaic materials. Despite being a well-established technique with applications in many scientific and industrial fields, LIPS is so far little known to the photovoltaic scientific community. The technique allows the rapid characterization of layered samples without sample preparation, in open atmosphere and in real time. In this paper, we assess the ability of LIPS to determine elements that are difficult to analyze by other broadly used techniques and to produce analytical information from elements present at very low concentrations. The results of the LIPS characterization of two different samples are presented: 1) a 90 nm Al-doped ZnO layer deposited on a Si substrate by RF sputtering and 2) a Te-doped GaInP layer grown on GaAs by Metalorganic Vapor Phase Epitaxy. For both cases, the depth profile of the constituent and dopant elements is reported along with details of the experimental setup and the optimization of key parameters. It is remarkable that the longest analysis time was ∼10 s, which, in conjunction with the other characteristics mentioned, makes LIPS an appealing technique for rapid screening or quality control, whether in the lab or on the production line.
Abstract:
Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.
Abstract:
The worldwide "hyper-connection" of every object around us is the challenge that the Internet of Things paradigm promises to address. If the Internet has colonized the daily life of more than 2,000 million people around the globe, the Internet of Things faces the challenge of connecting more than 100,000 million "things" by 2020. The technologies underlying the Internet of Things are the cornerstone that promises to solve interrelated global problems such as exponential population growth, energy management in cities, and environmental sustainability in the medium and long term. On the one hand, this Project has the goal of acquiring knowledge about the prototyping technologies available on the market for the Internet of Things. On the other hand, the Project focuses on the development of a system for managing the devices of a Wireless Sensor and Actuator Network so that they offer services accessible from the Internet. To accomplish these objectives, the Project will begin with a detailed analysis of several "open source" hardware platforms that encourage the creative development of applications and can automatically extract information from the environment around them for transmission to external systems for later processing. In addition, web platforms aligned with the philosophy of the Internet of Things that enable the mass storage of the data that different hardware platforms transfer over the Internet will be studied. The Project will culminate in the proposal and specification of a service-oriented software architecture for embedded systems that allows communication between the devices of the network and the transmission of data to external systems, and that abstracts the complexities of the hardware away from application developers.
Abstract:
Most present digital image processing methods are concerned with the objective characterization of external properties such as shape, form or colour. This information concerns objective characteristics of different bodies and is used to extract the details needed to perform several different tasks. But on some occasions another type of information is needed, namely when the image processing system is to be applied to some operation related to living bodies. In that case, another type of object information may be useful; as a matter of fact, it may give additional knowledge about the object's subjective properties. Some of these properties are object symmetry, parallelism between lines and the feeling of size. These properties relate more to the internal sensations of living beings interacting with their environment than to the objective information obtained by artificial systems. This paper presents an elementary system able to detect some of the above-mentioned parameters. A first mathematical model to analyze these situations is reported. This theoretical model opens the possibility of implementing a simple working system. The basis of this system is the use of optical logic cells previously employed in optical computing.
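The system described is optical (based on logic cells), but the symmetry property itself can be illustrated digitally. A minimal Python sketch, assuming NumPy, that scores left-right mirror symmetry as the correlation between an image and its horizontal flip; the measure is an illustrative assumption, not the paper's model:

```python
import numpy as np

def vertical_symmetry_score(image):
    """Correlation between an image and its left-right mirror:
    1.0 for a perfectly symmetric image, near 0 for an unstructured one."""
    flipped = image[:, ::-1]
    a = image - image.mean()
    b = flipped - flipped.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 1.0

# Toy usage: a symmetric blob scores high, random noise scores near zero.
x = np.linspace(-1, 1, 64)
blob = np.exp(-(x[None, :]**2 + x[:, None]**2))  # radially symmetric pattern
rng = np.random.default_rng(0)
print(vertical_symmetry_score(blob))                  # ~1.0
print(vertical_symmetry_score(rng.random((64, 64))))  # ~0.0
```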
Abstract:
Acquired brain injury (ABI) refers to any brain damage occurring after birth. It usually causes damage to portions of the brain. ABI may result in a significant impairment of an individual's physical, cognitive and/or psychosocial functioning. The main causes are traumatic brain injury (TBI), cerebrovascular accident (CVA) and brain tumors. The main consequence of ABI is a dramatic change in the individual's daily life, involving disruption of the family, a loss of future income capacity and an increase in lifetime costs. One of the main challenges in neurorehabilitation is to obtain a dysfunctional profile of each patient in order to personalize the treatment. This paper proposes a system to generate a patient's dysfunctional profile by integrating theoretical, structural and neuropsychological information on a 3D brain imaging-based model. The main goal of this dysfunctional profile is to help therapists design the most suitable treatment for each patient. At the same time, the results obtained are a source of clinical evidence for improving the accuracy and quality of our rehabilitation system. Figure 1 shows the diagram of the system, which is composed of four main modules: image-based parameter extraction, theoretical modeling, classification, and co-registration and visualization.
Abstract:
The web has undergone a drastic transformation in recent years, mainly because of its popularization and the enormous amount of information it holds. These factors have driven the leap from the so-called Web of Documents to the Semantic Web, where every piece of information is related to others. The main advantages of linked data lie in its ease of reuse, its accessibility and its availability to be found by users. This work aims to demonstrate the usefulness of linked data applied to the geographic domain and to show how such data can be used today. To this end, spatial linked data from different sources have been exploited through external servers, or SPARQL endpoints. In addition, a private server capable of providing linked data stored on a personal computer has been used. The exploitation of linked data has been implemented in a web application written in JavaScript, trying to completely shield the user from the application's internal data handling. This application also includes some modules and options that interact with the queries sent to the servers, achieving a more intuitive and pleasant environment for the user.
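The application itself is written in JavaScript, but as a hedged illustration of exploiting a spatial SPARQL endpoint, here is a minimal Python sketch using the SPARQLWrapper library; the DBpedia endpoint, the geo vocabulary and the bounding-box values are examples, not the project's actual sources:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
    SELECT ?place ?lat ?long WHERE {
      ?place geo:lat ?lat ; geo:long ?long .
      FILTER (?lat > 40.3 && ?lat < 40.5 && ?long > -3.8 && ?long < -3.6)
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Prints resources located inside an example bounding box around Madrid.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["place"]["value"], row["lat"]["value"], row["long"]["value"])
```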
Abstract:
The topic chosen from the list of Professional Skills and Issues is Freedom of Information and its best-known variant, Open Source. The idea of this project is to present this subject, covering in detail the multitude of points it embraces: history, economics, law, society and the various applications it has influenced. It is aimed at all users who want to learn first-hand how the idea of technological freedom began, through to its applications: not only those who want to adopt it, but also those who already use it and need resources for new ideas. In this way we also approach the idea of freedom in technology that is currently under debate. The content is structured along the following branches: History, from its origins to the present; Economics, the advantages and disadvantages of this freedom; Legal problems at different levels; News and updates on applications; Society, covering acceptance and rejection by users, as well as its influence on ethics, education and innovation; and Applications, including most of the best-known applications in each of the branches of Open Source.
Abstract:
This work, "Analysis of the integration of INSPIRE resources from the Spanish Spatial Data Infrastructure (IDEE) into Open Data", aims to build a bridge between Spatial Data Infrastructures (SDI) and the world of "open data", taking advantage of the legal framework for the Re-use of Public Sector Information (PSI). After analyzing what PSI re-use, and open data in particular, is and how it is implemented by different administrations, the technical and legal requirements are studied for building the "translator" that channels SDI information into datos.gob.es, the central Spanish portal for the re-use of public sector information, giving greater visibility to INSPIRE resources. The work focuses on two points: first, to provide and document the technical solution that allows the Instituto Geográfico Nacional to supply its resources to datos.gob.es more efficiently; and second, to study the applicability of the same solution to the whole Spanish SDI (IDEE), noting problems detected in the analysis of its content and suggesting recommendations to minimize the problems of its potential re-use.
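As a hedged sketch of the kind of "translator" described, harvesting catalogue metadata from an INSPIRE CSW service and mapping it to minimal open-data entries, here is a Python example assuming the OWSLib library; the endpoint URL is an illustrative placeholder and the field mapping is simplified (datos.gob.es actually expects richer DCAT metadata):

```python
from owslib.csw import CatalogueServiceWeb

# The endpoint URL below is an illustrative placeholder for an INSPIRE CSW.
csw = CatalogueServiceWeb("https://example-idee-catalogue.es/csw")
csw.getrecords2(maxrecords=5)

# Map each catalogue record to a minimal open-data entry; a production
# 'translator' would emit full DCAT metadata as required by datos.gob.es.
datasets = []
for identifier, record in csw.records.items():
    datasets.append({
        "identifier": identifier,
        "title": record.title,
        "description": record.abstract,
        "keywords": record.subjects,
    })
print(datasets)
```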
Abstract:
Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and assessment of mitigation options across the world. To date, much of the information needed to describe the different processes in ecosystem models, such as transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization, remains inaccessible to the wider community, being stored within model source code or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods and to synthesize new ideas, which will help to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test its architecture and functionality, it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to models and best-practice guidelines; appropriate datasets for testing, calibrating and evaluating models; on-line tutorials; and links to modelling and data-provider research groups and their associated publications. A graphical user interface has been designed to view the model development tree and to access all of the above functions.