808 results for Information extraction strategies
Abstract:
Enterprises operate in a competitive and open market in which new companies are rapidly created, international competitors establish themselves in the local market, and products and services are invented or improved to ensure quality, sophistication and low cost. In this scenario, family businesses seek survival through new information and strategies to resolve existing conflicts and overcome the challenges of the globalized market. However, resistance to change is common in the more traditional family-business culture, so modifying solid structures built over many years brings insecurity, fragility and a sense of threat in the face of the unfamiliar. This project analyzes the Brazilian family business, in particular the company Móveis Zacarias, with regard to its historical trajectory, representativeness, economic importance, concept, structure, culture and the problems peculiar to it, such as succession, management, professionalization and communication. It also demonstrates the importance of the Public Relations professional in mediating conflicts in family businesses: by using the appropriate communication tools, the PR professional can mediate and facilitate the relationship among the members of the family business and sustain both systems, leading to cooperation between business and family through preventive actions.
Abstract:
We live in a society in which the use of the Internet has become central to everyday life. Relationships now frequently take place through technological devices rather than face-to-face contact, for instance in Internet forums where people discuss online. Analyzing such discussions as a whole, however, is a major challenge because of the large amount of data. This work investigates the use of visual representations to support an exploratory analysis of the content of messages from discussion forums, considering both theme and chronology. The target forums belong to the educational area, and their analysis is normally performed manually, i.e., by direct reading, message by message. The perceptual and cognitive properties of the human visual system give a person the capacity to carry out high-level information-extraction tasks from a graphical or visual representation of data. This work is therefore grounded in Visual Analytics, an area that aims to create techniques that amplify these human abilities. We used software that builds a visualization of forum data and supports analysis of forum content. During the work we identified the need to create a new tool to clean the data, because it contained a great deal of unnecessary information. After cleaning the data, we created a new visualization and carried out an analysis in search of new knowledge, and finally compared this visualization with the manual analysis that had previously been made. The results make the potential of visualization evident: it correlates pieces of information better, enables the acquisition of knowledge that was not identified in the initial analysis, and allows a better use of the forum content.
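As a rough illustration of the kind of cleaning step mentioned above, the sketch below strips markup and quoted noise from raw forum messages before they are visualized; the filtering rules are assumptions for exposition, not the tool actually built in this work.

```python
# Minimal sketch of a forum-message cleaning step (illustrative assumptions only).
import re

def clean_message(raw_html: str) -> str:
    """Strip markup and quoted noise from one forum message."""
    text = re.sub(r"<[^>]+>", " ", raw_html)    # drop HTML tags
    text = re.sub(r"(?m)^>.*$", "", text)       # drop quoted reply lines
    return re.sub(r"\s+", " ", text).strip()    # normalize whitespace

def clean_forum(messages):
    """Keep only messages that still carry content after cleaning."""
    cleaned = (clean_message(m) for m in messages)
    return [m for m in cleaned if len(m.split()) >= 3]  # discard near-empty posts
```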
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This theoretical proposal applies evolutionary aesthetics, animal signalling and sexual selection to understand our artistic cognition, especially rock art aesthetics. Iconographic motifs, universally found in rock art, indicate which set of pre-artistic aesthetic psychological biases has been co-opted to catch the viewer's attention. The co-evolutionary process of sexual selection could have shaped the design features of both rock art images and their aesthetic cognition by conferring mutual benefits on both producers, via manipulation, and receivers, via information extraction. We show some strategic techniques identified in rock art and art in general that indicate the occurrence of this co-evolution between producers and receivers.
Abstract:
This manuscript describes an automatic setup for the screening of microcystins in surface waters using photometric detection. Microcystins are toxins released by cyanobacteria in aquatic environments and are considered highly toxic to humans. For that reason, the World Health Organization (WHO) has proposed a provisional guideline value for drinking water of 1 µg L-1. In this work, we developed an automated equipment setup that allows water to be screened for microcystin concentrations below 0.1 µg L-1. The photometric method was based on the enzyme-linked immunosorbent assay (ELISA), and the analytical signal was monitored at 458 nm using a homemade LED-based photometer. The proposed system was employed for the detection of microcystins in river and lake waters. Accuracy was assessed by processing samples with a reference method and applying the paired t-test to the results; no significant difference was observed at the 95% confidence level. Other useful features were achieved, including a linear response ranging from 0.05 up to 2.00 µg L-1 (R² = 0.999) and a detection limit of 0.03 µg L-1 of microcystins.
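For illustration only, the sketch below shows how photometer readings at 458 nm could be turned into a screening decision against the 0.1 µg L-1 alert level and the 1 µg L-1 WHO guideline, assuming a linear calibration over the reported range; the standard values and the sign of the slope are placeholders, not data from the paper.

```python
# Minimal sketch: linear calibration of photometer readings and a screening decision.
# Calibration data below are hypothetical; for a competitive ELISA the signal typically
# decreases with concentration, hence the negative slope.
import numpy as np

standards_ug_L = np.array([0.05, 0.5, 1.0, 2.0])   # standard concentrations (µg/L)
standards_abs  = np.array([0.95, 0.80, 0.63, 0.30]) # hypothetical absorbances at 458 nm

slope, intercept = np.polyfit(standards_abs, standards_ug_L, 1)  # linear calibration

def screen_sample(absorbance: float, guideline: float = 1.0, alert: float = 0.1):
    """Estimate microcystin concentration and compare it with alert and WHO levels."""
    conc = slope * absorbance + intercept
    return conc, conc > alert, conc > guideline

conc, above_alert, above_who = screen_sample(0.60)
print(round(conc, 2), above_alert, above_who)
```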
Abstract:
Civil infrastructure provides essential services for the development of both society and the economy, so it is very important to manage these systems efficiently to ensure sound performance. However, extracting information from the available data is challenging, which also calls for methodologies and frameworks to assist stakeholders in decision making. This research proposes methodologies to evaluate system performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescue and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random, while smaller, meaningful subsets show good random behavior. Failure rates over time are further analyzed by applying existing reliability models and non-parametric approaches, and a scheme is proposed to depict rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of system performance. The challenges in predicting facility condition lie in estimating transition probabilities and in model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a sensitivity analysis is performed for the application of a non-homogeneous Markov chain model. Scenarios are investigated by assuming that transition probabilities follow a Weibull-regressed function and fall within an interval estimate; for each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, whereas for the interval estimate the outputs show variations similar to the inputs. Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP) and vitrified clay pipe (VCP). Life cycle cost analysis covers the material extraction, construction and rehabilitation phases; in the rehabilitation phase, the Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases, and emissions are then compared quantitatively among the alternatives to support decision making.
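The sketch below illustrates the simulation idea just described: condition states evolve under a non-homogeneous Markov chain whose stay probability is a Weibull-shaped function of age, and a Monte Carlo run yields the condition distribution at a given year. The five-state scale and the Weibull parameters are illustrative assumptions, not values from the research.

```python
# Minimal sketch of a non-homogeneous Markov chain deterioration model with Monte Carlo.
# States, Weibull parameters and horizon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
STATES = 5                       # 1 = best condition, 5 = worst
shape, scale = 2.0, 40.0         # hypothetical Weibull parameters (years)

def stay_probability(age: float) -> float:
    """One-year probability of remaining in the current state (Weibull-shaped hazard)."""
    hazard = (shape / scale) * (age / scale) ** (shape - 1.0)
    return float(np.exp(-hazard))

def simulate_condition(years: int = 80) -> np.ndarray:
    """One realization of the condition path of a facility over its lifetime."""
    path, state = [1], 1
    for age in range(1, years):
        if state < STATES and rng.random() > stay_probability(age):
            state += 1           # deteriorate by one state
        path.append(state)
    return np.array(path)

# Monte Carlo: distribution of condition states at year 50 over many simulated facilities
runs = np.array([simulate_condition()[50] for _ in range(5000)])
print(np.bincount(runs, minlength=STATES + 1)[1:] / runs.size)
```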
Abstract:
In the context of the Semantic Web, natural language descriptions associated with ontologies have proven to be of major importance not only to support ontology developers and adopters, but also to assist in tasks such as ontology mapping, information extraction, or natural language generation. In the state-of-the-art we find some attempts to provide guidelines for URI local names in English, and also some disagreement on the use of URIs for describing ontology elements. When trying to extrapolate these ideas to a multilingual scenario, some of these approaches fail to provide a valid solution. On the basis of some real experiences in the translation of ontologies from English into Spanish, we provide a preliminary set of guidelines for naming and labeling ontologies in a multilingual scenario.
Abstract:
This paper introduces a semantic language developed to be used in a semantic analyzer based on linguistic and world knowledge. Linguistic knowledge is provided by a Combinatorial Dictionary and several sets of rules; extra-linguistic information is stored in an Ontology. The meaning of the text is represented by means of a series of RDF-type triples of the form predicate (subject, object). The semantic analyzer is one of the options of the multifunctional ETAP-3 linguistic processor and can be used for Information Extraction and Question Answering. We describe the semantic representation of expressions that provide an assessment of the number of objects involved and/or give a quantitative evaluation of different types of attributes. We focus on the following aspects: 1) parametric and non-parametric attributes; 2) gradable and non-gradable attributes; 3) ontological representation of different classes of attributes; 4) absolute and relative quantitative assessment; 5) punctual and interval quantitative assessment; 6) intervals with precise and fuzzy boundaries.
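As an illustration of the triple representation, the sketch below renders the sentence "About thirty students attended the lecture" as predicate(subject, object) triples with a quantitative assessment attached to the subject; the naming scheme is assumed for exposition and is not actual ETAP-3 output.

```python
# Illustrative predicate(subject, object) triples for
# "About thirty students attended the lecture" (assumed rendering, not ETAP-3 output).
triples = [
    ("attend",    "students#1", "lecture#1"),    # core predication
    ("quantity",  "students#1", "q#1"),          # quantitative assessment of the subject
    ("value",     "q#1",        "30"),
    ("precision", "q#1",        "approximate"),  # fuzzy rather than punctual assessment
    ("tense",     "attend",     "past"),
]

def facts_about(node, graph=triples):
    """Return every triple whose subject is the given node."""
    return [(p, s, o) for (p, s, o) in graph if s == node]

print(facts_about("q#1"))   # -> the quantitative-evaluation triples
```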
Abstract:
The scientific method is a methodological approach to the process of inquiry in which an empirically grounded theory of nature is constructed and verified [14]. It is a hard, exhaustive and dedicated multi-stage procedure that a researcher must perform to achieve valuable knowledge. To help researchers during this process, a recommender system, intended as a research assistant, is designed to provide them with useful tools and information for each stage of the procedure. A new similarity measure between research objects and a representational model based on domain spaces, to handle them at different levels, are created, as well as a system to build them from OAI-PMH (and RSS) resources. The system seeks a sound balance between scientific insight into individual scientific creative processes and technical implementation using innovative technologies for large-scale information extraction, document summarization and semantic analysis.
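One plausible form such a similarity measure could take is sketched below: cosine similarity over bag-of-words vectors of two research-object descriptions. This is an illustrative stand-in; the measure over domain spaces defined in the work is not reproduced here.

```python
# Minimal sketch of a similarity measure between research objects (illustrative only).
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two research-object descriptions."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("ontology learning from text corpora",
                        "learning ontologies from large text collections"))
```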
Abstract:
Nanotechnology is commonly framed as a technological goal that helps research deal with the manipulation and precise control of matter at dimensions ranging from 1 to 100 nanometers. The prefix nano comes from the Greek word νᾶνος, meaning dwarf, and denotes a factor of 10^-9, which, applied to units of length, corresponds to one billionth of a meter. This science makes it possible to work with molecular structures and their atoms, obtaining materials that exhibit physical, chemical and biological phenomena very different from those shown by the same materials at larger scales. In medicine, for example, nanometric compounds and nanostructured materials often offer greater efficacy than traditional chemical formulations: combining older compounds with these new ones creates new therapies, and in some cases replaces them, revealing new diagnostic and therapeutic properties. At the same time, the complexity of information at the nano level is far greater than at conventional biological levels, so any workflow in nanomedicine inherently requires advanced information-management strategies. Many nanotechnology researchers are looking for ways to obtain information about these nanometric materials in order to improve their studies, which often means testing these methods or creating new compounds to help modern medicine fight powerful diseases such as cancer. Yet it is currently difficult to find a tool that provides the specific information they seek among the thousands of clinical trial records uploaded to the web every day. Biomedical informatics attempts to provide the framework for dealing with these information challenges at the nano level; in this context, the new area of nanoinformatics aims to detect and establish the links between medicine, nanotechnology and informatics, encouraging the application of computational methods to the questions and problems that arise from information in the broad intersection between biomedicine and nanotechnology. In addition, many biomedical researchers want to compare the information contained in nanotechnology-related clinical trial records registered on different websites around the world, obtaining the trials registered in North America and those registered in Europe, and to learn whether this field is really being exploited on both continents; the problem is that no tool exists to estimate the approximate share of such trials among all those registered on these websites. In this master's thesis, the author uses an improved text pre-processing step together with an algorithm that a previous doctoral thesis, after testing many alternatives, identified as the best-performing text-processing approach, to obtain a close estimate that helps distinguish when a clinical trial record contains information about nanotechnology and when it does not. In other words, the scientific literature and the clinical trial registries available for the two continents are analyzed to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by a processing stage that structures and analyzes this information automatically. The analysis concludes with the estimate mentioned above, needed to compare the amount of nanotechnology research on these two continents. A gold-standard reference data set (a manually annotated training set) is used, and the test set is the entire database of clinical trial records, making it possible to automatically distinguish studies centered on nano-drugs, nano-devices and nano-methods from those focused on testing traditional pharmaceutical products.
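The classification task described above can be sketched as follows: a model trained on the manually annotated gold standard decides whether a trial record concerns nanotechnology, and the predictions over the full registry yield the per-continent estimate. The TF-IDF plus logistic-regression pipeline and the toy examples below are illustrative assumptions, not the preprocessing and algorithm actually selected in the thesis.

```python
# Minimal sketch of nano-vs-traditional classification of clinical-trial records
# (illustrative pipeline and toy data, not the thesis method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical gold standard: manually annotated trial summaries (1 = nano, 0 = traditional)
train_texts = [
    "liposomal nanoparticle formulation of doxorubicin for solid tumors",
    "gold nanoparticle contrast agent for tumor imaging",
    "oral metformin dose comparison in type 2 diabetes",
    "standard chemotherapy regimen versus placebo",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Applied to a registry dump, the predicted labels feed the per-continent estimate
registry = ["nanostructured lipid carrier for drug delivery", "statin therapy adherence study"]
print(model.predict(registry))
```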
Abstract:
The purpose of this thesis is the automatic construction of ontologies from texts, within the area known as Ontology Learning. This discipline aims to automate the construction of domain models from structured or unstructured information sources; it originated at the turn of the millennium, as a result of the exponential growth in the volume of information accessible on the Internet. Since most information on the web is presented as text, automatic ontology learning has focused on the analysis of this type of source, drawing over the years on very diverse techniques from areas such as Information Retrieval, Information Extraction and Summarization and, in general, from areas related to natural language processing. The main contribution of this thesis is that, in contrast with most current techniques, the proposed method does not analyze the surface syntactic structure of language but explores its deep semantic level; its objective is to infer the domain model from the way the meanings of sentences are articulated in natural language. Because the deep semantic level is language-independent, the method can operate in multilingual scenarios in which information from texts in different languages must be combined. To access this level of language, the method uses the interlingua model. These formalisms, which come from the area of machine translation, make it possible to represent the meaning of sentences independently of the language. In particular, UNL (Universal Networking Language) is used, considered the only general-purpose interlingua that is standardized. The approach taken in this thesis continues previous work by the author and by the research group to which he belongs on using the interlingua model in multilingual information extraction and retrieval. Essentially, the method tries to identify, in the UNL representation of the texts, certain regularities from which the pieces of the domain ontology can be deduced. Since UNL is a formalism based on semantic networks, these regularities take the form of graphs, generalized into structures called linguistic patterns. On the other hand, UNL still preserves certain discourse-cohesion mechanisms inherited from natural languages, such as anaphora. In order to improve the understanding of expressions, the method also provides, as another significant contribution, an algorithm for resolving pronominal anaphora within the interlingua model, limited to third-person personal pronouns whose antecedent is a proper noun. The proposed method rests on a formal framework, built by adapting definitions from graph theory and adding new ones, that situates the notions of UNL expression and linguistic pattern as well as the pattern-matching operations that underpin the method's processes. Both the formal framework and all the processes defined by the method have been implemented in order to carry out the experimentation, applying them to an article from the UNESCO EOLSS "Encyclopedia of Life Support Systems" collection.
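The pattern-matching idea can be sketched as follows: a linguistic pattern is a small graph with variables that is matched against the UNL representation of a sentence to propose ontology pieces. The relation labels and the example pattern below are simplified assumptions, not the patterns defined in the thesis.

```python
# Minimal sketch of matching a variable-bearing pattern against a UNL-like semantic graph
# (relation labels and pattern are simplified assumptions).
from itertools import product

# Toy UNL-style graph for "A laptop is a portable computer"
graph = [("aoj", "computer", "laptop"), ("mod", "computer", "portable")]

# Pattern whose match is read as "?sub is a kind of ?super"
pattern = [("aoj", "?super", "?sub")]

def unify(pattern_edge, graph_edge, env):
    """Bind pattern variables (prefixed with '?') consistently against one graph edge."""
    for p, g in zip(pattern_edge, graph_edge):
        if p.startswith("?"):
            if env.setdefault(p, g) != g:
                return False
        elif p != g:
            return False
    return True

def match(pattern, graph):
    """Return every variable binding under which all pattern edges occur in the graph."""
    bindings = []
    for combo in product(graph, repeat=len(pattern)):
        env = {}
        if all(unify(p, e, env) for p, e in zip(pattern, combo)):
            bindings.append(env)
    return bindings

print(match(pattern, graph))   # -> [{'?super': 'computer', '?sub': 'laptop'}]
```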
Abstract:
The goal of the project is to analyze, experiment with, and develop intelligent, interactive and multilingual Text Mining technologies as a key element of the next generation of search engines: systems with the capacity to find "the need behind the query". This new generation will provide specialized services and interfaces according to the search domain and the type of information needed. Moreover, it will integrate textual search (websites) with multimedia search (images, audio, video), and it will be able to find and organize information rather than generate ranked lists of websites.
Abstract:
Distant supervision methods for Information Extraction use correct tuples to acquire mentions of those tuples and thereby train a traditional supervised information extraction system. In this article we analyze the sources of noise in the mentions and explore simple methods for filtering out noisy mentions. The results show that combining tuple filtering by frequency, mutual information, and the removal of mentions far from the centroids of their respective labels significantly improves the results of two information extraction models.
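One of the filters described above can be sketched as follows: mentions whose feature vectors lie far from the centroid of their relation label are discarded. The vectors below are random placeholders; in practice they would encode the lexical and syntactic context of each mention.

```python
# Minimal sketch of centroid-based filtering of distantly supervised mentions
# (placeholder vectors; real features would come from the mention contexts).
import numpy as np

def filter_by_centroid(vectors: np.ndarray, keep_ratio: float = 0.8) -> np.ndarray:
    """Keep the keep_ratio fraction of mentions closest to their label centroid."""
    centroid = vectors.mean(axis=0)
    dist = np.linalg.norm(vectors - centroid, axis=1)
    threshold = np.quantile(dist, keep_ratio)
    return vectors[dist <= threshold]

rng = np.random.default_rng(0)
mentions_for_label = rng.normal(size=(100, 50))   # placeholder mention vectors for one label
kept = filter_by_centroid(mentions_for_label)
print(kept.shape)                                  # roughly 80 of 100 mentions survive
```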
Abstract:
Currently there is an overwhelming number of scientific publications in the Life Sciences, especially in Genetics and Biotechnology. This huge amount of information is structured in corporate Data Warehouses (DW) or in Biological Databases (e.g. UniProt, RCSB Protein Data Bank, CEREALAB or GenBank), whose main drawback is the cost of updating them, which quickly makes them obsolete. Nevertheless, these databases are the main tool for enterprises when they want to update their internal information, for example when a plant-breeding enterprise needs to enrich its genetic information (an internal structured database) with recently discovered genes related to specific phenotypic traits (external unstructured data) in order to choose the desired parentals for breeding programs. In this paper, we propose to complement the internal information with external data from the Web using Question Answering (QA) techniques. We go a step further by providing a complete framework for integrating unstructured and structured information that combines traditional database and DW architectures with QA systems. The great advantage of our framework is that decision makers can instantly compare internal data with external data from competitors, allowing them to take quick strategic decisions based on richer data.
Abstract:
We present a tool based on drug-effect co-occurrences for detecting adverse reactions and indications in user comments from a Spanish-language medical forum. We also describe the automatic construction of the first Spanish database of drug indications and adverse effects.
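A minimal sketch of the co-occurrence idea follows: count how often a drug and an effect term appear together in the same comment. The tiny lexicons below are illustrative samples, not the resources actually built for the tool.

```python
# Minimal sketch of drug-effect co-occurrence counting in forum comments
# (toy lexicons and comments; the real tool uses much larger Spanish resources).
from collections import Counter
from itertools import product

drugs   = {"ibuprofeno", "omeprazol"}
effects = {"dolor de cabeza", "nauseas", "acidez"}

def cooccurrences(comments):
    """Count (drug, effect) pairs mentioned together within a single comment."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        found_drugs   = [d for d in drugs if d in text]
        found_effects = [e for e in effects if e in text]
        counts.update(product(found_drugs, found_effects))
    return counts

comments = ["Tomo ibuprofeno y me da acidez",
            "El omeprazol me quito la acidez pero tengo nauseas"]
print(cooccurrences(comments).most_common())
```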