911 results for Machine Learning, Natural Language Processing, Descriptive Text Mining, POIROT, Transformer
Abstract:
Research report based on a stay at the Équipe de Recherche en Syntaxe et Sémantique of the Université de Toulouse-Le Mirail, France, between July and September 2006. Several online acronym dictionaries currently exist, most notably Acronym Finder, Abbreviations.com, and Acronyma, all of them devoted mainly to English acronyms. Like paper dictionaries, these resources quickly become outdated because of the large number of acronyms coined every day. For example, a 2001 study by Pustejovsky et al. showed that around 12,000 new acronyms appeared in Medline abstracts every month. These resources are updated through the submission of new acronyms by their users, but this technique has the disadvantage that editing the submitted information is very slow and costly: in October 2006, Abbreviations.com had around 100,000 acronyms pending review and final inclusion. As a solution to this problem, systems for the automatic detection and extraction of acronyms from corpora have been proposed. The detection process involves two steps: the first is the identification of acronyms within a corpus, and the second is disambiguation, that is, the selection of the appropriate expanded form of an acronym in a given context. Current acronym detection systems use methods based on patterns, statistics, machine learning, or combinations of these. This study reviews the main acronym detection and disambiguation systems and the methods they employ. Each is evaluated in terms of performance, measured as precision (the percentage of correct acronyms with respect to the total number of acronyms extracted by the system) and recall (the percentage of correct acronyms identified by the system with respect to the total number of acronyms present in the corpus). As a result, criteria are presented for the design of a future acronym detection system for Spanish.
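For reference, the two evaluation measures defined in this abstract can be written as formulas; this is the standard formulation of precision and recall, stated here in terms of acronym extraction:

```latex
\[
\mathrm{precision} = \frac{\#\,\text{correct acronyms extracted}}{\#\,\text{acronyms extracted by the system}},
\qquad
\mathrm{recall} = \frac{\#\,\text{correct acronyms extracted}}{\#\,\text{acronyms present in the corpus}}
\]
```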
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive abilities instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity given the measurements taken in the region of Briansk following the Chernobyl accident.
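The multi-scale idea can be illustrated, very roughly, by combining two RBF kernels with different bandwidths inside a standard SVR. The sketch below uses scikit-learn with synthetic data and hypothetical bandwidth/mixing parameters; it is not the paper's actual algorithm, which learns the mixture of scales from the data rather than fixing it in advance:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

# Hypothetical bandwidths: one short-scale and one large-scale RBF kernel,
# mixed with a fixed weight (the paper's method optimizes this mixture).
GAMMA_SHORT, GAMMA_LONG, MIX = 10.0, 0.1, 0.5

def multiscale_kernel(X, Y):
    """Weighted sum of two RBF kernels acting at different spatial scales."""
    return (MIX * rbf_kernel(X, Y, gamma=GAMMA_SHORT)
            + (1 - MIX) * rbf_kernel(X, Y, gamma=GAMMA_LONG))

# Toy spatial data: 2-D coordinates -> a measured quantity (synthetic, for illustration only).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
activity = np.sin(coords[:, 0] / 20) + 0.3 * rng.standard_normal(200)

model = SVR(kernel=multiscale_kernel, C=10.0, epsilon=0.05)
model.fit(coords, activity)
predictions = model.predict(coords[:5])   # interpolated values at known locations
```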
Abstract:
This project deals with the implementation of a cross-platform graphical tool for creating and editing electronic grammars that represent natural language. It is a tool for linguists and for projects such as the Spanish FrameNet Project, with which they can easily represent transducers in a more visual format, where transitions are drawn as "boxes", and save the results. Several options have been implemented to make the tool comfortable and customizable for the user, with functionality focused on their needs, such as importing/exporting automata from a regular expression. The report covers the implementation of all the components needed to build the GUI, as well as its functionality.
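As an illustration of the kind of data model such an editor manipulates, here is a minimal, hypothetical sketch in Python: grammar "boxes" with labels and outgoing transitions, serialized to JSON for saving. The class names and the file format are assumptions made for illustration, not the project's actual classes or storage format:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Box:
    """A 'box' in the graph editor: a state plus the labels displayed inside it."""
    box_id: int
    labels: list[str] = field(default_factory=list)
    transitions: list[int] = field(default_factory=list)  # ids of target boxes

@dataclass
class Grammar:
    """A grammar graph that can be saved and reloaded by the editor."""
    name: str
    boxes: list[Box] = field(default_factory=list)

    def save(self, path: str) -> None:
        with open(path, "w", encoding="utf-8") as fh:
            json.dump(asdict(self), fh, ensure_ascii=False, indent=2)

# Build a tiny two-box grammar (determiner followed by noun) and persist it.
g = Grammar("demo", [Box(0, ["<DET>"], [1]), Box(1, ["<N>"], [])])
g.save("demo_grammar.json")
```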
Abstract:
Customer Experience Management (CEM) has become a key success factor for companies. CEM manages all the experiences that a customer has with a provider of services or products. It is very important to know how a customer feels at each contact and then be able to automatically suggest the next task to perform, simplifying work otherwise done by people. This project develops a solution for evaluating experiences. First, web services are created that classify experiences into emotional states depending on the level of satisfaction, interest, and so on. This is done through text mining: unstructured information (text documents) that represents or describes the experiences is processed and classified using supervised learning methods. This part is built on a service-oriented architecture (SOA) to ensure the use of standards and to make the services accessible to any application, and the services are deployed on an application server. In the second part, two applications based on real cases are developed. In this phase cloud computing is key: an online development platform is used to create the whole application, including tables, objects, business logic, and user interfaces. Finally, the classification services are integrated into the platform, ensuring that experiences are evaluated and that follow-up tasks are created automatically.
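A minimal sketch of the supervised text-classification step, assuming a TF-IDF representation and a linear classifier from scikit-learn; the project's actual features, classifier, and emotional-state labels may differ, and the training examples below are purely illustrative:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set: free-text experience descriptions -> emotional state.
texts = [
    "The agent solved my problem quickly, great service",
    "I am still waiting for an answer after two weeks",
    "The product works fine but the manual is confusing",
    "Terrible experience, I want a refund",
]
labels = ["satisfied", "frustrated", "neutral", "frustrated"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# Predicted emotional state for a new, unseen experience description.
print(classifier.predict(["Nobody replied to my email"]))
```

In the architecture described above, a classifier of this kind would sit behind a web service endpoint so that any application in the SOA can submit an experience text and receive its emotional state.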
Abstract:
Reinforcement learning (RL) is a very suitable technique for robot learning, as it can learn in unknown environments and with real-time computation. The main difficulties in adapting classic RL algorithms to robotic systems are the generalization problem and the correct observation of the Markovian state. This paper attempts to solve the generalization problem by proposing the semi-online neural-Q_learning algorithm (SONQL). The algorithm uses the classic Q_learning technique with two modifications. First, a neural network (NN) approximates the Q_function, allowing the use of continuous states and actions. Second, a database of the most representative learning samples accelerates and stabilizes the convergence. The term semi-online refers to the fact that the algorithm uses not only the current learning sample but also past ones; nevertheless, the algorithm is able to learn in real time while the robot is interacting with the environment. The paper shows simulated results with the "mountain-car" benchmark and also real results with an underwater robot in a target-following behavior.
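The two modifications can be sketched roughly as follows, using an MLP regressor as the Q-function approximator and a simple FIFO buffer standing in for the database of representative samples (a simplification of SONQL's actual sample-selection scheme; all constants are illustrative):

```python
import random
from collections import deque

import numpy as np
from sklearn.neural_network import MLPRegressor

ACTIONS = [-1.0, 0.0, 1.0]   # illustrative discretized action set (e.g. a thruster command)
GAMMA = 0.95                 # discount factor
DB_SIZE = 500                # capacity of the learning-sample database

q_net = MLPRegressor(hidden_layer_sizes=(32,), warm_start=True, max_iter=50)
database = deque(maxlen=DB_SIZE)   # stores (state, action, reward, next_state) samples

def q_values(state):
    """Q(s, a) for every action; the NN takes a concatenated (state, action) vector."""
    if not hasattr(q_net, "coefs_"):           # network not trained yet
        return np.zeros(len(ACTIONS))
    inputs = np.array([np.append(state, a) for a in ACTIONS])
    return q_net.predict(inputs)

def learn_from_database(batch_size=64):
    """Replay stored samples: regress Q(s, a) towards r + gamma * max_a' Q(s', a')."""
    if not database:
        return
    batch = random.sample(list(database), min(batch_size, len(database)))
    X = np.array([np.append(s, a) for s, a, _, _ in batch])
    y = np.array([r + GAMMA * np.max(q_values(s2)) for _, _, r, s2 in batch])
    q_net.fit(X, y)   # warm_start=True continues from the previous weights

# During interaction, each control step would append a sample and trigger an update:
# database.append((state, action, reward, next_state)); learn_from_database()
```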
Abstract:
Language is typically a function of the left hemisphere, but the right hemisphere is also essential in some healthy individuals and patients. This inter-subject variability necessitates the localization of language function, at the individual level, prior to neurosurgical intervention. Such assessments are typically made by comparing left and right hemisphere language function to determine "language lateralization" using clinical tests or fMRI. Here, we show that language function needs to be assessed at the region- and hemisphere-specific level, because laterality measures can be misleading. Using fMRI data from 82 healthy participants, we investigated the degree to which activation for a semantic word matching task was lateralized in 50 different brain regions and across the entire cortex. This revealed two novel findings. First, the degree to which language is lateralized across brain regions and between subjects was primarily driven by differences in right hemisphere activation rather than differences in left hemisphere activation. Second, we found that healthy subjects who have relatively high left lateralization in the angular gyrus also have relatively low left lateralization in the ventral precentral gyrus. These findings illustrate spatial heterogeneity in language lateralization that is lost when global laterality measures are considered. It is likely that the complex spatial variability we observed in healthy controls is more exaggerated in patients with brain damage. We therefore highlight the importance of investigating within-hemisphere regional variations in fMRI activation, prior to neurosurgical intervention, to determine how each hemisphere and each region contributes to language processing. Hum Brain Mapp, 2010.
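For context, a laterality index of the kind referred to above is commonly computed from left- and right-hemisphere activation measures as follows (one standard definition; the study's exact formulation may differ):

```latex
\[
\mathrm{LI} = \frac{A_L - A_R}{A_L + A_R}
\]
```

where \(A_L\) and \(A_R\) are activation measures (for example, suprathreshold voxel counts or summed effect sizes) in homologous left and right regions; values near \(+1\) indicate strong left lateralization and values near \(-1\) strong right lateralization.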
Abstract:
Among various advantages, their small size makes model organisms preferred subjects of investigation. Yet, even in model systems, detailed analysis of numerous developmental processes at the cellular level is severely hampered by their scale. For instance, secondary growth of Arabidopsis hypocotyls creates a radial pattern of highly specialized tissues that comprises several thousand cells, starting from a few dozen. This dynamic process is difficult to follow because of its scale and because it can only be investigated invasively, precluding a comprehensive understanding of the cell proliferation, differentiation, and patterning events involved. To overcome this limitation, we established an automated quantitative histology approach. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with automated cell type recognition through machine learning, we could establish a cellular-resolution atlas that reveals vascular morphodynamics during secondary growth, for example equidistant phloem pole formation. DOI: http://dx.doi.org/10.7554/eLife.01567.001.
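The "segmentation plus machine-learning cell type recognition" pipeline can be sketched generically as follows, using scikit-image and scikit-learn with entirely synthetic training features; the study's actual segmentation method, features, and classifier are not reproduced here:

```python
import numpy as np
from skimage import filters, measure
from sklearn.ensemble import RandomForestClassifier

def segment_cells(section_image):
    """Very simplified segmentation: Otsu threshold + connected components."""
    mask = section_image > filters.threshold_otsu(section_image)
    return measure.regionprops(measure.label(mask))

def region_features(region):
    """Per-cell shape descriptors fed to the cell-type classifier."""
    return [region.area, region.eccentricity, region.solidity, region.perimeter]

# Purely synthetic stand-in for annotated training data; real training would use
# features of manually labelled cells from reference cross-sections.
rng = np.random.default_rng(1)
X_train = rng.random((40, 4))
y_train = np.repeat(["phloem", "xylem"], 20)

classifier = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

def classify_section(section_image):
    """Segment a cross-section and predict a cell type for every detected region."""
    X = np.array([region_features(r) for r in segment_cells(section_image)])
    return classifier.predict(X)
```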
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, a correct processing of such data requires the adoption of flexible, robust, and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and the application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has or has not changed. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, which is particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transform the pair of bi-temporal images and reduce their differences unrelated to changes in land cover are studied. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or its spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
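As a point of reference for the unsupervised binary change detection discussed above, a classical baseline is change vector analysis with automatic thresholding. The sketch below (NumPy and scikit-image, synthetic data) illustrates that baseline only; it is not the specific methods developed in the thesis:

```python
import numpy as np
from skimage.filters import threshold_otsu

def change_map(image_t1, image_t2):
    """Unsupervised baseline: per-pixel spectral difference magnitude, thresholded with Otsu.

    image_t1, image_t2: co-registered acquisitions of shape (rows, cols, bands).
    Returns a boolean map where True marks pixels flagged as changed.
    """
    diff = image_t2.astype(float) - image_t1.astype(float)
    magnitude = np.sqrt((diff ** 2).sum(axis=-1))   # change vector magnitude per pixel
    return magnitude > threshold_otsu(magnitude)

# Synthetic example: a flat scene in which a small square changes between the two dates.
t1 = np.ones((100, 100, 3)) * 0.2
t2 = t1.copy()
t2[40:60, 40:60, :] = 0.8
binary_changes = change_map(t1, t2)   # True inside the altered square
```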
Abstract:
Building a personalized model that describes the drug concentration inside the body of each patient is highly important to clinical practice and demanding for modeling tools. Instead of using traditional explicit methods, in this paper we propose a machine learning approach to describe the relation between the drug concentration and patients' features. Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. We focus mainly on the prediction of drug concentrations as well as the analysis of the influence of different features. Models are built based on Support Vector Machines, and the prediction results are compared with those of traditional analytical models.
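A minimal sketch of this kind of model, using scikit-learn support vector regression on entirely synthetic, illustrative patient features; the study's actual feature set, kernel choice, and clinical data are not reproduced here:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical patient features (synthetic, for illustration only):
# [dose_mg, hours_since_dose, weight_kg, age_years]
X = np.array([
    [200, 2, 70, 34],
    [200, 8, 70, 34],
    [400, 2, 85, 51],
    [400, 12, 85, 51],
    [300, 6, 60, 27],
])
y = np.array([5.1, 2.3, 8.7, 1.9, 4.2])   # measured concentrations (mg/L), synthetic

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)

# Predicted concentration for a new patient profile.
print(model.predict([[250, 4, 72, 40]]))
```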