838 results for Modeling Rapport Using Machine Learning


Relevance: 100.00%

Publisher:

Abstract:

Dissertation for the integrated master's degree in Information Systems Engineering and Management

Relevance: 100.00%

Publisher:

Abstract:

Doctoral Programme in Electronics and Computer Engineering

Relevance: 100.00%

Publisher:

Abstract:

Text Mining has opened a vast array of possibilities for automatic information retrieval from large collections of text documents, and a wide variety of themes and document types can be analyzed with ease. More complex features, such as those used in Forensic Linguistics, can extract a deeper understanding from documents, making it possible to perform difficult tasks such as author identification. In this work we explore the capabilities of simpler Text Mining approaches for author identification in unstructured documents, in particular the ability to distinguish poetic works by two of Fernando Pessoa's heteronyms: Álvaro de Campos and Ricardo Reis. Several processing options were tested and accuracies of 97% were reached, which encourages further development.
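As a rough illustration of this kind of approach (not the thesis' actual pipeline or corpus), a character n-gram TF-IDF representation with a linear classifier can separate two authors. The training snippets below are placeholder strings, not Pessoa's poems:

```python
# Sketch: character n-gram TF-IDF + linear classifier for two-author attribution.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "ode maritima grande e vasta como o mar",      # stand-in for Campos
    "as maquinas rugem na noite da cidade",        # stand-in for Campos
    "coroai-me de rosas coroai-me em verdade",     # stand-in for Reis
    "sabio e o que se contenta com o espetaculo",  # stand-in for Reis
]
labels = ["Campos", "Campos", "Reis", "Reis"]

# Character n-grams capture stylistic cues (function words, morphology)
# rather than topic, which is why they are popular for authorship tasks.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
pred = model.predict(["coroai-me de rosas"])[0]
```

On a real corpus the texts would be full poems and the accuracy would be estimated with held-out data, as in the study above.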

Relevance: 100.00%

Publisher:

Abstract:

Recently, there has been growing interest in the field of metabolomics, materialized in a remarkable growth in experimental techniques, available data and related biological applications. Indeed, techniques such as Nuclear Magnetic Resonance, Gas or Liquid Chromatography, Mass Spectrometry, and Infrared and UV-visible spectroscopies have provided extensive datasets that can help in tasks such as biological and biomedical discovery, biotechnology and drug development. However, as happens with other omics data, the analysis of metabolomics datasets poses multiple challenges, both in terms of methodologies and in the development of appropriate computational tools; indeed, none of the available software tools addresses the multiplicity of existing techniques and data analysis tasks. In this work, we make available a novel R package, named specmine, which provides a set of methods for metabolomics data analysis, including data loading in different formats, pre-processing, metabolite identification, univariate and multivariate data analysis, machine learning, and feature selection. Importantly, the implemented methods provide adequate support for the analysis of data from diverse experimental techniques, integrating a large set of functions from several R packages in a powerful, yet simple-to-use, environment. The package, already available in CRAN, is accompanied by a website where users can deposit datasets, scripts and analysis reports to be shared with the community, promoting the efficient sharing of metabolomics data analysis pipelines.
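The kind of workflow the package automates (pre-processing followed by multivariate analysis) can be sketched generically. The fragment below does not use specmine's actual API; it runs scaling and PCA on synthetic "spectra" only:

```python
# Generic metabolomics-style pipeline sketch: scaling + PCA on fake spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 100 samples x 30 spectral bins; the second group has elevated "peaks" at bins 5-9
spectra = rng.normal(size=(100, 30))
spectra[50:, 5:10] += 3.0

scaled = StandardScaler().fit_transform(spectra)    # pre-processing step
scores = PCA(n_components=2).fit_transform(scaled)  # multivariate analysis

# The first principal component should separate the two sample groups.
pc1_group1 = scores[:50, 0].mean()
pc1_group2 = scores[50:, 0].mean()
```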

Relevance: 100.00%

Publisher:

Abstract:

Report on a master's teaching practicum in Mathematics Teaching in the 3rd Cycle of Basic Education and in Secondary Education

Relevance: 100.00%

Publisher:

Abstract:

Decision support models in intensive care units are developed to support medical staff in their decision-making process, but optimizing these models is particularly difficult because of the dynamic, complex and multidisciplinary nature of the environment. There is therefore constant research into, and development of, new algorithms capable of extracting knowledge from large volumes of data, in order to obtain better predictive results than current algorithms. To test the optimization techniques, a case study with real data provided by the INTCare project was explored. The data concern extubation cases. On this dataset, several models, such as Evolutionary Fuzzy Rule Learning, Lazy Learning, Decision Trees and many others, were analysed in order to detect early extubation. The hybrid models Decision Trees Genetic Algorithm, Supervised Classifier System and KNNAdaptive obtained the highest accuracy rates (93.2%, 93.1% and 92.97%, respectively), showing their feasibility for use in a real environment.
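A minimal sketch of the kind of classifier compared above, trained on synthetic stand-in features rather than the INTCare data (feature names and the label rule are invented for illustration):

```python
# Decision-tree classifier sketch on synthetic "vital sign" features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 400
X = rng.normal(size=(n, 3))                    # stand-ins for e.g. SpO2, HR, BP
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "early extubation" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
accuracy = tree.score(X_te, y_te)              # held-out accuracy, as reported above
```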

Relevance: 100.00%

Publisher:

Abstract:

Keywords: Kernel Functions, Machine Learning, Least Squares, Speech Recognition, Classification, Regression

Relevance: 100.00%

Publisher:

Abstract:

The goal of this project is to develop an optimization algorithm that, through a least-squares fit of the data, extracts the equivalent-circuit parameters of the theoretical model of an FBAR resonator from measured S-parameters. To carry out this work, the necessary theory of FBAR resonators is first developed, starting with their operation and structure and paying particular attention to modelling these resonators with the Mason, Butterworth Van-Dyke and Modified BVD models. Next, the theory of optimization and non-linear programming is reviewed. Once the theory has been presented, the implemented algorithm is described. This algorithm uses a multi-step strategy that speeds up the extraction of the resonator parameters.
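The least-squares extraction idea can be sketched as follows. The circuit values and initial guess below are illustrative, not taken from the thesis, and the fit runs on synthetic impedance data rather than measured S-parameters:

```python
# Sketch: least-squares extraction of Butterworth-Van Dyke (BVD) parameters.
import numpy as np
from scipy.optimize import least_squares

def bvd_impedance(f, Rm, Lm, Cm, C0):
    """BVD model: motional branch (Rm, Lm, Cm) in parallel with plate capacitance C0."""
    w = 2 * np.pi * f
    Zm = Rm + 1j * w * Lm + 1 / (1j * w * Cm)
    Zc0 = 1 / (1j * w * C0)
    return Zm * Zc0 / (Zm + Zc0)

true_params = (50.0, 100e-9, 100e-15, 1e-12)  # Rm [ohm], Lm [H], Cm [F], C0 [F]
f = np.linspace(1.3e9, 1.9e9, 600)            # sweep around the ~1.59 GHz resonance
z_meas = bvd_impedance(f, *true_params)       # synthetic "measurement"

def residuals(p):
    # Magnitude of the complex misfit between model and measurement.
    return np.abs(bvd_impedance(f, *p) - z_meas)

x0 = np.array([30.0, 90e-9, 95e-15, 0.8e-12])  # rough initial guess
fit = least_squares(residuals, x0, bounds=(0, np.inf), x_scale="jac")
```

A multi-step strategy like the one described above would refine subsets of the parameters in stages (e.g. C0 from the off-resonance background first) instead of fitting all four at once.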

Relevance: 100.00%

Publisher:

Abstract:

Research report based on a stay at the Équipe de Recherche en Syntaxe et Sémantique of the Université de Toulouse-Le Mirail, France, between July and September 2006. Several online acronym dictionaries exist today, notably Acronym Finder, Abbreviations.com and Acronyma, all of them devoted mainly to English acronyms. Like paper dictionaries, these resources suffer from becoming outdated because of the large number of acronyms coined every day; for example, a 2001 study by Pustejovsky et al. showed that about 12,000 new acronyms appeared in Medline abstracts every month. These resources are updated through acronyms submitted by users, but this technique has the drawback that editing the information is very slow and costly. A case in point is Abbreviations.com, which in October 2006 had around 100,000 acronyms pending editing and final incorporation. As a solution to this problem, systems for the automatic detection and extraction of acronyms from corpora have been proposed. Detection involves two steps: the first is identifying the acronyms within a corpus, and the second is disambiguation, i.e. selecting the appropriate expanded form of an acronym in a given context. Current acronym-detection systems use methods based on patterns, statistics, machine learning, or combinations of these. This study analyses the main acronym detection and disambiguation systems and the methods they use.
Each one is evaluated in terms of performance, measured as precision (the percentage of correct acronyms out of the total number of acronyms extracted by the system) and recall (the percentage of correct acronyms identified by the system out of the total number of acronyms in the corpus). As a result, criteria for the design of a future acronym-detection system for Spanish are presented.
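The two evaluation measures defined above can be made concrete with a toy example (the acronym lists are invented):

```python
# Precision and recall of an acronym extractor against a toy gold standard.
gold = {"ONU", "OTAN", "ADN", "SIDA", "PIB"}       # acronyms present in the corpus
extracted = {"ONU", "OTAN", "ADN", "HTML", "USB"}  # acronyms the system returned

correct = gold & extracted                 # 3 true positives
precision = len(correct) / len(extracted)  # correct / total extracted  -> 0.6
recall = len(correct) / len(gold)          # correct / total in corpus  -> 0.6
```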

Relevance: 100.00%

Publisher:

Abstract:

The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. The extension allows environmental data to be modelled simultaneously at several spatial scales: the joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data, and with this advantage it can provide efficient means to model local anomalies that may typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
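One simple way to realize the multi-scale idea (a sketch, not the paper's exact algorithm) is an SVR whose kernel is a fixed convex mixture of a short-scale and a large-scale RBF kernel; the data below combine a smooth trend with a local anomaly:

```python
# Multi-scale SVR sketch: convex mixture of two RBF kernels at different scales.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

def mixed_kernel(X, Y, w=0.5, gamma_short=50.0, gamma_large=0.5):
    """Mixture of a short-scale (large gamma) and a large-scale (small gamma) RBF."""
    return w * rbf_kernel(X, Y, gamma=gamma_short) + (1 - w) * rbf_kernel(X, Y, gamma=gamma_large)

X = np.linspace(0, 10, 200).reshape(-1, 1)
# Smooth large-scale trend plus a sharp local anomaly centred at x = 5.
y = np.sin(X[:, 0]) + 2.0 * np.exp(-20 * (X[:, 0] - 5) ** 2)

svr = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01).fit(X, y)
pred = svr.predict(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In the paper's setting the mixture weight and scales are learned from the data rather than fixed, which is what makes the method adaptive.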

Relevance: 100.00%

Publisher:

Abstract:

This paper considers the Ricardian Equivalence proposition when expectations are not rational and are instead formed using adaptive learning rules. We show that Ricardian Equivalence continues to hold provided suitable additional conditions on the learning dynamics are satisfied. However, new cases of failure can also emerge under learning: in particular, for Ricardian Equivalence to obtain, agents' expectations must not depend on the government's financial variables under deficit financing.
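A minimal numerical sketch of an adaptive learning rule of the kind studied here: agents revise their forecast of a variable by a fraction of the latest forecast error (a constant-gain recursion; the gain and values are made up for illustration):

```python
# Constant-gain adaptive learning: E_{t+1} = E_t + gain * (x_t - E_t).
def update_forecast(expectation, observed, gain=0.1):
    """One step of constant-gain adaptive learning."""
    return expectation + gain * (observed - expectation)

expectation = 0.0
for x in [1.0] * 200:  # the observed variable settles at the constant value 1.0
    expectation = update_forecast(expectation, x)
# The forecast error shrinks by a factor (1 - gain) each period,
# so the expectation converges to the observed value.
```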

Relevance: 100.00%

Publisher:

Abstract:

The goal of the QUALITEL project, carried out during the 2007/2008 academic year, was to improve the learning of first-year (selective-phase) telecommunication engineering students at the ETSETB by introducing a new teaching methodology based on the EHEA: scheduling the student's work inside and outside the classroom, increasing interaction between teachers and their students through a tutoring system, and using learning-support tools such as the ATENEA digital campus. The improvement in learning should translate into a significant increase in the percentage of students passing the selective phase, reversing the dynamics of recent years in which the ETSETB has suffered a very sharp drop in enrolled students. This drop originated in reduced demand for our degree programmes, which lowered the admission cut-off mark and, in turn, led to a high failure rate in the selective phase.

Relevance: 100.00%

Publisher:

Abstract:

In recent years, kernel methods have proven to be very powerful tools in many application domains in general, and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimensionality, few labeled samples and multiple noise sources) are efficiently dealt with by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement the input-output mapping but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is encoded into SVM classifiers by modifying the model loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
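A toy sketch of the loss-modification idea: confusing two similar land-cover classes is penalized less than confusing dissimilar ones. The class names and similarity values below are invented for illustration; the paper's actual construction modifies the SVM loss and kernel functions themselves:

```python
# Similarity-weighted misclassification loss for related output labels.
import numpy as np

classes = ["forest", "park", "asphalt"]
# similarity[i, j] in [0, 1]; loss for predicting class j when the truth is i.
similarity = np.array([
    [1.0, 0.8, 0.1],
    [0.8, 1.0, 0.2],
    [0.1, 0.2, 1.0],
])

def structured_loss(true_idx, pred_idx):
    # Zero loss for a correct label; small loss for a similar label.
    return 1.0 - similarity[true_idx, pred_idx]

loss_similar = structured_loss(0, 1)     # forest predicted as park
loss_dissimilar = structured_loss(0, 2)  # forest predicted as asphalt
```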