786 results for Modeling Non-Verbal Behaviors Using Machine Learning
Abstract:
Graduate Program in Nursing (Professional Master's) - FMB
Abstract:
Hundreds of terabytes of CMS (Compact Muon Solenoid) data are accumulated for storage every day at the University of Nebraska-Lincoln, one of the eight US CMS Tier-2 sites. Managing this data includes retaining useful CMS data sets and clearing storage space for newly arriving data by deleting less useful data sets. This important task is currently done manually and requires a large amount of time. The overall objective of this study was to develop a methodology to help identify the data sets to be deleted when storage space is required. CMS data is stored using HDFS (Hadoop Distributed File System), and HDFS logs give information about file access operations. Hadoop MapReduce was used to feed the information in these logs to Support Vector Machines (SVMs), a machine learning algorithm for classification and regression, which is used in this thesis to develop a classifier. The time needed to classify data sets with this method depends on the size of the input HDFS log file, since the Hadoop MapReduce algorithms used here have O(n) complexity. The SVM methodology produces a list of data sets for deletion along with their respective sizes. This methodology was also compared with a heuristic called Retention Cost, computed from the size of a data set and the time since its last access, which helps decide how useful a data set is. The accuracies of both were compared by calculating the percentage of data sets predicted for deletion that were accessed at a later point in time. Our SVM methodology proved to be more accurate than the Retention Cost heuristic, and it could be used to solve similar problems involving other large data sets.
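A minimal sketch of how such a classifier could look, assuming per-data-set features (size, time since last access, recent access count) have already been aggregated from the HDFS logs; the feature names, toy data, and scikit-learn pipeline below are illustrative, not the thesis implementation:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # One row per data set: [size_gb, days_since_last_access, accesses_last_90d]
    X_train = [[1200, 5, 340], [80, 410, 0], [560, 30, 55], [2500, 700, 1]]
    y_train = [0, 1, 0, 1]            # 1 = candidate for deletion, 0 = retain

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)

    # Predicted label for a large, rarely accessed data set
    print(clf.predict([[900, 365, 2]]))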
Abstract:
Competitive learning is an important machine learning approach that is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particles' walking rule is a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative approach to the study of competitive learning.
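As an illustration of the walking rule described above, a particle can choose its next node as a stochastic mix of a random step and a preferential step toward nodes it already dominates; the plain-Python sketch below uses hypothetical data structures and is a reading aid, not the authors' implementation:

    import random

    def next_node(graph, particle, current, domination, p_pref=0.6):
        # graph: {node: iterable of neighbors}
        # domination: {node: {particle: domination level}}
        neighbors = list(graph[current])
        if random.random() < p_pref:
            # preferential move: weight neighbors by this particle's domination level
            weights = [domination[n].get(particle, 1e-6) for n in neighbors]
            r, acc = random.uniform(0, sum(weights)), 0.0
            for n, w in zip(neighbors, weights):
                acc += w
                if r <= acc:
                    return n
            return neighbors[-1]
        # random move: uniform choice among neighbors
        return random.choice(neighbors)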
Abstract:
In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots encountered most frequently are machines that do work that is too dangerous, boring, or onerous for humans; most of the robots in the world are of this type, and they can be found in the automotive, medical, manufacturing, and space industries. A robot, then, is a system that contains sensors, control systems, manipulators, power supplies, and software all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, including the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, i.e., the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, inferring grasps for an object from information about already known objects. Humans, however, can select the best grasp from a vast repertoire not only by considering the physical attributes of the object but also by aiming at a certain effect. This is why our study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gathered through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This representation has several advantages: it takes into account the uncertainty of the real world, copes with sensor noise, encodes notions of causality, and provides a unified network for learning. Since the implemented network is based on human expert knowledge, it is very interesting to add an automated method for learning its structure: as more tasks and object features are introduced in the future, a complex network designed only from expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the expert-modeled network, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
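By way of illustration, a score-based structure learner of the kind discussed here evaluates candidate DAGs over the discrete variables (sensor features and task labels) with a decomposable score such as BIC and keeps the best-scoring structure; the sketch below assumes fully observed discrete data and hypothetical variable names:

    import math
    from collections import Counter

    def bic_score(data, structure, states):
        # data: list of dicts {var: value}; structure: {var: tuple of parents};
        # states: {var: list of possible values}
        score, n = 0.0, len(data)
        for var, parents in structure.items():
            counts = Counter((tuple(r[p] for p in parents), r[var]) for r in data)
            parent_counts = Counter(tuple(r[p] for p in parents) for r in data)
            log_lik = sum(c * math.log(c / parent_counts[pa])
                          for (pa, _), c in counts.items())
            n_params = (len(states[var]) - 1) * math.prod(len(states[p]) for p in parents)
            score += log_lik - 0.5 * math.log(n) * n_params
        return score

A greedy search would then add, remove, or reverse single edges while the score improves, and the resulting structure can be compared edge by edge with the expert-designed network.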
Abstract:
This thesis investigates the application of learning techniques aimed at a more efficient execution of a portfolio of constraint solvers. A constraint solver is a program that, given a constraint problem as input, computes a solution using a variety of techniques. Constraint problems are very common in real life: examples such as organizing train timetables or scheduling the crews of an airline are all constraint problems. A constraint problem is formalized as a constraint satisfaction problem (CSP). A CSP is described by a set of variables that can take values from a specific domain and a set of constraints relating the variables and the values they can assume. One technique for optimizing the resolution of such problems is the portfolio approach. This technique, also used in fields such as economics, combines several solvers, which together can produce better results than a single-solver approach. In this work we develop a new technique that combines a portfolio of constraint solvers with machine learning techniques. Machine learning is a field of artificial intelligence whose goal is to give machines a kind of 'intelligence'; a typical application is to evaluate past instances of a problem and use them to make future choices, a process also found in human cognition. Specifically, we reason in terms of classification: a classification assigns to a set of input features a discrete output value, such as true or false when an email is classified as spam or not. The learning phase uses part of CPHydra, a portfolio of constraint solvers developed at University College Cork (UCC). Of this portfolio algorithm we use only the characteristics that describe certain aspects of one CSP with respect to another; these characteristics are also called features. We then build a set of classifiers based on the specific behavior of the solvers. The combination of these classifiers with the portfolio approach is aimed at assessing whether the CPHydra features are good and whether the classifiers based on them are reliable. To justify the first result, we carry out a comparison with one of the best state-of-the-art portfolios, SATzilla. Once the quality of the features used for classification has been established, we solve the problems by simulating a scheduler. These simulations test different rules built from the previously introduced classifiers: we first consider a single-processor scenario and then extend it to a multi-processor scenario. In these experiments we verify that the performance obtained by applying the rules built on the classifiers is better than an execution limited to the best solver in the portfolio. This thesis was carried out in collaboration with the 4C research centre at University College Cork. A scientific article based on this work was written and submitted to the International Joint Conference on Artificial Intelligence (IJCAI) 2011. At the time of submission of this thesis we had not yet been informed whether the article had been accepted; however, the reviewers' comments indicated that the proposed method is interesting.
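A rough sketch of the classifier-per-solver idea, assuming precomputed CSP feature vectors and per-solver labels indicating whether each solver finished within the cutoff; the solver names and the decision-tree choice are placeholders, not CPHydra's actual components:

    from sklearn.tree import DecisionTreeClassifier

    solvers = ["solver_a", "solver_b", "solver_c"]          # hypothetical names
    classifiers = {s: DecisionTreeClassifier() for s in solvers}

    # X: CSP feature vectors (e.g. number of variables, constraints, domain sizes)
    # y[s]: 1 if solver s solved that instance within the cutoff, else 0
    def train(X, y):
        for s in solvers:
            classifiers[s].fit(X, y[s])

    def schedule(features):
        # put the solvers predicted to succeed at the front of the schedule
        return sorted(solvers,
                      key=lambda s: classifiers[s].predict([features])[0],
                      reverse=True)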
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while they are fulfilling the tasks for which they have been deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To give an idea of four major challenges in constructing such a system, it should be mentioned that a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system, b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment, c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input, and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of the findings, the motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
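A toy illustration of the G + L + S → L' cycle, under the (strong) simplifying assumption that each parse exposes observed lexical facts as (lexeme, feature, value) triples; the evidence threshold stands in for the system's decision on whether it has seen 'enough' input, and all names are hypothetical:

    def acquire(lexicon, parses, min_evidence=3):
        evidence = {}
        for parse in parses:
            # observed_lexical_facts() is a hypothetical accessor returning
            # (lexeme, feature, value) triples extracted from the structures S
            for lexeme, feature, value in parse.observed_lexical_facts():
                evidence.setdefault((lexeme, feature), []).append(value)
        for (lexeme, feature), values in evidence.items():
            best = max(set(values), key=values.count)
            if values.count(best) >= min_evidence:       # 'enough' input seen?
                lexicon.setdefault(lexeme, {})[feature] = best   # may revise an earlier value
        return lexicon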
Abstract:
In many application domains data can be naturally represented as graphs. When analytical solutions to a given problem are unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form; recently some of them have been extended to deal directly with structured data. Among those techniques, kernel methods have shown promising results both from the computational complexity and the predictive performance point of view. Kernel methods avoid an explicit mapping into vectorial form by relying on kernel functions, which informally are functions calculating a similarity measure between two entities. However, defining good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results from both the computational and the classification point of view on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information, but existing methods that consider the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods to this domain, obtaining state-of-the-art results.
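For intuition, a decomposition-based graph kernel in the spirit described above maps each graph to a bag of local structural signatures and takes the dot product of the bags; the one-step neighborhood relabelling below is a generic illustration, not the specific DAG-based kernels proposed in the thesis:

    from collections import Counter

    def signatures(labels, adj):
        # labels: {node: label}, adj: {node: iterable of neighbors}
        return Counter(
            (labels[v], tuple(sorted(labels[u] for u in adj[v]))) for v in labels
        )

    def graph_kernel(g1, g2):
        # g1, g2: (labels, adj) pairs; kernel = dot product of signature bags
        s1, s2 = signatures(*g1), signatures(*g2)
        return sum(count * s2[sig] for sig, count in s1.items())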
Abstract:
This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on what constitutes the state of the art for image classification, namely Deep Learning. To experiment with the advantages of this set of image classification methodologies, Torch7 was used: an open-source numerical computing framework, scriptable through the Lua language, with broad support for state-of-the-art Deep Learning methods. The actual image classification was implemented with Torch7 because this framework, thanks also to the analysis previously carried out by some of my colleagues, proved to be very effective at categorizing objects in images. The images used in the experimental tests belong to a dataset created ad hoc for a 3D vision system intended for visually impaired and blind users; it contains some of the main obstacles that a visually impaired person may encounter in everyday life, in particular potential obstacles in a hypothetical outdoor scenario. Having established that Torch7 was the right support for classification, attention turned to the possibility of exploiting stereo vision to increase classification accuracy. The images in the dataset were in fact acquired with a stereo camera with FPGA-based processing developed by the research group where this work was carried out. This made it possible to use 3D information, such as the depth level of each object in the image, to segment the objects of interest through an algorithm implemented in C++, excluding the rest of the scene. The last phase of the work was to test Torch7 on the image dataset, previously segmented with the segmentation algorithm just described, in order to recognize the type of obstacle detected by the system.
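A hedged sketch of the depth-based segmentation step (the original was implemented in C++ on the stereo camera output): pixels whose depth lies within a margin of the nearest measured obstacle are kept, so that the classifier sees the object in isolation; the margin value and array layout are assumptions:

    import numpy as np

    def segment_nearest_object(rgb, depth, margin=0.4):
        # rgb: (H, W, 3) image, depth: (H, W) depth map in metres (0 = no measurement)
        valid = depth > 0
        nearest = depth[valid].min()               # closest measured depth
        mask = valid & (depth < nearest + margin)  # keep pixels near that depth
        segmented = np.zeros_like(rgb)
        segmented[mask] = rgb[mask]                # black out the rest of the scene
        return segmented, mask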
Abstract:
AIM: To test in vitro the mechanical resistance, rotational misfit and failure mode of three original implant-abutment connections and to compare them to two connections between non-original abutments connected to one of the original implants. MATERIAL AND METHODS: Three different implants with small diameters (3.3 mm for Straumann Roxolid, 3.5 mm for Nobel Biocare Replace and Astra Tech Osseospeed TX) were connected with individualized titanium abutments. Twelve implants from each system were connected to their original abutments (Straumann CARES, Nobel Biocare Procera, Astra Tech Atlantis). Twenty-four Roxolid implants were connected with non-original abutments using CAD/CAM procedures from the other two manufacturers (12 Nobel Biocare Procera and 12 Astra Tech Atlantis). For the critical bending test, a Zwick/Roell 1475 machine and the Xpert Zwick/Roell software were used. RESULTS: The rotational misfit varied when comparing the different interfaces. The use of non-original grade V titanium abutments on Roxolid implants increased the force needed for deformation. The fracture mode was different with one of the original connections. CONCLUSIONS: Non-original abutments differ in design of the connecting surfaces and material and demonstrate higher rotational misfit. These differences may result in unexpected failure modes.
Abstract:
The advances in computational biology have made simultaneous monitoring of thousands of features possible. High-throughput technologies not only bring about a much richer information context in which to study various aspects of gene function, but they also present the challenge of analyzing data with a large number of covariates and few samples. As an integral part of machine learning, classification of samples into two or more categories is almost always of interest to scientists. In this paper, we address the question of classification in this setting by extending partial least squares (PLS), a popular dimension reduction tool in chemometrics, to the context of generalized linear regression, building on a previous approach, Iteratively ReWeighted Partial Least Squares, i.e. IRWPLS (Marx, 1996). We compare our results with two-stage PLS (Nguyen and Rocke, 2002A; Nguyen and Rocke, 2002B) and with other classifiers. We show that by phrasing the problem in a generalized linear model setting and by applying a bias correction to the likelihood to avoid (quasi)separation, we often obtain lower classification error rates.
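For context, the two-stage baseline mentioned above can be sketched as PLS dimension reduction followed by an ordinary logistic classifier on the latent scores; the component count and scikit-learn calls below are placeholders for illustration, not the IRWPLS-based method the paper develops:

    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import LogisticRegression

    def two_stage_pls(X_train, y_train, X_test, n_components=3):
        # Stage 1: reduce thousands of covariates to a few latent scores
        pls = PLSRegression(n_components=n_components)
        pls.fit(X_train, y_train)              # y_train: 0/1 class labels
        T_train = pls.transform(X_train)
        T_test = pls.transform(X_test)
        # Stage 2: fit a logistic classifier on the latent scores
        clf = LogisticRegression(max_iter=1000).fit(T_train, y_train)
        return clf.predict(T_test)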
Abstract:
A diesel oxidation catalyst (DOC) with a catalyzed diesel particulate filter (CPF) is an effective exhaust aftertreatment device that reduces particulate emissions from diesel engines, and properly designed DOC-CPF systems provide passive regeneration of the filter by the oxidation of PM via thermal and NO2/temperature-assisted means under various vehicle duty cycles. However, controlling the backpressure on engines caused by the addition of the CPF to the exhaust system requires a good understanding of the filtration and oxidation processes taking place inside the filter, as the deposition and oxidation of solid particulate matter (PM) change as functions of loading time. In order to understand the solid PM loading characteristics in the CPF, an experimental and modeling study was conducted using emissions data measured from the exhaust of a John Deere 6.8 liter, turbocharged and after-cooled engine with a low-pressure loop EGR system and a DOC-CPF system (or a CCRT® - Catalyzed Continuously Regenerating Trap®, as named by Johnson Matthey) in the exhaust system. A series of experiments was conducted to evaluate the performance of the DOC-only, CPF-only and DOC-CPF configurations at two engine speeds (2200 and 1650 rpm) and various loads on the engine ranging from 5 to 100% of maximum torque at both speeds. Pressure drop across the DOC and CPF, mass deposited in the CPF at the end of loading, upstream and downstream gaseous and particulate emissions, and particle size distributions were measured at different times during the experiments to characterize the pressure drop and filtration efficiency of the DOC-CPF system as functions of loading time. Pressure drop characteristics measured experimentally across the DOC-CPF system showed a distinct deep-bed filtration region characterized by a non-linear pressure drop rise, followed by a transition region, and then by a cake-filtration region with steadily increasing pressure drop with loading time for engine load cases with CPF inlet temperatures below 325 °C. For engine load cases with CPF inlet temperatures above 360 °C, the deep-bed filtration region had a steep rise in pressure drop followed by a decrease in pressure drop (due to wall PM oxidation) in the cake filtration region. Filtration efficiencies observed during PM cake filtration were greater than 90% in all engine load cases. Two computer models, the MTU 1-D DOC model and the MTU 1-D 2-layer CPF model, were developed and/or improved from existing models as part of this research and calibrated using the data obtained from these experiments. The 1-D DOC model employs a three-way catalytic reaction scheme for CO, HC and NO oxidation, and is used to predict CO, HC, NO and NO2 concentrations downstream of the DOC. Calibration results from the 1-D DOC model to experimental data at 2200 and 1650 rpm are presented. The 1-D 2-layer CPF model uses a ‘2-filters in series approach’ for filtration, PM deposition and oxidation in the PM cake and substrate wall via thermal (O2) and NO2/temperature-assisted mechanisms, and production of NO2 as the exhaust gas mixture passes through the CPF catalyst washcoat. Calibration results from the 1-D 2-layer CPF model to experimental data at 2200 rpm are presented. Comparisons of the filtration and oxidation behavior of the CPF at sample load cases in both configurations are also presented.
The input parameters and selected results are also compared with a similar research work using an earlier version of the CCRT®, in order to explain differences in the fundamental behavior of the CCRT® in the two studies. An analysis of the results from the calibrated CPF model suggests that pressure drop across the CPF depends mainly on PM loading and oxidation in the substrate wall, and also that the substrate wall initiates PM filtration and helps in forming a PM cake layer on the wall. After formation of a PM cake layer of about 1-2 µm on the wall, the PM cake becomes the primary filter and performs 98-99% of PM filtration. In all load cases, most of the PM mass deposited was in the PM cake layer, and PM oxidation in the PM cake layer accounted for 95-99% of the total PM mass oxidized during loading. The overall PM oxidation efficiency of the DOC-CPF device increased with increasing CPF inlet temperatures and NO2 flow rates, and was higher in the CCRT® configuration than in the CPF-only configuration due to higher CPF inlet NO2 concentrations. Filtration efficiencies greater than 90% were observed within 90-100 minutes of loading time (starting with a clean filter) in all load cases, because the PM cake on the substrate wall forms a very efficient filter. A good strategy for maintaining high filtration efficiency and low pressure drop of the device while performing active regeneration would be to clean the PM cake filter only partially (i.e., by retaining a cake layer of 1-2 µm thickness on the substrate wall) and to completely oxidize the PM deposited in the substrate wall. The data presented support this strategy.
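As a simplified illustration of why the measured pressure drop tracks PM loading, a Darcy-type resistance model treats the substrate wall and the growing PM cake as two layers in series; the function below is a generic sketch with placeholder parameters, not the calibrated MTU 1-D 2-layer CPF model:

    def filter_pressure_drop(q, mu, area, w_wall, k_wall, w_cake, k_cake):
        """q: volumetric flow [m^3/s], mu: gas viscosity [Pa s], area: wall area [m^2],
        w_*: layer thickness [m], k_*: permeability [m^2]; returns pressure drop [Pa]."""
        velocity = q / area                              # superficial wall velocity
        # Darcy resistances of substrate wall and PM cake, added in series
        return mu * velocity * (w_wall / k_wall + w_cake / k_cake)

As the cake thickness w_cake grows during loading the pressure drop rises, and as wall or cake PM oxidizes the corresponding resistance term shrinks, which mirrors the trends described above.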
Abstract:
Important food crops like rice are constantly exposed to various stresses that can have a devastating effect on their survival and productivity. Being sessile, these highly evolved organisms have developed elaborate molecular machinery to sense a mixture of stress signals and elicit a precise response to minimize the damage. However, recent discoveries revealed that the interplay of these stress regulatory and signaling molecules is highly complex and remains largely unknown. In this work, we conducted a large-scale analysis of differential gene expression using advanced computational methods to dissect the regulation of stress response, which is at the heart of all molecular changes leading to the observed phenotypic susceptibility. One of the most important stress conditions in terms of loss of productivity is drought. We performed genomic and proteomic analysis of epigenetic and miRNA mechanisms in the regulation of drought-responsive genes in rice and found subsets of genes with striking properties. Overexpressed gene sets included higher numbers of epigenetic marks, miRNA targets and transcription factors that regulate drought tolerance. On the other hand, underexpressed gene sets were poor in the above features but were rich in metabolic genes with multiple co-expression partners contributing substantially to drought resistance. Identification and characterization of the patterns exhibited by differentially expressed genes hold the key to uncovering the synergistic and antagonistic components of the cross-talk between stress response mechanisms. We performed a meta-analysis on drought and bacterial stresses in rice and Arabidopsis and identified hundreds of shared genes, finding a high level of conservation of gene expression between these stresses. Weighted co-expression network analysis detected two tight clusters of genes made up of master transcription factors and signaling genes showing strikingly opposite expression status. To comprehensively identify the stress-responsive genes shared between multiple abiotic and biotic stresses in rice, we performed meta-analyses of microarray studies from seven different abiotic and six biotic stresses separately and found more than thirteen hundred shared stress-responsive genes. Various machine learning techniques utilizing these genes classified the stresses into two major classes, namely abiotic and biotic, as well as into multiple classes of individual stresses, with high accuracy, and identified the top genes showing distinct patterns of expression. Functional enrichment and co-expression network analysis revealed the different roles of plant hormones and transcription factors in conserved and non-conserved gene sets in the regulation of stress response.
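A minimal sketch of the classification step, assuming an expression matrix restricted to the shared stress-responsive genes and binary labels (abiotic vs. biotic); the random-forest choice, fold count, and variable names are illustrative, not the exact techniques reported above:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # X: samples x genes expression matrix; y: 0 = abiotic stress, 1 = biotic stress
    def classify_stress(X, y):
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)        # cross-validated accuracy
        clf.fit(X, y)
        # rank genes by importance to find those with distinct expression patterns
        top_genes = sorted(enumerate(clf.feature_importances_),
                           key=lambda t: t[1], reverse=True)[:20]
        return scores.mean(), top_genes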
Abstract:
In this paper, we propose an intelligent method, named the Novelty Detection Power Meter (NodePM), to detect novelties in electronic equipment monitored by a smart grid. Considering the entropy of each monitored device, which is calculated based on a Markov chain model, the proposed method identifies novelties through a machine learning algorithm. To this end, the NodePM is integrated into a platform for the remote monitoring of energy consumption, which consists of a wireless sensor network (WSN). It should be stressed that, unlike many related works that are evaluated in simulated environments, our experiments were conducted in real environments. The results show that the NodePM reduces the power consumption of the monitored equipment by 13.7%. In addition, the NodePM detects novelties more efficiently than an approach from the literature, surpassing it in different scenarios in all evaluations that were carried out.
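A minimal sketch of the entropy feature, assuming each device's consumption has been discretized into a sequence of states from which a first-order Markov chain is estimated; this illustrates the idea only and is not the NodePM implementation:

    import math
    from collections import Counter

    def markov_entropy(states):
        # states: sequence of discretized consumption states for one device
        transitions = Counter(zip(states[:-1], states[1:]))
        outgoing = Counter(states[:-1])
        stationary = {s: c / len(states[:-1]) for s, c in outgoing.items()}
        h = 0.0
        for (a, b), count in transitions.items():
            p = count / outgoing[a]                  # estimated P(b | a)
            h -= stationary[a] * p * math.log2(p)
        return h                                     # entropy rate in bits per step

A sudden change in this entropy for a monitored device can then be flagged by the machine learning stage as a novelty.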