926 results for automatic affect analysis
Abstract:
We present new tools for the segmentation and analysis of musical scores in the OpenMusic computer-aided composition environment. A modular object-oriented framework enables the creation of segmentations on score objects and the implementation of automatic or semi-automatic analysis processes. Analyses are performed and displayed through customizable classes and callbacks. Concrete examples are given, in particular the implementation of a semi-automatic harmonic analysis system and a framework for rhythmic transcription.
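As a rough illustration of the pattern this abstract describes, segmentations attached to score objects and analysed through customizable classes and callbacks, here is a minimal Python sketch. OpenMusic itself is a Common Lisp environment, so every name below is hypothetical; only the structure is meant to carry over.

```python
# Minimal sketch of the segmentation/analysis pattern described above.
# OpenMusic is Common Lisp; all names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Segment:
    start: int          # index of first note in the score
    end: int            # index one past the last note
    label: str = ""     # filled in by an analysis callback

@dataclass
class Segmentation:
    score: List[int]                     # toy score: MIDI pitches
    segments: List[Segment] = field(default_factory=list)

    def analyze(self, callback: Callable[[List[int]], str]) -> None:
        """Run an (automatic or semi-automatic) analysis on each segment."""
        for seg in self.segments:
            seg.label = callback(self.score[seg.start:seg.end])

# Example "analysis": label each segment by its pitch range.
score = [60, 64, 67, 72, 71, 67, 64, 60]
segmentation = Segmentation(score, [Segment(0, 4), Segment(4, 8)])
segmentation.analyze(lambda notes: f"range={max(notes) - min(notes)}")
print([seg.label for seg in segmentation.segments])
```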
Abstract:
Image processing offers unparalleled potential for traffic monitoring and control. For many years engineers have attempted to perfect the art of automatic data abstraction from sequences of video images. This paper outlines a research project undertaken at Napier University by the authors in the field of image processing for automatic traffic analysis. A software-based system implementing TRIP algorithms to count cars and measure vehicle speed has been developed by members of the Transport Engineering Research Unit (TERU) at the University. The TRIP algorithm has been ported to and evaluated on an IBM PC platform with a view to hardware implementation of the pre-processing routines required for vehicle detection. Results show that a software-based traffic counting system is realisable for single-window processing; however, the high volume of data that must be processed for full frames or multiple lanes limits real-time operation, so dedicated hardware is required. The paper outlines a hardware design implementing inter-frame and background differencing, background updating and shadow removal techniques. Preliminary results showing the processing time and counting accuracy of the routines implemented in software are presented, and a real-time hardware pre-processing architecture is described.
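The differencing steps named above lend themselves to a compact sketch. The following Python/NumPy fragment is a hedged illustration of inter-frame and background differencing with running-average background updating; the threshold and update rate are illustrative assumptions, not the TRIP algorithm's actual parameters.

```python
# Hedged sketch of inter-frame and background differencing with
# running-average background updating. Parameter values are illustrative
# assumptions, not the TRIP settings.
import numpy as np

def detect_vehicles(frames, alpha=0.05, diff_thresh=25):
    """Yield per-frame binary masks of moving vehicles."""
    background = frames[0].astype(np.float32)
    prev = frames[0]
    for frame in frames[1:]:
        inter_frame = np.abs(frame.astype(np.int16) - prev.astype(np.int16))
        bg_diff = np.abs(frame.astype(np.float32) - background)
        mask = (inter_frame > diff_thresh) & (bg_diff > diff_thresh)
        # Update the background only where nothing moves, so scenery is
        # absorbed into the background model but vehicles are not.
        background[~mask] += alpha * (frame[~mask] - background[~mask])
        prev = frame
        yield mask

# Usage on synthetic 8-bit grayscale frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 50, (120, 160), dtype=np.uint8) for _ in range(5)]
for mask in detect_vehicles(frames):
    print(int(mask.sum()), "changed pixels")
```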
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
This paper describes the main features and present results of MPRO-Spanish, a parser for morphological and syntactic analysis of unrestricted Spanish text developed at the IAI. The parser makes direct use of X-phrase structure rules to handle a variety of patterns from derivational morphology and syntactic structure. The two analyses, morphological and syntactic, are realised by two subsequent modules: one analyses and disambiguates the source words at the morphological level, while the other consists of a series of programs and a deterministic, procedural and explicit grammar. The article explains the main features of MPRO and summarises experiments on some of its applications, some of which, such as monolingual and bilingual term extraction, are still being implemented, while others, such as indexing, need further work. The results and applications obtained so far with simple and relatively complex sentences give us grounds to believe in its reliability.
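To make the two-module idea concrete, here is a toy stand-in in Python: a morphological lexicon lookup (module one) feeding a small phrase-structure grammar (module two), with NLTK's chart parser substituting for MPRO's deterministic procedural grammar. The lexicon and rules are invented for illustration.

```python
# Toy stand-in for the two-module design described above: morphological
# lookup feeds a phrase-structure grammar. NLTK's chart parser substitutes
# for MPRO's actual procedural grammar.
import nltk

LEXICON = {  # hypothetical, hand-tagged Spanish entries
    "el": "Det", "perro": "N", "come": "V", "carne": "N",
}

grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  NP -> Det N | N
  VP -> V NP
  Det -> 'Det'
  N  -> 'N'
  V  -> 'V'
""")
parser = nltk.ChartParser(grammar)

tokens = "el perro come carne".split()
tags = [LEXICON[t] for t in tokens]      # module 1: morphological analysis
for tree in parser.parse(tags):          # module 2: syntactic analysis
    print(tree)
```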
Abstract:
Dirt counting and dirt particle characterisation of pulp samples is an important part of quality control in pulp and paper production, and an automatic image analysis system for characterising dirt particles in various pulp samples is correspondingly critical. Existing image analysis systems, however, use a single threshold to segment the dirt particles in different pulp samples, which limits their precision, so an automatic image analysis system that overcomes this deficiency would be very useful. This study proposes a modified Niblack thresholding method that selects the threshold based on the number of segmented particles; Kittler thresholding is applied as well. Both thresholding methods determine the dirt count of different pulp samples accurately compared to visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). In addition, the minimum resolution needed for acquiring a scanner image is determined. Among the dirt particle features considered, curl differs sufficiently to discriminate bark from fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are used to categorise the dirt particles; Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate at classifying the particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark or fibre bundles.
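Niblack's rule derives a local threshold from the local mean m and standard deviation s (T = m + k·s in the original formulation; sign conventions vary by implementation). A hedged sketch of the particle-count-driven idea, using scikit-image's stock threshold_niblack rather than the authors' developed variant, might look like this:

```python
# Hedged sketch of the particle-count-driven idea: sweep Niblack's k and
# watch the number of segmented particles. Uses scikit-image's stock
# threshold_niblack (which computes T = m - k*s), not the authors'
# developed variant; window size and k values are illustrative.
import numpy as np
from skimage.filters import threshold_niblack
from skimage.measure import label

def count_dirt_particles(image, k, window_size=25):
    thresh = threshold_niblack(image, window_size=window_size, k=k)
    particles = image < thresh          # dirt is darker than the pulp
    return int(label(particles).max()), particles

# Synthetic grayscale "pulp" image with two dark specks.
rng = np.random.default_rng(1)
image = 0.9 + 0.02 * rng.standard_normal((200, 200))
image[50:55, 50:55] = 0.2
image[120:124, 80:86] = 0.3
for k in (2.0, 4.0, 8.0):
    n, _ = count_dirt_particles(image, k)
    print(f"k={k}: {n} segmented particles")
```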
Abstract:
The objective of this study was to examine the relationship between the expression of B cell activating factor (BAFF) and BAFF receptor and disease activity in patients with systemic lupus erythematosus (SLE). Real-time RT-PCR was used to examine BAFF mRNA expression in peripheral blood monocytes of active and stable SLE patients and healthy controls. The percentage of BAFF receptor 3 (BR3) on B lymphocytes was measured by flow cytometry. Soluble BAFF levels in serum were assayed by ELISA. Microalbumin levels were assayed by an automatic immune analysis machine. BAFF mRNA and soluble BAFF levels were highest in the active SLE group, followed by the stable SLE group, and controls (P<0.01). The percentage of BR3 on B lymphocytes was downregulated in the active SLE group compared with the stable SLE group and controls (P<0.01). BAFF mRNA levels and soluble BAFF levels were higher in patients who were positive for proteinuria than in those who were negative (P<0.01). The percentage of BR3 on B lymphocytes was lower in patients who were positive for proteinuria than in those who were negative (P<0.01). The BAFF/BR3 axis may be over-activated in SLE patients. BAFF and BR3 levels may be useful parameters for evaluating treatment.
Abstract:
The subject of this thesis is automatic sentence compression with machine learning, such that the compressed sentences remain grammatical and retain their essential meaning. There are multiple possible uses for the compression of natural language sentences; the focus here is the generation of television programme subtitles, which are often compressed versions of the original script of the programme. The main part of the thesis consists of machine learning experiments for automatic sentence compression using different approaches to the problem. The machine learning methods used are linear-chain conditional random fields (CRFs) and support vector machines, and we also examine which automatic text analysis methods provide useful features for the task. The data, supplied by Lingsoft Inc., consists of subtitles in both compressed and uncompressed form. The models are compared to a baseline system, both automatically and with human evaluation, because of the potentially subjective nature of the output. The best result is achieved using CRF sequence classification with a rich feature set. All of the text analysis methods help classification, and the most useful is morphological analysis.
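As an illustration of the labelling setup, sentence compression can be framed as per-token KEEP/DROP tagging with a linear-chain CRF. The sketch below assumes the sklearn-crfsuite package and an invented toy feature set; the thesis's actual toolkit, features and data differ.

```python
# Hedged sketch: compression as per-token KEEP/DROP tagging with a
# linear-chain CRF. Assumes the sklearn-crfsuite package; features and
# training data here are toy stand-ins.
import sklearn_crfsuite

def token_features(sent, i):
    word = sent[i]
    return {
        "lower": word.lower(),
        "is_upper": word.isupper(),
        "suffix3": word[-3:],        # crude stand-in for morphology
        "position": i / len(sent),
    }

train_sents = [["the", "very", "big", "dog", "barked"],
               ["a", "rather", "small", "cat", "slept"]]
train_labels = [["KEEP", "DROP", "KEEP", "KEEP", "KEEP"],
                ["KEEP", "DROP", "KEEP", "KEEP", "KEEP"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_labels)

test = ["the", "quite", "old", "horse", "ran"]
print(crf.predict([[token_features(test, i) for i in range(len(test))]]))
```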
Abstract:
The ability to detect faces in images is of critical ecological significance. It is a pre-requisite for other important face perception tasks such as person identification, gender classification and affect analysis. Here we address the question of how the visual system classifies images into face and non-face patterns. We focus on face detection in impoverished images, which allow us to explore information thresholds required for different levels of performance. Our experimental results provide lower bounds on image resolution needed for reliable discrimination between face and non-face patterns and help characterize the nature of facial representations used by the visual system under degraded viewing conditions. Specifically, they enable an evaluation of the contribution of luminance contrast, image orientation and local context on face-detection performance.
Abstract:
Intestinal parasitosis constitutes a serious health problem in most tropical countries. The diagnosis of enteroparasites in laboratory routine relies on the examination of stool samples by optical microscopy, and error rates usually range from moderate to high. Approaches based on automatic image analysis have been proposed, but the methods are usually specific to a few species, some are computationally expensive, and image acquisition and focusing are manual. We present a solution that automates the diagnosis of the 15 most common species of enteroparasites in Brazil, using a sensitive parasitological technique, a motorized microscope with a digital camera for automatic image acquisition and focusing, and fast image analysis methods. The results indicate that our solution is effective and suitable for laboratory routine, in which the exam must be concluded in a few minutes. © 2013 IEEE.
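Automatic focusing of the motorized microscope requires a focus measure. One common choice (an illustrative assumption here, not necessarily the authors') is the variance of the Laplacian, since well-focused images have stronger edges:

```python
# Illustrative focus measure for selecting the sharpest frame of a
# motorized z-stack: variance of the Laplacian (a common choice, assumed
# here rather than taken from the paper).
import cv2
import numpy as np

def best_focused(z_stack):
    """Return the index of the sharpest image in a z-stack."""
    scores = [cv2.Laplacian(img, cv2.CV_64F).var() for img in z_stack]
    return int(np.argmax(scores))

# Usage: a blurred copy should lose to the original.
rng = np.random.default_rng(2)
sharp = rng.integers(0, 255, (100, 100), dtype=np.uint8)
blurred = cv2.GaussianBlur(sharp, (9, 9), 3)
print(best_focused([blurred, sharp]))   # -> 1
```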
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis, for which the basic features include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, presenting precision-recall results much better than several other well-known shape description methods. Compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior precision-recall results but superior processing time and multiscale separability. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks over the very large databases that are common nowadays. (C) 2014 Elsevier Inc. All rights reserved.
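The core idea, mapping a shape into Hough space and summarising that space statistically, can be sketched as follows. This simplified descriptor (per-angle accumulator statistics via scikit-image's hough_line) is an assumption for illustration; the published HTS definition differs in its details.

```python
# Illustrative sketch of the idea behind HTS: map a binary shape into
# Hough space and summarise it statistically. This simplified per-angle
# descriptor is an assumption, not the published HTS definition.
import numpy as np
from skimage.transform import hough_line

def hough_stats_descriptor(binary_shape):
    hspace, _, _ = hough_line(binary_shape)
    per_angle = hspace.astype(float)
    desc = np.concatenate([per_angle.mean(axis=0), per_angle.std(axis=0)])
    return desc / (np.linalg.norm(desc) + 1e-12)   # scale normalisation

# Usage: compare a square outline against a diagonal bar.
square = np.zeros((64, 64), dtype=bool)
square[16:48, 16] = square[16:48, 47] = True
square[16, 16:48] = square[47, 16:48] = True
bar = np.eye(64, dtype=bool)
d1, d2 = hough_stats_descriptor(square), hough_stats_descriptor(bar)
print(float(d1 @ d2))   # cosine similarity of the two shapes
```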
Abstract:
Automatic facial analysis abilities are commonly integrated into a system through a prior off-line learning stage. In this paper we argue that a facial analysis system could improve its facial analysis capabilities from its own experience, much as a biological system, i.e. the human system, does throughout the years. The approach described, focused on gender classification, updates its knowledge according to its classification results. The gender experiments presented suggest that this approach is promising, even though only a short simulation of what for humans would take years of acquisition experience was performed.
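The knowledge-updating loop described above is essentially self-training: classify unlabelled samples, then retrain on the confident ones. A minimal sketch follows, with an assumed classifier and confidence threshold rather than the authors' actual setup.

```python
# Hedged sketch of experience-driven updating: a self-training loop that
# retrains on its own confident predictions. Classifier and threshold are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_seed = rng.normal([[0, 0]] * 20 + [[3, 3]] * 20)      # small labelled seed
y_seed = np.array([0] * 20 + [1] * 20)
X_stream = rng.normal([[0, 0]] * 100 + [[3, 3]] * 100)  # unlabelled "faces"

clf = LogisticRegression().fit(X_seed, y_seed)
for _ in range(3):                        # rounds of acquired "experience"
    proba = clf.predict_proba(X_stream)
    confident = proba.max(axis=1) > 0.95
    X_aug = np.vstack([X_seed, X_stream[confident]])
    y_aug = np.concatenate([y_seed, proba.argmax(axis=1)[confident]])
    clf = LogisticRegression().fit(X_aug, y_aug)
print(clf.score(X_stream, [0] * 100 + [1] * 100))
```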
Abstract:
We investigated how well structural features such as note density or the relative number of changes in the melodic contour could predict success in implicit and explicit memory for unfamiliar melodies. We also analyzed which features are more likely to elicit increasingly confident judgments of "old" in a recognition memory task. An automated analysis program computed structural aspects of melodies, both independent of any context and with reference to the other melodies in the test set and the parent corpus of pop music. A few features predicted success in both memory tasks, which points to a shared memory component. However, motivic complexity compared to a large corpus of pop music had different effects on explicit and implicit memory. We also found that just a few features are associated with different rates of "old" judgments, whether the items were old or new. Rarer motives relative to the test set predicted hits, and rarer motives relative to the corpus predicted false alarms. This data-driven analysis provides further support for both shared and separable mechanisms in implicit and explicit memory retrieval, as well as the role of distinctiveness in true and false judgments of familiarity.
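Two of the features named above can be computed directly from a note list. The definitions below are plausible stand-ins for those in the automated analysis program, not its exact formulas:

```python
# Sketch of two structural features named above, computed from a toy
# melody of (onset_seconds, midi_pitch) pairs. The definitions are
# plausible stand-ins, not the tool's exact formulas.
def note_density(melody):
    """Notes per second over the melody's onset span."""
    onsets = [t for t, _ in melody]
    span = max(onsets) - min(onsets)
    return len(melody) / span if span else float(len(melody))

def contour_change_rate(melody):
    """Fraction of interval pairs where the contour direction flips."""
    pitches = [p for _, p in melody]
    steps = [b - a for a, b in zip(pitches, pitches[1:])]
    signs = [(-1 if s < 0 else 1) for s in steps if s != 0]
    changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return changes / max(len(signs) - 1, 1)

melody = [(0.0, 60), (0.5, 64), (1.0, 62), (1.5, 65), (2.0, 64), (2.5, 67)]
print(note_density(melody), contour_change_rate(melody))
```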
Abstract:
Internet-based job portals provide, in the form of job advertisements, an interesting data source for making transparent the qualification requirements that hiring companies place on prospective university graduates. By analysing these qualification requirements, universities can develop their education and continuing-education offerings in a labour-market-oriented way and thus raise their profile within the higher-education landscape. This requires, however, that the job advertisements be extracted from the portals and further processed with adequate analytical information systems. This contribution to the CampusSource White Paper Award presents a concept for Job Intelligence services that permit the systematic analysis of qualification requirements on the basis of job advertisements from job portals.
Abstract:
Objective: The aim of this article is to propose an integrated framework for extracting and describing patterns of disorders from medical images using a combination of linear discriminant analysis and active contour models. Methods: A multivariate statistical methodology was first used to identify the most discriminating hyperplane separating the two groups of images (healthy controls and patients with schizophrenia) contained in the input data. The differences found by the multivariate statistical method were then made explicit by subtracting the discriminant models of controls and patients, weighted by the pooled variance between the two groups. A variational level-set technique was used to segment clusters of these differences, and each anatomical change was labelled using the Talairach atlas. Results: All the data was analysed simultaneously rather than assuming a priori regions of interest; as a consequence, the active contour models yielded regions of interest that emerged from the data. The results were evaluated against, as a gold standard, well-known facts about the neuroanatomical changes related to schizophrenia, and most of the items in the gold standard were covered in our result set. Conclusions: We argue that this investigation provides a suitable framework for characterising the high complexity of magnetic resonance images in schizophrenia, as the results obtained indicate a high sensitivity rate with respect to the gold standard. (C) 2010 Elsevier B.V. All rights reserved.
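The first two steps, fitting the discriminating hyperplane and forming a variance-weighted group-difference map, can be sketched as follows. Array shapes and the weighting rule are illustrative assumptions, and the level-set segmentation step is omitted.

```python
# Sketch of the first steps described above: an LDA hyperplane separating
# the two groups, then a group-difference map weighted by pooled variance.
# Shapes and weighting are illustrative; the level-set step is omitted.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_voxels = 500
controls = rng.normal(0.0, 1.0, (30, n_voxels))
patients = rng.normal(0.0, 1.0, (30, n_voxels))
patients[:, 100:120] += 1.5            # synthetic "anatomical change"

X = np.vstack([controls, patients])
y = np.array([0] * 30 + [1] * 30)
lda = LinearDiscriminantAnalysis().fit(X, y)   # discriminating hyperplane

pooled_var = (controls.var(axis=0) + patients.var(axis=0)) / 2
diff_map = (patients.mean(axis=0) - controls.mean(axis=0)) / np.sqrt(pooled_var)
print("top differing voxels:", np.argsort(-np.abs(diff_map))[:5])
```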