898 results for classification accuracy
Abstract:
The HCI community is actively seeking novel methodologies to gain insight into the user's experience during interaction with both the application and the content. We propose an emotion recognition engine capable of automatically recognizing a set of human emotional states using psychophysiological measures of the autonomic nervous system, including galvanic skin response, respiration, and heart rate. A novel pattern recognition system, based on discriminant analysis and support vector machine classifiers, is trained using movie scenes selected to induce emotions ranging across the valence dimension from positive to negative, including happiness, anger, disgust, sadness, and fear. In this paper we introduce an emotion recognition system and evaluate its accuracy by presenting the results of an experiment conducted with three physiological sensors.
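The kind of pipeline described (physiological features fed to an SVM) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the feature vectors, labels, and values are all synthetic.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic feature vectors: [mean GSR (uS), respiration rate (breaths/min), heart rate (bpm)].
# Values and emotion labels are illustrative, not the authors' data.
X = np.array([
    [2.1, 14.0, 68.0],  # happiness
    [2.3, 15.0, 70.0],  # happiness
    [6.8, 22.0, 95.0],  # fear
    [7.1, 24.0, 98.0],  # fear
    [5.0, 19.0, 88.0],  # anger
    [5.4, 20.0, 90.0],  # anger
])
y = np.array(["happiness", "happiness", "fear", "fear", "anger", "anger"])

# Standardize features so the RBF kernel sees each channel on a comparable scale.
mu, sigma = X.mean(axis=0), X.std(axis=0)
clf = SVC(kernel="rbf", gamma="scale")
clf.fit((X - mu) / sigma, y)

# Classify a new (synthetic) physiological reading.
sample = (np.array([[6.9, 23.0, 96.0]]) - mu) / sigma
print(clf.predict(sample)[0])
```

In practice the features would be statistics extracted from windows of the raw GSR, respiration, and heart-rate signals recorded while subjects watch the emotion-inducing scenes.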
Abstract:
Forest cover of the Maringá municipality, located in northern Parana State, was mapped in this study. Mapping was carried out by using high-resolution HRC sensor imagery and medium-resolution CCD sensor imagery from the CBERS satellite. Images were georeferenced and forest vegetation patches (TOFs - trees outside forests) were classified using two digital classification methods: one based on the reflectance or digital number of each pixel, and one object-oriented. The areas of each polygon were calculated, which allowed each polygon to be segregated into size classes. Thematic maps were built from the resulting polygon size classes, and summary statistics were generated for each size class in each area. It was found that most forest fragments in Maringá were smaller than 500 m². There was also a difference of 58.44% in the amount of vegetation between the high-resolution and medium-resolution imagery, due to the distinct spatial resolutions of the sensors. It was concluded that high-resolution geotechnology is essential to provide reliable information on urban green areas and forest cover in highly human-perturbed landscapes.
Abstract:
In this paper, we present a method for estimating local thickness distribution in finite element models, applied to injection molded and cast engineering parts. This method features considerably improved performance compared to two previously proposed approaches, and has been validated against thickness measured by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of the real part performance, thus increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and reducing product development costs. The simulation results have been compared to experimental tests, evidencing the advantage of the proposed method. Thus, the proposed approach to considering local thickness distribution in FEM crash simulations has high potential in the product development process of complex and highly demanding injection molded and cast parts and is currently being used by Ford Motor Company.
Abstract:
The objective of this work was to study the distribution of values of the coefficient of variation (CV) in papaya crop experiments (Carica papaya L.), proposing ranges to guide researchers in evaluating different characters in the field. The data used in this study were obtained through a bibliographical review of Brazilian journals, dissertations, and theses. The study considered the following characters: diameter of the stalk, insertion height of the first fruit, plant height, number of fruits per plant, fruit biomass, fruit length, equatorial diameter of the fruit, pulp thickness, fruit firmness, soluble solids, and internal cavity diameter. Value ranges for the CV were obtained for each character, based on the methodologies proposed by Garcia and by Costa and on the standard classification of Pimentel-Gomes. The results indicated that the ranges of CV values differed among characters, presenting a large variation, which justifies the necessity of using a specific evaluation range for each character. In addition, the use of classification ranges obtained from the methodology of Costa is recommended.
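As a sketch of the computation involved, the CV of a trial and its placement in the widely used Pimentel-Gomes ranges (low < 10%, medium 10-20%, high 20-30%, very high > 30%) can be coded as below. The fruit-biomass values are made up, and fixed cut-offs are exactly what the paper argues should be replaced by character-specific ranges.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = 100 * sample standard deviation / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def pimentel_gomes_class(cv):
    """Standard Pimentel-Gomes ranges; the character-specific ranges the
    paper proposes would replace these fixed cut-offs."""
    if cv < 10:
        return "low"
    elif cv < 20:
        return "medium"
    elif cv < 30:
        return "high"
    return "very high"

# Hypothetical fruit-biomass measurements (g) from one papaya trial.
biomass = [1450, 1510, 1480, 1390, 1620, 1550]
cv = coefficient_of_variation(biomass)
print(round(cv, 1), pimentel_gomes_class(cv))  # 5.3 low
```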
Abstract:
Background: Several studies link the seamless fit of implant-supported prostheses with the accuracy of the dental impression technique used during acquisition. In addition, factors such as implant angulation and coping shape contribute to implant misfit. Purpose: The aim of this study was to identify the most accurate impression technique and the factors affecting impression accuracy. Material and Methods: A systematic review of peer-reviewed literature was conducted, analyzing articles published between 2009 and 2013. The following search terms were used: implant impression, impression accuracy, and implant misfit. A total of 417 articles were identified; 32 were selected for review. Results: All 32 selected studies refer to in vitro studies. Fourteen articles compare the open and closed impression techniques: 8 advocate the open technique, and 6 report similar results. Another 14 articles evaluate splinted and non-splinted techniques, all advocating the splinted technique. Polyether material usage was reported in nine studies; six studies tested vinyl polysiloxane and one study used irreversible hydrocolloid. Eight studies evaluated different coping designs. Intraoral optical devices were compared in four studies. Conclusions: The most accurate results were achieved with two configurations: (1) the optical intraoral system with powder and (2) the open technique with splinted squared transfer copings, using polyether as impression material.
Abstract:
Based on the assumption that earnings persistence has implications for both financial analysis and compensation contracts, the aim of this paper is to investigate the role of earnings persistence assuming that (i) more persistent earnings are likely to be a better input to valuation models and (ii) more persistent earnings are likely to serve as a proxy for long-term market and managerial orientation. The analysis is based on Brazilian listed firms from 1995 to 2013, and while we document strong support for the relevance of earnings persistence in financial analysis and valuation, we fail to document a significant relationship between earnings persistence and long-term value orientation. These results are sensitive to different specifications, and additional results suggest that firms' idiosyncratic risk (total risk) is relevant to explain the focus on short-term outcomes (short-termism) across firms. The main contribution of this paper is to offer empirical evidence for the relevance of accounting numbers in both valuation and contractual theories in an emerging market.
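Earnings persistence is commonly estimated as the slope in a first-order autoregression of earnings on lagged earnings, E_t = w0 + w1 * E_{t-1} + e_t, with w1 closer to 1 indicating more persistent earnings. A minimal sketch with a synthetic earnings series (the paper's actual specification may include controls and panel structure not shown here):

```python
import numpy as np

# Synthetic annual earnings-per-share series for one firm (illustrative only).
earnings = np.array([1.00, 1.10, 1.05, 1.15, 1.20, 1.18, 1.25, 1.30])

# Regress E_t on E_{t-1}: the slope w1 is the persistence coefficient.
lagged, current = earnings[:-1], earnings[1:]
X = np.column_stack([np.ones_like(lagged), lagged])
(intercept, persistence), *_ = np.linalg.lstsq(X, current, rcond=None)

print(round(persistence, 2))  # closer to 1 => more persistent earnings
```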
Abstract:
Urban regeneration is increasingly a "universal issue" and a crucial factor in the new trends of urban planning. It is no longer only an area of study and research; it has become part of new urban and housing policies. Urban regeneration involves complex decisions as a consequence of the multiple dimensions of its problems, which include special technical requirements, safety concerns, and socio-economic, environmental, aesthetic, and political impacts, among others. This multi-dimensional nature of urban regeneration projects and their large capital investments justify the development and use of state-of-the-art decision support methodologies to assist decision makers. This research focuses on the development of a multi-attribute approach for the evaluation of building conservation status in urban regeneration projects, thus supporting decision makers in their analysis of the problem and in the definition of strategies and priorities of intervention. The methods presented can be embedded into a Geographical Information System for visualization of results. A real-world case study was used to test the methodology, and its results are also presented.
Abstract:
The automatic organization of e-mail messages is a current challenge in machine learning. The excessive number of messages affects more and more users, especially those who use e-mail as a communication and work tool. This thesis addresses the problem of automatic e-mail organization by proposing a solution aimed at the automatic labelling of messages. Automatic labelling uses the e-mail folders previously created by users, treating them as labels, and suggests multiple labels for each message (top-N). Several learning techniques are studied, and the various fields that compose an e-mail message are analysed to determine their suitability as classification features. This work focuses on the textual fields (the subject and the body of the messages), studying different forms of representation, feature selection, and classification algorithms. The participant fields are also evaluated, using classification algorithms that represent them with the vector space model or as a graph. The various fields are combined for classification using the Majority Voting classifier-combination technique. Tests are carried out with a subset of Enron e-mail messages and a private dataset made available by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). These datasets are analysed in order to understand the characteristics of the data. The system is evaluated through classifier accuracy. The results obtained show significant improvements compared with related work.
Abstract:
INTRODUCTION: The correct identification of the underlying cause of death and its precise assignment to a code from the International Classification of Diseases are important issues for achieving accurate and universally comparable mortality statistics. These factors, among others, led to the development of computer software programs to automatically identify the underlying cause of death. OBJECTIVE: This work was conceived to compare the underlying causes of death processed respectively by the Automated Classification of Medical Entities (ACME) and the "Sistema de Seleção de Causa Básica de Morte" (SCB) programs. MATERIAL AND METHOD: The comparative evaluation of the underlying causes of death processed respectively by the ACME and SCB systems was performed using the input data file for the ACME system, which included deaths that occurred in the State of S. Paulo from June to December 1993, totalling 129,104 records of the corresponding death certificates. The differences between the underlying causes selected by the ACME and SCB systems in the month of June, when considered as SCB errors, were used to correct and improve the SCB processing logic and its decision tables. RESULTS: The processing of the underlying causes of death by the ACME and SCB systems resulted in 3,278 differences, which were analysed and ascribed to lack of answers to dialogue boxes during processing, to deaths due to human immunodeficiency virus [HIV] disease for which there was no specific provision in either system, to coding and/or keying errors, and to actual problems. The detailed analysis of the latter disclosed that the majority of the underlying causes of death processed by the SCB system were correct, that the two systems interpreted the mortality coding rules differently, that some particular problems could not be explained with the available documentation, and that a smaller proportion of problems were identified as SCB errors.
CONCLUSION: These results, disclosing a very low and insignificant number of actual problems, support the use of the SCB system version for the Ninth Revision of the International Classification of Diseases and assure the continuity of the work being undertaken for the Tenth Revision version.
Abstract:
This study provides a broad analysis, covering almost all pre-election polls published or broadcast in Portugal in the month preceding each election, from 1991 until the most recent one, held in February 2005. The accuracy measures I used were adapted from the study carried out by Frederick Mosteller in the report to the Committee on Analysis of Pre-election Polls, regarding the 1948 USA elections.
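Two of Mosteller's measures are often paraphrased as the mean absolute error across all parties' vote shares and the error in the margin between the two leading parties (the exact subset used in the study is not stated here; the function names and the poll figures below are hypothetical):

```python
def mosteller_m3(poll, result):
    """Mean absolute error across all parties' vote shares (percentage points)."""
    return sum(abs(poll[p] - result[p]) for p in poll) / len(poll)

def mosteller_m5(poll, result, a, b):
    """Error in the predicted margin between the two leading parties a and b."""
    return abs((poll[a] - poll[b]) - (result[a] - result[b]))

# Hypothetical poll vs. official result (vote shares in %).
poll   = {"A": 42.0, "B": 38.0, "C": 12.0, "D": 8.0}
result = {"A": 45.0, "B": 36.0, "C": 11.0, "D": 8.0}
print(mosteller_m3(poll, result))            # (3 + 2 + 1 + 0) / 4 = 1.5
print(mosteller_m5(poll, result, "A", "B"))  # |4 - 9| = 5.0
```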
Abstract:
This paper describes a methodology developed for the classification of Medium Voltage (MV) electricity customers. Starting from a sample of databases resulting from a monitoring campaign, Data Mining (DM) techniques are used to discover a set of typical MV consumer load profiles and, therefore, to extract knowledge regarding electric energy consumption patterns. In the first stage, several hierarchical clustering algorithms were applied and their clustering performance was compared using adequacy measures. In the second stage, a classification model was developed to allow new consumers to be classified into one of the clusters obtained in the previous stage. Finally, the interpretation of the discovered knowledge is presented and discussed.
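The two stages described can be sketched as follows: hierarchical clustering of monitored load profiles, then assignment of a new consumer to the nearest cluster centroid. The profiles are synthetic, and nearest-centroid assignment is one simple stand-in for the paper's classification model.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic normalized daily load profiles (24 hourly values per consumer).
rng = np.random.default_rng(0)
flat  = rng.normal(1.0, 0.05, size=(5, 24))   # flat, industrial-like profile
peaky = rng.normal(0.5, 0.05, size=(5, 24))
peaky[:, 18:22] += 1.0                        # evening-peak profile
profiles = np.vstack([flat, peaky])

# Stage 1: hierarchical (Ward) clustering of the monitored consumers.
labels = fcluster(linkage(profiles, method="ward"), t=2, criterion="maxclust")

# Stage 2: classify a new consumer by the nearest cluster centroid.
centroids = {c: profiles[labels == c].mean(axis=0) for c in set(labels)}
new = rng.normal(0.5, 0.05, size=24)
new[18:22] += 1.0                             # evening-peak shape
assigned = min(centroids, key=lambda c: np.linalg.norm(new - centroids[c]))
print(assigned == labels[5])                  # joins the evening-peak cluster
```

In the paper, adequacy measures would be used to compare several linkage choices before fixing the cluster structure.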
Abstract:
The growing importance and influence of new resources connected to power systems have caused many changes in their operation. Environmental policies and several well-known advantages have made renewable-based energy resources widely disseminated. These resources, including Distributed Generation (DG), are being connected at lower voltage levels, where Demand Response (DR) must also be considered. These changes increase the complexity of system operation due to both new operational constraints and the amounts of data to be processed. Virtual Power Players (VPP) are entities able to manage these resources. Addressing these issues, this paper proposes a methodology to support VPP actions when they act as a Curtailment Service Provider (CSP) that provides DR capacity to a DR program declared by the Independent System Operator (ISO) or by the VPP itself. The amount of DR capacity that the CSP can assure is determined using data mining techniques applied to a database obtained for a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 33-bus distribution network.
Abstract:
Human Computer Interaction (HCI) concerns the interaction between computers and people, and context awareness (CA) is a very important component of HCI. In particular, when there are sequential or continuous tasks between users and devices, among users, or among devices, it is important to decide the next action using the right CA. To make sound decisions, all context information has to be gathered into a single structure, which we define in this article as the Context-Aware Matrix (CAM). However, making exact decisions is difficult due to problems such as low accuracy, overhead, and bad context injected by attackers. Many researchers have been studying how to solve these problems; moreover, HCI still has weaknesses regarding safe use. In this article, we propose a CAM construction that includes selecting the best server in each area. As a result, mobile users can be served in the best way.
Abstract:
This paper presents a proposal for an automatic vehicle detection and classification (AVDC) system. The proposed AVDC should classify vehicles according to the Portuguese legislation (vehicle height over the first axle and number of axles), and should also support profile-based classification. The AVDC should also fulfill the needs of the Portuguese motorway operator, Brisa. For the profile-based classification we propose the use of Eigenprofiles, a technique based on Principal Component Analysis. The system should also support multi-lane free flow for future integration in this kind of environment.
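The Eigenprofile idea can be sketched as PCA over vehicle height profiles followed by nearest-neighbour classification in the reduced eigenprofile space. The profiles, class names, and dimensions below are synthetic stand-ins for the sensor data an AVDC would capture:

```python
import numpy as np

# Synthetic vehicle height profiles (m) sampled along the vehicle length.
rng = np.random.default_rng(1)
car   = np.concatenate([np.full(10, 1.4), np.full(10, 1.2)])
truck = np.concatenate([np.full(10, 3.5), np.full(10, 3.8)])
train = np.vstack([car + rng.normal(0, 0.05, 20) for _ in range(4)] +
                  [truck + rng.normal(0, 0.05, 20) for _ in range(4)])
classes = ["car"] * 4 + ["truck"] * 4

# PCA: the eigenprofiles are the principal components of the mean-centred profiles.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
eigenprofiles = vt[:2]                       # keep the two leading components

def project(profile):
    """Weights of a profile in eigenprofile space."""
    return eigenprofiles @ (profile - mean)

# Classify a new profile by nearest neighbour among the projected training set.
new = truck + rng.normal(0, 0.05, 20)
weights = np.array([project(p) for p in train])
nearest = int(np.argmin(np.linalg.norm(weights - project(new), axis=1)))
print(classes[nearest])
```

Working in the low-dimensional eigenprofile space rather than on raw profiles is what makes this kind of matching cheap enough for free-flow operation.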
Abstract:
Master's degree in Geotechnical and Geoenvironmental Engineering