925 results for Classification of algebraic curves
Abstract:
Urban regeneration is increasingly a “universal issue” and a crucial factor in the new trends of urban planning. It is no longer only an area of study and research; it has become part of new urban and housing policies. Urban regeneration involves complex decisions as a consequence of the multiple dimensions of the problems, which include special technical requirements, safety concerns, and socio-economic, environmental, aesthetic, and political impacts, among others. This multi-dimensional nature of urban regeneration projects and their large capital investments justify the development and use of state-of-the-art decision support methodologies to assist decision makers. This research focuses on the development of a multi-attribute approach for the evaluation of building conservation status in urban regeneration projects, thus supporting decision makers in their analysis of the problem and in the definition of strategies and intervention priorities. The methods presented can be embedded into a Geographical Information System for visualization of results. A real-world case study was used to test the methodology, and its results are also presented.
Abstract:
Low noise surfaces have been increasingly considered a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement that implements the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those most relevant in predicting the type of pavement, while reducing the computational cost. A set of road pavement segments of different types was tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
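The abstract describes a pipeline of feature extraction from near-field sound, feature selection, and classification. The sketch below illustrates that general pipeline only; the band limits, the SVM classifier, and the ANOVA-based selection are illustrative assumptions, not the paper's actual choices.

```python
# Hypothetical sketch: classify road pavement type from near-field rolling noise.
# Assumes pre-segmented audio clips `segments` (NumPy arrays sampled at `fs`)
# with known pavement labels; feature bands and model choices are illustrative.
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def band_features(x, fs, bands=((200, 500), (500, 1000), (1000, 2000), (2000, 4000))):
    """Mean power in a few frequency bands of the tyre/road noise spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=2048)
    return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

def build_classifier(segments, labels, fs):
    X = np.array([band_features(x, fs) for x in segments])
    y = np.array(labels)
    # Keep only the most discriminative features, then train the classifier.
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=3),
                          SVC(kernel="rbf"))
    return model.fit(X, y)
```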
Abstract:
INTRODUCTION: The correct identification of the underlying cause of death and its precise assignment to a code from the International Classification of Diseases are important issues for achieving accurate and universally comparable mortality statistics. These factors, among others, led to the development of computer software programs to automatically identify the underlying cause of death. OBJECTIVE: This work was conceived to compare the underlying causes of death processed respectively by the Automated Classification of Medical Entities (ACME) and the "Sistema de Seleção de Causa Básica de Morte" (SCB) programs. MATERIAL AND METHOD: The comparative evaluation of the underlying causes of death processed by the ACME and SCB systems was performed using the input data file for the ACME system, which included deaths that occurred in the State of S. Paulo from June to December 1993, totalling 129,104 records of the corresponding death certificates. The differences between underlying causes selected by the ACME and SCB systems verified in the month of June, when considered as SCB errors, were used to correct and improve the SCB processing logic and its decision tables. RESULTS: The processing of the underlying causes of death by the ACME and SCB systems resulted in 3,278 differences, which were analysed and ascribed to lack of answers to dialogue boxes during processing, to deaths due to human immunodeficiency virus [HIV] disease for which there was no specific provision in either system, to coding and/or keying errors, and to actual problems. The detailed analysis of the latter disclosed that the majority of the underlying causes of death processed by the SCB system were correct, that different interpretations were given to the mortality coding rules by each system, that some particular problems could not be explained with the available documentation, and that a smaller proportion of problems were identified as SCB errors. CONCLUSION: These results, disclosing a very low and insignificant number of actual problems, guarantee the use of the version of the SCB system for the Ninth Revision of the International Classification of Diseases and assure the continuity of the work being undertaken for the Tenth Revision version.
Abstract:
This paper describes a methodology developed for the classification of Medium Voltage (MV) electricity customers. Starting from a sample of databases resulting from a monitoring campaign, Data Mining (DM) techniques are used to discover a set of typical MV consumer load profiles and, therefore, to extract knowledge regarding electric energy consumption patterns. In the first stage, several hierarchical clustering algorithms were applied and their clustering performance was compared using adequacy measures. In the second stage, a classification model was developed to allow classifying new consumers into one of the clusters obtained in the previous stage. Finally, the interpretation of the discovered knowledge is presented and discussed.
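The two-stage scheme (hierarchical clustering of load profiles, then a classifier that assigns new consumers to the discovered clusters) can be sketched as below. The data shape, Ward linkage, four clusters, and k-nearest-neighbor classifier are assumptions for illustration, not the paper's settings.

```python
# Illustrative sketch: cluster MV daily load profiles hierarchically, then train
# a classifier to place new consumers into the resulting clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.neighbors import KNeighborsClassifier

def cluster_load_profiles(profiles, n_clusters=4):
    """profiles: array of shape (n_consumers, 24) with normalized daily load curves."""
    Z = linkage(profiles, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

def fit_cluster_classifier(profiles, cluster_labels):
    """Model that assigns a new consumer's load curve to an existing cluster."""
    return KNeighborsClassifier(n_neighbors=5).fit(profiles, cluster_labels)

# Usage: labels = cluster_load_profiles(train_profiles)
#        clf = fit_cluster_classifier(train_profiles, labels)
#        new_cluster = clf.predict(new_profiles)
```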
Abstract:
Chronic liver disease is a progressive, often asymptomatic, and potentially fatal disease. In this paper, a semi-automatic procedure to stage this disease is proposed based on ultrasound liver images and clinical and laboratory data. At the core of the algorithm, two classifiers are used: a k-nearest neighbor and a Support Vector Machine with different kernels. The classifiers were trained with the proposed multi-modal feature set and the results obtained were compared with those of the laboratory and clinical feature set alone. The results showed that using ultrasound-based features in association with laboratory and clinical features improves the classification accuracy. The Support Vector Machine with a polynomial kernel outperformed the other classifiers in every class studied. For the Normal class we achieved 100% accuracy, for chronic hepatitis with cirrhosis 73.08%, for compensated cirrhosis 59.26%, and for decompensated cirrhosis 91.67%.
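A minimal sketch of the comparison described above, assuming a feature matrix X that concatenates ultrasound-derived, laboratory, and clinical features with stage labels y; the classifiers mirror those named in the abstract (kNN vs. SVM with a polynomial kernel), but all parameter values here are illustrative.

```python
# Compare a kNN classifier against a polynomial-kernel SVM on the same features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_classifiers(X, y, cv=5):
    models = {
        "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
        "SVM-poly": make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3)),
    }
    # Mean cross-validated accuracy per model.
    return {name: cross_val_score(m, X, y, cv=cv).mean() for name, m in models.items()}
```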
Abstract:
Purpose: To describe and compare the content of instruments that assess environmental factors using the International Classification of Functioning, Disability and Health (ICF). Methods: A systematic search of the PubMed, CINAHL and PEDro databases was conducted using a pre-determined search strategy. The identified instruments were screened independently by two investigators, and meaningful concepts were linked to the most precise ICF category according to published linking rules. Results: Six instruments were included, containing 526 meaningful concepts. Instruments had between 20% and 98% of their items linked to categories in Chapter 1. The highest percentage of items from one instrument linked to categories in Chapters 2–5 varied between 9% and 50%. The presence or absence of environmental factors in a specific context is assessed in three instruments, while the other three assess the intensity of the impact of environmental factors. Discussion: Instruments differ in their content and type of assessment, and have several items linked to the same ICF category. Most instruments primarily assess products and technology (Chapter 1), highlighting the need to deepen the discussion on the theory that supports the measurement of environmental factors. This discussion should be thorough and lead to the development of methodologies and new tools that capture the underlying concepts of the ICF.
Abstract:
OBJECTIVE: To develop a Charlson-like comorbidity index based on the clinical conditions and weights of the original Charlson comorbidity index. METHODS: Clinical conditions and weights were adapted from the International Classification of Diseases, 10th revision, and applied to a single hospital admission diagnosis. The study included 3,733 patients over 18 years of age who were admitted to a public general hospital in the city of Rio de Janeiro, southeast Brazil, between January 2001 and January 2003. The index distribution was analyzed by gender, type of admission, blood transfusion, intensive care unit admission, age, and length of hospital stay. Two logistic regression models were developed to predict in-hospital mortality including: a) the aforementioned variables and the risk-adjustment index (full model); and b) the risk-adjustment index and patient's age (reduced model). RESULTS: Of all patients analyzed, 22.3% had risk scores >1, and the mortality rate was 4.5% (66.0% of those who died had scores >1). Except for gender and type of admission, all variables were retained in the logistic regression. The models including the developed risk index had an area under the receiver operating characteristic curve of 0.86 (full model) and 0.76 (reduced model). Each unit increase in the risk score was associated with a nearly 50% increase in the odds of in-hospital death. CONCLUSIONS: The risk index developed was able to effectively discriminate the odds of in-hospital death, which can be useful when limited information is available from hospital databases.
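A hedged sketch of the general approach only: map an admission's ICD-10 diagnosis to a Charlson-style weight, then use the resulting score together with age in a logistic model of in-hospital death (the "reduced model" setting). The weight table below is a truncated illustration, not the study's adapted index.

```python
# Charlson-like risk score from a single admission diagnosis + logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

CHARLSON_WEIGHTS = {   # illustrative subset: ICD-10 prefix -> weight
    "I21": 1,          # acute myocardial infarction
    "I50": 1,          # heart failure
    "E10": 1,          # diabetes mellitus
    "C80": 6,          # metastatic malignant neoplasm
}

def risk_score(icd10_code):
    for prefix, weight in CHARLSON_WEIGHTS.items():
        if icd10_code.startswith(prefix):
            return weight
    return 0

def fit_mortality_model(icd10_codes, ages, died):
    """died: 0/1 indicator of in-hospital death per admission."""
    X = np.column_stack([[risk_score(c) for c in icd10_codes], ages])
    return LogisticRegression().fit(X, died)
```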
Abstract:
Optimization problems arise in science, engineering, economics, etc., and the best solution must be found for each setting. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the available algorithms for solving them and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are not known or are very difficult to calculate, fewer methods are available. These kinds of functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which do not require any derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow solving optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods that are typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
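As a minimal sketch of the idea under the chapter's setting (black-box objective f and inequality constraints g_i(x) <= 0), a quadratic-penalty loop solves a sequence of unconstrained subproblems while growing the penalty parameter; SciPy's derivative-free Nelder-Mead stands in here for a generic direct search solver, and the update rule is a plain geometric increase rather than the dynamic scheme the chapter discusses.

```python
# Quadratic penalty method: solve a sequence of unconstrained subproblems.
import numpy as np
from scipy.optimize import minimize

def penalty_method(f, constraints, x0, mu=1.0, growth=10.0, iters=6):
    """constraints: list of callables g with the convention g(x) <= 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        def penalized(z, mu=mu):
            violation = sum(max(0.0, g(z)) ** 2 for g in constraints)
            return f(z) + mu * violation
        # Derivative-free unconstrained solve of the penalized subproblem.
        x = minimize(penalized, x, method="Nelder-Mead").x
        mu *= growth   # increase the penalty parameter between subproblems
    return x

# Example: minimize (x - 2)^2 subject to x <= 1  ->  solution near x = 1
# penalty_method(lambda z: (z[0] - 2) ** 2, [lambda z: z[0] - 1.0], [0.0])
```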
Abstract:
This chapter analyzes the signals captured during impacts and vibrations of a mechanical manipulator. Eighteen signals are captured and several metrics are calculated between them, such as the correlation, the mutual information and the entropy. A sensor classification scheme based on the multidimensional scaling technique is presented.
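The approach of computing pairwise metrics between signals and then applying multidimensional scaling can be illustrated as follows. This is a sketch only: it uses a correlation-based dissimilarity, whereas the chapter also considers mutual information and entropy, and the 2-D embedding dimension is an assumption.

```python
# Build a dissimilarity matrix between captured signals and embed them with MDS.
import numpy as np
from sklearn.manifold import MDS

def correlation_distance_matrix(signals):
    """signals: array of shape (n_sensors, n_samples)."""
    corr = np.corrcoef(signals)
    return 1.0 - np.abs(corr)   # dissimilarity in [0, 1]

def embed_sensors(signals, seed=0):
    dist = correlation_distance_matrix(signals)
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(dist)   # 2-D coordinates, one point per sensor/signal
```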
Abstract:
Mycologia, Vol. 98, No. 6
Abstract:
We define families of aperiodic words associated to Lorenz knots that arise naturally as syllable permutations of symbolic words corresponding to torus knots. An algorithm to construct symbolic words of satellite Lorenz knots is defined. We prove, subject to the validity of a previous conjecture, that Lorenz knots coded by some of these families of words are hyperbolic, by showing that they are neither satellites nor torus knots and making use of Thurston's theorem. Infinite families of hyperbolic Lorenz knots are generated in this way, to our knowledge, for the first time. The techniques used can be generalized to study other families of Lorenz knots.
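For context, the hyperbolicity argument relies on Thurston's classification of knots in the 3-sphere; a standard formulation (stated here as an aid to the reader, not quoted from the paper) is:

```latex
\textbf{Thurston's trichotomy (standard statement).} Every knot $K \subset S^3$
is exactly one of the following: (i) a torus knot; (ii) a satellite knot; or
(iii) a hyperbolic knot, i.e.\ $S^3 \setminus K$ admits a complete
finite-volume hyperbolic metric. Hence, showing that a Lorenz knot is neither
a torus knot nor a satellite knot establishes its hyperbolicity.
```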
Abstract:
The increase in electricity demand in Brazil, the lack of new major hydroelectric reservoirs, and the growth of environmental concerns lead utilities to seek improved system planning to meet these energy needs. The great diversity of economic, social, climatic, and cultural conditions in the country has been making power system planning more difficult. The work presented in this paper concerns the development of an algorithm that aims at studying the influence of the issues mentioned above on load curves. Focus is given to residential consumers. The consumption device with the highest influence on the load curve is also identified. The methodology developed gains increasing importance in system planning and operation, namely in the smart grid context.
Abstract:
Studies were made on the biochemical behavior of 100 strains of P. pestis isolated in Northeastern Brazil with regard to the production of nitrous acid, the reduction of nitrates to nitrites, and the acidification of glycerol. Results showed that 98 strains can be classified as the "orientalis variety", while the remaining two could not be included in any of the existing "varieties".
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.