995 results for Classification tests
Abstract:
Urban regeneration is increasingly a “universal issue” and a crucial factor in the new trends of urban planning. It is no longer only an area of study and research; it has become part of new urban and housing policies. Urban regeneration involves complex decisions as a consequence of the multiple dimensions of the problems involved, which include special technical requirements, safety concerns, and socio-economic, environmental, aesthetic, and political impacts, among others. This multi-dimensional nature of urban regeneration projects, together with their large capital investments, justifies the development and use of state-of-the-art decision support methodologies to assist decision makers. This research focuses on the development of a multi-attribute approach for the evaluation of building conservation status in urban regeneration projects, thus supporting decision makers in their analysis of the problem and in the definition of strategies and priorities of intervention. The methods presented can be embedded into a Geographical Information System for visualization of results. A real-world case study was used to test the methodology, and its results are also presented.
Abstract:
This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly, and is defined as a diagnostic performance study. The diagnostic accuracy of five visual tests (near point of convergence, near point of accommodation, stereopsis, contrast sensitivity and Amsler grid) was evaluated by means of receiver operating characteristic (ROC) curves, sensitivity, specificity, and positive and negative likelihood ratios (LR+/LR-). Visual acuity was used as the reference standard. A sample of 44 institutionalized elderly people, with a mean age of 76.7 years (±9.32), was collected. The contrast sensitivity and stereopsis curves were the most accurate (areas under the curve were 0.814, p = 0.001, 95% C.I. [0.653; 0.975], and 0.713, p = 0.027, 95% C.I. [0.540; 0.887], respectively). The scores with the best diagnostic validity for the stereopsis test were 0.605 (sensitivity 0.87, specificity 0.54; LR+ 1.89, LR- 0.24) and 0.610 (sensitivity 0.81, specificity 0.54; LR+ 1.75, LR- 0.36). The score with the best diagnostic validity for the contrast sensitivity test was 0.530 (sensitivity 0.94, specificity 0.69; LR+ 3.04, LR- 0.09). The contrast sensitivity and stereopsis tests proved to be clinically useful in detecting vision loss in the elderly.
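The validity measures reported in this abstract (sensitivity, specificity, LR+ and LR-) all derive from a 2x2 table of test outcomes against the reference standard. A minimal sketch in Python, using hypothetical counts rather than the study's data:

```python
def diagnostic_validity(tp, fn, fp, tn):
    """Compute sensitivity, specificity and likelihood ratios from a
    2x2 table of screening-test results vs. the reference standard."""
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    lr_pos = sensitivity / (1 - specificity)   # LR+: how much a positive result raises the odds
    lr_neg = (1 - sensitivity) / specificity   # LR-: how much a negative result lowers the odds
    return sensitivity, specificity, lr_pos, lr_neg

# hypothetical counts, for illustration only
sens, spec, lrp, lrn = diagnostic_validity(tp=15, fn=1, fp=9, tn=19)
```

A test with LR+ well above 1 and LR- near 0, as the contrast sensitivity cut-off above (LR+ 3.04, LR- 0.09), shifts the post-test probability usefully in both directions.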
Abstract:
The aging of the Portuguese population is characterized by an increase in individuals older than 65 years. Preventable visual loss in older persons is an important public health problem. Tests used for vision screening should have a high degree of diagnostic validity, confirmed by means of clinical trials. The primary aim of a screening program is the early detection of visual diseases. Between 20% and 50% of older people in the UK have undetected reduced vision, and in most cases it is correctable. Elderly patients do not receive a systematic eye examination unless a problem arises with their glasses or vision loss is suspected. This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly. Furthermore, it intends to define the ability to identify subjects affected by vision loss as positive and subjects not affected as negative. The ideal vision screening method should have high sensitivity and specificity for early detection of risk factors. It should also be low cost and easy to implement in all geographic and socioeconomic regions. Sensitivity is the ability of an examination to identify the presence of a given disease, and specificity is the ability of the examination to identify its absence. It was not an aim of this study to detect abnormalities that affect visual acuity. The aim of this study was to find out which test is best for the identification of any vision loss.
Abstract:
Low noise surfaces have been increasingly considered a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavement was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement implementing the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those most relevant in predicting the type of pavement, while reducing the computational cost. A set of road segments with different pavement types was tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
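The pipeline described in this abstract (extract features from the sound profile, then predict the pavement type) can be sketched with a simple nearest-centroid classifier. The pavement labels, feature values and two-feature representation below are invented placeholders, not the paper's CPX measurements or its actual learning method:

```python
import math

def centroid(vectors):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: {pavement_type: [feature_vector, ...]} -> per-class centroids."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, features):
    """Assign the pavement type whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], features))

# invented noise-level features (e.g. band levels in dB) for two pavement types
model = train({
    "dense asphalt":  [[72.0, 68.0], [73.0, 67.5]],
    "porous asphalt": [[65.0, 61.0], [64.5, 60.0]],
})
label = classify(model, [71.5, 67.0])  # closest to the "dense asphalt" centroid
```

Running the classifier frame by frame along a journey, as the abstract describes, would yield a sequence of pavement labels that can then be plotted against GPS coordinates.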
Abstract:
The work presented here aims to describe the creation of a model to support a decision support system on the risk inherent in the execution of projects in the Information Technology (IT) area, using data mining techniques. Throughout a project's life cycle, countless factors contribute to its success or failure. The responsibility for monitoring, anticipating and mitigating those factors falls on the Project Manager. Project management is a difficult and costly task: it consumes many resources, depends on numerous variables and, often, on the Project Manager's own experience. When confronted with estimates of the duration and effort required to execute a given task, the Project Manager has no objective way, beyond personal perception and intuition, of measuring the plausibility of the values presented by the task's prospective executor. These estimates are fundamental to the organization, since they underpin decisions on corporate strategic planning, execution, postponement, cancellation, contract award, renegotiation of scope and outsourcing, among others. This propensity for deviation, when detected at an early stage, can help to better manage the risk associated with project management. The success of each completed project was rated by weighting three factors: the deviation from budget, the deviation from plan and the deviation from specification. By analysing past projects and correlating some of their attributes with their degree of success, the model qualitatively classifies a new project with respect to its risk. In this context, risk represents the project's degree of departure from success.
Using data mining algorithms such as classification trees and neural networks, the development of a model supporting a decision support system based on the classification of new projects is described. The models are the result of an extensive set of validation tests in which the indicators that best characterize a project's attributes and most influence its risk are sought and refined. Weka 3 was used as the technological platform for development and testing. Proper use of the proposed model will enable the creation of more detailed contingency plans and closer management of projects that show a greater propensity for risk. The final result is thus intended to be one more tool at the Project Manager's disposal.
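The three weighted deviation factors described in this abstract lend themselves to a very small illustration. The sketch below uses a hand-written weighted-score rule, not the mined classification trees or neural networks of the actual work, and its weights and thresholds are invented:

```python
def project_risk(budget_dev, schedule_dev, scope_dev):
    """Qualitative risk from the three deviation factors (fractions, e.g.
    0.30 = 30% over budget). Weights and cut-offs are illustrative only."""
    score = 0.4 * budget_dev + 0.4 * schedule_dev + 0.2 * scope_dev
    if score < 0.10:
        return "low"
    if score < 0.25:
        return "medium"
    return "high"

# a project 30% over budget, 20% behind schedule, but on spec
risk = project_risk(budget_dev=0.30, schedule_dev=0.20, scope_dev=0.0)
```

A mined classification tree would replace these fixed weights with thresholds learned from the attributes of completed projects.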
Abstract:
INTRODUCTION: The correct identification of the underlying cause of death and its precise assignment to a code from the International Classification of Diseases are important for achieving accurate and universally comparable mortality statistics. These factors, among others, led to the development of computer software programs to automatically identify the underlying cause of death. OBJECTIVE: This work was conceived to compare the underlying causes of death processed respectively by the Automated Classification of Medical Entities (ACME) and the "Sistema de Seleção de Causa Básica de Morte" (SCB) programs. MATERIAL AND METHOD: The comparative evaluation of the underlying causes of death processed by the ACME and SCB systems was performed using the input data file for the ACME system, which included deaths that occurred in the State of S. Paulo from June to December 1993, totalling 129,104 records of the corresponding death certificates. The differences between the underlying causes selected by the ACME and SCB systems verified in the month of June, when considered SCB errors, were used to correct and improve the SCB processing logic and its decision tables. RESULTS: The processing of the underlying causes of death by the ACME and SCB systems resulted in 3,278 differences, which were analysed and ascribed to the lack of answers to dialogue boxes during processing, to deaths due to human immunodeficiency virus [HIV] disease, for which there was no specific provision in either system, to coding and/or keying errors, and to actual problems. The detailed analysis of the latter disclosed that the majority of the underlying causes of death processed by the SCB system were correct, that the two systems interpreted some mortality coding rules differently, that some particular problems could not be explained with the available documentation, and that a smaller proportion of problems were identified as SCB errors.
CONCLUSION: These results, disclosing a very low and insignificant number of actual problems, warrant the use of this version of the SCB system for the Ninth Revision of the International Classification of Diseases and assure the continuity of the work being undertaken for the Tenth Revision version.
Abstract:
Multiple-choice items are used in many different kinds of tests in several areas of knowledge. They can be considered an interesting tool for self-assessment, or an alternative or complementary instrument to the traditional methods of assessing knowledge. The objectivity and accuracy of multiple-choice tests are important reasons to consider them; they are especially useful when the number of students to evaluate is very large. Moodle (Modular Object-Oriented Dynamic Learning Environment) is an open-source course management system centered on learners' needs and designed to support collaborative approaches to teaching and learning. Moodle offers its users a rich interface, context-specific help buttons, and a wide variety of tools, such as discussion forums, wikis, chat, surveys, quizzes, glossaries, journals, grade books and more, that allow them to learn and collaborate in a truly interactive space. By bringing together the interactivity of the Moodle platform and the objectivity of this kind of test, one can easily build manifold random tests. The purpose of this paper is to relate our journey in the construction of these tests and to share our experience in using the Moodle platform to create, take advantage of and improve multiple-choice tests in the Mathematics area.
Abstract:
The purpose of this paper is to analyse whether multiple-choice tests may be considered an interesting alternative for assessing knowledge, particularly in the Mathematics area, as opposed to traditional methods such as open-question exams. In this sense we illustrate some opinions of researchers in this area. People often perceive this kind of exam as easy to create, but this is not true: constructing well-written tests is hard work and demands writing ability from teachers. Our proposal is to analyse the construction difficulties of multiple-choice tests, as well as some advantages and limitations of this type of test. We also review the frequent criticisms and worries voiced since this objective format came into use. Finally, in this context, some examples of multiple-choice items in the Mathematics area are given, and we illustrate how we can take advantage of and improve this kind of test.
Abstract:
The attached document is the post-print version (the version corrected by the publisher).
Abstract:
This paper presents an integrated system for vehicle classification. The system aims to classify vehicles using different approaches: 1) based on the height of the first axle and the number of axles; 2) based on volumetric measurements; and 3) based on features extracted from the captured image of the vehicle. The system uses a laser sensor for measurements and a set of image analysis algorithms to compute some visual features. By combining different classification methods, it is shown that the system improves its accuracy and robustness, enabling its use in more difficult environments while satisfying the requirements established by the Portuguese motorway contractor BRISA.
Abstract:
In music genre classification, most approaches rely on statistical characteristics of low-level features computed on short audio frames. These methods implicitly assume that frames carry equally relevant information loads and that either individual frames, or distributions thereof, somehow capture the specificities of each genre. In this paper we study the representation space defined by short-term audio features with respect to class boundaries, and compare different processing techniques to partition this space. These partitions are evaluated in terms of accuracy on two genre classification tasks, with several types of classifiers. Experiments show that a randomized and unsupervised partition of the space, used in conjunction with a Markov model classifier, leads to accuracies comparable to the state of the art. We also show that unsupervised partitions of the space tend to create fewer hubs.
Abstract:
This paper describes a methodology developed for the classification of Medium Voltage (MV) electricity customers. Starting from a sample database resulting from a monitoring campaign, Data Mining (DM) techniques are used to discover a set of typical load profiles of MV consumers and, therefore, to extract knowledge about electric energy consumption patterns. In the first stage, several hierarchical clustering algorithms were applied and their clustering performance compared using adequacy measures. In the second stage, a classification model was developed to allow classifying new consumers into one of the clusters obtained in the previous process. Finally, the interpretation of the discovered knowledge is presented and discussed.
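The first stage described in this abstract, hierarchical clustering of load profiles, can be sketched with a bottom-up (agglomerative) pass using single linkage. The normalized daily profiles below are invented, and the paper's actual algorithms and adequacy measures are not reproduced:

```python
import math

def euclid(a, b):
    """Euclidean distance between two load profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(c1, c2):
    """Distance between two clusters: minimum pairwise profile distance."""
    return min(euclid(p, q) for p in c1 for q in c2)

def agglomerate(profiles, k):
    """Start from singleton clusters and repeatedly merge the two
    closest clusters until only k remain."""
    clusters = [[p] for p in profiles]
    while len(clusters) > k:
        pairs = [(single_linkage(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)          # closest pair of clusters
        clusters[i] = clusters[i] + clusters[j]  # merge j into i
        del clusters[j]
    return clusters

# invented normalized load profiles (4 time steps each):
# two evening-peak consumers and two midday-valley consumers
profiles = [(0.20, 0.90, 0.80, 0.30), (0.25, 0.85, 0.80, 0.35),
            (0.70, 0.40, 0.40, 0.70), (0.72, 0.38, 0.42, 0.68)]
clusters = agglomerate(profiles, k=2)
```

The cluster centroids obtained this way would play the role of the typical load profiles against which new consumers are classified in the second stage.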
Abstract:
The growing importance and influence of new resources connected to power systems have caused many changes in their operation. Environmental policies and several well-known advantages have made renewable-based energy resources widely disseminated. These resources, including Distributed Generation (DG), are being connected at lower voltage levels, where Demand Response (DR) must also be considered. These changes increase the complexity of system operation, due both to new operational constraints and to the amounts of data to be processed. Virtual Power Players (VPP) are entities able to manage these resources. Addressing these issues, this paper proposes a methodology to support VPP actions when the VPP acts as a Curtailment Service Provider (CSP) that provides DR capacity to a DR program declared by the Independent System Operator (ISO) or by the VPP itself. The amount of DR capacity that the CSP can assure is determined using data mining techniques applied to a database obtained for a large set of operation scenarios. The paper includes a case study based on 27,000 scenarios considering a diversity of distributed resources in a 33-bus distribution network.
Abstract:
Master's degree in Radiation Applied to Health Technologies - Specialization: Digital X-ray Imaging.
Abstract:
Master's degree in Geotechnical and Geoenvironmental Engineering