970 results for Automatic Peak Detection
Abstract:
We propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. Our algorithm works by estimating the displacements from image patches to the (unknown) landmark positions and then integrating them via voting. The fundamental contribution is that we jointly estimate the displacements from all patches to multiple landmarks together, considering not only the training data but also geometric constraints on the test image. The various constraints constitute a convex objective function that can be solved efficiently. Validated on three challenging datasets, our method achieves high accuracy in landmark detection and, combined with a statistical shape model, outperforms state-of-the-art methods in shape segmentation.
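As a rough illustration of the voting idea described above (not the paper's joint convex formulation, which couples all patches and landmarks), here is a minimal Python sketch in which `patch_centers` and `displacements` are hypothetical outputs of a trained displacement regressor:

```python
# Illustrative displacement-voting sketch for a single landmark.
import numpy as np

def vote_landmark(patch_centers, displacements, image_shape):
    """Each patch casts a vote at its predicted landmark position;
    the peak of the accumulated vote map is the estimate."""
    votes = np.zeros(image_shape)
    for (y, x), (dy, dx) in zip(patch_centers, displacements):
        ty, tx = int(round(y + dy)), int(round(x + dx))
        if 0 <= ty < image_shape[0] and 0 <= tx < image_shape[1]:
            votes[ty, tx] += 1
    return np.unravel_index(np.argmax(votes), votes.shape)
```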
Abstract:
Cephalometric analysis is an essential clinical and research tool for orthodontic analysis and treatment planning. This paper presents the evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge, held at the IEEE International Symposium on Biomedical Imaging 2014 with an on-site competition. The challenge was set up to explore and compare automatic landmark detection methods applied to cephalometric X-ray images. Methods were evaluated on a common database of cephalograms from 300 patients aged six to 60 years, collected from the Dental Department, Tri-Service General Hospital, Taiwan, with anatomical landmarks manually marked by two experienced medical doctors as the ground truth data. Quantitative evaluation was performed to compare the results of a representative selection of the methods submitted to the challenge. Experimental results show that three methods achieve detection rates greater than 80% within the 4 mm precision range, but only one method achieves a detection rate greater than 70% within the 2 mm precision range, which is the acceptable precision range in clinical practice. The study provides insights into the performance of different landmark detection approaches under real-world conditions and highlights the achievements and limitations of current image analysis techniques.
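For illustration, a detection rate under a given precision range can be computed as in the minimal sketch below; `mm_per_pixel` is an assumed calibration factor, not a value from the challenge:

```python
import numpy as np

def detection_rate(pred, truth, tol_mm, mm_per_pixel=0.1):
    """Fraction of landmarks predicted within tol_mm of the ground
    truth (the challenge's 2 mm / 4 mm criteria). pred and truth are
    (N, 2) arrays of pixel coordinates; mm_per_pixel is hypothetical."""
    dist_mm = np.linalg.norm(pred - truth, axis=1) * mm_per_pixel
    return float(np.mean(dist_mm <= tol_mm))
```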
Abstract:
Master's final project submitted to obtain the degree of Master in Mechanical Engineering
Abstract:
The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, as they require acquiring the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, with gel-free Ag/AgCl electrodes as the interface with the skin. The collected signal is significantly noisier than ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, comprising the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Using a simple minimum-distance criterion between the test patterns and the enrollment database, results reveal this to be a promising technique for biometric applications.
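A minimal sketch of such a time-domain pipeline is given below, assuming scipy; the filter band, refractory distance, window length, and resampled beat length are illustrative choices, not the paper's parameters:

```python
# Sketch: filtering, R-peak detection, heartbeat segmentation,
# amplitude normalization, time normalization, minimum-distance match.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks, resample

def heartbeat_templates(ecg, fs):
    b, a = butter(4, [1.0, 30.0], btype="band", fs=fs)    # denoise
    clean = filtfilt(b, a, ecg)
    peaks, _ = find_peaks(clean, distance=int(0.4 * fs))  # R peaks
    beats, half = [], int(0.3 * fs)
    for p in peaks:
        if p - half >= 0 and p + half < len(clean):
            beat = clean[p - half:p + half]
            beat = beat / np.max(np.abs(beat))            # amplitude norm.
            beats.append(resample(beat, 200))             # time norm.
    return np.array(beats)

def identify(test_beat, enrollment):
    """enrollment: dict mapping subject id -> mean template beat."""
    return min(enrollment, key=lambda s: np.linalg.norm(test_beat - enrollment[s]))
```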
Abstract:
This work falls within the field of fire safety in buildings and consists of a case study of a fire detection and suppression design for a Data Center. The objectives of this work are a study of the state of the art in automatic fire detection and suppression, the development of a software tool to support the design of gaseous-agent suppression systems, and a study and analysis of fire protection in Data Centers; finally, a case study was carried out. The concepts of fire and of building fires are addressed in a theoretical study of the subject, describing how a fire can start and its consequences. The national regulations on Fire Safety in Buildings (SCIE) are also covered, with particular focus on Automatic Fire Detection Systems (SADI) and Automatic Fire Suppression Systems (SAEI); the national and international standards on the subject are also mentioned. Because they are highly relevant to this work, fire detection systems are covered exhaustively, describing the characteristics of detection equipment and the most widely used techniques, as well as the aspects to consider when sizing a SADI. As for fire suppression media, the most widely used today are reviewed, together with their advantages and the types of fire to which they apply, with special emphasis on SAEI using inert gases, for which the sizing of such a system is described. Data Centers are also characterized, so that their functions, the importance of their existence, and the general aspects of fire protection in these facilities can be understood. Finally, a case study was developed: a SADI was designed together with a SAEI using nitrogen as the suppression gas. The choices and the selected systems were duly justified, taking into account the regulations and standards in force.
Abstract:
With the widespread everyday use of technology, localization systems have grown in popularity, owing to the wide range of features they provide and the applications they serve. However, most positioning systems do not work properly in indoor environments, hindering the development of localization applications in such settings. Accelerometers are widely used in inertial localization systems for the information they provide about the accelerations experienced by a body. In this work, through the analysis of the acceleration signal coming from an accelerometer, we propose a step-detection technique that, in future applications, can serve as a resource for computing the user's position inside a building. The aim of this work is thus to contribute to the analysis and identification of the acceleration signal obtained at a foot, in order to determine the duration of a step and the number of steps taken. To this end, a set of 12 acceleration recordings (for normal walking, fast walking, and running), collected by a mobile system from an accelerometer, was analyzed in Matlab. From this exploratory study, we derived an algorithm based on peak detection together with median and low-pass Butterworth filters for step counting, which produced good results. To validate these findings, a set of experimental tests was then carried out on 33 newly collected walking and running recordings. The variables under study were the number of steps taken, the mean step and stride durations, and the error percentage. With the proposed step-counting method, an error of 1% was obtained over the full set of recordings of 20, 100, 500, and 1000 steps. Despite the difficulties observed in analyzing the running acceleration signals, the proposed algorithm performed well, achieving values close to those expected. The results show that the study objective was successfully achieved. Further research is nevertheless suggested in order to extend these results in other directions.
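A minimal sketch of such a step counter is given below, assuming scipy in place of the Matlab implementation described; the median kernel size, the 3 Hz cutoff, and the peak height/distance thresholds are illustrative assumptions, not the values derived in the study:

```python
# Sketch: median filter, low-pass Butterworth filter, peak detection.
import numpy as np
from scipy.signal import medfilt, butter, filtfilt, find_peaks

def count_steps(accel_mag, fs):
    """accel_mag: magnitude of the foot acceleration signal (m/s^2)."""
    smooth = medfilt(accel_mag, kernel_size=5)           # spike removal
    b, a = butter(2, 3.0, btype="low", fs=fs)            # ~3 Hz gait band
    low = filtfilt(b, a, smooth)
    peaks, _ = find_peaks(low, height=low.mean() + low.std(),
                          distance=int(0.3 * fs))        # >= 0.3 s apart
    step_durations = np.diff(peaks) / fs                 # seconds per step
    return len(peaks), step_durations
```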
Abstract:
Thesis submitted in fulfillment of the requirements for the degree of Master in Biomedical Engineering
Abstract:
Human astroviruses (HAstV) have been increasingly identified as important etiological agents of acute gastroenteritis in children up to five years old. The aim of this study was to determine the prevalence and genotype diversity of HAstV in children with symptomatic and asymptomatic infections in São Luís, Maranhão, Brazil. From June 1997 to July 1999, a total of 183 fecal samples (84 from symptomatic and 99 from asymptomatic children) were tested for HAstV by enzyme immunoassay (EIA). Prevalence rates were 11% and 3% for symptomatic and asymptomatic children, respectively. Reverse transcription-polymerase chain reaction (RT-PCR) was carried out on 46 specimens (26 symptomatic and 20 asymptomatic), including the 12 samples that were positive by EIA. The overall positivity yielded by both methods was 8% (15/184): 11% (9/84) for symptomatic children and 5% (5/99) for those without symptoms or signs. Sequence analysis of the amplicons revealed that the HAstV-1 genotype was the most prevalent, accounting for 60% of isolates. Genotypes 2, 3, 4, and 5 were also detected, with a single isolate (10%) each. Variations in the sequences were observed when the Brazilian isolates were compared to prototype strains identified in the United Kingdom. No seasonal pattern of occurrence was observed during the two years of the study, and the peak detection rate was observed in children aged between 3 and 6 months in the symptomatic group and between 18 and 24 months in the controls.
Abstract:
The main information sources for studying a particular piece of music are symbolic scores and audio recordings. These are complementary representations of the piece, and it is very useful to have a proper linking of the musically meaningful events between the two. For makam music of Turkey, linking the available scores with the corresponding audio recordings requires taking the specificities of this music into account, such as the particular tunings, the extensive usage of non-notated expressive elements, and the way in which the performer repeats fragments of the score. Moreover, for most pieces of the classical repertoire, there is no score written by the original composer. In this paper, we propose a methodology to pair sections of a score with the corresponding fragments of audio recordings of performances. The pitch information obtained from both sources is used as the common representation to be paired. From an audio recording, fundamental frequency estimation and tuning analysis are performed to compute a pitch contour. From the corresponding score, symbolic note names and durations are converted to a synthetic pitch contour. A linking operation is then performed between these pitch contours in order to find the best correspondences. The method is tested on a dataset of 11 compositions spanning 44 audio recordings, which are mostly monophonic. F3-scores of 82% and 89% are obtained with automatic and semi-automatic karar detection, respectively, showing that the methodology may provide a needed tool for further computational tasks such as form analysis, audio-score alignment, and makam recognition.
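As an illustration of the score-side representation, a synthetic pitch contour can be built from note names and durations as sketched below; the note-to-frequency lookup table and hop size are toy assumptions, and makam-specific tunings are not modeled here:

```python
# Sketch: symbolic notes -> synthetic pitch contour (Hz per frame).
import numpy as np

NOTE_HZ = {"A4": 440.0, "B4": 493.88, "C5": 523.25}  # toy lookup table

def synthetic_pitch_contour(notes, hop_s=0.01):
    """notes: list of (note_name, duration_s) pairs from the score.
    Returns a frame-rate contour comparable to an audio f0 track."""
    contour = []
    for name, dur in notes:
        contour.extend([NOTE_HZ[name]] * int(round(dur / hop_s)))
    return np.array(contour)
```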
Abstract:
EEG recordings are usually corrupted by spurious extra-cerebral artifacts, which should be rejected or cleaned up by the practitioner. Since manual screening of human EEGs is inherently error-prone and might induce experimental bias, automatic artifact detection is an issue of importance, and it is the best guarantee of objective and clean results. We present a new approach, based on the time-frequency shape of muscular artifacts, to achieve reliable and automatic scoring. This methodology makes it possible to evaluate the impact of muscular activity on the signal while placing emphasis on the analysis of EEG activity. The method is used to discriminate evoked potentials from several types of recorded muscular artifacts, with a sensitivity of 98.8% and a specificity of 92.2%. Automatic cleaning of EEG data is then successfully realized using this method combined with independent component analysis. The outcome of the automatic cleaning is then compared with the Slepian multitaper spectrum based technique introduced by Delorme et al (2007 Neuroimage 34 1443-9).
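A hedged sketch of one way to score segments by their time-frequency content is shown below; broadband high-frequency power is characteristic of muscular (EMG) activity, unlike most cerebral EEG. The 20 Hz split and the threshold are illustrative assumptions, not the paper's criterion:

```python
# Sketch: spectrogram-based muscular artifact scoring.
import numpy as np
from scipy.signal import spectrogram

def emg_score(eeg, fs):
    f, t, S = spectrogram(eeg, fs=fs, nperseg=int(fs))
    hi = S[f >= 20.0].sum(axis=0)      # broadband EMG-dominated band
    lo = S[f < 20.0].sum(axis=0)       # EEG-dominated band
    return hi / (hi + lo + 1e-12)      # per-segment artifact score

def is_muscular(eeg, fs, thresh=0.5):
    return emg_score(eeg, fs).mean() > thresh
```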
Abstract:
Background: Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection) and C4.5 methodologies for calculating the probability of hospital mortality, and comparison of the results with the APACHE II, SAPS II and MPM II-24 scores and with a model based on multiple logistic regression (LR). Methods: Retrospective study of 2864 patients, randomly partitioned (70:30) into a Development Set (DS, n = 1808) and a Validation Set (VS, n = 808). Discrimination was compared using the ROC curve (AUC, 95% CI) and the percentage of correct classification (PCC, 95% CI); calibration was compared using the calibration curve and the Standardized Mortality Ratio (SMR, 95% CI). Results: The CTs were produced with different selections of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules) and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow score, age, (A-a)O2 gradient and antecedent of chronic illness. In the VS, all the models achieved acceptable discrimination, with AUC above 0.7: CART (0.75 (0.71-0.81)), CHAID (0.76 (0.72-0.79)) and C4.5 (0.76 (0.73-0.80)). PCC: CART (72 (69-75)), CHAID (72 (69-75)) and C4.5 (76 (73-79)). Calibration (SMR) was better in the CTs: CART (1.04 (0.95-1.31)), CHAID (1.06 (0.97-1.15)) and C4.5 (1.08 (0.98-1.16)). Conclusion: Different CT methodologies generate trees with different selections of variables and decision rules. The CTs are easy to interpret and stratify the risk of hospital mortality, and they should be taken into account for classifying the prognosis of critically ill patients.
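For illustration, a CART-style tree with a 70:30 split and AUC/SMR evaluation can be sketched with scikit-learn as below; the tree depth and variable handling are assumptions, not the study's settings (the study used dedicated CART/CHAID/C4.5 implementations):

```python
# Sketch: decision tree for hospital mortality, 70:30 split, AUC + SMR.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

def fit_mortality_tree(X, y):
    X_dev, X_val, y_dev, y_val = train_test_split(
        X, y, test_size=0.3, random_state=0)       # development/validation
    tree = DecisionTreeClassifier(max_depth=4).fit(X_dev, y_dev)
    p_val = tree.predict_proba(X_val)[:, 1]        # predicted mortality
    auc = roc_auc_score(y_val, p_val)              # discrimination
    smr = y_val.mean() / p_val.mean()              # observed / expected deaths
    return tree, auc, smr
```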
Abstract:
Electroencephalographic (EEG) recordings are most of the time corrupted by spurious artifacts, which should be rejected or cleaned by the practitioner. As human scalp EEG screening is error-prone, automatic artifact detection is an issue of capital importance to ensure objective and reliable results. In this paper we propose a new approach for discriminating muscular activity in human scalp quantitative EEG (QEEG), based on time-frequency shape analysis. The impact of muscular activity on the EEG can be evaluated with this methodology. We present an application of this scoring as a preprocessing step for EEG signal analysis, in order to evaluate the amount of muscular activity in two sets of EEG recordings: dementia patients at an early stage of Alzheimer's disease and age-matched control subjects.
Abstract:
Marketing scholars have suggested a need for more empirical research on consumer response to malls, in order to better understand the variables that explain consumer behavior. The CHAID (Chi-square Automatic Interaction Detection) segmentation methodology was used to identify the profiles of consumers with regard to their activities at malls, on the basis of socio-demographic and behavioral variables (how and with whom they go to the malls). A sample of 790 subjects answered an online questionnaire, and CHAID analysis of the results was used to identify the consumer profiles. Among the variables analyzed, the transport used to go shopping and the frequency of visits to the centers are the main predictors of behavior in malls. The results provide guidelines for developing effective strategies to attract consumers to malls and retain them there.
Abstract:
Nowadays, land use/land cover (LULC) maps at a regional scale are usually generated from satellite images of moderate resolution (between 10 m and 30 m). The National Land Cover Database in the United States and the CORINE (Coordination of Information on the Environment) Land Cover program in Europe, both based on LANDSAT images, are representative examples. However, these maps quickly become obsolete, especially in dynamic environments such as megacities and metropolitan territories. For many applications, these maps need to be updated on an annual basis. Since 2007, the USGS has provided free access to ortho-rectified LANDSAT images, both archived (since 1984) and recently acquired. Without doubt, such image availability will stimulate research on fast and efficient methods and techniques for continuous monitoring of LULC changes from medium-resolution images. This research aimed to evaluate the potential of such medium-resolution satellite images for obtaining information on LULC changes at a regional scale in the case of the Communauté Métropolitaine de Montréal (CMM), a typical North American metropolis. Previous studies have shown that the results of automatic change detection depend on several factors, such as: 1) image characteristics (spatial resolution, spectral bands, etc.); 2) the change detection method itself; and 3) the complexity of the studied environment. In the studied area, except for the downtown core and commercial arteries, land uses (industrial, commercial, residential, etc.) are well delimited. This study therefore focused on the other factors that can affect change detection results, namely image characteristics and change detection methods. We used LANDSAT TM/ETM+ images at 30 m spatial resolution with six spectral bands, as well as ASTER-VNIR images at 15 m spatial resolution with three spectral bands, to evaluate the impact of image characteristics on change detection results. Regarding the change detection method, we decided to compare two types of automatic techniques: (1) techniques providing information mainly on the location of changes, and (2) techniques providing information on both the location of changes and the types of change ("from-to" classes). The main conclusions of this research are as follows. Change detection techniques such as image differencing or change vector analysis applied to multi-temporal LANDSAT images provide an accurate picture of where change has occurred, quickly and efficiently. They can therefore be integrated into a continuous monitoring system for rapid assessment of the volume of change. Change maps can also guide the acquisition of high spatial resolution images when detailed identification of the type of change is needed.
Change detection techniques such as principal component analysis and post-classification comparison applied to multi-temporal LANDSAT images provide a relatively accurate picture of "from-to" classes, but at a very general thematic level (for example, built-up to green space and vice versa, woodland to bare soil and vice versa, etc.). ASTER-VNIR images, with better spatial resolution but fewer spectral bands than LANDSAT, do not offer a more detailed thematic level (for example, woodland to commercial or industrial space). The results indicate that future research on change detection in urban environments should focus on vegetation cover changes, since medium-resolution images are very sensitive to changes in this type of cover. Maps showing the location and type of vegetation cover changes are in themselves very useful for applications such as environmental monitoring or urban hydrology. They can also serve as indicators of land use change. Techniques such as change vector analysis or vegetation indices are employed for this purpose.
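As an illustration of the first family of techniques, change vector analysis between two co-registered multispectral images can be sketched as below; the decision threshold is an assumption, and the study's exact procedure is not reproduced:

```python
# Sketch: change vector analysis on (bands, rows, cols) image stacks.
import numpy as np

def change_vector_analysis(img_t1, img_t2, k=2.0):
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.sqrt((diff ** 2).sum(axis=0))   # per-pixel change magnitude
    thresh = magnitude.mean() + k * magnitude.std()
    return magnitude > thresh                      # boolean change mask
```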
Abstract:
Traditionally, the real world has been reproduced for us through flat images. These images used to be materialized as paintings on canvas or as drawings. Today, fortunately, we can still see handmade paintings, although most images are acquired with cameras and are either shown directly to an audience, as in cinema, television, or photography exhibitions, or processed by a computerized system to obtain a particular result. Such processing is applied in fields such as industrial quality control or cutting-edge research in artificial intelligence. By applying mid-level processing algorithms, 3D images can be obtained from 2D images, using well-known techniques called Shape From X, where X denotes the method used to obtain the third dimension and varies with the technique employed for that purpose. Although the evolution toward the 3D camera began in the 1990s, the techniques for obtaining three-dimensional shapes need to become more and more accurate. The applications of 3D scanners have grown considerably in recent years, especially in fields such as leisure, computer-assisted diagnosis/surgery, robotics, etc. One of the most widely used techniques for obtaining 3D information from a scene is triangulation, and more specifically, the use of three-dimensional laser scanners. Since their formal appearance in scientific publications in 1971 [SS71], there have been contributions to solving inherent problems such as reducing occlusions, improving accuracy, acquisition speed, shape description, etc. Each and every method for obtaining 3D points from a scene has an associated calibration process, and this process plays a decisive role in the performance of a three-dimensional acquisition device. The aim of this thesis is to address the problem of 3D shape acquisition from a comprehensive standpoint, reporting a state of the art on triangulation-based laser scanners, testing the operation and performance of different systems, making contributions to improve the accuracy of laser beam detection, especially under adverse conditions, and solving the calibration problem using projective geometric methods.
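As an illustration of laser peak detection, one classical sub-pixel estimator is the centre of mass around the brightest pixel of each image row, sketched below; this is a generic detector under simple assumptions, not necessarily the method contributed in the thesis:

```python
# Sketch: sub-pixel laser-stripe peak detection, one peak per row.
import numpy as np

def stripe_peaks(image, window=5):
    """image: 2D grayscale frame; returns the sub-pixel column of the
    laser stripe for each row, via centre of mass around the maximum."""
    cols = np.arange(image.shape[1])
    peaks = []
    for row in image.astype(float):
        c = int(np.argmax(row))                          # coarse peak
        lo, hi = max(0, c - window), min(len(row), c + window + 1)
        w = row[lo:hi]
        peaks.append((cols[lo:hi] * w).sum() / (w.sum() + 1e-12))
    return np.array(peaks)
```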