959 results for automatic methods


Relevance:

100.00%

Publisher:

Abstract:

This paper presents three methods for the automatic detection of dust devil tracks in images of Mars. The methods are mainly based on Mathematical Morphology, and their performance is analyzed and compared. A dataset of 21 images from the surface of Mars, representative of the diversity of those track features, was used for developing, testing, and evaluating the methods, comparing their outputs against manually produced ground-truth images. Methods 1 and 3, based on closing top-hat and path-closing top-hat, respectively, showed similar mean accuracies of around 90%, but processing time was much greater for method 1 than for method 3. Method 2, based on radial closing, was the fastest but showed the worst mean accuracy; accuracy was therefore the deciding factor. © 2011 Springer-Verlag.
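By way of illustration only, a closing top-hat for dark features can be computed in a few lines; this is a minimal sketch with scikit-image, assuming a grayscale Mars image scaled to [0, 1] and hypothetical structuring-element and threshold values, not the authors' implementation:

```python
from skimage import morphology

def detect_dark_tracks(image, disk_radius=15, threshold=0.1):
    """Highlight dark features such as dust devil tracks with a
    closing top-hat (black top-hat): closing(image) - image.
    Assumes a grayscale float image in [0, 1]."""
    footprint = morphology.disk(disk_radius)           # structuring element
    tophat = morphology.black_tophat(image, footprint)
    return tophat > threshold                          # binary track mask
```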

Relevance:

70.00%

Publisher:

Abstract:

Purpose
This study was designed to investigate methods to help patients suffering from unilateral tinnitus synthesize an auditory replica of their tinnitus.

Materials and methods
Two semi-automatic methods (A and B) derived from the auditory threshold of the patient, and a method (C) combining a pure tone and a narrow band-pass noise centred on an adjustable frequency, were devised and rated on their likeness to the patient's tinnitus over two test sessions. A third test evaluated the stability over time of the synthesized tinnitus replica built with method C, and its proneness to merge with the patient's tinnitus. Patients were then asked to try to control the lateralisation of this single percept by adjusting the level of the tinnitus replica.

Results
The first two tests showed that seven out of ten patients chose the tinnitus replica built with method C as their preferred one. The third test, performed on twelve patients, revealed that pitch tuning was rather stable over a one-week interval. It showed that eight patients were able to consistently match the central frequency of the synthesized tinnitus (presented to the contralateral ear) to their own tinnitus, which led to a unique tinnitus percept. The lateralisation displacement was consistent across patients, with an average range of 29 dB required for a full lateral shift from the ipsilateral to the contralateral side.

Conclusions
Although spectrally simpler than the semi-automatic methods, method C could replicate patients' tinnitus to some extent. When a unique percept merging the synthesized tinnitus and the patient's tinnitus arose, lateralisation of this percept was achieved.
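For illustration, a method C-style stimulus (a pure tone plus narrow band-pass noise centred on an adjustable frequency) can be synthesized as in the following sketch; the level and bandwidth defaults are hypothetical and not taken from the study:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def tinnitus_replica(fc_hz, fs=44100, dur_s=2.0,
                     tone_level=0.5, noise_level=0.5, bw_octaves=0.25):
    """Pure tone plus narrow band-pass noise, both centred on fc_hz."""
    t = np.arange(int(fs * dur_s)) / fs
    tone = tone_level * np.sin(2 * np.pi * fc_hz * t)
    lo = fc_hz * 2 ** (-bw_octaves / 2)
    hi = fc_hz * 2 ** (bw_octaves / 2)
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    noise = sosfilt(sos, np.random.randn(t.size))
    noise *= noise_level / np.max(np.abs(noise))   # set noise level
    return tone + noise
```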

Relevance:

70.00%

Publisher:

Abstract:

In this work, a comprehensive review of the automatic analysis of Proteomics and Genomics images is presented. Special emphasis is given to a particularly complex kind of image produced by a technique called Two-Dimensional Gel Electrophoresis (2-DE), containing thousands of spots (or blobs). Automatic methods for the detection, segmentation, and matching of blob-like features are discussed and proposed. In particular, a very robust procedure was achieved for processing 2-DE images, consisting mainly of two steps: a) a new, highly reliable approach for the automatic detection and segmentation of spots, based on the Watershed Transform, requiring no foreknowledge of spot shape or size and no user intervention; b) a new method for spot matching, based on image registration, that performs well under both global and local distortions. The results of the proposed methods are compared to state-of-the-art academic and commercial products.
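As a rough sketch of watershed-based spot segmentation (one common formulation, not necessarily the procedure proposed in the paper), assuming a grayscale gel image in [0, 1] with spots darker than the background and hypothetical parameter values:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_spots(gel, threshold=0.5):
    """Label individual dark spots in a 2-DE gel image, with no prior
    on spot shape or size."""
    mask = gel < threshold                           # rough foreground mask
    distance = ndi.distance_transform_edt(mask)      # distance to background
    coords = peak_local_max(distance, labels=mask, min_distance=5)
    markers = np.zeros(gel.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)  # one label per spot
```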

Relevance:

70.00%

Publisher:

Abstract:

Dysfunction of the Autonomic Nervous System (ANS) is a typical feature of chronic heart failure and other cardiovascular diseases. As a simple non-invasive technology, heart rate variability (HRV) analysis provides reliable information on the autonomic modulation of heart rate. The aim of this thesis was to research and develop automatic methods based on ANS assessment for the evaluation of risk in cardiac patients. Several feature selection and machine learning algorithms were combined to achieve this goal. Automatic assessment of disease severity in Congestive Heart Failure (CHF) patients: a completely automatic method based on long-term HRV was proposed to assess the severity of CHF, achieving a sensitivity of 93% and a specificity of 64% in discriminating severe versus mild patients. Automatic identification of hypertensive patients at high risk of vascular events: a completely automatic system was proposed to identify hypertensive patients at higher risk of developing vascular events in the 12 months following the electrocardiographic recordings, achieving a sensitivity of 71% and a specificity of 86% in identifying high-risk subjects among hypertensive patients. Automatic identification of hypertensive patients with a history of falls: it was explored whether an automatic identification of fallers among hypertensive patients based on HRV was feasible. The results obtained in this thesis could have implications both in clinical practice and in clinical research. The system was designed and developed to be clinically feasible. Moreover, since a 5-minute ECG recording is inexpensive, easy to acquire, and non-invasive, future research will focus on the clinical applicability of the system as a screening tool in non-specialized ambulatory settings, in order to identify high-risk patients to be shortlisted for more complex investigations.
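As a schematic of this kind of pipeline only (the concrete features and classifiers used in the thesis are not detailed in this abstract), a sketch with a few standard time-domain HRV features and a generic scikit-learn classifier:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hrv_features(rr_ms):
    """A few standard time-domain HRV features from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return [rr.mean(),                       # mean RR interval
            rr.std(ddof=1),                  # SDNN
            np.sqrt(np.mean(diffs ** 2)),    # RMSSD
            np.mean(np.abs(diffs) > 50)]     # pNN50

# X: one hrv_features() row per patient; y: labels (e.g. severe vs. mild CHF)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# sensitivity and specificity would then be estimated by cross-validation
```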

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a firsthand comparative evaluation of three existing methods for selecting a suitable allograft from a bone storage bank. The three examined methods are manual selection, automatic volume-based registration, and automatic surface-based registration. Although the methods were originally published for different bones, they were adapted so that they could be systematically applied to the same data set of hemipelvises. A thorough experiment was designed and carried out to highlight the advantages and disadvantages of each method. The methods were applied to the whole pelvis and to smaller fragments, producing a realistic set of clinical scenarios. Clinically relevant criteria, such as surface distances and the quality of the junctions between donor and recipient, were used for the assessment. The results showed that both automatic methods outperform their manual counterpart. Additional advantages of the surface-based method are its lower computational time requirements and the greater contact surfaces where the donor meets the recipient.
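To make the surface-distance criterion concrete, a minimal sketch of one plausible formulation (not necessarily the paper's exact metric), assuming point clouds already registered and sampled from the donor and recipient surfaces:

```python
from scipy.spatial import cKDTree

def mean_surface_distance(donor_pts, recipient_pts):
    """Mean nearest-neighbour distance from donor surface points to the
    recipient surface; lower values indicate a better-fitting allograft."""
    tree = cKDTree(recipient_pts)     # index the recipient surface
    dists, _ = tree.query(donor_pts)  # closest recipient point per donor point
    return dists.mean()
```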

Relevance:

60.00%

Publisher:

Abstract:

Building and maintaining software are not easy tasks. However, thanks to advances in web technologies, a new paradigm is emerging in software development. Service Oriented Architecture (SOA) is a relatively new approach that helps bridge the gap between business and IT and also helps systems remain flexible. However, there are still several challenges with SOA. As the number of available services grows, developers are faced with the problem of discovering the services they need. Public service repositories such as Programmable Web provide only limited search capabilities. Several mechanisms have been proposed to improve web service discovery by using semantics. However, most of these require manually tagging the services with concepts in an ontology. Adding semantic annotations is a non-trivial process that requires a certain skill set from the annotator, as well as the availability of domain ontologies covering the concepts related to the topics of the service. These issues have prevented such mechanisms from becoming widespread. This thesis focuses on two main problems. First, to avoid the overhead of manually adding semantics to web services, several automatic methods to include semantics in the discovery process are explored. Although experimentation with some of these strategies has been conducted in the past, the results reported in the literature are mixed. Second, Wikipedia is explored as a general-purpose ontology. The benefit of using it as an ontology is assessed by comparing these semantics-based methods to classic term-based information retrieval approaches. The contribution of this research is significant because, to the best of our knowledge, no comprehensive analysis exists of the impact of using Wikipedia as a source of semantics in web service discovery. The main output of this research is a web service discovery engine that implements these methods, together with a comprehensive analysis of the benefits and trade-offs of these semantics-based discovery approaches.
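For reference, the classic term-based baseline that the semantics-based methods are compared against can be sketched as TF-IDF retrieval; this is a generic illustration, not the thesis's discovery engine:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_services(descriptions, query):
    """Rank service descriptions by TF-IDF cosine similarity to a query,
    returning indices from most to least relevant."""
    vec = TfidfVectorizer(stop_words="english")
    doc_matrix = vec.fit_transform(descriptions)
    scores = cosine_similarity(vec.transform([query]), doc_matrix).ravel()
    return scores.argsort()[::-1]
```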

Relevance:

60.00%

Publisher:

Abstract:

Acoustic sensing is a promising approach to scaling faunal biodiversity monitoring. Scaling the analysis of audio collected by acoustic sensors is a big-data problem. Standard approaches for dealing with big acoustic data include automated recognition and crowd-based analysis. Automatic methods are fast at processing but hard to design rigorously, whilst manual methods are accurate but slow. In particular, manual methods of acoustic data analysis are constrained by a 1:1 time relationship between the data and its analysts: the inherent need to listen to the audio. This paper demonstrates how the efficiency of crowdsourced sound analysis can be increased by an order of magnitude through visual inspection of the audio rendered as spectrograms. Experimental data suggest that an analysis speedup of 12× is obtainable for suitable types of acoustic analysis when only spectrograms are shown.
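For instance, a spectrogram for visual inspection can be produced in a few lines; a generic sketch assuming a hypothetical mono WAV recording, not the paper's pipeline:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("sensor_recording.wav")   # hypothetical mono recording
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))   # power in dB
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.savefig("spectrogram.png")
```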

Relevance:

60.00%

Publisher:

Abstract:

As technological capabilities for capturing, aggregating, and processing large quantities of data continue to improve, the question becomes how to effectively utilise these resources. Whenever automatic methods fail, it is necessary to rely on human background knowledge, intuition, and deliberation. This creates demand for data exploration interfaces that support the analytical process, allowing users to absorb and derive knowledge from data. Such interfaces have historically been designed for experts. However, existing research has shown promise in involving a broader range of users who act as citizen scientists, which places high demands on usability. Visualisation is one of the most effective analytical tools for humans to process abstract information. Our research focuses on the development of interfaces to support collaborative, community-led inquiry into data, which we refer to as Participatory Data Analytics. The development of data exploration interfaces to support independent investigations by local communities around topics of their interest presents a unique set of challenges, which we discuss in this paper. We present our preliminary work towards suitable high-level abstractions and interaction concepts that allow users to construct and tailor visualisations to their own needs.

Relevance:

60.00%

Publisher:

Abstract:

Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, all of these patterns can be detected in a binary matrix efficiently, that is, in time polynomial in the size of the matrix.

Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0 to 1 or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix with the perfect pattern, which means that a polynomial-time algorithm is unlikely to exist. To find patterns in noisy datasets, we need methods that are noise-tolerant and run in practical time on large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.

In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or merely due to random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
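As a toy illustration of bringing out a pattern by permutation (a simple degree-sorting heuristic and a naive flip count, not the algorithms developed in the thesis):

```python
import numpy as np

def sort_by_degree(M):
    """Order rows and columns by decreasing sums, a simple heuristic for
    bringing out an (approximately) nested structure."""
    M = M[np.argsort(-M.sum(axis=1))]
    return M[:, np.argsort(-M.sum(axis=0))]

def flips_to_staircase(M):
    """Naive upper bound on the flip distance to a perfectly nested matrix:
    compare each row with its ones pushed into a prefix."""
    ideal = np.zeros_like(M)
    for i, k in enumerate(M.sum(axis=1).astype(int)):
        ideal[i, :k] = 1
    return int(np.sum(M != ideal))

M = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1]])
print(flips_to_staircase(sort_by_degree(M)))   # 0: this matrix is nestable
```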

Relevance:

60.00%

Publisher:

Abstract:

R. Jensen and Q. Shen, 'Fuzzy-Rough Attribute Reduction with Application to Web Categorization,' Fuzzy Sets and Systems, vol. 141, no. 3, pp. 469-485, 2004.

Relevance:

60.00%

Publisher:

Abstract:

R. Jensen and Q. Shen, 'Webpage Classification with ACO-enhanced Fuzzy-Rough Feature Selection,' Proceedings of the Fifth International Conference on Rough Sets and Current Trends in Computing (RSCTC 2006), LNAI 4259, pp. 147-156, 2006.

Relevance:

60.00%

Publisher:

Abstract:

This work concerns selected methods of acquiring, i.e. excerpting, lexical information from electronic text collections. Its aim is, first, to formulate new, original methods that can be useful in gathering material for lexical analyses, and then to test them on a selected collection of texts. The intention was to develop methods that require no advanced knowledge of computer programming yet still produce valuable results, where the value of a method is judged by its excerption yield. Three formulated methods were refined and optimized. The method for excerpting new units yielded over 1,000 new, previously unregistered words; the acronym-based collocation excerption method yields over 6,000 units; and the collocation excerption method based on plural endings yielded over 110,000 extracted units.
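As a toy illustration of the excerption idea (with hypothetical inputs, not the dissertation's actual methods), listing corpus word forms absent from a reference lexicon:

```python
import re
from collections import Counter

def candidate_new_words(corpus_text, known_lexicon):
    """Rank corpus word forms absent from a reference lexicon by
    frequency, as a first pass for excerpting new lexical units."""
    tokens = re.findall(r"\w+", corpus_text.lower())
    unknown = Counter(t for t in tokens if t not in known_lexicon)
    return unknown.most_common()

print(candidate_new_words("selfie culture and the selfie stick",
                          {"culture", "and", "the", "stick"}))
# [('selfie', 2)]
```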

Relevance:

60.00%

Publisher:

Abstract:

Project work submitted to obtain the Master's degree in Civil Engineering, in the specialization area of Structures.

Relevance:

60.00%

Publisher:

Abstract:

Contamination of the electroencephalogram (EEG) by artifacts greatly reduces the quality of the recorded signals, so there is a need for automated artifact removal methods. However, such methods are rarely evaluated against one another via rigorous criteria, with results often presented on the basis of visual inspection alone. This work presents a comparative study of automatic methods for removing blink, electrocardiographic, and electromyographic artifacts from the EEG. Three methods are considered: wavelet-based, blind source separation (BSS)-based, and multivariate singular spectrum analysis (MSSA)-based correction. These are applied to data sets containing mixtures of artifacts, and metrics are devised to measure the performance of each method. The BSS method is seen to be the best approach for artifacts with a high signal-to-noise ratio (SNR). By contrast, MSSA performs well at low SNRs, but at the expense of a large number of false-positive corrections.
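As a schematic of the BSS approach only (a generic ICA sketch, assuming the artifact components have already been identified, which is itself a non-trivial step):

```python
from sklearn.decomposition import FastICA

def remove_artifact_components(eeg, bad_components):
    """Unmix multichannel EEG with ICA, zero the artifact components,
    and project back to sensor space. eeg: (n_samples, n_channels)."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)        # estimated source signals
    sources[:, bad_components] = 0.0        # drop blink/ECG/EMG components
    return ica.inverse_transform(sources)   # corrected EEG
```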

Relevance:

60.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)