917 results for Pattern-recognition Methods
Abstract:
Deformable template models are first applied to track the inner wall of coronary arteries in intravascular ultrasound sequences, mainly to assist angioplasty surgery. A circular template is used to initialize an elliptical deformable model that tracks wall deformation as a balloon placed at the tip of the catheter is inflated. We define a new energy function to drive the behavior of the template and test its robustness on both real and synthetic images. Finally, we introduce a framework for learning and recognizing spatio-temporal geometric constraints based on Principal Component Analysis (eigenconstraints).
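A minimal sketch of the PCA-based "eigenconstraints" idea follows: learn a low-dimensional subspace from training sequences of shape parameters, then score a new sequence by its reconstruction error outside that subspace. The 5-parameter ellipse encoding, function names, and toy data are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: PCA over flattened spatio-temporal parameter trajectories.
# A large reconstruction error means the trajectory violates the learned
# constraints. The ellipse encoding (cx, cy, a, b, theta) is an assumption.
import numpy as np

def learn_eigenconstraints(trajectories, n_components=3):
    """trajectories: (n_sequences, T * 5) rows, each a flattened sequence of
    ellipse parameters over T frames."""
    mean = trajectories.mean(axis=0)
    centered = trajectories - mean
    # SVD of the centered data: rows of vt are the principal directions
    # ("eigenconstraints") of the training trajectories.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def constraint_violation(trajectory, mean, basis):
    """Reconstruction error of one flattened trajectory against the subspace."""
    centered = trajectory - mean
    projection = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - projection)

# Toy usage: 20 training sequences of 10 frames, then one unseen sequence.
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 10 * 5))
mean, basis = learn_eigenconstraints(train)
print(constraint_violation(rng.normal(size=10 * 5), mean, basis))
```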
Abstract:
In this paper, we propose two Bayesian methods for detecting and grouping junctions. Our junction detection method evolves from the Kona approach and is based on a competitive greedy procedure inspired by the region competition method. Junction grouping is then accomplished by finding connecting paths between pairs of junctions. Path searching is performed with a recently proposed Bayesian A* algorithm. Both methods are efficient and robust, and they are tested on synthetic and real images.
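The sketch below shows the generic A* machinery one could use for such path searching between two junctions on an image grid: an admissible Euclidean heuristic plus a per-pixel cost that, in a Bayesian setting, would be a negative log likelihood. The uniform cost map here is a stand-in, not the paper's model.

```python
# Hedged sketch of A* path search between two junctions on a pixel grid.
# cost[y, x] plays the role of a -log likelihood (lower = more contour-like).
import heapq
import math
import numpy as np

def astar(cost, start, goal):
    h = lambda p: math.dist(p, goal)           # admissible Euclidean heuristic
    open_set = [(h(start), 0.0, start, None)]  # (f, g, node, parent)
    came_from, g_best = {}, {start: 0.0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                       # walk parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        y, x = node
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < cost.shape[0] and 0 <= nx < cost.shape[1]:
                ng = g + cost[ny, nx]
                if ng < g_best.get((ny, nx), math.inf):
                    g_best[(ny, nx)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((ny, nx)), ng, (ny, nx), node))
    return None

print(astar(np.ones((8, 8)), (0, 0), (7, 7)))
```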
Abstract:
Human behaviour recognition has been, and remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition communities have made significant efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early, using only a small portion of the input, in addition to predicting behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of a person's trajectory in the scene, which allows a high-level understanding of the global human behaviour. The trajectory representation is used as a descriptor of the individual's activity, and the descriptors feed a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence; partial sequences are then processed to evaluate the early-prediction capability for a given observation time of the scene. The experiments were carried out on three different datasets of the CAVIAR database, considering the behaviour of an individual. Additionally, several classic classifiers were used in the experiments to evaluate the robustness of the proposal. Results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
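The trajectory-as-descriptor pipeline can be illustrated as below: resample each (x, y) track to a fixed length, flatten it into a feature vector, train a classic classifier on full sequences, and query it with a partial observation to simulate early prediction. The toy data, classifier choice, and resampling length are assumptions, not the paper's exact setup.

```python
# Illustrative sketch: fixed-length trajectory descriptor + classic classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def descriptor(track, n=16):
    """Resample a (T, 2) trajectory to n points and flatten to length 2n."""
    track = np.asarray(track, dtype=float)
    t_old = np.linspace(0, 1, len(track))
    t_new = np.linspace(0, 1, n)
    return np.concatenate([np.interp(t_new, t_old, track[:, d]) for d in (0, 1)])

rng = np.random.default_rng(1)
# Toy behaviours: class 0 walks right, class 1 walks diagonally.
tracks = [np.cumsum(rng.normal([1, 0], 0.2, (50, 2)), axis=0) for _ in range(20)] + \
         [np.cumsum(rng.normal([1, 1], 0.2, (50, 2)), axis=0) for _ in range(20)]
labels = [0] * 20 + [1] * 20

clf = KNeighborsClassifier(3).fit([descriptor(t) for t in tracks], labels)
# Early prediction: classify from only the first 30% of a new trajectory.
partial = np.cumsum(rng.normal([1, 1], 0.2, (15, 2)), axis=0)
print(clf.predict([descriptor(partial)]))
```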
Abstract:
"Supported in part by Contract AT(11-1) 1018 with the U.S. Atomic Energy Commission and the Advanced Research Projects Agency."
Abstract:
The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adapted to train multilayer perceptron (MLP) and mixture of experts (ME) networks for multiclass classification. We identify some situations where applying the EM algorithm to train MLP networks may be of limited value and discuss ways of handling the difficulties. For ME networks, it has been reported in the literature that networks trained by the EM algorithm, using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step, often perform poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood is monotonically increasing when a learning rate smaller than one is adopted. We also propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks; its performance is demonstrated to be superior to the IRLS algorithm on some simulated and real data sets.
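A minimal sketch of the damped IRLS update discussed above, on a plain logistic-regression problem standing in for the inner loop of the M-step: a learning rate alpha < 1 scales each IRLS/Newton step, and the printed log likelihood lets one check the monotonic-increase claim. The data and constants are synthetic placeholders.

```python
# Hedged sketch: IRLS for logistic regression with a damped step (alpha < 1).
import numpy as np

def irls_logistic(X, y, alpha=0.5, iters=10):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # current probabilities
        ll = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        print(f"log likelihood: {ll:.3f}")         # should rise monotonically
        W = p * (1.0 - p)                          # IRLS weights (diagonal)
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        w += alpha * np.linalg.solve(hess, grad)   # damped IRLS/Newton step
    return w

rng = np.random.default_rng(2)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])
y = (X @ np.array([0.5, 2.0, -1.0]) + rng.normal(size=200) > 0).astype(float)
irls_logistic(X, y)
```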
Abstract:
The prediction of regulatory elements is a problem where computational methods offer great hope. Over the past few years, numerous tools have become available for this task. The purpose of the current assessment is twofold: to provide some guidance to users regarding the accuracy of currently available tools in various settings, and to provide a benchmark of data sets for assessing future tools.
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking the solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that a unique solution cannot be guaranteed, no matter how many images are taken of the scene, how they are oriented, or how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions), demonstrating the need for constraints to reduce the solution space to a reasonable size. Finally, because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool for numerically evaluating the usefulness of any added constraints.
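The combinatorial explosion described above can be made concrete with a toy counter: enumerate every occupancy assignment of a tiny binary voxel grid and keep those consistent with two orthographic silhouette "images" (row and column projections). This brute-force sketch is an illustrative stand-in for the paper's first-order-logic formulation, not a reimplementation of it.

```python
# Toy solution-space counter for a tiny n x n binary voxel grid observed
# through two orthographic silhouettes (row and column projections).
import itertools
import numpy as np

def count_solutions(row_sil, col_sil, n=3):
    """Silhouettes are booleans: True where at least one voxel must project."""
    count = 0
    for bits in itertools.product([0, 1], repeat=n * n):
        grid = np.array(bits).reshape(n, n)
        if (np.all(grid.any(axis=1) == row_sil)
                and np.all(grid.any(axis=0) == col_sil)):
            count += 1
    return count

row = np.array([True, True, False])
col = np.array([True, False, True])
print(count_solutions(row, col))  # many grids satisfy the same two silhouettes
```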
Abstract:
Machine learning techniques have been recognized as powerful tools for learning from data. One of the most popular, the Back-Propagation (BP) Artificial Neural Network, can be used as a computer model to predict peptides binding to the Human Leukocyte Antigens (HLA). The major advantage of computational screening is that it reduces the number of wet-lab experiments that need to be performed, significantly reducing cost and time. A recently developed method, the Extreme Learning Machine (ELM), which has properties superior to BP, has been investigated for such tasks. In our work, we found that the ELM is as good as, if not better than, BP in terms of time complexity, accuracy deviation across experiments, and, most importantly, prevention of over-fitting in the prediction of peptide binding to HLA.
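A hedged sketch of the basic ELM idea follows: hidden-layer weights are drawn at random and frozen, and only the output weights are solved in closed form with a pseudo-inverse, which is why training is fast compared with back-propagation. The data and layer sizes are synthetic placeholders, not the HLA peptide data.

```python
# Minimal Extreme Learning Machine: random frozen hidden layer,
# least-squares output layer via pseudo-inverse.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        # Closed-form least squares for the output layer only.
        self.beta = np.linalg.pinv(self._hidden(X)) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(float)
model = ELM().fit(X[:150], y[:150])
print(np.mean((model.predict(X[150:]) > 0.5) == y[150:]))  # held-out accuracy
```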
Abstract:
Aim: Polysomnography (PSG) is the current standard protocol for investigating sleep disordered breathing (SDB) in children. Presently, there are few reliable screening tests for both central (CE) and obstructive (OE) respiratory events. This study compared three indices derived from pulse oximetry and the electrocardiogram (ECG) against the PSG gold standard: heart rate (HR) variability, arterial blood oxygen desaturation (SaO2) and pulse transit time (PTT). Methods: 15 children (12 male, aged 3-14 years) were recruited from routine PSG studies. The characteristics of the three indices were based on known criteria for respiratory events (RPE). Their estimation, singly and in combination, was evaluated against simultaneously scored PSG recordings. Results: 215 RPE and 215 tidal breathing events were analysed. For OE, the obtained sensitivity was HR (0.703), SaO2 (0.047), PTT (0.750), all three indices combined (0) and either of the indices (0.828), while specificity was (0.891), (0.938), (0.922), (0.953) and (0.859) respectively. For CE, the sensitivity was HR (0.715), SaO2 (0.278), PTT (0.662), all indices combined (0.040) and either of the indices (0.868), while specificity was (0.815), (0.954), (0.901), (0.960) and (0.762) respectively. Conclusions: These preliminary findings suggest that the latter combination of these non-invasive indices is a promising screening method for SDB in children.
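The index-combination logic can be sketched as below: the "all three" rule ANDs the per-index flags, the "either" rule ORs them, and sensitivity and specificity are computed from the confusion counts against the gold standard. The flag arrays here are fabricated placeholders with roughly the reported per-index rates, not study data.

```python
# Hedged sketch of combining per-index event flags and scoring them
# against a gold-standard labelling.
import numpy as np

def sens_spec(pred, truth):
    tp = np.sum(pred & truth)          # events correctly flagged
    tn = np.sum(~pred & ~truth)        # non-events correctly passed
    return tp / truth.sum(), tn / (~truth).sum()

def noisy_flag(truth, sens, fpr, rng):
    # Simulated detector: fires at rate `sens` on events and `fpr` otherwise.
    return np.where(truth, rng.random(truth.size) < sens,
                    rng.random(truth.size) < fpr)

rng = np.random.default_rng(4)
truth = np.arange(430) < 215           # 215 events and 215 tidal breaths
hr   = noisy_flag(truth, 0.70, 0.11, rng)
sao2 = noisy_flag(truth, 0.05, 0.06, rng)
ptt  = noisy_flag(truth, 0.75, 0.08, rng)

for name, flag in [("all three (AND)", hr & sao2 & ptt),
                   ("either (OR)", hr | sao2 | ptt)]:
    se, sp = sens_spec(flag, truth)
    print(f"{name}: sensitivity={se:.3f} specificity={sp:.3f}")
```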
Abstract:
Seven years of multi-environment yield trials of navy bean (Phaseolus vulgaris L.) grown in Queensland were examined. As is common with plant breeding evaluation trials, test entries and locations varied between years. Grain yield data were analysed for each year using cluster and ordination analyses (pattern analyses). These methods facilitate description of genotype performance across environments and of the discrimination among genotypes provided by the environments. The observed trends in genotypic yield performance across environments were partly consistent with agronomic and disease reactions at specific environments and partly explainable by breeding and selection history. In some cases, similarities in discrimination among environments were related to geographic proximity, in others to management practices, and in still others similarities occurred between geographically widely separated environments that differed in management practices. One location was identified as having atypical line discrimination. The analysis indicated that the number of test locations was below that required for adequate representation of line x environment interaction. The pattern analysis methods used were an effective aid in describing the patterns in the data for each year and illustrated the variation in adaptive patterns from year to year. The study has implications for assessing the number and location of test sites for plant breeding multi-environment trials, and for understanding the genetic traits contributing to line x environment interactions.
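The cluster-analysis side of pattern analysis can be sketched as below: hierarchical clustering of genotypes on their yield profiles across environments, so that lines with similar adaptation group together. The yield matrix is fabricated; a real analysis would use the year's genotype x environment table.

```python
# Hedged sketch: group genotypes by yield profile across environments.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(5)
# Rows: 8 genotypes; columns: yields at 5 environments (values fabricated).
yields = np.vstack([rng.normal(2.0, 0.2, (4, 5)),                      # stable lines
                    rng.normal([1.5, 2.5, 1.5, 2.5, 1.5], 0.2, (4, 5))])  # interactive lines
tree = linkage(yields, method="average")          # agglomerative clustering
print(fcluster(tree, t=2, criterion="maxclust"))  # two genotype groups
```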
Abstract:
In this paper, we present a new scheme for off-line recognition of multi-font numerals using the Takagi-Sugeno (TS) model. In this scheme, the binary image of a character is partitioned into a fixed number of sub-images called boxes. The features consist of normalized vector distances (gamma) computed from each box. Each feature extracted from the different fonts gives rise to a fuzzy set. However, when only a small number of fonts is available, as in the case of multi-font numerals, the choice of a proper fuzzification function is crucial. Hence, we have devised a new fuzzification function whose parameters take into account the variations in the fuzzy sets. The new fuzzification function is employed in the TS model for the recognition of multi-font numerals.
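An illustrative sketch of the fuzzification step follows: each box feature observed across the available fonts defines a fuzzy set, and a Gaussian membership whose spread grows with the fonts' variation stands in for the paper's custom fuzzification function; a zero-order Takagi-Sugeno combination then scores a candidate digit. Every constant and name here is an assumption.

```python
# Hedged sketch: per-box fuzzy memberships from few font samples, combined
# in a zero-order Takagi-Sugeno scheme.
import numpy as np

def membership(x, samples, eps=1e-3):
    # Spread tied to the variation of the (few) font samples for this box.
    mu, sigma = samples.mean(), samples.std() + eps
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def ts_score(features, font_features):
    """features: (n_boxes,) from the unknown image;
    font_features: (n_fonts, n_boxes) for one digit class."""
    degrees = np.array([membership(x, font_features[:, i])
                        for i, x in enumerate(features)])
    # Zero-order TS: firing strengths weight constant consequents (all 1).
    return degrees.sum() / len(degrees)

rng = np.random.default_rng(6)
digit_fonts = rng.normal(0.5, 0.05, (4, 16))  # 4 fonts, 16 boxes per digit
unknown = rng.normal(0.5, 0.05, 16)
print(ts_score(unknown, digit_fonts))          # high score = likely this digit
```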