935 results for HTS - Hough Transform Statistics
Abstract:
Background: The pupillary light reflex characterizes the direct and consensual response of the eye to the perceived brightness of a stimulus. It has been used as an indicator of both neurological and optic nerve pathologies. As with other eye reflexes, this reflex constitutes an almost instantaneous movement and is linked to activation of the same midbrain area. The latency of the pupillary light reflex is around 200 ms, although the literature also indicates that the fastest eye reflexes last 20 ms. Therefore, a system with sufficiently high spatial and temporal resolutions is required for accurate assessment. In this study, we analyzed the pupillary light reflex to determine whether any small discrepancy exists between the direct and consensual responses, and to ascertain whether any other eye reflex occurs before the pupillary light reflex. Methods: We constructed a binocular video-oculography system with two high-speed cameras that simultaneously focused on both eyes. This was then employed to assess the direct and consensual responses of each eye using our own algorithm, based on the Circular Hough Transform, to detect and track the pupil. Time parameters describing the pupillary light reflex were obtained from the radius time-variation. Eight healthy subjects (4 women, 4 men, aged 24–45) participated in this experiment. Results: Our system, which has a resolution of 15 microns and 4 ms, obtained time parameters describing the pupillary light reflex that were similar to those reported in previous studies, with no significant differences between direct and consensual reflexes. Moreover, it revealed an incomplete reflex blink and an upward eye movement at around 100 ms that may correspond to Bell’s phenomenon. Conclusions: Direct and consensual pupillary responses do not show any significant temporal differences. The system and method described here could prove useful for further assessment of pupillary and blink reflexes. The resolution obtained made it possible to reveal the early incomplete blink and upward eye movement reported here.
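As an illustration of the pupil-tracking step, the sketch below applies a circular Hough transform to a single grey-scale eye frame using OpenCV's HoughCircles; it is a minimal stand-in for the authors' own algorithm, and the radius bounds and accumulator thresholds are illustrative placeholders.

```python
# Minimal sketch of circular-Hough-based pupil detection on one video frame.
# Not the authors' implementation; parameter values are illustrative only.
import cv2
import numpy as np

def detect_pupil(frame_gray, min_radius=10, max_radius=60):
    """Return (x, y, r) of the most prominent circular candidate, or None."""
    blurred = cv2.medianBlur(frame_gray, 5)   # suppress eyelash/speckle noise
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,   # gradient-based circular Hough accumulator
        dp=1,                 # accumulator resolution equal to image resolution
        minDist=100,          # expect a single pupil per eye image
        param1=80,            # Canny high threshold
        param2=25,            # accumulator vote threshold
        minRadius=min_radius,
        maxRadius=max_radius,
    )
    if circles is None:
        return None
    x, y, r = circles[0, 0]   # strongest candidate
    return float(x), float(y), float(r)

# Tracking the returned radius frame by frame yields the radius-vs-time curve from
# which latency and constriction parameters of the pupillary light reflex are read off.
```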
Abstract:
PEDROSA, Diogo P. F.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Uma Proposta de SLAM com Determinação de Informações Geométricas do Ambiente. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 16., 2006, Salvador, BA. Anais... Salvador: CBA, 2006. v. 1, p. 1704-1709.
Abstract:
SANTANA, André M.; SANTIAGO, Gutemberg S.; MEDEIROS, Adelardo A. D. Real-Time Visual SLAM Using Pre-Existing Floor Lines as Landmarks and a Single Camera. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... Juiz de Fora: CBA, 2008.
Abstract:
SANTANA, André M.; SOUZA, Anderson A. S.; BRITTO, Ricardo S.; ALSINA, Pablo J.; MEDEIROS, Adelardo A. D. Localization of a mobile robot based on odometry and natural landmarks using extended Kalman Filter. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.
Abstract:
At the HL-LHC, proton bunches will cross each other every 25 ns, producing an average of 140 pp collisions per bunch crossing. To operate in such an environment, the CMS experiment will need an L1 hardware trigger able to identify interesting events within a latency of 12.5 μs. The future L1 trigger will also make use of data coming from the silicon tracker to control the trigger rate. The architecture that will be used in the future to process tracker data is still under discussion. One interesting proposal makes use of the Time Multiplexed Trigger concept, already implemented in the CMS calorimeter trigger for the Phase I trigger upgrade. The proposed track finding algorithm is based on the Hough Transform method. The algorithm has been tested using simulated pp-collision data, and the results show a very good tracking efficiency. The algorithm will be demonstrated in hardware in the coming months using the MP7, a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s.
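The abstract does not give the parametrisation used, but a Hough-transform track finder of this kind can be sketched generically: each tracker stub votes for every set of track parameters it is compatible with, and accumulator cells with enough votes become track candidates. The (phi0, curvature) parametrisation, bin counts and vote threshold below are assumptions for illustration, not the CMS configuration.

```python
# Illustrative Hough-transform track finder: a stub at radius r and azimuth phi is
# compatible with parameters (phi0, c) satisfying phi ≈ phi0 + c * r (first-order
# approximation, c proportional to q/pT), so it votes along a line in the accumulator.
import numpy as np

def hough_track_candidates(stubs, n_phi0=64, n_c=32,
                           phi0_range=(-0.2, 0.2), c_range=(-0.01, 0.01),
                           min_votes=4):
    """stubs: iterable of (r, phi). Returns accumulator cells above the vote threshold."""
    acc = np.zeros((n_phi0, n_c), dtype=int)
    c_bins = np.linspace(c_range[0], c_range[1], n_c)
    for r, phi in stubs:
        phi0 = phi - c_bins * r                            # compatible phi0 for each curvature bin
        rows = np.round((phi0 - phi0_range[0]) /
                        (phi0_range[1] - phi0_range[0]) * (n_phi0 - 1)).astype(int)
        ok = (rows >= 0) & (rows < n_phi0)
        acc[rows[ok], np.arange(n_c)[ok]] += 1             # one vote per curvature column
    return np.argwhere(acc >= min_votes)                    # candidate (phi0, curvature) cells
```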
Abstract:
Tissue-specific extracellular matrix (ECM) is known to be an ideal bioscaffold to inspire the future of regenerative medicine. It holds the secret of how nature has developed such an organization of molecules into a unique functional complexity. This work exploited an innovative image processing algorithm and high-resolution microscopy, associated with mechanical analysis, to establish a correlation between the gradient organization of cartilaginous ECM and its anisotropic biomechanical response. This was hypothesized to be a reliable determinant that can elucidate how microarchitecture interrelates with biomechanical properties. The Hough-Radon transform of ECM cross-section images revealed its conformational variation from the tangential interface down to the subchondral region. As the orientation varied layer by layer, the anisotropic mechanical response varied correspondingly. Although the results were in good agreement (Kendall's tau-b > 90%), there was evidence suggesting that the alignment of the fibrous network, specifically in the middle zone, is not as random as previously thought.
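As a rough illustration of orientation analysis of this kind, the sketch below estimates the dominant fibre orientation of a single ECM layer with scikit-image's Radon transform; the per-layer slicing and the variance criterion are illustrative assumptions, not the authors' Hough-Radon pipeline.

```python
# Sketch of estimating the dominant fibre orientation of one image layer via the
# Radon transform (scikit-image). Assumes the layer has already been cropped out.
import numpy as np
from skimage.transform import radon

def dominant_orientation(layer_img):
    """Projection angle (degrees) at which the Radon sinogram has maximal variance."""
    theta = np.arange(0.0, 180.0)                 # probe orientations at 1-degree steps
    sinogram = radon(layer_img, theta=theta, circle=False)
    # Aligned fibres give a high-contrast projection when the rays run along the fibre
    # axis, so the variance over projection offsets peaks near the dominant orientation
    # (up to the library's angle convention).
    return theta[np.argmax(sinogram.var(axis=0))]
```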
Abstract:
For certain observation types, such as those that are remotely sensed, the observation errors are correlated and these correlations are state- and time-dependent. In this work, we develop a method for diagnosing and incorporating spatially correlated and time-dependent observation error in an ensemble data assimilation system. The method combines an ensemble transform Kalman filter with a diagnostic that uses statistical averages of background and analysis innovations to provide an estimate of the observation error covariance matrix. To evaluate the performance of the method, we perform identical twin experiments using the Lorenz ’96 and Kuramoto-Sivashinsky models. Using our approach, a good approximation to the true observation error covariance can be recovered in cases where the initial estimate of the error covariance is incorrect. Spatial observation error covariances where the length scale of the true covariance changes slowly in time can also be captured. We find that using the estimated correlated observation error in the assimilation improves the analysis.
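The abstract does not spell the estimator out; a widely used diagnostic of this kind (the Desroziers relation) recovers R from statistical averages of background and analysis innovations, and the sketch below assumes that form. The innovation samples would come from a cycling ensemble transform Kalman filter run.

```python
# Desroziers-style estimate of the observation-error covariance R from background
# innovations d_b = y - H(x_b) and analysis innovations d_a = y - H(x_a). That this
# particular diagnostic is the one used is an assumption for illustration.
import numpy as np

def estimate_R(d_background, d_analysis):
    """d_background, d_analysis: (n_cycles, n_obs) arrays; returns an (n_obs, n_obs) estimate of R."""
    db = d_background - d_background.mean(axis=0)
    da = d_analysis - d_analysis.mean(axis=0)
    R_est = da.T @ db / db.shape[0]        # E[d_a d_b^T] ≈ R when the gain is consistent
    return 0.5 * (R_est + R_est.T)         # symmetrise the finite-sample estimate
```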
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
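A minimal sketch of the plain Lanczos approximation to f(A)b with f(t) = t^(-α/2) is given below; it uses full reorthogonalisation for clarity and is not the thesis' implementation (which also studies shift-and-invert, extended Krylov, rational and restarted variants).

```python
# Plain Lanczos approximation of A^(-alpha/2) b for sparse SPD A, accessed only via
# matrix-vector products. Full reorthogonalisation is used for clarity over cost.
import numpy as np

def lanczos_matfunc(A_matvec, b, m=50, alpha=1.0):
    """Approximate A^(-alpha/2) b from an m-dimensional Krylov subspace."""
    n = b.shape[0]
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    k = m
    for j in range(m):
        w = A_matvec(V[:, j])
        T[j, j] = V[:, j] @ w
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalisation
        if j + 1 == m:
            break
        off = np.linalg.norm(w)
        if off == 0.0:                                # lucky breakdown: subspace exhausted
            k = j + 1
            break
        T[j + 1, j] = T[j, j + 1] = off
        V[:, j + 1] = w / off
    Tk, Vk = T[:k, :k], V[:, :k]
    # f(A) b ≈ beta * V_k f(T_k) e_1, with f applied via the eigendecomposition of T_k
    evals, evecs = np.linalg.eigh(Tk)
    f_T_e1 = evecs @ (evals ** (-alpha / 2.0) * evecs[0, :])
    return beta * (Vk @ f_T_e1)

# For GMRF sampling, call with b = z (i.i.d. standard normals) and alpha = 1,
# giving a sample x ≈ A^(-1/2) z without a Cholesky factorisation.
```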
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
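As a small illustration of the front end of such a codec, the sketch below performs a fixed-depth 2D wavelet packet decomposition with PyWavelets and collects per-subband statistics that a bit-allocation or quantiser-selection stage could use; the wavelet and depth are illustrative choices, not the fixed structure designed in the thesis.

```python
# Front end of a wavelet-packet fingerprint codec: fixed-depth decomposition plus
# per-subband statistics. Wavelet ("bior4.4") and depth are illustrative assumptions.
import numpy as np
import pywt

def subband_statistics(image, wavelet="bior4.4", level=3):
    """Decompose `image` and return {subband path: (energy, standard deviation)}."""
    wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet,
                              mode="periodization", maxlevel=level)
    stats = {}
    for node in wp.get_level(level, order="natural"):
        coeffs = node.data
        stats[node.path] = (float(np.sum(coeffs ** 2)), float(np.std(coeffs)))
    return stats

# A bit-allocation step would then assign more bits to subbands with larger energy or
# variance, and a quantiser (e.g. a lattice VQ) would be chosen per subband.
```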
Abstract:
In Australia and increasingly worldwide, methamphetamine is one of the most commonly seized drugs analysed by forensic chemists. The current well-established GC/MS methods used to identify and quantify methamphetamine are lengthy, expensive processes, but rapid analysis is often requested by undercover police, leading to an interest in developing a new analytical technique. Ninety-six illicit drug seizures containing methamphetamine (0.1% - 78.6%) were analysed using Fourier Transform Infrared Spectroscopy with an Attenuated Total Reflectance attachment and chemometrics. Two Partial Least Squares models were developed, one using the principal Infrared Spectroscopy peaks of methamphetamine and the other a Hierarchical Partial Least Squares model. Both models were refined to choose the variables most closely associated with the methamphetamine % vector. Both models were excellent: the principal-peaks Partial Least Squares model had a Root Mean Square Error of Prediction of 3.8, R2 of 0.9779 and a lower limit of quantification of 7% methamphetamine, while the Hierarchical Partial Least Squares model had a lower limit of quantification of 0.3% methamphetamine, a Root Mean Square Error of Prediction of 5.2 and R2 of 0.9637. Such models offer rapid and effective methods for screening illicit drug samples to determine the percentage of methamphetamine they contain.
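The core modelling step can be sketched with scikit-learn's PLS regression, as below; the spectra, labels and number of latent variables are illustrative placeholders, and the cross-validated error stands in for the prediction-set RMSEP quoted in the abstract.

```python
# Partial least squares regression of methamphetamine content (%) on ATR-FTIR spectra.
# Input arrays and the number of latent variables are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

def fit_pls(spectra, meth_percent, n_components=6):
    """spectra: (n_samples, n_wavenumbers); meth_percent: (n_samples,)."""
    pls = PLSRegression(n_components=n_components)
    predicted = cross_val_predict(pls, spectra, meth_percent, cv=10).ravel()
    rmse_cv = np.sqrt(mean_squared_error(meth_percent, predicted))  # cross-validated error
    pls.fit(spectra, meth_percent)                                  # final model on all data
    return pls, rmse_cv
```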
Abstract:
Highly sensitive infrared cameras can produce high-resolution diagnostic images of the temperature and vascular changes of breasts. Wavelet-transform-based features are suitable for extracting the texture-difference information of these images due to their scale-space decomposition. The objective of this study is to investigate the potential of the extracted features in differentiating between breast lesions by comparing the two corresponding pectoral regions of two breast thermograms. The pectoral regions of the breasts are important because nearly 50% of all breast cancers are located in this region. In this study, the pectoral region of the left breast is selected and the corresponding pectoral region of the right breast is identified. Texture features based on first- and second-order statistics are extracted from wavelet-decomposed images of the pectoral regions of the two breast thermograms. Principal component analysis is used to reduce dimensionality and an AdaBoost classifier to evaluate classification performance. A number of different wavelet features are compared, and it is shown that complex non-separable 2D discrete wavelet transform features perform better than their real separable counterparts.
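A rough sketch of this pipeline is given below, using a real separable 2D DWT as a simpler stand-in for the complex non-separable transform favoured in the study; the feature set, region extraction and labels are illustrative assumptions.

```python
# Wavelet texture features from paired pectoral regions, PCA for dimension reduction,
# AdaBoost for classification. Region extraction and labels are assumed to exist upstream.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

def wavelet_texture_features(region, wavelet="db4", level=2):
    """First-order statistics of each DWT subband of one pectoral region."""
    coeffs = pywt.wavedec2(region, wavelet=wavelet, level=level)
    feats = []
    for band in [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]:
        feats += [np.mean(np.abs(band)), np.std(band), np.mean(band ** 2)]
    return np.asarray(feats)

def build_classifier(left_regions, right_regions, labels):
    """Features are differences between corresponding left/right pectoral regions."""
    X = np.array([wavelet_texture_features(l) - wavelet_texture_features(r)
                  for l, r in zip(left_regions, right_regions)])
    clf = make_pipeline(PCA(n_components=0.95), AdaBoostClassifier(n_estimators=100))
    return clf.fit(X, np.asarray(labels))
```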