218 results for Gaussian curvature
Abstract:
To analyse and compare standing thoracolumbar curves in normal-weight participants and participants with obesity, using an electromagnetic device, and to analyse the measurement reliability. Material and Methods. A cross-sectional study was carried out; 36 individuals were divided into two groups (normal-weight participants and participants with obesity) according to their waist circumference. The reference points (T1–T8–L1–L5 and both posterior superior iliac spines) were used to describe the thoracolumbar curvature in the sagittal and coronal planes. A transformation from the global coordinate system was performed and the thoracolumbar curves were fitted with fifth-order polynomial equations. The tangents at the first and fifth lumbar vertebrae and the first thoracic vertebra were determined from the polynomial derivatives. The reliability of the measurement was assessed according to the internal consistency of the measure, and the thoracolumbar curvature angles were compared between groups. Results. Cronbach’s alpha values ranged between 0.824 (95% CI: 0.776–0.847) and 0.918 (95% CI: 0.903–0.949). In the coronal plane, no significant differences were found between groups; however, in the sagittal plane, significant differences were observed for thoracic kyphosis. Conclusion. There were significant differences in thoracic kyphosis in the sagittal plane between the two groups of young adults grouped according to their waist circumference.
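The curve-fitting step above lends itself to a short illustration. The sketch below fits a fifth-order polynomial to hypothetical sagittal-plane marker coordinates and reads tangent angles at the reference vertebrae off the polynomial's derivative; the coordinates and vertebral ordering are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical sagittal-plane marker coordinates (z: cranio-caudal, x: antero-posterior),
# ordered from T1 down through T8, L1, L5 to the two posterior superior iliac spines.
z = np.array([0.0, 8.0, 16.0, 24.0, 32.0, 40.0])   # cm along the trunk
x = np.array([0.0, 2.5, 3.8, 2.9, 1.1, 0.0])       # cm, sagittal offsets

# Fit a fifth-order polynomial x(z), as in the study.
poly = np.poly1d(np.polyfit(z, x, 5))
dpoly = poly.deriv()

def tangent_angle_deg(z_level):
    """Angle of the curve's tangent at a vertebral level, from the derivative."""
    return np.degrees(np.arctan(dpoly(z_level)))

# A curvature angle then follows from the tangents at two reference vertebrae,
# e.g. the difference between the T1 and L1 tangents for the thoracic curve.
thoracic_angle = tangent_angle_deg(z[0]) - tangent_angle_deg(z[3])
print(f"thoracic curvature angle: {thoracic_angle:.1f} degrees")
```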
Abstract:
Agility is an essential part of many athletic activities. Currently, agility drill duration is the sole criterion used to evaluate agility performance. The relationship between drill duration and factors such as acceleration, deceleration and change of direction, however, has not been fully explored. This paper provides a mathematical description of the relationship between velocity and radius of curvature in an agility drill through implementation of a power law (PL). Two groups of skilled and unskilled participants performed a cyclic forward/backward shuttle agility test. Kinematic data were recorded using a motion capture system at a sampling rate of 200 Hz. The logarithmic relationship between the tangential velocity and radius of curvature of participant trajectories in both groups was established using the PL. The slope of the regression line was found to be 0.26 and 0.36 for the skilled and unskilled groups, respectively. The regression slopes for both groups were thus approximately 0.3, close to the expected value of 1/3. The results indicate how the PL can be implemented in an agility drill, opening the way for a more representative measure of agility performance than drill duration alone.
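The power law described above is straightforward to fit in log-log space, since v = k·r^β implies log v = log k + β·log r, so β is simply the slope of a linear regression on the logarithms. A minimal sketch, using synthetic velocity/radius samples rather than motion-capture data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tangential velocity (m/s) and radius of curvature (m) samples,
# standing in for values extracted from a participant's trajectory.
radius = rng.uniform(0.2, 2.0, 500)
velocity = 1.5 * radius ** (1 / 3) * rng.lognormal(0.0, 0.05, 500)  # PL plus noise

# The power law v = k * r**beta is linear in log-log space:
# log v = log k + beta * log r, so beta is the regression slope.
beta, log_k = np.polyfit(np.log(radius), np.log(velocity), 1)
print(f"estimated exponent beta = {beta:.3f} (one-third power law predicts ~0.333)")
```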
Abstract:
High mechanical stress in atherosclerotic plaques at vulnerable sites, called critical stress, contributes to plaque rupture. The site of minimum fibrous cap (FC) thickness (FCMIN) and the plaque shoulder are well-documented vulnerable sites. The inherent weakness of the FC material at its thinnest point increases the stress there, making it vulnerable, while the high curvature of the lumen contour over the FC may also increase plaque stress. We aimed to assess the critical stresses at FCMIN and at the site of maximum lumen curvature over the FC (LCMAX), and to quantify the difference, to see which vulnerable site had the highest critical stress and was therefore at highest risk of rupture. One hundred patients underwent high-resolution carotid magnetic resonance (MR) imaging. We used 352 MR slices with delineated atherosclerotic components for the simulation study. Stresses at all the integral nodes along the lumen surface were calculated using the finite-element method. FCMIN and LCMAX were identified, and the critical stresses at these sites were assessed and compared. Critical stress at FCMIN was significantly lower than that at LCMAX (median: 121.55 kPa; interquartile range (IQR) = [60.70-180.32] kPa vs. 150.80 kPa; IQR = [91.39-235.75] kPa, p < 0.0001). If only the critical stress at FCMIN were used, the stress condition of 238 of the 352 MR slices would be underestimated, while if only the critical stress at LCMAX were used, 112 out of 352 would be underestimated. Stress analysis at both FCMIN and LCMAX should be used for a refined mechanical risk assessment of atherosclerotic plaques, since material failure at either site may result in rupture.
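As a side note, locating the maximum-curvature site (LCMAX) on a delineated lumen contour is a standard discrete-curvature computation. The sketch below uses an invented contour, not patient MR data, and stands in for only this one step of the pipeline; the stresses themselves require the finite-element model:

```python
import numpy as np

def curvature(x, y):
    """Discrete curvature of a planar contour sampled at points (x, y)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Hypothetical lumen contour points over the fibrous cap region.
t = np.linspace(0, np.pi, 200)
x = np.cos(t) * (1 + 0.1 * np.sin(3 * t))
y = np.sin(t) * (1 + 0.1 * np.sin(3 * t))

kappa = curvature(x, y)
i_max = int(np.argmax(kappa))          # node of maximum lumen curvature (LCMAX)
print(f"max curvature {kappa[i_max]:.2f} at node {i_max}")
```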
Abstract:
Pseudo-marginal methods such as the grouped independence Metropolis-Hastings (GIMH) and Markov chain within Metropolis (MCWM) algorithms have been introduced in the literature as an approach to perform Bayesian inference in latent variable models. These methods replace intractable likelihood calculations with unbiased estimates within Markov chain Monte Carlo algorithms. The GIMH method has the posterior of interest as its limiting distribution, but suffers from poor mixing if it is too computationally intensive to obtain high-precision likelihood estimates. The MCWM algorithm has better mixing properties, but less theoretical support. In this paper we propose to use Gaussian processes (GP) to accelerate the GIMH method, whilst using a short pilot run of MCWM to train the GP. Our new method, GP-GIMH, is illustrated on simulated data from a stochastic volatility model and a gene network model.
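A highly simplified sketch of the idea: noisy log-likelihood estimates from a pilot run train a GP surrogate, whose predictions then stand in for the expensive likelihood estimates inside a Metropolis-Hastings loop. The toy model, flat prior, default kernel and tuning values below are assumptions for illustration, not the paper's GP-GIMH implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

# Pilot run (MCWM-style): noisy unbiased log-likelihood estimates at visited thetas.
theta_pilot = rng.uniform(-2, 2, (50, 1))
loglik_pilot = -0.5 * theta_pilot[:, 0] ** 2 + rng.normal(0, 0.1, 50)

# Train the GP surrogate on the pilot evaluations.
gp = GaussianProcessRegressor(alpha=0.1**2).fit(theta_pilot, loglik_pilot)

# Metropolis-Hastings using the GP-predicted log-likelihood in the acceptance
# ratio (a flat prior is assumed, so only the likelihood terms appear).
theta, ll = np.array([0.0]), gp.predict([[0.0]])[0]
chain = []
for _ in range(2000):
    prop = theta + rng.normal(0, 0.5, 1)
    ll_prop = gp.predict([prop])[0]
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta[0])
```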
Abstract:
Purpose. Transient changes in corneal topography associated with soft and conventional or reverse-geometry rigid contact lens wear have been well documented; however, only a few studies have examined the influence of scleral contact lens wear upon the cornea. Therefore, in this study, we examined the influence of modern miniscleral contact lenses, which land entirely on the sclera and overlying tissues, upon anterior corneal curvature and optics. Methods. Anterior corneal topography and elevation data were acquired using Scheimpflug imaging (Pentacam HR, Oculus) immediately prior to and following 8 hours of miniscleral contact lens wear in 15 young healthy adults (mean age 22 ± 3 years, 8 East Asian, 7 Caucasian) with normal corneae. Corneal diurnal variations were accounted for using data collected on a dedicated measurement day without contact lens wear. Corneal clearance was quantified using an optical coherence tomographer (RS-3000, Nidek) following lens insertion and after 8 hours of lens wear. Results. Although corneal clearance was maintained throughout the 8-hour lens wear period, significant corneal flattening (up to 0.08 ± 0.04 mm) was observed, primarily in the superior mid-peripheral cornea, which resulted in a slight increase in against-the-rule corneal astigmatism (mean +0.02/-0.15 x 94 for an 8 mm diameter). The higher-order aberration terms of horizontal coma, vertical coma and spherical aberration all underwent significant changes for an 8 mm corneal diameter (p ≤ 0.01), which typically resulted in a decrease in RMS error values (mean change in total higher-order RMS -0.035 ± 0.046 µm for an 8 mm diameter). There was no association between the magnitude of change in central or mid-peripheral corneal clearance during lens wear and the observed changes in corneal curvature (p > 0.05). However, Asian participants displayed a significantly greater reduction in corneal clearance (p = 0.04) and greater superior-nasal corneal flattening compared with Caucasians (p = 0.048). Conclusions. Miniscleral contact lenses that vault the cornea induce significant changes in anterior corneal surface topography and higher-order aberrations following 8 hours of lens wear. The region of greatest corneal flattening was in the superior-nasal mid-periphery, more so in Asian participants. Practitioners should be aware that corneal measurements obtained following miniscleral lens removal may mask underlying corneal steepening.
Abstract:
The effectiveness of higher-order spectral (HOS) phase features in speaker recognition is investigated by comparison with Mel-Cepstral features on the same speech data. HOS phase features retain phase information from the Fourier spectrum, unlike Mel-frequency Cepstral coefficients (MFCC). Gaussian mixture models are constructed from Mel-Cepstral features and HOS features, respectively, for the same data from various speakers in the Switchboard telephone speech corpus. Feature clusters, model parameters and classification performance are analysed. HOS phase features on their own provide a correct identification rate of about 97% on the chosen subset of the corpus, the same level of accuracy as provided by MFCCs. Cluster plots and model parameters are compared to show that HOS phase features can provide complementary information to better discriminate between speakers.
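The GMM-based identification scheme can be sketched compactly: one mixture model per enrolled speaker, with a test utterance assigned to the model of highest average log-likelihood. The random features below are placeholders for MFCC or HOS-phase vectors extracted from real speech:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Hypothetical per-speaker feature matrices (frames x dims); in the paper these
# would be MFCC or HOS-phase features from Switchboard utterances.
train = {s: rng.normal(loc=s, size=(500, 12)) for s in range(3)}

# One GMM per enrolled speaker.
models = {s: GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(X)
          for s, X in train.items()}

def identify(features):
    """Pick the speaker whose GMM gives the highest average log-likelihood."""
    scores = {s: m.score(features) for s, m in models.items()}
    return max(scores, key=scores.get)

test = rng.normal(loc=1, size=(200, 12))
print("identified speaker:", identify(test))
```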
Abstract:
As part of vital infrastructure and transportation networks, bridge structures must function safely at all times. However, due to heavier and faster-moving vehicular loads and functional changes, such as busway accommodation, many bridges now operate at loads beyond their design capacity. Additionally, the high cost of renovation or replacement is difficult for infrastructure owners to undertake. Structural health monitoring (SHM) is intended to assess the condition of designated bridges and to foresee probable failures. Recently proposed SHM systems incorporate vibration-based damage detection (VBDD) techniques, statistical methods and signal processing techniques, and have been regarded as efficient and economical ways to address the problem. Recent developments in damage detection and condition assessment techniques based on VBDD and statistical methods are reviewed. The VBDD methods discussed here are based on changes, before and after damage, in natural frequencies, curvature/strain modes, modal strain energy (MSE), dynamic flexibility and artificial neural networks (ANN), along with other signal processing methods such as wavelet techniques and empirical mode decomposition (EMD)/Hilbert spectrum methods.
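Among the VBDD methods listed, curvature-mode damage detection is easy to illustrate: mode-shape curvature is estimated by central differences, and a localised change between the intact and damaged curvatures flags the damage site. The beam mode below is synthetic and the damage model is an assumption:

```python
import numpy as np

def mode_shape_curvature(phi, h):
    """Central-difference curvature of a mode shape sampled at spacing h."""
    kappa = np.zeros_like(phi)
    kappa[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
    return kappa

# Hypothetical first bending mode of a beam, measured before and after damage.
x = np.linspace(0, 1, 21)
phi_intact = np.sin(np.pi * x)
phi_damaged = phi_intact + 0.02 * np.exp(-((x - 0.4) / 0.05) ** 2)  # local anomaly

h = x[1] - x[0]
damage_index = np.abs(mode_shape_curvature(phi_damaged, h)
                      - mode_shape_curvature(phi_intact, h))
print("suspected damage near x =", x[np.argmax(damage_index)])
```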
Abstract:
Scoliosis is a spinal deformity, involving a side-to-side curvature of the spine in the coronal plane as well as a rotation of the spinal column in the transverse plane. The coronal curvature is measured using a Cobb angle. If the deformity is severe, treatment for scoliosis may require surgical intervention whereby a rod is attached to the spinal column to correct the abnormal curvature. In order to provide surgeons with an improved ability to predict the likely outcomes following surgery, techniques to create patient-specific finite element models (FEM) of scoliosis patients treated at the Mater Children’s Hospital (MCH) in Brisbane are being developed and validated. This paper presents a comparison of the simulated and clinical data for a scoliosis patient treated at MCH.
Abstract:
Adolescent idiopathic scoliosis (AIS) is the most common form of spinal deformity in paediatrics, prevalent in approximately 2-4% of the general population. While it is a complex three-dimensional deformity, it is clinically characterised by an abnormal lateral curvature of the spine. The treatment for severe deformity is surgical correction with the use of structural implants. Anterior single rod correction employs a solid rod connected to the anterior spine via vertebral body screws. Correction is achieved by applying compression between adjacent vertebral body screws, before locking each screw onto the rod. Biomechanical complication rates have been reported as high as 20.8%, and include rod breakage, screw pull-out and loss of correction. Currently, the corrective forces applied to the spine are unknown. These forces are important variables to consider in understanding the biomechanics of scoliosis correction. The purpose of this study was to measure these forces intra-operatively during anterior single rod AIS correction.
Abstract:
To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to incrementally build a map of its surroundings and to localize itself in that map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is its panoramic view of all the landmarks in the scene. Placing landmarks in a lawn field to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating it. In the motion model, error accumulates in the estimates of object positions, mainly due to wheel slippage, so quantifying the uncertainty of object positions accurately is a fundamental requirement. In bearing-only SLAM, the probability density function (PDF) of a landmark's position should be uniform along the observed bearing; existing methods that approximate the PDF with a Gaussian estimate do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address these problems. The main novel contributions of this thesis are: 1. A bearing-only SLAM method not requiring odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry); the uncertainty of the estimated landmark positions then depends on the vision error only, instead of on the combination of odometry and vision errors. 2. The transformation of the spatial uncertainty of objects. The thesis introduces a novel method for translating the spatial uncertainty of objects estimated in a moving frame attached to the robot into the global frame attached to the static landmarks in the environment. 3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, with the marginal probability on range constrained to be uniform; see the sketch after this abstract. Compared to a PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can easily be adopted in a probabilistic framework such as a particle filtering system. The main advantages of the proposed bearing-only SLAM system are its lower production cost and flexibility of use. The proposed system can be adopted in other domestic robots as well, such as vacuum cleaners or robotic toys, where the terrain is essentially 2D.
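Contribution 3 suggests a simple particle representation. A minimal sketch, assuming a Gaussian bearing error and the uniform-in-range marginal described above; the noise levels and maximum range are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_landmark_pdf(bearing, sigma_bearing, r_max, n):
    """Draw landmark-position particles: uniform marginal in range, Gaussian
    spread around the observed bearing (a PDF expressed in polar coordinates)."""
    r = rng.uniform(0.0, r_max, n)                 # uniform along range
    theta = rng.normal(bearing, sigma_bearing, n)  # vision (bearing) error only
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

# Particles for a landmark observed at 30 degrees with 1-degree bearing noise.
particles = sample_landmark_pdf(np.radians(30), np.radians(1), r_max=20.0, n=1000)
```

Such a particle set drops directly into a particle filter, which is the probabilistic framework the thesis mentions.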
Abstract:
The problem of impostor dataset selection for GMM-based speaker verification is addressed through the recently proposed data-driven background dataset refinement technique. The SVM-based refinement technique selects from a candidate impostor dataset those examples that are most frequently selected as support vectors when training a set of SVMs on a development corpus. This study demonstrates the versatility of dataset refinement in the task of selecting suitable impostor datasets for use in GMM-based speaker verification. The use of refined Z- and T-norm datasets provided performance gains of 15% in EER in the NIST 2006 SRE over the use of heuristically selected datasets. The refined datasets were shown to generalise well to the unseen data of the NIST 2008 SRE.
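A minimal sketch of the support-vector-frequency idea as described: train one SVM per development target against the candidate impostor set, count how often each candidate appears as a support vector, and keep the most frequent. The features, corpus sizes and linear kernel are illustrative assumptions, not the study's actual setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Hypothetical features: candidate impostor examples and a development corpus
# of target training sets.
candidates = rng.normal(size=(300, 20))
dev_targets = [rng.normal(loc=0.5, size=(10, 20)) for _ in range(50)]

# Count how often each candidate is selected as a support vector when training
# one SVM per development target against the full candidate set.
counts = np.zeros(len(candidates), dtype=int)
for target in dev_targets:
    X = np.vstack([target, candidates])
    y = np.concatenate([np.ones(len(target)), -np.ones(len(candidates))])
    svm = SVC(kernel="linear").fit(X, y)
    sv = svm.support_[svm.support_ >= len(target)] - len(target)
    counts[sv] += 1

# Keep the most frequently selected examples as the refined impostor dataset.
refined = candidates[np.argsort(counts)[::-1][:100]]
```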
Abstract:
We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine the uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated, including one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics, and a polynomial-based whole-lens model. The models were used to describe the in vitro shape of a data set of twenty human lenses aged 7–82 years. The two-conic-surface model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for this data set, indicating that they most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surface model is unable to describe the lens equatorial region). Considerable differences were found between some models in the estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities of the five models considered. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, with the asphericities usually positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty of selecting an appropriate model for crystalline lens shape.
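The per-surface conic fit and bootstrap can be sketched as follows. The sag equation z(r) = r² / (R + √(R² − (1+Q)r²)) is the standard form for a conic with vertex radius R and asphericity Q; the sketch uses ordinary rather than orthogonal least squares for brevity, and the profile data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def conic_sag(r, R, Q):
    """Sag of a conic surface with vertex radius R and asphericity Q."""
    # Clip the root argument so the optimiser cannot wander into NaN territory.
    root = np.sqrt(np.clip(R**2 - (1 + Q) * r**2, 1e-9, None))
    return r**2 / (R + root)

rng = np.random.default_rng(5)

# Synthetic anterior-surface profile (mm) with measurement noise.
r = np.linspace(-3.0, 3.0, 61)
z = conic_sag(r, R=10.0, Q=-2.5) + rng.normal(0.0, 0.01, r.size)

popt, _ = curve_fit(conic_sag, r, z, p0=(9.0, -1.0))

# Bootstrap: refit on resampled points to get uncertainties for R and Q.
boots = []
for _ in range(500):
    idx = rng.integers(0, r.size, r.size)
    b, _ = curve_fit(conic_sag, r[idx], z[idx], p0=popt)
    boots.append(b)
R_sd, Q_sd = np.std(boots, axis=0)
print(f"R = {popt[0]:.3f} ± {R_sd:.3f} mm, Q = {popt[1]:.3f} ± {Q_sd:.3f}")
```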
Abstract:
Suggestions that peripheral imagery may affect the development of refractive error have led to interest in the variation in refraction and aberration across the visual field. It is shown that, if the optical system of the eye is rotationally symmetric about an optical axis which does not coincide with the visual axis, measurements of refraction and aberration made along the horizontal and vertical meridians of the visual field will show asymmetry about the visual axis. The departures from symmetry are modelled for second-order aberrations, refractive components and third-order coma. These theoretical results are compared with practical measurements from the literature. The experimental data support the concept that departures from symmetry about the visual axis in the measurements of crossed-cylinder astigmatism J45 and J180 are largely explicable in terms of a decentred optical axis. Measurements of the mean sphere M suggest, however, that the retinal curvature must differ in the horizontal and vertical meridians.
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse, symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics, as the sparse structure can be exploited, typically through the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a `square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods of discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and for approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
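The Lanczos approximation named above is the most direct of these methods: run m Lanczos steps to obtain the tridiagonal matrix T_m and orthonormal basis V_m, then take f(A)b ≈ ‖b‖ V_m f(T_m) e_1. A minimal sketch, with no reorthogonalisation and sizes chosen purely for illustration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.linalg import eigh

def lanczos_fA_b(A, b, m, f):
    """m-step Lanczos approximation to f(A) b for symmetric A:
    f(A) b ~ ||b|| * V_m f(T_m) e_1."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # Evaluate f on the small tridiagonal T_m via its eigendecomposition.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])
    return np.linalg.norm(b) * (V @ fT_e1)

# Example: A^(-1/2) b for a sparse SPD 1-D Laplacian (alpha = 1 in t^(-alpha/2)).
n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)
x = lanczos_fA_b(A, b, m=50, f=lambda t: t ** -0.5)
```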
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders (a minimal sketch of the underlying detrended fluctuation analysis appears after this abstract). This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming the long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based only on the second moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
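The detrended fluctuation analysis underlying MF-DFA proceeds by integrating the series into a profile, detrending it within windows of varying scale, and reading the scaling exponent off a log-log fit of the fluctuation function. A minimal sketch for the standard q = 2 case, run on synthetic data rather than price series:

```python
import numpy as np

def dfa(series, scales, order=1):
    """Detrended fluctuation analysis: returns the scaling exponent h(2).
    MF-DFA generalises this by raising segment variances to q/2 before averaging."""
    profile = np.cumsum(series - np.mean(series))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        var = []
        for seg in segs:
            # Remove a polynomial trend of the given order from each segment.
            trend = np.polyval(np.polyfit(t, seg, order), t)
            var.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(var)))
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return h

rng = np.random.default_rng(6)
returns = rng.normal(size=4000)     # uncorrelated noise: expect h close to 0.5
print("h(2) =", round(dfa(returns, scales=[16, 32, 64, 128, 256]), 2))
```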