421 results for Shape analysis

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

In recent years, face recognition systems have been applied in various useful applications, such as surveillance, access control, criminal investigations, and law enforcement. However, face biometric systems can be highly vulnerable to spoofing attacks, in which an impostor tries to bypass the face recognition system using a photo or video sequence. In this paper, a novel liveness detection method based on the 3D structure of the face is proposed. By processing the 3D curvature of the acquired data, the proposed approach allows a biometric system to distinguish a real face from a photo, increasing the overall performance of the system and reducing its vulnerability. To test the real capability of the methodology, a 3D face database was collected simulating spoofing attacks, i.e. using photographs instead of real faces. The experimental results show the effectiveness of the proposed approach.
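The core idea, distinguishing a near-planar photo from a genuinely curved face, can be illustrated with a deliberately simplified sketch: fit a plane to a depth map and threshold the residual. The synthetic depth maps, threshold value, and scoring function below are all illustrative assumptions, not the paper's actual curvature analysis.

```python
import numpy as np

def planarity_score(depth_map):
    """Fit a plane z = a*x + b*y + c to a depth map by least squares and
    return the RMS residual. A printed photo is nearly planar (tiny
    residual), while a real face exhibits 3D relief (large residual)."""
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    z = depth_map.ravel()
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ coeffs
    return float(np.sqrt(np.mean(residual ** 2)))

def is_live(depth_map, threshold=1.0):
    """Classify as live when the surface deviates enough from a plane
    (threshold is an arbitrary illustrative value)."""
    return planarity_score(depth_map) > threshold

# A flat "photo" vs. a curved "face" (synthetic depth maps)
flat = np.fromfunction(lambda y, x: 0.1 * x + 0.05 * y, (64, 64))
curved = flat + 5.0 * np.exp(-((np.arange(64) - 32) ** 2) / 200.0)  # nose-like ridge
print(is_live(flat), is_live(curved))
```

In the paper's setting the decision is driven by curvature statistics of acquired 3D data; this sketch only captures the planarity intuition.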

Relevance: 100.00%

Abstract:

We developed and validated a new method to create automated 3D parametric surface models of the lateral ventricles, designed for monitoring degenerative disease effects in clinical neuroscience studies and drug trials. First we used a set of parameterized surfaces to represent the ventricles in a manually labeled set of 9 subjects' MRIs (atlases). We fluidly registered each of these atlases and mesh models to a set of MRIs from 12 Alzheimer's disease (AD) patients and 14 matched healthy elderly subjects, and we averaged the resulting meshes for each of these images. Validation experiments on expert segmentations showed that (1) the Hausdorff labeling error rapidly decreased, and (2) the power to detect disease-related alterations monotonically improved as the number of atlases, N, was increased from 1 to 9. We then combined the segmentations with a radial mapping approach to localize ventricular shape differences in patients. In surface-based statistical maps, we detected more widespread and intense anatomical deficits as we increased the number of atlases, and we formulated a statistical stopping criterion to determine the optimal value of N. Anterior horn anomalies in Alzheimer's patients were only detected with the multi-atlas segmentation, which clearly outperformed the standard single-atlas approach.
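The Hausdorff labeling error mentioned above measures the worst-case disagreement between two segmentations. A minimal point-set version (the surface case samples points from each mesh) can be sketched as:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets of shape
    (n, 3) and (m, 3): the largest distance from a point in one set to
    its nearest neighbour in the other set."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy "segmentations": shifting one point by 2 units sets the bound
A = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
B = np.array([[0.0, 0, 0], [1, 0, 0], [0, 3, 0]])
print(hausdorff(A, B))  # → 2.0
```

For dense meshes one would compute this with a spatial index rather than the full pairwise distance matrix used here.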

Relevance: 70.00%

Abstract:

This paper introduces a new method to automate the detection of marine species in aerial imagery using a machine learning approach. Our proposed system has, at its core, a convolutional neural network. We compare this trainable classifier to a handcrafted classifier based on color features, entropy, and shape analysis. Experiments demonstrate that the convolutional neural network outperforms the handcrafted solution. We also introduce a negative training-example selection method for situations where the original training set consists of a collection of labeled images in which the objects of interest (positive examples) have been marked by a bounding box. We show that picking random rectangles from the background is not necessarily the best way to generate useful negative examples with respect to learning.
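A common baseline for generating negatives from images annotated only with positive bounding boxes is to sample random background rectangles and reject those that overlap a positive box. The function names, box size, and IoU cutoff below are illustrative assumptions; the paper's proposed selection method goes beyond this random baseline, which it shows to be suboptimal.

```python
import random

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def sample_negatives(img_w, img_h, positives, n, box=32, max_iou=0.1, seed=0):
    """Draw random background rectangles that barely overlap any positive
    bounding box (overlap-filtered random sampling)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.randint(0, img_w - box)
        y = rng.randint(0, img_h - box)
        cand = (x, y, x + box, y + box)
        if all(iou(cand, p) <= max_iou for p in positives):
            out.append(cand)
    return out

negs = sample_negatives(640, 480, [(100, 100, 164, 164)], n=5)
print(negs)
```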

Relevance: 70.00%

Abstract:

We propose in this paper a new method for the mapping of hippocampal (HC) surfaces to establish correspondences between points on HC surfaces and enable localized HC shape analysis. A novel geometric feature, the intrinsic shape context, is defined to capture the global characteristics of the HC shapes. Based on this intrinsic feature, an automatic algorithm is developed to detect a set of landmark curves that are stable across the population. The direct map between a source and target HC surface is then solved as the minimizer of a harmonic energy function defined on the source surface with landmark constraints. For numerical solutions, we compute the map with the approach of solving partial differential equations on implicit surfaces. The direct mapping method has the following properties: (1) it is automatic; (2) it is invariant to the pose of HC shapes. In our experiments, we apply the direct mapping method to study temporal changes of HC asymmetry in Alzheimer's disease (AD) using HC surfaces from 12 AD patients and 14 normal controls. Our results show that the AD group has a different trend in temporal changes of HC asymmetry than the group of normal controls. We also demonstrate the flexibility of the direct mapping method by applying it to construct spherical maps of HC surfaces. Spherical harmonics (SPHARM) analysis is then applied and confirms our results on temporal changes of HC asymmetry in AD.
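The idea of minimizing a harmonic energy subject to landmark constraints can be illustrated on a toy graph: free vertices relax to the mean of their neighbours (zero discrete Laplacian) while landmark vertices stay pinned. This is a one-dimensional caricature of the surface problem, not the paper's implicit-surface PDE solver.

```python
import numpy as np

def harmonic_interpolate(values, neighbors, fixed, iters=500):
    """Relax a discrete harmonic function on a graph: each free vertex is
    repeatedly replaced by the mean of its neighbours, while vertices in
    `fixed` (the "landmarks") keep their prescribed values."""
    v = np.asarray(values, dtype=float).copy()
    for _ in range(iters):
        for i, nbrs in neighbors.items():
            if i not in fixed:
                v[i] = np.mean([v[j] for j in nbrs])
    return v

# A path graph 0-1-2-3-4 with the endpoints as landmarks: the harmonic
# solution interpolates linearly between them.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
v = harmonic_interpolate([0, 0, 0, 0, 4], neighbors, fixed={0, 4})
print(np.round(v, 3))  # → [0. 1. 2. 3. 4.]
```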

Relevance: 60.00%

Abstract:

We present a shape-space approach for analyzing genetic influences on the shapes of the sulcal folding patterns of the cortex. Sulci are represented as continuously parameterized functions in a shape space, and shape differences between sulci are obtained via geodesics between them. The resulting statistical shape analysis framework is used not only to construct population averages, but also to compute meaningful correlations within and across groups of sulcal shapes. More importantly, we present a new algorithm that extends the traditional Euclidean estimate of the intra-class correlation to the geometric shape space, thereby allowing us to study the heritability of sulcal shape traits in a population of 193 twin pairs. This new methodology reveals strong genetic influences on the sulcal geometry of the cortex.
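The traditional Euclidean intra-class correlation that the paper generalises can be sketched for scalar twin-pair traits via the usual one-way ANOVA decomposition; the pair data below are invented for illustration.

```python
import numpy as np

def intraclass_correlation(pairs):
    """One-way ICC for twin pairs (k = 2 members per pair):
    ICC = (MSB - MSW) / (MSB + MSW), where MSB/MSW are the
    between-pair and within-pair mean squares."""
    pairs = np.asarray(pairs, dtype=float)
    n = len(pairs)
    grand = pairs.mean()
    pair_means = pairs.mean(axis=1)
    msb = 2 * ((pair_means - grand) ** 2).sum() / (n - 1)   # between pairs
    msw = ((pairs - pair_means[:, None]) ** 2).sum() / n    # within pairs
    return (msb - msw) / (msb + msw)

# Highly concordant pairs give an ICC near 1; discordant pairs do not.
concordant = [(1.0, 1.1), (2.0, 2.1), (3.0, 2.9), (4.0, 4.2)]
discordant = [(1.0, 4.0), (2.0, 3.0), (3.0, 1.5), (4.0, 2.0)]
print(intraclass_correlation(concordant), intraclass_correlation(discordant))
```

The paper's contribution is to replace these Euclidean deviations with geodesic distances in the shape space; this sketch only shows the scalar quantity being generalised.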

Relevance: 40.00%

Abstract:

Graphene nanoribbon (GNR) with free edges can exhibit non-flat morphologies due to pre-existing edge stress. Using molecular dynamics (MD) simulations, we investigate the free-edge effect on the shape transition in GNRs with different edge types, including regular (armchair and zigzag), armchair terminated with hydrogen and reconstructed armchair. The results show that initial edge stress and energy are dependent on the edge configurations. It is confirmed that pre-strain on the free edges is a possible way to limit the random shape transition of GNRs. In addition, the influence of surface attachment on the shape transition is also investigated in this work. It is found that surface attachment can lead to periodic ripples in GNRs, dependent on the initial edge configurations.

Relevance: 30.00%

Abstract:

This paper presents a prototype tracking system for tracking people in enclosed indoor environments where there is a high rate of occlusions. The system uses a stereo camera for acquisition and is capable of disambiguating occlusions using a combination of depth-map analysis, a two-step ellipse-fitting people-detection process, motion models and Kalman filters, and a novel fit metric based on computationally simple object statistics. Testing shows that our fit metric outperforms commonly used position-based metrics and histogram-based metrics, resulting in more accurate tracking of people.
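The motion-model component can be sketched as a standard constant-velocity Kalman filter. The state layout, noise levels, and synthetic track below are assumptions for illustration, not the paper's tuned tracker.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=0.01, r=0.5):
    """One predict/update cycle of a constant-velocity Kalman filter for
    a 2D position track, state = [px, py, vx, vy]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                       # position += velocity * dt
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1  # we observe position only
    Q = q * np.eye(4)
    R = r * np.eye(2)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a point moving at (1, 0.5) per frame from noiseless measurements
x, P = np.zeros(4), np.eye(4)
for t in range(1, 51):
    x, P = kalman_step(x, P, np.array([t * 1.0, t * 0.5]))
print(np.round(x, 2))  # velocity estimate approaches [1.0, 0.5]
```

A real tracker, as in the paper, would gate measurements, handle track birth/death, and score candidate associations with a fit metric.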

Relevance: 30.00%

Abstract:

Introduction: Bone mineral density (BMD) is currently the preferred surrogate for bone strength in clinical practice. Finite element analysis (FEA) is a computer simulation technique that can predict the deformation of a structure when a load is applied, providing a measure of stiffness (N mm⁻¹). Finite element analysis of X-ray images (3D-FEXI) is a FEA technique whose analysis is derived from a single 2D radiographic image.

Methods: 18 excised human femora had previously been scanned with quantitative computed tomography, from which 2D BMD-equivalent radiographic images were derived, and mechanically tested to failure in a stance-loading configuration. A 3D proximal femur shape was generated from each 2D radiographic image and used to construct 3D-FEA models.

Results: The coefficient of determination (R², %) to predict failure load was 54.5% for BMD and 80.4% for 3D-FEXI.

Conclusions: This ex vivo study demonstrates that 3D-FEXI derived from a conventional 2D radiographic image has the potential to significantly increase the accuracy of failure load assessment of the proximal femur compared with that currently achieved with BMD. This approach may be readily extended to routine clinical BMD images derived by dual energy X-ray absorptiometry. Crown Copyright © 2009 Published by Elsevier Ltd on behalf of IPEM. All rights reserved.
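The coefficient of determination used to compare BMD and 3D-FEXI is the standard R² statistic; a sketch with invented failure-load numbers (not the study's data):

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination R²: the fraction of variance in the
    measured values explained by the predictions."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    ss_res = ((y - y_pred) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Hypothetical failure loads (kN) and two predictors' estimates
measured = [4.2, 5.1, 3.8, 6.0, 5.5]
good_fit = [4.0, 5.0, 4.0, 5.9, 5.6]   # small residuals → high R²
poor_fit = [5.0, 5.0, 5.0, 5.0, 5.0]   # uninformative constant guess → R² near zero
print(round(r_squared(measured, good_fit), 3), round(r_squared(measured, poor_fit), 3))
```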

Relevance: 30.00%

Abstract:

Summary: Generalized Procrustes analysis and thin-plate splines were employed to create an average 3D shape template of the proximal femur that was warped to the size and shape of a single 2D radiographic image of a subject. Mean absolute depth errors are comparable with previous approaches utilising multiple 2D input projections.

Introduction: Several approaches have been adopted to derive volumetric density (g cm⁻³) from a conventional 2D representation of areal bone mineral density (BMD, g cm⁻²). Such approaches have generally aimed at deriving an average depth across the areal projection rather than creating a formal 3D shape of the bone.

Methods: Generalized Procrustes analysis and thin-plate splines were employed to create an average 3D shape template of the proximal femur that was subsequently warped to suit the size and shape of a single 2D radiographic image of a subject. CT scans of excised human femora (18 scanned at a pixel resolution of 1.08 mm and 24 at 0.674 mm) were equally split into a training cohort (used to create the 3D shape template) and a test cohort.

Results: The mean absolute depth errors of 3.4 mm and 1.73 mm, respectively, for the two CT pixel sizes are comparable with previous approaches based upon multiple 2D input projections.

Conclusions: This technique has the potential to derive volumetric density from BMD and to facilitate 3D finite element analysis for prediction of the mechanical integrity of the proximal femur. It may further be applied to other anatomical bone sites such as the distal radius and lumbar spine.
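Generalized Procrustes analysis builds on repeated orthogonal Procrustes fits; a minimal 2D sketch of a single rotation-plus-translation alignment (no scaling, toy landmarks, not the paper's femur data):

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: find the rotation R and translation t that
    best map landmark set Y onto X in a least-squares sense. Generalized
    Procrustes Analysis iterates such fits across a whole cohort."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # flip last singular vector to avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    t = X.mean(axis=0) - Y.mean(axis=0) @ R
    return R, t

# A rotated and translated copy of a triangle aligns back exactly
X = np.array([[0.0, 0], [2, 0], [0, 1]])
theta = np.pi / 6
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
Y = X @ Rot + np.array([3.0, -1.0])
R, t = procrustes_align(X, Y)
print(np.allclose(Y @ R + t, X))  # → True
```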

Relevance: 30.00%

Abstract:

Purpose: To investigate associations between the diurnal variation in a range of corneal parameters, including anterior and posterior corneal topography, and regional corneal thickness.

Methods: Fifteen subjects had their corneas measured using a rotating Scheimpflug camera (Pentacam) every 3-7 hours over a 24-hour period. Anterior and posterior corneal axial curvature, pachymetry, and anterior chamber depth were analysed. The best-fitting corneal sphero-cylinder from the axial curvature and the average corneal thickness for a series of different corneal regions were calculated. Intraocular pressure and axial length were also measured at each measurement session. Repeated-measures ANOVA was used to investigate diurnal change in these parameters. Analysis of covariance was used to examine associations between the measured ocular parameters.

Results: Significant diurnal variation was found to occur in both the anterior and posterior corneal curvature and in the regional corneal thickness. Flattening of the anterior corneal best sphere was observed at the early morning measurement (p < 0.0001). The posterior cornea also underwent a significant steepening (p < 0.0001) and change in astigmatism 90/180° at this time. A significant swelling of the cornea (p < 0.0001) was also found to occur immediately after waking. Highly significant associations were found between the diurnal variation in corneal thickness and the changes in corneal curvature.

Conclusions: Significant diurnal variation occurs in the regional thickness and the shape of the anterior and posterior cornea. The largest changes in the cornea were typically evident upon waking. The observed non-uniform regional corneal thickness changes resulted in a steepening of the posterior cornea and a flattening of the anterior cornea at this time.

Relevance: 30.00%

Abstract:

This thesis investigates Theatre for Young People (TYP) as a site of performance innovation. The inquiry is focused on contemporary dramaturgy and its fieldwork aims to identify new dramaturgical principles operating in the creation and presentation of TYP. The research then seeks to assess how these new principles contribute to Postdramatic Theatre theory. This research inquiry springs from an imperative based in practice: Young people under 25 years have a literacy based on online hypertextual experiences which take the reader outside the frames of a dramatic narrative and beyond principles such as linearity, dramatic unity, teleology and resolution. As a dramaturg and educator I wanted to understand the new ways that young people engage in cultural products, to identify and utilize the new principles of dramaturgy that are now in evidence. My research examines how two playwright/directors approach their work and the new principles that can be identified in their dramaturgy. The fieldwork is scoped into two case studies: the first on TJ Eckleberg working in Australian Theatre for Young People and the second on Kristo Šagor working in German Children’s and Young People’s Theatre (KJT). These case studies address both types of production dramaturgy - the dramaturgy emergent through process in devised performance making, and that emergent in a performance based on a written playscript. On Case Study One the researcher, as participant observer, worked as production dramaturg on a large scale, site specific performance, observing the dramaturgy in process of its director and chief devisor. On Case Study Two the researcher, as observer and analyst, undertook a performance analysis of three playscripts and productions by a contemporary German playwright and director. 
Utilizing participant observation, reflective practice and grounded analysis, the case studies have identified two new principles animating the dramaturgy of these TYP practitioners, namely ‘displacement’ and ‘installation’. Taking practice into theory, the thesis concludes by demonstrating how displacement and installation contribute to Postdramatic Theatre’s “arsenal of expressive gestures which serve as theatre’s response to changed social communication under the conditions of generalized communication technologies” (Lehmann, H.-T., 2006, p. 23). This research makes an original contribution to knowledge by evidencing that the principles of Postdramatic Theory lie within the practice of contemporary Theatre for Young People. It also contributes valuable research to a specialized, often overlooked terrain, namely dramaturgy in Theatre for Young People, presented here with a contemporary, international and intercultural perspective.

Relevance: 30.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
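As a much-simplified illustration of subband decomposition and per-subband quantization, here is one level of a 2D Haar transform with uniform scalar quantization. The thesis's wavelet-packet structure, generalized-Gaussian modelling, and lattice vector quantization are far richer than this sketch.

```python
import numpy as np

def haar2d_step(img):
    """One level of a 2D Haar wavelet transform: split the image into an
    approximation subband (LL) and three detail subbands (LH, HL, HH)."""
    a = img.astype(float)
    lo = (a[:, ::2] + a[:, 1::2]) / 2   # row-wise averages
    hi = (a[:, ::2] - a[:, 1::2]) / 2   # row-wise differences
    ll = (lo[::2] + lo[1::2]) / 2
    lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def quantize(band, step):
    """Uniform scalar quantization of a subband; coarser steps on detail
    bands trade fidelity for bit rate."""
    return np.round(band / step) * step

# A linear ramp image has no diagonal detail: its HH subband is all zero.
img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d_step(img)
print(ll.shape, float(np.abs(quantize(hh, 1.0)).max()))
```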

Relevance: 30.00%

Abstract:

The closure of large institutions for people with intellectual disability and the subsequent shift to community living has been a feature of social policies in most western democracies for more than two decades. While the move from congregated settings to homes in the community has been heralded as a positive and desirable strategy, deinstitutionalisation has continued to be a controversial policy and practice. This research critically analyses the implementation of a deinstitutionalisation policy called Institutional Reform in the state of Queensland from May 1994 until it was dismantled under a new government in the middle of 1996. A trajectory study of the policy from early conceptualisation through its development, implementation and final extinction was undertaken. Several methods were utilised in the research, including the textual analysis of policy documents, discussion papers and newspaper articles, interviews with stakeholders and participant observation. The research draws on theories of discourse and focuses on how discourses of disability shape policy and practice. The thesis outlines a number of implications for policy implementation more generally as well as for disability services. In particular, the theoretical framework builds on Fulcher's (1989) disabling discourses - medical, charity, lay and rights - and identifies two additional discourses of economics and inclusion. The thesis argues that competing disability discourses operated in powerful ways to shape the implementation of the policy and illustrates how older discourses based on fear and prejudice were promoted to positions of dominance and power.

Relevance: 30.00%

Abstract:

In this paper, an enriched radial point interpolation method (e-RPIM) is developed for the determination of crack-tip fields. In e-RPIM, the conventional RBF interpolation is augmented with suitable trigonometric basis functions to reflect the properties of the stresses in the crack-tip fields. The performance of the enriched RBF meshfree shape functions is first investigated by fitting different surfaces. The surface-fitting results show that, compared with the conventional RBF shape function, the enriched RBF shape function has: (1) similar accuracy in fitting a polynomial surface; (2) much better accuracy in fitting a trigonometric surface; and (3) similar interpolation stability, without an increase in the condition number of the RBF interpolation matrix. The enriched RBF shape function therefore retains all the advantages of the conventional RBF shape function while accurately reflecting the properties of the stresses in the crack-tip fields. The system of equations for the crack analysis is then derived based on the enriched RBF meshfree shape function and the meshfree weak form. Several problems of linear fracture mechanics are simulated using this newly developed e-RPIM method. It is demonstrated that the present e-RPIM is very accurate and stable, and it has good potential to develop into a practical simulation tool for fracture mechanics problems.
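The benefit of augmenting an RBF basis with trigonometric terms can be seen even in a 1D least-squares sketch: adding sine/cosine columns lets the fit capture an oscillatory target that plain RBFs approximate poorly. The multiquadric kernel, centre placement, and target function below are illustrative assumptions, not the paper's e-RPIM formulation.

```python
import numpy as np

def rbf_fit(x, y, centers, extra_basis=(), c=1.0):
    """Least-squares fit with multiquadric RBF columns, optionally
    augmented by extra basis functions (e.g. trigonometric terms),
    echoing how e-RPIM enriches the RBF basis. Returns fitted values."""
    cols = [np.sqrt((x - xc) ** 2 + c ** 2) for xc in centers]
    cols += [f(x) for f in extra_basis]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(3 * x)                                  # an oscillatory target
centers = np.linspace(0, 2 * np.pi, 8)
plain = rbf_fit(x, y, centers)
enriched = rbf_fit(x, y, centers, extra_basis=(lambda t: np.sin(3 * t),
                                               lambda t: np.cos(3 * t)))
err = lambda f: float(np.max(np.abs(f - y)))
print(err(plain) > err(enriched))  # enrichment fits the oscillation better
```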

Relevance: 30.00%

Abstract:

Accurately and effectively simulating large deformation is one of the major challenges in the numerical modeling of metal forming. In this paper, an adaptive local meshless formulation based on meshless shape functions and the local weak form is developed for large deformation analysis. Total Lagrangian (TL) and Updated Lagrangian (UL) approaches are used and thoroughly compared with each other in terms of computational efficiency and accuracy. It has been found that the developed meshless technique provides superior performance to the conventional FEM in dealing with large deformation problems in metal forming. In addition, the TL approach has better computational efficiency than the UL approach. However, the adaptive analysis is much more efficient with the UL approach than with the TL approach.
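In a Total Lagrangian description, kinematics are referred to the undeformed configuration through the deformation gradient F; a minimal sketch of the associated Green-Lagrange strain (toy uniaxial stretch, not the paper's meshless formulation):

```python
import numpy as np

def green_lagrange_strain(F):
    """Green-Lagrange strain E = 0.5 * (F^T F - I), the finite-strain
    measure used with a Total Lagrangian description, where F is the
    deformation gradient mapping the reference to the current
    configuration."""
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))

# Uniaxial stretch of 20% along x (assumed toy deformation)
F = np.diag([1.2, 1.0])
E = green_lagrange_strain(F)
print(np.round(E, 3))  # E_xx = 0.5*(1.2**2 - 1) = 0.22
```

An Updated Lagrangian scheme would instead re-reference kinematics to the last converged configuration at each step, which is what makes adaptivity cheaper in that setting.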