986 results for Split application
Abstract:
In this paper, we develop the switching controller presented by Lee et al. for the pose control of a car-like vehicle, to allow the use of an omnidirectional vision sensor. To this end we incorporate an extension to a hypothesis on the navigation behaviour of the desert ant, Cataglyphis bicolor, which leads to a correspondence-free, landmark-based vision technique. The method we present allows positioning to a learnt location based on discrepancies in feature bearing angle and range between the robot's current view of the environment and its view at the learnt location. We present simulations and experimental results, the latter obtained using our outdoor mobile platform.
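The correspondence-free flavour of this technique can be illustrated with a small, hypothetical Python sketch based on the average-landmark-vector idea associated with desert-ant navigation models: each view is summarised by the sum of unit vectors towards the visible landmarks, so no matching between individual landmarks is required. This is only an illustration of the underlying principle; the paper's method additionally uses range discrepancies and a switching pose controller, neither of which is shown here.

import math

def average_landmark_vector(bearings):
    # Sum of unit vectors towards the currently visible landmarks (bearings in radians).
    # No matching between individual landmarks is needed.
    x = sum(math.cos(b) for b in bearings)
    y = sum(math.sin(b) for b in bearings)
    return (x, y)

def homing_vector(current_bearings, learnt_bearings):
    # Move so as to reduce the discrepancy between the current and the stored vector.
    cx, cy = average_landmark_vector(current_bearings)
    lx, ly = average_landmark_vector(learnt_bearings)
    return (cx - lx, cy - ly)

# Three landmarks seen from a displaced pose and from the learnt location.
print(homing_vector([0.5, 2.1, 3.7], [0.2, 1.9, 4.0]))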
Abstract:
The objective of this paper is to provide an overview of mine automation applications, developed at the Queensland Centre for Advanced Technology (QCAT), which make use of IEEE 802.11b wireless local area networks (WLANs). The paper has been prepared for a 2002 conference entitled "Creating the Virtual Enterprise - Leveraging wireless technology within existing business models for corporate advantage". Descriptions of the WLAN components have been omitted here as such details are presented in the accompanying papers. The structure of the paper is as follows. Application overviews are provided in Sections 2 to 7. Some pertinent strengths and weaknesses are summarised in Section 8. Please refer to http://www.mining-automation.com/ or contact the authors for further information.
Abstract:
We present a novel approach for preprocessing systems of polynomial equations via graph partitioning. The variable-sharing graph of a system of polynomial equations is defined. If this graph is disconnected, then the corresponding system of equations can be split into smaller systems that can be solved individually. This can provide a tremendous speed-up in computing the solution to the system, but such a split is unlikely to occur either randomly or in applications. However, by deleting certain vertices of the graph, the variable-sharing graph can be disconnected in a balanced fashion, and in turn the system of polynomial equations can be separated into smaller systems of near-equal sizes. In graph-theoretic terms, this process is equivalent to finding balanced vertex partitions with minimum-weight vertex separators. Techniques for finding these vertex partitions are discussed, and experiments are performed to evaluate their practicality for general graphs and systems of polynomial equations. Applications of this approach to the algebraic cryptanalysis of symmetric ciphers are presented: for the QUAD family of stream ciphers, we show how a malicious party can manufacture conforming systems that can be easily broken. For the stream ciphers Bivium and Trivium, we achieve significant speedups in algebraic attacks against them, mainly in a partial key guess scenario. In each of these cases, the systems of polynomial equations involved are well suited to our graph partitioning method. These results may open a new avenue for evaluating the security of symmetric ciphers against algebraic attacks.
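The splitting step can be made concrete with a minimal Python sketch, assuming each polynomial is reduced to the set of variables it contains; connected components of the implicit variable-sharing graph then induce independent subsystems. This is not the authors' implementation, and the balanced vertex-separator search applied when the graph is connected is omitted.

from collections import defaultdict

def split_system(polynomials):
    # polynomials: list of sets of variable names; returns groups that share no variables.
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for poly in polynomials:
        for v in poly:
            parent.setdefault(v, v)
        vs = list(poly)
        for v in vs[1:]:
            union(vs[0], v)

    groups = defaultdict(list)
    for poly in polynomials:
        groups[find(next(iter(poly)))].append(poly)
    return list(groups.values())

# {x, y} and {y, z} share y, while {u, w} is independent, so the system splits in two.
print(split_system([{"x", "y"}, {"y", "z"}, {"u", "w"}]))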
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy at markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, model complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of a single leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine-based method, SMO, have the highest MRs of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared with the use of risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
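The evaluation approach described above (majority-class under-sampling together with the Kappa statistic, misclassification rate and AUC) can be sketched in Python with scikit-learn on synthetic data. The thesis itself used Weka-style learners (J48, SMO, NNge, Cfs); everything below, including the data and the logistic regression stand-in, is a hypothetical placeholder meant only to show how the three measures are computed on a rebalanced dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic, unbalanced two-class data standing in for the anaesthesia dataset.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=2000) > 2.0).astype(int)

# Under-sample the majority class to the size of the minority class.
minority, majority = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
keep = np.concatenate([minority, rng.choice(majority, size=minority.size, replace=False)])
X_bal, y_bal = X[keep], y[keep]

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)
prob = model.predict_proba(X_te)[:, 1]

# Misclassification rate (MR), Kappa and AUC, as used for model comparison above.
print("MR   :", 1.0 - accuracy_score(y_te, pred))
print("Kappa:", cohen_kappa_score(y_te, pred))
print("AUC  :", roc_auc_score(y_te, prob))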
Abstract:
The aim of this study was to evaluate the healing of class III furcation defects following transplantation of autogenous periosteal cells combined with β-tricalcium phosphate (β-TCP). Periosteal cells, obtained from explant cultures of Beagle dogs' periosteum, were inoculated onto the surface of β-TCP. Class III furcation defects were created in the mandibular premolars. Three experimental groups were used to test the defects' healing: in group A, β-TCP seeded with periosteal cells was transplanted into the defects; in group B, β-TCP alone was used for defect filling; and in group C, the defect was left without filling materials. Twelve weeks post-surgery, tissue samples were collected for histological, immunohistological and X-ray examination. It was found that both the length of newly formed periodontal ligament and the area of newly formed alveolar bone in group A were significantly increased compared with both groups B and C. Furthermore, both the proportion of newly formed periodontal ligament and that of newly formed alveolar bone in group A were much higher than those of groups B and C. The quantity of cementum and its percentage in the defects (group A) were also significantly higher than those of group C. These results indicate that the application of autogenous periosteal cells combined with β-TCP can improve periodontal tissue regeneration in class III furcation defects.
Abstract:
Traditional speech enhancement methods optimise signal-level criteria such as the signal-to-noise ratio, but such approaches are sub-optimal for noise-robust speech recognition. Likelihood-maximising (LIMA) frameworks, on the other hand, optimise the parameters of speech enhancement algorithms based on state sequences generated by a speech recogniser for utterances of known transcription. Previous applications of LIMA frameworks have generated a set of global enhancement parameters for all model states without taking into account the distribution of model occurrence, making the optimisation susceptible to favouring frequently occurring models, in particular silence. In this paper, we demonstrate the existence of highly disproportionate phonetic distributions on two corpora with distinct speech tasks, and propose to normalise the influence of each phone based on a priori occurrence probabilities. Likelihood analysis and speech recognition experiments verify this approach for improving ASR performance in noisy environments.
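The proposed normalisation can be sketched with a few lines of hypothetical Python: each phone's contribution to the likelihood-maximising objective is weighted inversely to its a priori occurrence probability, estimated here simply from counts in an aligned state sequence, so that frequently occurring models such as silence no longer dominate the optimisation. The recogniser and the LIMA parameter update itself are not shown.

from collections import Counter

def phone_weights(state_sequence):
    # Inverse-prior weights, rescaled so that the average weight is 1.
    counts = Counter(state_sequence)
    total = sum(counts.values())
    raw = {phone: total / count for phone, count in counts.items()}
    scale = len(raw) / sum(raw.values())
    return {phone: w * scale for phone, w in raw.items()}

# Silence ("sil") occurs far more often than the other phones,
# so it receives a much smaller weight in the weighted objective.
sequence = ["sil"] * 60 + ["ah"] * 10 + ["t"] * 5 + ["s"] * 25
print(phone_weights(sequence))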
Abstract:
Porphyrins are one of Nature's essential building blocks that play an important role in several biological systems, including oxygen transport, photosynthesis, and enzymes. Their capacity to absorb visible light, facilitate oxidation and reduction, and act as energy- and electron-transfer agents, in particular when several are held closely together, is of interest to chemists who seek to mimic Nature and to make and use these compounds to synthesise novel advanced materials. During this project 26 new 5,10-diarylsubstituted porphyrin monomers, 10 dimers, and 1 tetramer were synthesised. The spectroscopic and structural properties of these compounds were investigated using 1D/2D 1H NMR, UV/visible, ATR-IR and Raman spectroscopy, mass spectrometry, X-ray crystallography, electrochemistry and gel permeation chromatography. Nitration, amination, bromination and alkynylation of one or both of the meso positions of the porphyrin monomers have expanded the synthetic possibilities for the 5,10-diarylsubstituted porphyrins. The development of these new porphyrin monomers has led to the successful synthesis of new azo- and butadiyne-linked dimers. The functionalisation of these compounds was investigated, in particular nitration, amination, and bromination. The synthesised dimers containing the azo bridge have absorption spectra that show a large splitting of the Soret bands and intense Q-bands that are significantly red-shifted. The butadiyne-linked dimers also have intense, red-shifted Q-bands but smaller Soret band splittings. Crystal structures of two new azoporphyrins have been acquired and compared with the azoporphyrin previously synthesised from 5,10,15-triarylsubstituted porphyrin monomers. A completely new cyclic porphyrin oligomer (CPO) was synthesised, comprising four porphyrin monomers linked by azo and butadiyne bridges. This is the first cyclic tetramer that has both azo and butadiyne linking groups. The absorption spectrum of the tetramer exhibits a large Soret splitting, making it more similar to the azo-linked dimers than to the butadiyne-linked dimers. The spectroscopic characteristics of the synthesised tetramer have been compared with those of other cyclic porphyrin tetramers. The collected data indicate that the newly synthesised cyclic tetramer has a more efficient π-overlap and better ground-state electronic communication between the porphyrin rings.
Abstract:
Digital scenography and traditional stage design for the US premiere of Split Britches' "The Lost Lounge" - Lois Weaver and Peggy Shaw, Dixon Place, New York, December 2009.
Digital scenography and traditional stage design for the UK premiere of Split Britches' "The Lost Lounge" - Lois Weaver and Peggy Shaw, The Great Hall, People's Palace, London, March 2010.
Abstract:
This paper, underpinned by a framework of autopoietic principles of creativity/innovation and leadership/governance, argues that open forms of creativity in the 'arts' provide opportunities for impact upon concepts of development, leadership and governance. The alliance of creativity and governance suggests that, by examining various understandings of artistic experiences, readers may perceive new understandings of the alliance, application and assessment of such experiences. This critical understanding would include assessing whether such experience supports people changing their aspirations as they become what they want to be. Such understanding may also suggest that different applications of the creative capacity of the 'arts' offer relevance in allegedly 'non-creative' areas of academe, particularly in management, leadership and governance. This alliance also offers the possibility of new staff development programs that facilitate learning and the building of individual capacity, as well as facilitating congruent development processes and policies, particularly within academic organisational structures.
Abstract:
Although many different materials, techniques and methods, including artificial or engineered bone substitutes, have been used to repair various bone defects, the restoration of critical-sized bone defects caused by trauma, surgery or congenital malformation is still a great challenge for orthopedic surgeons. One important fact that has been neglected in the pursuit of solutions for large bone defect healing is that most physiological bone defect healing needs the periosteum, and stripping off the periosteum may result in non-union or non-healed bone defects. The periosteum plays very important roles not only in bone development but also in bone defect healing. The purpose of this project was to construct a functional periosteum in vitro using a single stem cell source and then test its ability to aid the repair of critical-sized bone defects in animal models. The project was designed with three separate but closely linked parts, which in the end led to four independent papers. The first part of this study investigated the structural and cellular features of periostea from diaphyseal and metaphyseal bone surfaces in rats of different ages or with osteoporosis. Histological and immunohistological methods were used in this part of the study. Results revealed that the structure and cell populations of the periosteum are both age-related and site-specific. The diaphyseal periosteum showed age-related degeneration, whereas the metaphyseal periosteum showed more destruction in older rats. The periosteum from osteoporotic bones differs from that of normal bones in terms of both structure and cell populations. This is especially evident in the cambial layer of the metaphyseal area. Bone resorption appears to be more active in the periosteum from osteoporotic bones, whereas bone formation activity is comparable between osteoporotic and normal bone. The dysregulation of bone resorption and formation in the periosteum may also be the effect of the interaction between various neural pathways and the cell populations residing within it. One of the most important aspects of periosteum engineering is how to introduce new blood vessels into the engineered periosteum to help form vascularized bone tissue in bone defect areas. The second part of this study was designed to investigate the possibility of differentiating bone marrow stromal cells (BMSCs) into endothelial cells and using them to construct a vascularized periosteum. Endothelial cell differentiation of BMSCs was induced in pro-angiogenic media under both normoxia and CoCl2 (hypoxia-mimicking agent)-induced hypoxia. The VEGF/PEDF expression pattern, endothelial-cell-specific marker expression, and the in vitro and in vivo vascularization ability of BMSCs cultured under different conditions were assessed. Results revealed that BMSCs most likely cannot be differentiated into endothelial cells through the application of pro-angiogenic growth factors or by culturing under CoCl2-induced hypoxic conditions. However, they may be involved in angiogenesis as regulators under both normoxic and hypoxic conditions. Two major angiogenesis-related growth factors, VEGF (pro-angiogenic) and PEDF (anti-angiogenic), were found to alter their expression in accordance with the extracellular environment. BMSCs treated with the hypoxia-mimicking agent CoCl2 expressed more VEGF and less PEDF and enhanced the vascularization of subcutaneous implants in vivo.
Based on the findings of the second part, CoCl2 pre-treated BMSCs were used to construct periosteum, and the in vivo vascularization and osteogenesis of the constructed periosteum were assessed in the third part of this project. The findings of the third part revealed that BMSCs pre-treated with CoCl2 could enhance both ectopic and orthotopic osteogenesis of BMSC-derived osteoblasts and vascularization at the early osteogenic stage, whereas endothelial cells (HUVECs), used as a positive control, were only capable of promoting osteogenesis after four weeks. The subcutaneous area of the mouse is most likely inappropriate for assessing new bone formation on collagen scaffolds. This study demonstrated the potential application of CoCl2 pre-treated BMSCs in tissue engineering, not only of periosteum but also of bone and other vascularized tissues. In summary, the structure and cell populations of the periosteum are age-related, site-specific and closely linked with bone health status. BMSCs as a stem cell source for periosteum engineering are not endothelial cell progenitors but regulators, and CoCl2-treated BMSCs expressed more VEGF and less PEDF. These CoCl2-treated BMSCs enhanced both vascularization and osteogenesis in constructed periosteum transplanted in vivo.
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed, in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49m, while 86.5% of static points were within 1.5m of the actual geodetic point (mean error: 1.08 ± 0.34m, range 0.69-2.10m). Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long-distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group-level speed was highly predictable using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, gauged by a low root mean square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall times. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of duration and gradient, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
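Of the two GPS speed estimates compared in the first study, the Δ position/time method can be sketched in Python using the haversine distance between successive fixes; the sample data and field names below are hypothetical, and the Doppler-shift speed, which is reported directly by the receiver, cannot be reproduced in this way.

import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two latitude/longitude fixes (degrees).
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def position_speeds(fixes):
    # fixes: list of (time_s, lat_deg, lon_deg); returns speed in m/s between successive fixes.
    return [haversine_m(la0, lo0, la1, lo1) / (t1 - t0)
            for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:])]

# One-second fixes while running at roughly 3.3 m/s.
fixes = [(0, -27.47000, 153.02000), (1, -27.46997, 153.02000), (2, -27.46994, 153.02001)]
print(position_speeds(fixes))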
Abstract:
This research project examines the application of the Suzuki Actor Training Method (the Suzuki Method) within the work of Tadashi Suzuki's company in Japan, the Shizuoka Performing Arts Complex (SPAC), within the work of Brisbane theatre company Frank:Austral Asian Performance Ensemble (Frank:AAPE), and as related to the development of the theatre performance Surfacing. These three theatrical contexts have been studied from the viewpoint of a "participant-observer": the researcher has trained in the Suzuki Method with Frank:AAPE and SPAC, performed with Frank:AAPE, and was the solo performer and collaborative developer of the performance Surfacing (directed by Leah Mercer). Observations of these three groups are based on a phenomenological definition of the "integrated actor", an actor who is able to achieve a totality or unity between the body and the mind, and between the body and the voice, through a powerful sense of intention. The term "integrated actor" has been informed by the philosophy of Merleau-Ponty and his concept of the "lived body". Three main hypotheses are presented in this study: that the Suzuki Method focuses on actors learning through their body; that the Suzuki Method presents a holistic approach to the body and the voice; and that the Suzuki Method develops actors with a strong sense of intention. These three aspects of the Suzuki Method are explored in relation to the stylistic features of the work of SPAC, Frank:AAPE and the performance Surfacing.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed to be used by the wavelet packet transform in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image, according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
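As a rough illustration of the subband decomposition and bit allocation stages described above, the following Python sketch uses PyWavelets (assumed available) on a random array standing in for a fingerprint image. It applies an ordinary three-level DWT rather than the thesis's fixed wavelet-packet structure, and replaces the noise-shaping procedure and lattice vector quantizer with a classical log-variance bit allocation and a per-subband uniform step, purely to show where those design decisions sit in the pipeline.

import numpy as np
import pywt  # PyWavelets, assumed available

# Random array standing in for a grey-level fingerprint image.
image = np.random.default_rng(0).normal(size=(256, 256))

# Ordinary three-level DWT; the thesis uses a fixed wavelet-packet structure instead.
coeffs = pywt.wavedec2(image, "bior4.4", level=3)
subbands = [coeffs[0]] + [band for level in coeffs[1:] for band in level]

# Classical log-variance bit allocation: higher-variance subbands receive more bits.
variances = np.array([np.var(band) for band in subbands])
average_bits = 2.0  # hypothetical average bit budget per coefficient
bits = average_bits + 0.5 * np.log2(variances / np.exp(np.mean(np.log(variances))))
bits = np.clip(np.round(bits), 0, 8)

for i, (band, b) in enumerate(zip(subbands, bits)):
    step = (band.max() - band.min()) / (2 ** b)  # uniform quantization step for this subband
    print(f"subband {i}: shape={band.shape}, bits={int(b)}, step={step:.3f}")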