278 results for SQUARE RESONATORS
Abstract:
The unusual (1:1) complex ‘adduct’ salt of copper(II) with 4,5-dichlorophthalic acid (H2DCPA), having formula [Cu(H2O)4(C8H3Cl2O4)(C8H4Cl2O4)]·(C8H3Cl2O4), has been synthesized and characterized using single-crystal X-ray diffraction. Crystals are monoclinic, space group P21/c, with Z = 4 in a cell with dimensions a = 20.1376(7), b = 12.8408(4), c = 12.1910(4) Å, β = 105.509(4)°. The complex is based on discrete tetragonally distorted octahedral [CuO6] coordination centres with the four water ligands occupying the square-planar sites [Cu-O, 1.962(4)-1.987(4) Å] and the monodentate carboxyl-O donors of two DCPA ligand species in the axial sites. The first of these bonds [Cu-O, 2.341(4) Å] is with an oxygen of an HDCPA monoanion, the second with an oxygen of an H2DCPA acid species [Cu-O, 2.418(4) Å]. The un-coordinated ‘adduct’ molecule is an HDCPA counter anion which is strongly hydrogen-bonded to the coordinated H2DCPA ligand [O···O, 2.503(6) Å], while a number of peripheral intra- and intermolecular hydrogen-bonding interactions give a two-dimensional network structure.
Abstract:
YEAR: 2008 ROLE: Performer FORMAT: Live Art Event at Tiananmen Square, Beijing, China (3 hours) and later on the summit of Mt. Tai Shan, Shandong Province, China (6 hrs + 3 hrs). WITH: Solo WHAT: In the Hall of Reverence on Tiananmen Square, Beijing, Mao Zedong's body lies in state surrounded by flowers and draped with a Red Flag of Communist China. His casket with a glass top lies on a black stone from Mt. Tai, reflecting the quotation from Sima Qian (China's Han Dynasty historian) that "One's life can be weightier than Mt. Tai or lighter than a goose feather". This pair of performances was a quiet, personal reflection upon what such a once revolutionary expression might mean in today's very different time and place. The work was conceived during the Olympic Cultural Festival showing of Intimate Transactions (www.intimatetransactions.com), during the tumultuous times leading up to China's proudly staged August 2008 Olympics. The rise and rise of China had long been generating major geopolitical, ecological and cross-cultural shifts throughout the region and beyond. In this dramatic epicentre of change, and at a time of such great national pride, how might we each act in ways that are ecologically 'mighty' and yet simultaneously have an impact lighter than a goose feather? This is as much a question for China in its relations with the autonomous provinces and the environment as it is for all of us in our own 'local' affairs. However, ecologically speaking, all that is of local concern is of global concern, and no one can therefore be exempt from the need to sustain that which we share in common and must all protect for the future. Performance 1: Tiananmen Square, Beijing: dropping 100 goose feathers. Performance 2: The summit of Mt. Tai, Shandong Province: building a mountain from goose feathers. SHOWING HISTORY: 1: Anniversary of Protest Crackdown, Jun 8th, 2008. 2: Dawn on Tai Shan's summit, 15th June, 2008. DETAILS: Performance 1: Begin an hour after dawn (5.45am) in Tiananmen Square. Bring a pre-prepared performance shirt and a bag of goose feathers tipped with red. Begin at the "Gate of Heavenly Peace" under the image of Chairman Mao. Circumnavigate the world's largest open, and most surveilled, public space 5 times, dropping feathers periodically. Meditate on Forces of Change. Finally enter Chairman Mao's mausoleum with the masses and move quietly past his preserved body. End the performance at the Gate of Heavenly Peace 3 hours later. Performance 2: Walk up Mt. Tai Shan in silence meditating on Forces of Change (6 hours). Stay overnight on the summit. Begin an hour before dawn (3.45am) in silence. Bring the performance shirt, a sack of goose feathers and a simple wooden structure. On the sunrise-viewing side of the mountain build a miniature, fragile 'mountain' of goose feathers and sticks on the edge of a sheer precipice. Watch the sun rise as the feathers blow away into the valley deep below (3 hours).
Abstract:
This paper proposes a clustered approach to blind beamforming from ad-hoc microphone arrays. In such arrangements, microphone placement is arbitrary and the speaker may be close to one, all or a subset of microphones at a given time. Practical issues with such a configuration mean that some microphones might be better discarded due to poor input signal-to-noise ratio (SNR) or undesirable spatial aliasing effects from large inter-element spacings when beamforming. Large inter-microphone spacings may also lead to inaccuracies in delay estimation during blind beamforming. In such situations, using a cluster of microphones (i.e., a sub-array), closely located both to each other and to the desired speech source, may provide more robust enhancement than the full array. This paper proposes a method for blind clustering of microphones based on the magnitude squared coherence function, and evaluates the method on a database recorded using various ad-hoc microphone arrangements.
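As a rough illustration of the clustering criterion, the sketch below groups microphones by pairwise magnitude squared coherence computed with scipy.signal.coherence; the threshold, frame length, and greedy single-linkage grouping are illustrative assumptions, not the paper's actual method or parameters.

```python
import numpy as np
from scipy.signal import coherence

def msc_clusters(mics, fs, threshold=0.6, nperseg=1024):
    """Group microphone signals whose mean magnitude squared coherence
    (MSC) exceeds a threshold. `mics` is a list of 1-D signal arrays,
    one per microphone. Threshold and averaging over all frequencies
    are illustrative choices, not values from the paper."""
    n = len(mics)
    linked = np.eye(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            _, cxy = coherence(mics[i], mics[j], fs=fs, nperseg=nperseg)
            linked[i, j] = linked[j, i] = cxy.mean() > threshold
    # Greedy single-linkage grouping over the boolean adjacency matrix.
    clusters, unassigned = [], set(range(n))
    while unassigned:
        group = {unassigned.pop()}
        grew = True
        while grew:
            grew = False
            for k in list(unassigned):
                if linked[list(group), k].any():
                    group.add(k)
                    unassigned.discard(k)
                    grew = True
        clusters.append(sorted(group))
    return clusters
```

In this sketch, microphones near the same source share coherent signal content and so end up in the same sub-array, while distant or noisy microphones fall into separate clusters.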
Abstract:
The main contribution of this paper is the decomposition/separation of the composite induction motor load from measurements at a system bus. At power system transmission buses, the load is represented by static and dynamic loads. The induction motor is considered the main dynamic load, and in practice many and various induction motors contribute at major transmission buses. At an industrial bus in particular, most of the load is of the dynamic type. Rather than trying to extract models of many machines, this paper seeks to identify three groups of induction motors to represent the dynamic loads. Three groups of induction motors are used to characterize the load: the small group (4 kW to 11 kW), the medium group (15 kW to 180 kW) and the large group (above 630 kW). First, a composite load is formed from these groups with different percentage contributions from each group. Then, from the composite model, the percentage contribution of each motor group is decomposed using a least squares algorithm. At power system commercial and residential buses, the static load percentage is higher than the dynamic load percentage. To apply this theory to other types of buses, such as residential and commercial buses, it is good practice to represent the total load as a combination of composite motor loads, constant impedance loads and constant power loads. To validate the theory, 24 hours of Sydney West data is decomposed according to the three groups of motor models.
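A minimal sketch of the decomposition step: given simulated responses of the three motor groups and a measured composite response, non-negative least squares recovers each group's percentage contribution. The synthetic response curves and mixing weights below are placeholder assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 200)

# Hypothetical per-group dynamic responses to the same disturbance
# (stand-ins for the small / medium / large motor group models).
p_small  = np.exp(-t / 0.3)
p_medium = np.exp(-t / 1.0)
p_large  = np.exp(-t / 3.0)

# Composite bus measurement: 20% small, 30% medium, 50% large, plus noise.
b = 0.2 * p_small + 0.3 * p_medium + 0.5 * p_large \
    + 0.01 * rng.standard_normal(t.size)

A = np.column_stack([p_small, p_medium, p_large])
weights, residual = nnls(A, b)            # non-negative least squares fit
shares = 100 * weights / weights.sum()    # percentage contribution per group
print(f"small/medium/large shares: {shares.round(1)} %")
```

Non-negative least squares is used here simply because physical load shares cannot be negative; the paper's exact estimator may differ.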
Abstract:
Purpose: The component modules in the standard BEAMnrc distribution may appear to be insufficient to model micro-multileaf collimators that have tri-faceted leaf ends and complex leaf profiles. This note indicates, however, that accurate Monte Carlo simulations of radiotherapy beams defined by a complex collimation device can be completed using BEAMnrc's standard VARMLC component module.
Methods: That this simple collimator model can produce spatially and dosimetrically accurate micro-collimated fields is illustrated using comparisons with ion chamber and film measurements of the dose deposited by square and irregular fields incident on planar, homogeneous water phantoms.
Results: Monte Carlo dose calculations for on- and off-axis fields are shown to produce good agreement with experimental values, even upon close examination of the penumbrae.
Conclusions: The use of a VARMLC model of the micro-multileaf collimator, along with a commissioned model of the associated linear accelerator, is therefore recommended as an alternative to the development or use of in-house or third-party component modules for simulating stereotactic radiotherapy and radiosurgery treatments. Simulation parameters for the VARMLC model are provided which should allow other researchers to adapt and use this model to study clinical stereotactic radiotherapy treatments.
Abstract:
Monitoring Internet traffic is critical in order to acquire a good understanding of threats to computer and network security and in designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic of the honeypot reduces the amount of collected traffic and makes it a more valuable source of information than other existing techniques. Currently, there is insufficient research in the honeypot data analysis field. To date, most of the work on honeypots has been devoted to the design of new honeypots or optimizing the current ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature, while analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for developing more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is handled in time, give early signs of new vulnerabilities or outbreaks of new automated malicious code, such as worms. The outcomes of this research include:
• identification of repeated use of attack tools and attack processes through grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;
• application of principal component analysis to detect the structure of attackers’ activities present in low-interaction honeypots and to visualize attackers’ behaviors;
• detection of new attacks in low-interaction honeypot traffic through the use of the principal components’ residual space and the squared prediction error (SPE) statistic (sketched below);
• real-time detection of new attacks using recursive principal component analysis;
• a proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
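A minimal sketch of the residual-space detection idea from the third outcome above: project traffic feature vectors onto a PCA model built from baseline honeypot data, and flag observations whose squared prediction error (the squared norm of the residual) exceeds a control limit. The synthetic features, number of components, and percentile-based limit are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
baseline = rng.standard_normal((500, 8))        # stand-in traffic feature vectors
new_obs  = rng.standard_normal((50, 8)) * 2.5   # candidate new-attack traffic

pca = PCA(n_components=3).fit(baseline)

def spe(X):
    """Squared prediction error: squared norm of the PCA residual."""
    recon = pca.inverse_transform(pca.transform(X))
    return ((X - recon) ** 2).sum(axis=1)

limit = np.percentile(spe(baseline), 99)        # empirical 99% control limit
alarms = spe(new_obs) > limit                   # flagged as potential new attacks
print(f"{alarms.sum()} of {len(new_obs)} observations exceed the SPE limit")
```

Traffic that conforms to the learned attack structure projects almost entirely into the principal subspace; novel activity leaves a large residual and trips the SPE limit.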
Abstract:
On-axis monochromatic higher-order aberrations increase with age. Few studies have been made of peripheral refraction along the horizontal meridian of older eyes, and none of their off-axis higher-order aberrations. We measured wave aberrations over the central 42° × 32° visual field for a 5 mm pupil in 10 young and 7 older emmetropes. Patterns of peripheral refraction were similar in the two groups. Coma increased linearly with field angle at a significantly higher rate in older than in young emmetropes (−0.018 ± 0.007 versus −0.006 ± 0.002 µm/deg). Spherical aberration was almost constant over the measured field in both age groups, and mean values across the field were significantly higher in older than in young emmetropes (+0.08 ± 0.05 versus +0.02 ± 0.04 µm). Total root-mean-square and higher-order aberrations increased more rapidly with field angle in the older emmetropes. However, the limits to monochromatic peripheral retinal image quality are largely determined by the second-order aberrations, which do not change markedly with age, and under normal conditions the relative importance of the increased higher-order aberrations in older eyes is lessened by the reduction in pupil diameter with age. Therefore, it is unlikely that peripheral visual performance deficits observed in normal older individuals are primarily attributable to the increased impact of higher-order aberrations.
Abstract:
"What was silent will speak, what is closed will open and will take on a voice" (Paul Virilio). The fundamental problem in dealing with the digital is that we are forced to contend with a fundamental deconstruction of form: a deconstruction that renders our content and practice into a single state that can be openly and easily manipulated, reimagined and mashed together in rapid time to create completely unique artefacts and potentially unwranglable jumbles of data. Once our work is essentially broken down into this series of number sequences (or bytes), our sound, images, movies and documents, our memory files, we are left with nothing but choice... and this is the key concern. This absence of form transforms our work into new collections and poses unique challenges for the artist seeking opportunities to exploit the potential of digital deconstruction. It is through this struggle with the absent form that we are able to thoroughly explore the latent potential of content, exploit modern abstractions of time and devise approaches within our practice that actively deal with the digital as an essential matter of course.
Abstract:
The field was the curation of cross-cultural new media/digital media practices within large-scale exhibition practices in China. The context was improved understandings of the intertwining of the natural and the artificial with respect to landscape and culture, and their consequent effect on our contemporary globalised society. The research highlighted new languages of media art with respect to landscape and their particular underpinning dialects. The methodology was principally practice-led.
The research brought together over 60 practitioners from both local and diasporic Asian, European and Australian cultures for the first time within a Chinese exhibition context. Through pursuing a strong response to both cultural displacement and re-identification, the research forged and documented an enduring commonality within difference, an agenda further concentrated through sensitivities surrounding that year's Beijing Olympics. In contrast to the severe threats posed to the local dialects of many of the world's spoken and written languages, the 'Vernacular Terrain' project evidenced that many local creative 'dialects' of the environment-media art continuum had indeed survived and flourished.
The project was co-funded by the Beijing Film Academy, QUT Precincts, IDAProjects and Platform China Art Institute. A broad range of peer-reviewed grants was won, including from the Australia China Council and the Australian Embassy in China. Through invitations from external curators, much of the work then traveled to other venues, including the Block Gallery at QUT and the outdoor screens at Federation Square, Melbourne. The Vernacular Terrain catalogue featured a comprehensive history of the IDA project from 2000 to 2008 alongside several major essays. Due to the reputation IDAProjects had established, the team were invited to curate a major exhibition showcasing fifty new media artists: The Vernacular Terrain, at the prestigious Songzhuang Art Museum, Beijing, in December 2007 - January 2008. The exhibition was designed for an extensive, newly opened gallery owned by one of China's most important art historians, Li Xianting. This exhibition was not only the gallery's inaugural non-Chinese curated show but also its first new media exhibition. It included important works by artists such as Peter Greenaway, Michael Roulier, Maleonn and Cui Xiuwen.
Each artist was chosen both for a focus upon their own local environmental concerns and for their specific forms of practice, which included virtual world design, interactive design, video art, real-time and manipulated multiplayer gaming platforms and web 2.0 practices. This exhibition examined the interconnectivities of cultural dialogue on both a micro and macro scale, incorporating the local and the global through display methods and design approaches that stitched these diverse practices into a spatial map of meanings and conversations. By examining the contexts of each artist's practice in relationship to the specificity of their own local place and prevailing global contexts, the exhibition sought to uncover a global vernacular. Through pursuing this concentrated anthropological direction, the research identified key themes and concerns of a contextual language that was clearly underpinned by distinctive local 'dialects', thereby contributing to a profound sense of cross-cultural association. Through augmentation of existing discourse, the exhibition confirmed the enduring relevance and influence of both localized and globalised languages of the landscape-technology continuum.
Abstract:
Background: It remains unclear whether it is possible to develop an epidemic forecasting model for the transmission of dengue fever in Queensland, Australia. Objectives: To examine the potential impact of El Niño/Southern Oscillation on the transmission of dengue fever in Queensland, Australia, and to explore the possibility of developing a forecast model of dengue fever. Methods: Data on the Southern Oscillation Index (SOI), an indicator of El Niño/Southern Oscillation activity, were obtained from the Australian Bureau of Meteorology. The numbers of dengue fever cases notified and the numbers of postcode areas with dengue fever cases between January 1993 and December 2005 were obtained from Queensland Health, and relevant population data were obtained from the Australian Bureau of Statistics. A multivariate Seasonal Auto-Regressive Integrated Moving Average (SARIMA) model was developed and validated by dividing the data file into two datasets: the data from January 1993 to December 2003 were used to construct a model and those from January 2004 to December 2005 were used to validate it. Results: A decrease in the average SOI (i.e., warmer conditions) during the preceding 3–12 months was significantly associated with an increase in the monthly numbers of postcode areas with dengue fever cases (β = −0.038; p = 0.019). Predicted values from the SARIMA model were consistent with the observed values in the validation dataset (root-mean-square percentage error: 1.93%). Conclusions: Climate variability is directly and/or indirectly associated with dengue transmission, and the development of an SOI-based epidemic forecasting system is possible for dengue fever in Queensland, Australia.
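A minimal sketch of the modelling approach described above, using statsmodels' SARIMAX with lagged SOI as an exogenous regressor and the same train/validation split; the ARIMA orders, 3-month lag, and placeholder data are illustrative assumptions, not the study's fitted specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
idx = pd.date_range("1993-01", "2005-12", freq="MS")
soi = pd.Series(rng.normal(0, 8, len(idx)), index=idx)                 # placeholder SOI
cases = pd.Series(rng.poisson(5, len(idx)), index=idx).astype(float)   # placeholder counts

exog = soi.shift(3).fillna(0.0)            # SOI lagged 3 months (illustrative lag)
train, test = cases.loc[:"2003-12"], cases.loc["2004-01":]

model = SARIMAX(train, exog=exog.loc[:"2003-12"],
                order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit(disp=False)
forecast = model.forecast(steps=len(test), exog=exog.loc["2004-01":])
rmse = np.sqrt(((forecast - test) ** 2).mean())
print(f"validation RMSE: {rmse:.2f}")
```

The 1993-2003 portion fits the model and the held-out 2004-2005 portion checks forecast accuracy, mirroring the validation design reported in the abstract.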
Abstract:
Aim: To measure the influence of spherical intraocular lens implantation and conventional myopic laser in situ keratomileusis on peripheral ocular aberrations. Setting: Visual & Ophthalmic Optics Laboratory, School of Optometry & Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia. Methods: Peripheral aberrations were measured using a modified commercial Hartmann-Shack aberrometer across 42° × 32° of the central visual field in 6 subjects after spherical intraocular lens (IOL) implantation and in 6 subjects after conventional laser in situ keratomileusis (LASIK) for myopia. The results were compared with those of age-matched emmetropic and myopic control groups. Results: The IOL group showed a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, and greater rates of change of higher-order root-mean-square aberrations and total root-mean-square aberrations across the visual field than its emmetropic control group. However, coma trends were similar for the two groups. The LASIK group had a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, the opposite trend in coma across the field, and greater higher-order root-mean-square aberrations and total root-mean-square aberrations than its myopic control group. Conclusion: Spherical IOL implantation and conventional myopic LASIK increase ocular peripheral aberrations. They cause a considerable increase in spherical aberration across the visual field. LASIK reverses the sign of the rate of change in coma across the field relative to that of the other groups. Keywords: refractive surgery, LASIK, IOL implantation, aberrations, peripheral aberrations
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m·s⁻¹). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, the use of a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group-level speed was highly predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, which was gauged by a low root-mean-square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was a much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to further improve the effectiveness of the suggested strategy.
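As a rough illustration of the Δ GPS position/time speed estimation method evaluated in the first study, the sketch below converts successive latitude/longitude fixes to speed via haversine distances; the coordinates and 1 Hz sampling rate are illustrative assumptions, not the study's data.

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def gps_speed(lats, lons, dt=1.0):
    """Speed (m/s) from successive fixes: distance moved per sample interval."""
    d = haversine_m(lats[:-1], lons[:-1], lats[1:], lons[1:])
    return d / dt

# Illustrative 1 Hz track heading roughly north at about 3.2 m/s.
lats = np.linspace(-27.4770, -27.4768, 8)
lons = np.full(8, 153.0280)
print(gps_speed(lats, lons).round(2))
```

Doppler-shift speed, which the study found more accurate, is taken directly from the receiver rather than differenced from positions, so it avoids the position noise this method inherits.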
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm. Some of these advantages are having a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for frequency modulated signals. This is performed by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial-order-reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which produces desirable results for finite-variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to its use of the minimum mean-square error criterion). To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes on generating some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is only investigated using extensive computer simulations.
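As a rough illustration of the least-mean p-norm criterion underlying the proposed lattice algorithms, the sketch below implements a normalized LMP update for a simple transversal (non-lattice) filter; the filter order, step size, value of p, and heavy-tailed test data are illustrative assumptions, not the thesis's actual design.

```python
import numpy as np

def nlmp_filter(x, d, order=4, mu=0.05, p=1.2, eps=1e-8):
    """Normalized least-mean p-norm (LMP) adaptive filter.
    Minimizes E|e|^p (0 < p < 2) rather than E[e^2], which keeps the
    update meaningful for alpha-stable (infinite-variance) inputs.
    Transversal sketch only, not the lattice form from the thesis."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # most recent samples first
        y[n] = w @ u
        e = d[n] - y[n]
        # Stochastic gradient of |e|^p is -p*|e|^(p-1)*sign(e)*u; the
        # p-norm of u normalizes the step against impulsive samples.
        w += mu * np.abs(e) ** (p - 1) * np.sign(e) * u \
             / (np.sum(np.abs(u) ** p) + eps)
    return w, y

# Illustrative use: identify a short FIR system driven by heavy-tailed noise.
rng = np.random.default_rng(3)
x = rng.standard_t(df=1.5, size=20000)   # heavy-tailed stand-in for a stable input
h = np.array([0.6, -0.3, 0.15, -0.05])   # hypothetical unknown system
d = np.convolve(x, h)[:len(x)]
w, _ = nlmp_filter(x, d)
print(w.round(3))                        # should approach h
```

With p below 2 the update weights large errors sub-quadratically, which is why such algorithms keep converging where the standard LMS (p = 2) is destabilized by impulsive samples.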
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these onto the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
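A minimal sketch of the kind of 2-D wavelet packet decomposition described above, using PyWavelets; the wavelet family, tree depth, synthetic image, and the idea of ranking subbands by energy as a crude proxy for information packing are illustrative assumptions, not the specific fixed structure designed in this research.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
img = rng.random((256, 256))   # stand-in for a grey-level fingerprint image

# Full wavelet packet tree to level 2; a designed fixed decomposition
# structure would retain only a chosen subset of these nodes.
wp = pywt.WaveletPacket2D(data=img, wavelet="bior4.4",
                          mode="symmetric", maxlevel=2)

# Rank level-2 subbands by energy as a rough information-packing measure.
nodes = wp.get_level(2, order="natural")
energies = {node.path: float((node.data ** 2).sum()) for node in nodes}
for path, e in sorted(energies.items(), key=lambda kv: -kv[1])[:5]:
    print(f"subband {path}: energy {e:.1f}")
```

In a full codec, per-subband statistics like these would then drive the choice of quantizer and bit allocation for each subband, as the abstract describes.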