983 results for frequency features
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
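A minimal sketch of one step described above: fitting the generalized Gaussian shape parameter of each wavelet detail subband. The moment-matching estimator, the `bior4.4` wavelet, the three-level decomposition and the random test image are illustrative assumptions; the thesis's own least squares formulation on a nonlinear function of the shape parameter is not reproduced here.

```python
# Hypothetical sketch: moment-matching estimate of the generalized Gaussian
# shape parameter for wavelet subband coefficients (a stand-in for the
# thesis's least squares formulation, which is not given in the abstract).
import numpy as np
import pywt
from scipy.special import gamma
from scipy.optimize import brentq

def gg_shape_from_moments(coeffs):
    """Estimate the shape parameter beta by matching the sample ratio
    (E|x|)^2 / E[x^2] to r(beta) = Gamma(2/beta)^2 / (Gamma(1/beta) Gamma(3/beta))."""
    x = np.asarray(coeffs, dtype=float).ravel()
    target = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    r = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b))
    return brentq(lambda b: r(b) - target, 0.1, 10.0)   # r(beta) is monotone

# Example: fit every detail subband of a 3-level decomposition of a test image.
image = np.random.randn(256, 256)            # placeholder for a fingerprint image
coeffs = pywt.wavedec2(image, 'bior4.4', level=3)
for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    for name, band in (('H', cH), ('V', cV), ('D', cD)):
        print(level, name, round(gg_shape_from_moments(band), 3))
```

A fitted shape parameter near 2 corresponds to a Gaussian-like subband and near 1 to a Laplacian-like one; the estimate can then drive the bit allocation and quantizer design for that subband.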
Abstract:
This thesis presents an original approach to parametric speech coding at rates below 1 kbits/sec, primarily for speech storage applications. Essential processes considered in this research encompass efficient characterization of the evolutionary configuration of the vocal tract to follow phonemic features with high fidelity, representation of the speech excitation using minimal parameters with minor degradation in the naturalness of the synthesized speech, and, finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points on the time trajectories of the spectral parameters using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of feature parameters which significantly reduces the complexity of the interpolation process. Speech excitation in this work is characterized based on a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to the excitation and also computes pitch and voicing with high accuracy. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features on the basis of the statistical properties and spectral sensitivities of the spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
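The interpolation idea behind Temporal Decomposition can be illustrated with a toy reconstruction of a spectral-parameter trajectory from a few event targets. The raised-cosine event shape, the event locations and the frame counts below are assumptions for the example only, not the thesis's basis function or event-detection procedure.

```python
# Illustrative sketch: rebuild a parameter trajectory as a weighted sum of
# event-approximating functions centred at assumed "event" frames.
import numpy as np

def event_function(n_frames, centre, width):
    """Raised-cosine event function with unit peak at `centre` (assumed shape)."""
    t = np.arange(n_frames)
    phi = np.clip((t - centre) / width, -1.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * phi))

def reconstruct(targets, centres, n_frames, width):
    """Interpolate per-event target vectors (events x params) over all frames."""
    basis = np.stack([event_function(n_frames, c, width) for c in centres])   # (E, T)
    basis = basis / np.maximum(basis.sum(axis=0, keepdims=True), 1e-12)       # normalise
    return basis.T @ targets                                                  # (T, P)

# Toy example: 100 frames, 3 events, 10 spectral parameters per frame.
centres = [10, 50, 90]
targets = np.random.rand(3, 10)         # one target vector per event
trajectory = reconstruct(targets, centres, n_frames=100, width=40)
print(trajectory.shape)                 # (100, 10)
```

Only the event targets and centres need to be coded, which is why the compression ratio is decoupled from the frame update rate.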
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of the thesis. For FM signals, the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. The idea of replacing the HT with a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main concern of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
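As a much simpler illustration of the "IF as the peak of a time-frequency distribution" idea, the sketch below estimates the IF of a linear FM signal by peak-picking a spectrogram. The thesis's adaptive algorithm and T-class distributions are not reproduced; the chirp parameters and window settings are arbitrary choices for the example.

```python
# Minimal sketch: IF estimation of a monocomponent FM signal from the peak of
# a spectrogram (the simplest quadratic TFD); not the T-class distributions.
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
f_inst = 100.0 + 50.0 * t                       # true IF of a linear FM signal
x = np.cos(2.0 * np.pi * np.cumsum(f_inst) / fs)

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
if_est = f[np.argmax(Sxx, axis=0)]              # frequency of the peak at each time

print(np.max(np.abs(if_est - (100.0 + 50.0 * tt))))   # rough IF error in Hz
```

For multicomponent signals the same peak-tracking idea is applied per component, which is where higher-resolution distributions with reduced cross-term artifacts become important.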
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is due to the fact that a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, thus producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present there are two main categories into which radar imaging falls. The first of these is related to the case where the backscattered signal is considered to be deterministic. The second is related to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
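The delay-Doppler mapping described above rests on the cross-ambiguity function between the transmitted and backscattered signals. The brute-force evaluation below, with a noise-like transmitted waveform and a single simulated point scatterer, is an illustrative sketch only; the waveform, grid sizes and scatterer parameters are assumptions, not the thesis's simulations.

```python
# Hedged sketch: direct evaluation of the cross-ambiguity function on a
# delay/Doppler grid for one simulated point scatterer.
import numpy as np

def cross_ambiguity(s, r, fs, delays, dopplers):
    """chi[k, m] = sum_n r[n] conj(s[n - d_k]) exp(-j 2 pi f_m n / fs)."""
    n = np.arange(len(s))
    chi = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for k, d in enumerate(delays):                     # delay in samples
        s_shift = np.roll(s, d)
        for m, fd in enumerate(dopplers):              # Doppler shift in Hz
            chi[k, m] = np.sum(r * np.conj(s_shift) * np.exp(-2j * np.pi * fd * n / fs))
    return chi

# One scatterer: 20-sample delay and 50 Hz Doppler shift on a noise-like
# transmitted waveform (which gives a thumbtack-like ambiguity surface).
fs = 1000.0
rng = np.random.default_rng(0)
s = (rng.standard_normal(512) + 1j * rng.standard_normal(512)) / np.sqrt(2)
n = np.arange(len(s))
r = np.roll(s, 20) * np.exp(2j * np.pi * 50.0 * n / fs)

delays = range(0, 40)
dopplers = np.arange(0.0, 100.0, 5.0)
chi = cross_ambiguity(s, r, fs, delays, dopplers)
k, m = np.unravel_index(np.argmax(np.abs(chi)), chi.shape)
print(delays[k], dopplers[m])                          # expect 20 samples, 50.0 Hz
```

When the receiver does not know the transmitted phase, the thesis's approach is to recover that phase information from the backscattered signal itself via time-frequency distributions before this kind of correlation can be formed.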
Abstract:
The studies in the thesis were derived from a program of research focused on centre-based child care in Australia. The studies constituted an ecological analysis, as they examined proximal and distal factors which have the potential to affect children's developmental opportunities (Bronfenbrenner, 1979). The project was conducted in thirty-two child care centres located in south-east Queensland. Participants in the research included staff members at the centres, families using the centres and their children. The first study described the personal and professional characteristics of one hundred and forty-four child care workers, as well as their job satisfaction and job commitment. Factors impinging on the stability of care afforded to children were examined, specifically child care workers' intentions to leave their current position and actual staff turnover at a twelve-month follow-up. This was an exosystem analysis (Bronfenbrenner & Crouter, 1983), as it examined the world of work for carers: a setting not directly involving the developing child, but one which has implications for children's experiences. Staff job satisfaction was focused on working with children and other adults, including parents and colleagues. Involvement with children was reported as being the most rewarding aspect of the work. This intrinsic satisfaction was enough to sustain caregivers' efforts to maintain their employment in child care programs. It was found that, while improving working conditions may help to reduce turnover, it is likely that moderate turnover rates will remain, as child care staff work in relatively small centres and leave in order to improve career prospects. Departure from a child care job appeared to be as much about improving career opportunities or changing personal circumstances as it was about poor wages and working conditions. In the second study, factors that influence maternal satisfaction with child care arrangements were examined. The focus included examination of the nature and qualities of parental interaction with staff. This was a mesosystem analysis (Bronfenbrenner & Crouter, 1983), as it considered the links between family and child care settings. Two hundred and twenty-two questionnaires were returned from mothers whose children were enrolled in the participating centres. It was found that maternal satisfaction with child care encompassed the domains of child-centred and parent-centred satisfaction. The nature and range of responses in the quantitative and qualitative data indicated that these parents were genuinely satisfied with their children's care. In the prediction of maternal satisfaction with child care, single parents, mothers with high role satisfaction, and mothers who were satisfied with the frequency of staff contact and degree of supportive communication had higher levels of satisfaction with their child care arrangements. The third study described the structural and process variations within child care programs and examined program differences in compliance with regulations and differences by profit status of the centre, as a microsystem analysis (Bronfenbrenner, 1979). Observations were made in eighty-three programs which served children from two to five years. The results of the study affirmed beliefs that nonprofit centres are superior in the quality of care provided, although not to a level which meant that the care in for-profit centres was inadequate.
Regulation of the structural features of child care programs, per se, did not guarantee higher quality child care as measured by global or process indicators. The final study represented an integration of a range of influences in child care and family settings which may impact on development. Features of child care programs which predict children's social and cognitive development, while taking into account child and family characteristics, were identified. Results were consistent with other research findings which show that child and family characteristics and child care quality predict children's development. Child care quality was more important to the prediction of social development, while family factors appeared to be more predictive of cognitive/language development. An influential variable predictive of development was the length of time the child had been in the centre. This highlighted the importance of the stability of child care arrangements. The child care quality features which had the most influence were global ratings of the qualities of the program environment. However, results need to be interpreted cautiously as the explained variance in the predictive models developed was low. The results of these studies are discussed in terms of the implications for practice and future research. Considerations for an expanded view of ecological approaches to child care research are outlined. Issues discussed include the need to generate child care research which is relevant to social policy development, the implications of market-driven policies for child care services, professionalism and professionalisation of child care work, and the need to reconceptualise child care research when the goal is to develop greater theoretical understanding of child care environments and developmental processes.
Abstract:
Bioelectrical impedance analysis (BIA) is a method of body composition analysis, first investigated in 1962, which has recently received much attention from a number of research groups. The reasons for this recent interest are its advantages (viz: inexpensive, non-invasive and portable) and also the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor (volume ∝ length²/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water (TBW). Several research groups have investigated the application of AC currents at lower frequencies (e.g. 5 kHz) to predict extracellular water (ECW). However, all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept-frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance vs resistance yields a circular arc, with the resistance decreasing with increasing frequency and the reactance increasing from zero to a maximum then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, Ro, at zero frequency; a value that cannot be measured directly using surface electrodes. The theoretical basis for selecting these impedance values (Zc and Ro) to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals (n=42) in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L²/Zc and L²/Ro (L=length) yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels (n=20), and also to a group with an abnormal balance of extracellular to intracellular fluids (n=20). In both cases the equations using L²/Zc and L²/Ro accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple frequency bioelectrical impedance analysis (MFBIA) can accurately predict both TBW and ECW in both normal and abnormal animals (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects (male and female, aged between 18 and 45). Whole-body impedance measurements were recorded on each subject using the MFBIA technique and the correlations between body water volumes (TBW and ECW) and height²/impedance (for all measured frequencies) were compared.
The prediction quotients H²/Zc and H²/Ro (H=height) again yielded the highest correlations with TBW and ECW respectively, with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others. It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema (an excess of extracellular fluid) was investigated. The MFBIA technique was demonstrated to be significantly more sensitive (P<.05) in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the quantity of muscle mass of the patient during the course of treatment. The determination of body composition (viz TBW and ECW) by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition (viz TBW and ECW) regardless of the individual's state of health.
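A simplified version of the spectrum-analysis step described above is sketched here: fit a circle to the measured (resistance, reactance) locus, extrapolate the zero-frequency resistance Ro, and read off Zc at the characteristic frequency where the reactance peaks. The algebraic circle fit and the synthetic data are assumptions for illustration, not the thesis's software.

```python
# Illustrative sketch: circle fit to an impedance locus to obtain Ro and Zc.
import numpy as np

def fit_circle(R, X):
    """Algebraic least-squares circle fit; returns centre (a, b) and radius."""
    A = np.column_stack([2 * R, 2 * X, np.ones_like(R)])
    sol, *_ = np.linalg.lstsq(A, R ** 2 + X ** 2, rcond=None)
    a, b, c = sol
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

def cole_parameters(R, X):
    """Ro = resistance intercept at zero frequency (high-resistance end of the
    arc); Zc = |Z| at the characteristic frequency (top of the arc)."""
    R, X = np.asarray(R, dtype=float), np.asarray(X, dtype=float)
    a, b, rad = fit_circle(R, X)
    Ro = a + np.sqrt(rad ** 2 - b ** 2)     # right-hand intercept with the R axis
    Zc = np.hypot(a, b + rad)               # resistance a, peak reactance b + rad
    return Ro, Zc

# Synthetic arc samples (reactance plotted as a positive quantity) with noise.
theta = np.linspace(0.3, 2.8, 50)
R = 500 + 200 * np.cos(theta) + np.random.normal(0, 1, 50)
X = -40 + 200 * np.sin(theta) + np.random.normal(0, 1, 50)
print(cole_parameters(R, X))                # roughly (696, 525) for these data
```

The prediction quotients L²/Zc and L²/Ro (or H²/Zc and H²/Ro in humans) then follow directly from these two fitted values and the conductor length.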
Abstract:
Diarrhoea is one of the leading causes of morbidity and mortality in populations in developing countries and is a significant health issue throughout the world. Despite the frequency and severity of diarrhoeal disease, the mechanisms of pathogenesis for many of the causative agents have been poorly characterised. Although implicated in a number of intestinal and extra-intestinal infections in humans, Plesiomonas shigelloides generally has been dismissed as an enteropathogen due to the lack of clearly demonstrated virulence-associated properties such as production of cytotoxins and enterotoxins or invasive abilities. However, evidence from a number of sources has indicated that this species may be the cause of a number of clinical infections. The work described in this thesis seeks to resolve this discrepancy by investigating the pathogenic potential of P. shigelloides using in vitro cell models. The focus of this research centres on how this organism interacts with human host cells in an experimental model. Very little is known about the pathogenic potential of P. shigelloides and its mechanisms in human infections and disease. However, disease manifestations mimic those of other related microorganisms. Chapter 2 reviews microbial pathogenesis in general, with an emphasis on understanding the mechanisms resulting from infection with bacterial pathogens and the alterations in host cell biology. In addition, this review analyses the pathogenic status of a poorly-defined enteropathogen, P. shigelloides. Key stages of pathogenicity must occur in order for a bacterial pathogen to cause disease. Such stages include bacterial adherence to host tissue, bacterial entry into host tissues (usually required), multiplication within host tissues, evasion of host defence mechanisms and the causation of damage. In this study, these key strategies in infection and disease were sought to help assess the pathogenic potential of P. shigelloides (Chapter 3). Twelve isolates of P. shigelloides, obtained from clinical cases of gastroenteritis, were used to infect monolayers of human intestinal epithelial cells in vitro. Ultrastructural analysis demonstrated that P. shigelloides was able to adhere to the microvilli at the apical surface of the epithelial cells and also to the plasma membranes of both apical and basal surfaces. Furthermore, it was demonstrated that these isolates were able to enter intestinal epithelial cells. Internalised bacteria often were confined within vacuoles surrounded by single or multiple membranes. Observation of bacteria within membrane-bound vacuoles suggests that uptake of P. shigelloides into intestinal epithelial cells occurs via a process morphologically comparable to phagocytosis. Bacterial cells also were observed free in the host cell cytoplasm, indicating that P. shigelloides is able to escape from the surrounding vacuolar membrane and exist within the cytosol of the host. Plesiomonas shigelloides has not only been implicated in gastrointestinal infections, but also in a range of non-intestinal infections such as cholecystitis, proctitis, septicaemia and meningitis. The mechanisms by which P. shigelloides causes these infections are not understood. Previous research was unable to ascertain the pathogenic potential of P. shigelloides using cells of non-intestinal origin (HEp-2 cells derived from a human larynx carcinoma and HeLa cells derived from a cervical carcinoma). However, with the recent findings (from this study) that P.
shigelloides can adhere to and enter intestinal cells, it was hypothesised that P. shigelloides would be able to enter HeLa and HEp-2 cells. Six clinical isolates of P. shigelloides, which previously had been shown to be invasive to intestinally derived Caco-2 cells (Chapter 3), were used to study interactions with HeLa and HEp-2 cells (Chapter 4). These isolates were shown to adhere to and enter both non-intestinal host cell lines. Plesiomonas shigelloides were observed within vacuoles surrounded by single and multiple membranes, as well as free in the host cell cytosol, similar to infection of Caco-2 cells by P. shigelloides. Comparisons of the number of bacteria adhered to and present intracellularly within HeLa, HEp-2 and Caco-2 cells revealed a preference of P. shigelloides for Caco-2 cells. This study conclusively showed for the first time that P. shigelloides is able to enter HEp-2 and HeLa cells, demonstrating the potential ability to cause an infection and/or disease of extra-intestinal sites in humans. Further high-resolution ultrastructural analysis of the mechanisms involved in P. shigelloides adherence to intestinal epithelial cells (Chapter 5) revealed numerous prominent surface features which appeared to be involved in the binding of P. shigelloides to host cells. These surface structures varied in morphology from small bumps across the bacterial cell surface to much longer filaments. Evidence that flagella might play a role in bacterial adherence also was found. The hypothesis that filamentous appendages are morphologically expressed when in contact with host cells also was tested. Observations of bacteria free in the host cell cytosol suggest that P. shigelloides is able to lyse free from the initial vacuolar compartment. The vacuoles containing P. shigelloides within host cells have not been characterised and the point at which P. shigelloides escapes from the surrounding vacuolar compartment has not been determined. A cytochemical detection assay for acid phosphatase, an enzymatic marker for lysosomes, was used to analyse the co-localisation of bacteria-containing vacuoles and acid phosphatase activity (Chapter 6). Acid phosphatase activity was not detected in these bacteria-containing vacuoles. However, the surface of many intracellular and extracellular bacteria demonstrated high levels of acid phosphatase activity, leading to the proposal of a new virulence factor for P. shigelloides. For many pathogens, the efficiency with which they adhere to and enter host cells is dependent upon the bacterial phase of growth. Such dependency reflects the timing of expression of particular virulence factors important for bacterial pathogenesis. In previous studies (Chapter 3 to Chapter 6), an overnight culture of P. shigelloides was used to investigate a number of interactions; however, it was unknown whether this allowed expression of bacterial factors to permit efficient P. shigelloides attachment and entry into human cells. In this study (Chapter 7), a number of clinical and environmental P. shigelloides isolates were investigated to determine whether adherence and entry into host cells in vitro was more efficient during exponential-phase or stationary-phase bacterial growth. An increase in the number of adherent and intracellular bacteria was demonstrated when bacteria were inoculated into host cell cultures in exponential phase. This was demonstrated clearly for 3 out of 4 isolates examined.
In addition, an increase in the morphological expression of filamentous appendages, a suggested virulence factor for P. shigelloides, was observed for bacteria in exponential growth phase. These observations suggest that virulence determinants of P. shigelloides may be more efficiently expressed when bacteria are in exponential growth phase. This study also demonstrated, for the first time, that environmental water isolates of P. shigelloides were able to adhere to and enter human intestinal cells in vitro. These isolates were seen to enter Caco-2 host cells through a process comparable to that of the clinical isolates examined. These findings support the hypothesis of a water transmission route for P. shigelloides infections. The results presented in this thesis contribute significantly to our understanding of the pathogenic mechanisms involved in P. shigelloides infections and disease. Several of the factors involved in P. shigelloides pathogenesis have homologues in other pathogens of the human intestine, namely Vibrio, Aeromonas, Salmonella and Shigella species and diarrhoea-associated strains of Escherichia coli. This study emphasises the relevance of research into Plesiomonas as a means of furthering our understanding of bacterial virulence in general. It also provides tantalising clues on normal and pathogenic host cell mechanisms.
Abstract:
In this paper, the commonly used switching schemes for sliding mode control of power converters are analyzed and designed in the frequency domain. The particular application of a distribution static compensator (DSTATCOM) in voltage control mode is investigated in a power distribution system. Tsypkin's method and the describing function are used to obtain the switching conditions for the two-level and three-level voltage source inverters. Magnitude conditions on the carrier signals are developed for robust switching of the inverter under a carrier-based modulation scheme of sliding mode control. The existence of border collision bifurcation is identified to avoid the complex switching states of the inverter. The load bus voltage of an unbalanced three-phase non-stiff radial distribution system is controlled using the proposed carrier-based design. The results are validated through PSCAD/EMTDC simulation studies and through a scaled laboratory model of a DSTATCOM developed for experimental verification.
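The kind of switching condition referred to above can be illustrated, in a much reduced form, by the classical describing-function balance for an ideal two-level relay, N(A) = 4M/(πA), against the Nyquist locus of a linear plant. The third-order plant below is a placeholder, not the DSTATCOM model, and the describing-function approximation is used instead of Tsypkin's exact method; the sketch only shows how a switching frequency and amplitude condition is extracted.

```python
# Hedged sketch: describing-function condition N(A) G(jw) = -1 for an ideal
# two-level relay with output level M. For a relay, -1/N(A) = -pi A / (4 M) is
# real and negative, so the oscillation occurs where Im G(jw) = 0, Re G(jw) < 0.
import numpy as np
from scipy.optimize import brentq

M = 1.0                                            # relay (switch) output level

def G(w):
    """Placeholder plant G(s) = 1 / (s (s + 1)(s + 2)) evaluated at s = jw."""
    s = 1j * w
    return 1.0 / (s * (s + 1.0) * (s + 2.0))

w_osc = brentq(lambda w: np.imag(G(w)), 0.5, 5.0)  # phase-crossover frequency
A = 4.0 * M * np.abs(G(w_osc)) / np.pi             # amplitude from |N(A) G| = 1
print(w_osc, A)                                    # about 1.414 rad/s and 0.21
```

In the carrier-based scheme analyzed in the paper, an additional magnitude condition on the carrier signal selects the intended switching pattern and avoids the border-collision induced states; that analysis is carried out for both two-level and three-level inverters.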
Abstract:
Purpose: The aim was to construct and advise on the use of a cost-per-wear model based on contact lens replacement frequency, to form an equitable basis for cost comparison. Methods: The annual cost of professional fees, contact lenses and solutions when wearing daily, two-weekly and monthly replacement contact lenses is determined in the context of the Australian market for spherical, toric and multifocal prescription types. This annual cost is divided by the number of times lenses are worn per year, resulting in a 'cost-per-wear'. The model is presented graphically as the cost-per-wear versus the number of times lenses are worn each week for daily replacement and reusable (two-weekly and monthly replacement) lenses. Results: The cost-per-wear for two-weekly and monthly replacement spherical lenses is almost identical but decreases with increasing frequency of wear. The cost-per-wear of daily replacement spherical lenses is lower than for reusable spherical lenses when worn from one to four days per week, but higher when worn six or seven days per week. The point at which the cost-per-wear is virtually the same for all three spherical lens replacement frequencies (approximately AUD$3.00) is five days of lens wear per week. A similar but upwardly displaced (higher cost) pattern is observed for toric lenses, with the cross-over point occurring between three and four days of wear per week (AUD$4.80). Multifocal lenses have the highest price, with cross-over points for daily versus two-weekly replacement lenses at between four and five days of wear per week (AUD$5.00) and for daily versus monthly replacement lenses at three days per week (AUD$5.50). Conclusions: This cost-per-wear model can be used to assist practitioners and patients in making an informed decision in relation to the cost of contact lens wear, as one of many considerations that must be taken into account when deciding on the most suitable lens replacement modality.
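The cost-per-wear arithmetic described above reduces to dividing an annual cost by the number of wears per year. The function below is a sketch of that calculation with made-up dollar figures, not the Australian market prices used in the paper.

```python
# Illustrative sketch of the cost-per-wear model (placeholder prices only).
def cost_per_wear(annual_fees, annual_lens_cost, annual_solution_cost, wears_per_week):
    """Annual cost divided by the number of wears per year (52 weeks assumed)."""
    annual_cost = annual_fees + annual_lens_cost + annual_solution_cost
    return annual_cost / (wears_per_week * 52)

# Example at 3 wearing days per week: daily replacement (lens cost scales with
# wears, no solutions) versus monthly replacement (fixed lens cost plus solutions).
wears = 3
daily = cost_per_wear(annual_fees=120,
                      annual_lens_cost=wears * 52 * 2 * 1.00,   # 2 lenses per wear
                      annual_solution_cost=0,
                      wears_per_week=wears)
monthly = cost_per_wear(annual_fees=120,
                        annual_lens_cost=12 * 2 * 15.00,        # 24 lenses per year
                        annual_solution_cost=150,
                        wears_per_week=wears)
print(round(daily, 2), round(monthly, 2))
```

Plotting these values against the number of wearing days per week reproduces the kind of cross-over behaviour reported in the Results.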