119 results for Group theoretical based techniques
Abstract:
Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter, is a challenging task for state-of-the-art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end; instead, it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets, we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme: repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
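The sequence-matching idea described above can be sketched in a few lines (a toy illustration only, not the authors' implementation; the precomputed difference matrix, the sequence length `ds` and the equal-velocity diagonal search are simplifying assumptions):

```python
import numpy as np

def seqslam_match(diff_matrix, ds=10):
    """Toy sketch of sequence-based place matching.

    diff_matrix[i, j] holds the image difference between query frame i
    and reference frame j (lower means more similar).  For each query
    frame, score straight-line (equal-velocity) trajectories of length
    `ds` through the matrix and keep the reference offset with the
    lowest summed difference: the "local best match" for that sequence.
    """
    n_query, n_ref = diff_matrix.shape
    matches = []
    for i in range(n_query - ds):
        best_j, best_score = -1, np.inf
        for j in range(n_ref - ds):
            score = sum(diff_matrix[i + k, j + k] for k in range(ds))
            if score < best_score:
                best_j, best_score = j, score
        matches.append((i, best_j, best_score))
    return matches
```

Localization would then check that consecutive `best_j` values form a coherent sequence, rather than trusting any single-frame match.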
Abstract:
Enterococci are versatile Gram-positive bacteria that can survive under extreme conditions. Most enterococci are non-virulent and found in the gastrointestinal tract of humans and animals. Other strains are opportunistic pathogens that contribute to a large number of nosocomial infections globally. Epidemiological studies have demonstrated a direct relationship between the density of enterococci in surface waters and the risk of swimmer-associated gastroenteritis. The distribution of infectious enterococcal strains from the hospital environment or other sources to environmental water bodies through sewage discharge or other means could increase the prevalence of these strains in the human population. Environmental water quality studies may benefit from focusing on a subset of Enterococcus spp. that are consistently associated with sources of faecal pollution such as domestic sewage, rather than testing for the entire genus. E. faecalis and E. faecium are potentially good focal species for such studies, as they have been consistently identified as the dominant Enterococcus spp. in human faeces and sewage. Moreover, enterococcal infections are predominantly caused by E. faecalis and E. faecium. The characterisation of E. faecalis and E. faecium is important in studying their population structures, particularly in environmental samples. By developing and implementing rapid, robust molecular genotyping techniques, it is possible to more accurately establish the relationship between human and environmental enterococci. Of particular importance is determining the distribution of high-risk enterococcal clonal complexes, such as E. faecium clonal complex 17 and E. faecalis clonal complexes 2 and 9, in recreational waters. These clonal complexes are recognized as particularly pathogenic enterococcal genotypes that cause severe disease in humans globally.
The Pimpama-Coomera watershed is located in South East Queensland, Australia, and was investigated in this study mainly because it is used intensively for agriculture and recreational purposes and is subject to strong anthropogenic impact. The primary aim of this study was to develop novel, universally applicable, robust, rapid and cost-effective genotyping methods likely to yield more definitive results for the routine monitoring of E. faecalis and E. faecium, particularly in environmental water sources. To fulfil this aim, new genotyping methods were developed based on the interrogation of highly informative single nucleotide polymorphisms (SNPs) located in housekeeping genes of both E. faecalis and E. faecium. SNP genotyping was successfully applied in field investigations of the Coomera watershed, South East Queensland, Australia. E. faecalis and E. faecium isolates were grouped into 29 and 23 SNP profiles respectively. This study showed the high longitudinal diversity of E. faecalis and E. faecium over a period of two years, and both human-related and human-specific SNP profiles were identified. Furthermore, 4.25% of E. faecium strains isolated from water were found to correspond to the important clonal complex 17 (CC17). Strains that belong to CC17 cause the majority of hospital outbreaks and clinical infections globally. Of the six sampling sites of the Coomera River, Paradise Point had the highest number of human-related and human-specific E. faecalis and E. faecium SNP profiles. The secondary aim of this study was to determine the antibiotic-resistance profiles and virulence traits of environmental E. faecalis and E. faecium isolates compared to human pathogenic isolates. This was performed to predict the potential health risks associated with coming into contact with these strains in the Coomera watershed.
In general, clinical isolates were found to be more resistant to all the antibiotics tested than water isolates, and they harbored more virulence traits. Multi-drug resistance was more prevalent in clinical isolates (71.18% of E. faecalis and 70.3% of E. faecium) than in water isolates (only 5.66% of E. faecium). However, tetracycline, gentamicin, ciprofloxacin and ampicillin resistance was observed in water isolates. The virulence gene esp was the most prevalent virulence determinant in clinical isolates (67.79% of E. faecalis and 70.37% of E. faecium); this gene has been described as a human-specific marker for microbial source tracking (MST). The presence of esp in water isolates (16.36% of E. faecalis and 19.14% of E. faecium) could be indicative of human faecal contamination in these waterways. Finally, in order to compare overall gene expression between environmental and clinical strains of E. faecalis, a comparative gene hybridization study was performed. The results of this investigation clearly demonstrated the up-regulation of genes associated with pathogenicity in E. faecalis isolated from water; the expression study was performed at physiological rather than ambient temperatures. The up-regulation of virulence genes demonstrates that environmental strains of E. faecalis can pose an increased health risk and lead to serious disease, particularly if these strains belong to the virulent CC17 group. The genotyping techniques developed in this study not only provide a rapid, robust and highly discriminatory tool to characterize E. faecalis and E. faecium, but also enable the efficient identification of virulent enterococci distributed in environmental water sources.
Abstract:
CTAC2012 was the 16th biennial Computational Techniques and Applications Conference, and took place at Queensland University of Technology from 23 to 26 September 2012. The ANZIAM Special Interest Group in Computational Techniques and Applications is responsible for the CTAC meetings, the first of which was held in 1981.
Abstract:
With the explosive growth of resources available through the Internet, information mismatching and overload have become a severe concern to users. Web users are commonly overwhelmed by the huge volume of information and are faced with the challenge of finding the most relevant and reliable information in a timely manner. Personalised information gathering and recommender systems represent state-of-the-art tools for efficient selection of the most relevant and reliable information resources, and interest in such systems has increased dramatically over the last few years. However, web personalization has not yet been well exploited: difficulties arise, from both technological and social perspectives, in selecting resources through recommender systems. Aiming to promote high-quality research to overcome these challenges, this paper provides a comprehensive survey of recent work and achievements in the areas of personalised web information gathering and recommender systems. The survey covers concept-based techniques exploited in personalised information gathering and recommender systems.
Abstract:
In this paper, we propose a semi-supervised approach to anomaly detection in online social networks. The social network is modeled as a graph, and its features are extracted to detect anomalies. A clustering algorithm is then used to group users based on these features, and fuzzy logic is applied to assign a degree of anomalous behavior to the users of each cluster. Empirical analysis shows the effectiveness of this method.
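A minimal sketch of this pipeline (illustrative only; the feature vectors, the plain k-means clustering and the distance-based fuzzy degree are stand-ins for whatever features, clustering algorithm and membership function the paper actually uses):

```python
import numpy as np

def fuzzy_anomaly_degrees(features, k=2, iters=20, seed=0):
    """Cluster user feature vectors with plain k-means (Lloyd's
    algorithm), then assign each user a fuzzy degree of anomalousness
    from its distance to the nearest cluster centre, normalised to
    [0, 1].  An illustration of the clustering-plus-fuzzy-degree idea,
    not the paper's exact method."""
    rng = np.random.default_rng(seed)
    X = np.asarray(features, dtype=float)
    centres = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = X[labels == c].mean(axis=0)
    # final assignment and distance to the nearest centre
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    dist = d[np.arange(len(X)), d.argmin(axis=1)]
    return dist / dist.max() if dist.max() > 0 else dist
```

Users whose feature vectors sit far from every cluster centre receive degrees near 1 and would be flagged as anomalous.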
Abstract:
Nanowires (NWs) have attracted broad interest and application owing to their remarkable mechanical, optical, electrical, thermal and other properties. To unlock the revolutionary characteristics of NWs, a considerable body of experimental and theoretical work has been conducted. However, due to the extremely small dimensions of NWs, in situ experiments involve inherent complexities and huge challenges in application and manipulation. For the same reason, the presence of defects appears as one of the most dominant factors in determining their properties. Hence, given the deficiencies of experiments and the necessity of investigating the influence of different defects, numerical simulation and modelling become increasingly important in characterizing the properties of NWs. It has been noted that, despite the number of numerical studies of NWs, significant work still lies ahead in terms of problem formulation, interpretation of results, identification and delineation of deformation mechanisms, and constitutive characterization of behaviour. Therefore, the primary aim of this study was to characterize both perfect and defected metal NWs. Large-scale molecular dynamics (MD) simulations were utilized to assess the mechanical properties and deformation mechanisms of different NWs under diverse loading conditions including tension, compression, bending, vibration and torsion. The target samples include different FCC metal NWs (e.g., Cu, Ag, Au NWs), which were either in a perfect crystal structure or constructed with different defects (e.g., pre-existing surface/internal defects, grain/twin boundaries). It was found from the tensile deformation that Young's modulus was insensitive to different styles of pre-existing defects, whereas the yield strength showed considerable reduction.
The deformation mechanisms were found to be greatly influenced by the presence of defects, i.e., different defects acted as dislocation sources, and a wealth of deformation mechanisms was triggered. Similar conclusions were obtained from the compressive deformation, i.e., Young's modulus was insensitive to different defects, but the critical stress showed evident reduction. Results from the bending deformation revealed that current modified beam models incorporating the surface effect, or both the surface effect and the axial extension effect, still exhibit certain inaccuracy, especially for NWs with ultra-small cross-sectional size. Additionally, the flexural rigidity of the NW was found to be insensitive to different pre-existing defects, while the yield strength showed an evident decrease. For the resonance study, the first-order natural frequency of the NW with pre-existing surface defects was almost the same as that of the perfect NW, whereas a lower first-order natural frequency and a significantly degraded quality factor were observed for NWs with grain boundaries. Most importantly, the <110> FCC NWs were found to exhibit a novel beat phenomenon driven by a single actuation, which resulted from the asymmetry of the lattice spacing in the (110) plane of the NW cross-section and is expected to exert crucial impacts on in situ nanomechanical measurements. In particular, <110> Ag NWs with rhombic, truncated rhombic, and triangular cross-sections were found to naturally possess two first-mode natural frequencies, which are envisioned to enable NEMS applications that operate in a non-planar regime. The torsion results revealed that the torsional rigidity of the NW was insensitive to the presence of pre-existing defects and twin boundaries, but showed evident reduction due to grain boundaries. Meanwhile, the critical angle decreased considerably for defected NWs.
This study has provided a comprehensive investigation into the mechanical properties and deformation mechanisms of perfect and defected NWs, which will greatly extend and enhance the existing knowledge and understanding of the properties and performance of NWs, and eventually benefit the realization of their full potential applications. All MD models and theoretical analysis techniques established for the target NWs in this research are also applicable to future studies of other kinds of NWs. The results suggest that MD simulation is an effective tool, not only for characterizing the properties of NWs, but also for predicting novel or unexpected properties.
Abstract:
The coupling of kurtosis-based indexes and envelope analysis represents one of the most successful and widespread procedures for the diagnostics of incipient faults on rolling element bearings. Kurtosis-based indexes are often used to select the proper demodulation band for the application of envelope-based techniques. Kurtosis itself, in slightly different formulations, is applied for the prognostics and condition monitoring of rolling element bearings, as a standalone tool for a fast indication of the development of faults. This paper shows for the first time the strong analytical connection which holds between these two families of indexes. In particular, analytical identities are shown for the squared envelope spectrum (SES) and the kurtosis of the corresponding band-pass filtered analytic signal; it is demonstrated how the sum of the peaks in the SES corresponds to the raw 4th-order moment. The analytical results also show a link with another signal processing technique, cepstrum pre-whitening, recently used in bearing diagnostics. The analytical results are the basis for the discussion of an optimal indicator for the choice of the demodulation band, the ratio of cyclic content (RCC), which endows the kurtosis with selectivity in the cyclic frequency domain and whose performance is compared with more traditional kurtosis-based indicators such as the protrugram. A benchmark, performed on numerical simulations and experimental data coming from two different test-rigs, proves the superior effectiveness of such an indicator. Finally, a short introduction to the potential offered by the newly proposed index in the field of prognostics is given in an additional experimental example, in which the RCC is tested on experimental data collected on an endurance bearing test-rig, showing its ability to follow the development of the damage with a single numerical index.
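The Parseval-type link between the SES and the raw 4th-order moment can be checked numerically (a sketch of the flavour of the result, not the paper's derivation; the analytic signal is built with the standard frequency-domain Hilbert construction):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real sequence: zero the negative
    frequencies and double the positive ones in the Fourier domain,
    then transform back."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def squared_envelope_spectrum(x):
    """Magnitude spectrum of the squared envelope of a real signal."""
    se = np.abs(analytic_signal(x)) ** 2   # squared envelope
    return np.abs(np.fft.fft(se))
```

By Parseval's theorem, `sum(ses**2) / len(x)` equals `sum(abs(x_a)**4)`, the raw 4th-order moment of the analytic signal; this is the kind of identity that connects SES-based and kurtosis-based indexes.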
Abstract:
Process compliance measurement is receiving increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. In order to judge the compliance of business processes, the degree of behavioural deviation of a case, i.e., an observed execution sequence, is quantified with respect to a process model (referred to as fitness, or recall). Recently, different compliance measures have been proposed. Still, nearly all of them are grounded on state-based techniques and, in particular, the trace equivalence criterion. As a consequence, these approaches have to deal with the state explosion problem. In this paper, we argue that a behavioural abstraction may be leveraged to measure the compliance of a process log – a collection of cases. To this end, we utilise causal behavioural profiles that capture the behavioural characteristics of process models and cases, and can be computed efficiently. We propose different compliance measures based on these profiles, discuss the impact of noise in process logs on our measures, and show how diagnostic information on non-compliance is derived. As a validation, we report on findings from applying our approach in a case study with an international service provider.
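The intuition behind profile-based compliance can be illustrated with a toy measure (an illustration only, not the causal behavioural profiles of the paper): derive ordering relations from the model's execution sequences and score a case by the share of its ordering pairs the model allows.

```python
def order_pairs(trace):
    """All ordered activity pairs (a occurs before b) in a trace."""
    return {(a, b) for i, a in enumerate(trace) for b in trace[i + 1:]}

def profile_compliance(model_traces, case):
    """Toy fitness measure: the fraction of the case's ordering pairs
    that occur in at least one execution sequence of the model."""
    allowed = set()
    for trace in model_traces:
        allowed |= order_pairs(trace)
    pairs = order_pairs(case)
    return len(pairs & allowed) / len(pairs) if pairs else 1.0
```

Unlike trace equivalence, such a measure only compares pairwise ordering relations, which is how behavioural abstractions sidestep the state explosion problem.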
Abstract:
Development of technologies for water desalination and purification is critical to meet the global challenges of insufficient water supply and inadequate sanitation, especially for point-of-use applications. Conventional desalination methods are energy and operationally intensive, whereas adsorption-based techniques are simple and easy to use for point-of-use water purification, yet their capacity to remove salts is limited. Here we report that plasma-modified ultralong carbon nanotubes exhibit ultrahigh specific adsorption capacity for salt (exceeding 400% by weight) that is two orders of magnitude higher than that found in the current state-of-the-art activated carbon-based water treatment systems. We exploit this adsorption capacity in ultralong carbon nanotube-based membranes that can remove salt, as well as organic and metal contaminants. These ultralong carbon nanotube-based membranes may lead to next-generation rechargeable, point-of-use potable water purification appliances with superior desalination, disinfection and filtration properties. © 2013 Macmillan Publishers Limited.
Abstract:
This paper discusses the use of observational video recordings to document young children’s use of technology in their homes. Although observational research practices have been used for decades, often with video-based techniques, the participant group in this study (i.e., very young children) and the setting (i.e., private homes), provide a rich space for exploring the benefits and limitations of qualitative observation. The data gathered in this study point to a number of key decisions and issues that researchers must face in designing observational research, particularly where non-researchers (in this case, parents) act as surrogates for the researcher at the data collection stage. The involvement of parents and children as research videographers in the home resulted in very rich and detailed data about children’s use of technology in their daily lives. However, limitations noted in the dataset (e.g., image quality) provide important guidance for researchers developing projects using similar methods in future. The paper provides recommendations for future observational designs in similar settings and/or with similar participant groups.
Abstract:
The increasing diversity of the Internet has created a vast number of multilingual resources on the Web. A huge number of these documents are written in languages other than English. Consequently, the demand for searching in non-English languages is growing exponentially, and it is desirable that a search engine can search for information over collections of documents in other languages. This research investigates techniques for developing high-quality Chinese information retrieval systems. A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between Chinese words. This feature makes Chinese information retrieval more difficult: a retrieved document that contains the query term as a sequence of Chinese characters may not actually be relevant to the query, because that character sequence may not form a valid Chinese word in the document. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose two approaches to deal with these problems. In the first approach, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques into the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if only Chinese segmentation is incorporated with the traditional character-based approach.
In the second approach, we propose a novel query expansion method which applies text mining techniques to find the most relevant words with which to extend the query. Most existing query expansion methods select the most frequent indexing terms from the retrieved documents to expand the query. In our approach, by contrast, we utilize text mining techniques to find patterns in the retrieved documents that highly correlate with the query term, and then use the relevant words in those patterns to expand the original query. This research project develops and implements a Chinese information retrieval system for evaluating the proposed approaches. There are two stages in the experiments. The first stage investigates whether high-accuracy segmentation can improve Chinese information retrieval. In the second stage, the text-mining-based query expansion approach is implemented, and a further experiment compares its performance with the standard Rocchio approach. The NTCIR5 Chinese collections are used in the experiments. The experimental results show that by incorporating the text-mining-based query expansion with the hybrid model, significant improvement is achieved in both precision and recall.
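The hybrid character/word ranking of the first approach can be sketched as a linear combination of two scores (an illustration, not the thesis' actual ranking formulas; ASCII strings stand in for Chinese text, the word list would come from a segmenter, and `alpha` is an assumed mixing weight):

```python
def hybrid_rank(docs, query_chars, query_words, alpha=0.5):
    """Rank documents by a linear mix of a character-based match score
    and a word-based match score, both normalised by document length.
    A toy sketch of a hybrid character/word retrieval model."""
    def char_score(doc):
        return sum(doc.count(c) for c in query_chars) / max(len(doc), 1)

    def word_score(doc):
        return sum(doc.count(w) for w in query_words) / max(len(doc), 1)

    scored = [(alpha * word_score(d) + (1 - alpha) * char_score(d), d)
              for d in docs]
    return [d for _, d in sorted(scored, key=lambda t: t[0], reverse=True)]
```

A document containing the query characters only as unrelated fragments still gets some character-based credit, while the word-based term rewards documents containing the segmented query word as a unit.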
Abstract:
Applied Theatre is an umbrella term for a range of drama-based techniques, all of which align with a lineage of pedagogical theory and practice (e.g., Freire, Moreno, Heathcote). It encompasses methods and forms including Drama Education (O’Neill), Forum Theatre (Boal), and Process Drama (Haseman, O’Toole). Applied Theatre often occurs in non-theatrical settings (schools, hospitals, prisons) with the aim of helping participants address issues of local concern. Increasingly, Applied Theatre practices are utilised in the corporate environment. Applied Theatre adopts artistic principles in production, but posits a practical utility beyond simple entertainment.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available.
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied in situations where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that, simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
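The oscillation-counting idea can be sketched as follows (illustrative only; `dT_per_cycle`, the temperature change per full intensity oscillation, is treated here as a known calibration constant set by the wavelength, the crystal length and the temperature derivative of the birefringence):

```python
import numpy as np

def temperature_change(intensity, dT_per_cycle):
    """Estimate the total temperature change from the transmitted
    intensity trace of a beam passing through the birefringent
    crystal: remove the mean, count rising zero-crossings (one per
    full oscillation), and multiply by the temperature change per
    oscillation."""
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()
    cycles = int(np.sum((x[:-1] < 0) & (x[1:] >= 0)))
    return cycles * dT_per_cycle
```

Monitoring separate pixels of an expanded beam with the same counter would give the spatially resolved temperatures mentioned above.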
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
Abstract:
AC motors are used in a wide range of modern systems, from household appliances to automated industry applications such as ventilation systems, fans, pumps, conveyors and machine tool drives. Inverters are widely used in industrial and commercial applications due to the growing need for speed control in adjustable speed drive (ASD) systems. Fast switching transients and the common mode voltage, in interaction with parasitic capacitive couplings, may cause many unwanted problems in ASD applications, including shaft voltage and leakage currents. One of the inherent characteristics of Pulse Width Modulation (PWM) techniques is the generation of common mode voltage, which is defined as the voltage between the electrical neutral of the inverter output and the ground. Shaft voltage can cause bearing currents when it exceeds the breakdown voltage level of the thin lubricant film between the inner and outer rings of the bearing. This phenomenon is a main reason for early bearing failures. Rapid development in power switch technology has led to a drastic reduction in switching rise and fall times. Because there is considerable capacitance between the stator windings and the frame, there can be a significant capacitive current (ground current escaping to earth through stray capacitors inside a motor) if the common mode voltage has high frequency components. This current leads to noise and electromagnetic interference (EMI) issues in motor drive systems. These problems have been dealt with using a variety of methods reported in the literature. However, cost and maintenance issues have prevented these methods from being widely accepted; extra cost or rating of the inverter switches is usually the price to pay for such approaches. Thus, the determination of cost-effective techniques for shaft and common mode voltage reduction in ASD systems, with a focus on the first step of the design process, is the scope of this thesis.
An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. Electrical power generation from renewable energy sources, such as wind energy systems, has become a crucial issue because of environmental problems and a predicted future shortage of traditional energy sources. Thus, Chapter 2 focuses on the shaft voltage analysis of stator-fed induction generators (IGs) and Doubly Fed Induction Generators (DFIGs) in wind turbine applications. This shaft voltage analysis covers topologies, high-frequency modelling, calculation and mitigation techniques. A back-to-back AC-DC-AC converter is investigated in terms of shaft voltage generation in a DFIG, and different placements of an LC filter are analysed in an effort to eliminate the shaft voltage. Different capacitive couplings exist in the motor/generator structure, and any change in design parameters affects these couplings; an appropriate design for AC motors should therefore lead to the smallest possible shaft voltage. Calculation of the shaft voltage based on the different capacitive couplings, and an investigation of the effects of different design parameters, are discussed in Chapter 3. This is achieved through 2-D and 3-D finite element simulation and experimental analysis. End-winding parameters of the motor also affect the shaft voltage but have not been taken into account in previously reported studies. Calculation of the end-winding capacitances is rather complex because of the diversity of end-winding shapes and the complexity of their geometry. A comprehensive analysis of these capacitances has been carried out with 3-D finite element simulations and experimental studies to determine their effective design parameters; this is documented in Chapter 4.
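The shaft voltage calculation from capacitive couplings mentioned above is commonly modelled as a capacitive voltage divider, in which the bearing sees a fixed fraction of the common mode voltage. The sketch below assumes the standard divider formed by the winding-to-rotor, rotor-to-frame and bearing capacitances; both the formula's role here and the capacitance values are illustrative assumptions, not taken from the thesis:

```python
def bearing_voltage_ratio(c_wr, c_rf, c_b):
    """Shaft (bearing) voltage as a fraction of the common mode
    voltage, from the capacitive divider formed by the
    winding-to-rotor (c_wr), rotor-to-frame (c_rf) and bearing
    (c_b) capacitances."""
    return c_wr / (c_wr + c_rf + c_b)

# Illustrative values in farads (hypothetical, not measured):
ratio = bearing_voltage_ratio(100e-12, 1000e-12, 200e-12)
print(ratio)  # ≈ 0.077
```

Because the ratio depends only on the capacitances, changing design parameters that reduce c_wr (or increase c_rf) directly reduces the shaft voltage, which is the design lever explored in Chapters 3 and 4.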
Results of this analysis show that, by choosing appropriate design parameters, it is possible to decrease the shaft voltage and the resulting bearing current at the primary stage of generator/motor design, without using any additional active or passive filter-based techniques. The common mode voltage is determined by the switching pattern and, by using an appropriate pattern, the common mode voltage level can be controlled. Therefore, any PWM pattern which eliminates or minimizes the common mode voltage is an effective shaft voltage reduction technique. Thus, common mode voltage reduction for a three-phase AC motor supplied by a single-phase diode rectifier is the focus of Chapter 5. The proposed strategy is mainly based on proper utilization of the zero vectors. Multilevel inverters, which have more voltage levels and switching states and can therefore provide more possibilities for reducing the common mode voltage, are also used in ASD systems; the common mode voltage of multilevel inverters is investigated in Chapter 6. Chapter 7 uses simulation results to investigate techniques for eliminating the shaft voltage in a DFIG based on the methods presented in the literature. It is shown that every solution for reducing the shaft voltage in DFIG systems has its own characteristics, which have to be taken into account in determining the most effective strategy. Calculation of the capacitive couplings and electric fields between the outer and inner races and the balls at different motor speeds, for symmetrical and asymmetrical shaft and ball positions, is discussed in Chapter 8. The analysis is carried out using finite element simulations to determine the conditions that increase the probability of high rates of bearing failure due to current discharges through the balls and races.
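The role of the zero vectors in the switching-pattern approach above can be made concrete with a small numerical check: for a two-level inverter, avoiding the two zero switching states bounds the common mode voltage magnitude at Vdc/6 instead of Vdc/2. This is a generic illustration of the principle, not the specific modulation strategy of Chapter 5:

```python
from itertools import product

def cmv(state, vdc=400.0):
    # Average of the three leg voltages (each ±Vdc/2 from the DC midpoint).
    return sum((2 * s - 1) * vdc / 2 for s in state) / 3

all_states = list(product((0, 1), repeat=3))
active = [s for s in all_states if s not in ((0, 0, 0), (1, 1, 1))]

print(max(abs(cmv(s)) for s in all_states))  # 200.0, i.e. Vdc/2
print(max(abs(cmv(s)) for s in active))      # ≈ 66.7, i.e. Vdc/6
```

This is why modulation patterns that replace or minimize the zero vectors reduce the common mode voltage, and why multilevel inverters, with more switching states to choose from, offer further reduction possibilities.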
Resumo:
Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the corneal surface due to the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate the surface or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative means of estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
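The model-order selection procedure described above can be sketched as follows. An EDC-style criterion has the generic information-criterion form n·ln(RSS_k/n) + k·C_n, where the penalty sequence C_n must grow faster than ln(ln n) but slower than n; the choice C_n = sqrt(n·ln n) below is one admissible penalty assumed for illustration, and the synthetic Gaussian regressors stand in for Zernike polynomial terms:

```python
import numpy as np

def edc_order(y, regressors, c_n=None):
    """Select the model order minimizing an EDC-style criterion
    EDC(k) = n * ln(RSS_k / n) + k * C_n over nested models that
    use the first k columns of `regressors`."""
    n = len(y)
    if c_n is None:
        c_n = np.sqrt(n * np.log(n))  # one admissible penalty growth rate
    scores = []
    for k in range(1, regressors.shape[1] + 1):
        X = regressors[:, :k]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        scores.append(n * np.log(rss / n) + k * c_n)
    return int(np.argmin(scores)) + 1

# Synthetic check: only the first 3 of 8 regressors carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X[:, :3] @ np.array([5.0, -4.0, 3.0]) + 0.1 * rng.normal(size=200)
print(edc_order(y, X))
```

A weak penalty (e.g. the constant 2 of AIC) lets the log-likelihood term keep absorbing noise terms, which is the overestimation problem noted in the abstract; the stronger, sample-size-dependent penalty curbs it.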