877 results for frequency domain phase conjugation
Abstract:
This paper describes a method of signal preprocessing under active monitoring. Suppose we want to solve the inverse problem of obtaining the response of a medium to a single powerful signal, which is equivalent to obtaining the transmission function of the medium, but cannot conduct such an experiment (it might be too expensive or too harmful to the environment). In this case we can conduct a series of experiments of relatively low power and superpose the response signals. However, this method entails a considerable loss of information (especially in the high-frequency domain) due to fluctuations of the phase, the frequency and the starting time of each individual experiment. The preprocessing technique presented in this paper allows us to substantially restore the response of the medium and consequently to find a better estimate of the transmission function. The technique is based on expanding the initial signal into a system of orthogonal functions.
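The abstract gives no implementation details; as a minimal sketch of the general idea only (superposing many low-power responses and projecting the stacked record onto a finite orthogonal basis), assuming a Legendre-polynomial basis and a simple jitter model that are not taken from the paper:

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(0)
    t = np.linspace(-1.0, 1.0, 1024)                      # normalised time axis
    true_response = np.exp(-20 * t**2) * np.cos(40 * t)   # assumed medium response

    # Superpose many low-power experiments with random start-time and phase jitter.
    n_experiments = 200
    stack = np.zeros_like(t)
    for _ in range(n_experiments):
        shift = rng.normal(scale=0.01)                     # start-time fluctuation
        phase = rng.normal(scale=0.3)                      # phase fluctuation
        pulse = np.exp(-20 * (t - shift)**2) * np.cos(40 * (t - shift) + phase)
        stack += pulse + rng.normal(scale=0.5, size=t.size)  # measurement noise
    stack /= n_experiments

    # Expand the stacked record in a finite orthogonal basis (Legendre polynomials
    # here); keeping a limited number of modes discards part of the residual noise.
    n_modes = 80
    coeffs = legendre.legfit(t, stack, deg=n_modes - 1)
    restored = legendre.legval(t, coeffs)
    print("rms error, raw stack:", np.sqrt(np.mean((stack - true_response)**2)))
    print("rms error, restored :", np.sqrt(np.mean((restored - true_response)**2)))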
Abstract:
We extend the theory of parametric noise amplification to the case of transmission systems employing multiple optical phase conjugators, demonstrating that the excess noise due to this process may be reduced in direct proportion to the number of phase conjugation devices employed. We further identify that the optimum noise suppression is achieved for an odd number of phase conjugators, and that the noise may be further suppressed by up to 3 dB by partial digital back propagation (or fractional spans at the ends of the links).
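Read literally, the stated scaling can be summarised as follows (a schematic restatement of the abstract's claim, with symbols of our choosing: sigma^2_PNA denotes the excess noise from parametric noise amplification and N the number of optical phase conjugators):

    \sigma^2_{\mathrm{PNA}}(N) \approx \frac{\sigma^2_{\mathrm{PNA}}(1)}{N}, \qquad N \ \text{odd},

with up to a further 3 dB of suppression available from partial digital back propagation or fractional end spans.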
Abstract:
A high-frequency physical phase-variable electric machine model was developed using finite element (FE) analysis. The model was implemented in a machine drive environment with hardware-in-the-loop. The novelty of the proposed model is that it is derived from the actual geometrical and other physical information of the motor, considering each individual turn in the winding. This is the first attempt to develop such a model to obtain high-frequency machine parameters without resorting to the expensive experimental procedures currently in use. The model was used in a dynamic simulation environment to predict inverter-motor interaction, including motor terminal overvoltage and current spikes, as well as switching effects. In addition, a complete drive model was developed for electromagnetic interference (EMI) analysis and evaluation. It consists of lumped-parameter models of the different system components, such as the cable, inverter, and motor; the lumped-parameter models enable faster simulations. The results were verified against experimental measurements and excellent agreement was obtained. The influence of a change in the winding arrangement on the motor's high-frequency behavior has also been investigated; for an equal number of turns it was shown to have little effect on the parameter values and on the high-frequency behavior. An accurate prediction of overvoltage and EMI in the design stages of the drive system would reduce the time required for design modifications as well as for the evaluation of EMC compliance issues. The model can be utilized in design optimization and insulation selection for motors. Use of this procedure could prove economical, as it would help designers develop and test new motor designs and evaluate their operational impacts in various motor drive applications.
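As a generic, hedged illustration of the kind of lumped-parameter calculation such a drive model supports (not the paper's actual model, and with placeholder component values), the step response and overshoot of a simple series R-L-C cable/motor-terminal equivalent can be computed with SciPy:

    import numpy as np
    from scipy import signal

    # Placeholder lumped parameters for a cable + motor-terminal equivalent.
    R = 5.0       # ohm   (cable resistance, assumed)
    L = 10e-6     # henry (cable inductance, assumed)
    C = 2e-9      # farad (motor terminal capacitance, assumed)

    # Voltage across C for a unit step at the inverter end:
    #   H(s) = 1 / (L*C*s^2 + R*C*s + 1)
    system = signal.TransferFunction([1.0], [L * C, R * C, 1.0])
    t, v = signal.step(system, T=np.linspace(0.0, 5e-6, 2000))

    overshoot = 100.0 * (v.max() - 1.0)
    print(f"peak terminal voltage: {v.max():.2f} pu ({overshoot:.0f}% overshoot)")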
Abstract:
This dissertation proposed a new approach to seizure detection in intracranial EEG (iEEG) recordings using nonlinear decision functions. It implemented well-established features designed to deal with complex signals such as brain recordings, and proposed a 2-D domain of analysis. Since the features considered span both the time and frequency domains, the analysis was carried out both temporally and as a function of different frequency ranges in order to ascertain the measures most suitable for seizure detection. In doing so, this study established a generalized approach to seizure detection that works across several features and across patients. Clinical experiments involved 8 patients with intractable seizures who were evaluated for potential surgical interventions. A total of 35 of the iEEG data files collected were used in a training phase to ascertain the reliability of the formulated features. The remaining 69 iEEG data files were then used in the testing phase. The testing phase revealed that the correlation sum is the feature that performed best across all patients, with a sensitivity of 92% and an accuracy of 99%. The second best feature was the gamma power, with a sensitivity of 92% and an accuracy of 96%. In the frequency domain, the 5 other spectral bands considered revealed mixed results, with low sensitivity in some frequency bands and low accuracy in others, which is expected given that the dominant frequencies in iEEG are those of the gamma band. In the time domain, the other features, which included mobility, complexity, and activity, all performed very well, with an average sensitivity of 80.3% and an accuracy of 95%. The computation time needed to generate these nonlinear decision functions in the training phase was extremely long. It was determined that when the duration dimension was rescaled, the results improved and the nonlinear decision functions converged more than 100 times faster. Through this rescaling, the sensitivity of the correlation sum improved to 100% and the sensitivity of the gamma power to 97%, meaning that even fewer false negatives and false positives were detected.
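As a minimal hedged sketch of one of the named features (gamma-band power of a single iEEG channel), assuming a 500 Hz sampling rate and a 30-100 Hz gamma band that are illustrative rather than taken from the dissertation:

    import numpy as np
    from scipy.signal import butter, filtfilt, welch

    def gamma_power(x, fs=500.0, band=(30.0, 100.0)):
        """Band-limited power of one iEEG channel (band edges are assumptions)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
        xf = filtfilt(b, a, x)
        return np.mean(xf ** 2)          # mean squared amplitude in the gamma band

    def gamma_power_psd(x, fs=500.0, band=(30.0, 100.0)):
        """Same quantity, computed by integrating the Welch PSD over the band."""
        f, pxx = welch(x, fs=fs, nperseg=1024)
        mask = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[mask], f[mask])

    # Example on a synthetic 4 s segment (noise plus a 60 Hz component).
    fs = 500.0
    t = np.arange(0, 4.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(t.size)
    print(gamma_power(x, fs), gamma_power_psd(x, fs))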
Abstract:
A two-dimensional (2-D) finite-difference time-domain (FDTD) method is used to analyze two different models of multi-conductor transmission lines (MTLs). The first model is a two-conductor MTL and the second is a three-conductor MTL. Apart from the MTLs, a three-dimensional (3-D) FDTD method is used to analyze a three-patch microstrip parasitic array. While the MTL analysis is carried out entirely in the time domain, the microstrip parasitic array is studied through the scattering parameter S11 in the frequency domain. The results clearly indicate that FDTD is an efficient and accurate tool for modeling and analyzing multi-conductor transmission lines as well as microstrip antennas and arrays.
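As a hedged, minimal illustration of the FDTD idea only (a 1-D Yee leapfrog update, not the 2-D and 3-D formulations used in the study), the core update loop looks like this:

    import numpy as np

    # 1-D FDTD (Yee leapfrog) sketch: E and H staggered in space and time.
    nz, nt = 400, 1000
    c0 = 3e8
    dz = 1e-3
    dt = dz / (2 * c0)                     # Courant-stable time step
    eps0, mu0 = 8.854e-12, 4e-7 * np.pi

    Ex = np.zeros(nz)
    Hy = np.zeros(nz - 1)

    for n in range(nt):
        # Update H from the spatial difference of E (interior points).
        Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
        # Update E from the spatial difference of H; the ends stay at zero (PEC walls).
        Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])
        # Soft Gaussian source injected near the left end.
        Ex[20] += np.exp(-((n - 60) / 20.0) ** 2)

    print("peak |Ex| after", nt, "steps:", np.abs(Ex).max())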
Abstract:
In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields.
In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components.
The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data (a minimal extraction sketch is given after this abstract).
This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence.
The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions.
The prevalence of temporal coherence and its relationship to other standard wind parameters was modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations.
EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.
Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence.
The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory.
This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the recommended procedure in the IEC wind turbine design standard.
The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.
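As a minimal sketch of the quantity described above (phase differences between adjacent Fourier components of a measured wind record), with a placeholder wind-speed series and record length that are not those of the dissertation:

    import numpy as np

    def adjacent_phase_differences(u, dt):
        """Phase differences between adjacent Fourier components of a wind record u."""
        U = np.fft.rfft(u - np.mean(u))            # one-sided spectrum of fluctuations
        phases = np.angle(U)
        # Wrap the adjacent-component differences into (-pi, pi].
        dphi = np.angle(np.exp(1j * np.diff(phases)))
        freqs = np.fft.rfftfreq(u.size, d=dt)
        return freqs[1:], dphi

    # Example on a synthetic 10-minute record sampled at 20 Hz.
    dt = 0.05
    u = 8.0 + np.cumsum(np.random.randn(12000)) * 0.01   # placeholder wind-speed series
    f, dphi = adjacent_phase_differences(u, dt)
    hist, edges = np.histogram(dphi, bins=36, range=(-np.pi, np.pi), density=True)
    print("empirical density of adjacent phase differences:", hist[:5], "...")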
Abstract:
SCHEFFZUK, C.; KUKUSHKA, V.; VYSSOTSKI, A. L.; DRAGUHN, A.; TORT, A. B. L.; BRANKACK, J. Global slowing of network oscillations in mouse neocortex by diazepam. Neuropharmacology, v. 65, p. 123-133, 2013.
Abstract:
Over recent decades, work on infrared sensor applications has advanced considerably worldwide. However, a difficulty remains: objects are not clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing and non-destructive testing, among other fields. This thesis addresses infrared image enhancement techniques from two aspects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be regarded as a continuation of single-infrared-image enhancement, in that it combines infrared and visible images into a single image in order to represent and enhance all of the useful information and features of the source images, since a single image cannot contain all relevant or available information owing to the limitations of any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques are built on accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, which yields very accurately registered images and increased benefits for the fusion processing. For infrared and visible image fusion, a series of advanced and effective approaches are proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results that are better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) on sparsely sampled coefficients, with accurate reconstruction of the fused coefficients, is proposed; it obtains much better fusion results through pre-enhancement of the infrared image and by reducing redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients within the fusion process, leading to better results obtained faster and more efficiently.
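As a hedged sketch of the SURF-RANSAC registration step described above (assuming an OpenCV build with the contrib/nonfree modules; the homography model, ratio-test threshold and other parameter values are illustrative choices, not necessarily those of the thesis):

    import cv2
    import numpy as np

    def register_surf_ransac(visible_gray, infrared_gray):
        """Estimate a homography from the infrared image to the visible image."""
        # SURF lives in the contrib "nonfree" module (opencv-contrib-python build).
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp1, des1 = surf.detectAndCompute(visible_gray, None)
        kp2, des2 = surf.detectAndCompute(infrared_gray, None)

        # Match descriptors and keep the better matches (Lowe ratio test).
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des2, des1, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]

        src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

        # RANSAC rejects mismatched keypoint pairs while fitting the homography.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
        h, w = visible_gray.shape
        registered_ir = cv2.warpPerspective(infrared_gray, H, (w, h))
        return registered_ir, H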
Abstract:
The wave energy industry is entering a new phase of pre-commercial and commercial deployments of full-scale devices, so a better understanding of seaway variability is critical to the successful operation of devices. The response of Wave Energy Converters (WECs) to incident waves governs their operational performance, and for many devices this is highly dependent on spectral shape due to their resonant properties. Various methods of wave measurement are presented, along with analysis techniques and empirical models. Resource assessments, device performance predictions and monitoring of operational devices will often be based on summary statistics and assume a standard spectral shape such as Pierson-Moskowitz or JONSWAP. Furthermore, these are typically derived from the closest available wave data, frequently separated from the site on scales of the order of 1 km. Therefore, deviation of seaways from standard spectral shapes and spatial inconsistency between the measurement point and the device site will cause inaccuracies in the performance assessment. This thesis categorises time and frequency domain analysis techniques that can be used to identify changes in a sea state from record to record. Device-specific issues such as dimensional scaling of sea states and power output are discussed, along with potential differences that arise in estimated and actual output power of a WEC due to spectral shape variation. This is investigated using measured data from various phases of device development.
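As a hedged illustration of one of the standard spectral shapes mentioned above, a JONSWAP variance density spectrum can be evaluated as follows (the significant wave height, peak period and peak-enhancement factor below are placeholders, and the spectrum is rescaled to match the target Hs rather than using the Phillips constant):

    import numpy as np

    def jonswap(f, hs=2.0, tp=8.0, gamma=3.3):
        """JONSWAP variance density S(f) [m^2/Hz] for significant wave height hs [m]
        and peak period tp [s]; gamma is the peak-enhancement factor."""
        fp = 1.0 / tp
        sigma = np.where(f <= fp, 0.07, 0.09)
        r = np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
        # Pierson-Moskowitz-type shape with the JONSWAP peak enhancement gamma**r.
        s = f ** -5 * np.exp(-1.25 * (f / fp) ** -4) * gamma ** r
        # Scale so that the zeroth moment reproduces hs (m0 = hs^2 / 16).
        m0 = np.trapz(s, f)
        return s * (hs ** 2 / 16.0) / m0

    f = np.linspace(0.03, 0.5, 500)      # frequency axis in Hz
    S = jonswap(f)
    hm0 = 4.0 * np.sqrt(np.trapz(S, f))  # recovered significant wave height
    print(f"Hm0 = {hm0:.2f} m, spectral peak at {f[np.argmax(S)]:.3f} Hz")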
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data. Originally, these observables were manually generated starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises, and it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which occurs in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produced two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set (a toy numerical sketch of this step is given after this abstract). The transformation of the raw data using the corresponding eigenvectors also produced data that were free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produced the same outcome, that is, data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that the data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. For testing the connection between the principal components and the TDI observables, a 10 x 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables; therefore analysis using principal components should give the same results as that using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths and noise variances. Preliminary results of the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix and, from our toy model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction in the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing them to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up because of the summation in the Fourier transform.
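As a toy numerical sketch of the eigendecomposition step described above (the covariance construction below is an arbitrary two-noise-source illustration, not the LISA model used in the thesis):

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples = 20000

    # A large common noise (stand-in for laser frequency noise) appears in several
    # readings; each reading also has a small independent noise (stand-in for
    # photodetector noise).
    laser = 100.0 * rng.standard_normal(n_samples)
    readings = np.stack([laser + rng.standard_normal(n_samples) for _ in range(4)])

    # Sample covariance matrix of the raw readings.
    cov = np.cov(readings)

    # Eigendecomposition: the eigenvalues split into a large set dominated by the
    # common noise and a small set whose eigenvectors cancel it, analogous to the
    # laser-noise-free combinations discussed above.
    eigvals, eigvecs = np.linalg.eigh(cov)
    print("eigenvalues:", np.round(eigvals, 2))

    # Transforming the data with the eigenvectors of the small eigenvalues yields
    # combinations in which the common noise is suppressed.
    clean = eigvecs[:, :3].T @ readings
    print("std of transformed channels:", np.round(clean.std(axis=1), 2))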
Abstract:
Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore), where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit, the perceptual ambiguities of de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically, the work investigates Paul Virilio's theory that technology 'collapses space' via the relationship of technology to speed. Technically, this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. The work explores, tests and extends the music perception theories of 'reduced listening' (Schaeffer, 1967) and 'surrogacy' (Smalley, 1997), by demonstrating how, through specific audio processing techniques, sounds may be shifted away from 'causal' listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time and frequency domain processing techniques may be used to achieve this shift.
Abstract:
An algorithm based on the concept of Kalman filtering is proposed in this paper for the estimation of power system signal attributes such as amplitude, frequency and phase angle. This technique can be used in protection relays, digital AVRs, DSTATCOMs, FACTS devices and other power electronics applications. Furthermore, this algorithm is particularly suitable for the integration of distributed generation sources into power grids when fast and accurate detection of small variations of signal attributes is needed. Practical considerations such as the effect of noise, higher-order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations are presented to highlight the usefulness of the proposed approach. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even when the signal is highly distorted due to the presence of non-linear loads and noise.
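As a hedged, simplified sketch of Kalman-filter-based attribute estimation (this fixes the frequency at its nominal value and estimates only amplitude and phase; the paper's algorithm also tracks frequency, and the sampling rate and tuning values here are placeholders):

    import numpy as np

    fs, f0 = 10_000.0, 50.0                 # sampling rate and nominal frequency (assumed)
    omega = 2.0 * np.pi * f0
    q, r = 1e-4, 0.05                       # process / measurement noise tuning (assumed)

    # State x = [A*cos(phi), A*sin(phi)]; measurement y_k = x1*cos(w t_k) - x2*sin(w t_k).
    x = np.zeros(2)
    P = np.eye(2)
    Q = q * np.eye(2)

    # Synthetic distorted test signal: fundamental + 5th harmonic + noise.
    t = np.arange(0, 0.2, 1.0 / fs)
    y = 1.5 * np.cos(omega * t + 0.6) + 0.2 * np.cos(5 * omega * t) \
        + 0.05 * np.random.randn(t.size)

    for tk, yk in zip(t, y):
        # Predict (random-walk state model).
        P = P + Q
        # Update with the time-varying measurement row H_k.
        H = np.array([np.cos(omega * tk), -np.sin(omega * tk)])
        S = H @ P @ H + r
        K = P @ H / S
        x = x + K * (yk - H @ x)
        P = P - np.outer(K, H) @ P

    amplitude = np.hypot(x[0], x[1])
    phase = np.arctan2(x[1], x[0])
    print(f"estimated amplitude = {amplitude:.3f}, phase = {phase:.3f} rad")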
Abstract:
This paper investigates the problem of appropriate load sharing in an autonomous microgrid. High-gain angle droop control ensures proper load sharing, especially under weak system conditions, but it has a negative impact on overall stability. Frequency domain modeling, eigenvalue analysis and time domain simulations are used to demonstrate this conflict. A supplementary loop is proposed around the conventional droop control of each DG converter to stabilize the system while using high angle droop gains. The control loops are based on local power measurement and modulation of the d-axis voltage reference of each converter. Coordinated design of the supplementary control loops for each DG is formulated as a parameter optimization problem and solved using an evolutionary technique. The supplementary droop control loop is shown to stabilize the system for a range of operating conditions while ensuring satisfactory load sharing.
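For reference, conventional angle droop, which the paper takes as its baseline, ties each converter's output voltage angle to its measured real power; a schematic form, with symbols of our choosing rather than the paper's, is

    \delta_i = \delta_i^{*} - m_i \, (P_i - P_i^{*}),

where a larger droop gain m_i improves load sharing under weak system conditions but erodes damping, which is the conflict the proposed supplementary loop (modulating the d-axis voltage reference from local power measurements) is designed to resolve.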
Abstract:
In this thesis, a new technique has been developed for determining the composition of a collection of loads that includes induction motors. The application would be to provide a representation of the dynamic electrical load of Brisbane so that the ability of the power system to survive a given fault can be predicted. Most of the work on load modelling to date has been on post-disturbance analysis, not on continuous on-line models for loads. The post-disturbance methods are unsuitable for load modelling where the aim is to determine the control action or a safety margin for a specific disturbance. This thesis is therefore based on on-line load models. Dr. Tania Parveen considers 10 induction motors with different power ratings, inertias and torque damping constants to validate the approach, and their composite models are developed with different percentage contributions for each motor. The thesis also shows how measurements of a composite load respond to normal power system variations, and how this information can be used to continuously decompose the load and to characterize it in terms of the sizes and amounts of motor load.
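As a hedged illustration of the composition idea only (not the method developed in the thesis), given candidate responses of individual motor models to the same small system variation, the contribution of each motor to a measured composite response can be estimated with non-negative least squares:

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)

    # Columns of A: responses of individual candidate motor models to the same small
    # voltage/frequency variation (placeholder synthetic responses).
    n_samples, n_motors = 500, 10
    A = rng.standard_normal((n_samples, n_motors)).cumsum(axis=0)

    # Measured composite response = unknown non-negative mix of the candidates + noise.
    true_mix = np.array([0.3, 0.0, 0.2, 0.0, 0.1, 0.0, 0.25, 0.0, 0.15, 0.0])
    b = A @ true_mix + 0.01 * rng.standard_normal(n_samples)

    # Non-negative least squares recovers the percentage contributions.
    mix, residual = nnls(A, b)
    print("estimated contributions (%):", np.round(100 * mix / mix.sum(), 1))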