983 results for GAUSSIAN GENERATOR FUNCTIONS
Abstract:
Many optical networks are limited in speed and processing capability by the need for the optical signal to be converted to an electrical signal and back again. In addition, electronically manipulated interconnects in an otherwise optical network lead to overly complicated systems. Optical spatial solitons are optical beams that propagate without spatial divergence. They are capable of phase dependent interactions, and have therefore been extensively researched as suitable all optical interconnects for over 20 years. However, they require additional external components: initially high voltage power sources were required, and several years later high power background illumination replaced the high voltage. These additional components have remained the greatest hurdle in realising the application of spatial optical soliton interactions as all optical interconnects. Recently, however, self-focusing was observed in an otherwise self-defocusing photorefractive crystal. This observation raises the possibility of the formation of soliton-like fields in unbiased self-defocusing media, without the need for an applied electrical field or background illumination. This thesis will present an examination of the possibility of the formation of soliton-like low divergence fields in unbiased self-defocusing photorefractive media. The optimal incident beam and photorefractive media parameters for the formation of these fields will be presented, together with an analytical and numerical study of the effect of these parameters. In addition, a preliminary examination of the interactions of two of these fields will be presented. In order to complete an analytical examination of the field propagating through the photorefractive medium, the spatial profile of the beam after propagation through the medium was determined. For a low power solution, it was found that an incident Gaussian field maintains its Gaussian profile as it propagates. This allowed the beam to be described at all times by a single complex beam parameter, while also allowing simple analytical solutions to the appropriate wave equation. An analytical model was developed to describe the effect of the photorefractive medium on the Gaussian beam. Using this model, expressions for the required intensity dependent change in both the real and imaginary components of the refractive index were found. Numerical investigation showed that under certain conditions, a low powered Gaussian field could propagate in self-defocusing photorefractive media with divergence of approximately 0.1 % per metre. An investigation into the parameters of a Ce:BaTiO3 crystal showed that the intensity dependent absorption is wavelength dependent, and can in fact transition to intensity dependent transparency. Thus, with careful wavelength selection, the required intensity dependent change in both the real and imaginary components of the refractive index for the formation of a low divergence Gaussian field is physically realisable. A theoretical model incorporating the dependence of the change in the real and imaginary components of the refractive index on propagation distance was developed. Analytical and numerical results from this model are congruent with the results from the previous model, showing low divergence fields with divergence of less than 0.003 % over the propagation length of the photorefractive medium.
In addition, this approach confirmed the previously mentioned self-focusing effect of the self-defocusing media, and provided an analogy to a negative index GRIN lens with an intensity dependent focal length. Experimental results supported the findings of the numerical analysis. Two low divergence fields were found to be able to interact in a Ce:BaTiO3 crystal in a soliton-like fashion, and the strength of these interactions was found to depend on the degree of divergence of the individual beams. This research found that low divergence fields are possible in unbiased self-defocusing photorefractive media, and that soliton-like interactions between two of these fields are possible. However, for these types of fields to be used in future all optical interconnects, the manipulation of these interactions, together with the ability of these fields to guide a second beam at a different wavelength, must be investigated.
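The complex beam parameter description mentioned above can be illustrated with a minimal sketch, assuming free-space propagation only: a Gaussian beam is tracked by q, where 1/q = 1/R - i*lambda/(pi*w^2), and propagation simply shifts q. The wavelength and waist below are illustrative values, not parameters from the thesis, and the intensity dependent refractive index of the photorefractive medium is not modelled.

# Minimal sketch (free space only): the photorefractive index change studied in
# the thesis is not modelled. The beam is described at all times by its complex
# beam parameter q.
import math

wavelength = 532e-9                     # metres (illustrative, not from the thesis)
w0 = 50e-6                              # waist radius in metres (assumed)

z_R = math.pi * w0**2 / wavelength      # Rayleigh range
q0 = complex(0.0, z_R)                  # q at the waist: q = z + i*z_R

def propagate(q, d):
    """Free-space ABCD propagation of the complex beam parameter."""
    return q + d

def spot_size(q):
    """Recover the 1/e^2 radius w from Im(1/q) = -lambda/(pi*w^2)."""
    return math.sqrt(-wavelength / (math.pi * (1.0 / q).imag))

q1 = propagate(q0, 0.01)                # propagate 1 cm
print(spot_size(q0), spot_size(q1))     # the spot grows, i.e. the beam diverges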
Abstract:
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts does vary: QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions’ ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
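The two statistical loss functions named above are commonly written, for a forecast covariance matrix H and a volatility proxy S, as the squared Frobenius distance (MSE) and log|H| + tr(H^{-1}S) (QLIKE). The sketch below uses those common definitions; the exact scalings used in the thesis may differ.

# Sketch of the MSE and QLIKE statistical loss functions in their commonly used
# multivariate forms; the thesis may adopt slightly different conventions.
import numpy as np

def mse_loss(H, S):
    """MSE: squared Frobenius distance between forecast H and proxy S."""
    D = H - S
    return np.sum(D * D)

def qlike_loss(H, S):
    """QLIKE: log det(H) + trace(H^{-1} S); minimised in expectation at H = E[S]."""
    _, logdet = np.linalg.slogdet(H)
    return logdet + np.trace(np.linalg.solve(H, S))

# Toy example: a forecast covariance and a noisy realised-covariance proxy.
H = np.array([[1.0, 0.3], [0.3, 2.0]])
S = np.array([[1.1, 0.25], [0.25, 1.9]])
print(mse_loss(H, S), qlike_loss(H, S))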
Abstract:
In this study we set out to dissociate the developmental time course of automatic symbolic number processing and cognitive control functions in grade 1-3 British primary school children. Event-related potential (ERP) and behavioral data were collected in a physical size discrimination numerical Stroop task. Task-irrelevant numerical information was already processed automatically in grade 1. Weakening interference and strengthening facilitation indicated the parallel development of general cognitive control and automatic number processing. Relationships among ERP and behavioral effects suggest that control functions play a larger role in younger children and that the automaticity of number processing increases from grade 1 to 3.
Abstract:
This paper presents an approach to building an observation likelihood function from a set of sparse, noisy training observations taken from known locations by a sensor with no obvious geometric model. The basic approach is to fit an interpolant to the training data, representing the expected observation, and to assume additive sensor noise. This paper takes a Bayesian view of the problem, maintaining a posterior over interpolants rather than simply the maximum-likelihood interpolant, giving a measure of uncertainty in the map at any point. This is done using a Gaussian process framework. To validate the approach experimentally, a model of an environment is built using observations from an omni-directional camera. After a model has been built from the training data, a particle filter is used to localise while traversing this environment.
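A minimal sketch of the Gaussian process idea described above follows: noisy observations at known locations are fitted with a GP, and the predictive mean and variance provide the expected observation and its uncertainty at a query location. The squared-exponential kernel, its hyperparameters and the 1-D locations are illustrative assumptions, not the paper's sensor model.

# Minimal Gaussian process regression sketch: predictive mean and variance for
# the expected observation at query locations, given sparse noisy training data.
import numpy as np

def sq_exp_kernel(a, b, length=1.0, signal=1.0):
    d = a[:, None] - b[None, :]
    return signal**2 * np.exp(-0.5 * (d / length)**2)

x_train = np.array([0.0, 1.0, 2.5, 4.0])      # known observation locations
y_train = np.array([0.2, 0.9, 0.1, -0.5])     # noisy observations taken there
noise = 0.1                                   # assumed additive sensor noise (std dev)

K = sq_exp_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))

def predict(x_query):
    k_star = sq_exp_kernel(x_query, x_train)
    mean = k_star @ np.linalg.solve(K, y_train)
    var = sq_exp_kernel(x_query, x_query).diagonal() - \
          np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
    return mean, var

mu, var = predict(np.array([1.5, 3.0]))
print(mu, var)    # expected observation and uncertainty at the query locations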
Abstract:
This paper illustrates the damage identification and condition assessment of a three story bookshelf structure using a new frequency response function (FRF) based damage index and artificial neural networks (ANNs). A major obstacle to using measured frequency response function data is the large number of input variables presented to the ANNs. This problem is overcome by applying a data reduction technique called principal component analysis (PCA). In the proposed procedure, ANNs, with their powerful pattern recognition and classification ability, are used to extract damage information such as damage locations and severities from the measured FRFs. Simple neural network models, trained by back propagation (BP), are therefore developed to associate the FRFs with the damaged or undamaged locations and the severity of the damage in the structure. Finally, the effectiveness of the proposed method is illustrated and validated using real data provided by the Los Alamos National Laboratory, USA. The results show that the PCA-based artificial neural network method is suitable and effective for damage identification and condition assessment of building structures. In addition, it is clearly demonstrated that the accuracy of the proposed damage detection method can be improved by increasing the number of baseline datasets and the number of principal components of the baseline dataset.
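A hedged sketch of the general pipeline described above follows: FRF measurements are compressed with PCA and the retained components are fed to a small backpropagation-trained network. The random data, dimensions, layer sizes and labels are placeholders, not the bookshelf-structure configuration used in the paper.

# Illustrative PCA + ANN pipeline on placeholder data (not the paper's dataset).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_cases, n_freq_lines = 200, 1024                  # FRF samples per case (assumed)
frf_data = rng.normal(size=(n_cases, n_freq_lines))
damage_label = rng.integers(0, 4, size=n_cases)    # e.g. undamaged + 3 damage states

pca = PCA(n_components=10)                         # data reduction step
features = pca.fit_transform(frf_data)

ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)   # backpropagation-trained
ann.fit(features, damage_label)
print(ann.score(features, damage_label))           # in-sample accuracy on toy data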
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage using noise-polluted data, which is an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results from structural damage identification methods. It is therefore important to investigate a method which can perform well with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures that is able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the MATLAB function datagen, which is available on the web site of the IASC-ASCE (International Association for Structural Control - American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of the FRFs, calculation of the damage index values using the proposed algorithm, development of the artificial neural networks, introduction of the damage indices as input parameters, and damage detection of the structure. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed in order to facilitate the comparison of various damage identification methods, and this paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study at different noise levels. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are a common occurrence when dealing with industrial structures.
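One step implied above, polluting simulated FRFs with noise at a chosen level before the PCA stage, can be sketched as follows. The proportional Gaussian noise model and the 5 % level are assumptions for illustration, not the benchmark's noise definition.

# Sketch of adding noise to simulated FRFs at a given level (assumed noise model).
import numpy as np

def add_noise(frf, level=0.05, rng=None):
    """Add zero-mean Gaussian noise scaled to `level` times the RMS of each FRF."""
    rng = rng or np.random.default_rng()
    rms = np.sqrt(np.mean(frf**2, axis=-1, keepdims=True))
    return frf + level * rms * rng.standard_normal(frf.shape)

clean_frfs = np.abs(np.fft.rfft(np.random.default_rng(1).normal(size=(6, 512))))
noisy_frfs = add_noise(clean_frfs, level=0.05)
# The PCA-based damage index is then computed from the noisy FRFs and compared
# against baseline (undamaged) datasets.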
Abstract:
The Midwestern US is rich in wind resources, and wind power is being developed in this region at a very brisk pace. Transporting this energy resource to load centers invariably requires massive transmission lines. The issue of developing additional transmission to support reliable integration of wind onto the power grid presents a multitude of interesting challenges spanning various areas of power systems, such as transmission planning, real-time operations and cost allocation for new transmission. The Midwest ISO, as a regional transmission provider, is responsible for processing requests to interconnect proposed generation to the transmission grid under its purview. This paper provides information about some of the issues faced in performing interconnection planning studies and the Midwest ISO's efforts to improve its generator interconnection procedures. Related cost-allocation efforts currently under way at the Midwest ISO to streamline the integration of bulk quantities of wind power into the transmission grid are also presented.
Abstract:
The security of power transfer across a given transmission link is typically assessed in the steady state. This paper develops tools to assess machine angle stability as affected by a combination of faults and uncertainty in wind power using probability analysis. The paper elaborates on the development of the theoretical assessment tool and demonstrates its efficacy using a single machine infinite bus system.
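The kind of probabilistic assessment described above can be sketched, under purely illustrative assumptions, as a Monte Carlo study on a single machine infinite bus system: sample the wind-dependent mechanical power, integrate the classical swing equation through a fault and its clearance, and count the fraction of runs that remain first-swing stable. None of the parameters below are taken from the paper.

# Toy Monte Carlo sketch of probabilistic first-swing angle stability on a
# single machine infinite bus (SMIB) system; all parameters are assumed.
import numpy as np

H, D, f0 = 3.5, 0.0, 50.0                          # inertia (s), damping, frequency (Hz)
Pmax_pre, Pmax_fault, Pmax_post = 1.8, 0.4, 1.6    # per-unit power transfer limits
t_clear, t_end, dt = 0.15, 3.0, 1e-3               # fault clearing time, horizon, step

def first_swing_stable(Pm):
    delta = np.arcsin(min(Pm / Pmax_pre, 1.0))     # pre-fault equilibrium angle
    omega, t = 0.0, 0.0
    while t < t_end:
        Pmax = Pmax_fault if t < t_clear else Pmax_post
        accel = (np.pi * f0 / H) * (Pm - Pmax * np.sin(delta) - D * omega)
        omega += accel * dt
        delta += omega * dt
        if delta > np.pi:                          # pole slip: unstable
            return False
        t += dt
    return True

rng = np.random.default_rng(0)
Pm_samples = np.clip(rng.normal(0.8, 0.15, size=500), 0.1, 1.2)   # wind uncertainty
prob_stable = np.mean([first_swing_stable(Pm) for Pm in Pm_samples])
print(f"estimated probability of maintaining angle stability: {prob_stable:.3f}")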
Abstract:
Today, a large number of wind generator interconnection requests have been queued and are being processed. The generator interconnection group study is a way to reduce the generator interconnection cycle time and increase interconnection certainty. However, it is very challenging to identify the “best” transmission upgrades for a large group of generator interconnections. It is also very important to differentiate the constraints caused by each generator interconnection request and identify their responsibilities for transmission upgrades. This paper outlines some innovative study approaches that can be used in a group study with large numbers of generator interconnection requests in a constrained area. Improved study methods are introduced, and a summary and conclusions are derived from the study.
Abstract:
This paper presents a model for generating a MAC tag by injecting the input message directly into the internal state of a nonlinear filter generator. This model generalises a similar model for unkeyed hash functions proposed by Nakano et al. We develop a matrix representation for the accumulation phase of our model and use it to analyse the security of the model against man-in-the-middle forgery attacks based on collisions in the final register contents. The results of this analysis show that some conclusions of Nakano et al. regarding the security of their model are incorrect. We also use our results to comment on several recent MAC proposals which can be considered as instances of our model, and specify choices of options within the model which should prevent the type of forgery discussed here. In particular, suitable initialisation of the register and active use of a secure nonlinear filter will prevent an attacker from finding a collision in the final register contents that could result in a forged MAC.
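A toy sketch of the structure analysed above is given below: message bits are injected into the state of a nonlinear filter generator during an accumulation phase, and the tag is taken from filter outputs of the final register contents. The register length, feedback taps, filter function and tag length are placeholders for illustration, not the construction studied in the paper, and this toy version is not secure.

# Toy nonlinear filter generator MAC sketch (illustrative only, not secure and
# not the paper's scheme): keyed initialisation, message injection into the
# register during accumulation, then tag bits filtered from the final state.

FEEDBACK_TAPS = [0, 2, 3, 5]          # assumed feedback positions of the register
FILTER_TAPS = [1, 4, 6, 7]            # assumed inputs to the nonlinear filter

def filter_output(state):
    """A simple nonlinear Boolean filter of selected state bits."""
    a, b, c, d = (state[i] for i in FILTER_TAPS)
    return (a & b) ^ (c & d) ^ a

def step(state, injected_bit=0):
    """One clock: shift, with feedback XORed with the injected message bit."""
    fb = 0
    for t in FEEDBACK_TAPS:
        fb ^= state[t]
    return state[1:] + [fb ^ injected_bit]

def mac_tag(key_state, message_bits, tag_len=16):
    state = list(key_state)           # keyed initialisation of the register
    for m in message_bits:            # accumulation phase: inject message bits
        state = step(state, m)
    tag = []
    for _ in range(tag_len):          # finalisation: filter the evolved state
        tag.append(filter_output(state))
        state = step(state)
    return tag

print(mac_tag([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 1, 1, 1, 0, 0, 1, 0, 1]))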
Abstract:
The research described in this paper forms part of an in-depth investigation of safety culture in one of Australia’s largest construction companies. The research builds on a previous qualitative study with organisational safety leaders and further investigates how safety culture is perceived and experienced by organisational members, as well as how this relates to their safety behaviour and related outcomes at work. Participants were 2273 employees of the case study organisation, with 689 from the Construction function and 1584 from the Resources function. The results of several analyses revealed some interesting organisational variance on key measures. Specifically, the Construction function scored significantly higher on all key measures: safety climate, safety motivation, safety compliance, and safety participation. The results are discussed in terms of relevance in an applied research context.
Abstract:
This paper presents a novel power control strategy that decouples the active and reactive power for a synchronous generator connected to a power network. The proposed control paradigm considers the capacitance of the transmission line along with its resistance and reactance. Moreover, the proposed controller takes into account all cases of the R-X relationship, allowing it to function in Virtual Power Plant (VPP) structures which operate at both medium voltage (MV) and low voltage (LV) levels. The independent control of active and reactive power is achieved through rotational transformations of the terminal voltages and currents at the synchronous generator's output. This paper details the control technique by first presenting the mathematical and electrical network analysis of the methodology and then implementing the control in a MATLAB-SIMULINK simulation.
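The decoupling idea can be illustrated with a textbook-style rotational transformation, in which measured active and reactive power are rotated by the complement of the line impedance angle so that the transformed components behave as they would for a purely inductive line. This is a generic sketch under that assumption; the paper's own transformation, which also accounts for line capacitance, may differ.

# Generic sketch of active/reactive power decoupling by a rotational
# transformation based on the line R-X ratio (illustrative values; not the
# paper's exact formulation, which also includes line capacitance).
import numpy as np

def impedance_angle(R, X):
    """Angle of the series line impedance Z = R + jX."""
    return np.arctan2(X, R)

def rotate_power(P, Q, theta):
    """Rotate (P, Q) by the complement of the impedance angle theta."""
    phi = np.pi / 2 - theta            # zero for a purely inductive line (R = 0)
    T = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    return T @ np.array([P, Q])

# Low voltage feeder example where resistance and reactance are comparable.
R, X = 0.6, 0.8                        # per-unit line parameters (assumed)
P_mod, Q_mod = rotate_power(P=0.9, Q=0.2, theta=impedance_angle(R, X))
print(P_mod, Q_mod)                    # decoupled components for the control loops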
Abstract:
In 1980 Alltop produced a family of cubic phase sequences that nearly meet the Welch bound for maximum non-peak correlation magnitude. This family of sequences was shown by Wootters and Fields to be useful for quantum state tomography. Alltop’s construction used a function that is not planar, but whose difference function is planar. In this paper we show that Alltop type functions cannot exist in fields of characteristic 3, and that for a known class of planar functions, x^3 is the only Alltop type function.
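The classical Alltop family referred to at the start can be checked numerically for a small prime p >= 5: the sequences s_a(t) = exp(2*pi*i*(t + a)^3 / p) have pairwise inner products of magnitude sqrt(p), i.e. 1/sqrt(p) after normalisation, consistent with the near-Welch-bound correlation mentioned above. The sketch below only illustrates that construction; it does not touch the paper's characteristic-3 result.

# Numerical check of the cubic phase (Alltop) family for p = 7 (illustrative).
import numpy as np

p = 7                                    # a prime >= 5; characteristic 3 is excluded
t = np.arange(p)
omega = np.exp(2j * np.pi / p)

family = np.array([omega ** (((t + a) ** 3) % p) for a in range(p)])

gram = family @ family.conj().T          # pairwise inner products
off_diag = np.abs(gram - np.diag(np.diag(gram)))
print(off_diag.max(), np.sqrt(p))        # both approximately sqrt(7)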