76 results for TIME-DOMAIN METHOD
Abstract:
The morphology of plasmonic nano-assemblies has a direct influence on optical properties such as the localised surface plasmon resonance (LSPR) and the surface-enhanced Raman scattering (SERS) intensity. Assemblies with core-satellite morphologies are of particular interest because this morphology offers a high density of hot-spots while constraining the overall size. Herein, a simple method is reported for the self-assembly of gold nanoparticles (NPs) into nano-assemblies with a core-satellite morphology, mediated by hyperbranched polymer (HBP) linkers. The HBP linkers have repeat units that do not interact strongly with gold NPs, but multiple end-groups that interact specifically with the gold NPs and act as anchoring points, resulting in nano-assemblies with a large (~48 nm) core surrounded by smaller (~15 nm) satellites. It was possible to control the number of satellites in an assembly, which allowed optical parameters such as the LSPR maximum and the SERS intensity to be tuned. These results were found to be consistent with finite-difference time-domain (FDTD) simulations. Furthermore, multiplexing of the nano-assemblies with a series of Raman tag molecules was demonstrated, without an observable signal arising from the HBP linker after tagging. Such plasmonic nano-assemblies could potentially serve as efficient SERS-based diagnostic or biomedical imaging agents in nanomedicine.
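Full three-dimensional FDTD modelling of a plasmonic core-satellite assembly requires a dispersive material model for gold and is well beyond a short sketch, but the leapfrog time stepping at the heart of the method can be shown in one dimension. The sketch below (function name and parameters are illustrative, not from the paper) advances the electric and magnetic fields on a staggered Yee grid in normalised units:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, src=100):
    """Minimal 1-D FDTD leapfrog on a staggered (Yee) grid, in normalized
    units (Courant number = 1; perfectly reflecting ends)."""
    ez = np.zeros(n_cells)        # electric field at integer grid points
    hy = np.zeros(n_cells - 1)    # magnetic field at half-integer points
    for t in range(n_steps):
        hy += np.diff(ez)                             # update H from curl of E
        ez[1:-1] += np.diff(hy)                       # update E from curl of H
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # Gaussian soft source
    return ez

field = fdtd_1d()
```

Each loop iteration is one half-step-offset update pair; realistic plasmonic solvers add absorbing boundaries and a Drude/Lorentz model for the metal on top of this same core loop.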
Abstract:
This paper proposes a nonlinear excitation controller to improve the transient stability, oscillation damping and voltage regulation of a power system. The energy function of the predicted system states is used to obtain the desired flux for the next time step, which in turn is used to obtain a supplementary control input via an inverse filtering method. The inverse filtering technique provides an additional input for the excitation system that forces the system to track the desired flux. A synchronous generator flux saturation model is used in this paper. A single-machine infinite-bus (SMIB) test system is used to demonstrate the efficacy of the proposed control method using time-domain simulations. The robustness of the controller is assessed under different operating conditions.
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model that may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single-receiver data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differenced method and the polynomial prediction method, respectively. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is thus applicable to both differenced and un-differenced data processing modes. However, the methods may be restricted to normal ionospheric conditions and to GNSS receivers with low noise autocorrelation. Experimental results also indicate that the proposed method can yield more realistic parameter precision.
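The time-differenced idea can be illustrated in a few lines: differencing carrier-phase observations between consecutive epochs cancels the constant ambiguity and most of the slowly varying ionospheric bias, and for white observation noise the variance of the differences is twice the single-epoch variance. The simulation below is a hypothetical sketch of that principle only, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated single-receiver phase series (metres): a slowly varying bias
# (ionosphere plus a constant ambiguity term) plus white observation noise.
sigma_true = 0.003                 # assumed noise level for this demo
t = np.arange(3000)
bias = 0.5 + 1e-5 * t              # slow drift, nearly constant per epoch
phase = bias + rng.normal(0.0, sigma_true, t.size)

# Time-differenced method: epoch-to-epoch differencing cancels the constant
# ambiguity and most of the slow drift; for white noise,
# var(diff) = 2 * var(obs), so the single-epoch sigma follows directly.
diffs = np.diff(phase)
sigma_est = np.sqrt(np.var(diffs) / 2.0)
```

The factor of two is exact only for uncorrelated noise, which is why the abstract flags receivers with strong noise autocorrelation as a limitation.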
Abstract:
Purpose – In structural, earthquake and aeronautical engineering and in mechanical vibration, the solution of the dynamic equations for a structure subjected to dynamic loading leads to a high-order system of differential equations. Numerical methods are usually used for integration when one is dealing with discrete data or when there is no analytical solution for the equations. Since numerical methods with greater accuracy and stability give more accurate structural responses, there is a need to improve the existing methods or develop new ones. The paper aims to discuss these issues. Design/methodology/approach – In this paper, a new time integration method is proposed mathematically and numerically, and is applied to single-degree-of-freedom (SDOF) and multi-degree-of-freedom (MDOF) systems. Finally, the results are compared to existing methods such as Newmark’s method and to the closed-form solution. Findings – It is concluded that, in the proposed method, the variance of each set of structural responses (displacement, velocity or acceleration) across time steps is less than in Newmark’s method, and that the proposed method is more accurate and stable than Newmark’s method and can analyse a structure in fewer iterations or computation cycles, hence in less time. Originality/value – A new mathematical and numerical time integration method is proposed for the computation of structural responses with higher accuracy and stability, lower variance, and fewer computational cycles.
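The baseline the abstract compares against, Newmark's method, can be sketched compactly for an SDOF system. Below is a standard average-acceleration Newmark integrator (β = 1/4, γ = 1/2) for free vibration of m·u″ + c·u′ + k·u = 0; the proposed method itself is not specified in the abstract, so only the reference scheme is shown:

```python
import numpy as np

def newmark_sdof(m, c, k, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of m*u'' + c*u' + k*u = 0."""
    u = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    a = np.empty(n_steps + 1)
    u[0], v[0] = u0, v0
    a[0] = -(c * v0 + k * u0) / m                      # initial acceleration
    k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(n_steps):
        # Effective load from the previous step's state (zero external force).
        rhs = (m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                    + (0.5 / beta - 1.0) * a[i])
               + c * (gamma * u[i] / (beta * dt)
                      + (gamma / beta - 1.0) * v[i]
                      + dt * (0.5 * gamma / beta - 1.0) * a[i]))
        u[i + 1] = rhs / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u

# Undamped oscillator with a 1 s natural period, integrated over one period.
u = newmark_sdof(m=1.0, c=0.0, k=(2 * np.pi)**2, u0=1.0, v0=0.0,
                 dt=0.01, n_steps=100)
```

With β = 1/4 and γ = 1/2 the scheme is unconditionally stable and introduces no amplitude decay, only a small period elongation of order (ω·dt)²/12, which is the kind of per-step behaviour the paper's comparisons quantify.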
Abstract:
The focus of this paper is two-dimensional computational modelling of water flow in unsaturated soils consisting of weakly conductive disconnected inclusions embedded in a highly conductive connected matrix. When the inclusions are small, a two-scale model based on Richards’ equation has been proposed in the literature, taking the form of an equation with effective parameters governing the macroscopic flow, coupled with a microscopic equation, defined at each point of the macroscopic domain, governing the flow in the inclusions. This paper is devoted to a number of advances in the numerical implementation of this model. Namely, by treating the micro-scale as a two-dimensional problem, our solution approach based on a control volume finite element method can be applied to irregular inclusion geometries and, if necessary, modified to account for additional phenomena (e.g. imposing the macroscopic gradient on the micro-scale via a linear approximation of the macroscopic variable along the microscopic boundary). This is achieved with the help of an exponential integrator for advancing the solution in time. This time integration method completely avoids generation of the Jacobian matrix of the system and hence eases the computation when solving the two-scale model in a completely coupled manner. Numerical simulations are presented for a two-dimensional infiltration problem.
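As a flavour of the Jacobian-free time stepping mentioned above, the sketch below implements exponential Euler for a system u′ = Λu + g(t, u) with a diagonal (here scalar) stiff linear part. This toy is ours, not the paper's implementation: the paper treats the full two-scale Richards' model, where the φ-functions are typically applied via Krylov matrix-vector products rather than evaluated element-wise.

```python
import numpy as np

def phi1(z):
    """phi_1(z) = (exp(z) - 1) / z, with the removable singularity at 0 handled."""
    z = np.asarray(z, dtype=float)
    out = np.ones_like(z)
    nz = np.abs(z) > 1e-12
    out[nz] = np.expm1(z[nz]) / z[nz]
    return out

def exponential_euler(lam, g, u0, dt, n_steps):
    """Exponential Euler for u' = lam*u + g(t, u), lam diagonal.

    The stiff linear term is integrated exactly through exp/phi_1 factors,
    so no Jacobian is ever formed or factorised."""
    u = np.asarray(u0, dtype=float)
    t = 0.0
    for _ in range(n_steps):
        u = u + dt * phi1(lam * dt) * (lam * u + g(t, u))
        t += dt
    return u

# Stiff test: u' = lam*(u - cos t) - sin t has exact solution u(t) = cos t.
# Explicit Euler would be unstable here for dt > 2/|lam|.
lam = np.array([-1000.0])
g = lambda t, u: -lam * np.cos(t) - np.sin(t)
u_end = exponential_euler(lam, g, np.array([1.0]), dt=0.005, n_steps=200)  # t = 1
```

The stability benefit is visible in the test problem: the step size is 2.5 times the explicit-Euler stability limit, yet the solution tracks cos(t) closely.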
Abstract:
The integration of stochastic wind power has accentuated the challenge of power system stability assessment. Since the power system is time-variant under wind generation fluctuations, pure time-domain simulations can hardly provide real-time stability assessment. As a result, the worst-case scenario is simulated to give a very conservative assessment of system transient stability. In this study, a probabilistic contingency analysis based on a stability measure method is proposed to provide a less conservative contingency analysis covering 5-min wind fluctuations and a successive fault. This probabilistic approach estimates the transfer limit of a critical line for a given fault with stochastic wind generation and active control devices in a multi-machine system. The approach achieves a lower computation cost and improved accuracy using a new stability measure and polynomial interpolation, and is feasible for online contingency analysis.
Abstract:
Introduction: Work engagement is a recent application of positive psychology and refers to a positive, fulfilling, work-related state of mind characterized by vigor, dedication and absorption. Despite theoretical advances, there is little published research on work engagement, primarily because of its recent emergence as a psychological construct. Examining work engagement among high-stress occupations, such as policing, is also useful because police officers are generally characterized as having a high level of work engagement. Previous research has identified job resources (e.g. social support) as antecedents of work engagement; however, detailed evaluation of job demands as antecedents of work engagement within high-stress occupations has been scarce. Our aim was therefore to test job demands (i.e. monitoring demands and problem-solving demands) and job resources (i.e. time control, method control, supervisory support, colleague support, and friend and family support) as antecedents of work engagement among police officers. Method: Data were collected via a self-report online survey from one Australian state police service (n = 1,419). Due to the high number of hypothesized antecedent variables, hierarchical multiple regression analysis was employed rather than structural equation modelling. Results: Work engagement reported by police officers was high. Female officers had significantly higher levels of work engagement than male officers, while officers at mid-level ranks (sergeant) reported the lowest levels of work engagement. Job resources (method control, supervisor support and colleague support) were significant antecedents of the three dimensions of work engagement. Only monitoring demands were a significant antecedent of absorption. Conclusion: Having healthy and engaged police officers is important for community security and economic growth. This study identified some common factors that influence the work engagement experienced by police officers. However, we also note that excessive work engagement can yield negative outcomes such as psychological distress.
Abstract:
The international focus on embracing daylighting for energy-efficient lighting and the corporate sector’s indulgence in the perception of workplace and work practice “transparency” have spurred an increase in highly glazed commercial buildings. This in turn has renewed issues of visual comfort and daylight-derived glare for occupants. In order to ascertain evidence, or predict the risk, of these events, appraisals of these complex visual environments require detailed information on the luminances present in an occupant’s field of view. Conventional luminance meters are an expensive and time-consuming way of obtaining this information: creating a luminance map of an occupant’s visual field with such a meter requires too many individual measurements to be a practical technique. The application of digital cameras as luminance measurement devices has solved this problem. With high dynamic range imaging, a single digital image can be created to provide luminances on a pixel-by-pixel level within the broad field of view afforded by a fish-eye lens: virtually replicating an occupant’s visual field and providing rapid yet detailed luminance information for the entire scene. With proper calibration, relatively inexpensive digital cameras can be successfully applied to the task of luminance measurement, placing them in the realm of tools that any lighting professional should own. This paper discusses how a digital camera can become a luminance measurement device and then presents an analysis of results obtained from post-occupancy measurements from building assessments conducted by the Mobile Architecture Built Environment Laboratory (MABEL) project. This discussion leads to the important realisation that placing such tools in the hands of lighting professionals internationally will provide new opportunities for the lighting community for research on critical issues in lighting such as daylight glare and visual quality and comfort.
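The pixel-to-luminance step this approach relies on is straightforward once an HDR image with a linear response has been assembled: luminance is a weighted sum of the linear RGB channels times a calibration factor fixed against a spot luminance-meter reading. The sketch below uses Rec. 709 luminous-efficacy weights; the weights, function names and calibration procedure are our assumptions for illustration, not values from the MABEL project.

```python
import numpy as np

# Rec. 709 luminance weights (an assumption for this sketch; the paper does
# not state which colour-space weighting was used).
W_R, W_G, W_B = 0.2126, 0.7152, 0.0722

def luminance_map(hdr_rgb, k_cal):
    """Per-pixel luminance (cd/m^2) from a linear HDR RGB image: a weighted
    channel sum scaled by a scene calibration factor k_cal."""
    r, g, b = hdr_rgb[..., 0], hdr_rgb[..., 1], hdr_rgb[..., 2]
    return k_cal * (W_R * r + W_G * g + W_B * b)

def calibrate(hdr_patch, meter_cd_m2):
    """Fix k_cal by matching a uniform image patch against a spot
    luminance-meter reading of the same surface."""
    return meter_cd_m2 / luminance_map(hdr_patch, k_cal=1.0).mean()
```

In practice the HDR assembly itself (recovering the camera response curve and fusing bracketed exposures) is the harder step; once the image is linear, the mapping above is all that remains.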
Abstract:
This paper investigates the problem of appropriate load sharing in an autonomous microgrid. High-gain angle droop control ensures proper load sharing, especially under weak system conditions; however, it has a negative impact on overall stability. Frequency-domain modeling, eigenvalue analysis and time-domain simulations are used to demonstrate this conflict. A supplementary loop is proposed around the conventional droop control of each DG converter to stabilize the system while using high angle droop gains. The control loops are based on local power measurement and modulation of the d-axis voltage reference of each converter. The coordinated design of the supplementary control loops for all DGs is formulated as a parameter optimization problem and solved using an evolutionary technique. The supplementary droop control loop is shown to stabilize the system over a range of operating conditions while ensuring satisfactory load sharing.
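The steady-state intuition behind angle droop is that each converter shifts its voltage angle in proportion to its real power output, δ_i = δ* − m_i·P_i, so converters whose gains are chosen inversely proportional to rating share a common load in proportion to rating. The sketch below illustrates only this idealized sharing rule (names, gains and the lossless-network assumption are ours), not the paper's supplementary stabilizing loop:

```python
import numpy as np

def angle_droop_sharing(ratings, load, m_base=0.5):
    """Steady-state load sharing under angle droop delta_i = delta0 - m_i*P_i.

    With gains m_i inversely proportional to converter rating and a lossless
    network (so all converter angles settle to a common value), m_i*P_i is
    equal across converters and each unit picks up load in proportion to its
    rating. An idealized sketch, not the paper's controller."""
    ratings = np.asarray(ratings, dtype=float)
    m = m_base / ratings                   # gain inversely proportional to rating
    weights = (1.0 / m) / np.sum(1.0 / m)  # P_i proportional to 1/m_i
    return weights * load

shares = angle_droop_sharing([1.0, 2.0], load=3.0)  # 1 MW- and 2 MW-rated units
```

The stability conflict the paper addresses arises precisely because making the gains m_i large (for tight sharing under weak-grid conditions) shrinks the damping margins of the closed-loop dynamics, which this static picture does not capture.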
Abstract:
In this thesis, a new technique has been developed for determining the composition of a collection of loads including induction motors. An application would be to represent the dynamic electrical load of Brisbane so that the ability of the power system to survive a given fault can be predicted. Most of the work on load modelling to date has concerned post-disturbance analysis rather than continuous on-line load models. The post-disturbance methods are unsuitable for load modelling where the aim is to determine the control action or a safety margin for a specific disturbance. This thesis is based on on-line load models. Dr. Tania Parveen considers 10 induction motors with different power ratings, inertia and torque damping constants to validate the approach, and their composite models are developed with different percentage contributions for each motor. This thesis also shows how measurements of a composite load respond to normal power system variations, and how this information can be used to continuously decompose the load and to characterize it in terms of different sizes and amounts of motor loads.
Abstract:
As the use of renewable energy sources (RESs) increases worldwide, there is rising interest in their impacts on power system operation and control. An overview of the key issues and new challenges in frequency regulation arising from the integration of renewable energy units into power systems is presented. Following a brief survey of the existing challenges and recent developments, the impact of the power fluctuations produced by variable renewable sources (such as wind and solar units) on system frequency performance is presented. An updated LFC model is introduced, and the power system frequency response in the presence of RESs, together with the associated issues, is analysed. The need to revise frequency performance standards is emphasised. Finally, non-linear time-domain simulations on the standard 39-bus and 24-bus test systems show that the simulated results agree with those predicted analytically.
Abstract:
Signal processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time and are referred to as analog signals. Prior to the onset of digital computers, analog signal processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, digital signal processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operation at very low frequencies. Following the developments in engineering during the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted the traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering and control engineering. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. The core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
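As a taste of the Part I material, the convolution theorem — convolution in the time domain equals multiplication in the frequency domain — can be verified in a few lines (a generic illustration, not an excerpt from the book):

```python
import numpy as np

def convolve_direct(x, h):
    """Direct-form linear convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

def convolve_fft(x, h):
    """Same result via the convolution theorem: zero-pad both sequences to
    the full output length, multiply their DFTs, and invert."""
    n = len(x) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

y = convolve_direct([1.0, 2.0, 3.0], [1.0, 1.0])   # a 2-tap moving-sum filter
```

Zero-padding to the full output length is what turns the DFT's circular convolution into the linear convolution a filter actually performs; for long signals the FFT route drops the cost from O(N²) to O(N log N).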
Abstract:
The use of metal stripes for the guiding of plasmons is a well-established technique in the infrared regime and has resulted in the development of a myriad of passive optical components and sensing devices. However, the plasmons suffer large losses around sharp bends, making the compact design of nanoscale sensors and circuits problematic. A compact alternative is to exploit evanescent coupling between two sufficiently close stripes, and thus we propose a compact interferometer design based on evanescent coupling. The sensitivity of the design is compared with that achieved by a hand-held sensor based on the Kretschmann-style surface plasmon resonance technique. Modeling of the new interferometric sensor is performed for various structural parameters using finite-difference time-domain (FDTD) simulations and COMSOL Multiphysics. The physical mechanisms behind the coupling and propagation of plasmons in this structure are explained in terms of the allowed modes in each section of the device.