Abstract:
This paper presents a new algorithm based on a Modified Particle Swarm Optimization (MPSO) to estimate the harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and the phase of each injected harmonic current by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The algorithm can take into account the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main features of the proposed MPSO algorithm are the use of a primary and a secondary PSO loop and the application of a mutation function. Simulation results on the IEEE 34-bus radial and a realistic 70-bus radial test network are presented. The results demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is considerably faster and more accurate than algorithms such as Weighted Least Squares (WLS), the Genetic Algorithm (GA), the original PSO, and Honey Bees Mating Optimization (HBMO).
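As an illustration of the estimation loop described above, the following minimal Python sketch minimises a PMU measurement residual with a PSO that includes a mutation step. The linear measurement model `H`, its dimensions, and all swarm parameters are hypothetical stand-ins, and the paper's primary/secondary-loop structure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear measurement model: PMU readings z = H @ x + noise,
# where x stacks the real/imaginary parts of the injected harmonic currents.
n_state, n_meas = 6, 10
H = rng.normal(size=(n_meas, n_state))           # stand-in for network sensitivities
x_true = rng.normal(size=n_state)
z = H @ x_true + 0.01 * rng.normal(size=n_meas)  # noisy PMU measurements

def cost(x):
    """Squared residual between measured and computed values."""
    r = z - H @ x
    return float(r @ r)

# Basic PSO loop with a mutation step (one illustrative ingredient of MPSO).
n_particles, n_iter = 30, 200
pos = rng.normal(size=(n_particles, n_state))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2, p_mut = 0.7, 1.5, 1.5, 0.05
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    # Mutation: randomly perturb a few particles to escape local optima.
    mask = rng.random(n_particles) < p_mut
    pos[mask] += rng.normal(scale=0.5, size=(mask.sum(), n_state))
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("estimation error:", np.linalg.norm(gbest - x_true))
```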
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant techniques for developing emulators have been priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by applying it to a simple hydrological model.
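The conditioning step can be illustrated in isolation. The sketch below runs a scalar Kalman filter followed by a Rauch-Tung-Striebel smoothing pass on stand-in design data; the transition coefficient, noise variances, and data are assumptions, and the GASP model of the innovations over parameter space is omitted.

```python
import numpy as np

# Scalar linear state-space prior conditioned by Kalman filtering plus
# Rauch-Tung-Striebel (RTS) smoothing.
a, q, r = 0.9, 0.05, 1e-4               # transition, innovation var, obs. var (assumed)
y_obs = np.sin(np.linspace(0, 3, 40))   # stand-in for simulator output at design inputs
n = len(y_obs)

m_f, P_f = np.zeros(n), np.zeros(n)     # filtered means and variances
m, P = 0.0, 1.0
for t in range(n):
    m, P = a * m, a * a * P + q                   # predict
    K = P / (P + r)                               # Kalman gain
    m, P = m + K * (y_obs[t] - m), (1 - K) * P    # update
    m_f[t], P_f[t] = m, P

m_s, P_s = m_f.copy(), P_f.copy()       # RTS backward smoothing pass
for t in range(n - 2, -1, -1):
    P_pred = a * a * P_f[t] + q
    G = a * P_f[t] / P_pred
    m_s[t] += G * (m_s[t + 1] - a * m_f[t])
    P_s[t] += G * G * (P_s[t + 1] - P_pred)

print(m_s[:5])  # smoothed state estimates used by the emulator
```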
Abstract:
This paper presents a new algorithm, based on a hybrid of Particle Swarm Optimization (PSO) and Simulated Annealing (SA) and called PSO-SA, to estimate harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and the phase of each injected harmonic current by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The algorithm can take into account the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main feature of the proposed PSO-SA algorithm is that it first approaches the neighbourhood of the global optimum quickly using PSO with a mutation function enabled, and then locates that optimum with the SA search. Simulation results on the IEEE 34-bus radial and a realistic 70-bus radial test network demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is markedly more effective and efficient than conventional algorithms such as Weighted Least Squares (WLS), the Genetic Algorithm (GA), the original PSO, and the Honey Bees Mating Optimization (HBMO) algorithm.
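The SA refinement stage can be sketched as follows: starting from a point assumed to come from the PSO stage, a random-walk proposal is accepted under the Metropolis criterion with a geometrically cooled temperature. The objective, starting point, and schedule here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    # Stand-in objective; in DHSE this would be the PMU measurement residual.
    return float(np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2))

# Assume PSO has already delivered a point near the global optimum.
x = rng.normal(loc=0.8, scale=0.1, size=4)   # hypothetical PSO output
T, alpha = 1.0, 0.98                         # initial temperature, cooling rate
best, best_cost = x.copy(), cost(x)

for _ in range(2000):
    cand = x + rng.normal(scale=0.05, size=x.shape)   # local proposal
    d = cost(cand) - cost(x)
    if d < 0 or rng.random() < np.exp(-d / T):        # Metropolis acceptance
        x = cand
        if cost(x) < best_cost:
            best, best_cost = x.copy(), cost(x)
    T *= alpha                                        # geometric cooling

print("refined optimum:", best, "cost:", best_cost)
```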
Abstract:
To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small-field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air, between 0 and 2 mm, were deliberately introduced upstream of a diode, and the dose perturbation caused by the air was quantified. These simulations were then repeated over a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large: for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can therefore be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.
Abstract:
This thesis is the first comprehensive study of important parameters relating to the impact of aerosols on climate and human health, namely spatial variation, particle size distribution and new particle formation. We determined the importance of spatial variation of particle number concentration in microscale environments, developed a method for particle size parameterisation and provided knowledge about the chemistry of new particle formation. This is a significant contribution to our understanding of the processes behind the transformation and dynamics of urban aerosols. The PhD project included extensive measurements of air quality parameters using state-of-the-art instrumentation at each of 25 sites within the Brisbane metropolitan area, together with advanced statistical analysis.
Abstract:
In clinical electrocardiogram (ECG) signal analysis it is important to detect not only the centres of the P wave, the QRS complex and the T wave, but also time intervals such as the ST segment. Much research has focused entirely on QRS complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, and the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and from beat to beat. To address this variability we propose the use of Markov chain Monte Carlo (MCMC) statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study investigating the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal in which details such as wave morphology and noise levels are variable.
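As a hedged illustration of the idea, the sketch below fits a single Gaussian-shaped wave to a noisy synthetic segment using random-walk Metropolis sampling; extracting P, QRS and T features would use a sum of such components, and all modelling choices here (flat priors, noise level, proposal scales) are assumptions rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "ECG segment": one Gaussian-shaped wave plus noise.
t = np.linspace(0, 1, 200)
true = dict(amp=0.3, mu=0.6, sig=0.05)
y = true["amp"] * np.exp(-0.5 * ((t - true["mu"]) / true["sig"]) ** 2)
y += 0.02 * rng.normal(size=t.size)

def log_post(theta):
    """Log-posterior with flat priors and known Gaussian noise (assumed)."""
    amp, mu, sig = theta
    if sig <= 0:
        return -np.inf
    model = amp * np.exp(-0.5 * ((t - mu) / sig) ** 2)
    return -0.5 * np.sum((y - model) ** 2) / 0.02**2

theta = np.array([0.2, 0.5, 0.1])       # initial guess (amp, centre, width)
lp = log_post(theta)
samples = []
for _ in range(5000):                    # random-walk Metropolis
    prop = theta + rng.normal(scale=[0.01, 0.01, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])          # discard burn-in
print("posterior mean (amp, centre, width):", post.mean(axis=0))
```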
Abstract:
This chapter addresses data modelling as a means of promoting statistical literacy in the early grades. Consideration is first given to the importance of increasing young children’s exposure to statistical reasoning experiences and how data modelling can be a rich means of doing so. Selected components of data modelling are then reviewed, followed by a report on some findings from the third year of a three-year longitudinal study across grades one through three.
Abstract:
At NDSS 2012, Yan et al. analyzed the security of several challenge-response type user authentication protocols against passive observers, and proposed a generic counting-based statistical attack to recover the secret of some counting-based protocols given a number of observed authentication sessions. Roughly speaking, the attack is based on the fact that secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects when the responses are taken into account. Although they mentioned that a protocol susceptible to this attack should minimize this difference, they did not give details as to how this can be achieved, beyond a few suggestions. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative, which enables us to describe a method for theoretically estimating a lower bound on the number of sessions for which a protocol can be safely used against the attack. Our results include: 1) two proposed fixes that make counting protocols practically safe against the attack, at the cost of usability; 2) the observation that the attack can also be used on non-counting-based protocols as long as challenge generation is contrived; and 3) two main design principles for user authentication protocols, which can be considered extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting-based protocols to determine their susceptibility to the attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
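A toy simulation makes the counting bias concrete. The protocol below is a generic counting scheme, not the exact Foxtail rules: objects appear in challenges with probability 0.3 and the response is the parity of the number of pass objects present. The attack then compares each object's presence frequency between the two response classes; all parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic counting protocol: response = parity of pass objects present.
# Pass objects shift the response parity when present, so their presence
# statistics differ between r=1 and r=0 sessions; decoys show no bias.
n_obj, n_secret, n_sessions, p = 60, 3, 10_000, 0.3
secret = rng.choice(n_obj, size=n_secret, replace=False)

present = rng.random((n_sessions, n_obj)) < p           # challenge membership
responses = present[:, secret].sum(axis=1) % 2           # observed responses

score = (present[responses == 1].mean(axis=0)
         - present[responses == 0].mean(axis=0))
guess = np.argsort(-np.abs(score))[:n_secret]

print("true secret :", sorted(secret.tolist()))
print("attack guess:", sorted(guess.tolist()))
```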
Abstract:
The aim of the current study was to estimate heritabilities and correlations for body traits at different ages (Weeks 10 and 18 after stocking) in a giant freshwater prawn (GFP; Macrobrachium rosenbergii) population selected for fast growth rate in Vietnam. The dataset consisted of 4650 body records (2432 and 2218 records collected at Weeks 10 and 18, respectively) in the full pedigree comprising a total of 18 387 records. Variance and covariance components were estimated using restricted maximum likelihood, fitting a multi-trait animal model. Estimates of heritability for body traits (bodyweight, body length, cephalothorax length, abdominal length, cephalothorax width and abdominal width) were moderate, ranging from 0.06 to 0.11 at Week 10 and from 0.11 to 0.22 at Week 18. Body-trait heritabilities estimated at Week 10 were not significantly lower than those at Week 18. Genetic correlations between body traits within each age, and genetic correlations for body traits between ages, were generally high. Our results suggest that selection for high growth rate in GFP can be undertaken successfully before full market size has been reached.
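For reference, the quantities reported above follow from the estimated (co)variance components in the standard way:

$$ h^2 = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_E^2}, \qquad r_g = \frac{\operatorname{Cov}(A_1, A_2)}{\sigma_{A_1}\,\sigma_{A_2}}, $$

where $\sigma_A^2$ and $\sigma_E^2$ are the additive genetic and residual variances of a trait, and $A_1$, $A_2$ are the additive genetic effects of two traits (or of one trait measured at the two ages).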
Abstract:
Rapidly increasing electricity demand and the capacity shortage of transmission and distribution facilities are the main driving forces behind the growth of Distributed Generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selecting effective DG characteristics and parameters is a significant concern for distribution system planners seeking the maximum potential benefit from a DG unit. This paper addresses the issue of improving the network voltage profile in distribution systems by installing a DG of the most suitable size at the most suitable location. An analytical approach based on algebraic equations for uniformly distributed loads is developed to determine the optimal operation, size and location of the DG needed to achieve the required network voltage levels. The method is simple to use for conceptual design and analysis of distribution system expansion with a DG, and is suitable for quick estimation of DG parameters (such as the optimal operating angle, size and location of a DG system) in a radial network. A practical network is used to verify the proposed technique, and test results are presented.
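A simplified numerical counterpart of this analysis is sketched below: a purely resistive radial feeder with uniformly distributed load current, over which the DG location and injection size are grid-searched to keep the voltage profile closest to 1 p.u. All parameters are illustrative assumptions, and the operating-angle optimisation from the paper is omitted.

```python
import numpy as np

# Lumped, purely resistive, per-unit model of a radial feeder with
# uniformly distributed load and a single DG current injection.
n_seg = 20                      # feeder discretised into equal segments
z = 0.002                       # impedance of each segment (assumed, p.u.)
load = np.full(n_seg, 0.05)     # uniformly distributed load current (p.u.)

def voltage_profile(dg_node, dg_current):
    v, volts = 1.0, []
    for k in range(n_seg):
        # Current through segment k: all downstream load, less the DG
        # injection if the DG sits at or beyond node k.
        downstream = load[k:].sum() - (dg_current if dg_node >= k else 0.0)
        v -= z * downstream
        volts.append(v)
    return np.array(volts)

# Grid search for the (location, size) pair minimising worst-case deviation.
best = min(
    ((node, i_dg) for node in range(n_seg) for i_dg in np.linspace(0, 1, 41)),
    key=lambda s: np.abs(voltage_profile(*s) - 1.0).max(),
)
print("best DG location (node), size (p.u. current):", best)
print("worst-case deviation:", np.abs(voltage_profile(*best) - 1.0).max())
```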
Abstract:
This paper presents a nonlinear observer for estimating parameters associated with the restoring term of a roll motion model of a marine vessel in longitudinal waves. Changes in restoring, also referred to as transverse stability, can result from changes in the vessel's centre of gravity due to, for example, water on deck, and from changes in buoyancy triggered by variations in the water-plane area produced by longitudinal waves, which propagate in the fore-aft direction along the hull. These variations in restoring can dramatically change the dynamics of the roll motion, leading to dangerous resonance. It is therefore of interest to estimate and detect such changes.
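A standard one-degree-of-freedom roll model consistent with this description (though not necessarily the exact model used in the paper) is

$$ (I_{xx} + A_{44})\,\ddot{\phi} + B_{44}\,\dot{\phi} + K(t)\,\phi = \tau_w(t), $$

where $\phi$ is the roll angle, $I_{xx} + A_{44}$ the rigid-body plus added inertia, $B_{44}$ the damping, and $K(t)$ the time-varying restoring coefficient. A wave-induced modulation such as $K(t) = K_0 + K_1 \cos(\omega_e t)$ gives a Mathieu-type equation whose unstable regions correspond to the dangerous (parametric) resonance mentioned above.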
Abstract:
Introduction: The total scatter factor (or output factor) in megavoltage photon dosimetry is a measure of relative dose relating a certain field size to a reference field size. The use of solid phantoms is well established for output factor measurements; however, to date these phantoms have not been tested with small fields. In this work, we evaluate the water equivalency of a number of solid phantoms for small-field output factor measurements using the EGSnrc Monte Carlo code.
Methods: The following small square field sizes were simulated using BEAMnrc: 5, 6, 7, 8, 10 and 30 mm. Each simulated phantom geometry was created in DOSXYZnrc and consisted of a silicon diode (of length and width 1.5 mm and depth 0.5 mm) submersed in the phantom at a depth of 5 g/cm2. The source-to-detector distance was 100 cm for all simulations. The dose was scored in a single voxel at the location of the diode. Interaction probabilities and radiation transport parameters for each material were created using custom PEGS4 files.
Results: The output factors obtained in the solid phantoms are compared with those obtained in a water phantom in Fig. 1. The statistical uncertainty in each point was less than or equal to 0.4%. The results in Fig. 1 show that the density of the phantoms affected the output factors, with higher-density materials (such as PMMA) resulting in higher output factors. It was also calculated that scaling the depth for equivalent path length had a negligible effect on the output factors at these field sizes.
Discussion and conclusions: Electron stopping power and photon mass energy absorption change minimally with small field size [1]. Moreover, Fig. 1 shows that the difference from water decreases with increasing field size. Therefore, the most likely cause of the observed discrepancies in output factors is differing electron disequilibrium as a function of phantom density. When measuring small-field output factors in a solid phantom, it is important that the density is very close to that of water.
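In symbols, the quantity compared across phantoms is the output factor

$$ \mathrm{OF}(c) = \frac{D_{\mathrm{det}}(c)}{D_{\mathrm{det}}(c_{\mathrm{ref}})}, $$

the dose scored in the detector voxel for field size $c$ relative to that for a reference field size $c_{\mathrm{ref}}$.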
Abstract:
In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, developing simulation-based optimal design methods that search over both continuous and discrete design spaces. Although Bayesian inference is commonly performed on nonlinear mixed effects models, there has been little research into Bayesian optimal design for nonlinear mixed effects models requiring searches over several design variables. This is likely because performing optimal experimental design for nonlinear mixed effects models is much more computationally intensive than performing inference in the Bayesian framework. Here, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times, for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be estimated precisely subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to differ substantially between the examples considered in this work, highlighting that such designs are problem-dependent and require optimisation using the methods presented in this paper.
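The search structure can be illustrated with a deliberately small stand-in problem: choosing three sampling times from a discrete grid for a one-parameter exponential elimination model, scoring each candidate design by its prior-averaged log Fisher information (a pseudo-Bayesian D-optimality surrogate). The model, prior, and noise level are assumptions; the paper's mixed-effects, multi-variable setting is far richer.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Toy design problem: y(t) = exp(-k t) + noise, unknown rate k.
k_prior = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=200)  # prior draws
grid = np.linspace(0.5, 12, 12)                                  # candidate times
sigma = 0.05                                                     # obs. noise sd (assumed)

def utility(times):
    t = np.asarray(times)
    # Sensitivity dy/dk = -t * exp(-k t); Fisher information sums its
    # squares over the sampling times, scaled by the noise variance.
    info = ((-t * np.exp(-np.outer(k_prior, t))) ** 2).sum(axis=1) / sigma**2
    return np.log(info).mean()    # expected log-information over the prior

best = max(combinations(grid, 3), key=utility)
print("near-optimal sampling times:", np.round(best, 2))
```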
Abstract:
The cotton strip assay (CSA) is an established technique for measuring soil microbial activity. The technique involves burying cotton strips and measuring their tensile strength after a certain time, which gives a measure of the rotting rate, R, of the cotton strips; R is in turn a measure of soil microbial activity. This paper examines properties of the technique and indicates how the assay can be optimised. Humidity conditioning of the cotton strips before measuring their tensile strength reduced the within- and between-day variance and enabled the distribution of the tensile strength measurements to approximate normality. The test data came from a three-way factorial experiment (two soils, two temperatures, three moisture levels). The cotton strips were buried in the soil for intervals of up to 6 weeks, which enabled the rate of loss of cotton tensile strength with time to be studied under a range of conditions. An inverse cubic model accounted for more than 90% of the total variation within each treatment combination, supporting the summary of the decomposition process by the single parameter R. The approximate variance of the decomposition rate was estimated from a function incorporating the variance of tensile strength and the derivative of R with respect to tensile strength. This variance function has a minimum when the measured strength is approximately 2/3 of the original strength. The estimates of R are almost unbiased and relatively robust to the cotton strips being left in the soil for more or less than the optimal time. We conclude that the rotting rate R should be measured using the inverse cubic equation, and that the cotton strips should be left in the soil until their strength has been reduced to about 2/3 of its original value.
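The variance calculation described above is the delta method: if $S$ is the measured tensile strength and $R = g(S)$ the fitted decomposition rate, then

$$ \operatorname{Var}(R) \approx \left( \frac{dR}{dS} \right)^{2} \operatorname{Var}(S), $$

and minimising this expression over $S$ under the fitted inverse cubic decay gives the stated optimum near $S \approx \tfrac{2}{3} S_0$, where $S_0$ is the original strength.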