944 results for Parameters estimation
Abstract:
Rapidly increasing electricity demand and capacity shortages in transmission and distribution facilities are the main driving forces behind the growth of Distributed Generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selecting effective DG characteristics and parameters is a significant concern for distribution system planners seeking the maximum potential benefit from a DG unit. This paper addresses the issue of improving the network voltage profile in distribution systems by installing a DG of the most suitable size at the most suitable location. An analytical approach based on algebraic equations for uniformly distributed loads is developed to determine the optimal operation, size and location of the DG so as to achieve the required network voltage levels. The developed method is simple to use for conceptual design and analysis of distribution system expansion with a DG, and is suitable for quick estimation of DG parameters (such as the optimal operating angle, size and location of a DG system) in a radial network. A practical network is used to verify the proposed technique, and test results are presented.
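The abstract's closed-form equations are not reproduced here, but the underlying optimisation is easy to sketch numerically. Below is a minimal Python illustration, not the paper's analytical method: a radial feeder with a uniformly distributed load is discretised into sections, a single DG injects current at one node, and a brute-force scan looks for the size and location giving the flattest voltage profile. The impedance, load, and DG values are assumptions for illustration.

```python
import numpy as np

N = 100                      # feeder sections (assumed discretisation)
z = (0.30 + 0.40j) / N       # series impedance per section (assumed, p.u.)
i_load = 1.0 / N             # load current drawn at each node (p.u., unity p.f.)

def voltage_profile(dg_node, dg_current):
    """Node voltage magnitudes when the DG injects dg_current at dg_node."""
    v = np.empty(N + 1, dtype=complex)
    v[0] = 1.0                                  # substation voltage
    for k in range(N):
        # current through section k = downstream load minus DG injection
        downstream_load = (N - k) * i_load
        dg_part = dg_current if dg_node > k else 0.0
        v[k + 1] = v[k] - z * (downstream_load - dg_part)
    return np.abs(v)

# brute-force scan of DG size and location for the flattest profile
best = min(((node, i_dg)
            for node in range(1, N + 1)
            for i_dg in np.linspace(0.0, 1.0, 51)),
           key=lambda s: np.ptp(voltage_profile(*s)))
print("best DG node and current:", best)
```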
Abstract:
This paper presents a method for estimating the thrust model parameters of uninhabited airborne systems using specific flight tests. Particular tests are proposed to simplify the estimation. The proposed estimation method is based on three steps. The first step uses a regression model in which the thrust is assumed constant; this yields biased initial estimates of the aerodynamic coefficients of the surge model. In the second step, a robust nonlinear state estimator is implemented using the initial parameter estimates, and the model is augmented by treating the thrust as a random walk. In the third step, the thrust estimate obtained by the observer is used to fit a polynomial model in terms of the propeller advance ratio. We consider a numerical example based on Monte Carlo simulations to quantify the sampling properties of the proposed estimator under realistic flight conditions.
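As a concrete illustration of the third step only, the sketch below fits a polynomial in the advance ratio to a thrust trajectory; the data, the quadratic form, and all coefficients are synthetic assumptions, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)
J = np.linspace(0.2, 0.8, 200)                     # advance ratio samples
T_true = 40.0 - 25.0 * J - 18.0 * J**2             # assumed true thrust curve
T_hat = T_true + rng.normal(0.0, 1.5, J.size)      # observer estimate + noise

coeffs = np.polyfit(J, T_hat, deg=2)               # least-squares polynomial fit
print("fitted thrust polynomial (high to low order):", coeffs)
```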
Abstract:
Commodity price modeling is normally approached in terms of structural time-series models, in which the different components (states) have a financial interpretation. The parameters of these models can be estimated using maximum likelihood. This approach results in a non-linear parameter estimation problem, and thus a key issue is how to obtain reliable initial estimates. In this paper, we focus on the initial parameter estimation problem for the Schwartz-Smith two-factor model commonly used in asset valuation. We propose a two-step method. The first step considers a univariate model based only on the spot price and uses a transfer function model to obtain initial estimates of the fundamental parameters. The second step uses these estimates to initialize a re-parameterized state-space innovations-based estimator that incorporates information related to futures prices. The second step refines the estimates obtained in the first step and also yields estimates of the remaining parameters of the model. This paper is partly tutorial in nature and gives an introduction to aspects of commodity price modeling and the associated parameter estimation problem.
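For readers unfamiliar with the model, the sketch below simulates the Schwartz-Smith two-factor dynamics themselves (log spot price = short-term mean-reverting factor plus long-term Brownian factor with drift); it is not the paper's two-step estimator, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1.0 / 252.0, 1000
kappa, sigma_chi = 1.5, 0.25        # short-term mean reversion and volatility (assumed)
mu_xi, sigma_xi = 0.03, 0.15        # long-term drift and volatility (assumed)

chi = np.zeros(n)                   # short-term deviation factor
xi = np.zeros(n)                    # long-term equilibrium factor
xi[0] = np.log(50.0)
for t in range(1, n):
    chi[t] = chi[t-1] - kappa * chi[t-1] * dt + sigma_chi * np.sqrt(dt) * rng.normal()
    xi[t] = xi[t-1] + mu_xi * dt + sigma_xi * np.sqrt(dt) * rng.normal()

spot = np.exp(chi + xi)             # simulated spot price path, ln S = chi + xi
print(spot[:5])
```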
Abstract:
A novel gray-box neural network model (GBNNM), comprising a multi-layer perceptron (MLP) neural network (NN) and integrators, is proposed for a model identification and fault estimation (MIFE) scheme. With the GBNNM, both the nonlinearity and the dynamics of a class of nonlinear dynamic systems can be approximated. Unlike previous NN-based model identification methods, the GBNNM directly inherits the system dynamics and separately models the system nonlinearities. This model corresponds well with the object system and is easy to build. The GBNNM is embedded online as a reference model to obtain the quantitative residual between the object system output and the GBNNM output. This residual accurately indicates the fault offset value, so it is suitable for differing fault severities. To further estimate the fault parameters (FPs), an improved extended state observer (ESO) that reuses the NNs from the GBNNM (IESONN) is proposed, avoiding the need for knowledge of the ESO nonlinearity. The proposed MIFE scheme is then applied to reaction wheels (RWs) in a satellite attitude control system (SACS). The scheme using the GBNNM is compared with other NNs in the same fault scenario, and several partial loss of effect (LOE) faults with different severities are considered to validate the effectiveness of the FP estimation and its superiority.
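The exact architecture is not given in the abstract, so the following is a loose sketch of the gray-box idea under an assumed structure: an MLP approximates only the unknown nonlinearity f(x, u), while an explicit integrator supplies the known dynamics x_dot = f(x, u). The weights are untrained placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(0, 0.3, (16, 2)), np.zeros(16)   # untrained demo weights (assumed)
W2, b2 = rng.normal(0, 0.3, (1, 16)), np.zeros(1)

def mlp(x, u):
    h = np.tanh(W1 @ np.array([x, u]) + b1)          # hidden layer
    return (W2 @ h + b2)[0]                          # approximated nonlinearity f(x, u)

def gbnnm_step(x, u, dt=0.01):
    return x + dt * mlp(x, u)                        # integrator supplies the dynamics

x = 0.0
for k in range(100):                                 # roll the gray-box model forward
    x = gbnnm_step(x, u=1.0)
print("state after 100 steps:", x)
```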
Abstract:
The issue of using informative priors for estimation of mixtures at multiple time points is examined. Several different informative priors and an independent prior are compared using samples of actual and simulated aerosol particle size distribution (PSD) data. An aerosol PSD records the concentration of aerosol particles as a function of their size; it is typically multimodal in nature and collected at frequent time intervals. The use of informative priors is found to better identify component parameters at each time point and to establish patterns in the parameters over time more clearly. Some caveats to this finding are discussed.
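As a generic illustration of carrying information across time points (not the paper's specific priors or mixture model), the sketch below propagates the posterior for one component mean forward as the informative prior at the next time point, using a conjugate normal-normal update.

```python
import numpy as np

def posterior(mu0, tau0, data, sigma):
    """Normal prior N(mu0, tau0^2), known obs. sd sigma -> posterior (mean, sd)."""
    prec = 1.0 / tau0**2 + data.size / sigma**2
    mean = (mu0 / tau0**2 + data.sum() / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

rng = np.random.default_rng(3)
mu, tau = 0.0, 10.0                       # vague prior at the first time point
for t in range(5):
    data = rng.normal(2.0, 1.0, 30)       # one component's data at time t (synthetic)
    mu, tau = posterior(mu, tau, data, sigma=1.0)   # posterior becomes next prior
    print(f"t={t}: posterior mean {mu:.3f}, sd {tau:.3f}")
```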
Abstract:
After attending this presentation, attendees will gain awareness of the ontogeny of cranial maturation, specifically: (1) the fusion timings of primary ossification centers in the basicranium; and (2) the temporal pattern of closure of the anterior fontanelle, to develop new population-specific age standards for medicolegal death investigation of Australian subadults. This presentation will impact the forensic science community by demonstrating the potential of a contemporary forensic subadult Computed Tomography (CT) database of cranial scans and population data to recalibrate existing standards for age estimation and to quantify the growth and development of Australian children. This research presents a study design applicable to all countries faced with a paucity of skeletal repositories. Accurate assessment of age-at-death of skeletal remains is a key element of forensic anthropology methodology. In Australian casework, age standards derived from American reference samples are applied in light of the scarcity of documented Australian skeletal collections. Practitioners currently rely on antiquated standards, such as the Scheuer and Black [1] compilation, for age estimation, despite the implications of secular trends and population variation. Skeletal maturation standards are population specific and should not be extrapolated from one population to another, while secular changes in skeletal dimensions and accelerated maturation underscore the importance of establishing modern standards to estimate age in modern subadults. Despite CT imaging becoming the gold standard for skeletal analysis in Australia, practitioners caution against applying forensic age standards derived from macroscopic inspection to a CT medium, suggesting a need for revised methodologies. Multi-slice CT scans of subadult crania and cervical vertebrae 1 and 2 were acquired from 350 Australian individuals (males: n=193, females: n=157) aged birth to 12 years. The CT database, projected at 920 individuals upon completion (January 2014), comprises thin-slice DICOM data (resolution: 0.5/0.3mm) of patients scanned since 2010 at major Brisbane children's hospitals. DICOM datasets were subject to manual segmentation, followed by the construction of multi-planar and volume-rendering cranial models for subsequent scoring. The union of the primary ossification centers of the occipital bone was scored as open, partially closed or completely closed, while the fontanelles and vertebrae were scored in accordance with two stages. Transition analysis was applied to elucidate the age at transition between union states for each center, and robust age parameters were established using Bayesian statistics. In comparison with the reported literature, closure of the fontanelles and contiguous sutures in Australian infants occurs earlier than previously reported, with the anterior fontanelle transitioning from open to closed at 16.7±1.1 months. The metopic suture is closed prior to 10 weeks post-partum and completely obliterated by 6 months of age, independent of sex. Utilizing reverse-engineering capabilities, an alternative method for infant age estimation, based on quantification of fontanelle area and non-linear regression with variance component modeling, will be presented. Closure models indicate that the greatest rate of change in anterior fontanelle area occurs prior to 5 months of age. This study complements the work of Scheuer and Black [1], providing more specific age intervals for union and temporal maturity of each primary ossification center of the occipital bone.
For example, dominant fusion of the sutura intra-occipitalis posterior occurs before 9 months of age, followed by the persistence of a tongue of hyaline cartilage posterior to the foramen magnum until 2.5 years, with obliteration at 2.9±0.1 years. Recalibrated age parameters for the atlas and axis are presented, with the anterior arch of the atlas appearing at 2.9 months in females and 6.3 months in males, while the dentoneural, dentocentral and neurocentral junctions of the axis transitioned from non-union to union at 2.1±0.1 years in females and 3.7±0.1 years in males. These results are an exemplar of significant sexual dimorphism in maturation (p<0.05), with girls exhibiting union earlier than boys, justifying the need for sex-segregated standards for age estimation. Studies such as this are imperative for providing updated standards for Australian forensic and pediatric practice, and they offer insight into the skeletal development of this population. During this presentation, the utility of novel regression models for age estimation of infants will be discussed, with emphasis on three-dimensional modeling capabilities for complex structures such as fontanelles, for the development of new age estimation methods.
Abstract:
This paper develops maximum likelihood (ML) estimation schemes for finite-state semi-Markov chains in white Gaussian noise. We assume that the semi-Markov chain is characterised by transition probabilities of known parametric form with unknown parameters. We reformulate this hidden semi-Markov model (HSM) problem in the scalar case as a two-vector homogeneous hidden Markov model (HMM) problem in which the state consists of the signal augmented by the time since the last transition. With this reformulation we apply the Expectation-Maximisation (EM) algorithm to obtain ML estimates of the transition probability parameters, Markov state levels and noise variance. To demonstrate our proposed schemes, motivated by neuro-biological applications, we use a damped sinusoidal parameterised function for the transition probabilities.
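The sketch below illustrates the reformulation for a scalar two-level chain: the state is augmented with the dwell time since the last transition, making the pair an ordinary homogeneous Markov chain, with a damped-sinusoid duration dependence standing in for the paper's parametric family. The constants and the dwell-time truncation are assumptions.

```python
import numpy as np

D = 50                                    # truncation of the dwell-time axis (assumed)

def p_jump(d):                            # probability of leaving the level at dwell d
    return np.clip(0.1 + 0.08 * np.exp(-0.05 * d) * np.sin(0.4 * d), 0.0, 1.0)

# augmented states: (level in {0,1}, dwell d in 0..D-1) -> index 2*d + level
P = np.zeros((2 * D, 2 * D))
for level in range(2):
    for d in range(D):
        i = 2 * d + level
        stay_d = min(d + 1, D - 1)        # dwell clock saturates at the truncation
        P[i, 2 * stay_d + level] = 1 - p_jump(d)    # stay: dwell time increments
        P[i, (1 - level)] = p_jump(d)               # jump: other level, dwell resets to 0
print("rows sum to one:", np.allclose(P.sum(axis=1), 1.0))
```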
Abstract:
Rapid recursive estimation of hidden Markov model (HMM) parameters is important in applications that place an emphasis on the early availability of reasonable estimates (e.g. for change detection) rather than on longer-term asymptotic properties (such as convergence, convergence rate, and consistency). In the context of vision-based aircraft (image-plane) heading estimation, this paper suggests and evaluates the short-data estimation properties of three recursive HMM parameter estimation techniques (a recursive maximum likelihood estimator, an online EM HMM estimator, and a relative-entropy-based estimator). On both simulated and real data, our studies illustrate the feasibility of rapid recursive heading estimation, but also demonstrate the need for careful step-size design of recursive HMM estimation techniques when they are intended for applications where short-data behaviour is paramount.
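The step-size issue generalises beyond HMMs, and a toy scalar recursion shows it plainly. The sketch below is not one of the three HMM estimators studied; it is a Robbins-Monro recursion with step size gamma_k = c / k**a, showing how the schedule trades short-data responsiveness against noise.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(3.0, 1.0, 200)                 # observations with true mean 3 (synthetic)

for c, a in [(1.0, 1.0), (1.0, 0.6), (0.1, 0.6)]:
    theta = 0.0
    for k, yk in enumerate(y, start=1):
        theta += (c / k**a) * (yk - theta)    # recursive update with step c / k**a
    print(f"c={c}, a={a}: estimate after 200 samples = {theta:.3f}")
```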
Abstract:
Change point estimation is recognized as an essential tool of root cause analysis within quality control programs, as it enables clinical experts to search more effectively for potential causes of a change in hospital outcomes. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention, in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of pre-surgery risk factors using a Weibull accelerated failure time regression model. We use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and slope of the trend, as well as the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator yields more accurate and precise estimates over linear trends. These advantages are reinforced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
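One building block can be written down compactly: the log-likelihood of right-censored Weibull survival times whose log scale drifts linearly after a change point. The parameterisation below is an assumption for illustration (the paper's hierarchical model is richer), and the data are synthetic; the function could be handed to any MCMC sampler.

```python
import numpy as np

def loglik(params, t_obs, y, delta, X):
    """params = (tau, slope, shape, *beta); delta=1 if event observed, 0 if censored."""
    tau, slope, shape = params[0], params[1], params[2]
    beta = np.asarray(params[3:])
    # assumed AFT form: log(scale_i) = x_i @ beta + slope * max(0, t_i - tau)
    scale = np.exp(X @ beta + slope * np.maximum(0.0, t_obs - tau))
    z = y / scale
    log_f = np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape   # event density
    log_S = -z**shape                                                    # survival (censored)
    return np.sum(np.where(delta == 1, log_f, log_S))

# tiny synthetic check
rng = np.random.default_rng(5)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one risk factor
t_obs = np.linspace(0.0, 10.0, n)                       # intervention times
y = rng.weibull(1.5, n) * 2.0                           # observed/censored survival times
delta = (rng.random(n) < 0.7).astype(int)               # 70% events, 30% censored
print(loglik((5.0, -0.1, 1.5, 0.7, 0.2), t_obs, y, delta, X))
```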
Abstract:
This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which has previously been prohibitive, is managed by the effective use of parallel computing on graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and are consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
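As a sketch of the filtering machinery (without jumps, options data, or GPU parallelism), the code below runs a bootstrap particle filter on the square-root variance process observed through daily returns. The Euler discretisation, the reflection at zero, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
kappa, theta, sigma_v, dt = 3.0, 0.04, 0.3, 1 / 252   # assumed model parameters
Np, n = 2000, 250

# synthetic data: latent square-root variance path and daily returns
v = np.empty(n); v[0] = theta
r = np.empty(n); r[0] = 0.0
for t in range(1, n):
    v[t] = np.abs(v[t-1] + kappa * (theta - v[t-1]) * dt
                  + sigma_v * np.sqrt(v[t-1] * dt) * rng.normal())
    r[t] = np.sqrt(v[t] * dt) * rng.normal()

# bootstrap particle filter
particles = np.full(Np, theta)
v_filt = np.empty(n); v_filt[0] = theta
for t in range(1, n):
    particles = np.abs(particles + kappa * (theta - particles) * dt
                       + sigma_v * np.sqrt(particles * dt) * rng.normal(size=Np))
    w = np.exp(-0.5 * r[t]**2 / (particles * dt)) / np.sqrt(particles * dt)
    w /= w.sum()                                       # normalised importance weights
    v_filt[t] = w @ particles                          # filtered variance estimate
    particles = rng.choice(particles, Np, p=w)         # multinomial resampling
print("mean abs filtering error:", np.mean(np.abs(v_filt - v)))
```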
Abstract:
Despite great advances in very-large-scale integrated-circuit design and manufacturing, the performance of even the best available high-speed, high-resolution analog-to-digital converters (ADCs) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from lowered ADC performance can be unacceptably high, especially when higher accuracies must be achieved (e.g., when the digitizer is part of a reference measuring system). Static and dynamic nonlinearities (traditionally estimated independently) are vital indices for evaluating the performance and suitability of ADCs in such environments. Typically, estimating the static nonlinearity takes 10-12 h or more (for a 12-b ADC), while dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal comprising a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of the implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. The good agreement in results and the achievable time savings indicate the method's suitability.
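The proposed stimulus is simple to generate. The sketch below constructs such a composite signal, a small high-frequency sinusoid riding on a slow full-scale ramp; the amplitudes, frequencies, and sampling rate are placeholder assumptions, not the paper's test conditions.

```python
import numpy as np

fs = 100e6                              # digitizer sampling rate (assumed)
t = np.arange(0, 0.01, 1 / fs)          # one 10 ms record
ramp = np.linspace(-0.9, 0.9, t.size)   # slow ramp sweeping the static transfer curve
sine = 0.05 * np.sin(2 * np.pi * 5e6 * t)   # small HF sinusoid exercising dynamics
test_signal = ramp + sine               # stimulus for the single combined test
print(test_signal.size, test_signal.min(), test_signal.max())
```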
Abstract:
We consider the development of statistical models for predicting the constituent concentration of riverine pollutants, which is a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts past flux based on the time elapsed: more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient. The R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads, by -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
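The abstract does not spell out the ADF formula, so the sketch below assumes a standard exponential-discounting recursion, ADF_t = lam * ADF_{t-1} + (1 - lam) * q_t, and grid-searches the discount factor against the adjusted R² of a linear concentration model; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

def adf(q, lam):
    out = np.empty_like(q)
    out[0] = q[0]
    for t in range(1, q.size):
        out[t] = lam * out[t - 1] + (1 - lam) * q[t]   # recursive discounting
    return out

flow = np.exp(rng.normal(0.0, 0.8, 365))               # synthetic daily flow
conc = 2.0 + 0.5 * adf(flow, 0.6) + rng.normal(0.0, 0.1, 365)   # true lam = 0.6

def adj_r2(x, y, p=2):
    resid = y - np.polyval(np.polyfit(x, y, 1), x)     # linear fit of conc on ADF
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 - (1.0 - r2) * (y.size - 1) / (y.size - p - 1)

best = max(np.linspace(0.01, 0.99, 99), key=lambda lam: adj_r2(adf(flow, lam), conc))
print("optimal discount factor:", round(float(best), 2))
```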
Abstract:
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
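The empirical-likelihood combination itself is beyond a short sketch, but the objects being combined are ordinary GEEs. For a linear marginal model, a single GEE has a closed-form solution, shown below under independence and exchangeable working correlations; the data and the unit-variance assumption are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
m, ni, p = 60, 4, 2                           # clusters, cluster size, covariates
X = rng.normal(size=(m, ni, p))
beta_true = np.array([1.0, -0.5])
R_true = 0.5 * np.ones((ni, ni)) + 0.5 * np.eye(ni)   # true within-cluster correlation
y = X @ beta_true + rng.multivariate_normal(np.zeros(ni), R_true, size=m)

def gee_solve(alpha):
    """Solve sum_i X_i' V_i^{-1} (y_i - X_i beta) = 0, exchangeable R(alpha)."""
    R = alpha * np.ones((ni, ni)) + (1 - alpha) * np.eye(ni)
    Vinv = np.linalg.inv(R)                   # unit marginal variances assumed
    A = sum(X[i].T @ Vinv @ X[i] for i in range(m))
    b = sum(X[i].T @ Vinv @ y[i] for i in range(m))
    return np.linalg.solve(A, b)

print("beta (independence):       ", gee_solve(0.0))
print("beta (exchangeable, 0.5):  ", gee_solve(0.5))
```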
Abstract:
The extended recruitment season for short-lived species such as prawns biases the estimation of growth parameters from length-frequency data when conventional methods are used. We propose a simple method for overcoming this bias given a time series of length-frequency data. The difficulties arising from extended recruitment are eliminated by predicting the growth of the succeeding samples and the length increments of the recruits in previous samples. This method requires that some maximum size at recruitment can be specified. The advantages of this multiple length-frequency method are: it is simple to use; it requires only three parameters; no specific distributions need to be assumed; and the actual seasonal recruitment pattern does not have to be specified. We illustrate the new method with length-frequency data on the tiger prawn Penaeus esculentus from the north-western Gulf of Carpentaria, Australia.
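The abstract does not name the growth curve, so the sketch below assumes von Bertalanffy growth to illustrate the prediction step: each length class is projected forward to the time of the succeeding sample, and animals below an assumed maximum size at recruitment are treated as new recruits. All parameter values are made up.

```python
import numpy as np

L_inf, K = 45.0, 0.05          # asymptotic length (mm) and weekly growth rate (assumed)
L_rec = 20.0                   # assumed maximum size at recruitment

def grow(lengths, dt):
    """Von Bertalanffy projection: L(t+dt) = L_inf - (L_inf - L) * exp(-K * dt)."""
    return L_inf - (L_inf - np.asarray(lengths)) * np.exp(-K * dt)

sample = np.array([12.0, 18.0, 24.0, 31.0, 38.0])   # length classes at time t (synthetic)
print("predicted lengths two weeks later:", grow(sample, dt=2.0).round(1))
print("animals below", L_rec, "mm in the later sample are treated as new recruits")
```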