15 results for parameter sensitivity analysis
in University of Queensland eSpace - Australia
Abstract:
Deregulation and market practices in the power industry have brought great challenges to the system planning area. In particular, they introduce a variety of uncertainties to system planning. New techniques are required to cope with such uncertainties. As a promising approach, probabilistic methods are attracting more and more attention from system planners. In small signal stability analysis, generation control parameters play an important role in determining the stability margin. The objective of this paper is to investigate power system state matrix sensitivity characteristics with respect to system parameter uncertainties using analytical and numerical approaches, and to identify those parameters that have a great impact on the system eigenvalues and, therefore, on the system stability properties. The identified parameter variations need to be investigated with priority. The results can be used to help Regional Transmission Organizations (RTOs) and Independent System Operators (ISOs) perform planning studies under the open access environment.
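For context, the standard first-order relation that typically underlies this kind of state-matrix sensitivity study (a generic textbook result, not a formula taken from this paper) links an eigenvalue's sensitivity to the parameter derivative of the state matrix through its left and right eigenvectors:

```latex
% Generic first-order eigenvalue sensitivity (illustrative, not the paper's derivation):
% \lambda_i is an eigenvalue of the state matrix A(p), \phi_i and \psi_i are its
% right and left eigenvectors, and p is an uncertain system parameter.
\[
  \frac{\partial \lambda_i}{\partial p}
    = \frac{\psi_i^{\top} \, \dfrac{\partial A}{\partial p} \, \phi_i}
           {\psi_i^{\top} \phi_i}
\]
```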
Abstract:
This paper presents a method to analyze the first-order eigenvalue sensitivity with respect to the operating parameters of a power system. The method is based on explicitly expressing the system state matrix in terms of sub-matrices. The eigenvalue sensitivity is calculated from the explicitly formed system state matrix. A 4th-order generator model and a 4th-order exciter model are used to form the system state matrix. A case study using the New England 10-machine 39-bus system is provided to demonstrate the effectiveness of the proposed method. The method can be applied to large-scale power system eigenvalue sensitivity analysis with respect to operating parameters.
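A minimal numerical sketch of the same idea, using a made-up 3x3 state matrix A(p) rather than the generator/exciter model from the paper: the eigenvalue sensitivities follow from the left/right eigenvectors of A and a finite-difference estimate of dA/dp.

```python
# Illustrative only: first-order eigenvalue sensitivity d(lambda_i)/dp computed as
# psi_i . (dA/dp) . phi_i, with psi_i/phi_i the left/right eigenvectors of A(p).
import numpy as np

def state_matrix(p):
    # Hypothetical 3x3 state matrix depending on a scalar operating parameter p
    return np.array([[-1.0,  p,    0.2],
                     [ 0.3, -2.0,  p  ],
                     [ 0.0,  0.5, -0.8]])

def eigenvalue_sensitivities(p, dp=1e-6):
    A = state_matrix(p)
    dA = (state_matrix(p + dp) - state_matrix(p - dp)) / (2.0 * dp)  # central-difference dA/dp
    lam, Phi = np.linalg.eig(A)       # eigenvalues, right eigenvectors (columns of Phi)
    Psi = np.linalg.inv(Phi)          # rows of Psi are the matching left eigenvectors
    sens = np.array([Psi[i, :] @ dA @ Phi[:, i] for i in range(len(lam))])
    return lam, sens

lam, dlam_dp = eigenvalue_sensitivities(p=0.4)
print("eigenvalues:  ", np.round(lam, 4))
print("d(lambda)/dp: ", np.round(dlam_dp, 4))
```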
Abstract:
Validation procedures play an important role in establishing the credibility of models, improving their relevance and acceptability. This article reviews the testing of models relevant to environmental and natural resource management, with particular emphasis on models used in multicriteria analysis (MCA). Validation efforts for a model used in an MCA catchment management study in North Queensland, Australia, are presented. Determination of face validity is found to be a useful approach in evaluating this model, and sensitivity analysis is useful in checking the stability of the model. (C) 2000 Elsevier Science Ltd. All rights reserved.
Abstract:
The robustness of mathematical models for biological systems is studied by sensitivity analysis and stochastic simulations. Using a neural network model with three genes as the test problem, we study the robustness properties of synthesis and degradation processes. For single-parameter robustness, sensitivity analysis techniques are applied to study parameter variations, and stochastic simulations are used to investigate the impact of external noise. Results of the sensitivity analysis are consistent with those obtained by stochastic simulations. Stochastic models with external noise can be used to study robustness not only to external noise but also to parameter variations. For external noise we also use stochastic models to study the robustness of the function of each gene and of the system as a whole.
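As a rough illustration of the two approaches (the three-gene model and all parameter values below are hypothetical, not those of the paper), the sketch estimates the local sensitivity of a small gene network's steady state to a synthesis-rate parameter by finite differences, and then re-runs the model with additive external noise via Euler-Maruyama integration:

```python
# Illustrative only: parameter sensitivity by finite differences, then the same
# hypothetical three-gene model driven by additive external noise.
import numpy as np

def rhs(x, k_syn, k_deg=1.0):
    # Invented sigmoidal cross-regulation between three genes
    w = np.array([[ 0.0, -2.0,  1.0],
                  [ 1.5,  0.0, -1.0],
                  [-1.0,  2.0,  0.0]])
    return k_syn / (1.0 + np.exp(-w @ x)) - k_deg * x

def steady_state(k_syn, dt=0.01, steps=20000):
    x = np.ones(3)
    for _ in range(steps):
        x = x + dt * rhs(x, k_syn)
    return x

# Finite-difference sensitivity of the steady state to the synthesis rate k_syn
k, dk = 1.0, 1e-4
sens = (steady_state(k + dk) - steady_state(k - dk)) / (2 * dk)
print("d(steady state)/d(k_syn) ~", np.round(sens, 3))

# Same model with additive external noise (Euler-Maruyama)
rng = np.random.default_rng(0)
x, dt, sigma = np.ones(3), 0.01, 0.05
for _ in range(20000):
    x = x + dt * rhs(x, k) + sigma * np.sqrt(dt) * rng.standard_normal(3)
print("end state under external noise:", np.round(x, 3))
```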
Abstract:
Bifurcation analysis is a very useful tool for power system stability assessment. In this paper, a detailed investigation of power system bifurcation behaviour is presented. One- and two-parameter bifurcation analyses are conducted on a 3-bus power system. We also examine the impact of FACTS devices on power system stability through Hopf bifurcation analysis, taking the static VAR compensator (SVC) as an example. A simplified first-order model of the SVC device is included in the 3-bus sample system. Real and reactive powers are used as bifurcation parameters in the analysis to compare the system's oscillatory properties with and without the SVC. The simulation results indicate that the linearized system model with SVC enlarges the voltage stability boundary by moving the Hopf bifurcation point to a higher loading level. The installation of the SVC increases the dynamic stability range of the system but complicates its Hopf bifurcation behaviour.
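A toy numerical sketch of Hopf detection (the Jacobian below is invented for illustration and has nothing to do with the 3-bus system or the SVC model in the paper): sweep a loading-like parameter and find where a complex-conjugate eigenvalue pair of the linearization loses its negative real part.

```python
# Illustrative only: locate a Hopf bifurcation by tracking the real part of the
# oscillatory (complex) eigenvalue pair of a parameterized Jacobian.
import numpy as np

def jacobian(mu):
    # Hypothetical linearization; its complex pair has real part (mu - 1)
    return np.array([[mu - 1.0, -2.0,      0.0],
                     [ 2.0,      mu - 1.0, 0.0],
                     [ 0.0,      0.0,     -3.0]])

def oscillatory_real_part(mu):
    lam = np.linalg.eigvals(jacobian(mu))
    pair = lam[np.abs(lam.imag) > 1e-9]       # keep only the complex modes
    return pair.real.max()

mus = np.linspace(0.0, 2.0, 201)              # loading-like parameter sweep
re = np.array([oscillatory_real_part(m) for m in mus])
idx = np.argmax(re >= 0.0)                    # first point where damping is lost
if re[idx] >= 0.0:
    print(f"Hopf bifurcation detected near mu = {mus[idx]:.2f}")
```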
Abstract:
The aim of this study was to determine the most informative sampling time(s) providing a precise prediction of tacrolimus area under the concentration-time curve (AUC). Fifty-four concentration-time profiles of tacrolimus from 31 adult liver transplant recipients were analyzed. Each profile contained 5 tacrolimus whole-blood concentrations (predose and 1, 2, 4, and 6 or 8 hours postdose), measured using liquid chromatography-tandem mass spectrometry. The concentration at 6 hours was interpolated for each profile, and 54 values of AUC(0-6) were calculated using the trapezoidal rule. The best sampling times were then determined using limited sampling strategies and sensitivity analysis. Linear mixed-effects modeling was performed to estimate regression coefficients of equations incorporating each concentration-time point (C0, C1, C2, C4, interpolated C5, and interpolated C6) as a predictor of AUC(0-6). Predictive performance was evaluated by assessment of the mean error (ME) and root mean square error (RMSE). Limited sampling strategy (LSS) equations with C2, C4, and C5 provided similar results for prediction of AUC(0-6) (R-2 = 0.869, 0.844, and 0.832, respectively). These 3 time points were superior to C0 in the prediction of AUC. The ME was similar for all time points; the RMSE was smallest for C2, C4, and C5. The highest sensitivity index was determined to be 4.9 hours postdose at steady state, suggesting that this time point provides the most information about the AUC(0-12). The results from limited sampling strategies and sensitivity analysis supported the use of a single blood sample at 5 hours postdose as a predictor of both AUC(0-6) and AUC(0-12). A jackknife procedure was used to evaluate the predictive performance of the model, and this demonstrated that collecting a sample at 5 hours after dosing could be considered as the optimal sampling time for predicting AUC(0-6).
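To make the calculation concrete, here is a small sketch with made-up concentration values (not data from the study): AUC(0-6) from the trapezoidal rule, followed by an ordinary least-squares fit of AUC on the 5-hour concentration as a simple stand-in for the mixed-effects limited-sampling equation.

```python
# Hypothetical profiles only: trapezoidal-rule AUC(0-6) and a single-point
# limited-sampling predictor AUC ~ a + b * C5, with its RMSE.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 6.0])   # sampling times (h)
profiles = np.array([                            # whole-blood conc. (ng/mL), invented
    [5.1, 14.2, 18.9, 12.3, 10.1, 8.7],
    [6.0, 11.5, 16.4, 13.0, 11.2, 9.5],
    [4.2, 17.8, 20.1, 11.0,  9.0, 7.6],
])

# Trapezoidal rule: sum of 0.5 * (C_k + C_{k+1}) * (t_{k+1} - t_k)
auc_0_6 = np.sum(0.5 * (profiles[:, 1:] + profiles[:, :-1]) * np.diff(t), axis=1)

c5 = profiles[:, 4]                              # concentration at 5 h post-dose
b, a = np.polyfit(c5, auc_0_6, 1)                # OLS in place of the mixed-effects model
pred = a + b * c5
rmse = np.sqrt(np.mean((pred - auc_0_6) ** 2))
print("AUC(0-6):", np.round(auc_0_6, 1), "| RMSE of the C5 predictor:", round(rmse, 2))
```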
Abstract:
The growth behaviour of the vibrational wear phenomenon known as rail corrugation is investigated analytically and numerically using mathematical models. A simplified feedback model for wear-type rail corrugation that includes a wheel-pass time delay is developed, with the aim of analytically distilling the most critical interaction occurring between the wheel/rail structural dynamics, rolling contact mechanics and rail wear. To this end, a stability analysis of the complete system is performed to determine the growth of wear-type rail corrugations over multiple wheelset passages. This analysis indicates that, although the dynamical behaviour of the system is stable for each wheel passage, over multiple wheelset passages the growth of wear-type corrugations results from instability due to feedback interaction between the three primary components of the model. The corrugations are shown analytically to grow for all realistic railway parameters. From this analysis, an analytical expression for the exponential growth rate of corrugations in terms of known parameters is developed. This convenient expression is used to perform a sensitivity analysis to identify the critical parameters that most affect corrugation growth. The analytical predictions are shown to compare well with results from a benchmarked time-domain finite element model. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Minimal representations are known to have no redundant elements and are therefore of great importance. Based on the notions of performance and size indices and measures for process systems, the paper proposes conditions for a process model to be minimal in a set of functionally equivalent models with respect to a size norm. Generalized versions of known procedures for obtaining minimal process models for a given modelling goal, namely model reduction based on sensitivity analysis and incremental model building, are proposed and discussed. The notions and procedures are illustrated and compared on a simple example, a nonlinear fermentation process with different modelling goals, and on a case study of heat exchanger modelling. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Objective: Existing evidence suggests that vocational rehabilitation services, in particular individual placement and support (IPS), are effective in assisting people with schizophrenia and related conditions to gain open employment. Despite this, such services are not available to all unemployed people with schizophrenia who wish to work. Existing evidence suggests that while IPS confers no clinical advantages over routine care, it does improve the proportion of people returning to employment. The objective of the current study is to investigate the net benefit of introducing IPS services into current mental health services in Australia. Method: The net benefit of IPS is assessed from a health sector perspective using cost-benefit analysis. A two-stage approach is taken to the assessment of benefit. The first stage involves a quantitative analysis of the net benefit, defined as the benefits of IPS (comprising transfer payments averted, income tax accrued and individual income earned) minus the costs. The second stage involves application of 'second-filter' criteria (including equity, strength of evidence, feasibility and acceptability to stakeholders) to the results. The robustness of the results is tested using multivariate probabilistic sensitivity analysis. Results: The costs of IPS are $A10.3M (95% uncertainty interval $A7.4M-$A13.6M), the benefits are $A4.7M ($A3.1M-$A6.5M), resulting in a negative net benefit of $A5.6M ($A8.4M-$A3.4M). Conclusions: The current analysis suggests that IPS costs are greater than the monetary benefits. However, the evidence base of the current analysis is weak. Structural conditions surrounding welfare payments in Australia create disincentives to full-time employment for people with disabilities.
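To show the mechanics of a probabilistic sensitivity analysis of this kind (the lognormal spreads below are invented for illustration and only loosely centred on the reported point estimates), a Monte Carlo sketch propagates uncertainty in costs and benefits into the net benefit:

```python
# Illustrative probabilistic sensitivity analysis: net benefit = benefits - costs,
# with uncertainty in both totals propagated by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical lognormal uncertainty, loosely centred on the reported point estimates ($A million)
costs    = rng.lognormal(mean=np.log(10.3), sigma=0.15, size=n)
benefits = rng.lognormal(mean=np.log(4.7),  sigma=0.18, size=n)

net_benefit = benefits - costs
lo, hi = np.percentile(net_benefit, [2.5, 97.5])
print(f"mean net benefit: {net_benefit.mean():.1f} $A M (95% UI {lo:.1f} to {hi:.1f})")
print(f"probability that net benefit > 0: {(net_benefit > 0).mean():.3f}")
```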
Abstract:
OBJECTIVE - To assess the performance of health systems using diabetes as a tracer condition. RESEARCH DESIGN AND METHODS - We generated a measure of case fatality among young people with diabetes using the mortality-to-incidence ratio (M/I ratio) for 29 industrialized countries, based on published data on diabetes incidence and mortality. Standardized incidence rates for ages 0-14 years were extracted from the World Health Organization DiaMond Study for the period 1990-1994; data on death from diabetes for ages 0-39 years were obtained from the World Health Organization Mortality database and converted into age-standardized death rates for the period 1994-1998, using the European standard population. RESULTS - The M/I ratio varied > 10-fold. These relative differences appear similar to those observed in cohort studies of mortality among young people with type 1 diabetes in five countries. A sensitivity analysis showed that using plausible assumptions about potential overestimation of diabetes as a cause of death and underestimation of incidence rates in the U.S. yields an M/I ratio that would still be twice as high as in the U.K. or Canada. CONCLUSIONS - The M/I ratio for diabetes provides a means of differentiating countries on quality of care for people with diabetes. It is solely an indicator of potential problems, a basis for stimulating more detailed assessments of whether such problems exist and what can be done to address them.
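The indicator itself is simple arithmetic; a tiny sketch with hypothetical rates (not the study's country figures) makes the construction explicit:

```python
# Hypothetical rates only: the tracer indicator is the age-standardized diabetes
# death rate divided by the age-standardized incidence rate.
death_rate_per_100k = 0.6        # invented deaths (ages 0-39) per 100,000 per year
incidence_rate_per_100k = 15.0   # invented new cases (ages 0-14) per 100,000 per year

mi_ratio = death_rate_per_100k / incidence_rate_per_100k
print(f"M/I ratio = {mi_ratio:.3f}")  # higher values flag potentially poorer care
```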
Abstract:
Kalman inverse filtering is used to develop a methodology for real-time estimation of the forces acting at the interface between tyre and road on large off-highway mining trucks. The system model formulated is capable of estimating the three components of tyre-force at each wheel of the truck using a practical set of measurements and inputs. The estimated tyre-forces track well when compared with those simulated by an ADAMS virtual-truck model. A sensitivity analysis determines the susceptibility of the tyre-force estimates to uncertainties in the truck's parameters.
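A conceptual sketch of the inverse-filtering idea (a single-degree-of-freedom mass with invented numbers, nothing like the truck model or measurement set in the paper): the unknown force is appended to the state vector as a random walk and recovered by a standard Kalman filter from noisy position measurements.

```python
# Illustrative only: estimate an unknown force by augmenting the state of a 1-DOF
# mass with the force (random-walk model) and Kalman-filtering noisy positions.
import numpy as np

m, dt, steps = 1000.0, 0.01, 500                      # hypothetical mass (kg), step (s)
F = np.array([[1, dt, 0],
              [0, 1, dt / m],
              [0, 0, 1]])                             # state: [position, velocity, force]
H = np.array([[1.0, 0.0, 0.0]])                       # only position is measured
Q = np.diag([1e-8, 1e-8, 1e2])                        # let the force state wander
R = np.array([[1e-4]])                                # measurement noise variance

rng = np.random.default_rng(2)
true_force = 500.0 * np.sin(2 * np.pi * 0.5 * dt * np.arange(steps))  # invented "tyre force"
x_true, x_hat, P = np.zeros(3), np.zeros(3), np.eye(3)
est = []

for k in range(steps):
    x_true = F @ x_true
    x_true[2] = true_force[k]                         # inject the unknown force
    z = H @ x_true + rng.normal(0.0, 1e-2, size=1)    # noisy position measurement
    # Kalman predict / update
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(3) - K @ H) @ P
    est.append(x_hat[2])

print("final force estimate vs truth:", round(est[-1], 1), "vs", round(true_force[-1], 1))
```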
Abstract:
The present investigation aimed to critically examine the factor structure and psychometric properties of the Anxiety Sensitivity Index - Revised (ASI-R). Confirmatory factor analysis using a clinical sample of adults (N = 248) revealed that the ASI-R could be improved substantially through the removal of 15 problematic items in order to account for the most robust dimensions of anxiety sensitivity. This modified scale was renamed the 21-item Anxiety Sensitivity Index (21-item ASI) and reanalyzed with a large sample of normative adults (N = 435), revealing configural and metric invariance across groups. Further comparisons with other alternative models, using multi-sample analysis, indicated the 21-item ASI to be the best fitting model for both groups. There was also evidence of internal consistency, test-retest reliability, and construct validity for both samples suggesting that the 21-item ASI is a useful assessment device for investigating the construct of anxiety sensitivity in both clinical and normative populations.
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in the adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for the locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
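For readers unfamiliar with the underlying quantities, the sketch below (a hypothetical 3-stage matrix with invented vital rates and costs, not the Helmeted Honeyeater or koala models) computes the growth rate, the standard sensitivities and elasticities, and a naive cost-weighted sensitivity of the kind the abstract argues should drive the decision:

```python
# Hypothetical matrix model: growth rate, sensitivities/elasticities from the
# dominant eigenvalue's left and right eigenvectors, and a cost-weighted variant.
import numpy as np

A = np.array([[0.0, 1.2, 2.5],    # top row: stage-specific fecundities (invented)
              [0.4, 0.0, 0.0],    # below: survival-transition rates (invented)
              [0.0, 0.6, 0.8]])

lamA, W = np.linalg.eig(A)
lamT, V = np.linalg.eig(A.T)
lam = lamA[np.argmax(lamA.real)].real               # dominant eigenvalue = growth rate
w = np.abs(W[:, np.argmax(lamA.real)].real)         # stable stage distribution
v = np.abs(V[:, np.argmax(lamT.real)].real)         # reproductive values

S = np.outer(v, w) / (v @ w)                        # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                                   # elasticities (proportional sensitivities)

# Invented cost of raising each rate by one unit; np.inf marks rates that cannot be managed
cost = np.array([[np.inf, 50.0, 80.0],
                 [20.0, np.inf, np.inf],
                 [np.inf, 30.0, 25.0]])

print("lambda =", round(lam, 3))
print("elasticities:\n", np.round(E, 3))
print("sensitivity per unit cost:\n", np.round(S / cost, 4))
```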