970 results for variance ratio method


Relevance:

100.00%

Publisher:

Abstract:

This paper proposes an innovative optimized parametric method for constructing prediction intervals (PIs) for uncertainty quantification. The mean-variance estimation (MVE) method employs two separate neural network (NN) models to estimate the mean and variance of the targets. A new training method is developed in this study that adjusts the parameters of the NN models through minimization of a PI-based cost function. A simulated annealing method is applied to minimize the nonlinear, non-differentiable cost function. The performance of the proposed method for PI construction is examined using monthly data sets taken from a wind farm in Australia. PIs for the wind farm power generation are constructed at five confidence levels between 50% and 90%. The results indicate that valid PIs constructed using the optimized MVE method are of much higher quality than the traditional MVE-based PIs.
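
A minimal sketch of how MVE-style prediction intervals can be assembled once a mean and a variance estimate are available, assuming normally distributed targets; the arrays stand in for the outputs of the two NN models and the values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def construct_pis(mean, variance, confidence=0.90):
    """Build symmetric prediction intervals from estimated mean and variance,
    under the usual MVE assumption of normally distributed targets."""
    z = norm.ppf(0.5 + confidence / 2.0)          # two-sided critical value
    half_width = z * np.sqrt(variance)
    return mean - half_width, mean + half_width   # lower and upper bounds

# Hypothetical outputs of the two NN models (mean and variance of wind power)
mean = np.array([12.1, 15.4, 9.8])
variance = np.array([4.0, 6.5, 3.2])
lower, upper = construct_pis(mean, variance, confidence=0.90)
print(lower, upper)
```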

Relevance:

100.00%

Publisher:

Abstract:

A statistically optimized technique for rapid development of reliable prediction intervals (PIs) is presented in this study. The mean-variance estimation (MVE) technique is employed here to quantify the uncertainties associated with wind power predictions. In this method, two separate neural network models are used to estimate wind power generation and its variance. A novel PI-based training algorithm is also presented to enhance the performance of the MVE method and improve the quality of the PIs. For an in-depth analysis, comprehensive experiments are conducted with seasonal datasets taken from three geographically dispersed wind farms in Australia. PIs are constructed at five confidence levels between 50% and 90%. The results show that while both the traditional and the optimized PIs are theoretically valid, the optimized PIs are much more informative than the traditional MVE PIs. The informativeness of these PIs paves the way for their application in the trouble-free operation and smooth integration of wind farms into energy systems. © 2014 Elsevier Ltd. All rights reserved.
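
The validity and informativeness of PIs are commonly judged by their coverage and width. The sketch below, with hypothetical arrays and illustrative function names, shows the two standard diagnostics: PI coverage probability and normalized mean PI width.

```python
import numpy as np

def picp(y, lower, upper):
    """PI coverage probability: fraction of targets falling inside the PIs."""
    return np.mean((y >= lower) & (y <= upper))

def nmpiw(lower, upper, y_range):
    """Normalized mean PI width; narrower PIs are more informative."""
    return np.mean(upper - lower) / y_range

# Hypothetical targets and interval bounds
y = np.array([11.0, 17.2, 10.1])
lower = np.array([8.0, 12.5, 7.9])
upper = np.array([15.0, 19.0, 12.4])
print(picp(y, lower, upper), nmpiw(lower, upper, y_range=y.max() - y.min()))
```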

Relevance:

100.00%

Publisher:

Abstract:

Identifying and comparing different steady states is an important task for clinical decision making. Data from disparate sources, comprising diverse patient status information, have to be interpreted. To compare results, an expressive representation is key. In this contribution we suggest a criterion for calculating a context-sensitive value based on variance analysis and discuss its advantages and limitations with reference to a clinical data example obtained during anesthesia. Different drug plasma target levels of the anesthetic propofol were preset to reach and maintain clinically desirable steady-state conditions with target-controlled infusion (TCI). At the same time, systolic blood pressure was monitored, depth of anesthesia was recorded using the bispectral index (BIS), and propofol plasma concentrations were determined in venous blood samples. The presented analysis of variance (ANOVA) is used to quantify how accurately steady states can be monitored and compared using the three methods of measurement.
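
A minimal sketch of a one-way ANOVA comparing measurements across preset steady states; the readings below are hypothetical and this is a generic ANOVA, not the specific context-sensitive criterion proposed in the abstract.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical systolic blood pressure readings (mmHg) recorded at three
# preset propofol target levels, each treated as one steady state
state_a = np.array([118, 121, 119, 122, 120])
state_b = np.array([111, 113, 110, 114, 112])
state_c = np.array([105, 104, 107, 106, 103])

# One-way ANOVA: a small p-value indicates the steady states can be
# distinguished by this method of measurement
f_stat, p_value = f_oneway(state_a, state_b, state_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```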

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a meta-analysis-based technique to estimate the effect of common method variance on the validity of individual theories. The technique explains between-study variance in observed correlations as a function of the susceptibility to common method variance of the methods employed in individual studies. The technique extends to mono-method studies the concept of method variability underpinning the classic multitrait-multimethod technique. The application of the technique is demonstrated by analyzing the effect of common method variance on the observed correlations between perceived usefulness and usage in the technology acceptance model literature. Implications of the technique and the findings for future research are discussed.
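
One way such a meta-analysis regression might be set up is sketched below: observed study-level correlations are regressed on a method-susceptibility score, weighted by sample size. The variable names and all values are hypothetical; this is an illustration of the idea, not the authors' procedure.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: observed correlation between perceived
# usefulness and usage, sample size, and a susceptibility score for the
# measurement method used in each study
r_obs = np.array([0.55, 0.48, 0.62, 0.35, 0.41])
n = np.array([120, 200, 90, 310, 150])
susceptibility = np.array([0.8, 0.6, 0.9, 0.2, 0.4])

# Weighted least squares: explain between-study variance in correlations
# as a function of method susceptibility, weighting by sample size
X = sm.add_constant(susceptibility)
fit = sm.WLS(r_obs, X, weights=n).fit()
print(fit.params)   # intercept approximates the correlation free of method effects
```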

Relevance:

100.00%

Publisher:

Abstract:

In recent years, the spatial distribution pattern of plant populations has received growing attention from ecologists and has become one of the fastest-developing areas of ecology and a core topic in the development of ecological theory; community vegetation cover and biodiversity are commonly used indicators in ecological research. Plant population distribution patterns and community characteristics are the result of the long-term adaptation and selection of populations and communities under environmental conditions: they are determined partly by the biological traits of the plants themselves and are partly closely related to the habitats in which the populations and communities occur, and they are of great significance for revealing how vegetation adapts to the environment and how the two interact. This thesis takes Artemisia ordosica populations as the study object along the east-to-west precipitation gradient (336-249 mm) of the Ordos Plateau, applying traditional distribution-pattern tests as well as the point pattern method to study the distribution pattern of Artemisia ordosica populations, and using the Gleason, Shannon-Wiener, Pielou and Simpson indices to analyse and compare the biodiversity of Artemisia ordosica communities. The pattern-analysis methods used here are compared in terms of scale, rigour and the trend in aggregation intensity, providing a reference for choosing methods of population distribution-pattern analysis. The distribution-pattern analysis shows that along the gradient of decreasing precipitation, at small scales the Artemisia ordosica population pattern shifts from uniform towards random, while at large scales it shifts from random towards aggregated; the degree of aggregation increases gradually as precipitation declines. The effect of the precipitation gradient on the distribution pattern is partly determined by the biology of Artemisia ordosica itself: in areas with low rainfall, seedlings around the mother plant survive well, giving an aggregated pattern, whereas in areas with high rainfall seedling survival is more even, giving a random pattern. In addition, in areas with higher rainfall soil water is more abundant, individuals are larger and competition dominates, so aggregation is lower; in areas with less rainfall, individuals are smaller and cluster together to jointly withstand the harsh habitat, giving an aggregated pattern. Regression of community FPC and biodiversity indices against mean annual precipitation shows that the greater the precipitation, the higher the vegetation cover and species richness and the lower the evenness and dominance of the community. Abundant rainfall promotes the development of the Artemisia ordosica community: herbaceous plants grow better, biomass increases, and the community structure becomes more complex. The traditional distribution-pattern tests and the point pattern method give highly consistent results when applied to Artemisia ordosica populations, yet in practice each method has its own strengths, weaknesses and range of applicability. When choosing an analysis method for population distribution patterns, the research objective should be fully considered and the method chosen according to the specific species and experimental setting. When the pattern must be analysed at fine scales, the point pattern method is superior to the traditional methods; for large-scale sampling or when efficiency matters, the variance-to-mean ratio method and the aggregation-intensity index methods are more suitable. Because the aggregation-intensity indices are comparatively ambiguous to interpret, the variance-to-mean ratio method is recommended as the first choice, with the aggregation-intensity indices as a reference. Reduced rainfall markedly alters the spatial distribution pattern of plant populations as well as community structure and species composition; the results of this analysis can be used as a reference for the rational configuration of restored vegetation in ecological restoration.
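
A minimal sketch of the variance-to-mean ratio method recommended above, applied to quadrat counts; the counts are hypothetical. The standard index-of-dispersion test is used: under a random (Poisson) pattern the ratio is close to 1, values below 1 suggest a uniform pattern, and values above 1 suggest aggregation.

```python
import numpy as np
from scipy.stats import chi2

def variance_mean_ratio(counts):
    """Variance-to-mean ratio test for spatial pattern from quadrat counts."""
    counts = np.asarray(counts, dtype=float)
    ratio = counts.var(ddof=1) / counts.mean()
    # (n - 1) * ratio is approximately chi-square with n - 1 degrees of
    # freedom under the random (Poisson) null hypothesis
    n = counts.size
    p_aggregated = chi2.sf((n - 1) * ratio, df=n - 1)
    return ratio, p_aggregated

# Hypothetical quadrat counts of Artemisia ordosica individuals
print(variance_mean_ratio([0, 2, 5, 1, 7, 0, 3, 6]))
```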

Relevance:

100.00%

Publisher:

Abstract:

Analysis of the variability in the responses of large structural systems, and quantification of their linearity or nonlinearity as a potential non-invasive means of structural system assessment from output-only conditions, remains a challenging problem. In this study, the Delay Vector Variance (DVV) method is used for full-scale testing of both pseudo-dynamic and dynamic responses of two bridges, in order to study the degree of nonlinearity of their measured response signals. The DVV method detects the presence of determinism and nonlinearity in a time series and is based on examining the local predictability of a signal. The pseudo-dynamic data are obtained from a concrete bridge during repair, while the dynamic data are obtained from a steel railway bridge traversed by a train. We show that DVV is promising as a marker for establishing the degree to which a change in signal nonlinearity reflects a change in the real behaviour of a structure. It is also useful in establishing the sensitivity of the instruments or sensors deployed to monitor such changes. (C) 2015 Elsevier B.V. All rights reserved.
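
A much simplified sketch of the core DVV computation: for a range of distance thresholds, the variance of the targets of all delay vectors lying within that distance of each reference vector is averaged and normalized by the overall target variance. The embedding dimension, threshold span and function name are assumptions, and the surrogate-data comparison used by the full method is omitted.

```python
import numpy as np

def dvv_target_variance(x, m=3, n_thresholds=20):
    """Normalized target variance curve, the central quantity of DVV."""
    x = np.asarray(x, dtype=float)
    # Delay vectors and their one-step-ahead targets
    dvs = np.array([x[k:k + m] for k in range(len(x) - m)])
    targets = x[m:]
    dists = np.linalg.norm(dvs[:, None, :] - dvs[None, :, :], axis=-1)
    # Span of distance thresholds around the mean pairwise distance
    thresholds = np.linspace(dists.mean() - 2 * dists.std(),
                             dists.mean() + 2 * dists.std(), n_thresholds)
    total_var = targets.var()
    curve = []
    for r in thresholds:
        local_vars = [targets[row <= r].var()
                      for row in dists if (row <= r).sum() > 1]
        curve.append(np.mean(local_vars) / total_var if local_vars else np.nan)
    return thresholds, np.array(curve)

# A deterministic test signal: the curve should dip well below 1 at small radii
t, sigma_star = dvv_target_variance(np.sin(np.linspace(0, 20, 300)))
print(sigma_star)
```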

Relevance:

90.00%

Publisher:

Abstract:

In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than those generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparison with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
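
A simplified sketch of the regressor-selection step: candidate terms are ranked greedily by how much each reduces the residual variance of the output. True OLS orthogonalizes the remaining candidates at each step; this deflation-only variant, the function name, and the data are illustrative assumptions.

```python
import numpy as np

def ols_select(candidates, y, n_select):
    """Greedy forward selection of regressor terms by error-variance reduction."""
    residual = y.astype(float).copy()
    chosen = []
    for _ in range(n_select):
        scores = []
        for j in range(candidates.shape[1]):
            if j in chosen:
                scores.append(-np.inf)
                continue
            c = candidates[:, j]
            proj = (residual @ c) / (c @ c) * c      # projection onto term j
            scores.append(np.var(residual) - np.var(residual - proj))
        best = int(np.argmax(scores))
        chosen.append(best)
        c = candidates[:, best]
        residual -= (residual @ c) / (c @ c) * c     # deflate the residual
    return chosen

# Hypothetical candidate partial-description terms and target
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = 2.0 * X[:, 1] - 0.5 * X[:, 4] + rng.normal(scale=0.1, size=100)
print(ols_select(X, y, n_select=2))   # likely picks columns 1 and 4
```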

Relevance:

90.00%

Publisher:

Abstract:

Purpose - The purpose of this paper is to analyse the interdependencies of the house price growth rates in Australian capital cities.
Design/methodology/approach - A vector autoregression model and variance decomposition are introduced to estimate and interpret the interdependences among the growth rates of regional house prices in Australia.
Findings - The results suggest the eight capital cities can be divided into three groups: Sydney and Melbourne; Canberra, Adelaide and Brisbane; and Hobart, Perth and Darwin.
Originality/value - Based on the structural vector autoregression model, this research develops an innovative approach for analysing the interdependence of regional house prices based on a variance decomposition method.
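
A minimal sketch of a VAR with forecast error variance decomposition using statsmodels; the growth-rate series are simulated placeholders, and only three city names are used for brevity.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly house price growth rates for three capital cities
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(80, 3)),
                    columns=["Sydney", "Melbourne", "Brisbane"])

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")   # lag order chosen by AIC

# Forecast error variance decomposition: the share of each city's forecast
# error variance attributed to shocks in the other cities
fevd = results.fevd(10)
print(fevd.summary())
```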

Relevance:

90.00%

Publisher:

Abstract:

Neural networks (NNs) are a popular artificial intelligence technique for solving complicated problems due to their inherent capabilities. However, generalization in NNs can be harmed by a number of factors, including parameter initialization, inappropriate network topology, and the settings of the training process itself. Forecast combinations of NN models have the potential for improved generalization and lower training time. This paper applies a weighted averaging based on the variance-covariance method, which assigns greater weight to the forecasts producing lower error instead of using equal weights. When implementing the method, forecasts are combined using all candidate models in one experiment and using the best selected models in another experiment. It is observed during the empirical analysis that forecasting accuracy is improved by combining the best individual NN models. Another finding of this study is that reducing the number of NN models increases diversity and, hence, accuracy.
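
A minimal sketch of weighting forecasts in inverse proportion to each model's historical error variance, a simplified, covariance-free variant of the variance-covariance combination idea; the error matrix and forecasts are hypothetical.

```python
import numpy as np

def inverse_variance_weights(errors):
    """Combination weights proportional to the inverse of each model's
    historical error variance, so lower-error models get greater weight."""
    inv_var = 1.0 / np.var(errors, axis=0, ddof=1)
    return inv_var / inv_var.sum()

# Hypothetical out-of-sample errors of three candidate NN models (columns)
errors = np.array([[0.4, 1.2, 0.6],
                   [-0.3, 0.9, -0.5],
                   [0.5, -1.1, 0.4],
                   [-0.2, 1.0, -0.6]])
w = inverse_variance_weights(errors)

forecasts = np.array([10.2, 11.5, 10.6])   # next-step forecasts of the models
print(w, forecasts @ w)                     # weights and combined forecast
```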

Relevance:

90.00%

Publisher:

Abstract:

In biomedical studies, the general data structures have been the matched (paired) and unmatched designs. Recently, many researchers have become interested in meta-analysis to obtain a better understanding of a medical treatment from several clinical data sets. The hybrid design, which combines the two data structures, may raise fundamental questions for statistical methods and challenges for statistical inference. The applicable methods depend on the underlying distribution. If the outcomes are normally distributed, we would use the classic paired and two-independent-sample t-tests for the matched and unmatched cases. If not, we can apply the Wilcoxon signed rank and rank sum tests to each case. To assess an overall treatment effect in a hybrid design, we can apply the inverse variance weight method used in meta-analysis. In the nonparametric case, we can use a test statistic that combines the two Wilcoxon test statistics. However, these two test statistics are not on the same scale. We propose a hybrid test statistic based on the Hodges-Lehmann estimates of the treatment effects, which are medians on the same scale. To compare the proposed method, we use the classic meta-analysis t-test statistic based on combining the estimates of the treatment effects from the two t-test statistics. Theoretically, the efficiency of two unbiased estimators of a parameter is the ratio of their variances. Using the concept of asymptotic relative efficiency (ARE) developed by Pitman, we derive the ARE of the hybrid test statistic relative to the classic meta-analysis t-test statistic using the Hodges-Lehmann estimators associated with the two test statistics. From several simulation studies, we calculate the empirical type I error rate and power of the test statistics. The proposed statistic would provide an effective tool to evaluate and understand the treatment effect in various public health studies as well as clinical trials.
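
A minimal sketch of the inverse variance weight method mentioned above, pooling two effect estimates (here standing in for the matched and unmatched parts of a hybrid design); the estimates and variances are hypothetical.

```python
import numpy as np

def inverse_variance_pool(estimates, variances):
    """Inverse-variance weighted combination of treatment-effect estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    pooled_var = 1.0 / np.sum(w)
    return pooled, pooled_var

# Hypothetical effect estimates (on the same scale) and their variances
# from the matched and unmatched samples
pooled, pooled_var = inverse_variance_pool([1.8, 2.3], [0.25, 0.40])
print(pooled, pooled_var ** 0.5)   # pooled effect and its standard error
```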

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, perhaps controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We use local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data-generating process underlying the series and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
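
A minimal sketch of a moving-window, percentile-based VaR of the kind used as a benchmark here (historical simulation); the empirical percentile stands in for the percentile of a GLD fitted over the same window, and the simulated heavy-tailed returns are illustrative only.

```python
import numpy as np

def rolling_var(returns, window=250, level=0.99):
    """Moving-window percentile VaR: at each date, VaR is the empirical
    lower-tail percentile of the previous `window` returns."""
    returns = np.asarray(returns, dtype=float)
    var = np.full(returns.shape, np.nan)
    for t in range(window, len(returns)):
        var[t] = -np.percentile(returns[t - window:t], 100 * (1 - level))
    return var

rng = np.random.default_rng(2)
r = rng.standard_t(df=4, size=1000) * 0.01   # heavy-tailed daily returns
print(rolling_var(r)[-1])                     # latest one-day 99% VaR
```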

Relevance:

80.00%

Publisher:

Abstract:

Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with the maximum determinant of the variance-covariance matrix of predictions.
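
A minimal sketch of the one-point-at-a-time augmentation rule, assuming a Gaussian process emulator of the computer model; the toy model, candidate set, and all numbers are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical computer-model runs: inputs X and outputs y
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(12, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# Emulate the computer model with a Gaussian process
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

# Candidate set of possible new runs; pick the point where the emulator's
# prediction variance is largest, i.e. where the prediction is least certain
candidates = rng.uniform(0, 1, size=(500, 2))
_, std = gp.predict(candidates, return_std=True)
next_run = candidates[np.argmax(std)]
print(next_run)
```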

Relevance:

80.00%

Publisher:

Abstract:

The map method, the Jones method, the variance-covariance method, and the Skellam method were used to study the migrations of tagged yellowfin tuna released off the southern coast of Mexico in 1960 and 1969. The first three methods are all useful, and each presents information which is complementary to that presented by the others. The Skellam method, as used in this report, is less useful. The movements of the tagged fish released in 1960 appeared to have been strongly directed, but this was probably caused principally by the distribution of the fishing effort. The effort was much more widely distributed in 1970, and the movements of the fish released in 1969 appeared to have been much less directed. The correlation coefficients derived from the variance-covariance method showed that the movement was not random, however. The small fish released in the Acapulco and 10°N-100°W areas in 1969 migrated to the Manzanillo area near the beginning of February 1970. The medium and large fish released in the same areas in the same year tended, however, to migrate to the southeast throughout the first half of 1970. (PDF contains 64 pages.)
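
A minimal sketch of the kind of calculation behind a variance-covariance treatment of tag displacements: the covariance matrix of the displacement components and the implied correlation coefficient, which indicates whether movement is directed rather than random. The displacement values are hypothetical.

```python
import numpy as np

# Hypothetical net displacements (degrees of longitude and latitude) of
# recaptured tagged fish, measured from release to recapture position
dx = np.array([-1.2, -0.8, -1.5, -0.3, -1.1])
dy = np.array([0.4, 0.2, 0.6, -0.1, 0.5])

# Variance-covariance matrix of the displacement components; the off-diagonal
# term and the implied correlation coefficient suggest whether movement in
# the two directions is related (i.e. directed rather than random)
cov = np.cov(dx, dy)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(cov)
print(f"correlation of x and y displacements: {corr:.2f}")
```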

Relevance:

80.00%

Publisher:

Abstract:

This study presents a methods evaluation and intercalibration of active fluorescence-based measurements of the quantum yield and absorption coefficient of photosystem II (PSII) photochemistry. Measurements of these two quantities, together with irradiance (E), can be scaled to derive photosynthetic electron transport rates, the process that fuels phytoplankton carbon fixation and growth. Bio-optical estimates of the quantum yield and absorption coefficient were evaluated using 10 phytoplankton cultures across different pigment groups with varying bio-optical absorption characteristics on six different fast-repetition-rate fluorometers that span two different manufacturers and four different models. Culture measurements of the quantum yield and of the effective absorption cross section of PSII photochemistry (a constituent of the PSII absorption coefficient) showed a high degree of correspondence across instruments, although some instrument-specific biases are identified. A range of approaches has been used in the literature to estimate the PSII absorption coefficient, and these are evaluated here. With the exception of ex situ estimates derived from paired measurements of the absorption coefficient and the PSII reaction center concentration, the accuracy and precision of the in situ methodologies are largely determined by the variance of method-specific coefficients. The accuracy and precision of these coefficients are evaluated, compared to literature data, and discussed within a framework of autonomous measurements. This study supports the application of an instrument-specific calibration coefficient that scales minimum fluorescence in the dark to the PSII absorption coefficient as both the most accurate in situ measurement of that coefficient and the methodology best suited for highly resolved autonomous measurements.
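
As a rough arithmetic illustration of the scaling described above, one common formulation multiplies the PSII absorption coefficient, the PSII photochemical quantum yield, and the irradiance to obtain a volumetric electron transport rate; the formulation is assumed here and all values are hypothetical.

```python
# Illustrative scaling of fluorescence-based quantities to an electron
# transport rate (values and units are placeholders, not measured data)
a_psii = 0.012        # PSII absorption coefficient (m^-1), hypothetical
phi_psii = 0.45       # quantum yield of PSII photochemistry, dimensionless
irradiance = 300.0    # irradiance, umol photons m^-2 s^-1, hypothetical

etr = a_psii * phi_psii * irradiance
print(f"volumetric electron transport rate ~ {etr:.2f} (relative units)")
```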