889 results for Forecast error variance
Abstract:
Individual differences in the variance of event-related potential (ERP) slow wave (SW) measures were examined. SW was recorded at prefrontal and parietal sites during memory and sensory trials of a delayed-response task in 391 adolescent twin pairs. Familial resemblance was identified and there was a strong suggestion of genetic influence. A common genetic factor influencing memory and sensory SW was identified at the prefrontal site (accounting for an estimated 35%-37% of the reliable variance) and at the parietal site (51%-52% of the reliable variance). Remaining reliable variance was influenced by unique environmental factors. Measurement error accounted for 24% to 30% of the total variance of each variable. The results show genetic independence for recording site, but not trial type, and suggest that the genetic factors identified relate more directly to brain structures, as defined by the cognitive functions they support, than to the cognitive networks that link them.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using whatever simulation packages are available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: first, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors; second, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals; finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
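The subspace-isolation step described above can be illustrated with a minimal sketch. The observer design itself is omitted, and the feature matrices, class names, and residual below are hypothetical stand-ins, not taken from the paper: each error class spans a subspace, and a residual is attributed to the class whose subspace it projects onto with the smallest projection error.

```python
import numpy as np

def isolate_error_class(residual, feature_mats):
    """Attribute a residual vector to the error class whose feature
    subspace (column span of F) it lies closest to."""
    best_name, best_err = None, np.inf
    for name, F in feature_mats.items():
        P = F @ np.linalg.pinv(F)           # orthogonal projector onto span(F)
        err = np.linalg.norm(residual - P @ residual)
        if err < best_err:
            best_name, best_err = name, err
    return best_name

# Hypothetical 3-dimensional residual space with two single-column feature matrices.
classes = {
    "kinetic-rate error":  np.array([[1.0], [0.0], [0.0]]),
    "stoichiometry error": np.array([[0.0], [1.0], [0.0]]),
}
r = np.array([0.05, 2.0, 0.1])              # residual dominated by the 2nd direction
print(isolate_error_class(r, classes))      # -> stoichiometry error
```

The projector-based distance is one standard way to localise a vector in a subspace; the paper's residual generator would supply `r` and the feature matrices.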
Abstract:
In population pharmacokinetic studies, the precision of parameter estimates is dependent on the population design. Methods based on the Fisher information matrix have been developed and extended to population studies to evaluate and optimize designs. In this paper we propose simple programming tools to evaluate population pharmacokinetic designs. This involved the development of an expression for the Fisher information matrix for nonlinear mixed-effects models, including estimation of the variance of the residual error. We implemented this expression as a generic function for two software applications: S-PLUS and MATLAB. The evaluation of population designs based on two pharmacokinetic examples from the literature is shown to illustrate the efficiency and the simplicity of this theoretic approach. Although no optimization method of the design is provided, these functions can be used to select and compare population designs among a large set of possible designs, avoiding a lot of simulations.
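As a hedged sketch of the underlying idea (not the paper's S-PLUS/MATLAB function), the individual Fisher information matrix for a simple monoexponential model f(t) = A·exp(−k·t) with additive error can be assembled from the sensitivity vectors at each sampling time; the random-effects terms of the full population expression are omitted, and the parameter values are illustrative.

```python
import numpy as np

def fim_monoexp(theta, times, sigma):
    """Fisher information matrix for f(t) = A*exp(-k*t) with additive
    N(0, sigma^2) error: M = (1/sigma^2) * sum_j g_j g_j^T, where g_j
    is the gradient of f with respect to (A, k) at sampling time t_j."""
    A, k = theta
    M = np.zeros((2, 2))
    for t in times:
        g = np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])  # [df/dA, df/dk]
        M += np.outer(g, g) / sigma**2
    return M

# One sampling time cannot identify two parameters (singular FIM);
# two distinct times can, and det(M) then serves as a D-optimality score
# for comparing candidate designs without simulation.
one_pt = fim_monoexp((10.0, 0.2), [1.0], 1.0)
two_pt = fim_monoexp((10.0, 0.2), [1.0, 5.0], 1.0)
print(np.linalg.det(one_pt), np.linalg.det(two_pt))
```

Comparing det(M) across candidate sampling schedules is the kind of design ranking the paper's functions enable for full nonlinear mixed-effects models.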
Abstract:
The repeatability of initial values and rate of change of EMG signal mean spectral frequency (MNF), average rectified value (ARV), muscle fiber conduction velocity (CV) and maximal voluntary contraction (MVC) was investigated in the vastus medialis obliquus (VMO) and vastus lateralis (VL) muscles of both legs of nine healthy male subjects during voluntary, isometric contractions sustained for 50 s at 50% MVC. The values of MVC were recorded for both legs three times on each day for three subsequent days, while the EMG signals were recorded twice a day for three subsequent days. The degree of repeatability was investigated using the Fisher test based upon the analysis of variance (ANOVA), the standard error of measurement (SEM) and the intraclass correlation coefficient (ICC). The data collected showed a high level of repeatability of the MVC measurement (normalized SEM from 1.1% to 6.4% of the mean). MNF and ARV initial values also showed a high level of repeatability (ICC > 70% for all muscles and legs except the right VMO). At the 50% MVC level no relevant pattern of fatigue was observed for the VMO and VL muscles, suggesting that other portions of the quadriceps might have contributed to the generated effort. These observations suggest that in the investigation of muscles belonging to a multi-muscular group at submaximal level, the more selective electrically elicited contractions should be preferred to voluntary contractions. (C) 2001 Elsevier Science Ltd. All rights reserved.
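The repeatability indices used here can be sketched generically; this is a one-way ANOVA ICC with SEM taken as SD·sqrt(1 − ICC), a common convention in repeatability work, and not necessarily the exact variant computed in the study.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC from a subjects x repeated-trials array."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def sem(data):
    """Standard error of measurement under the SD*sqrt(1 - ICC) convention."""
    return np.asarray(data, float).std(ddof=1) * np.sqrt(1 - icc_oneway(data))

# Perfectly repeated trials give ICC = 1 and SEM = 0.
print(icc_oneway([[1, 1], [2, 2], [3, 3]]), sem([[1, 1], [2, 2], [3, 3]]))
```

An ICC near 1 (or SEM near 0% of the mean) corresponds to the high repeatability reported for the MVC and initial MNF/ARV values.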
Abstract:
We introduce a model for the dynamics of a patchy population in a stochastic environment and derive a criterion for its persistence. This criterion is based on the geometric mean (GM) through time of the spatial-arithmetic mean of growth rates. For the population to persist, the GM has to be greater than or equal to 1. The GM increases with the number of patches (because the sampling error is reduced) and decreases with both the variance and the spatial covariance of growth rates. We derive analytical expressions for the minimum number of patches (and the maximum harvesting rate) required for the persistence of the population. As the magnitude of environmental fluctuations increases, the number of patches required for persistence increases, and the fraction of individuals that can be harvested decreases. The novelty of our approach is that we focus on Malthusian local population dynamics with high dispersal and strong environmental variability from year to year. Unlike previous models of patchy populations that assume an infinite number of patches, we focus specifically on the effect that the number of patches has on population persistence. Our work is therefore directly relevant to patchily distributed organisms that are restricted to a small number of habitat patches.
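A minimal numerical sketch of the persistence criterion (using hypothetical lognormal growth rates, not the paper's parameterization): the GM through time of the spatial mean of growth rates rises with the number of patches as sampling error shrinks, moving the population away from the GM = 1 persistence boundary.

```python
import numpy as np

def gm_of_spatial_mean(growth_rates):
    """Geometric mean over years of the arithmetic mean across patches;
    persistence requires this quantity to be >= 1."""
    spatial_means = growth_rates.mean(axis=1)   # rows = years, cols = patches
    return np.exp(np.mean(np.log(spatial_means)))

rng = np.random.default_rng(1)
years = 2000
draw = lambda patches: rng.lognormal(mean=0.0, sigma=0.5, size=(years, patches))

gm_1 = gm_of_spatial_mean(draw(1))    # single patch: GM near exp(0) = 1
gm_50 = gm_of_spatial_mean(draw(50))  # many patches: GM approaches E[r] = exp(sigma^2/2)
print(gm_1, gm_50)
```

With one patch the GM sits at the persistence boundary, while averaging over 50 patches pushes it well above 1, illustrating why the minimum number of patches grows with environmental variability.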
Abstract:
Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third order statistics of these points of attraction are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
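Ultrametricity of a set of points can be checked triple by triple: a distance matrix is ultrametric when, for every triple, the two largest of the three pairwise distances are equal. A small illustrative check (the networks' error-surface distances themselves are not reproduced here; the matrices below are toy examples):

```python
from itertools import combinations

def ultrametric_fraction(D, tol=1e-9):
    """Fraction of point triples whose two largest pairwise distances
    are equal (the triple form of the ultrametric inequality)."""
    n = len(D)
    ok = total = 0
    for a, b, c in combinations(range(n), 3):
        d = sorted([D[a][b], D[b][c], D[a][c]])
        ok += abs(d[2] - d[1]) <= tol   # isosceles with the two long sides equal
        total += 1
    return ok / total

# A two-cluster tree metric is perfectly ultrametric; three points on a
# line (distances 1, 2, 3) are not.
tree = [[0, 1, 2, 2],
        [1, 0, 2, 2],
        [2, 2, 0, 1],
        [2, 2, 1, 0]]
line = [[0, 1, 3],
        [1, 0, 2],
        [3, 2, 0]]
print(ultrametric_fraction(tree), ultrametric_fraction(line))  # -> 1.0 0.0
```

Sampling this fraction over distances between points of attraction is one simple way to quantify the "highly ultrametric" arrangement the abstract describes.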
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
Abstract:
This article develops a weighted least squares version of Levene's test of homogeneity of variance for a general design, available both for univariate and multivariate situations. When the design is balanced, the univariate and two common multivariate test statistics turn out to be proportional to the corresponding ordinary least squares test statistics obtained from an analysis of variance of the absolute values of the standardized mean-based residuals from the original analysis of the data. The constant of proportionality is simply a design-dependent multiplier (which does not necessarily tend to unity). Explicit results are presented for randomized block and Latin square designs and are illustrated for factorial treatment designs and split-plot experiments. The distribution of the univariate test statistic is close to a standard F-distribution, although it can be slightly underdispersed. For a complex design, the test assesses homogeneity of variance across blocks, treatments, or treatment factors and offers an objective interpretation of the residual plot.
Abstract:
Latitudinal clines provide natural systems that may allow the effect of natural selection on the genetic variance to be determined. Ten clinal populations of Drosophila serrata collected from the eastern coast of Australia were used to examine clinal patterns in the trait mean and genetic variance of the life-history trait egg-to-adult development time. Development time significantly lengthened from tropical areas to temperate areas. The additive genetic variance for development time in each population was not associated with latitude but was associated with the population mean development time. Additive genetic variance tended to be larger in populations with more extreme development times and appeared to be consistent with allele frequency change. In contrast, the nonadditive genetic variance was not associated with the population mean but was associated with latitude. Levels of nonadditive genetic variance were greatest in the region of the cline where the gradient in the change in mean was greatest, consistent with Barton's (1999) conjecture that the generation of linkage disequilibrium may become an important component of the genetic variance in systems with a spatially varying optimum.
Abstract:
Regional commodity forecasts are being used increasingly in agricultural industries to enhance their risk management and decision-making processes. These commodity forecasts are probabilistic in nature and are often integrated with a seasonal climate forecast system. The climate forecast system is based on a subset of analogue years drawn from the full climatological distribution. In this study we sought to measure forecast quality for such an integrated system. We investigated the quality of a commodity (i.e. wheat and sugar) forecast based on a subset of analogue years in relation to a standard reference forecast based on the full climatological set. We derived three key dimensions of forecast quality for such probabilistic forecasts: reliability, distribution shift, and change in dispersion. A measure of reliability was required to ensure no bias in the forecast distribution. This was assessed via the slope of the reliability plot, which was derived from examination of probability levels of forecasts and associated frequencies of realizations. The other two dimensions related to changes in features of the forecast distribution relative to the reference distribution. The relationship of 13 published accuracy/skill measures to these dimensions of forecast quality was assessed using principal component analysis in case studies of commodity forecasting using seasonal climate forecasting for the wheat and sugar industries in Australia. There were two orthogonal dimensions of forecast quality: one associated with distribution shift relative to the reference distribution and the other associated with relative distribution dispersion. Although the conventional quality measures aligned with these dimensions, none measured both adequately. We conclude that a multi-dimensional approach to assessment of forecast quality is required and that simple measures of reliability, distribution shift, and change in dispersion provide a means for such assessment. 
The analysis presented was also relevant to measuring quality of probabilistic seasonal climate forecasting systems. The importance of retaining a focus on the probabilistic nature of the forecast and avoiding simplifying, but erroneous, distortions was discussed in relation to applying this new forecast quality assessment paradigm to seasonal climate forecasts. Copyright (C) 2003 Royal Meteorological Society.
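The reliability-slope dimension can be sketched for binary realizations (a simplified stand-in for the probabilistic commodity forecasts in the study): bin forecast probabilities, compare each bin's mean forecast with the realized frequency, and fit a line. A slope near 1 indicates an unbiased, reliable forecast distribution.

```python
import numpy as np

def reliability_slope(forecast_probs, outcomes, n_bins=10):
    """Slope of realized frequency vs mean forecast probability across
    probability bins; a slope near 1 indicates a reliable forecast."""
    p = np.asarray(forecast_probs, float)
    y = np.asarray(outcomes, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    xs, ys = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            xs.append(p[mask].mean())
            ys.append(y[mask].mean())
    return np.polyfit(xs, ys, 1)[0]

# Hypothetical realizations whose frequencies match the forecasts exactly.
probs = [0.2] * 5 + [0.8] * 5
outcomes = [1, 0, 0, 0, 0, 1, 1, 1, 1, 0]
print(reliability_slope(probs, outcomes))  # -> ~1.0 (perfectly reliable)
```

Distribution shift and change in dispersion, the other two dimensions, would be computed by comparing moments of the forecast distribution against the full climatological reference.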
Abstract:
Background and aims: Hip fracture is a devastating event in terms of outcome in the elderly, and the best predictor of hip fracture risk is hip bone density, usually measured by dual X-ray absorptiometry (DXA). However, bone density can also be ascertained from computerized tomography (CT) scans, and mid-thigh scans are frequently employed to assess the muscle and fat composition of the lower limb. Therefore, we examined if it was possible to predict hip bone density using mid-femoral bone density. Methods: Subjects were 803 ambulatory white and black women and men, aged 70-79 years, participating in the Health, Aging and Body Composition (Health ABC) Study. Bone mineral content (BMC, g) and volumetric bone mineral density (vBMD, mg/cm³) of the mid-femur were obtained by CT, whereas BMC and areal bone mineral density (aBMD, g/cm²) of the hip (femoral neck and trochanter) were derived from DXA. Results: In regression analyses stratified by race and sex, the coefficient of determination was low, with mid-femoral BMC explaining 6-27% of the variance in hip BMC and a standard error of estimate (SEE) ranging from 16 to 22% of the mean. For mid-femur vBMD, the variance explained in hip aBMD was 2-17%, with a SEE ranging from 15 to 18%. Adjusting aBMD to approximate volumetric density did not improve the relationships. In addition, the utility of fracture prediction was examined. Forty-eight subjects had one or more fractures (various sites) during a mean follow-up of 4.07 years. In logistic regression analysis, there was no association between mid-femoral vBMD and fracture (all fractures), whereas a 1 SD increase in hip BMD was associated with approximately 60% reduced odds for fracture. Conclusions: These results do not support the use of CT-derived mid-femoral vBMD or BMC to predict DXA-measured hip bone mineral status, irrespective of race or sex in older adults.
Further, in contrast to femoral neck and trochanter BMD, mid-femur vBMD was not able to predict fracture (all fractures). (C) 2003, Editrice Kurtis.
Abstract:
Whey is a by-product of cheese manufacture, whether by acidification or by an enzymatic process. Under suitable conditions the milk casein aggregates into a gel which, once cut, induces the separation and release of the whey. Whey is used in many forms throughout the food industry and is rich in lactose, mineral salts, and proteins. Dehydration is one of the main processes used for its processing and transformation. The objective of this work was therefore to evaluate the influence of the drying methods freeze-drying, foam-mat drying (at temperatures of 40, 50, 60, 70, and 80°C), and spray-drying (at temperatures of 55, 60, 65, 70, and 75°C) on the moisture, protein, colour, and solubility characteristics of the whey, as well as to study its drying process. The whey was obtained and dehydrated after concentration by reverse osmosis, testing 11 treatments in 3 replicates in a completely randomized design. The results showed that the mathematical model with the best fit was the Page model, with an adjusted coefficient of determination above 0.98 and a standard error of regression below 0.04 at all temperatures for the foam-mat method. For the freeze-drying method the respective values were 0.9975 and 0.01612. From this, a generalized mathematical model could be constructed, with a coefficient of determination of 0.9888. For foam-mat drying, it was observed that as the drying-air temperature increased, the drying time decreased and the effective diffusion coefficient increased; however, the reduction in drying time between temperature intervals diminished as the temperature rose.
The activation energy for diffusion in the whey drying process was 26.650 kJ/mol, and for all the physicochemical and technological evaluations the analysis of variance gave a significant F value (p<0.05), indicating that at least one contrast between treatment means is significant.
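The Page model fit referred to above can be sketched via the usual log-linearization, ln(−ln MR) = ln k + n·ln t, solved by ordinary least squares; the parameter values below are illustrative, not those estimated in the work.

```python
import numpy as np

def fit_page(t, mr):
    """Fit the Page thin-layer drying model MR(t) = exp(-k * t**n)
    by least squares on the linearized form ln(-ln MR) = ln k + n ln t."""
    t, mr = np.asarray(t, float), np.asarray(mr, float)
    n, ln_k = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
    return np.exp(ln_k), n   # (k, n)

# Synthetic moisture-ratio curve with illustrative k = 0.05, n = 1.2;
# an exact curve is recovered exactly by the linearized fit.
t = np.arange(1.0, 11.0)
mr = np.exp(-0.05 * t**1.2)
k_hat, n_hat = fit_page(t, mr)
print(k_hat, n_hat)
```

With experimental data, goodness of fit would then be judged by the adjusted coefficient of determination and the standard error of regression, as in the study.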