959 results for "Exponential load model"
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In this paper, a new family of survival distributions is presented. It is derived by assuming that the latent number of failure causes follows a Poisson distribution and that the times for these causes to be activated follow an exponential distribution. Three different activation schemes are also considered. Moreover, we propose the inclusion of covariates in the model formulation in order to study their effect on the expected number of causes and on the failure rate function. An inferential procedure based on the maximum likelihood method is discussed and evaluated via simulation. The developed methodology is illustrated on a real data set on ovarian cancer.
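The latent-causes construction above can be sketched by simulation: draw a Poisson number of causes, give each an exponential activation time, and record the first (or last) activation. This is an illustrative sketch only; the function and parameter names (`theta`, `lam`, `scheme`) are ours, and the zero-truncation of the Poisson count is an assumption about how the cure fraction is handled.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lifetimes(theta, lam, n, scheme="first"):
    """Sketch of the latent-causes model: M ~ Poisson(theta) causes,
    each activated after an Exp(lam) time. scheme picks first- or
    last-activation; names and truncation are illustrative assumptions."""
    times = []
    for _ in range(n):
        m = rng.poisson(theta)
        while m == 0:                      # condition on at least one cause
            m = rng.poisson(theta)
        t = rng.exponential(1.0 / lam, size=m)
        times.append(t.min() if scheme == "first" else t.max())
    return np.array(times)

sample = simulate_lifetimes(theta=2.0, lam=0.5, n=10_000)
```

Under the first-activation scheme the observed lifetime is the minimum over the latent causes, so larger `theta` shortens the expected failure time.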
Abstract:
In this paper, a scatter search, a metaheuristic-based algorithm, is proposed to solve the reconfiguration problem of radial distribution systems. The codification process of this algorithm uses a structure called the node-depth representation; through its operators, and from the electrical power system point of view, the search generates only radial topologies. To show the effectiveness, usefulness, and efficiency of the proposed method, tests are conducted on a commonly used 135-bus test system and on a practical 7052-bus system, part of Sao Paulo state's distribution network. The results confirm that the proposed algorithm can find high-quality solutions satisfying all the physical and operational constraints of the problem.
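The core scatter-search loop (maintain a small reference set of elite solutions, combine pairs, improve, and update the set) can be sketched on a toy continuous objective. This is a generic illustration only: it does not use the paper's node-depth encoding or power-flow constraints, and all names and parameters are ours.

```python
import random

def scatter_search(obj, dim, iters=200, ref_size=5, seed=0):
    """Minimal scatter-search sketch: keep a reference set of the best
    solutions, combine pairs, locally improve, update the set.
    (Illustrative; not the paper's node-depth-encoded variant.)"""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(20)]
    ref = sorted(pop, key=obj)[:ref_size]          # initial reference set
    for _ in range(iters):
        a, b = rng.sample(ref, 2)
        # combination: midpoint of two reference solutions plus noise
        child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
        # improvement: greedy coordinate perturbation
        for i in range(dim):
            trial = child[:]
            trial[i] += rng.gauss(0, 0.05)
            if obj(trial) < obj(child):
                child = trial
        ref = sorted(ref + [child], key=obj)[:ref_size]
    return ref[0]

best = scatter_search(lambda x: sum(v * v for v in x), dim=3)
```

In the reconfiguration setting, the combination and improvement operators would instead act on radial topologies via the node-depth representation, which is what keeps every candidate feasible.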
Abstract:
Short-term electricity load forecasting is very important for the operation of power systems. In this work, a classical exponential smoothing model, Holt-Winters with double seasonality, was applied to the Portuguese demand time series to test for accurate predictions. Several metaheuristic algorithms were used for the optimal selection of the smoothing parameters of the Holt-Winters forecast function; testing on the time series showed little difference among methods, so simple local search algorithms are recommended, as they are easier to implement.
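The "simple local search over smoothing parameters" idea can be sketched with a one-parameter case: hill-climb on the smoothing constant of simple exponential smoothing, minimizing one-step-ahead squared error. This is a stand-in for the full double-seasonal Holt-Winters objective (which has several parameters); the series, step sizes, and function names are illustrative assumptions.

```python
import numpy as np

def ses_sse(alpha, y):
    """One-step-ahead SSE of simple exponential smoothing
    (stand-in for the full double-seasonal Holt-Winters objective)."""
    level, sse = y[0], 0.0
    for obs in y[1:]:
        sse += (obs - level) ** 2
        level = alpha * obs + (1 - alpha) * level
    return sse

def local_search(y, alpha=0.5, step=0.25, tol=1e-4):
    """Simple hill-climbing on the smoothing parameter."""
    while step > tol:
        best = min((alpha - step, alpha, alpha + step),
                   key=lambda a: ses_sse(a, y) if 0 < a < 1 else np.inf)
        if best == alpha:
            step /= 2                      # shrink the neighbourhood
        alpha = best
    return alpha

rng = np.random.default_rng(1)
y = 10 + np.cumsum(rng.normal(0, 0.1, 500))   # synthetic demand-like series
alpha_hat = local_search(y)
```

For the real Holt-Winters double-seasonal model, the same loop would run over the vector of level, trend, and two seasonal smoothing parameters.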
Abstract:
In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate, the complementary exponential geometric distribution, which is complementary to the exponential geometric model proposed by Adamidis and Loukas (1998). The new distribution arises in a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulas for its reliability and failure rate functions, moments (including the mean and variance), coefficient of variation, and modal value. Parameter estimation is based on the usual maximum likelihood approach. We report the results of a misspecification simulation study performed in order to assess the extent of misspecification errors when testing the exponential geometric distribution against our complementary one under different sample sizes and censoring percentages. The methodology is illustrated on four real datasets; we also make a comparison between both modeling approaches. (C) 2011 Elsevier B.V. All rights reserved.
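The complementary construction (maximum of a geometric number of exponential lifetimes) yields a closed-form CDF that can be checked by simulation. The parameterisation below — M ~ Geometric(p) on {1, 2, ...} with F(t) = p·u/(1 − (1 − p)·u), u = 1 − e^(−λt) — is our sketch of the standard form, not a transcription from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def ceg_cdf(t, lam, p):
    """CDF of the max of a Geometric(p) number of Exp(lam) lifetimes
    (sketch of the complementary exponential-geometric form;
    parameterisation is an assumption)."""
    u = 1 - np.exp(-lam * t)
    return p * u / (1 - (1 - p) * u)

def ceg_sample(lam, p, n):
    """Simulate from the latent complementary-risks construction."""
    m = rng.geometric(p, size=n)                       # latent risk counts
    return np.array([rng.exponential(1 / lam, k).max() for k in m])

lam, p = 1.0, 0.4
x = ceg_sample(lam, p, 20_000)
emp = (x <= 1.0).mean()       # empirical P(T <= 1)
theo = ceg_cdf(1.0, lam, p)   # closed-form counterpart
```

The closed form follows from F(t) = E[u^M] with u = P(single risk ≤ t), summing the geometric series over M ≥ 1.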
Abstract:
PURPOSE: To determine whether a mono-, bi- or tri-exponential model best fits the intravoxel incoherent motion (IVIM) diffusion-weighted imaging (DWI) signal of normal livers. MATERIALS AND METHODS: The pilot and validation studies were conducted in 38 and 36 patients with normal livers, respectively. The DWI sequence was performed using single-shot echoplanar imaging with 11 (pilot study) and 16 (validation study) b values. In each study, data from all patients were used to model the IVIM signal of normal liver. Diffusion coefficients (Di ± standard deviations) and their fractions (fi ± standard deviations) were determined from each model. The models were compared using the extra sum-of-squares test and information criteria. RESULTS: The tri-exponential model provided a better fit than both the bi- and mono-exponential models. The tri-exponential IVIM model determined three diffusion compartments: a slow (D1 = 1.35 ± 0.03 × 10⁻³ mm²/s; f1 = 72.7 ± 0.9 %), a fast (D2 = 26.50 ± 2.49 × 10⁻³ mm²/s; f2 = 13.7 ± 0.6 %) and a very fast (D3 = 404.00 ± 43.7 × 10⁻³ mm²/s; f3 = 13.5 ± 0.8 %) diffusion compartment [results from the validation study]. The very fast compartment contributed to the IVIM signal only for b values ≤ 15 s/mm². CONCLUSION: The tri-exponential model provided the best fit for IVIM signal decay in the liver over the 0-800 s/mm² range. In IVIM analysis of normal liver, a third very fast (pseudo)diffusion component might be relevant. KEY POINTS: • For normal liver, the tri-exponential IVIM model might be superior to the bi-exponential one. • A very fast compartment (D = 404.00 ± 43.7 × 10⁻³ mm²/s; f = 13.5 ± 0.8 %) is determined from the tri-exponential model. • This compartment contributes to the IVIM signal only for b ≤ 15 s/mm².
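The point that a mono-exponential model cannot capture the fast components can be illustrated numerically: build a noiseless tri-exponential signal from the compartments reported above, fit the best mono-exponential by log-linear least squares, and note the large residual. The b-value grid is our choice; only the D and f values come from the abstract.

```python
import numpy as np

# compartments from the validation study (fractions f, D in mm^2/s)
f = np.array([0.727, 0.137, 0.135])
d = np.array([1.35e-3, 26.5e-3, 404e-3])
b = np.array([0, 5, 10, 15, 25, 50, 100, 200, 400, 600, 800], float)

# tri-exponential IVIM signal S(b) = sum_i f_i * exp(-b * D_i)
signal = (f * np.exp(-np.outer(b, d))).sum(axis=1)

# best mono-exponential S = S0 * exp(-b * D) via log-linear least squares
A = np.vstack([np.ones_like(b), -b]).T
(log_s0, d_mono), *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
mono = np.exp(log_s0) * np.exp(-b * d_mono)

sse_mono = ((signal - mono) ** 2).sum()   # clearly non-zero residual
```

Most of the mono-exponential misfit occurs at low b values, where the fast and very fast compartments dominate the decay — consistent with the very fast compartment mattering only for b ≤ 15 s/mm².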
Abstract:
Graduate program in Electrical Engineering - FEIS
Abstract:
Different codons encoding the same amino acid are not used equally in protein-coding sequences. In bacteria, there is a bias towards codons with high translation rates. This bias is most pronounced in highly expressed proteins, but a recent study of synthetic GFP-coding sequences did not find a correlation between codon usage and GFP expression, suggesting that such correlation in natural sequences is not a simple property of translational mechanisms. Here, we investigate the effect of evolutionary forces on codon usage. The relation between codon bias and protein abundance is quantitatively analyzed based on the hypothesis that codon bias evolved to ensure the efficient usage of ribosomes, a precious commodity for fast-growing cells. An explicit fitness landscape is formulated based on bacterial growth laws to relate protein abundance and ribosomal load. The model leads to a quantitative relation between codon bias and protein abundance, which accounts for a substantial part of the observed bias for E. coli. Moreover, by providing an evolutionary link, the ribosome load model resolves the apparent conflict between the observed relation of protein abundance and codon bias in natural sequences and the lack of such dependence in a synthetic GFP library. Finally, we show that the relation between codon usage and protein abundance can be used to predict protein abundance from genomic sequence data alone without adjustable parameters.
Abstract:
We consider inference in randomized studies, in which repeatedly measured outcomes may be informatively missing due to drop out. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the drop-out mechanism and posit an exponential tilt model that links non-identifiable and identifiable distributions. This model is indexed by non-identified parameters, which are assumed to have an informative prior distribution, elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated and applied to data from the Breast Cancer Prevention Trial.
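The exponential tilt that links the identifiable (observed) distribution to the non-identifiable (missing) one can be sketched for a discrete outcome: the missing-data distribution is proportional to exp(α·y) times the observed one, with α the non-identified sensitivity parameter that receives an informative prior. The outcome levels and probabilities below are hypothetical.

```python
import numpy as np

def exponential_tilt(p_obs, y, alpha):
    """Tilt an identified distribution p_obs(y) into a non-identified one:
    p_mis(y) proportional to exp(alpha * y) * p_obs(y).
    alpha is the sensitivity parameter (sketch; values are hypothetical)."""
    w = np.exp(alpha * y) * p_obs
    return w / w.sum()          # renormalise to a probability distribution

y = np.array([0.0, 1.0, 2.0, 3.0])       # hypothetical outcome levels
p_obs = np.array([0.4, 0.3, 0.2, 0.1])   # distribution among completers

p_mis = exponential_tilt(p_obs, y, alpha=-0.5)   # dropouts skew lower
```

At α = 0 the two distributions coincide (missing at random); a negative α encodes the expert belief that dropouts tend to have lower outcomes.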
Abstract:
Over a number of years, as the Higher Education Funding Council for England (HEFCE)'s funding models became more transparent, Aston University was able to discover how its funding for teaching and research was calculated. This enabled calculations to be made on the funds earned by each school in the University, and Aston Business School (ABS) in turn to develop models to calculate the funds earned by its programmes and academic groups. These models were a 'load' and a 'contribution' model. The 'load' model records the weighting of activities undertaken by individual members of staff; the 'contribution' model is the means by which funds are allocated to academic units. The 'contribution' model is informed by the 'load' model in determining the volume of activity for which each academic unit is to be funded.
Abstract:
This paper discusses the use of a Model developed by Aston Business School to record the work load of its academic staff. By developing a database to register annual activity in all areas of teaching, administration and research the School has created a flexible tool which can be used for facilitating both day-to-day managerial and longer term strategic decisions. This paper gives a brief outline of the Model and discusses the factors which were taken into account when setting it up. Particular attention is paid to the uses made of the Model and the problems encountered in developing it. The paper concludes with an appraisal of the Model's impact and of additional developments which are currently being considered. Aston Business School has had a Load Model in some form for many years. The Model has, however, been refined over the past five years, so that it has developed into a form which can be used for a far greater number of purposes within the School. The Model is coordinated by a small group of academic and administrative staff, chaired by the Head of the School. This group is responsible for the annual cycle of collecting and inputting data, validating returns, carrying out analyses of the raw data, and presenting the material to different sections of the School. The authors of this paper are members of this steering group.
Abstract:
A new method is presented to determine an accurate eigendecomposition of difficult low temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method is capable of achieving complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case of the decomposition of ethane at 300 K from a microcanonical initial population with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. The fact that quadruple precision (16-byte) arithmetic is required irrespective of the eigensolution method used is demonstrated. (C) 2001 Elsevier Science B.V. All rights reserved.
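The exponential-down model mentioned above specifies downward collisional transition probabilities proportional to exp(−ΔE/α), with upward transitions fixed by detailed balance against the Boltzmann distribution. A minimal sketch of building such a transfer matrix follows; the energy grid, α, kT, and the constant-density-of-states assumption are all illustrative, and the paper's Nesbet-based eigensolver (and its quadruple-precision arithmetic) is not reproduced.

```python
import numpy as np

# energy grid (arbitrary units) and exponential-down parameter
E = np.arange(0.0, 50.0, 1.0)
alpha, kT = 5.0, 2.0                       # illustrative values
boltz = np.exp(-E / kT)                    # Boltzmann weights (constant rho assumed)

dE = E[:, None] - E[None, :]               # dE[f, i] = E_final - E_initial
P = np.where(dE <= 0,
             np.exp(dE / alpha),                                    # downward: exp(-(Ei-Ef)/alpha)
             np.exp(-dE / alpha) * boltz[:, None] / boltz[None, :]) # upward via detailed balance
P /= P.sum(axis=0)                         # normalise each column to a probability
```

The eigendecomposition of the resulting master-equation matrix is where the numerical difficulty arises: at low temperatures its eigenvalues span many orders of magnitude, which is why the paper requires extended-precision arithmetic regardless of eigensolver.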