946 results for "Estimated parameter"


Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 30.00%

Abstract:

In this article we examine an inverse heat convection problem of estimating the unknown parameters of a parameterized variable boundary heat flux. The physical problem is a hydrodynamically developed, thermally developing, three-dimensional steady-state laminar flow of a Newtonian fluid inside a circular sector duct, insulated at the flat walls and subject to an unknown wall heat flux at the curved wall. Results are presented for polynomial and sinusoidal trial functions, and the unknown parameters as well as the surface heat fluxes are determined. Depending on the nature of the flow and on the position of the experimental points, the inverse problem sometimes could not be solved. Therefore, an identification condition is defined to specify when the inverse problem can be solved. Once the parameters have been computed, it is possible to assess the statistical significance of the inverse problem solution. Approximate confidence bounds for the estimated parameters, based on the standard statistical linear procedure, are therefore analyzed and presented.
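The standard linear confidence-bound procedure can be sketched numerically. The two-parameter heat-flux model q(z) = p0 + p1·z, the sensor positions, and the noise level below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Hypothetical linear parameterization of the wall heat flux: q(z) = p0 + p1*z,
# estimated from simulated noisy measurements at 20 sensor positions.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 20)                  # measurement positions
J = np.column_stack([np.ones_like(z), z])      # sensitivity (Jacobian) matrix
p_true = np.array([100.0, 50.0])
y = J @ p_true + rng.normal(0.0, 2.0, z.size)  # noisy observations

# Least-squares estimate and linearized covariance s^2 (J^T J)^-1
p_hat = np.linalg.lstsq(J, y, rcond=None)[0]
s2 = np.sum((y - J @ p_hat) ** 2) / (z.size - p_hat.size)
cov = s2 * np.linalg.inv(J.T @ J)
se = np.sqrt(np.diag(cov))

# Approximate 95% confidence bounds for the estimated parameters
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
```

An identification condition corresponds here to J^T J being invertible and well conditioned: for instance, placing all sensors at the same position makes J^T J singular and the parameters unidentifiable.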

Relevance: 30.00%

Abstract:

Two experiments were conducted to develop and evaluate a model to estimate ME requirements and to determine Gompertz growth parameters for broilers. The first experiment was conducted to determine maintenance energy requirements and the efficiencies of energy utilization for fat and protein deposition. Maintenance ME (MEm) requirements were estimated to be 157.8, 112.1, and 127.2 kcal of ME/kg^0.75 per day for broilers at 13, 23, and 32°C, respectively. Environmental temperature (T) had a quadratic effect on maintenance requirements (MEm = 307.87 - 15.63T + 0.3105T^2; r^2 = 0.93). Energy requirements for fat and protein deposition were estimated to be 13.52 and 12.59 kcal of ME/g, respectively. Based on these coefficients, a model was developed to calculate daily ME requirements: ME = BW^0.75 (307.87 - 15.63T + 0.3105T^2) + 13.52Gf + 12.59Gp. This model considers live BW, the effects of environmental temperature, and fractional fat (Gf) and protein (Gp) deposition. The second experiment was carried out to estimate the growth parameters of Ross broilers and to collect data to evaluate the proposed ME requirement model. Live BW, the empty feather-free carcass, the weight of the feathers, and carcass chemical composition were analyzed until 16 wk of age. Parameters of Gompertz curves for each component were estimated. Males had higher growth potential and a higher capacity to deposit nutrients than females, except for fat deposition. Data on BW and body composition collected in this experiment were fitted to the energy model proposed herein and to the equations described by Emmans (1989) and Chwalibog (1991). The daily ME requirements estimated by the model developed in this study were closer to the ME intake observed in this trial than those of the other models. ©2005 Poultry Science Association, Inc.
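The resulting model can be written directly as a function (BW in kg, T in °C, Gf and Gp in g/day, result in kcal of ME per day); the maintenance term reproduces the reported values at the three temperatures:

```python
def daily_me(bw_kg, temp_c, g_fat, g_protein):
    """Daily ME requirement (kcal/day) from the model in the abstract:
    ME = BW^0.75 (307.87 - 15.63 T + 0.3105 T^2) + 13.52 Gf + 12.59 Gp."""
    maintenance = bw_kg ** 0.75 * (307.87 - 15.63 * temp_c + 0.3105 * temp_c ** 2)
    return maintenance + 13.52 * g_fat + 12.59 * g_protein

# Maintenance term per kg^0.75 at 23 deg C: ~112.6 kcal/day (reported: 112.1)
me_m_23 = daily_me(1.0, 23.0, 0.0, 0.0)
```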

Relevance: 30.00%

Abstract:

Phenotypic data from female Canchim beef cattle were used to obtain estimates of genetic parameters for reproduction and growth traits using a linear animal mixed model. In addition, relationships among animal estimated breeding values (EBVs) for these traits were explored using principal component analysis. The traits studied in female Canchim cattle were age at first calving (AFC), age at second calving (ASC), calving interval (CI), and bodyweight at 420 days of age (BW420). The heritability estimates for AFC, ASC, CI and BW420 were 0.03±0.01, 0.07±0.01, 0.06±0.02, and 0.24±0.02, respectively. The genetic correlations for AFC with ASC, AFC with CI, AFC with BW420, ASC with CI, ASC with BW420, and CI with BW420 were 0.87±0.07, 0.23±0.02, -0.15±0.01, 0.67±0.13, -0.07±0.13, and 0.02±0.14, respectively. Standardised EBVs for AFC, ASC and CI exhibited a high association with the first principal component, whereas the standardised EBV for BW420 was closely associated with the second principal component. The heritability estimates for AFC, ASC and CI suggest that these traits would respond slowly to selection. However, selection response could be enhanced by constructing selection indices based on the principal components. © CSIRO 2013.
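The principal-component step can be sketched as follows; the EBV matrix here is simulated with a shared reproduction signal (AFC, ASC, CI) and a separate growth signal (BW420), so it only mimics the association pattern reported above:

```python
import numpy as np

# Hypothetical standardized EBVs for four traits (AFC, ASC, CI, BW420),
# one row per animal; values are illustrative, not the study's data.
rng = np.random.default_rng(1)
n = 200
latent_repro = rng.normal(size=n)      # shared reproduction signal
latent_growth = rng.normal(size=n)     # separate growth signal
ebv = np.column_stack([
    latent_repro + 0.3 * rng.normal(size=n),   # AFC
    latent_repro + 0.3 * rng.normal(size=n),   # ASC
    latent_repro + 0.3 * rng.normal(size=n),   # CI
    latent_growth + 0.3 * rng.normal(size=n),  # BW420
])
ebv = (ebv - ebv.mean(0)) / ebv.std(0)         # standardize

# Principal components via eigendecomposition of the correlation matrix
corr = np.corrcoef(ebv, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order]                   # columns = PC1, PC2, ...
```

With this structure, AFC/ASC/CI load heavily on the first component and BW420 on the second, the same separation reported for the Canchim EBVs.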

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

Effective population size is an important parameter for the assessment of genetic diversity within a livestock population and its development over time. If pedigree information is not available, linkage disequilibrium (LD) analysis offers an alternative route to estimating effective population size. In this study, 128 individuals of the Swiss Eringer breed were genotyped using the Illumina BovineSNP50 beadchip. We set the bin size at 50 kb for LD analysis, assuming that LD between proximal single nucleotide polymorphism (SNP) pairs reflects distant breeding history, while LD between distal SNP pairs reflects recent history. Recombination rates varied among different regions of the genome. The use of physical distances as an approximation of genetic distances (e.g. setting 1 Mb = 0.01 Morgan) led to an upward bias in LD-based estimates of effective population size for generations beyond 50, while estimates for recent history were unaffected. Correction for the restricted sample size did not substantially affect these results. The LD-based estimate of recent effective population size was in the range of 87-149, whereas the pedigree-based effective population size was 321 individuals. For conservation purposes, which require knowledge of recent history (<50 generations), the approximation assuming a constant recombination rate seemed adequate.
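A common way to turn mean r² at a genetic distance c into an effective-population-size estimate is Sved's (1971) approximation E[r²] ≈ 1/(1 + 4Nc), with distance c corresponding to roughly 1/(2c) generations ago; the study's exact estimator (including its sample-size correction) may differ in detail:

```python
def ne_from_r2(r2, c_morgan):
    """Sved's approximation E[r2] = 1/(1 + 4*N*c)  =>  N = (1/r2 - 1) / (4*c).
    r2 should already be corrected for sample size (e.g. r2_obs - 1/n);
    c_morgan is the genetic distance in Morgans (~1/(2*c) generations ago)."""
    return (1.0 / r2 - 1.0) / (4.0 * c_morgan)

# Example: mean r2 of 0.05 at 0.005 Morgan (~0.5 Mb if 1 Mb = 0.01 Morgan)
ne = ne_from_r2(0.05, 0.005)
```

This is also where the physical-to-genetic distance conversion enters: an error in c propagates directly into the Ne estimate, which is the bias mechanism discussed above.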

Relevance: 30.00%

Abstract:

Under a two-level hierarchical model, suppose that the distribution of the random parameter is known or can be estimated well. Data are generated via a fixed, but unobservable, realization of this parameter. In this paper, we derive the smallest confidence region of the random parameter under a joint Bayesian/frequentist paradigm. On average this optimal region can be much smaller than the corresponding Bayesian highest posterior density region. The new estimation procedure is appealing when one deals with data generated under a highly parallel structure, for example, data from a trial with a large number of clinical centers involved or genome-wide gene-expression data for estimating individual gene- or center-specific parameters simultaneously. The new proposal is illustrated with a typical microarray data set, and its performance is examined via a small simulation study.
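The setting can be illustrated with a minimal two-level Gaussian hierarchy (all values assumed): the Bayesian HPD interval attains its nominal coverage on average over realizations of the random parameter, and it is this average-size baseline that the optimal region improves upon:

```python
import numpy as np

# Two-level hierarchy: theta ~ N(0, tau^2), known; xbar | theta ~ N(theta, sigma^2/n)
rng = np.random.default_rng(2)
tau, sigma, n = 1.0, 2.0, 10
n_sim = 20000

theta = rng.normal(0.0, tau, n_sim)                        # random parameters
xbar = theta + rng.normal(0.0, sigma / np.sqrt(n), n_sim)  # observed means

# Posterior of theta given xbar is Gaussian, so the 95% HPD region is an interval
v = 1.0 / (1.0 / tau**2 + n / sigma**2)   # posterior variance
post_mean = v * (n / sigma**2) * xbar     # posterior mean (shrunken estimate)
half = 1.96 * np.sqrt(v)                  # HPD half-width

coverage = np.mean(np.abs(theta - post_mean) < half)  # ~0.95 averaged over theta
```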

Relevance: 30.00%

Abstract:

Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to its reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behavior, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Transformer modeling is thus not yet a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, the λ-i saturation characteristic, capacitive effects, and the frequency dependency of winding resistance and core loss.
Steady-state excitation, de-energization, and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and transformer parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
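As one small example of estimating a model parameter from sparse test data, a λ-i saturation characteristic can be approximated by a Frolich-type curve λ(i) = i/(a + b·i), which linearizes as i/λ = a + b·i and fits with ordinary least squares. The numbers below are synthetic, not taken from the transformer models described above:

```python
import numpy as np

# Illustrative fit of a Frolich-type saturation characteristic
# lambda(i) = i / (a + b*i), linearized as i/lambda = a + b*i.
a_true, b_true = 2.0, 0.8
i = np.linspace(0.1, 10.0, 25)         # magnetizing current samples (A)
rng = np.random.default_rng(5)
lam = i / (a_true + b_true * i) * (1 + rng.normal(0, 0.01, i.size))  # flux linkage

b_hat, a_hat = np.polyfit(i, i / lam, 1)   # slope = b, intercept = a
lam_sat = 1.0 / b_hat                      # asymptotic (saturation) flux linkage
```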

Relevance: 30.00%

Abstract:

Time series of geocenter coordinates were determined with data of two global navigation satellite systems (GNSSs), namely the U.S. GPS (Global Positioning System) and the Russian GLONASS (Global’naya Nawigatsionnaya Sputnikowaya Sistema). The data was recorded in the years 2008–2011 by a global network of 92 permanently observing GPS/GLONASS receivers. Two types of daily solutions were generated independently for each GNSS, one including the estimation of geocenter coordinates and one without these parameters. A fair agreement for GPS and GLONASS was found in the geocenter x- and y-coordinate series. Our tests, however, clearly reveal artifacts in the z-component determined with the GLONASS data. Large periodic excursions in the GLONASS geocenter z-coordinates of about 40 cm peak-to-peak are related to the maximum elevation angles of the Sun above/below the orbital planes of the satellite system and thus have a period of about 4 months (third of a year). A detailed analysis revealed that the artifacts are almost uniquely governed by the differences of the estimates of direct solar radiation pressure (SRP) in the two solution series (with and without geocenter estimation). A simple formula is derived, describing the relation between the geocenter z-coordinate and the corresponding parameter of the SRP. The effect can be explained by first-order perturbation theory of celestial mechanics. The theory also predicts a heavy impact on the GNSS-derived geocenter if once-per-revolution SRP parameters are estimated in the direction of the satellite’s solar panel axis. Specific experiments using GPS observations revealed that this is indeed the case. Although the main focus of this article is on GNSS, the theory developed is applicable to all satellite observing techniques. We applied the theory to satellite laser ranging (SLR) solutions using LAGEOS. It turns out that the correlation between geocenter and SRP parameters is not a critical issue for the SLR solutions. 
The reasons are threefold: (1) the direct SRP is about a factor of 30–40 smaller for typical geodetic SLR satellites than for GNSS satellites, so that in most cases one need not solve for SRP parameters at all (ruling out the correlation between these parameters and the geocenter coordinates); (2) the orbital arc length of 7 days (typically used in SLR analysis) contains more than 50 revolutions of the LAGEOS satellites, as compared with about two revolutions of GNSS satellites for the daily arcs used in GNSS analysis; and (3) the orbit geometry is not as critical for LAGEOS as for GNSS satellites, because the elevation angle of the Sun w.r.t. the orbital plane usually changes significantly over 7 days.

Relevance: 30.00%

Abstract:

Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects. If there exists a natural hierarchy in the data, multilevel analysis is an appropriate tool for the analysis. Two examples are measurements on identical twins, and studies of symmetrical organs or appendages, as in ophthalmic studies. Although this type of matching appears ideal for the purposes of comparison, analysis of the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results. This paper will explore the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data will be generated using the Beta-Binomial method with varying degrees of correlation between the lower-level observations. The data will be analyzed using the multilevel software package MLwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the estimated correlations, using multilevel analysis, will be used to examine the accuracy of this technique in analyzing this type of data.
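The Beta-Binomial generation step can be sketched as follows. For a Beta(a, b) mixing distribution, the pairwise intra-cluster correlation of the binary outcomes is ρ = 1/(a + b + 1), so a and b can be chosen to hit a target ICC; a cluster size of 2 mimics the twin-pair setting (all numeric values illustrative):

```python
import numpy as np

def beta_binomial_clusters(n_clusters, cluster_size, p, rho, rng):
    """Successes per cluster, with marginal probability p and ICC rho.
    Chooses Beta(a, b) so that a/(a + b) = p and 1/(a + b + 1) = rho."""
    s = 1.0 / rho - 1.0                              # a + b
    pi = rng.beta(p * s, (1.0 - p) * s, n_clusters)  # cluster-level probabilities
    return rng.binomial(cluster_size, pi)

rng = np.random.default_rng(3)
y = beta_binomial_clusters(5000, 2, p=0.3, rho=0.2, rng=rng)

# Moment check for pairs: rho = (P(both = 1) - p^2) / (p * (1 - p))
p_hat = y.mean() / 2
rho_hat = (np.mean(y == 2) - p_hat**2) / (p_hat * (1 - p_hat))
```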

Relevance: 30.00%

Abstract:

We combine phytoplankton occurrence data for 119 species from the continuous plankton recorder with climatological environmental variables in the North Atlantic to obtain ecological response functions of each species using the MaxEnt statistical method. These response functions describe how the probability of occurrence of each species changes as a function of environmental conditions and can be reduced to a simple description of phytoplankton realized niches using the mean and standard deviation of each environmental variable, weighted by its response function. Although there was substantial variation in the realized niche among species within groups, the envelope of the realized niches of North Atlantic diatoms and dinoflagellates are mostly separate in niche space.
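The niche summary described above (mean and standard deviation of an environmental variable, weighted by the response function) reduces to a weighted moment computation. The temperature grid and the Gaussian-shaped response below are hypothetical, not CPR-derived:

```python
import numpy as np

sst = np.linspace(0.0, 25.0, 101)   # sea-surface temperature grid (deg C)
# Hypothetical MaxEnt-style response: occurrence probability vs temperature
response = np.exp(-0.5 * ((sst - 12.0) / 3.0) ** 2)

# Realized niche: response-weighted mean and s.d. of the variable
w = response / response.sum()
niche_mean = np.sum(w * sst)
niche_sd = np.sqrt(np.sum(w * (sst - niche_mean) ** 2))
```

Repeating this for each environmental variable gives the coordinates of one species in niche space; the diatom/dinoflagellate separation is then a statement about the envelopes of these points.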

Relevance: 30.00%

Abstract:

Stochastic model updating must be considered for quantifying the uncertainties inherently existing in real-world engineering structures. By this means the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly with respect to theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation, and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions over all samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method offers similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
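The decomposition can be illustrated on a one-parameter toy problem: a single stiffness k of a 1-DOF oscillator stands in for the FE model, a quadratic response surface replaces it, and each Monte Carlo sample of the measured natural frequency yields one deterministic inverse solution. All numbers are assumptions for illustration:

```python
import numpy as np

m = 2.0  # mass (kg)

def freq(k):
    """Natural frequency (Hz) of a 1-DOF oscillator; stands in for the FE model."""
    return np.sqrt(k / m) / (2.0 * np.pi)

# (1) Quadratic response surface fitted to a few "FE" evaluations of k
k_design = np.linspace(500.0, 1500.0, 9)
coef = np.polyfit(k_design, freq(k_design), 2)

# (2) Monte Carlo samples from an assumed measured-frequency distribution
rng = np.random.default_rng(4)
f_samples = rng.normal(freq(1000.0), 0.05, 5000)

# (3) One deterministic inverse problem per sample: root of surface(k) - f
k_est = []
for f in f_samples:
    roots = np.roots(coef - np.array([0.0, 0.0, f]))
    roots = roots[np.isreal(roots)].real
    roots = roots[(roots > 500.0) & (roots < 1500.0)]
    if roots.size:
        k_est.append(roots[0])
k_est = np.asarray(k_est)

# Statistical summary of the parameter over all samples
k_mean, k_var = k_est.mean(), k_est.var()
```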

Relevance: 30.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06