914 results for "estimation and filtering"


Relevance: 100.00%

Abstract:

The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle and help beef producers meet market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for energy needs for maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Xcellon, Huntsville, AL, USA). The DNA and fat deposition parameter coefficients were found to be important determinants of model function: the DNA parameter coefficients for days on feed >100, and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients from 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient did indicate a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to help the beef industry meet market specifications.
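
A minimal sketch of the model structure this abstract describes: a logistic DNA growth equation (hyperplasia) driving first-order fat-deposition differential equations (hypertrophy) for the four depots. All parameter values, the net-energy term and the coupling below are illustrative placeholders, not the calibrated Davis Growth Model coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_DNA, R_DNA = 1.0, 0.05          # hypothetical logistic capacity and rate
K_FAT = {"intermuscular": 0.020,  # hypothetical first-order rate constants
         "intramuscular": 0.008,
         "subcutaneous": 0.015,
         "visceral": 0.010}

def rhs(t, y):
    dna, *fats = y
    d_dna = R_DNA * dna * (1.0 - dna / K_DNA)     # hyperplasia (logistic)
    net_fat_energy = 0.12 * dna                   # stand-in for net energy
    d_fats = [k * net_fat_energy - 0.001 * f      # hypertrophy (1st order)
              for k, f in zip(K_FAT.values(), fats)]
    return [d_dna] + d_fats

sol = solve_ivp(rhs, (0, 200), [0.05, 1.0, 0.2, 1.5, 1.0])
print(sol.y[:, -1])   # DNA and depot fat states after 200 days on feed
```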

Relevance: 100.00%

Abstract:

Common diseases such as endometriosis (ED), Alzheimer's disease (AD) and multiple sclerosis (MS) account for a significant proportion of the health care burden in many countries. Genome-wide association studies (GWASs) for these diseases have identified a number of individual genetic variants contributing to disease risk. However, the effect size of most variants is small, and collectively the known variants explain only a small proportion of the estimated heritability. We used a linear mixed model to fit all single nucleotide polymorphisms (SNPs) simultaneously and estimated genetic variance on the liability scale using SNPs from GWASs in unrelated individuals for these three diseases. For each disease, case and control samples were not all genotyped in the same laboratory. We demonstrate that a careful analysis can obtain robust estimates, but also that insufficient quality control (QC) of SNPs can lead to spurious results and that overly stringent QC is likely to remove real genetic signals. Our estimates show that common SNPs on commercially available genotyping chips capture significant variation contributing to liability for all three diseases. The estimated proportion of total variation tagged by all SNPs was 0.26 (SE 0.04) for ED, 0.24 (SE 0.03) for AD and 0.30 (SE 0.03) for MS. Further, we partitioned the explained genetic variance into five minor allele frequency (MAF) categories, by chromosome, and by gene annotation. We provide strong evidence that a substantial proportion of variation in liability is explained by common SNPs, and thereby give insights into the genetic architecture of these diseases.
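
A hedged sketch of the observed-to-liability-scale conversion that GREML-type case-control analyses of this kind rely on; the prevalence, case fraction and observed-scale estimate below are illustrative numbers, not values from the study.

```python
from scipy.stats import norm

def liability_scale(h2_obs, K, P):
    """Convert an observed-scale SNP-heritability estimate to the
    liability scale. K = population prevalence, P = sample case fraction."""
    t = norm.ppf(1.0 - K)   # liability threshold
    z = norm.pdf(t)         # standard normal density at the threshold
    return h2_obs * K * (1 - K) / z**2 * K * (1 - K) / (P * (1 - P))

print(liability_scale(h2_obs=0.40, K=0.05, P=0.50))  # hypothetical inputs
```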

Relevance: 100.00%

Abstract:

There is an increasing need to compare the results obtained with different methods of estimating tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) stand (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)-Scots pine stand (Mature-Stand) located in southern Finland (61°50′ N, 24°22′ E). In particular, the results of different estimation methods were compared to assess their reliability and the suitability of their applications. For the trees in Mature-Stand, annual stem biomass increment fluctuated following a sigmoid equation, and the fitted curves reached a maximum level (from about 1 kg/yr for understorey spruce to 7 kg/yr for dominant pine) when the trees were 100 years old. Tree biomass was estimated to be about 70 Mg/ha in Young-Stand and about 220 Mg/ha in Mature-Stand. In the region (58.00–62.13° N, 14–34° E, ≤300 m a.s.l.) surrounding the study stands, tree biomass accumulation in Norway spruce and Scots pine stands followed a sigmoid equation with stand age, with a maximum of 230 Mg/ha at the age of 140 years. In Mature-Stand, lichen biomass on the trees was 1.63 Mg/ha, with more than half of the biomass occurring on dead branches, and the standing crop of litter lichen on the ground was about 0.09 Mg/ha. There were substantial differences among the results estimated by the different methods in these stands. These results imply that possible estimation error should be taken into account when calculating tree biomass in a stand with an indirect approach.
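
To make the sigmoid biomass-age relationship concrete, here is a sketch of fitting such a curve with an upper asymptote near the reported 230 Mg/ha; the data points are fabricated placeholders, and only the curve form follows the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(age, b_max, k, t_mid):
    # Logistic curve: b_max is the asymptote, t_mid the inflection age.
    return b_max / (1.0 + np.exp(-k * (age - t_mid)))

age = np.array([20, 40, 60, 80, 100, 120, 140])        # stand age, years
biomass = np.array([15, 60, 120, 170, 200, 220, 228])  # Mg/ha, hypothetical

params, _ = curve_fit(sigmoid, age, biomass, p0=[230, 0.05, 60])
print(params)  # fitted asymptote (Mg/ha), rate, and inflection age
```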

Relevance: 100.00%

Abstract:

By deriving the equations for an error analysis of modeling inaccuracies for the combined estimation and control problem, it is shown that the optimum estimation error is orthogonal to the actual suboptimum estimate.
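
In the standard linear MMSE notation (the symbols below are assumed, not taken from the paper), the stated result is the orthogonality condition:

```latex
% x: true state, \hat{x}^{*}: optimum (MMSE) estimate,
% \hat{x}: the actual suboptimum estimate from the mismatched model.
\mathbb{E}\!\left[(x - \hat{x}^{*})\,\hat{x}^{\mathsf{T}}\right] = 0
```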

Relevance: 100.00%

Abstract:

Retrospective identification of fire severity can improve our understanding of fire behaviour and ecological responses. However, burnt area records for many ecosystems are non-existent or incomplete, and those that are documented rarely include fire severity data. Retrospective analysis using satellite remote sensing data captured over extended periods can provide better estimates of fire history. This study aimed to assess the relationship between the Landsat differenced normalised burn ratio (dNBR) and the field-measured geometrically structured composite burn index (GeoCBI) for retrospective analysis of fire severity over a 23-year period in sclerophyll woodland and heath ecosystems. Further, we assessed whether dNBR fire severity classification accuracy was reduced by vegetation regrowth as the time between ignition and image capture increased, by evaluating four Landsat images captured at increasing time since ignition of the most recent burnt area. We found significant linear GeoCBI–dNBR relationships (R² = 0.81 and 0.71) for data collected across ecosystems and for Eucalyptus racemosa ecosystems, respectively. Non-significant and weak linear relationships were observed for heath and Melaleuca quinquenervia ecosystems, suggesting that the GeoCBI–dNBR relationship was not appropriate for fire severity classification in those specific ecosystems. Therefore, retrospective fire severity was classified across ecosystems. Landsat images captured within ~30 days after fire events were minimally affected by post-burn vegetation regrowth.
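
The index pair at the core of the study can be computed directly from Landsat reflectance bands. A minimal sketch, with placeholder arrays standing in for the NIR and SWIR2 reflectance grids:

```python
import numpy as np

def nbr(nir, swir2):
    # Normalised burn ratio from near-infrared and shortwave-infrared bands.
    return (nir - swir2) / (nir + swir2)

nir_pre, swir_pre = np.array([[0.35]]), np.array([[0.12]])    # hypothetical
nir_post, swir_post = np.array([[0.18]]), np.array([[0.24]])  # hypothetical

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
print(dnbr)  # higher dNBR values indicate more severe burns
```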

Relevance: 100.00%

Abstract:

The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
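
For the simplest discrete case, the Bernoulli model class, the exponential sum in the NML definition can be grouped by success count and evaluated directly. A generic illustration of the definition, not the dissertation's efficient algorithms:

```python
from math import comb, log

def bernoulli_nml_complexity(n):
    """log of the NML normalizing sum C_n over all 2^n binary sequences,
    grouped by the number of ones k (0**0 == 1 in Python, as needed)."""
    total = sum(comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
                for k in range(n + 1))
    return log(total)

def stochastic_complexity(k, n):
    """-log P_NML of a sequence with k ones out of n observations."""
    max_loglik = k * log(k / n) + (n - k) * log((n - k) / n)
    return -(max_loglik - bernoulli_nml_complexity(n))

print(stochastic_complexity(30, 100))
```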

Relevance: 100.00%

Abstract:

A comparatively simple and rapid method for the identification, estimation and preparation of fatty acids has been developed, using reversed-phase circular paper chromatography. The method is also suitable for the analysis of "critical pairs" of fatty acids. Further, when used at a higher temperature, the method is more sensitive in revealing the presence of even traces of higher fatty acids in the seeds of Adenanthera pavonina.

Relevance: 100.00%

Abstract:

This study contributes to the neglect effect literature by looking at relative trading volume in terms of value. The results for the Swedish market show a significant positive relationship between the accuracy of estimation and relative trading volume. Market capitalisation and analyst coverage have been used as proxies for neglect in prior studies. These measures, however, do not take into account the effort analysts put in when estimating corporate pre-tax profits. I also find evidence that the industry of the firm influences the accuracy of estimation. In addition, supporting earlier findings, loss-making firms are associated with larger forecasting errors. Further, I find that the average forecast error in Sweden increased in the year 2000.
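
A sketch of the kind of specification such a study estimates: forecast error regressed on relative trading volume and a loss-firm indicator. The data are fabricated placeholders; only the sign pattern mirrors the reported findings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
rel_volume = rng.uniform(0.0, 1.0, n)   # relative trading volume (value)
loss_firm = rng.integers(0, 2, n)       # 1 if the firm reports a loss
error = 0.30 - 0.20 * rel_volume + 0.15 * loss_firm + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), rel_volume, loss_firm])  # OLS design matrix
beta, *_ = np.linalg.lstsq(X, error, rcond=None)
print(beta)  # expect a negative volume coefficient: more traded, less error
```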

Relevance: 100.00%

Abstract:

The impulse response of a typical wireless multipath channel can be modeled as a tapped delay line filter whose non-zero components are sparse relative to the channel delay spread. In this paper, a novel method of estimating such sparse multipath fading channels for OFDM systems is explored. In particular, Sparse Bayesian Learning (SBL) techniques are applied to jointly estimate the sparse channel and its second-order statistics, and a new Bayesian Cramér-Rao bound is derived for the SBL algorithm. Further, in the context of OFDM channel estimation, an enhancement to the SBL algorithm is proposed, which uses an Expectation Maximization (EM) framework to jointly estimate the sparse channel, the unknown data symbols and the second-order statistics of the channel. The EM-SBL algorithm is able to recover the support as well as the channel taps more efficiently, and/or using fewer pilot symbols, than the SBL algorithm. To further improve the performance of EM-SBL, a threshold-based pruning of the estimated second-order statistics that are input to the algorithm is proposed, and its mean square error and symbol error rate performance are illustrated through Monte-Carlo simulations. The algorithms proposed in this paper are thus capable of obtaining efficient sparse channel estimates even with a small number of pilots.
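
A bare-bones SBL (EM) iteration for the pilot-based model y = Phi @ h + noise, in the spirit of the estimator described above; the dimensions, pilot matrix and noise level are illustrative, and this is a sketch rather than the paper's EM-SBL algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pilots, n_taps = 32, 64
Phi = rng.standard_normal((n_pilots, n_taps)) / np.sqrt(n_pilots)
h_true = np.zeros(n_taps)
h_true[[3, 17, 40]] = [1.0, -0.7, 0.5]            # sparse channel taps
sigma2 = 1e-3
y = Phi @ h_true + np.sqrt(sigma2) * rng.standard_normal(n_pilots)

gamma = np.ones(n_taps)                           # per-tap variance hyperparams
for _ in range(50):
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ Phi.T @ y / sigma2               # posterior mean of taps
    gamma = mu**2 + np.diag(Sigma)                # EM update of variances

print(np.flatnonzero(gamma > 1e-2))               # recovered support
```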

Relevance: 100.00%

Abstract:

An efficient strategy for identification of delamination in composite beams and connected structures is presented. A spectral finite-element model consisting of a damaged spectral element is used for model-based prediction of the damaged structural response in the frequency domain. A genetic algorithm (GA) specially tailored for damage identification is derived and is integrated with the finite-element code for automation. For best application of the GA, the sensitivities of various objective functions with respect to the delamination parameters are studied and important conclusions are presented. Model-based simulations of increasing complexity illustrate some of the attractive features of the strategy in terms of accuracy as well as computational cost. This shows the possibility of using such strategies in the development of smart structural health monitoring software and systems.
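
A toy genetic algorithm in the spirit of this strategy: minimize a response-mismatch objective over two delamination parameters (location, length). The objective here is a synthetic stand-in for the spectral finite-element frequency-response mismatch used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
true_params = np.array([0.45, 0.08])   # hypothetical location (m), length (m)

def objective(p):
    # Synthetic stand-in for the model-vs-measurement response mismatch.
    return np.sum((p - true_params) ** 2)

pop = rng.uniform([0.0, 0.01], [1.0, 0.2], size=(40, 2))  # initial population
for _ in range(100):
    fitness = np.array([objective(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]                   # selection
    children = (parents[rng.integers(0, 20, (20, 2)), [0, 1]] # uniform crossover
                + rng.normal(0, 0.005, (20, 2)))              # Gaussian mutation
    pop = np.vstack([parents, children])

print(pop[np.argmin([objective(p) for p in pop])])  # best estimate found
```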

Relevance: 100.00%

Abstract:

An extended Kalman filter based generalized state estimation approach is presented in this paper for accurately estimating the states of incoming high-speed targets such as ballistic missiles. A key advantage of this nine-state problem formulation is that it is generic: it can capture spiraling as well as pure ballistic motion of targets without any change to the target model or the tuning parameters. A new nonlinear model predictive zero-effort-miss based guidance algorithm is also presented, in which both the zero-effort miss and the time-to-go are predicted more accurately by first propagating the nonlinear target model (with estimated states) and the zero-effort interceptor model simultaneously. This information is then used for computing the necessary lateral acceleration. Extensive six-degrees-of-freedom simulation experiments, which include noisy seeker measurements, a nonlinear dynamic inversion based autopilot for the interceptor along with appropriate actuator and sensor models, and magnitude and rate saturation limits for the fin deflections, show that near-zero miss distance (i.e., hit-to-kill level performance) can be obtained when these two new techniques are applied together. Comparison studies with augmented proportional navigation based guidance show that the proposed model predictive guidance also leads to substantial savings in control energy.
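
The estimator rests on the standard extended Kalman filter predict/update cycle, sketched generically below; f, h and their Jacobians F, H are placeholders, and the paper's nine-state spiraling-target model is not reproduced here.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # Predict through the nonlinear dynamics.
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update with the measurement z.
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new

# Example: 2-state constant-velocity target, position-only measurement.
dt = 0.1
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
F = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: np.array([x[0]])
H = lambda x: np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, np.array([1.0]), f, F, h, H,
                0.01 * np.eye(2), np.array([[0.1]]))
print(x)
```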

Relevance: 100.00%

Abstract:

Periodic estimation, monitoring and reporting of the area under forest and plantation types and of afforestation rates are critical to forest and biodiversity conservation, sustainable forest management, and meeting international commitments. This article assesses the adequacy of the current monitoring and reporting approach adopted in India in the context of new challenges of conservation and reporting to international conventions and agencies. The analysis shows that the current mode of monitoring and reporting of forest area is inadequate to meet national and international requirements. India could potentially be over-reporting the area under forests by including many non-forest tree categories such as commercial plantations of coconut, cashew, coffee and rubber, and fruit orchards. India may also be under-reporting deforestation by reporting only gross forest area at the state and national levels. There is a need for monitoring and reporting of forest cover, deforestation and afforestation rates according to categories such as (i) natural/primary forest, (ii) secondary/degraded forest, (iii) forest plantations, (iv) commercial plantations, (v) fruit orchards and (vi) scattered trees.

Relevance: 100.00%

Abstract:

Acoustic feature based speech (syllable) rate estimation and syllable nuclei detection are important problems in automatic speech recognition (ASR), computer assisted language learning (CALL) and fluency analysis. A typical solution to both problems consists of two stages. The first stage computes a short-time feature contour such that most of the peaks of the contour correspond to syllabic nuclei. In the second stage, the peaks corresponding to the syllable nuclei are detected. In this work, instead of peak detection, we perform a mode-shape classification, formulated as a supervised binary classification problem: mode-shapes representing syllabic nuclei form one class and the remaining mode-shapes the other. We use the temporal correlation and selected sub-band correlation (TCSSBC) feature contour, and the mode-shapes in the TCSSBC feature contour are converted into a set of feature vectors using an interpolation technique. A support vector machine classifier is used for the classification. Experiments are performed separately on the Switchboard, TIMIT and CTIMIT corpora in a five-fold cross-validation setup. The average correlation coefficients for syllable rate estimation are 0.6761, 0.6928 and 0.3604 for the three corpora respectively, outperforming the best of the existing peak detection techniques. Similarly, the average F-scores (syllable level) for syllable nuclei detection are 0.8917, 0.8200 and 0.7637 for the three corpora respectively.
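
A sketch of the mode-shape classification step: each variable-length segment of the feature contour is resampled to a fixed-length vector by interpolation and fed to an SVM. The TCSSBC contour itself is not computed here; the segments and labels are random placeholders.

```python
import numpy as np
from scipy.interpolate import interp1d
from sklearn.svm import SVC

def to_fixed_length(segment, n_points=20):
    """Resample one mode-shape segment to n_points via linear interpolation."""
    x_old = np.linspace(0.0, 1.0, len(segment))
    x_new = np.linspace(0.0, 1.0, n_points)
    return interp1d(x_old, segment, kind="linear")(x_new)

rng = np.random.default_rng(3)
segments = [rng.random(rng.integers(8, 40)) for _ in range(100)]
labels = rng.integers(0, 2, 100)      # 1 = syllable nucleus, 0 = other
X = np.array([to_fixed_length(s) for s in segments])

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```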