946 results for Estimated parameter


Relevance:

30.00%

Publisher:

Abstract:

The thermal expansion of magnesium oxide has been measured below room temperature, from 140 K to 284.5 K, using an interferometric method. The accuracy of measurement is better than 3% in the temperature range studied. These results agree well with Durand's, though they are consistently higher by 2 or 3% over most of the range, for the most part within the estimated experimental error. The Grüneisen parameter remains constant at about 1.51 over the present experimental range, but an isolated measurement of Durand's at 85 K suggests that at lower temperatures it rises quite sharply above this value. This possibility is therefore investigated theoretically. With a non-central force model to represent MgO, γ(−3) and γ(2) are calculated, and it is found that γ(−3) > γ(2), again suggesting that the Grüneisen parameter increases with falling temperature. Of the two reported experimental values for the infra-red absorption frequency, correlation with the heat capacity strongly indicates a wavelength of 25.26 μm rather than 17.3 μm. Thermal expansion measurements at still lower temperatures must be carried out to confirm definitively the rise in the Grüneisen parameter.
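For reference, the thermodynamic Grüneisen parameter discussed above has the standard macroscopic definition, and γ(n) denotes the usual Barron moment average of the mode Grüneisen parameters (both are textbook formulas, not spelled out in the abstract):

```latex
\gamma = \frac{\alpha V K_T}{C_V},
\qquad
\gamma(n) = \frac{\sum_i \gamma_i\, \omega_i^{\,n}}{\sum_i \omega_i^{\,n}}
```

where α is the volumetric thermal-expansion coefficient, V the molar volume, K_T the isothermal bulk modulus and C_V the heat capacity at constant volume. Since γ(−3) governs the low-temperature limit of γ(T), the finding γ(−3) > γ(2) is what suggests a rise in γ as T falls.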

Relevance:

30.00%

Publisher:

Abstract:

Samples of 11,000 King George whiting (Sillaginodes punctata) from the South Australian commercial and recreational catch, supplemented by research samples, were aged from otoliths. Samples were analyzed from three coastal regions and by sex. Most sampling was undertaken at fish processing plants, from which only fish longer than the legal minimum length were obtained. A left-truncated normal distribution of lengths at monthly age was therefore employed as the model likelihood. Mean length at monthly age was described by a generalized von Bertalanffy formula with sinusoidal seasonality, and the likelihood standard deviation was modeled to vary allometrically with mean length. A range of related formulas (with 6 to 8 parameters) for seasonal mean length at age were compared. In addition to likelihood ratio tests of relative fit, the model selection criteria were a minimal occurrence of high uncertainties (>20% SE), of high correlations (>0.9, >0.95, and >0.99), and of parameter estimates at their biological limits; we also sought a model with a minimal number of parameters. A generalized von Bertalanffy formula with t0 fixed at 0 was chosen. The truncated likelihood alleviated the overestimation bias of mean length at age that would otherwise accrue from catch samples being restricted to legal sizes.
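A minimal sketch of the left-truncated likelihood described above, assuming a Somers-type sinusoidal seasonal term; the parameter names (Linf, K, C, ts, a, b) are generic growth-modelling conventions, not the paper's exact notation, and t0 is fixed at 0 as in the selected model:

```python
import numpy as np
from scipy.stats import norm

def seasonal_vb_mean(t, Linf, K, C, ts, t0=0.0):
    # Somers-type seasonal von Bertalanffy mean length at age t (years)
    S = lambda x: (C * K / (2.0 * np.pi)) * np.sin(2.0 * np.pi * (x - ts))
    return Linf * (1.0 - np.exp(-K * (t - t0) - S(t) + S(t0)))

def loglik_left_truncated(length, age, lml, Linf, K, C, ts, a, b):
    mu = seasonal_vb_mean(age, Linf, K, C, ts)
    sd = a * mu**b                      # SD varies allometrically with mean length
    # Truncated-normal density: phi((L - mu)/sd) / P(L > LML), for L >= LML
    return np.sum(norm.logpdf(length, mu, sd) - norm.logsf(lml, mu, sd))
```

Dividing by the upper-tail probability P(L > LML) is what removes the overestimation bias: without it, the model would treat the legal-size-censored catch as if small fish had simply never existed.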

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear non-Gaussian state-space models arise in numerous applications in control and signal processing. Sequential Monte Carlo (SMC) methods, also known as Particle Filters, provide very good numerical approximations to the associated optimal state estimation problems. However, in many scenarios, the state-space model of interest also depends on unknown static parameters that need to be estimated from the data. In this context, standard SMC methods fail and it is necessary to rely on more sophisticated algorithms. The aim of this paper is to present a comprehensive overview of SMC methods that have been proposed to perform static parameter estimation in general state-space models. We discuss the advantages and limitations of these methods. © 2009 IFAC.
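For orientation, the baseline SMC algorithm that the surveyed static-parameter methods extend is the bootstrap particle filter. A minimal sketch for a scalar-state model follows; the transition sampler and observation log-density are supplied by the user's state-space model, and the function names are generic assumptions:

```python
import numpy as np

def bootstrap_filter(y, sample_init, sample_trans, obs_logpdf,
                     n_particles=1000, seed=None):
    rng = np.random.default_rng(seed)
    x = sample_init(n_particles, rng)          # initial particles x_0^(i)
    loglik, means = 0.0, []
    for yt in y:
        x = sample_trans(x, rng)               # propagate x_t ~ f(. | x_{t-1})
        logw = obs_logpdf(yt, x)               # weight by g(y_t | x_t)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())         # log p(y_t | y_{1:t-1}) estimate
        w /= w.sum()
        means.append(float(w @ x))             # filtered posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling
    return np.array(means), loglik
```

When the model also depends on an unknown static parameter θ, naively appending θ to the state vector degenerates after resampling (θ-particles are never rejuvenated); the methods reviewed in the paper are precisely the remedies for this failure.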

Relevance:

30.00%

Publisher:

Abstract:

This paper focuses on the PSpice model of the SiC JFET element inside a SiCED cascode device. The device model parameters are extracted from the I-V and C-V characterization curves. To validate the model, an inductive test-rig circuit is designed and tested, and the switching loss is estimated using both an oscilloscope and a calorimeter. These measurements are found to be in good agreement with the simulated results.
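As an illustration of the I-V extraction step only, here is a hedged sketch that fits a simple square-law JFET saturation model to characterization points; the paper's PSpice SiC-JFET model and its parameter set are richer than this, and the sweep values below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def id_sat(vgs, beta, vth):
    # Square-law saturation current: Id = beta*(Vgs - Vth)^2 above threshold,
    # 0 below pinch-off
    return beta * np.clip(vgs - vth, 0.0, None) ** 2

vgs = np.linspace(-10.0, 0.0, 21)          # hypothetical gate-source sweep (V)
id_meas = id_sat(vgs, 0.4, -6.5)           # stand-in for measured I-V data
(beta_hat, vth_hat), _ = curve_fit(id_sat, vgs, id_meas, p0=(1.0, -5.0))
```

The same pattern (model function plus nonlinear least squares against the measured curve) extends to the C-V extraction, with a junction-capacitance model in place of id_sat.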

Relevance:

30.00%

Publisher:

Abstract:

Reinforcement techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estimate the parameters of a dialogue policy which selects the system's responses based on the inferred dialogue state. However, the inference of the dialogue state itself depends on a dialogue model which describes the expected behaviour of a user when interacting with the system. Ideally, the parameters of this dialogue model should also be optimised to maximise the expected cumulative reward. This article presents two novel reinforcement algorithms for learning the parameters of a dialogue model. First, the Natural Belief Critic algorithm is designed to optimise the model parameters while the policy is kept fixed. This algorithm is suitable, for example, in systems using a handcrafted policy, perhaps prescribed by other design considerations. Second, the Natural Actor and Belief Critic algorithm jointly optimises both the model and the policy parameters. The algorithms are evaluated on a statistical dialogue system modelled as a Partially Observable Markov Decision Process in a tourist information domain. The evaluation is performed with a user simulator and with real users. The experiments indicate that model parameters estimated to maximise the expected reward function provide improved performance compared to the baseline handcrafted parameters. © 2011 Elsevier Ltd. All rights reserved.
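Reward-driven parameter estimation of this kind has the following generic episodic policy-gradient shape. This is a sketch only, not the paper's algorithm: the Natural Belief Critic additionally uses the natural gradient (the plain gradient premultiplied by the inverse Fisher information), which is omitted here, and grad_logp is a user-supplied assumption:

```python
import numpy as np

def policy_gradient_step(theta, episodes, grad_logp, lr=0.01):
    # episodes: list of (trajectory, cumulative_reward) pairs collected under theta
    # grad_logp(theta, traj): gradient of the log-probability of traj under theta
    g = np.zeros_like(theta)
    for traj, reward in episodes:
        g += reward * grad_logp(theta, traj)   # REINFORCE-style score estimator
    return theta + lr * g / max(len(episodes), 1)
```

The key point carried by the abstract is that theta here can parametrise the dialogue (belief) model itself, not only the response-selection policy.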

Relevance:

30.00%

Publisher:

Abstract:

Heritability and genetic and phenotypic correlations were estimated for juvenile growth traits of the Pacific abalone Haliotis discus hannai Ino. The estimates were calculated from shell length and shell width measurements on progeny from 12 half-sib families and 36 full-sib families, obtained by artificial fertilization with three females mated to each male. The measurements were taken at 10, 20 and 30 d after fertilization. Heritability estimates based on the sire component ranged from 0.23 to 0.36 for shell length and from 0.21 to 0.32 for shell width. Heritability estimates from the dam component were larger than those from the sire component at all three ages, indicating the presence of maternal effects, non-additive genetic effects and common environmental effects. Phenotypic correlations were significant at all three ages (P < 0.05), with values of 0.92, 0.93 and 0.92, respectively. Genetic correlations from the paternal half-sib analysis were highly positive at all three ages, with values of 0.50, 0.78 and 0.81, respectively. The results suggest that selective breeding is an effective approach to improving growth traits of Pacific abalone stocks.
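The sire- and dam-component heritabilities quoted above come from the standard half-sib variance decomposition (textbook quantitative-genetics identities, not stated in the abstract):

```latex
h^2_{\text{sire}} = \frac{4\,\sigma^2_s}{\sigma^2_s + \sigma^2_d + \sigma^2_e},
\qquad
h^2_{\text{dam}} = \frac{4\,\sigma^2_d}{\sigma^2_s + \sigma^2_d + \sigma^2_e}
```

where σ²_s, σ²_d and σ²_e are the sire, dam and residual variance components, and the factor 4 reflects that paternal half-sibs share one quarter of the additive genetic variance. Because the dam component also absorbs maternal, dominance and common-environment variance, h²_dam exceeding h²_sire is the classic signature of those effects, as the abstract notes.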

Relevance:

30.00%

Publisher:

Abstract:

Time-domain modelling of single-reed woodwind instruments usually involves a lumped model of the excitation mechanism, whose parameters have to be estimated for use in numerical simulations. Several attempts have been made to estimate these parameters, including observations of the mechanics of isolated reeds, measurements under artificial or real playing conditions, and estimations based on numerical simulations. In this study an optimisation routine is presented that can estimate reed-model parameters given the pressure and flow signals in the mouthpiece. The method is validated on a series of numerically synthesised data. To incorporate the actions of the player in the parameter estimation process, the optimisation routine has to be applied to signals obtained under real playing conditions; the estimated parameters can then be used to resynthesise the pressure and flow signals in the mouthpiece. With measured data, as opposed to numerically synthesised data, special care needs to be taken when modelling the bore of the instrument: a careful study of various experimental datasets revealed that, for resynthesis to work, the bore termination impedance must be known very precisely from theory. An example is given where this requirement is satisfied and the resynthesised signals closely match the original signals generated by the player.
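A hedged sketch of such an optimisation step, using a toy damped-oscillator stand-in for the full time-domain reed/bore simulation; all names, signatures and values here are illustrative, not the paper's model:

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_pressure(k, m, r, t):
    # Toy stand-in for the time-domain model: free response of a damped
    # oscillator with stiffness k, mass m, damping r. The real routine would
    # couple a lumped reed model to a bore with a precisely known termination
    # impedance.
    w0 = np.sqrt(k / m)
    zeta = r / (2.0 * np.sqrt(k * m))
    wd = w0 * np.sqrt(max(1.0 - zeta**2, 1e-12))
    return np.exp(-zeta * w0 * t) * np.cos(wd * t)

def residual(params, t, p_meas):
    k, m, r = params
    return simulate_pressure(k, m, r, t) - p_meas

t = np.linspace(0.0, 0.05, 2000)
p_meas = simulate_pressure(2e6, 5e-5, 3.0, t)       # synthetic "measurement"
fit = least_squares(residual, x0=[1e6, 1e-4, 1.0], args=(t, p_meas))
k_hat, m_hat, r_hat = fit.x
```

The structure (simulate, subtract the measured mouthpiece signal, minimise the residual) is the part that carries over; the paper's point is that with real playing data this only works if the bore model, especially its termination impedance, is accurate.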

Relevance:

30.00%

Publisher:

Abstract:

With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown steadily in popularity in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help in impulse-response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles.

The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability.

The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Earlier VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada.

Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and the propagation of credit shocks on the real economy, using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bonds and causes a recession. These shocks have a significant effect on measures of real activity, price indices, leading indicators and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between the financial and macroeconomic factors. Moreover, it gives an interpretation of the factors without constraining their estimation.

In the third article we study the relation between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises, using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009) respectively. The results show that the VARMA component helps to forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives consistent and precise results on the effects and transmission of monetary policy in the United States. Whereas the FAVAR model used in the earlier study required 510 VAR coefficients to be estimated, we produce similar results with only 84 parameters for the dynamic process of the factors.

The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using a structural FAVARMA model. Within the financial-accelerator framework of Bernanke, Gertler and Gilchrist (1999), we proxy the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected rise in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors.

The behaviour of economic agents and of the economic environment can vary over time (e.g. changes in monetary-policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is likely very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out on data including the recent financial crisis; the procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability of almost 700 coefficients.
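As a concrete illustration of the two-step, data-rich approach running through these chapters, here is a minimal FAVAR-style sketch: principal-component factors extracted from a large standardised panel, followed by an OLS VAR on the factors augmented with a policy variable. The variable names and toy data are assumptions, not the thesis's series or estimator:

```python
import numpy as np

def pca_factors(X, k):
    # Standardise the T x N panel, then take the top-k principal components
    Xs = (X - X.mean(0)) / X.std(0)
    vals, vecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    return Xs @ vecs[:, ::-1][:, :k]          # T x k factor estimates

def fit_var(Z, p):
    # OLS for Z_t = c + A_1 Z_{t-1} + ... + A_p Z_{t-p} + e_t
    T, n = Z.shape
    Y = Z[p:]
    X = np.hstack([np.ones((T - p, 1))] + [Z[p - i:T - i] for i in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B                                  # (1 + n*p) x n coefficients

rng = np.random.default_rng(0)
panel = rng.standard_normal((200, 100))       # stand-in for a large macro panel
rate = rng.standard_normal((200, 1))          # stand-in for the policy rate
Z = np.hstack([pca_factors(panel, 4), rate])
B = fit_var(Z, p=2)
```

The FAVARMA extension argued for in the third article replaces the VAR step with a VARMA for the factor dynamics, which is how 510 VAR coefficients can be compressed to 84 parameters.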

Relevance:

30.00%

Publisher:

Abstract:

A new dynamic model of water quality, Q(2), has recently been developed, capable of simulating large branched river systems. This paper describes the application of a generalized sensitivity analysis (GSA) to Q(2) for single reaches of the River Thames in southern England. Focusing on the simulation of dissolved oxygen (DO), since this may be regarded as a proxy for the overall health of a river, the GSA is used to identify the key parameters controlling model behavior and to provide a probabilistic procedure for model calibration. It is shown that, in the River Thames at least, once approximate parameter values have been estimated it is more important to obtain high-quality forcing functions than improved parameter estimates. Furthermore, there is a need to ensure reasonable simulation of a range of water quality determinands, since a focus on DO alone increases predictive uncertainty in the DO simulations. The Q(2) model has been applied here to the River Thames, but it has broad utility for evaluating other systems in Europe and around the world.
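A hedged sketch of a generalized (regional) sensitivity analysis in the Hornberger-Spear spirit: sample parameter sets, split the runs into behavioural and non-behavioural by a DO error criterion, and rank parameters by the Kolmogorov-Smirnov distance between the two marginal parameter distributions. The model interface and the tolerance are placeholders, not the actual Q(2) API:

```python
import numpy as np
from scipy.stats import ks_2samp

def gsa_rank(sample_params, run_model, do_observed, n_runs=2000, tol=1.0, seed=None):
    rng = np.random.default_rng(seed)
    params = np.array([sample_params(rng) for _ in range(n_runs)])  # (n_runs, n_par)
    errors = np.array([np.sqrt(np.mean((run_model(p) - do_observed) ** 2))
                       for p in params])
    behav = errors < tol                     # behavioural runs reproduce observed DO
    # Large KS statistic => that parameter strongly separates behavioural from
    # non-behavioural runs, i.e. it controls model behaviour
    return [ks_2samp(params[behav, j], params[~behav, j]).statistic
            for j in range(params.shape[1])]
```

Parameters with near-zero KS distance are those the paper finds matter less than the forcing functions once rough values are in hand.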

Relevance:

30.00%

Publisher:

Abstract:

The theta-logistic is a widely used generalisation of the logistic model of regulated biological processes, used in particular to model population regulation; the parameter theta gives the shape of the relationship between per-capita population growth rate and population size. Estimation of theta from population counts is, however, subject to bias, particularly when there are measurement errors. Here we identify the factors favouring accurate estimation of theta by simulating populations regulated according to the theta-logistic model. The factors investigated were measurement error, environmental perturbation and length of time series. Large measurement errors bias estimates of theta towards zero; where the estimated theta is close to zero, the estimated annual return rate may help resolve whether this is due to bias. Environmental perturbations help yield unbiased estimates of theta. Where environmental perturbations are large, estimates of theta are likely to be reliable even when measurement errors are also large; by contrast, where the environment is relatively constant, unbiased estimates of theta can only be obtained if populations are counted precisely. Our results yield practical conclusions for the design of long-term population surveys. Estimating the precision of population counts would be valuable, and could be achieved in practice by repeating counts in at least some years. Increasing the length of time series beyond 10 or 20 years yields only small benefits. If populations are measured with appropriate accuracy, given the level of environmental perturbation, unbiased estimates can be obtained from relatively short censuses. These conclusions are optimistic for the estimation of theta. © 2008 Elsevier B.V. All rights reserved.
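For reference, the theta-logistic model under discussion is conventionally written (a standard formulation; the abstract itself does not spell it out) as:

```latex
\frac{1}{N}\frac{dN}{dt} \;=\; r_{\max}\left[1 - \left(\frac{N}{K}\right)^{\theta}\right]
```

where K is the carrying capacity and r_max the maximum per-capita growth rate. Setting θ = 1 recovers the ordinary logistic; θ < 1 concentrates density dependence at low population sizes, and θ > 1 near K, which is why the shape parameter is so sensitive to noise in the counts.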

Relevance:

30.00%

Publisher:

Abstract:

High-resolution ensemble simulations (Δx = 1 km) are performed with the Met Office Unified Model for the Boscastle (Cornwall, UK) flash-flooding event of 16 August 2004. Forecast uncertainties arising from imperfections in the forecast model are analysed by comparing the simulation results produced by two types of perturbation strategy. Motivated by the meteorology of the event, one type of perturbation alters relevant physics choices or parameter settings in the model's parametrization schemes. The other type is designed to account for representativity error in the boundary-layer parametrization: it makes direct changes to the model state and provides a lower bound against which to judge the spread produced by other uncertainties. The Boscastle simulations have genuine skill at scales of approximately 60 km, and the ensemble spread can be estimated to within ∼10% with only eight members. Differences between the model-state perturbation and physics modification strategies are discussed, the former being more important for triggering and the latter for subsequent cell development, including the average internal structure of convective cells. Despite such differences, the spread in rainfall evaluated at skilful scales is shown to be only weakly sensitive to the perturbation strategy. This suggests that relatively simple strategies for treating model uncertainty may be sufficient for practical, convective-scale ensemble forecasting.

Relevance:

30.00%

Publisher:

Abstract:

Many efforts are currently oriented toward extracting more information from ocean color than the chlorophyll a concentration. Among the biological parameters potentially accessible from space, estimates of phytoplankton cell size and of light absorption by colored detrital matter (CDM) would lead to an indirect assessment of major components of the organic carbon pool in the ocean, which would benefit oceanic carbon budget models. We present here two procedures to retrieve simultaneously, from ocean color measurements in a limited number of bands, the magnitudes and spectral shapes of light absorption by both CDM and phytoplankton, along with a size parameter for phytoplankton. The performance of the two procedures was evaluated using data sets corresponding to increasing uncertainties: (1) measured absorption coefficients of phytoplankton, particulate detritus, and colored dissolved organic matter (CDOM), together with measured chlorophyll a concentrations; and (2) SeaWiFS upwelling radiance measurements and chlorophyll a concentrations estimated from global algorithms. The in situ data were acquired during three cruises, differing in their relative proportions of CDM and phytoplankton, over a continental shelf off Brazil. No local information was introduced in either procedure, to keep them generally applicable. Over the study area, the absorption coefficient of CDM at 443 nm was retrieved from SeaWiFS radiances with a relative root mean square error (RMSE) of 33%, and phytoplankton light absorption coefficients in the SeaWiFS bands (from 412 to 510 nm) were retrieved with RMSEs between 28% and 33%. These results are comparable to or better than those obtained by three published models. In addition, a size parameter of phytoplankton and the spectral slope of CDM absorption were retrieved with RMSEs of 17% and 22%, respectively. If these methods are applied at a regional scale, performance could be substantially improved by locally tuning some empirical relationships.
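The "spectral slope of CDM absorption" retrieved here refers to the exponential spectral model that is standard in ocean optics (the authors' exact parametrisation may differ in detail):

```latex
a_{\mathrm{CDM}}(\lambda) \;=\; a_{\mathrm{CDM}}(443)\, e^{-S\,(\lambda - 443)}
```

where a_CDM(443) is the magnitude at the 443 nm reference band (retrieved with 33% RMSE above) and S is the spectral slope (retrieved with 22% RMSE), so the two retrieved quantities fully specify the CDM absorption spectrum.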

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to estimate the variance components and genetic parameters for the visual scores that constitute the Morphological Evaluation System (MES), namely body structure (S), precocity (P) and musculature (M), in Nellore beef cattle at the weaning and yearling stages, using threshold Bayesian models. The data comprised visual scores of 5,407 animals evaluated at weaning and 2,649 at the yearling stage. The genetic parameters for the visual score traits were estimated through two-trait analyses, using a threshold animal model fitted by Bayesian methodology with the MTGSAM (Multiple Trait Gibbs Sampler for Animal Models) threshold software. Heritability estimates for S, P and M were 0.68, 0.65 and 0.62 at weaning and 0.44, 0.38 and 0.32 at the yearling stage, respectively. Since these heritability estimates are high, the traits are expected to respond favorably to direct selection. The visual scores evaluated at the weaning and yearling stages could be used in the composition of new selection indexes, as they present sufficient genetic variability to promote genetic progress in these morphological traits.
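The threshold animal model referred to above links each ordinal score to a continuous latent liability (a standard formulation; the notation below is generic, not the paper's):

```latex
\ell = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{e},
\qquad
y = j \;\iff\; t_{j-1} < \ell \le t_j
```

where β collects fixed effects, u the additive genetic effects, e the residuals, and t_0 < t_1 < … are thresholds partitioning the liability scale into the observed score categories. Gibbs sampling, as implemented in MTGSAM, draws from the joint posterior of the liabilities, thresholds and variance components, which is how heritabilities are obtained for categorical scores.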

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)