38 results for Multi-model inference
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones, instead of at fixed set points. One strategy to implement zone control is the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement a stable zone control is through an infinite horizon cost in which the set point is an additional decision variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is designed to maintain the outputs within their corresponding feasible zones, while reaching the desired optimal input target. Simulation of a process from the oil refining industry illustrates the performance of the proposed strategy.
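The core idea of treating the set point as a decision variable can be sketched in a few lines. For a quadratic tracking cost, the optimal in-zone set point reduces to a projection of the predicted output onto the zone; the residual plays the role of the output slack that keeps the optimisation feasible. This is a conceptual illustration, not the authors' full robust MPC formulation:

```python
def zone_setpoint(y_pred, zone_lo, zone_hi):
    """For a quadratic tracking cost, the optimal set point restricted to the
    zone is the projection of the predicted output onto [zone_lo, zone_hi].
    The residual acts as the output slack that preserves feasibility."""
    y_sp = min(max(y_pred, zone_lo), zone_hi)
    slack = y_pred - y_sp  # zero whenever the prediction already lies in the zone
    return y_sp, slack
```

When the prediction violates the zone, the slack absorbs exactly the violation, so the optimisation problem remains feasible at every step.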
Abstract:
Due to manufacturing or damage processes, brittle materials present a large number of randomly distributed micro-cracks. The lifetime of these materials is governed by crack propagation under the applied mechanical and thermal loadings. In order to deal with such materials, the present work develops a boundary element method (BEM) model allowing for the analysis of multiple random crack propagation in plane structures. The adopted formulation is based on the dual BEM, for which singular and hyper-singular integral equations are used. An iterative scheme to predict the crack growth path and crack length increment is proposed. This scheme enables us to simulate the localization and coalescence phenomena, which are the main contribution of this paper. Considering the fracture mechanics approach, the displacement correlation technique is applied to evaluate the stress intensity factors. The propagation angle and the equivalent stress intensity factor are calculated using the theory of maximum circumferential stress. Examples of multi-fractured domains, loaded up to rupture, are considered to illustrate the applicability of the proposed method. (C) 2011 Elsevier Ltd. All rights reserved.
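The maximum circumferential stress (Erdogan-Sih) criterion mentioned above has a standard closed form, sketched below; this is the textbook criterion, not the paper's BEM implementation:

```python
import math

def propagation_angle(KI, KII):
    """Crack deflection angle (radians) from the maximum circumferential
    stress criterion: theta = 2 atan[(KI - sqrt(KI^2 + 8 KII^2)) / (4 KII)]."""
    if KII == 0.0:
        return 0.0  # pure mode I: the crack grows straight ahead
    return 2.0 * math.atan((KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

def equivalent_sif(KI, KII):
    """Equivalent stress intensity factor along the deflected direction."""
    th = propagation_angle(KI, KII)
    c = math.cos(th / 2.0)
    return c * (KI * c**2 - 1.5 * KII * math.sin(th))
```

For pure mode I the angle is zero and the equivalent factor reduces to KI; for pure mode II the criterion gives the classical deflection of about 70.5 degrees.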
Abstract:
A nonlinear finite element model was developed to simulate the nonlinear response of three-leaf masonry specimens, which were subjected to laboratory tests with the aim of investigating the mechanical behaviour of multiple-leaf stone masonry walls up to failure. The specimens consisted of two external leaves made of stone bricks and mortar joints, and an internal leaf in mortar and stone aggregate. Different loading conditions, typologies of the collar joints, and stone types were taken into account. The constitutive law implemented in the model is characterized by a damage tensor, which allows the damage-induced anisotropy accompanying the cracking process to be described. To follow the post-peak behaviour of the specimens with sufficient accuracy it was necessary to make the damage model non-local, to avoid mesh-dependency effects related to the strain-softening behaviour of the material. Comparisons between the predicted and measured failure loads are quite satisfactory in most of the studied cases. (c) 2007 Elsevier Ltd. All rights reserved.
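The non-local regularisation referred to above replaces the local strain at each point by a weighted average over a neighbourhood whose size is set by a characteristic length, which removes the mesh dependency of strain softening. A minimal 1D sketch with a Gaussian weight (an assumed weight function, chosen for illustration):

```python
import numpy as np

def nonlocal_average(x, strain, length):
    """Replace each local strain value by a Gaussian-weighted average over
    neighbouring points; `length` is the characteristic (internal) length."""
    out = np.empty_like(strain)
    for i, xi in enumerate(x):
        w = np.exp(-((x - xi) ** 2) / (2.0 * length**2))
        out[i] = np.sum(w * strain) / np.sum(w)  # normalised weighted mean
    return out
```

A uniform strain field is left unchanged by the averaging, while a localised spike is smeared over the characteristic length, which is exactly the mechanism that prevents damage from collapsing into a single element.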
Abstract:
In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest with the asymptotic chi-square distribution, which guarantees correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported. (C) 2009 The Korean Statistical Society. Published by Elsevier B.V. All rights reserved.
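The robustness motivation behind the Student t assumption can be illustrated directly (this is not the authors' heteroscedastic model, only the underlying idea): under gross outliers, the Gaussian ML location estimate (the sample mean) is dragged away, while the Student t ML location stays near the bulk of the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
clean = rng.normal(0.0, 1.0, 500)
data = np.concatenate([clean, [100.0, 120.0, 150.0]])  # three gross outliers

mean_normal = data.mean()                 # Gaussian ML location estimate
df_t, loc_t, scale_t = stats.t.fit(data)  # Student t ML estimates (numerical)
```

The fitted degrees of freedom adapt to the heavy tails, so the t location estimate effectively downweights the outliers.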
Abstract:
In this article, we discuss inferential aspects of measurement error regression models with null intercepts when the unknown quantity x (latent variable) follows a skew-normal distribution. We first examine the maximum-likelihood approach to estimation via the EM algorithm by exploring statistical properties of the model considered. Then, the marginal likelihood, the score function and the observed information matrix of the observed quantities are presented, allowing direct inference implementation. In order to discuss some diagnostic techniques for this type of model, we derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes. The results and methods developed in this paper are illustrated using part of a real data set from Hadgu and Koch [1999, Application of generalized estimating equations to a dental randomized clinical trial. Journal of Biopharmaceutical Statistics, 9, 161-178].
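The skew-normal density underlying this family of models has the simple Azzalini form 2 phi(z) Phi(alpha z), which can be checked against the library implementation (a sketch of the density itself, not of the paper's EM algorithm):

```python
import numpy as np
from scipy import stats

def skew_normal_pdf(x, alpha, loc=0.0, scale=1.0):
    """Azzalini skew-normal density: (2/scale) * phi(z) * Phi(alpha * z),
    with z = (x - loc)/scale; alpha = 0 recovers the normal density."""
    z = (np.asarray(x) - loc) / scale
    return 2.0 / scale * stats.norm.pdf(z) * stats.norm.cdf(alpha * z)
```

Positive alpha skews the density to the right, negative alpha to the left, and the construction agrees with `scipy.stats.skewnorm`.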
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in the Item Response Models (IRM) and are usually called latent traits. A usual assumption for parameter estimation of the IRM, considering one group of examinees, is that the latent traits are random variables which follow a standard normal distribution. However, many works suggest that this assumption does not hold in many cases. Furthermore, when this assumption fails, the parameter estimates tend to be biased and misleading inference can result. Therefore, it is important to model the distribution of the latent traits properly. In this paper we present an alternative latent trait model based on the so-called skew-normal distribution; see Genton (2004). We use the centred parameterization proposed by Azzalini (1985). This approach ensures model identifiability, as pointed out by Azevedo et al. (2009b). Also, a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented data approach. A simulation study was performed in order to assess parameter recovery under the proposed model and estimation method, and the effect of the asymmetry level of the latent trait distribution on parameter estimation. A comparison of our approach with other estimation methods (which assume symmetric normality of the latent trait distribution) was also considered. The results indicated that our proposed algorithm properly recovers all parameters. Specifically, the greater the asymmetry level, the better the performance of our approach compared with the others, mainly in the presence of small sample sizes (numbers of examinees).
Furthermore, we analyzed a real data set which presents indication of asymmetry concerning the latent traits distribution. The results obtained by using our approach confirmed the presence of strong negative asymmetry of the latent traits distribution. (C) 2010 Elsevier B.V. All rights reserved.
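The Metropolis-Hastings building block of an MHWGS scheme is short to sketch. Below, a toy random-walk Metropolis sampler targets a skew-normal "latent trait" density; this illustrates the sampler mechanics only, not the paper's full augmented-data algorithm (target, step size and a = 4 are illustrative choices):

```python
import numpy as np
from scipy import stats

def metropolis(logpdf, x0, n, step, rng):
    """Random-walk Metropolis: propose x + step*N(0,1), accept with
    probability min(1, pi(proposal)/pi(current))."""
    x, lp, samples = x0, logpdf(x0), []
    for _ in range(n):
        prop = x + step * rng.standard_normal()
        lp_prop = logpdf(prop)
        if np.log(rng.random()) < lp_prop - lp:  # acceptance test in log space
            x, lp = prop, lp_prop
        samples.append(x)
    return np.asarray(samples)

rng = np.random.default_rng(0)
draws = metropolis(lambda t: stats.skewnorm.logpdf(t, 4.0), 0.0, 20000, 1.0, rng)
```

After discarding a burn-in period, the sample mean approaches the theoretical skew-normal mean, sqrt(2/pi) * a / sqrt(1 + a^2), about 0.774 for a = 4.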
Abstract:
In this paper, we discuss inferential aspects for the Grubbs model when the unknown quantity x (latent response) follows a skew-normal distribution, extending early results given in Arellano-Valle et al. (J Multivar Anal 96:265-281, 2005b). Maximum likelihood parameter estimates are computed via the EM-algorithm. Wald and likelihood ratio type statistics are used for hypothesis testing and we explain the apparent failure of the Wald statistics in detecting skewness via the profile likelihood function. The results and methods developed in this paper are illustrated with a numerical example.
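The likelihood ratio machinery used for hypothesis testing here is generic and easy to state (a sketch of the standard asymptotic test, not of the Grubbs-model likelihood itself):

```python
from scipy import stats

def lr_test(loglik_full, loglik_null, df):
    """Likelihood ratio statistic 2*(l1 - l0) and its asymptotic p-value
    from a chi-square distribution with `df` degrees of freedom."""
    stat = 2.0 * (loglik_full - loglik_null)
    return stat, stats.chi2.sf(stat, df)
```

For example, maximised log-likelihoods of -10 under the full model and -12 under the null, with one restricted parameter, give a statistic of 4 and a p-value of about 0.0455.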
Abstract:
This paper addresses the capacitated lot sizing problem (CLSP) with a single stage composed of multiple plants, items and periods, with setup carry-over among the periods. The CLSP is well studied and many heuristics have been proposed to solve it. Nevertheless, few studies have explored the multi-plant capacitated lot sizing problem (MPCLSP), which means that few solution methods have been proposed for it. Furthermore, to our knowledge, no study of the MPCLSP with setup carry-over has been reported in the literature. This paper presents a mathematical model and a GRASP (Greedy Randomized Adaptive Search Procedure) with path relinking for the MPCLSP with setup carry-over. This solution method is an extension and adaptation of a previously adopted methodology without the setup carry-over. Computational tests showed that including the setup carry-over yields a significant improvement in solution value at a low increase in computational time.
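The GRASP construction phase can be sketched on a toy knapsack instance (the MPCLSP itself is far richer, and path relinking is omitted): at each step, a candidate is drawn at random from a restricted candidate list (RCL) of the best-ratio items, and the best solution over repeated restarts is kept:

```python
import random

def grasp_knapsack(values, weights, capacity, alpha=1.0, iters=30, seed=1):
    """GRASP construction for a toy 0/1 knapsack: greedy randomized choice
    from an RCL of the top `alpha` fraction of value/weight ratios."""
    rng = random.Random(seed)
    best_sol, best_val = set(), 0.0
    for _ in range(iters):
        remaining, sol, load, val = set(range(len(values))), set(), 0.0, 0.0
        while True:
            cand = [i for i in remaining if load + weights[i] <= capacity]
            if not cand:
                break  # solution is maximal: no further item fits
            cand.sort(key=lambda i: values[i] / weights[i], reverse=True)
            rcl = cand[: max(1, int(alpha * len(cand)))]
            i = rng.choice(rcl)  # randomized step distinguishes GRASP from greedy
            sol.add(i); remaining.discard(i)
            load += weights[i]; val += values[i]
        if val > best_val:
            best_sol, best_val = set(sol), val
    return best_sol, best_val
```

In a full GRASP with path relinking, each constructed solution would also be locally improved and then recombined with elite solutions along the path between them.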
Abstract:
In this work we study the problem of model identification for a population, employing a discrete dynamic model based on the Richards growth model. The population is subjected to interventions due to consumption, such as hunting or the farming of animals. The model identification allows us to estimate the probability or the average time for the population to reach a certain level. The parameter inference for these models is obtained with the use of the likelihood profile technique developed in this paper. The identification method developed here can be applied to evaluate the productivity of animal husbandry or the risk of extinction of autochthonous populations. It is applied to data from the Brazilian beef cattle herd population, and the time for the population to reach a certain goal level is investigated.
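A discrete Richards-type model with a constant off-take term can be simulated in a few lines. The exact form below (logistic for q = 1, with a constant harvest) is an assumed common variant, not necessarily the paper's equations:

```python
def richards_step(n, r, K, q, harvest=0.0):
    """One step of a discrete Richards growth model with constant off-take:
    n' = n + r*n*(1 - (n/K)^q) - harvest."""
    return n + r * n * (1.0 - (n / K) ** q) - harvest

def simulate(n0, r, K, q, harvest, steps):
    n = n0
    for _ in range(steps):
        n = richards_step(n, r, K, q, harvest)
    return n
```

Without harvesting the population settles at the carrying capacity K; with a sustainable constant harvest it settles at the upper root of r*n*(1 - n/K) = harvest.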
Abstract:
Mature weight breeding values were estimated using a multi-trait animal model (MM) and a random regression animal model (RRM). Data consisted of 82 064 weight records from 8 145 animals, recorded from birth to eight years of age. Weights at standard ages were considered in the MM. All models included contemporary groups as fixed effects, and age of dam (linear and quadratic effects) and animal age as covariates. In the RRM, mean trends were modelled through a cubic regression on orthogonal polynomials of animal age, and direct and maternal genetic and maternal permanent environmental effects were also included as random. Legendre polynomials of orders 4, 3, 6 and 3 were used for animal and maternal genetic and permanent environmental effects, respectively, considering five classes of residual variances. Mature weight (five years) direct heritability estimates were 0.35 (MM) and 0.38 (RRM). Rank correlation between sires' breeding values estimated by MM and RRM was 0.82. However, if the top 2% (12) or 10% (62) of the young sires were selected on MM-predicted breeding values, respectively 71% and 80% of the same sires would also be selected using RRM estimates. The RRM modelled the changes in the (co)variances with age adequately, and larger breeding value accuracies can be expected using this model.
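The Legendre-polynomial covariates at the heart of a random regression model are built by standardizing age to [-1, 1] and evaluating the polynomials up to the chosen order; a minimal sketch (the age range and order here are illustrative, not the study's values):

```python
import numpy as np

def legendre_covariates(age, age_min, age_max, order):
    """Standardize age to [-1, 1] and return the matrix of Legendre
    polynomial values P0..P(order-1), one row per age, as used to model
    trajectories in random regression animal models."""
    t = 2.0 * (np.atleast_1d(age) - age_min) / (age_max - age_min) - 1.0
    return np.polynomial.legendre.legvander(t, order - 1)
```

Given a genetic covariance matrix K among the regression coefficients, the genetic (co)variances at any pair of ages follow as Phi @ K @ Phi.T, which is how the RRM lets heritability change smoothly with age.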
Abstract:
This study proposes a simplified mathematical model to describe the processes occurring in an anaerobic sequencing batch biofilm reactor (ASBBR) treating lipid-rich wastewater. The reactor, subjected to rising organic loading rates, contained biomass immobilized in cubic polyurethane foam matrices and was operated at 32 +/- 2 degrees C using 24-h batch cycles. In the adaptation period, the reactor was fed with synthetic substrate for 46 days and was operated without agitation. After agitation was raised to 500 rpm, the organic loading rate (OLR) rose from 0.3 to 1.2 g chemical oxygen demand (COD) L^-1 day^-1. The ASBBR was then fed fat-rich wastewater (dairy wastewater) over an operation period lasting 116 days, during which four operational conditions (OCs) were tested: 1.1 +/- 0.2 g COD L^-1 day^-1 (OC1), 4.5 +/- 0.4 g COD L^-1 day^-1 (OC2), 8.0 +/- 0.8 g COD L^-1 day^-1 (OC3), and 12.1 +/- 2.4 g COD L^-1 day^-1 (OC4). The bicarbonate alkalinity (BA)/COD supplementation ratio was 1:1 at OC1, 1:2 at OC2, and 1:3 at OC3 and OC4. Total COD removal efficiencies were higher than 90%, with a constant production of bicarbonate alkalinity, in all OCs tested. After the process reached stability, temporal profiles of substrate consumption were obtained. A simplified first-order model was fitted to these experimental data, making possible the inference of kinetic parameters. A simplified mathematical model correlating soluble COD with volatile fatty acids (VFA) was also proposed, and through it the consumption rates of intermediate products such as propionic and acetic acid were inferred. Results showed that the microbial consortium worked properly and high efficiencies were obtained, even with high initial substrate concentrations, which led to the accumulation of intermediate metabolites and caused low specific consumption rates.
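Fitting a first-order substrate decay model to batch-cycle profiles is a standard exercise; a sketch with the common form C(t) = Cr + (C0 - Cr) exp(-k t) on synthetic data (the functional form and all parameter values here are assumptions for illustration, not the study's fitted values):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, cr, k):
    """Residual substrate profile: C(t) = Cr + (C0 - Cr) * exp(-k * t),
    decaying from the initial concentration C0 to the residual Cr."""
    return cr + (c0 - cr) * np.exp(-k * t)

t = np.linspace(0.0, 24.0, 13)          # hours within one 24-h batch cycle
cod = first_order(t, 2.0, 0.2, 0.35)    # synthetic, noise-free COD profile
popt, _ = curve_fit(first_order, t, cod, p0=[1.5, 0.1, 0.1])
```

On noise-free data the fit recovers the generating parameters; on real profiles the fitted k is the apparent first-order kinetic constant the abstract refers to.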
Abstract:
Context. Cluster properties can be studied more distinctly in pairs of clusters, where we expect the effects of interactions to be strong. Aims. We discuss here the properties of the double cluster Abell 1758, at a redshift z ~ 0.279. These clusters show strong evidence for merging. Methods. We analyse the optical properties of the North and South clusters of Abell 1758 based on deep imaging obtained with the Canada-France-Hawaii Telescope (CFHT) archive Megaprime/Megacam camera in the g' and r' bands, covering a total region of about 1.05 x 1.16 deg^2, or 16.1 x 17.6 Mpc^2. Our X-ray analysis is based on archival XMM-Newton images. Numerical simulations were performed using an N-body algorithm to treat the dark-matter component, a semi-analytical galaxy-formation model for the evolution of the galaxies and a grid-based hydrodynamic code with a piecewise parabolic method (PPM) scheme for the dynamics of the intra-cluster medium. We computed galaxy luminosity functions (GLFs) and 2D temperature and metallicity maps of the X-ray gas, which we then compared to the results of our numerical simulations. Results. The GLFs of Abell 1758 North are well fit by Schechter functions in the g' and r' bands, but with a small excess of bright galaxies, particularly in the r' band; their faint-end slopes are similar in both bands. In contrast, the GLFs of Abell 1758 South are not well fit by Schechter functions: excesses of bright galaxies are seen in both bands; the faint end of the GLF is not very well defined in g'. The GLF computed from our numerical simulations assuming a halo mass-luminosity relation agrees with those derived from the observations. From the X-ray analysis, the most striking features are structures in the metal distribution. We found two elongated regions of high metallicity in Abell 1758 North with two peaks towards the centre. In contrast, Abell 1758 South shows a deficit of metals in its central regions.
Comparing observational results to those derived from numerical simulations, we could mimic the most prominent features present in the metallicity map and propose an explanation for the dynamical history of the cluster. We found in particular that in the metal-rich elongated regions of the North cluster, winds had been more efficient than ram-pressure stripping in transporting metal-enriched gas to the outskirts. Conclusions. We confirm the merging structure of the North and South clusters, both at optical and X-ray wavelengths.
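The Schechter function fitted to the GLFs above has a compact form in absolute magnitudes; a sketch of the standard expression (parameter values in the usage below are illustrative, not the fitted values of this study):

```python
import math

def schechter_mag(M, M_star, phi_star, alpha):
    """Schechter luminosity function in absolute magnitudes:
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = 10^(0.4 (M* - M)).
    alpha sets the faint-end slope; M* marks the exponential bright-end cutoff."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)
```

A "small excess of bright galaxies" shows up observationally as counts above this curve at magnitudes brighter than M*, where the exponential cutoff would otherwise suppress them.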
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data given the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is here proposed. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transference function is obtained by random drawing on the set of possible Boolean functions, thus creating its dynamics. On the other hand, the DREAM time series data present variation of network size, and their topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results: the non-Shannon entropy reduced the number of false connections in the inferred topology.
The obtained best free parameter of the Tsallis entropy was on average in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for GRNs inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
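The Tsallis generalized entropy used as criterion function is a one-line formula; it reduces to the Shannon entropy in the limit q -> 1, and q > 1 (the subextensive regime reported above) penalizes spread-out distributions more heavily:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); the q = 1 branch
    returns the Shannon entropy (in nats), its continuous limit."""
    if q == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    return (1.0 - sum(pi**q for pi in p)) / (q - 1.0)
```

For a uniform distribution over n states this gives (1 - n^(1-q)) / (q - 1), so the q reported in the 2.5-3.5 range compresses the entropy scale relative to Shannon's ln(n).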
Abstract:
This paper investigates the validity of a simplified equivalent reservoir representation of a multi-reservoir hydroelectric system for modelling its optimal operation for power maximization. This simplification, proposed by Arvanitidis and Rosing (IEEE Trans Power Appar Syst 89(2):319-325, 1970), imputes a potential energy equivalent reservoir with energy inflows and outflows. The hydroelectric system is also modelled for power maximization considering individual reservoir characteristics without simplifications. Both optimization models employed the MINOS package for the solution of the non-linear programming problems. A comparison between total optimized power generation over the planning horizon by the two methods shows that the equivalent reservoir is capable of producing satisfactory power estimates, with less than 6% underestimation. The generation and total reservoir storage trajectories along the planning horizon obtained by the equivalent reservoir method, however, presented significant discrepancies compared to those found with the detailed model. This study is motivated by the fact that Brazilian generation system operations are based on the equivalent reservoir method as part of the power dispatch procedures. The potential energy equivalent reservoir is an alternative which eliminates problems with the dimensionality of state variables in a dynamic programming model.
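The equivalent reservoir idea reduces the whole cascade to one energy balance; a toy sketch (the clipping rules and all numbers below are illustrative assumptions, not the paper's model):

```python
def simulate_equivalent_reservoir(e0, capacity, inflows, targets):
    """Potential-energy equivalent reservoir: a single energy balance
    E' = clip(E + inflow - generation, 0, capacity), where generation is
    limited by the energy actually available and the excess is spilled."""
    e, generated = e0, []
    for inflow, target in zip(inflows, targets):
        g = min(target, e + inflow)        # cannot generate more than is stored
        e = min(e + inflow - g, capacity)  # spill anything above capacity
        generated.append(g)
    return e, generated
```

The price of this aggregation is exactly what the paper measures: the single energy state cannot reproduce the individual storage trajectories of the detailed multi-reservoir model.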
Abstract:
This paper addresses the development of several alternative novel hybrid/multi-field variational formulations of the geometrically exact three-dimensional elastostatic beam boundary-value problem. In the framework of the complementary energy-based formulations, a Legendre transformation is used to introduce the complementary energy density in the variational statements as a function of stresses only. The corresponding variational principles are shown to feature stationarity within the framework of the boundary-value problem. Both weak and linearized weak forms of the principles are presented. The main features of the principles are highlighted, giving special emphasis to their relationships from both theoretical and computational standpoints. (C) 2010 Elsevier Ltd. All rights reserved.
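The Legendre transformation that introduces the complementary energy density can be illustrated in the simplest one-dimensional linear-elastic case (a textbook example, not the paper's three-dimensional beam formulation):

```latex
W^*(\sigma) \;=\; \sup_{\varepsilon}\bigl[\,\sigma\,\varepsilon - W(\varepsilon)\,\bigr],
\qquad
W(\varepsilon) = \tfrac{1}{2}\,E\,\varepsilon^{2}
\;\Longrightarrow\;
\varepsilon = \frac{\sigma}{E},
\quad
W^*(\sigma) = \frac{\sigma^{2}}{2E}.
```

The supremum is attained where \(\sigma = \partial W/\partial \varepsilon\), so the complementary energy is expressed as a function of stresses only, which is exactly the device used in the complementary energy-based variational statements above.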