943 results for MCDONALD EXTENDED EXPONENTIAL MODEL
Abstract:
One objective of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on that boundary. One of the methods used in dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, the extended Kalman filter (EKF) achieves poor tracking ability. An analytically developed evolution model is not feasible at this moment. This paper investigates identifying the evolution model in parallel with the EKF and updating the evolution model periodically. The evolution model transition matrix is identified using the history of the estimated resistivity distribution obtained by a sensitivity-matrix-based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed by numerical simulations of a domain with time-varying resistivity and by experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected range for the tissues of a human chest. The EKF results suggest that the tracking ability is significantly improved with this approach.
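As a minimal sketch of the predict/update cycle underlying this approach, a linear Kalman step with an identified transition matrix F might look as follows (names, dimensions, and noise covariances are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle with evolution model x_{k+1} = F x_k."""
    x_pred = F @ x                        # predict with identified evolution model F
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred) # correct with boundary measurement z
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With F set to the identity this reduces to the random walk model criticized in the abstract; re-identifying F from the estimate history is what improves tracking.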
Abstract:
The objective of this work is to develop an improved model of the human thermal system. The features included are important for solving real problems: 3D heat conduction, the use of elliptical cylinders to adequately approximate body geometry, the careful representation of tissues and important organs, and the flexibility of the computational implementation. The focus is on the passive system, which is composed of 15 cylindrical elements and includes heat transfer between large arteries and veins. The results of thermal neutrality and transient simulations are in excellent agreement with experimental data, indicating that the model adequately represents the behavior of the human thermal system. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
We present a method to simulate Magnetic Barkhausen Noise using the Random Field Ising Model with magnetic long-range interaction. The method allows calculating the magnetic flux density behavior in particular sections of the lattice. The results show an internal demagnetizing effect that arises from the magnetic long-range interactions. This demagnetizing effect induces the appearance of a magnetic pattern in the region of magnetic avalanches. Compared with the traditional method, the proposed numerical procedure markedly reduces the computational cost of the simulation. (C) 2008 Published by Elsevier B.V.
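A toy one-dimensional zero-temperature RFIM field sweep with a mean-field demagnetizing term can be sketched as follows (the 1D ring geometry and all parameters are assumptions for illustration; the paper's lattice and interaction kernel differ):

```python
import numpy as np

def rfim_hysteresis(n=200, disorder=1.2, J=1.0, kd=0.5, seed=0):
    """Sweep the external field up and relax avalanches at zero temperature."""
    rng = np.random.default_rng(seed)
    h_rand = rng.normal(0.0, disorder, n)   # quenched random fields
    s = -np.ones(n)                         # start fully magnetized down
    mags = []
    for H in np.linspace(-4.0, 4.0, 400):
        flipped = True
        while flipped:                      # relax until no spin wants to flip
            nn = np.roll(s, 1) + np.roll(s, -1)          # nearest neighbours on a ring
            local = H + h_rand + J * nn - kd * s.mean()  # long-range demagnetizing term
            flip = (s < 0) & (local > 0)
            flipped = bool(flip.any())
            s[flip] = 1.0                   # spins only flip up: avalanche dynamics
        mags.append(s.mean())
    return np.array(mags)
```

The jumps in the returned magnetization curve are the avalanches that produce the Barkhausen signal; the -kd * s.mean() term is a crude stand-in for the internal demagnetizing effect described above.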
Abstract:
There are several ways to model a building and its heat gains from external as well as internal sources in order to evaluate proper operation, audit retrofit actions, and forecast energy consumption. Different techniques, varying from simple regression to models based on physical principles, can be used for simulation. A common assumption of all these models is that the input variables should be based on realistic data when available; otherwise, the estimated energy consumption may be significantly under- or overestimated. In this paper, a comparison is made between a simple model based on an artificial neural network (ANN) and a model based on physical principles (EnergyPlus) as auditing and prediction tools for forecasting building energy consumption. The Administration Building of the University of Sao Paulo is used as a case study. The building energy consumption profiles are collected, as well as the campus meteorological data. Results show that both models are suitable for energy consumption forecasting. Additionally, a parametric analysis is carried out for the considered building in EnergyPlus in order to evaluate the influence of several parameters, such as the building occupation profile and weather data, on such forecasting. (C) 2008 Elsevier B.V. All rights reserved.
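A minimal one-hidden-layer network trained by full-batch gradient descent can stand in for the ANN model described above (architecture, learning rate, and data shapes are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.05, epochs=3000, seed=0):
    """Fit a tiny tanh network mapping inputs (e.g. weather data) to consumption."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        pred = h @ W2 + b2                    # predicted consumption
        err = pred - y
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)       # backprop: output layer
        gh = err @ W2.T * (1.0 - h ** 2)                  # backprop through tanh
        gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)         # backprop: hidden layer
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2
```

In practice the inputs would be normalized meteorological and occupancy variables; here any scaled feature matrix works.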
Abstract:
Reconciliation can be divided into stages, each stage representing the performance of a mining operation, such as long-term estimation, short-term estimation, planning, mining, and mineral processing. The gold industry includes another stage, the budget, in which the company informs the financial market of its annual production forecast. Dividing reconciliation into stages increases the reliability of the annual budget reported by mining companies, while also detecting and correcting the critical steps responsible for the overall estimation error through the optimization of sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, based on correct sampling practices and the subdivision of reconciliation into stages, aiming at better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.
Abstract:
It is well known that structures subjected to dynamic loads do not follow the usual similarity laws when the material is strain rate sensitive. As a consequence, it is not possible to use a scaled model to predict the prototype behaviour. In the present study, this problem is overcome by changing the impact velocity so that the model behaves exactly as the prototype. This exact solution is generated thanks to the use of an exponential constitutive law to infer the dynamic flow stress. Furthermore, it is shown that the adopted procedure does not rely on any previous knowledge of the structure response. Three analytical models are used to analyze the performance of the technique. It is shown that perfect similarity is achieved, regardless of the magnitude of the scaling factor. For the class of material used, the solution outlined has long been sought, inasmuch as it allows perfect similarity for strain rate sensitive structures subject to impact loads. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The application of the functionally graded material (FGM) concept to piezoelectric transducers allows the design of composite transducers without interfaces, due to the continuous change of property values. Thus, large improvements can be achieved, such as reduced stress concentration, increased bonding strength, and wider bandwidth. This work proposes the design and modeling of FGM piezoelectric transducers and compares their performance with non-FGM ones. Analytical and finite element (FE) models of FGM piezoelectric transducers radiating a plane pressure wave into a fluid medium are developed and their results are compared. The ANSYS software is used for the FE modeling. The analytical model is based on an FGM-equivalent acoustic transmission-line model, implemented using MATLAB software. Two cases are considered: (i) the transducer emits a pressure wave into water and is composed of a graded piezoceramic disk with backing and matching layers made of homogeneous materials; (ii) the transducer has no backing or matching layer, and in this case no external load is simulated. Time and frequency pressure responses are obtained through a transient analysis. The material properties are graded along the thickness direction. Linear and exponential gradation functions are implemented to illustrate the influence of gradation on the transducer pressure response, electrical impedance, and resonance frequencies. (C) 2009 Elsevier B.V. All rights reserved.
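A graded layer in a transmission-line model can be approximated by cascading many thin homogeneous slices, each transformed by the standard impedance relation Z_in = Z (Z_L + jZ tan kd)/(Z + jZ_L tan kd). A sketch with an assumed slice discretization (not the authors' MATLAB implementation):

```python
import numpy as np

def layer_input_impedance(Z_load, Z_layer, k, d):
    """Transform a load impedance through one homogeneous acoustic layer."""
    t = np.tan(k * d)
    return Z_layer * (Z_load + 1j * Z_layer * t) / (Z_layer + 1j * Z_load * t)

def graded_stack_impedance(Z_load, Z_profile, k_profile, thicknesses):
    """Approximate a graded layer by cascading thin homogeneous slices."""
    Z = Z_load
    for Zl, k, d in zip(Z_profile, k_profile, thicknesses):
        Z = layer_input_impedance(Z, Zl, k, d)   # transform slice by slice
    return Z
```

For a linear or exponential gradation, Z_profile and k_profile would sample the property gradation along the thickness direction.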
Abstract:
In this work, the stress relaxation behavior of PMMA/PS blends, with and without random copolymer addition, subjected to step shear strain experiments in the linear and nonlinear regimes, was studied. The effects of blend composition (ranging from 10 to 30 wt.% of dispersed phase), viscosity ratio (ranging from 0.1 to 7.5), and random copolymer addition (at concentrations up to 8 wt.% with respect to the dispersed phase) were evaluated and correlated with the evolution of the morphology of the blends. All blends presented three relaxation stages: a first fast relaxation attributed to the relaxation of the pure phases, a second characterized by the presence of a plateau, and a third fast one. The relaxation was shown to be faster for less extended and smaller droplets and to be influenced by coalescence for blends with a dispersed phase concentration larger than 20 wt.%. The relaxation of the blends was strongly influenced by the matrix viscosity. The addition of random copolymer resulted in a slower relaxation of the droplets.
Abstract:
A multiphase deterministic mathematical model was implemented to predict the formation of the grain macrostructure during unidirectional solidification. The model consists of macroscopic equations of energy, mass, and species conservation coupled with dendritic growth models. A grain nucleation model based on a Gaussian distribution of nucleation undercoolings was also adopted. Under some solidification conditions, the cooling curves calculated with the model showed oscillations ("wiggles"), which prevented the correct prediction of the average grain size along the structure. Numerous simulations were carried out at nucleation conditions where the oscillations are absent, enabling an assessment of the effect of the heat transfer coefficient on the average grain size and the columnar-to-equiaxed transition.
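Under the Gaussian nucleation-undercooling assumption, the fraction of nucleation sites activated at a given undercooling is the cumulative normal distribution, which can be written directly with the error function (a sketch of that one ingredient; symbols are illustrative):

```python
import math

def active_nuclei_fraction(dT, dT_mean, dT_sigma):
    """Fraction of nucleation sites activated at undercooling dT,
    for a Gaussian distribution of nucleation undercoolings."""
    return 0.5 * (1.0 + math.erf((dT - dT_mean) / (dT_sigma * math.sqrt(2.0))))
```

Multiplying this fraction by the total site density gives the grain density used in such models; the mean and standard deviation of the undercooling distribution are fitting parameters.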
Abstract:
Here, we study the stable integration of real time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite horizon MPC with guaranteed stability, with additional constraints that enforce the feasibility and convergence of the target calculation layer. The case in which there is polytopic uncertainty in the steady-state model used in the target calculation is also considered. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one member of a discrete set of models. The efficiency of the methods presented here is illustrated by the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
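The intermediate target-calculation layer can be caricatured as: find the steady-state input whose predicted output is closest to the RTO set point, subject to input bounds. Here a least-squares solve with a crude clipping projection stands in for the constrained QP; the steady-state gain matrix G and the bounds are assumptions:

```python
import numpy as np

def reachable_target(G, y_rto, u_min, u_max):
    """Compute a reachable steady-state target closest to the RTO set point."""
    # Unconstrained least-squares input for the steady-state model y = G u
    u_ls, *_ = np.linalg.lstsq(G, y_rto, rcond=None)
    u_star = np.clip(u_ls, u_min, u_max)   # crude feasibility projection
    return u_star, G @ u_star              # target input and reachable output
```

A real implementation would solve the bounded QP exactly (clipping is only optimal for diagonal G) and, per the abstract, would do so for every vertex of the polytopic model set.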
Abstract:
This paper studies a simplified methodology to integrate the real time optimization (RTO) of a continuous system into the model predictive controller in a one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal conditions of the process at steady state are searched for through the use of a rigorous non-linear process model, while the trajectory to be followed is predicted with a linear dynamic model obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach may be comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a non-linear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
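The key ingredient of the one-layer strategy is the economic gradient entering the controller's cost as a linear term, which keeps the problem a QP. A sketch with a finite-difference stand-in for the rigorous model's gradient (weights and names are illustrative):

```python
import numpy as np

def econ_gradient(f_econ, u, eps=1e-6):
    """Central finite-difference gradient of the economic objective at u."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        du = np.zeros_like(u); du[i] = eps
        g[i] = (f_econ(u + du) - f_econ(u - du)) / (2.0 * eps)
    return g

def one_layer_cost(u_seq, u_ref, grad, w_econ=0.1):
    """Quadratic tracking cost plus a linear economic-gradient term (still a QP)."""
    track = np.sum((u_seq - u_ref) ** 2)
    econ = w_econ * np.sum(grad * u_seq)   # linear in the decision variables
    return track + econ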
Abstract:
The main scope of this work is the implementation of an MPC that integrates the control and the economic optimization of the system. The two problems are solved simultaneously through a modification of the control cost function, which includes an additional term related to the economic objective. The optimizing MPC is based on a quadratic program (QP), like the conventional MPC, and can be solved with the available QP solvers. The method was implemented in an industrial distillation system, and the results show that the approach is efficient and can be used in several practical cases. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol, and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were then estimated from the experimental results. The adjusted model was used to analyze the impact of the initial concentration and flow rate of reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the treatment cost of wastewater contaminated with phenol in order to meet environmental standards. (C) 2009 Elsevier B.V. All rights reserved.
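As a much-reduced illustration of such a kinetic model's structure, a two-step consecutive reaction (phenol to catechol to products) can be integrated with explicit Euler. Rate constants, step size, and the two-species reduction are assumptions; the actual model has 53 reactions and 26 compounds:

```python
def simulate_series_kinetics(c_phenol, k1, k2, dt, steps):
    """Integrate phenol -> catechol -> products with explicit Euler."""
    c_cat = 0.0
    history = []
    for _ in range(steps):
        r1 = k1 * c_phenol          # phenol consumption rate
        r2 = k2 * c_cat             # catechol consumption rate
        c_phenol += -r1 * dt
        c_cat += (r1 - r2) * dt     # intermediate: produced then consumed
        history.append((c_phenol, c_cat))
    return history
```

The characteristic rise-then-fall of the intermediate is the qualitative behavior the measured catechol and hydroquinone profiles constrain in the full model.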
Abstract:
The photodegradation of the herbicide clomazone in the presence of S₂O₈²⁻ or of humic substances of different origin was investigated. A value of (9.4 ± 0.4) × 10⁸ M⁻¹ s⁻¹ was measured for the bimolecular rate constant of the reaction of sulfate radicals with clomazone in flash-photolysis experiments. Steady-state photolysis of peroxydisulfate, leading to the formation of sulfate radicals, in the presence of clomazone was shown to be an efficient photodegradation method for the herbicide. This is a relevant result regarding in situ chemical oxidation procedures involving peroxydisulfate as the oxidant. The main reaction products are 2-chlorobenzyl alcohol and 2-chlorobenzaldehyde. The degradation kinetics of clomazone was also studied under steady-state conditions induced by photolysis of Aldrich humic acid or a vermicompost extract (VCE). The results indicate that singlet oxygen is the main species responsible for clomazone degradation. The quantum yield of O₂(a¹Δg) generation (λ = 400 nm) for the VCE in D₂O, Φ_Δ = (1.3 ± 0.1) × 10⁻³, was determined by measuring the O₂(a¹Δg) phosphorescence at 1270 nm. The value of the overall quenching constant of O₂(a¹Δg) by clomazone was found to be (5.7 ± 0.3) × 10⁷ M⁻¹ s⁻¹ in D₂O. The bimolecular rate constant for the reaction of clomazone with singlet oxygen was k_r = (5.4 ± 0.1) × 10⁷ M⁻¹ s⁻¹, which means that the quenching process is mainly reactive.
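Under a steady-state excess of sulfate radicals, a bimolecular rate constant of this kind implies simple pseudo-first-order decay of the substrate (the radical concentration below is an assumed illustrative value, not a measured one):

```python
import math

def pseudo_first_order(c0, k_bi, radical_conc, t):
    """With the radical in steady-state excess, second-order kinetics reduce
    to exponential decay with effective rate k' = k_bi * [radical]."""
    return c0 * math.exp(-k_bi * radical_conc * t)
```

The half-life then follows directly as t_1/2 = ln 2 / (k_bi * [radical]), which is how steady-state photolysis experiments are typically interpreted.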
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy in which the system outputs are controlled within specified zones instead of at fixed set points. One way to implement zone control is through the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement stable zone control is through the use of an infinite horizon cost in which the set point is an additional decision variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is designed to maintain the outputs within their corresponding feasible zones while reaching the desired optimal input target. Simulation of a process from the oil refining industry illustrates the performance of the proposed strategy.
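For a quadratic output penalty, the optimal set point inside the zone is simply the projection of the predicted output onto the zone, which is the core of treating the set point as a decision variable (a scalar sketch; the full formulation also carries slack variables and multi-model constraints):

```python
def zone_target(y_pred, y_lo, y_hi):
    """Project a predicted output onto the control zone [y_lo, y_hi].

    Inside the zone the output error is zero (no control action needed);
    outside, the nearest zone boundary becomes the active set point.
    """
    return min(max(y_pred, y_lo), y_hi)
```

Because the projected set point changes continuously with the prediction, the cost function does not switch between distinct linear controllers, which is what restores the stability guarantee lost by the weight-switching scheme.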