203 results for Science methodology
Abstract:
A simple calorimetric method to estimate both kinetics and heat transfer coefficients using temperature-versus-time data under non-adiabatic conditions is described for the hydrolysis of acetic anhydride. The methodology is applied to three laboratory-scale reactors in a simple experimental setup that can be easily implemented. The quality of the experimental results was verified by comparing them with literature values and with predicted values obtained by energy balance. The comparison shows that the experimental kinetic parameters do not agree exactly with those reported in the literature, but nevertheless yield good agreement between predicted and experimental temperature and conversion data. The differences observed between the activation energy obtained and the values reported in the literature can be ascribed to differences in anhydride-to-water ratios (anhydride concentrations). (C) 2010 Elsevier Ltd. All rights reserved.
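As a rough illustration of the estimation scheme this abstract describes, the sketch below fits an Arrhenius rate law and a heat transfer coefficient to temperature-versus-time data via a non-adiabatic energy balance. It is a minimal sketch under assumed conditions, not the authors' procedure: the first-order rate form, the parameter names (k0, Ea, UA) and every numerical value are illustrative assumptions.

```python
# Minimal sketch: estimate kinetic parameters (k0, Ea) and a heat
# transfer coefficient (UA) from T-versus-t data of a non-adiabatic
# batch reactor. All values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R = 8.314                    # gas constant, J/(mol K)
dH = -5.9e4                  # reaction enthalpy, J/mol (assumed)
V, mCp = 1e-3, 4.2e3         # volume (m^3) and heat capacity (J/K), assumed
T_amb, CA0 = 298.0, 1.0e3    # ambient T (K), initial concentration (mol/m^3)

def model(t, y, k0, Ea, UA):
    CA, T = y
    r = k0 * np.exp(-Ea / (R * T)) * CA            # first-order rate law
    dT = ((-dH) * r * V - UA * (T - T_amb)) / mCp  # energy balance
    return [-r, dT]

def residuals(p, t_data, T_data):
    sol = solve_ivp(model, (t_data[0], t_data[-1]), [CA0, T_amb],
                    t_eval=t_data, args=tuple(p))
    return sol.y[1] - T_data

# With measured series t_data, T_data, the fit would be:
# fit = least_squares(residuals, x0=[1e5, 5e4, 1.0], args=(t_data, T_data))
```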
Abstract:
The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were then estimated from the experimental results. The adjusted model was used to analyze the impact of the initial concentration and flow rate of reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the cost of treating wastewater contaminated with phenol in order to meet environmental standards. (C) 2009 Elsevier B.V. All rights reserved.
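To make the structure of such a stoichiometric/kinetic model concrete, here is a heavily reduced sketch: only the two core Fenton steps are written as a stoichiometric matrix S with mass-action rates, integrated as dC/dt = S^T r(C). The real model has 53 reactions and 26 compounds; the species set and rate constants below are illustrative assumptions.

```python
# Heavily reduced Fenton sketch: dC/dt = S^T r(C) for two reactions.
import numpy as np
from scipy.integrate import solve_ivp

# species order: [Fe2+, H2O2, Fe3+, OH radical, phenol]
S = np.array([
    [-1, -1, +1, +1,  0],   # Fe2+ + H2O2 -> Fe3+ + OH. (+ OH-)
    [ 0,  0,  0, -1, -1],   # OH. + phenol -> oxidation products
])
k = np.array([63.0, 6.6e9])  # illustrative rate constants, M^-1 s^-1

def rhs(t, C):
    fe2, h2o2, _, oh, ph = C
    r = np.array([k[0] * fe2 * h2o2, k[1] * oh * ph])  # mass-action rates
    return S.T @ r

C0 = [1e-4, 1e-2, 0.0, 0.0, 1e-3]   # initial concentrations (M), assumed
sol = solve_ivp(rhs, (0.0, 60.0), C0, method="LSODA")  # stiff system
```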
Abstract:
Thermodynamic relations between the solubility of a protein and the solution pH are presented in this work. The hypotheses behind the development are that the protein chemical potential in the liquid phase can be described by Henry's law and that the solid-liquid equilibrium is established only between neutral molecules. The mathematical development results in an analytical expression for the solubility curve as a function of the ionization equilibrium constants, the pH and the solubility at the isoelectric point. It is shown that the same equation can be obtained either by directly calculating the fraction of neutral protein molecules or by integrating the curve of the protein average charge. The methodology was successfully applied to the description of the solubility of porcine insulin as a function of pH at three different temperatures and of bovine beta-lactoglobulin at four different ionic strengths. (C) 2011 Elsevier B.V. All rights reserved.
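For concreteness, the closed form this abstract refers to looks as follows in the simplest case, a hypothetical ampholyte with one acidic and one basic group (the paper's expression generalizes to arbitrary numbers of ionizable groups):

```latex
% Solubility-pH relation when only neutral molecules equilibrate with
% the solid phase: S(pH) = S_0 / x_neutral(pH). For one acidic group
% (pK_a) and one basic group (pK_b), a worked instance is
\[
  S(\mathrm{pH}) \;=\; \frac{S_0}{x_{\mathrm{neutral}}(\mathrm{pH})}
  \;=\; S_0\left(1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}
                   + 10^{\,\mathrm{p}K_b-\mathrm{pH}}\right),
\]
% where S_0 is the solubility at the isoelectric point.
```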
Abstract:
The paper proposes a methodology and design rules for organisational structures that face an increasing need to reconfigure themselves rapidly in order to cope with unpredictable situations: new markets, new products, a changing production mix, problems in production processes or flows, and so on. This implies changing, and often conflicting, criteria for production goals and for the allocation of work. The methodology was developed on the basis of extensive field action research and consulting. Its core is the design of self-reconfigurable working groups, or groups with variable geometry, which adapt depending on the events to be faced. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but due to state explosion their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this end, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit the PD-based input space exactly. Both the input stimuli and the coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
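The following sketch conveys the idea of pruning the input space before constrained-random generation and deriving the coverage model from the same pruned domains. It is not the authors' tool: the parameter domains, the validity constraint and the bin definition are all invented for illustration.

```python
# Sketch of PD-style input-space pruning: sample stimuli only from
# valid parameter combinations, and build coverage bins from exactly
# that pruned space. Domains and constraints are assumptions.
import itertools
import random

domains = {                       # hypothetical DUT configuration domains
    "burst_len": [1, 2, 4, 8, 16],
    "mode": ["read", "write"],
    "addr_align": [1, 2, 4],
}

def valid(case):
    # illustrative constraint: long bursts require word alignment
    return not (case["burst_len"] >= 8 and case["addr_align"] < 4)

space = [dict(zip(domains, vals))
         for vals in itertools.product(*domains.values())]
space = [c for c in space if valid(c)]   # invalid scenarios removed

# coverage model fits the pruned space exactly: one bin per combination
coverage = {tuple(sorted(c.items())): False for c in space}

while not all(coverage.values()):
    stim = random.choice(space)          # constrained-random draw
    coverage[tuple(sorted(stim.items()))] = True
    # ... drive the DUT with `stim` here ...
```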
Abstract:
Electromagnetic suspension systems are inherently nonlinear and often face hardware limitations when digitally controlled. The main contributions of this paper are: the design of a nonlinear H(infinity) controller, including dynamic weighting functions, applied to a large gap electromagnetic suspension system; and the presentation of a procedure to implement this controller on a fixed-point DSP, through a methodology able to translate a floating-point algorithm into a fixed-point algorithm by minimizing the l(infinity) norm of the conversion error. Experimental results are also presented, in which the performance of the nonlinear controller is evaluated specifically in the initial suspension phase. (C) 2009 Elsevier Ltd. All rights reserved.
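As a toy version of the floating- to fixed-point step, the sketch below quantizes a set of controller coefficients and searches for the smallest fractional word length whose worst-case (l-infinity) conversion error stays within a bound. The coefficients and the bound are illustrative assumptions, not values from the paper.

```python
# Sketch: choose a fixed-point format by bounding the l-infinity norm
# of the float-to-fixed conversion error. Values are assumptions.
import numpy as np

def to_fixed(x, frac_bits):
    """Round to signed fixed point with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

coeffs = np.array([0.731, -1.2086, 0.52947])  # controller gains (assumed)
bound = 1e-3                                  # allowed l-inf error (assumed)

for frac_bits in range(4, 32):
    err = np.max(np.abs(coeffs - to_fixed(coeffs, frac_bits)))  # l-inf norm
    if err <= bound:
        print(f"{frac_bits} fractional bits suffice (error {err:.2e})")
        break
```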
Abstract:
This work proposes the use of evolutionary computation to jointly solve the maximum-likelihood multiuser channel estimation (MuChE) and detection problems in direct sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA channel estimation and multi-user detection (MuD) show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was (jointly) analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
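To show the flavor of a GA-based near-ML detector, the sketch below evolves candidate bit vectors toward minimizing the maximum-likelihood metric ||y - Sb||^2 on a synthetic single-path channel. It is not GAMuD itself: the system dimensions, GA operators and rates are assumptions, and the real scheme additionally estimates the multipath channel.

```python
# Toy GA multiuser detector: minimize ||y - S b||^2 over bit vectors b.
import numpy as np

rng = np.random.default_rng(0)
K, N = 8, 31                                      # users, spreading gain
S = rng.choice([-1.0, 1.0], (N, K)) / np.sqrt(N)  # spreading codes
b_true = rng.choice([-1, 1], K)
y = S @ b_true + 0.1 * rng.standard_normal(N)     # received signal

def fitness(b):
    return -np.sum((y - S @ b) ** 2)              # negative ML cost

pop = rng.choice([-1, 1], (40, K))                # initial population
for _ in range(100):
    scores = np.array([fitness(b) for b in pop])
    parents = pop[np.argsort(scores)[-20:]]       # elitist selection
    pa = parents[rng.integers(0, 20, 20)]         # random parent pairs
    pb = parents[rng.integers(0, 20, 20)]
    kids = np.where(rng.random((20, K)) < 0.5, pa, pb)        # crossover
    kids = np.where(rng.random((20, K)) < 0.02, -kids, kids)  # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(b) for b in pop])]
print("bit errors:", int(np.sum(best != b_true)))
```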
Abstract:
In this study, we investigate the possibility of mode localization occurring in a non-periodic Pfluger's column model of a rocket with an intermediate concentrated mass at its middle point. We discuss the effects of varying the intermediate mass magnitude and its position, and the resulting energy confinement, for two cases. Free vibration analysis and the severity of mode localization are appraised, without decoupling the system, by considering as a solution basis the fundamental free response, or dynamical solution. This allows for a reduction of the dimension of the algebraic modal equation that arises from satisfying the boundary and continuity conditions. Using the same methodology, we also consider the case of a cantilevered Pfluger's column with rotational stiffness at the middle support instead of an intermediate concentrated mass. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
A rigorous derivation of the non-linear equations governing the dynamics of an axially loaded beam is given, with a clear focus on developing robust low-dimensional models. Two important loading scenarios were considered, in which the structure is subjected to a uniformly distributed axial load and to a thrust force. These loads mimic the main forces acting on an offshore riser, for which an analytical methodology has been developed and applied. In particular, non-linear normal modes (NNMs) and non-linear multi-modes (NMMs) have been constructed by using the method of multiple scales, in order to analyse the transversal vibration responses by monitoring the modal responses and mode interactions. The developed analytical models have been cross-checked against results from FEM simulation. The FEM model, having 26 elements and 77 degrees of freedom, gave similar results to the low-dimensional (one degree-of-freedom) non-linear oscillator, which was developed by constructing a so-called invariant manifold. The comparisons of the dynamical responses were made in terms of time histories, phase portraits and mode shapes. (C) 2008 Elsevier Ltd. All rights reserved.
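For orientation, the linear backbone of the kind of model this abstract derives can be written as below; the paper's contribution lies in the non-linear terms and their treatment by the method of multiple scales, which this sketch does not reproduce.

```latex
% Transverse dynamics of a beam under an end thrust P_0 plus a
% uniformly distributed axial load p (axial force varying along the
% span); the non-linear terms are only indicated schematically.
\[
  m\,\frac{\partial^2 w}{\partial t^2}
  + EI\,\frac{\partial^4 w}{\partial x^4}
  + \frac{\partial}{\partial x}\!\left[\bigl(P_0 + p\,(L-x)\bigr)
        \frac{\partial w}{\partial x}\right]
  = \text{(non-linear terms)}.
\]
```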
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. (C) 2010 Elsevier B.V. All rights reserved.
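The large-sample step mentioned at the end of the abstract is the standard one: the asymptotic covariance matrix of the maximum likelihood estimator is obtained by inverting the Fisher information matrix,

```latex
\[
  I(\theta) \;=\; -\,E\!\left[\frac{\partial^{2}\log L(\theta)}
        {\partial\theta\,\partial\theta^{\mathsf{T}}}\right],
  \qquad
  \widehat{\operatorname{cov}}\bigl(\hat{\theta}\bigr)
  \;\approx\; I\bigl(\hat{\theta}\bigr)^{-1}.
\]
```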
Abstract:
Vessel dynamic positioning (DP) systems are based on conventional PID-type controllers and an extended Kalman filter. However, they present a difficult tuning procedure, and the closed-loop performance varies with environmental or loading conditions, since the dynamics of the vessel are eminently nonlinear. Gain scheduling is normally used to address the nonlinearity of the system. To overcome these problems, a sliding mode controller was evaluated. This controller is robust to variations in environmental and loading conditions, maintains performance and stability over a large range of conditions, and presents an easy tuning methodology. The performance of the controller was evaluated numerically and experimentally in order to assess its effectiveness. The results are compared with those obtained from a conventional PID controller. (c) 2010 Elsevier Ltd. All rights reserved.
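As a reminder of the control structure being evaluated, a textbook first-order sliding mode law has the form below; the paper's sliding surface and gains are specific to the vessel model, so this is only a generic sketch.

```latex
% Generic sliding mode law for tracking error e = x - x_d: a sliding
% surface s and a switching control with boundary layer phi
\[
  s = \dot{e} + \lambda e, \qquad
  u = u_{\mathrm{eq}} - K\,\operatorname{sat}\!\left(\frac{s}{\phi}\right),
\]
% where u_eq is the model-based equivalent control, K dominates the
% model uncertainty, and the saturation attenuates chattering.
```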
Abstract:
This work deals with a procedure for model re-identification of a process in closed loop with an already existing commercial MPC. The controller considered here has a two-layer structure, where the upper layer performs a target calculation based on a simplified steady-state optimization of the process. A methodology is proposed in which a test signal is introduced through a tuning parameter of the target calculation layer. When the outputs are controlled by zones instead of at fixed set points, the approach allows continuous operation of the process without excessive disruption of the operating objectives, as process constraints and product specifications remain satisfied during the identification test. The application of the method is illustrated through the simulation of two processes from the oil refining industry. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
This study presents a methodology for the characterization of construction and demolition (C&D) waste recycled aggregates based on a combination of analytical techniques: X-ray fluorescence (XRF), soluble ions, semi-quantitative X-ray diffraction (XRD), thermogravimetric analysis (TGA-DTG) and hydrochloric acid (HCl) selective dissolution. These combined techniques allow for the estimation of the amount of cement paste and its most important hydrated and carbonated phases, as well as the amount of clay and micas. Details of the methodology are presented here, and the results of three representative C&D samples taken from the Sao Paulo region in Brazil are discussed. The chemical composition of mixed C&D aggregate samples was influenced mostly by particle size, rather than by the visual classification of C&D into red or grey or by geographical origin. The amount of measured soluble salts in the C&D aggregates (0.15-25.4 mm) is lower than the usual limits for mortar and concrete production. The content of porous cement paste in the C&D aggregates is around 19.3% (w/w), significantly lower than the 43% detected for the C&D powders (< 0.15 mm). The clay content of the powders was also high, potentially resulting from soil intermixed with the C&D waste, as well as from poorly burnt red ceramic. Since only about 50% of the measured CaO is combined with CO(2), the powders have potential use as raw materials for the cement industry. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
In this study, regression models are evaluated for grouped survival data in the case where the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. Thus, the data modeling is performed by considering discrete lifetime regression models. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to these measures, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
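The abstract does not list its four link functions, but for discrete (grouped) hazards p_i the standard quartet is logit, complementary log-log, probit and log-log, so the following is the usual assumption for this model class:

```latex
% Four common links for the discrete hazard p_i, with linear predictor
% eta_i = gamma_i + x' beta (the paper's specific choice may differ):
\[
  \log\frac{p_i}{1-p_i}=\eta_i,\qquad
  \log\{-\log(1-p_i)\}=\eta_i,\qquad
  \Phi^{-1}(p_i)=\eta_i,\qquad
  -\log\{-\log(p_i)\}=\eta_i .
\]
```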
Abstract:
Probable consequences of the mitigation of the citrus canker eradication methodology in Sao Paulo state: recently, the Sao Paulo state government mitigated the citrus canker eradication methodology it had adopted since 1999. In April 2009, at least 99.8% of commercial sweet orange orchards were free of citrus canker in Sao Paulo state. The mitigation of the eradication methodology has consequently reduced this high level of safety and the competitiveness of the citrus production sector in Sao Paulo state, Brazil. We therefore suggest the re-adoption of the same citrus canker eradication methodology applied in Sao Paulo from 1999 to 2009, or the adoption of a new methodology effective for citrus canker suppression, because in new sample surveys citrus canker was detected in >0.36% of the orchards. This incidence threshold was calculated by using the Duncan test (P <= 0.05) to compare the yearly sample surveys conducted in Sao Paulo state to estimate citrus canker incidence between 1999 and 2009. The calculated minimum significant difference among sample surveys was 0.28%, and the lowest citrus canker incidence in Sao Paulo state was 0.08%, occurring in 2001; the 0.36% threshold is the sum of these two values. Thus, as an alternative, we suggest the adoption of a new eradication methodology for citrus canker suppression whenever a new sample survey detects >0.36% of affected orchards in Sao Paulo state, Brazil.