104 results for Step Length Estimation
Abstract:
BACKGROUND: Fatty acid sugar esters are used as non-ionic surfactants in cosmetics, foodstuffs and pharmaceuticals. In particular, monoesters of xylitol have attracted industrial interest due to their outstanding biological activities. In this work, xylitol monoesters were obtained by chemoenzymatic synthesis, in which xylitol was first made soluble in organic solvent by a chemical protection reaction, followed by an enzymatic esterification reaction using different acyl donors. A commercial immobilized Candida antarctica lipase was used as the catalyst, and reactions with pure xylitol were carried out to generate data for comparison. RESULTS: t-BuOH was found to be the most suitable solvent for carrying out esterification reactions with both pure and protected xylitol. The highest yields were obtained for reactions carried out with pure xylitol, but in this case by-products such as di- and tri-ester isomers were formed, which required a multi-step purification process. For the systems with protected xylitol, conversions of 86%, 58% and 24% were achieved using oleic, lauric and butyric acids, respectively. The structures of the monoesters were confirmed by (13)C- and (1)H-NMR and microanalysis. CONCLUSION: The chemoenzymatic synthesis of xylitol monoesters avoided laborious downstream processing when compared with reactions performed with pure xylitol. Monoester production from protected xylitol was shown to be a practical, economical, and clean route for this process, allowing simple separation, because no products other than xylitol monoesters and residual xylitol are formed. (C) 2009 Society of Chemical Industry
Abstract:
Appropriate pain assessment is very important for managing chronic pain. Given the cultural differences in verbally expressing pain and in psychosocial problems, specific tools are needed. The goal of this study was to identify and validate Brazilian pain descriptors. A purposive sample of health professionals and chronic pain patients was recruited. Four studies were conducted using direct and indirect psychophysical methods: category estimation, magnitude estimation, and magnitude estimation with line-length. Results showed which descriptors best describe chronic pain in Brazilian culture, and demonstrated that there is no significant correlation between patients and health professionals and that the psychophysical scale of judgment of pain descriptors is valid, stable, and consistent. The results reinforced that translating word descriptors and research tools into another language may be inappropriate, owing to differences in perception and communication and the inadequacy of exact translations to reflect the intended meaning. Given the complexity of chronic pain, the personal suffering involved, and the need for accurate assessment of chronic pain using descriptors stemming from Brazilian culture and language, it is essential to investigate the most adequate words to describe chronic pain. Although they require more refinement, the Brazilian chronic pain descriptors can be used further to develop a multidimensional pain assessment tool that is culturally sensitive. (C) 2009 by the American Society for Pain Management Nursing
Abstract:
The crossflow filtration process differs from conventional filtration in that the circulating flow runs tangentially to the filtration surface. The conventional mathematical models used to represent the process have limitations in identifying and generalizing the system's behaviour. In this paper, a system based on artificial neural networks is developed to overcome the problems usually found in the conventional mathematical models. More specifically, the developed system uses an artificial neural network that simulates the behaviour of the crossflow filtration process in a robust way. Imprecisions and uncertainties associated with the measurements made on the system are automatically incorporated in the neural approach. Simulation results are presented to demonstrate the validity of the proposed approach. (C) 2007 Elsevier B.V. All rights reserved.
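As a rough illustration of the neural approach described above, the sketch below fits a small one-hidden-layer network (plain NumPy, gradient descent) to a hypothetical flux-decay curve. The data, network size, and learning rate are assumptions for illustration, not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for crossflow filtration data: permeate flux
# decaying with time due to fouling (made-up curve, not the paper's data).
t = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
flux = 0.2 + 0.8 * np.exp(-3.0 * t)                  # "true" process behaviour
flux_noisy = flux + rng.normal(0.0, 0.02, t.shape)   # noisy measurements

# One-hidden-layer network trained by plain full-batch gradient descent.
n_hidden = 16
W1 = rng.normal(0.0, 1.0, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(t @ W1 + b1)                         # forward pass
    y = h @ W2 + b2
    err = y - flux_noisy                             # mean-squared-error gradient
    gW2 = h.T @ err / len(t); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)                   # backprop through tanh
    gW1 = t.T @ gh / len(t); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(t @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - flux) ** 2)))
print(f"RMSE vs true curve: {rmse:.4f}")
```

Because the network is trained on noisy samples, the measurement uncertainty is absorbed by the fit, which is the spirit of the approach the abstract describes.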
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are strongly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose an index, the innovation index (II), which quantifies the new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
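For context, the classical normalized-residual test that the abstract contrasts with can be sketched as follows. The Jacobian, covariance, and measurement values are made-up numbers for a tiny linear (DC) problem, and the innovation-index composition itself is not reproduced here; the example only shows the masking effect the II is designed to recover:

```python
import numpy as np

# Tiny linear state estimation problem z = H x + e (made-up numbers).
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0]])             # measurement Jacobian
R = np.diag([0.01, 0.01, 0.01, 0.01])   # measurement error covariance
z = np.array([1.0, 0.5, 0.5, 2.1])      # 4th measurement carries a 6-sigma gross error

# Weighted least-squares state estimate.
W = np.linalg.inv(R)
G = H.T @ W @ H                         # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

# Residuals and their covariance: Omega = R - H G^{-1} H^T.
r = z - H @ x_hat
Omega = R - H @ np.linalg.solve(G, H.T)

# Classical normalized residuals; values above ~3 flag a suspect measurement.
# Note the 6-sigma injected error shows up as only ~3.5: part of it is masked,
# which is the effect the innovation index addresses.
r_N = np.abs(r) / np.sqrt(np.diag(Omega))
print(r_N)
```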
Abstract:
This work extends a previously presented refined sandwich beam finite element (FE) model to vibration analysis, including dynamic piezoelectric actuation and sensing. The mechanical model is a refinement of the classical sandwich theory (CST), in which the core is modelled with a third-order shear deformation theory (TSDT). The FE model is developed considering, through the beam length, electrically: constant voltage for the piezoelectric layers and a quadratic third-order variable of the electric potential in the core; and mechanically: linear axial displacement, quadratic bending rotation of the core and cubic transverse displacement of the sandwich beam. Despite the refinement of the mechanical and electric behaviours of the piezoelectric core, the model leads to the same number of degrees of freedom as the previous CST one, due to a two-step static condensation of the internal dof (bending rotation and the core electric potential third-order variable). The results obtained with the proposed FE model are compared to available numerical, analytical and experimental ones. The results confirm that the TSDT and the induced cubic electric potential yield an extra stiffness to the sandwich beam. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
An accurate estimate of machining time is very important for predicting delivery time and manufacturing costs, and also for production process planning. Most commercial CAM software systems estimate the machining time in milling operations simply by dividing the entire tool path length by the programmed feed rate. This time estimate differs drastically from the real process time because the feed rate is not always constant, owing to machine and computer numerical control (CNC) limitations. This study presents a practical mechanistic method for milling time estimation when machining free-form geometries. The method considers a variable called machine response time (MRT), which characterizes the real CNC machine's capacity to move at high feed rates in free-form geometries. MRT is a global performance feature that can be obtained for any type of CNC machine configuration by carrying out a simple test. To validate the methodology, a workpiece was used to generate NC programs for five different types of CNC machines. A practical industrial case study was also carried out to validate the method. The results indicated that MRT, and consequently the real machining time, depends on the CNC machine's potential; furthermore, the greater the MRT, the larger the difference between predicted and real milling times. The proposed method achieved an error range from 0.3% to 12% of the real machining time, whereas the CAM estimates had errors from 211% to 1244%. The MRT-based process is also suggested as an instrument to help in machine tool benchmarking.
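The gap between the naive CAM estimate and a response-time-aware estimate can be sketched with made-up numbers. The fixed per-segment penalty below is a deliberate simplification of the MRT idea (the machine loses time at each direction change before regaining the programmed feed rate), not the paper's actual model:

```python
# Naive CAM estimate: total path length divided by programmed feed rate.
# Segment lengths and the MRT value are hypothetical illustration numbers.
segments_mm = [2.0, 1.5, 0.8, 3.2, 1.1]   # short free-form tool-path segments
feed_mm_per_min = 3000.0

naive_min = sum(segments_mm) / feed_mm_per_min

# Crude MRT-style correction: each segment transition costs the machine a
# fixed response time before the programmed feed rate is reached again.
mrt_min = 0.002                            # hypothetical response time per segment
corrected_min = naive_min + mrt_min * len(segments_mm)

print(f"naive: {naive_min * 60:.3f} s, corrected: {corrected_min * 60:.3f} s")
```

Even this toy correction shows why the naive estimate is far too optimistic on free-form paths made of many short segments.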
Abstract:
This paper deals with the analysis of multiple random crack propagation in two-dimensional domains using the boundary element method (BEM). BEM is known to be a robust and accurate numerical technique for analysing this type of problem. The formulation adopted in this work is based on the dual BEM, in which singular and hyper-singular integral equations are used. We propose an iterative scheme to predict the crack growth path and the crack length increment at each time step. The proposed scheme enables us to simulate localisation and coalescence phenomena, which is the main contribution of this paper. For the fracture mechanics analysis, the displacement correlation technique is applied to evaluate the stress intensity factors. The propagation angle and the equivalent stress intensity factor are calculated using the theory of maximum circumferential stress. Examples of simple and multi-fractured domains, loaded up to rupture, are considered to illustrate the applicability of the proposed scheme. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents an experimental analysis of the confinement effects in steel-concrete composite columns with regard to two parameters: concrete compressive strength and column slenderness. Sixteen concrete-filled steel tubular columns with circular cross section were tested under axial loading. The tested columns were filled with concrete with compressive strengths of 30, 60, 80, and 100 MPa, and had length/diameter ratios of 3, 5, 7, and 10. The experimental values of the columns' ultimate load were compared to the predictions of four code provisions: the Brazilian Code NBR 8800:2008, Eurocode 4 (EN 1994-1-1:2004), ANSI/AISC 360:2005, and CAN/CSA S16-01:2001. According to the results, the load capacity of the composite columns increased with increasing concrete strength and decreased with increasing length/diameter ratio. In general, the code provisions were highly accurate in predicting column capacity. Among them, the Brazilian Code was the most conservative, while Eurocode 4 gave the values closest to the experimental results. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Three-dimensional discretizations used in numerical analyses of tunnel construction normally include excavation step lengths much shorter than tunnel cross-section dimensions. Simulations have usually worked around this problem by using excavation steps that are much larger than the actual physical steps used in a real tunnel excavation. In contrast, the analyses performed in this study were based on finely discretized meshes capable of reproducing the excavation lengths actually used in tunnels, and the results obtained for internal forces are up to 100% greater than those found in other analyses available in the literature. Whereas most reports conclude that internal forces depend on support delay length alone, this study shows that geometric path dependency (reflected by excavation round length) is very strong, even considering linear elasticity. Moreover, many other solutions found in the literature have also neglected the importance of the relative stiffness between the ground mass and support structure, probably owing to the relatively coarse meshes used in these studies. The analyses presented here show that relative stiffness may account for internal force discrepancies in the order of 60%. A dimensionless expression that takes all these parameters into account is presented as a good approximation for the load transfer mechanism at the tunnel face.
Abstract:
The aim of this study is to quantify the mass transfer velocity using turbulence parameters from simultaneous measurements of oxygen concentration fields and velocity fields. The surface divergence model was considered in more detail, using data obtained for the lower range of beta (surface divergence). It is shown that the existing models that use the divergence concept furnish good predictions of the transfer velocity even for low values of beta, within the range of this study. Additionally, traditional conceptual models, such as the film model, the penetration-renewal model, and the large eddy model, were tested using the simultaneous concentration and velocity field data. It is shown that the film and surface divergence models predicted the mass transfer velocity over the whole range of equipment Reynolds numbers used here. The velocity measurements showed viscosity effects close to the surface, which indicates that the surface was contaminated with some surfactant. The results suggest that this contamination was slight with respect to the mass transfer predictions. (C) 2009 American Institute of Chemical Engineers AIChE J, 56: 2005-2017, 2010
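As a hedged sketch, the surface divergence model referred to above is commonly written as k_L = c * sqrt(D * beta), with beta the rms surface divergence. The constant c and the input values below are illustrative assumptions, not measurements or fitted values from this study:

```python
import math

# Surface divergence model sketch: k_L ~ c * sqrt(D * beta).
# All numbers below are illustrative assumptions.
D = 2.0e-9        # oxygen diffusivity in water, m^2/s (typical order)
beta = 0.5        # rms surface divergence, 1/s (assumed)
c = 0.5           # order-one model constant (fitted in practice)

k_L = c * math.sqrt(D * beta)
print(f"k_L ~ {k_L:.2e} m/s")
```

The square-root dependence on diffusivity is what distinguishes this family of models from the film model, in which k_L scales linearly with D.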
Abstract:
Ammonium nitrogen removal from a synthetic wastewater by nitrification and denitrification processes was performed in a sequencing batch biofilm reactor containing biomass immobilized on polyurethane foam, with circulation of the liquid phase. The effect of four external carbon sources (ethanol, acetate, carbon synthetic medium and methanol) acting as electron donors in the denitrification process was analyzed. The experiments were conducted with intermittent aeration and operated at 30+/-1 degrees C in 8-h cycles. The synthetic wastewater (100 mgCOD/L and 50 mgNH(4)(+)-N/L) was added batch-wise, while the external carbon sources were added fed-batch-wise during the periods when aeration was suspended. The ammonium nitrogen removal efficiencies obtained were 95.7, 94.3 and 97.5% for ethanol, acetate and carbon synthetic medium, respectively. As for the nitrite, nitrate and ammonium nitrogen effluent concentrations, the results obtained were, respectively: 0.1, 5.7 and 1.4 mg/L for ethanol; 0.2, 4.1 and 1.8 mg/L for acetate; and 0.2, 6.7 and 0.8 mg/L for carbon synthetic medium. On the other hand, using methanol, even at a low concentration (50% of the stoichiometric value calculated for complete denitrification), resulted in increasing accumulation of nitrate and ammonium nitrogen in the effluent over time.
Abstract:
Fault resistance is a critical component of electric power system operation due to its stochastic nature. If not considered, this parameter may interfere with fault analysis studies. This paper presents an iterative fault analysis algorithm for unbalanced three-phase distribution systems that considers a fault resistance estimate. The proposed algorithm is composed of two sub-routines, namely fault resistance and bus impedance. The fault resistance sub-routine, based on local fault records, estimates the fault resistance. The bus impedance sub-routine, based on the previously estimated fault resistance, estimates the system voltages and currents. Numerical simulations on the IEEE 37-bus distribution system demonstrate the algorithm's robustness and potential for offline applications, providing additional fault information to Distribution Operation Centers and enhancing the system restoration process. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper considers the optimal linear estimates recursion problem for discrete-time linear systems in its most general formulation. The system is allowed to be in descriptor form, rectangular, time-variant, and with correlated dynamical and measurement noises. We propose a new expression for the filter recursive equations which presents a simple and symmetric structure. Convergence of the associated Riccati recursion and stability properties of the steady-state filter are provided. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The problem-solving algorithm proposed herein makes use of data from various power quality meters, which can be synchronized either by high-technology GPS devices or by using information from a fundamental frequency load flow, which makes the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance with a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network in order to compare the three-phase and single-phase approaches, as well as the robustness of the proposed method. (C) 2010 Elsevier B.V. All rights reserved.
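A minimal (mu + lambda) evolution strategy of the general kind the abstract describes can be sketched as follows. The meter sensitivity matrix, the "true" injections, and all ES parameters are made-up illustration values, not the paper's network or tuning:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: estimate two harmonic injection magnitudes x so that
# the meter readings z = M x are matched (M and x_true are made-up numbers).
M = np.array([[0.8, 0.3],
              [0.2, 0.9],
              [0.5, 0.5]])
x_true = np.array([1.2, 0.4])
z = M @ x_true

def fitness(x):
    return float(np.sum((M @ x - z) ** 2))   # squared mismatch at the meters

# (mu + lambda) evolution strategy with a slowly decaying mutation step.
mu, lam, sigma = 5, 20, 0.1
pop = rng.normal(0.0, 1.0, (mu, 2))          # initial parent population
for _ in range(200):
    parents = pop[rng.integers(0, mu, lam)]  # pick lam parents with replacement
    children = parents + rng.normal(0.0, sigma, (lam, 2))  # Gaussian mutation
    everyone = np.vstack([pop, children])    # "+" strategy: parents survive too
    pop = everyone[np.argsort([fitness(x) for x in everyone])[:mu]]
    sigma *= 0.99                            # shrink the search step over time

best = pop[0]
print(best, fitness(best))
```

The plus-selection keeps the best individuals across generations, so the mismatch at the meters decreases monotonically; in the papers' setting the fitness would instead compare estimated and measured harmonic spectra.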
Abstract:
This paper presents a new methodology to estimate harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The main advantage of using such a technique lies in its modeling flexibility as well as its potential to solve fairly complex problems. The problem-solving algorithm proposed herein makes use of data from various power-quality (PQ) meters, which can be synchronized either by high-technology global positioning system devices or by using information from a fundamental frequency load flow. This second approach makes the overall PQ monitoring system much less costly. The algorithm is applied to an IEEE test network, for which a sensitivity analysis is performed to determine how the parameters of the ES can be selected so that the algorithm performs effectively. Case studies show fairly promising results and demonstrate the robustness of the proposed method.