28 results for "Fourth order method"
Abstract:
By coupling the Boundary Element Method (BEM) and the Finite Element Method (FEM), an algorithm that combines the advantages of both numerical processes is developed. The main aim of the work is the time-domain analysis of general three-dimensional wave propagation problems in elastic media. In addition, mathematical and numerical aspects of the related BE-, FE- and BE/FE-formulations are discussed. The coupling algorithm allows the investigation of elastodynamic problems with a BE- and a FE-subdomain. In order to assess the performance of the coupling algorithm, two problems are solved and their results are compared with other numerical solutions.
Abstract:
The determination of the intersection curve between Bézier Surfaces may be seen as the composition of two separate problems: determining initial points and tracing the intersection curve from these points. The Bézier Surface is represented by a parametric function (a polynomial in two variables) that maps the two-dimensional parametric space into three-dimensional space. In this article, an algorithm is proposed to determine the initial points of the intersection curve of Bézier Surfaces, based on the solution of polynomial systems with the Projected Polyhedral Method, followed by a method for tracing the intersection curves (Marching Method with differential equations). In order to allow the use of the Projected Polyhedral Method, the equations of the system must be represented in terms of the Bernstein basis, and to this end a robust and reliable algorithm is proposed to exactly convert a multivariable polynomial from the power basis to the Bernstein basis.
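The power-to-Bernstein conversion mentioned in this abstract can be written in closed form. The following is a minimal sketch for the univariate case on [0, 1] (the paper handles multivariable polynomials, presumably with exact arithmetic), using the standard identity b_j = sum over i <= j of [C(j,i)/C(n,i)] a_i; all names are illustrative, not the authors' code.

```python
from math import comb

def power_to_bernstein(a):
    """Convert coefficients a[i] of p(t) = sum a[i] * t**i (power basis)
    to Bernstein coefficients b[j] on [0, 1], via
    b[j] = sum_{i<=j} C(j, i) / C(n, i) * a[i]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

def bernstein_eval(b, t):
    """Evaluate sum_j b[j] * C(n, j) * t**j * (1 - t)**(n - j)."""
    n = len(b) - 1
    return sum(b[j] * comb(n, j) * t**j * (1 - t)**(n - j) for j in range(n + 1))

# Sanity check with a hypothetical cubic: p(t) = 1 - 3t + 2t^3
a = [1.0, -3.0, 0.0, 2.0]
b = power_to_bernstein(a)
t = 0.37
assert abs(bernstein_eval(b, t) - (a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3)) < 1e-12
```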
Abstract:
Shadow Moiré fringe patterns are level lines of equal depth generated by interference between a master grid and its shadow projected on the surface. In a simplistic approach, the minimum error is of the order of the master grid pitch, that is, always larger than 0.1 mm, resulting in an experimental technique of low precision. The use of phase shifting increases the accuracy of the Shadow Moiré technique. The current work uses the phase shifting method to determine the three-dimensional shape of surfaces using isothamic fringe patterns and digital image processing. The study presents the method and applies it to images obtained by simulation for error evaluation, as well as to a buckled plate, obtaining excellent results. The method proves particularly useful for reducing errors in the interpretation of the Moiré fringes that can adversely affect the calculation of displacements in pieces containing many concave and convex regions in relatively small areas.
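The abstract does not specify which phase-shift scheme is used; as a hedged illustration only, the standard four-step phase-shifting formula recovers the wrapped phase from four fringe images shifted by pi/2, which can then be unwrapped and scaled to depth. The synthetic data and the depth-per-fringe factor below are assumptions.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Standard four-step phase-shifting formula: assuming images
    I_k = A + B*cos(phi + k*pi/2), the wrapped phase is
    phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic fringe patterns over a tilted surface (illustration only)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
true_phase = 6 * np.pi * x                               # phase encoding the depth field
frames = [100 + 50 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

wrapped = four_step_phase(*frames)
unwrapped = np.unwrap(wrapped, axis=1)                   # simple row-wise unwrapping
depth = unwrapped * 0.5 / (2 * np.pi)                    # hypothetical 0.5 mm of depth per fringe
```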
Abstract:
The concept of Process Management has been used by managers and consultants who seek to improve both operational and managerial industrial processes. Its strength lies in focusing on the external client and on optimizing internal processes in order to fulfill the client's needs. As the needs of internal clients are addressed, a set of improvements takes place. The Taguchi method, because it calls for knowledge sharing between design engineers and the people engaged in the process, is a candidate for process management implementation. The objective of this paper is to propose such an application, aiming at improvements in the reliability of the results provided by Taguchi's robust design.
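The abstract does not state which robust-design statistic is analyzed; one common Taguchi metric is the "larger-the-better" signal-to-noise ratio, sketched below with hypothetical replicated data purely for illustration.

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi signal-to-noise ratio for 'larger-the-better' responses:
    S/N = -10 * log10( mean(1 / y_i^2) ).  Higher values indicate a
    response that is both large and insensitive to noise factors."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicated measurements for two factor-level combinations
print(sn_larger_the_better([82, 85, 79]))   # ~38.3 dB
print(sn_larger_the_better([90, 91, 89]))   # ~39.1 dB (more desirable setting)
```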
Abstract:
The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the use of the built-in Mathematica function NDSolve for handling the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of local errors in this reliable ODE solver and of the proposed eigenfunction expansion truncation order. For covalidation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up), based on the method of lines. Various numerical experiments are performed and the relative merits of each approach are critically pointed out.
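The covalidation route via the method of lines can be illustrated outside Mathematica. Below is a minimal Python sketch for a hypothetical nonlinear diffusion problem with a potential-dependent diffusivity (the GITT transformation itself is not reproduced; the diffusivity law, boundary data and tolerances are assumptions).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nonlinear diffusion u_t = d/dx( k(u) du/dx ) on 0 < x < 1,
# with u(0,t) = 1, u(1,t) = 0, u(x,0) = 0  (hypothetical test problem).
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
k = lambda u: 1.0 + 0.5 * u                              # assumed potential-dependent diffusivity

def rhs(t, u):
    """Method of lines: spatial derivatives by finite differences, time left to the ODE solver."""
    u = u.copy()
    u[0], u[-1] = 1.0, 0.0                               # Dirichlet boundary values
    flux = k(0.5 * (u[1:] + u[:-1])) * np.diff(u) / dx   # fluxes at the cell faces
    dudt = np.zeros_like(u)
    dudt[1:-1] = np.diff(flux) / dx
    return dudt

u0 = np.zeros(N); u0[0] = 1.0
sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[N // 2, -1])                                 # centreline value at the final time
```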
Abstract:
The mathematical model for two-dimensional unsteady sonic flow, based on the classical diffusion equation with an imaginary coefficient, is presented and discussed. The main purpose is to develop a rigorous formulation in order to bring to light the correspondence between the sonic, supersonic and subsonic panel method theories. Source and doublet integrals are obtained, and Laplace transformation demonstrates that the source integral is, in fact, the solution of the doublet integral equation. It is shown that the doublet-only formulation reduces to a Volterra integral equation of the first kind, and a numerical method is proposed to solve it. To the authors' knowledge, this is the first reported solution to the unsteady sonic thin airfoil problem through the use of doublet singularities. Comparisons with the source-only formulation are shown for the problem of a flat plate in combined harmonic heaving and pitching motion.
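The abstract does not describe the discretization adopted; a common way to solve a first-kind Volterra equation numerically is a quadrature rule combined with forward substitution. The sketch below uses a made-up kernel with a known solution, not the aerodynamic doublet kernel of the paper.

```python
import numpy as np

def volterra_first_kind(K, g, t):
    """Solve  integral_0^t K(t, s) f(s) ds = g(t)  on the grid t
    by the right-endpoint rectangle rule and forward substitution
    (requires K(t, t) != 0)."""
    h = t[1] - t[0]
    f = np.zeros_like(t)
    for n in range(1, len(t)):
        acc = sum(K(t[n], t[j]) * f[j] for j in range(1, n))
        f[n] = (g(t[n]) / h - acc) / K(t[n], t[n])
    return f

# Hypothetical test kernel with known solution f(s) = s:
# K(t, s) = exp(t - s)  =>  g(t) = exp(t) - 1 - t
K = lambda t, s: np.exp(t - s)
g = lambda t: np.exp(t) - 1.0 - t
t = np.linspace(0.0, 2.0, 401)
f = volterra_first_kind(K, g, t)
print(abs(f[-1] - t[-1]))   # discretization error at t = 2 (first-order accurate)
```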
Abstract:
We apply the Bogoliubov Averaging Method to the study of the vibrations of an elastic foundation forced by a non-ideal energy source. The considered model consists of a portal plane frame with quadratic nonlinearities and internal resonance 1:2, supporting a direct-current motor with limited power. The non-ideal excitation is in primary resonance, of the order of one-half, with the second mode frequency. The results of the averaging method, plotted as time-evolution curves and phase diagrams, are compared with those obtained by numerical integration of the original differential equations. The presence of the saturation phenomenon is verified by analytical procedures.
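As a hedged illustration of this comparison (the portal-frame model with a non-ideal motor is not reproduced here), the classic averaging treatment of the van der Pol oscillator can be checked against direct numerical integration of the original equation: the averaged amplitude obeys a' = (eps/2) a (1 - a^2/4) and both solutions approach the limit-cycle amplitude 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05   # small parameter (assumed value for illustration)

# Original equation: x'' - eps*(1 - x^2)*x' + x = 0  (weakly nonlinear van der Pol)
def full(t, y):
    x, v = y
    return [v, eps * (1.0 - x**2) * v - x]

# First-order averaged equation for the amplitude a(t):  a' = (eps/2) a (1 - a^2/4)
def averaged(t, a):
    return 0.5 * eps * a * (1.0 - a**2 / 4.0)

t_end = 200.0
t_eval = np.linspace(0.0, t_end, 4000)
sol_full = solve_ivp(full, (0, t_end), [0.5, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_avg = solve_ivp(averaged, (0, t_end), [0.5], t_eval=t_eval, rtol=1e-9)

# Envelope of the full solution vs. the averaged amplitude
envelope = np.sqrt(sol_full.y[0]**2 + sol_full.y[1]**2)
print(envelope[-1], sol_avg.y[0, -1])   # both approach the limit-cycle amplitude ~2
```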
Abstract:
Large volumes of plasma can be fractionated by the method of Cohn at low cost. However, liquid chromatography is superior in terms of the quality of the product obtained. In order to combine the advantages of each method, we developed an integrated method for the production of human albumin and immunoglobulin G (IgG). The cryoprecipitate was first removed from plasma for the production of factor VIII and the supernatant of the cryoprecipitate was fractionated by the method of Cohn. The first precipitate, containing fractions (F)-I + II + III, was used for the production of IgG by the chromatographic method (see Tanaka K et al. (1998) Brazilian Journal of Medical and Biological Research, 31: 1375-1381). The supernatant of F-I + II + III was submitted to a second precipitation and F-IV was obtained and discarded. Albumin was obtained from the supernatant of the precipitate F-IV by liquid chromatography, ion-exchange on DEAE-Sepharose FF, filtration through Sephacryl S-200 HR and introduction of heat treatment for fatty acid precipitation. Viral inactivation was performed by pasteurization at 60°C for 10 h. The albumin product obtained by the proposed procedure was more than 99% pure for the 15 lots of albumin produced, with a mean yield of 25.0 ± 0.5 g/l of plasma, containing 99.0 to 99.3% monomer, 0.7 to 1.0% dimers, and no polymers. Prekallikrein activator levels were ≤5 IU/ml. This product satisfies the requirements of the 1997 Pharmacopée Européenne.
Abstract:
Arterial baroreflex sensitivity estimated by pharmacological impulse stimuli depends on intrinsic signal variability and usually on a subjective choice of blood pressure (BP) and heart rate (HR) values. We propose a semi-automatic method to estimate cardiovascular reflex sensitivity to bolus infusions of phenylephrine and nitroprusside. Beat-to-beat BP and HR time series for male Wistar rats (N = 13) were obtained from the digitized signal (sampling frequency = 2 kHz) and analyzed by the proposed method (PRM) developed in Matlab language. In the PRM, the time series were low-pass filtered with zero phase distortion (3rd-order Butterworth applied in the forward and reverse directions) and presented graphically, and parameters were selected interactively. Differences between basal mean values and peak BP (deltaBP) and HR (deltaHR) values after drug infusions were used to calculate baroreflex sensitivity indexes, defined as the deltaHR/deltaBP ratio. The PRM was compared with the traditionally employed method (TDM), applied by seven independent observers to files for reflex bradycardia (N = 43) and tachycardia (N = 61). Agreement was assessed by Bland-Altman plots. Dispersion among users, measured as the standard deviation, was higher for the TDM for reflex bradycardia (0.60 ± 0.46 vs 0.21 ± 0.26 bpm/mmHg for the PRM, P < 0.001) and tachycardia (0.83 ± 0.62 vs 0.28 ± 0.28 bpm/mmHg for the PRM, P < 0.001). The advantage of the present method lies in its objectivity, since the routine automatically calculates the desired parameters according to previous software instructions. This is an objective, robust and easy-to-use tool for cardiovascular reflex studies.
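A minimal sketch of the filtering and index computation, in Python rather than the Matlab routine of the paper; the cutoff frequency, the baseline window and the peak definitions are assumptions, and the function is written for a pressor (phenylephrine) response.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def baroreflex_index(bp, hr, fs, cutoff_hz=0.5):
    """Sketch of the processing described in the abstract: 3rd-order low-pass
    Butterworth applied forward and backward (zero phase, like MATLAB filtfilt),
    then the deltaHR/deltaBP ratio.  The 0.5 Hz cutoff, the 10-s baseline window
    and the use of the BP-peak instant for deltaHR are assumptions."""
    b, a = butter(3, cutoff_hz / (fs / 2), btype="low")
    bp_f, hr_f = filtfilt(b, a, bp), filtfilt(b, a, hr)
    base = slice(0, int(10 * fs))                             # pre-infusion baseline segment
    delta_bp = bp_f[base.stop:].max() - bp_f[base].mean()     # pressor response (mmHg)
    k = base.stop + np.argmax(bp_f[base.stop:])               # index of peak BP
    delta_hr = hr_f[k] - hr_f[base].mean()                    # reflex HR change (bpm)
    return delta_hr / delta_bp                                # index in bpm/mmHg
```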
Abstract:
The present report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. The method is based on the extraction and processing of spectral information from the respiratory cycle and the use of these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed, in order to normalize its spectral information, and its spectrogram is then computed. After this procedure, the spectrogram image is processed by a two-dimensional convolution filter and a half-threshold, in order to increase the contrast and isolate its highest-amplitude components, respectively. Then, in order to generate more compact data for automatic recognition, the spectral projection of the processed spectrogram is computed and stored as an array. The highest-magnitude values of the array and their respective spectral values are then located and used as inputs to a multi-layer perceptron artificial neural network, which yields an automatic indication of the presence of wheezes. For validation of the methodology, lung sounds recorded from three different repositories were used. The results show that the proposed technique achieves 84.82% accuracy in the detection of wheezing for an isolated respiratory cycle and 92.86% accuracy when detection is carried out using groups of respiratory cycles obtained from the same person. The system also presents the original recorded sound and the post-processed spectrogram image so that users can draw their own conclusions from the data.
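A hedged sketch of the feature-extraction pipeline described above, in Python; the convolution kernel, the definition of "half-threshold" and the number of retained components are assumptions, and the MLP classification stage is omitted.

```python
import numpy as np
from scipy.signal import spectrogram, convolve2d

def wheeze_features(audio, fs, n_peaks=5):
    """Spectrogram -> 2-D convolution filter -> half-threshold -> spectral
    projection -> strongest components, as inputs for an MLP classifier.
    Kernel, threshold rule and n_peaks are illustrative assumptions."""
    f, t, S = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
    S = convolve2d(S, np.ones((3, 3)) / 9.0, mode="same")   # simple averaging kernel (paper's kernel unspecified)
    S[S < 0.5 * S.max()] = 0.0                              # 'half-threshold' assumed as half of the maximum
    projection = S.sum(axis=1)                              # spectral projection over time
    idx = np.argsort(projection)[-n_peaks:][::-1]           # strongest spectral components
    return np.column_stack([f[idx], projection[idx]])       # (frequency, magnitude) pairs
```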
Abstract:
Permanent bilateral occlusion of the common carotid arteries (2VO) in the rat has been established as a valid experimental model to investigate the effects of chronic cerebral hypoperfusion on cognitive function and neurodegenerative processes. Our aim was to compare the cognitive and morphological outcomes of the standard 2VO procedure, in which the arteries are ligated concomitantly, with those of a modified protocol with a 1-week interval between artery occlusions to avoid an abrupt reduction of cerebral blood flow, as assessed by animal performance in the water maze and by the extent of damage to the hippocampus and striatum. Male Wistar rats (N = 47) aged 3 months were subjected to chronic hypoperfusion by permanent bilateral ligation of the common carotid arteries using either the standard or the modified protocol, with the right carotid being the first to be occluded. Three months after the surgical procedure, the rats' performance in the water maze was assessed to investigate long-term effects on spatial learning and memory, and their brains were processed in order to estimate hippocampal volume and striatal area. Both groups of hypoperfused rats showed deficits in reference (F(8,172) = 7.0951, P < 0.00001) and working spatial memory [2nd (F(2,44) = 7.6884, P < 0.001), 3rd (F(2,44) = 21.481, P < 0.00001) and 4th trials (F(2,44) = 28.620, P < 0.0001)]; however, no evidence of tissue atrophy was found in the brain structures studied. Despite similar behavioral and morphological outcomes, the rats submitted to the modified protocol showed a significantly higher survival rate during the 3 months of the experiment (P < 0.02).
Abstract:
The partial replacement of NaCl by KCl is a promising alternative for producing cheese with lower sodium content, since KCl does not change the final quality of the cheese. In order to ensure proper salt proportions, mathematical models are employed to control the production process and simulate multicomponent diffusion during the ripening of the reduced-salt cheese. The generalized form of Fick's Second Law is widely accepted as the primary mass transfer model within solid foods. The Finite Element Method (FEM) was used to solve the resulting system of differential equations. Thus, NaCl and KCl multicomponent diffusion was simulated using a 20% (w/w) static brine with 70% NaCl and 30% KCl during the salting and ripening of Prato cheese (a Brazilian semi-hard cheese). The theoretical results were compared with experimental data and indicated deviations of 4.43% for NaCl and 4.72% for KCl, validating the proposed model for the production of good-quality, reduced-sodium cheeses.
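As a rough illustration of this kind of salt-uptake simulation (finite differences rather than the FEM of the paper, uncoupled species rather than the generalized multicomponent formulation, and hypothetical diffusivities, geometry and brine-side concentrations):

```python
import numpy as np

# 1-D two-species diffusion into a cheese slab; all parameter values are assumed.
L, N, dt, t_end = 0.02, 101, 10.0, 48 * 3600.0        # 2 cm slab, 48 h of salting
x = np.linspace(0.0, L, N); dx = x[1] - x[0]
D = {"NaCl": 2.0e-10, "KCl": 1.8e-10}                 # diffusivities, m^2/s (assumed)
surface = {"NaCl": 140.0, "KCl": 60.0}                # brine-side concentrations, kg/m^3 (assumed)
c = {s: np.zeros(N) for s in D}

for s in D:
    lam = D[s] * dt / dx**2                           # explicit scheme: stable for lam <= 0.5
    for _ in range(int(t_end / dt)):
        c[s][0] = surface[s]                          # Dirichlet condition at the brine interface
        c[s][1:-1] += lam * (c[s][2:] - 2 * c[s][1:-1] + c[s][:-2])
        c[s][-1] = c[s][-2]                           # zero-flux condition at the far face

print(c["NaCl"][N // 2], c["KCl"][N // 2])            # mid-slab concentrations after 48 h
```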
Abstract:
A simple and low-cost method to determine volatile contaminants in post-consumer recycled PET flakes was developed and validated, based on Headspace Dynamic Concentration and Gas Chromatography with Flame Ionization Detection (HDC-GC-FID). The analytical parameters evaluated using surrogates include: correlation coefficient, detection limit, quantification limit, accuracy, intra-assay precision, and inter-assay precision. In order to compare the efficiency of the proposed method with that of recognized automated techniques, post-consumer PET packaging samples collected in Brazil were used. GC-MS was used to confirm the identity of the substances identified in the PET packaging. Some of the identified contaminants were estimated in the post-consumer material at concentrations higher than 220 ng g-1. The findings of this work corroborate data available in the scientific literature, supporting the suitability of the proposed analytical method.