18 results for "Layer dependent order parameters"
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
In this work we present a mathematical and computational model of electrokinetic phenomena in electrically charged porous media. We consider a porous medium composed of three different scales (nanoscopic, microscopic and macroscopic). On the microscopic scale the domain consists of a porous matrix and a solid phase. The pores are filled with an aqueous phase containing fully diluted ionic solutes, and the solid matrix consists of electrically charged particles. We first present the mathematical model that governs the electrical double layer, in order to quantify the electric potential, electric charge density, ion adsorption and chemical adsorption at the nanoscopic scale. We then derive the microscopic model, in which the ion adsorption due to the electric double layer, the protonation/deprotonation reactions and the zeta potential obtained from the nanoscopic model enter the microscopic scale through interface conditions in the Stokes and Nernst-Planck equations, which govern the movement of the aqueous solution and the transport of ions, respectively. We carry out the upscaling of the nano/microscopic problem using the homogenization technique for periodic structures, deducing the macroscopic model together with its cell problems for the effective parameters of the macroscopic equations. Considering a clayey porous medium consisting of parallel kaolinite clay plates, we rewrite the macroscopic model in a one-dimensional version. Finally, using a sequential algorithm, we discretize the macroscopic model via the finite element method, combined with the iterative Picard method for the nonlinear terms. Numerical simulations of the transient regime with variable pH in the one-dimensional case are presented, aiming at the computational modeling of the electroremediation process of contaminated clay soils
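The abstract above mentions solving the nonlinear macroscopic equations by the finite element method combined with Picard iteration. As a minimal sketch of the Picard (successive substitution) idea, and not the thesis' actual discretization, the fragment below lags the nonlinear coefficient of a 1D diffusion problem at the previous iterate, so each pass solves only a linear system (finite differences, illustrative coefficient k(u) = 1 + u²):

```python
import numpy as np

def picard_nonlinear_diffusion(n=50, tol=1e-10, max_iter=200):
    """Solve -(k(u) u')' = 1 on (0,1), u(0)=u(1)=0, with k(u) = 1 + u^2,
    by Picard iteration: k is evaluated at the previous iterate, so each
    pass solves a LINEAR tridiagonal system."""
    h = 1.0 / (n + 1)
    u = np.zeros(n)                     # interior nodal values, initial guess 0
    f = np.ones(n)                      # right-hand side
    for it in range(max_iter):
        k_nodes = np.concatenate(([1.0], 1.0 + u**2, [1.0]))  # k(0)=k(1)=1
        k_half = 0.5 * (k_nodes[:-1] + k_nodes[1:])           # interface values
        main = (k_half[:-1] + k_half[1:]) / h**2
        off = -k_half[1:-1] / h**2
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        u_new = np.linalg.solve(A, f)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it + 1
        u = u_new
    return u, max_iter
```

For this mildly nonlinear coefficient the fixed-point map is a contraction and only a handful of iterations are needed; stiffer nonlinearities may require relaxation.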
Abstract:
High-precision calculations of the correlation functions and order parameters were performed in order to investigate the critical properties of several two-dimensional ferromagnetic systems: (i) the q-state Potts model; (ii) the isotropic Ashkin-Teller model; (iii) the spin-1 Ising model. We deduced exact relations connecting specific damages (the difference between two microscopic configurations of a model) and the above-mentioned thermodynamic quantities, which permit their numerical calculation by computer simulation using any ergodic dynamics. The results obtained (critical temperatures and exponents) reproduced all the known values, with agreement up to several significant figures; of particular relevance were the estimates along the Baxter critical line (Ashkin-Teller model), where the exponents vary continuously. We also showed that this approach is less sensitive to finite-size effects than the standard Monte Carlo method. This analysis shows that the present approach produces results of equal or greater accuracy than the usual Monte Carlo simulation, and can be useful for investigating these models in circumstances where their behavior is not yet fully understood
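The damage-spreading idea described above (evolving two configurations that differ at a few sites with the same sequence of random numbers) can be sketched as follows; this is a generic illustration with heat-bath (Glauber) dynamics on a small 2D Ising lattice, not the authors' production code:

```python
import numpy as np

def damage_spreading(L=16, T=3.0, steps=200, seed=1):
    """Two replicas of a 2D Ising model, identical except for one flipped
    spin, evolved with heat-bath dynamics using the SAME random numbers.
    Returns the damage: the density of sites where the replicas differ."""
    rng = np.random.default_rng(seed)
    A = np.where(rng.random((L, L)) < 0.5, 1, -1)
    B = A.copy()
    B[0, 0] *= -1                       # initial damage: a single site
    beta = 1.0 / T
    for _ in range(steps):
        i, j = rng.integers(L), rng.integers(L)
        r = rng.random()                # shared random number for both replicas
        for S in (A, B):
            h = (S[(i + 1) % L, j] + S[(i - 1) % L, j]
                 + S[i, (j + 1) % L] + S[i, (j - 1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            S[i, j] = 1 if r < p_up else -1
    return np.mean(A != B)              # Hamming distance density
```

Above the dynamical transition temperature the damage survives and fluctuates; below it the replicas tend to coalesce and the damage heals.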
Abstract:
A new technique for the automatic search of order parameters and critical properties is applied to several well-known physical systems, testing the efficiency of the procedure in order to apply it to complex systems in general. The automatic-search method is combined with Monte Carlo simulations, which make use of a given dynamical rule for the time evolution of the system. In the problems investigated, the Metropolis and Glauber dynamics produced essentially equivalent results. We present a brief introduction to critical phenomena and phase transitions. We describe the automatic-search method and discuss some previous works where the method has been applied successfully. We apply the method to the ferromagnetic Ising model, computing the critical frontiers and the magnetization exponent β for several lattice geometries. We also apply the method to the site-diluted ferromagnetic Ising model on a square lattice, computing its critical frontier as well as the magnetization exponent β and the susceptibility exponent γ. We verify that the universality class of the system remains unchanged when site dilution is introduced. We study the problem of long-range bond percolation in a diluted linear chain and discuss the non-extensivity questions inherent to long-range-interaction systems. Finally, we present our conclusions and possible extensions of this work
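Since the abstract notes that the Metropolis and Glauber dynamics gave essentially equivalent results, it may help to recall the two acceptance rules. Both satisfy detailed balance, w(dE)/w(-dE) = exp(-dE/T), which is why they sample the same equilibrium distribution (illustrative sketch, units with k_B = 1):

```python
import math

def metropolis_accept(dE, T):
    # Metropolis rule: always accept a move that lowers the energy,
    # otherwise accept with probability exp(-dE/T).
    return 1.0 if dE <= 0 else math.exp(-dE / T)

def glauber_accept(dE, T):
    # Glauber (heat-bath) rule: accept with probability 1/(1 + exp(dE/T)),
    # smooth in dE and never equal to 1.
    return 1.0 / (1.0 + math.exp(dE / T))
```

The two rules differ in acceptance ratio (Metropolis accepts more often), hence in relaxation time, but not in the stationary distribution they target.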
Abstract:
To describe retinal nerve fiber layer changes in eyes with late-stage diffuse unilateral subacute neuroretinitis and compare these results with healthy eyes examined with the nerve fiber analyzer (GDx®). Methods: This is a retrospective case-control study in which 49 eyes with late-stage diffuse unilateral subacute neuroretinitis were examined from May/97 to December/01. First, eyes with diffuse unilateral subacute neuroretinitis and healthy contralateral eyes (Control Group I) were statistically matched. Subsequently, eyes with diffuse unilateral subacute neuroretinitis were compared with eyes of healthy patients (Control Group II). Results: Eyes from Control Groups I and II had a higher relative frequency of "within normal limits" status. Eyes from the diffuse unilateral subacute neuroretinitis (DUSN) Group had a higher frequency of "outside normal limits" and "borderline" status. Control Groups I and II had absolute values different from the DUSN Group for all parameters (p<0.05), except for Symmetry in Control Groups I and II, and Average thickness and Superior Integral in Control Group II. Conclusion: Patients with late-stage diffuse unilateral subacute neuroretinitis presented a presumed decrease in nerve fiber layer thickness as shown by GDx®. Retinal zones with larger vascular supply and a larger amount of nerve fibers presented a greater decrease in the retardation of the reflected light measured by the nerve fiber analyzer
Influence of active species on the absorption of interstitials during plasma carbonitriding of Ti
Abstract:
The physical-chemical properties of Ti are sensitive to the presence of interstitial elements. In the case of plasma-assisted thermochemical treatments, the influence of the different active species is not yet understood. In order to contribute to such knowledge, this work proposes a study of the role played by the active species of the atmosphere in Ar-N2-CH4 carbonitriding plasmas. A plasma diagnostic was carried out by OES (Optical Emission Spectroscopy) in the z Ar / y N2 / x CH4 plasma mixture, in which the z, y and x indexes represent gas flows varying from 0 to 4 sccm (cm3/min). The diagnostic showed abrupt variations of the emission intensities associated with the species under certain conditions, which were therefore selected for the thermochemical treatment in order to investigate their influence. Commercially pure Ti disks were submitted to the plasma carbonitriding process using conditions pre-established from the OES measurements, while parameters such as pressure and temperature were kept constant. The concentration profiles of the interstitial elements (C and N atoms) were determined by Resonant Nuclear Reaction Analysis (NRA), resulting in depth profile plots. The reactions used were 15N(p,αγ)12C and 12C(α,α)12C. GIXRD (Grazing Incidence X-Ray Diffraction) analysis was used to identify the phases present on the surface. Micro-Raman spectroscopy was used to qualitatively study the carbon in the TiCxN1-x structure. It was verified that the species density influences the diffusion of particles into the Ti lattice and the characteristics of the formed layer more strongly than the gas concentration does. High intensities of the N2+ (391.4 nm) and CH (387.1 nm) species promote greater diffusion of C and N. It was observed that the Hα (656.3 nm) species acts as a catalyst, allowing a deeper diffusion of nitrogen and carbon into the titanium lattice.
Abstract:
Innumerable studies have been reported on sleep spindles (SS), sharp vertex waves (SVW) and REM/NREM sleep as indicators for interpreting EEG patterns in children. However, the Frequency and Amplitude Gradient (FAG), which occurs during NREM sleep, is a rarely cited sleep parameter in children. It was first described by Slater and Torres in 1979, but has not been routinely evaluated in EEG reports. The aim of this study was to assess the absence of SS, SVW and FAG as an indication of neurological compromise in children. The sample consisted of 1014 EEGs of children referred to the Clinical Neurophysiology Laboratory, Hospital Universitário de Brasília (HUB), from January 1997 to March 2003, with ages ranging from 3 months to 12 years, obtained in spontaneous sleep or sleep induced by chloral hydrate. The study was transversal and analytical; visual analysis of the EEG traces was performed individually and independently by two electroencephalographers without prior knowledge of the EEG study or neurological findings. After EEG selection, the investigators analyzed the medical reports in order to define and correlate the neurological pattern, which was classified according to the absence or presence of neurological compromise as Normal Neurological Pattern (NNP) or Altered Neurological Pattern (ANP), respectively. From the visual analysis of the EEGs, it was possible to characterize six parameters: 1 - FAG present (64.1%); 2 - FAG absent (35.9%); 3 - normal SS (87.9%); 4 - altered SS (12.1%); 5 - normal SVW (95.7%); 6 - altered SVW (4.3%). Well-formed FAG was prevalent in the 3-month to 5-year age group in children with NNP. FAG was totally absent from the age of 10 years. When comparing the three sleep graphoelements, it was observed that SVW and SS were predominant in children with NNP, whereas absent FAG was more prevalent in the ANP group than altered SS and SVW.
The statistical analysis showed a strong association of absent FAG, as an isolated alteration, with ANP patients, with a prevalence ratio of 6.60. The association becomes stronger when absent FAG combined with altered SS is considered (PR = 6.68). The chi-square test, corrected by the Yates technique, showed a highly significant relation for FAG (p = 0.00000001, for an α error of 5%, i.e. a 95% confidence interval, p < 0.05). Thus, absent FAG was more expressive in ANP patients than altered SS and SVW, and the association becomes strong enough to establish a prognostic relation when the FAG is combined with the SS. The results of this study allow us to affirm that the FAG, when absent at ages ranging from 3 months to 5 years, is an indication of neurological compromise. FAG is an age-dependent EEG parameter and should be incorporated systematically into the interpretation criteria of children's sleep EEGs, not only from the maturational point of view, but also for neurological disturbances with encephalic compromise
Abstract:
This study aimed to determine the influence of strength training (ST), in three weekly sessions over ten weeks, on cardiovascular parameters and anthropometric measurements. It is a before-and-after intervention trial, with a sample composed of 30 individuals. Participants were adults aged between 18 and 40 years, of both sexes and sedentary for at least the previous three months. Ergospirometry, C-reactive protein (CRP), pulse wave velocity (PWV) and body composition (the dependent variables) were measured before and after the intervention. The independent variables, age and sex, were considered in order to determine their influence on the dependent variables evaluated. Comparing the initial cardiovascular parameters with those obtained after the intervention in participants undergoing the proposed ST (a Student's t-test was conducted within each group for matched samples with normally distributed parameters, while the Wilcoxon test was applied otherwise), there was no significant difference in PWV (p = 0.469) or CRP (p = 0.247), but there was an increase in the anaerobic threshold (AT) (p = 0.004) and maximal oxygen uptake (VO2max) (p = 0.052). Regarding anthropometric measures, individuals significantly reduced their body fat percentage (p < 0.001) and fat mass (p < 0.001), as well as increasing lean mass (p < 0.001). However, no changes were recorded in the waist-to-hip ratio (WHR) (p = 0.777), body mass (p = 0.226) or body mass index (BMI) (p = 0.212). The findings of this study lead us to believe that the proposed ST, while not increasing PWV or CRP, improves cardiorespiratory capacity and body composition. Practitioners of this training can therefore safely enjoy all its benefits without risk to the cardiovascular system
Abstract:
In this work, artificial neural networks (ANN) based on supervised and unsupervised algorithms were investigated for use in the study of rheological parameters of solid pharmaceutical excipients, in order to develop computational tools for manufacturing solid dosage forms. Among the four supervised neural networks investigated, the best learning performance was achieved by a feedforward multilayer perceptron whose architecture was composed of eight neurons in the input layer, sixteen neurons in the hidden layer and one neuron in the output layer. Learning and predictive performance relative to the angle of repose was poor, while the Carr index and Hausner ratio (CI and HR, respectively) showed very good fitting capacity and learning; therefore HR and CI were considered suitable descriptors for the next stage of development of supervised ANNs. Clustering capacity was evaluated for five unsupervised strategies. Networks based on purely competitive unsupervised strategies, the classic Winner-Take-All, Frequency-Sensitive Competitive Learning and Rival-Penalized Competitive Learning (WTA, FSCL and RPCL, respectively), were able to perform clustering of the database; however, the classification was very poor, showing severe classification errors by grouping data with conflicting properties into the same cluster or even the same neuron. Moreover, it could not be established what criteria were adopted by these networks for the clustering. Self-Organizing Map (SOM) and Neural Gas (NG) networks showed better clustering capacity. Both recognized the two major groupings of the data, corresponding to lactose (LAC) and cellulose (CEL). However, SOM showed some errors in classifying data from the minority excipients: magnesium stearate (EMG), talc (TLC) and attapulgite (ATP). The NG network, in turn, performed a very consistent classification of the data and resolved the misclassifications of the SOM, being the most appropriate network for classifying the data of this study.
The use of the NG network in pharmaceutical technology had not previously been reported. NG therefore has great potential for use in the development of software for automated classification systems of pharmaceutical powders, and as a new tool for mining and clustering data in drug development
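The competitive strategies compared above differ mainly in how many units are updated per input: WTA moves only the winner, whereas Neural Gas moves every unit, weighted by its distance rank. A minimal, generic Neural Gas sketch follows (the decay schedules and the synthetic two-cluster data, standing in for e.g. the lactose/cellulose groups, are illustrative assumptions):

```python
import numpy as np

def neural_gas(data, n_units=4, epochs=40, seed=0,
               eps_i=0.5, eps_f=0.01, lam_i=2.0, lam_f=0.1):
    """Neural Gas: every codebook vector is updated, weighted by its
    distance RANK to the input (not only the single winner, as in WTA)."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / t_max
            eps = eps_i * (eps_f / eps_i) ** frac    # learning-rate decay
            lam = lam_i * (lam_f / lam_i) ** frac    # neighborhood decay
            # rank of each unit by distance to the input
            ranks = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
            w += (eps * np.exp(-ranks / lam))[:, None] * (x - w)
            t += 1
    return w

# two well-separated synthetic clusters
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
codebook = neural_gas(data, n_units=4)
```

As the neighborhood range decays, the units specialize, ending near the cluster regions rather than stranded between them, which is the behavior that makes NG robust for clustering.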
Abstract:
In this work, Markov chains are the tool used in the modeling and convergence analysis of the genetic algorithm, both in its standard version and in the other versions that the genetic algorithm allows. In addition, we intend to compare the performance of the standard version with a fuzzy version, believing that the latter gives the genetic algorithm a greater ability to find a global optimum, as expected of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools used to find solutions to optimization problems. This choice is justified by its effectiveness in finding good-quality solutions, considering that a good-quality solution is acceptable given that there may be no algorithm able to obtain the optimal solution for many of these problems. The algorithm's behavior depends not only on how the problem is represented but also on how some of its operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm, an adequate criterion for the choice of its parameters is necessary, especially the mutation rate, the crossover rate and the population size. It is important to note that in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution, the modeling Markov chain becomes non-homogeneous. Hence, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture the intrinsic characteristics of the problem.
These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions and, at the same time, discard low-quality patterns. Strategies for feature extraction can use either precise techniques or fuzzy techniques, in the latter case through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. In order to evaluate the performance of a non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. To do so, we choose optimization problems whose number of solutions varies exponentially with the number of variables
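A toy illustration of a genetic algorithm whose mutation rate is raised when population diversity drops, in the spirit of the fuzzy-controlled variant discussed above (the membership function, the rate bounds and the OneMax fitness are illustrative assumptions, not the thesis' controller):

```python
import random

def adaptive_ga(n_bits=30, pop_size=40, generations=80, seed=3):
    """Toy GA on OneMax with a fuzzy-style rule: when population diversity
    is LOW, raise the mutation rate; when HIGH, lower it."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)
    for _ in range(generations):
        # diversity: mean per-bit variance across the population, in [0, 0.25]
        means = [sum(ind[k] for ind in pop) / pop_size for k in range(n_bits)]
        diversity = sum(m * (1 - m) for m in means) / n_bits
        # crude "fuzzy" membership interpolating between two mutation rates
        low_div = max(0.0, 1.0 - diversity / 0.25)
        p_mut = 0.002 + 0.05 * low_div
        # tournament selection + one-point crossover + bit-flip mutation
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, n_bits)
            child = [bit ^ (rng.random() < p_mut) for bit in a[:cut] + b[cut:]]
            new_pop.append(child)
        pop = new_pop
    return max(map(fitness, pop))
```

Because the mutation rate changes with the population state, the chain modeling this process is non-homogeneous, exactly the situation the convergence analysis above addresses.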
Abstract:
Reinforcement learning is a machine learning technique that, although it finds a large number of applications, may not yet have reached its full potential. One of the inadequately tested possibilities is the use of reinforcement learning in combination with other methods for the solution of pattern classification problems. The problems that support vector machine ensembles face in terms of generalization capacity are well documented in the literature. Algorithms such as AdaBoost do not deal appropriately with the imbalances that arise in those situations. Several alternatives have been proposed, with varying degrees of success. This dissertation presents a new approach to building committees of support vector machines. The presented algorithm combines the AdaBoost algorithm with a reinforcement learning layer that adjusts committee parameters in order to prevent imbalances among the committee components from affecting the generalization performance of the final hypothesis. Comparisons were made between ensembles with and without the reinforcement learning layer, testing benchmark data sets widely known in the area of pattern classification
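The AdaBoost re-weighting step that the proposed method builds on can be sketched generically: after one round, the misclassified samples carry exactly half of the total weight, which is the kind of imbalance the reinforcement learning layer is meant to temper (illustrative implementation, binary labels in {-1, +1}):

```python
import math

def adaboost_round(weights, predictions, labels):
    """One AdaBoost round: weighted error -> learner weight alpha ->
    multiplicative re-weighting that emphasizes misclassified samples."""
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    err = max(min(err, 1.0 - 1e-12), 1e-12)        # guard against err in {0, 1}
    alpha = 0.5 * math.log((1.0 - err) / err)
    new_w = [w * math.exp(-alpha if p == y else alpha)
             for w, p, y in zip(weights, predictions, labels)]
    z = sum(new_w)                                  # normalization constant
    return alpha, [w / z for w in new_w]
```

Since each round rebalances the weights so that the previous learner looks no better than chance, hard (or noisy) samples accumulate weight quickly, which is the documented failure mode for SVM committees mentioned above.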
Abstract:
The main goal of the present work is the dynamics of the steady-state, incompressible, laminar flow, with heat transfer, of an electrically conducting Newtonian fluid inside a flat parallel-plate channel under the action of an external, uniform magnetic field. For the solution of the governing equations, written in the parabolic boundary-layer and stream-function formulation, the hybrid numerical-analytical approach known as the Generalized Integral Transform Technique (GITT) was employed. The flow is sustained by a pressure gradient, and the magnetic field is applied in the direction normal to the flow; this normal magnetic field is assumed to be uniform, remaining larger than any other field generated in other directions. In order to evaluate the influence of the applied magnetic field on both entrance regions, thermal and hydrodynamic, of this forced convection problem, as well as to validate the adopted solution methodology, two kinds of channel entry conditions for the velocity field were used: a uniform and a non-MHD parabolic profile. For the thermal problem, only a uniform temperature profile at the channel inlet was employed as boundary condition. Along the channel walls, the plates are maintained at constant temperatures, either equal to or different from each other. Results for the velocity and temperature fields, as well as for the main related potentials, are produced and compared, for validation purposes, with results reported in the literature, as functions of the main dimensionless governing parameters, such as the Reynolds and Hartmann numbers, for typical situations. Finally, in order to illustrate the consistency of the integral transform method, convergence analyses are also carried out and presented
Abstract:
The petroleum industry, as a consequence of its intense exploration and production activity, is responsible for a great part of the generation of residues that are considered toxic and polluting to the environment. Among these is oil sludge, produced during the production, transportation and refining phases. The purpose of this work was to develop a process to recover the oil present in oil sludge, in order to use the recovered oil as fuel or return it to the refining plant. From the preliminary tests, the most important independent variables were identified: temperature, contact time, and solvent and acid volumes. Initially, a series of parameters was determined to characterize the oil sludge. A special extractor was designed to work with oily waste. Two experimental designs were applied: fractional factorial and Doehlert. The tests were carried out in batch mode under the conditions of the applied experimental designs. The efficiency obtained in the oil extraction process was 70% on average. The oil sludge is composed of 36.2% oil, 16.8% ash, 40% water and 7% volatile constituents. However, the statistical analysis showed that the quadratic model was not well fitted to the process, with a relatively low determination coefficient (60.6%). This occurred due to the complexity of the oil sludge. To obtain a model able to represent the experiments, an artificial neural network (ANN) model was used, generated initially with 2, 4, 5, 6, 7 and 8 neurons in the hidden layer, 64 experimental results and 10000 presentations (iterations). Smaller dispersions between the experimental and calculated values were verified using 4 neurons, considering the proportion between experimental points and estimated parameters. The analysis of the average deviations of the test set divided by those of the respective training set showed that 2150 presentations resulted in the best parameter values.
For the new model, the determination coefficient was 87.5%, which is quite satisfactory for the studied system
Abstract:
Environmental sustainability has become one of the topics of greatest interest in industry, mainly due to effluent generation. Phenols are found in the effluents of many industries, such as refineries, coal processing, pharmaceutical, plastics, paints, and paper and pulp industries. Because phenolic compounds are toxic to humans and aquatic organisms, Federal Resolution CONAMA No. 430 of 13/05/2011 limits the maximum content of phenols to 0.5 mg.L-1 for release into freshwater bodies. In effluent treatment, liquid-liquid extraction is the most economical process for phenol recovery, because it consumes little energy; in most cases, however, it employs an organic solvent, whose use can cause further environmental problems due to its high toxicity. Because of this, there is a need for new methodologies that replace these solvents with biodegradable ones. Some literature studies demonstrate the feasibility of removing phenolic compounds from aqueous effluents with biodegradable solvents. In this kind of extraction, called cloud point extraction, a nonionic surfactant is used as the extracting agent for the phenolic compounds. In order to optimize the phenol extraction process, this work studies the mathematical modeling and optimization of the extraction parameters and investigates the effect of the independent variables on the process. A 3² full factorial design was carried out, with operating temperature and surfactant concentration as independent variables, and the extraction parameters (volumetric fraction of the coacervate phase, residual concentrations of surfactant and phenol in the dilute phase after phase separation, and phenol extraction efficiency) as dependent variables.
To achieve the objectives presented above, the work was carried out in five steps: (i) selection of literature data; (ii) use of the Box-Behnken model to find mathematical models that describe the phenol extraction process; (iii) data analysis using STATISTICA 7.0, with analysis of variance to assess model significance and prediction; (iv) model optimization using the response surface method; (v) validation of the mathematical models using additional measurements, from samples different from the ones used to construct the models. The results showed that the mathematical models found are able to calculate the effect of the surfactant concentration and the operating temperature on each extraction parameter studied, within the boundaries used. The model optimization allowed consistent and applicable results to be achieved in a simple and quick way, leading to high efficiency in the process operation.
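The response surface step can be illustrated generically: fit a full quadratic model to a two-factor design by least squares and locate the stationary point of the fitted surface. The coded data below are synthetic stand-ins for the temperature/surfactant measurements, not values from the study:

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    # Least-squares fit of y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# synthetic 3x3 design on coded levels -1, 0, +1, with a known
# maximum placed at (0.5, -0.25) so the fit can be checked
levels = np.array([-1.0, 0.0, 1.0])
x1, x2 = map(np.ravel, np.meshgrid(levels, levels))
y = 80.0 - 4.0 * (x1 - 0.5) ** 2 - 8.0 * (x2 + 0.25) ** 2
b = fit_quadratic_surface(x1, x2, y)

# stationary point of the fitted surface: solve grad(y) = 0
H = np.array([[2.0 * b[3], b[5]], [b[5], 2.0 * b[4]]])
opt = np.linalg.solve(H, [-b[1], -b[2]])
```

Checking the sign of the Hessian H at the stationary point tells whether the fitted optimum is a maximum, a minimum or a saddle, which is the standard diagnostic in response surface methodology.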
Abstract:
In this research, the drying of acerola waste in a spouted bed dryer was investigated. The process was conducted using high-density polyethylene inert particles, with the objective of producing an ascorbic acid-rich final product. The fruit waste was ground and used to prepare different water-maltodextrin suspensions. Initially, fluid-dynamic experiments were conducted in order to evaluate the effect of the feeding on the fluid-dynamic behavior of the spouted bed dryer. A 2³ + 3 experimental design was used to investigate the effect of the variables solids concentration, drying air temperature and intermittence time on the production efficiency, solids retention and product losses by elutriation of fine particles and adherence to the dryer walls. The effect of the selected independent variables on the dryer stability was also evaluated, based on a parameter defined as the ratio between the fed suspension volume and the total inert particle volume. Finally, the powder quality was verified in experiments with fixed feed flow rate and varying drying air temperature, drying air velocity and intermittence time. It was observed that the suspension interferes with the fluid-dynamic behavior of the spouted bed dryer, and a higher air flow is necessary to stabilize the dryer. The suspension also promotes the expansion of the spouted bed diameter, decreases the solids circulation and favors the air distribution at the flush area. All variables interfere with the spouted bed performance, the solids concentration having the major effect on material retention and losses. The intermittence time also has a great effect on stability and material retention. Regarding production efficiency, the main effect observed was that of the drying air temperature. First-order models were well adjusted to the retention and loss data. The acerola powder presented ascorbic acid levels of around 600 to 700 mg/100 g. Similar moisture and ascorbic acid levels were obtained for the powders produced in the spouted bed and in a spray dryer.
However, the powder production efficiency of the spray dryer was lower than that of the spouted bed dryer, while in terms of energetic analysis the spray-drying process was superior. The results obtained for the spouted bed dryer are promising and highly dependent on the operational parameters chosen; in general, it can be inferred that this drying process is adequate for paste and suspension drying
Abstract:
We present a study of nanostructured magnetic multilayer systems, aiming to synthesize and analyze the properties of periodic and quasiperiodic structures. This work evolved from the deployment and improvement of the sputtering technique in our laboratories, through the development of a methodology to synthesize single-crystal ultrathin Fe(100) films, to the final goal of growing periodic and quasiperiodic Fe/Cr multilayers and investigating the bilinear and biquadratic exchange coupling between the ferromagnetic layers for each generation. Initially, we systematically studied the relations between the deposition parameters and the magnetic properties of ultrathin Fe films grown by DC magnetron sputtering on MgO(100) substrates. We varied the deposition temperature and film thickness in order to improve the production and reproducibility of nanostructured monocrystalline Fe films. For this set of samples we performed MOKE, FMR, AFM and XPS measurements, with the aim of investigating their magnetocrystalline and structural properties. From the magnetic viewpoint, the MOKE and FMR results showed an increase in magnetocrystalline anisotropy with increasing temperature. AFM measurements provided information about thickness and surface roughness, whereas XPS results were used to analyze film purity. The best set of parameters was used in the next stage: the investigation of structural effects on the magnetic multilayer properties. In this stage, multilayers composed of interspersed Fe and Cr films were deposited, following periodic and quasiperiodic (Fibonacci) growth sequences, on MgO(100) substrates. The behavior of the MOKE and FMR curves exhibits bilinear and biquadratic exchange coupling between the ferromagnetic layers. By computationally fitting the magnetization curves, it was possible to determine the nature and intensity of the interaction between adjacent Fe layers.
After finding the global minimum of the magnetic energy, we used the equilibrium angles to obtain magnetization and magnetoresistance curves. The results observed over the course of this study demonstrate the efficiency and versatility of the sputtering technique in the synthesis of ultrathin films and high-quality multilayers. This allows the deposition of magnetic nanostructures with well-defined magnetization and magnetoresistance parameters and possible technological applications
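The fitting procedure described above minimizes a magnetic energy containing bilinear and biquadratic exchange terms plus a Zeeman term, then reads off the equilibrium angles. A brute-force sketch of that minimization for a single trilayer (two Fe layers across a Cr spacer) is given below; the coupling constants j1 < 0, j2 < 0 and the reduced field scale are illustrative assumptions, not the fitted values:

```python
import numpy as np

def equilibrium_angles(h, j1=-1.0, j2=-0.2, n=361):
    """Minimize E(t1, t2) = -j1*cos(t1-t2) - j2*cos^2(t1-t2)
                            - h*(cos(t1) + cos(t2))
    over the two layer magnetization angles (field applied along t = 0),
    by brute-force search on an angular grid."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    t1, t2 = np.meshgrid(t, t)
    d = t1 - t2
    E = -j1 * np.cos(d) - j2 * np.cos(d) ** 2 - h * (np.cos(t1) + np.cos(t2))
    k = np.unravel_index(np.argmin(E), E.shape)
    return t1[k], t2[k]

def reduced_magnetization(h):
    # projection of the two equilibrium moments along the field direction
    a1, a2 = equilibrium_angles(h)
    return 0.5 * (np.cos(a1) + np.cos(a2))
```

With j1 < 0 the layers are antiparallel at zero field (zero net magnetization) and are driven to saturation as the field grows; tracing reduced_magnetization over h reproduces the qualitative shape of the measured hysteresis-free magnetization curves.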