123 results for "Modelagem de experimentos" (experiment modeling)
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
In this work we analyze the statistical distribution of skin bioimpedance. We focus on two distinct samples: the statistics of the impedance of several points on the skin of a single individual, and the statistics over a population (many individuals) at a single skin point. The impedance data were obtained from the literature (Pearson, 2007). Using the Shapiro-Wilk test and the asymmetry test, we conclude that the impedance of a population is better described by an asymmetric, non-normal distribution. On the other hand, the data concerning individual impedance seem to follow a normal distribution. We performed a goodness-of-fit test, and the distribution that best fits the population data is the log-normal distribution. It is interesting to note that our result for skin impedance is consistent with results for body impedance in the electrical engineering literature. Our results have an impact on the statistical planning and modeling of skin impedance experiments; special attention should be given to the treatment of outliers in this kind of dataset. The results of this work are also important in the general discussion of the low impedance of acupuncture points, as well as in the problem of skin biopotentials used in equipment such as Electrodermal Screening Tests.
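As a minimal sketch of the tests named above (Shapiro-Wilk, asymmetry, and a log-normal goodness-of-fit check), assuming SciPy and placeholder data rather than the Pearson (2007) measurements:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
impedance = rng.lognormal(mean=10.0, sigma=0.5, size=200)  # placeholder values, not the thesis data

w, p_shapiro = stats.shapiro(impedance)   # Shapiro-Wilk normality test
z, p_skew = stats.skewtest(impedance)     # asymmetry (skewness) test

# Goodness of fit against a fitted log-normal distribution
shape, loc, scale = stats.lognorm.fit(impedance, floc=0)
d, p_ks = stats.kstest(impedance, "lognorm", args=(shape, loc, scale))
print(f"Shapiro-Wilk p={p_shapiro:.3g}; skewness p={p_skew:.3g}; lognormal KS p={p_ks:.3g}")

A small p-value in the first two tests argues against normality; a large p-value in the last one is consistent with the log-normal fit reported for the population sample.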
Abstract:
Fluorescent proteins are an essential tool in many fields of biology, since they allow us to watch the development of structures and dynamic processes of cells in living tissue with the aid of fluorescence microscopy. Optogenetics is another technique currently in wide use in neuroscience. In general, this technique makes it possible to activate/deactivate neurons by shining light of certain wavelengths on cells that have light-sensitive ion channels, and it can be used together with fluorescent proteins. This dissertation has two main objectives. Initially, we study the interaction of light radiation with mouse brain tissue as applied to optogenetic experiments. In this step, we model absorption and scattering effects using mouse brain tissue characteristics and Kubelka-Munk theory, for specific wavelengths, as a function of light penetration depth (distance) within the tissue. Furthermore, we model temperature variations using the finite element method to solve Pennes' bioheat equation, with the aid of the COMSOL Multiphysics Modeling Software 4.4, in which we simulate light-stimulation protocols typically used in optogenetics. Subsequently, we develop computational algorithms to reduce the exposure of neurons to the light radiation necessary for visualizing their emitted fluorescence. At this stage, we describe image processing techniques developed for fluorescence microscopy to reduce the exposure of brain samples to the continuous light responsible for fluorochrome excitation. The developed techniques are able to track, in real time, a region of interest (ROI) and replace the fluorescence emitted by the cells with a virtual mask, produced by overlaying the tracked ROI on previously stored fluorescence information, preserving cell location independently of the exposure time to fluorescent light. In summary, this dissertation investigates and describes the effects of light radiation on brain tissue in the context of optogenetics, in addition to providing a computational tool for fluorescence microscopy experiments that reduces image bleaching and photodamage due to intense light exposure of fluorescent cells.
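For reference, a standard statement of Pennes' bioheat equation solved in the dissertation; the source term Q_{light} is our notation (an assumption) for the heating contributed by the stimulation light:

\rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b (T_a - T) + Q_m + Q_{light}

where \rho, c, k are the tissue density, specific heat, and thermal conductivity, the subscript b denotes blood, \omega_b is the blood perfusion rate, T_a the arterial temperature, and Q_m the metabolic heat generation.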
Abstract:
SOUZA, Anderson A. S.; MEDEIROS, Adelardo A. D.; GONÇALVES, Luiz Marcos G. Algoritmo de mapeamento usando modelagem probabilística [A mapping algorithm using probabilistic modeling]. In: SIMPÓSIO BRASILEIRO DE AUTOMAÇÃO INTELIGENTE, 2007, Natal. Anais... Natal, 2007.
Abstract:
In this work, we propose a probabilistic mapping method in which the mapped environment is represented by a modified occupancy grid. The main idea of the proposed method is to allow a mobile robot to construct, in a systematic and incremental way, the geometry of the underlying space, obtaining at the end a complete environment map. As a consequence, the robot can move through the environment safely, based on a confidence value for the data obtained from its perceptual system. The map is represented coherently with respect to the sensory data, noisy or not, that come from the robot's exteroceptive and proprioceptive sensors. The characteristic noise incorporated in the data from these sensors is treated by probabilistic modeling, in such a way that its effects are visible in the final result of the mapping process. The results of the experiments performed indicate the viability of the methodology and its applicability in the area of autonomous mobile robotics, thus constituting a contribution to the field.
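A minimal sketch of the probabilistic update behind an occupancy grid, assuming the standard log-odds formulation (the increments below are illustrative tuning values, not those of the thesis):

import math

L_OCC, L_FREE = 0.85, -0.4  # assumed log-odds increments for "hit" and "miss" readings

def update_cell(l_prev: float, hit: bool) -> float:
    # Bayesian log-odds update of a single grid cell for one sensor reading
    return l_prev + (L_OCC if hit else L_FREE)

def occupancy_probability(l: float) -> float:
    # Convert accumulated log-odds back to an occupancy probability in [0, 1]
    return 1.0 - 1.0 / (1.0 + math.exp(l))

The confidence value mentioned in the abstract can be read from how far occupancy_probability(l) lies from the uninformative value 0.5.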
Abstract:
The control, automation, and optimization areas help to improve the processes used by industry. They contribute to a fast production line, improving product quality and reducing manufacturing costs. Didactic plants are good tools for research in these areas, providing direct contact with industrial equipment. Given these capabilities, the main goal of this work is to model and control a didactic plant: a level and flow process control system with industrial instrumentation. With a model it is possible to build a simulator for the plant that allows studies of its behaviour, such as experiments with controllers, without any of the operational costs of the real process; controllers can be tested several times before being applied to the real process. Among the several types of controllers, adaptive controllers were used, mainly the Direct Self-Tuning Regulator (DSTR) with integral action and Gain Scheduling (GS). The DSTR was based on pole-placement design and uses recursive least squares to calculate the controller parameters. The characteristics of an adaptive system were very valuable in guaranteeing good performance when the controller was applied to the plant.
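A minimal sketch of the recursive least squares (RLS) update used by self-tuning regulators to estimate parameters online; the forgetting factor and dimensions are illustrative assumptions:

import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    # theta: (n,1) parameter estimates; P: (n,n) covariance matrix;
    # phi: (n,1) regressor vector; y: measured output; lam: forgetting factor
    K = P @ phi / float(lam + phi.T @ P @ phi)  # update gain
    eps = y - float(phi.T @ theta)              # one-step prediction error
    theta = theta + K * eps
    P = (P - K @ phi.T @ P) / lam
    return theta, P

At each sample time, a DSTR of this kind would feed the new input/output data into phi and y, update theta, and recompute the pole-placement control law from the fresh estimates.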
Abstract:
Nonionic surfactants are substances whose molecules do not ionize in solution. The solubility of these surfactants in water is due to the presence of functional groups with a strong affinity for water. When these surfactants are heated, two liquid phases form, evidenced by the phenomenon of turbidity. This study aimed to determine experimentally the turbidity (cloud-point) temperature of polyethoxylated nonylphenol surfactants and subsequently to perform a thermodynamic modeling, considering the Flory-Huggins model and an empirical solid-liquid equilibrium (SLE) model. The method used for determining the turbidity point was the visual method (Inoue et al., 2008). The experimental methodology consisted of preparing synthetic solutions of 0.25%, 0.5%, 1%, 2%, 3%, 4%, 5%, 6%, 7%, 8%, 9%, 10%, 12.5%, 15%, 17%, and 20% surfactant by weight. The nonionic surfactants were used according to their degree of ethoxylation (9.5, 10, 11, 12, and 13). During the experiments the solutions were homogenized and the bath temperature was gradually increased while the turbidity of the solution was checked visually (Inoue et al., 2003). These turbidity temperature data were used to feed the evaluated models and to obtain thermodynamic parameters for the polyethoxylated nonylphenol surfactant systems. The models can then be used in phase-separation processes, facilitating the extraction of organic solvents, and therefore serve as quantitative and qualitative parameters. It was observed that the solid-liquid equilibrium (SLE) model best represented the experimental data.
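As a reference for the SLE modeling mentioned above, one common form of the solid-liquid equilibrium relation, under the usual ideal-solubility assumptions (an illustrative form, not necessarily the empirical variant used in the thesis), where x is the surfactant mole fraction and T_m and \Delta H_{fus} are the melting temperature and enthalpy of fusion:

\ln x = \frac{\Delta H_{fus}}{R}\left(\frac{1}{T_m} - \frac{1}{T}\right)

Fitting the measured cloud-point temperatures to such an expression yields the thermodynamic parameters referred to in the abstract.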
Abstract:
In this study, an electric solar dryer was designed, built, and tested, consisting of a solar collector, a drying chamber, an exhaust fan, and a fan to promote forced hot-air convection. Banana drying experiments were also carried out in a static column dryer to model the drying and to obtain parameters that can be used as a first approximation in the modeling of the electric solar dryer, given the similarity of the experimental conditions between the two drying systems. From the banana drying experiments conducted in the static column dryer, we obtained food weight data as a function of water content and temperature. Simplified mathematical models of banana drying, based on Fick's and Fourier's second equations, were developed and tested against the experimental data. We determined and/or modeled parameters such as banana moisture content, density, thin-layer drying curves, equilibrium moisture content, the molecular diffusivity of water in banana DAB, the external mass transfer coefficient kM, specific heat Cp, thermal conductivity k, the latent heat of water evaporation in the food Lfood, the time to heat the food, and the minimum energy and power required to heat the food and evaporate the water. When the shrinkage of the banana radius R was considered, the calculated values of DAB and kM generally better represented the phenomenon of water diffusion in a solid. The latent heat of water evaporation in the food Lfood calculated by the modeling is higher than the latent heat of evaporation of pure water Lwater. The values calculated for DAB and kM that best represent the drying were obtained with the analytical model of the present work. These values agreed well with those obtained with a numerical model described in the literature, in which a convective boundary condition and food shrinkage are considered. Using parameters such as Cp, DAB, k, kM, and Lfood, one can elaborate the preliminary dryer design and calculate the savings from using only solar energy rather than solar energy along with electrical energy.
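A minimal sketch of the minimum energy and power estimate mentioned above (sensible heat to warm the food plus latent heat to evaporate its water); all numbers are illustrative assumptions, not the thesis data:

m_food = 0.100       # kg of banana (assumed)
cp = 3350.0          # J/(kg K), assumed specific heat Cp
dT = 60.0 - 25.0     # K, heating from ambient to drying temperature (assumed)
m_water = 0.070      # kg of water to evaporate (assumed)
L_food = 2.6e6       # J/kg, latent heat in the food (>= that of pure water, per the text)

Q = m_food * cp * dT + m_water * L_food  # J, minimum energy required
P = Q / (8 * 3600.0)                     # W, minimum average power over an assumed 8 h drying day
print(f"Q = {Q / 1e3:.0f} kJ, P = {P:.1f} W")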
Abstract:
This work studies the drying of cashew-nut pulp in different dryer layouts using conventional and solar energy. It concerns the use of surplus regional raw material and the knowledge needed to apply drying systems as a means of food conservation. In addition, it uses renewable sources, such as solar energy, to dry these agro-industrial products. Runs were carried out in a conventional tray dryer with controlled temperature (55 °C, 65 °C, 75 °C), controlled air velocity (3.0, 4.5, 6.0 m s-1), and cashew slice thicknesses of 1.0, 1.5, and 2.0 cm, in order to compare the systems studied. To evaluate the conventional tray dryer, a diffusional model based on Fick's second law was used, and the drying curves were quite well fitted by an infinite-flat-plate model. For the drying runs in which the ambient temperature was not controlled, a phenomenological mathematical model was developed for the solar dryer with indirect radiation, under natural and forced convection, based on material and energy balances of the system. In addition, assays were carried out on the product both in natura and dehydrated, together with statistical analysis of the experimental drying data, sensory analysis of the final dried product, and a simplified economic analysis of the systems studied.
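A minimal sketch of the infinite-flat-plate solution of Fick's second law used to fit drying curves of this kind, assuming uniform initial moisture and negligible external resistance (series truncated at n_terms):

import numpy as np

def moisture_ratio_slab(t, D, half_thickness, n_terms=50):
    # Analytical moisture ratio MR(t) for an infinite flat plate
    n = np.arange(n_terms)
    a = 2 * n + 1
    series = np.sum(np.exp(-(a * np.pi / 2) ** 2 * D * t / half_thickness ** 2) / a ** 2)
    return (8 / np.pi ** 2) * series

Fitting D so that moisture_ratio_slab reproduces the measured weight-loss curve gives the effective diffusivity for each slice thickness and temperature.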
Abstract:
Two-level factorial designs are widely used in industrial experimentation. However, a design with many factors requires a large number of runs, and many replications of the treatments may not be feasible given limitations of resources and time, making the experiment expensive. In these cases, unreplicated designs are used. But with only one replicate there is no internal estimate of the experimental error with which to judge the significance of the observed effects. One possible solution to this problem is to use normal plots or half-normal plots of the effects. Many experimenters use the normal plot, while others prefer the half-normal plot, often, in both cases, without justification. The controversy about the use of these two graphical techniques motivates this work, since there is no record of a formal procedure or statistical test that indicates "which one is best". The choice between the two plots seems to be a subjective issue. The central objective of this master's thesis is, then, to perform an experimental comparative study of the normal plot and the half-normal plot in the context of the analysis of unreplicated 2^k factorial experiments. This study involves the construction of simulated scenarios, in which the performance of the plots in detecting significant effects and identifying outliers is evaluated in order to answer the following questions: Can one plot be better than the other? In which situations? What information does one plot add to the analysis of the experiment that might complement that provided by the other? What are the restrictions on the use of these plots? In this way, this work intends to confront the two techniques, examining them simultaneously in order to identify similarities, differences, or relationships that contribute to the construction of a theoretical reference to justify, or to aid in, the experimenter's decision about which of the two graphical techniques to use and the reason for that use. The simulation results show that the half-normal plot is better for assisting in the judgement of the effects, while the normal plot is recommended for detecting outliers in the data.
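A minimal sketch of how a half-normal plot of effects is built for an unreplicated 2^k design (Daniel's construction); the effect values below are hypothetical:

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

effects = np.array([21.6, 3.1, 9.9, -0.8, 1.2, -22.9, 0.5])  # hypothetical 2^3 effects

abs_eff = np.sort(np.abs(effects))
m = len(abs_eff)
probs = 0.5 + 0.5 * (np.arange(1, m + 1) - 0.5) / m  # half-normal plotting positions
quantiles = stats.norm.ppf(probs)

plt.scatter(quantiles, abs_eff)
plt.xlabel("half-normal quantile")
plt.ylabel("|effect|")
plt.show()

Inactive effects fall near a straight line through the origin; active effects stand out above it. The normal plot is the same construction without the absolute value, which preserves the sign information useful for spotting outliers.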
Abstract:
Geological modeling allows the simulation, at laboratory scale, of the geometric and kinematic evolution of geological structures. Knowledge of these structures becomes more important when we consider their role in creating traps for, or conduits of, oil and water. In the present work we simulated the formation of folds and faults in an extensional environment, through physical and numerical modeling, using a sandbox apparatus and the MOVE2010 software. The physical modeling of structures developed in the hangingwall of a listric fault showed the formation of active and inactive axial zones. In agreement with the literature, we verified the formation of a rollover between these two axial zones. The crestal collapse of the anticline formed grabens, limited by secondary faults, perpendicular to the extension and with a curvilinear aspect. Adjacent to these faults we registered the formation of transversal folds, parallel to the extension, characterized by a syncline in the fault hangingwall. We also observed drag folds near the fault surfaces; these folds are parallel to the fault surface and present an anticline in the footwall and a syncline in the hangingwall. To observe the influence of geometric variations (dip and width) in the flat of a flat-ramp fault, we ran two experimental series, the first with the flat varying in dip and width, and the second keeping the flat horizontal while varying its width. These experiments developed secondary faults, perpendicular to the extension, that can be grouped into three sets: (i) antithetic faults with a curvilinear geometry and synthetic faults with a more rectilinear geometry, both nucleated at the base of the sedimentary pile; the normal antithetic faults can rotate during the extension, presenting a pseudo-reverse kinematics; (ii) faults nucleated at the top of the sedimentary pile, whose propagation occurs through the coalescence of segments, sometimes originating relay ramps; (iii) reverse faults, nucleated at the flat-ramp interface. Comparing the two models, we verified that the dip of the flat favors a differentiated nucleation of faults at the two extremities of the master fault. These two flat-ramp models also generated an anticline-syncline pair, drag folds, and transversal folds. The anticline formed above the flat, sub-parallel to the master fault plane, while the syncline formed in areas more distal to the fault. From the geometric variation of these two folds we can define three structural domains. Using the physical experiments as a template, we also ran numerical modeling experiments with flat-ramp faults presenting variation in the flat. Secondary antithetic, synthetic, and reverse faults were generated in both models. The numerical modeling formed two folds: an anticline above the flat and a syncline further away from the master fault. The geometric variation of these two folds allowed the definition of three structural domains parallel to the extension. These data reinforce the physical models. Comparisons between natural data from a flat-ramp fault in the Potiguar basin and the data from the physical and numerical simulations showed that, in both cases, variation in the geometry of the flat produces variation in the hangingwall geometry.
Abstract:
Physical structural modeling is increasingly used in geology to provide information about the evolutionary stages (nucleation, growth) and geometry of geological structures at various scales. In simulations of extensional tectonics, modeling provides a better understanding of fault geometry and of the evolution of the tectono-stratigraphic architecture of rift basins. In this study, a sandbox-type apparatus was used to study the nucleation and development of basins influenced by previous structures within the basement, variably oriented with respect to the main extensional axis. Two types of experiments were conducted in order to: (i) simulate the individual (independent) development of half-grabens oriented orthogonal or oblique to the extension direction; (ii) simulate the simultaneous development of such half-grabens, orthogonal or oblique to the extension direction. In both cases the same materials (sand mixed with gypsum) were used and the same boundary conditions were maintained. The results were compared with a natural analogue represented by the Rio do Peixe Basin (one of the Eocretaceous interior basins of Northeast Brazil). The obtained models allowed us to observe the development of segmented border faults with listric geometry, often forming relay ramps, and the development of inner basin faults that affect only the basal strata, like those observed in the seismic sections of the natural analogue. The results confirm the importance of the basement tectonic heritage for the geometry of rift depocenters.
Abstract:
In recent decades, analogue modelling has been used in geology to improve our knowledge of how geological structures nucleate, how they grow, and what the key controls on these processes are. The use of this tool in the oil industry, to support seismic interpretation and mainly to search for structural traps, has contributed to its spread in the literature. Nowadays, physical modelling has a large field of applications, from landslides to granite emplacement along shear zones. In this work, we use physical modelling to study the influence of mechanical stratification on the nucleation and development of faults and fractures in a context of orthogonal and conjugate oblique basins. To simulate a mechanical stratigraphy we used different materials with distinct physical properties, such as gypsum powder, glass beads, dry clay, and quartz sand. Some experiments were run with a PIV (Particle Image Velocimetry) system, an instrument that shows the movement of the particles at each deformation increment. Two series of experiments were studied. (i) Series MO: we tested the development of normal faults in the context of a basin orthogonal to the extension direction. The experiments varied the materials and the strata thickness, and some were run with syntectonic sedimentation. We registered differences in the nucleation and growth of faults in layers with different rheological behavior. The gypsum powder layer behaves in a more competent way, generating a great number of high-angle fractures. These fractures evolve into faults that exhibit a steeper dip than where they cross less competent layers, such as the quartz sand. The competent layer exhibits faulted blocks arranged in a typical domino style. Cataclastic breccias developed along the faults affecting the competent layers and showed different evolutionary histories, depending on the deforming stratigraphic sequence. (ii) Series MOS2: normal faults were analyzed in conjugate sub-basins (oblique to the extension direction) developed in sequences with and without rheological contrast. In the experiments with rheological contrast, two important grabens developed along the faulted margins, differing from the sub-basins with mechanical stratigraphy. Both experiments developed oblique fault systems and, in the area where the sub-basins intersect, the fault traces became strongly curved.
Abstract:
Advanced Oxidation Processes (AOPs) are techniques involving the formation of the hydroxyl radical (HO•), which has a high organic-matter oxidation rate. The industrial application of these processes has been increasing due to their capacity to degrade recalcitrant substances that cannot be completely removed by traditional effluent treatment processes. In the present work, phenol degradation by the photo-Fenton process, based on the addition of H2O2, Fe2+, and light radiation, was studied. An experimental design was developed to analyze the effect of the phenol, H2O2, and Fe2+ concentrations on the fraction of total organic carbon (TOC) degraded. The experiments were performed in a batch photochemical parabolic reactor with a capacity of 1.5 L. Samples of the reaction medium were collected at different reaction times and analyzed in a Shimadzu TOC analyzer (TOC-VWP). The results showed a negative effect of the phenol concentration and a positive effect of the two other variables on the degraded TOC fraction. A statistical analysis of the experimental design showed that the hydrogen peroxide concentration was the most influential variable on the TOC fraction degraded at 45 minutes, and it generated a model with R² = 0.82, which predicted the experimental data with low precision. The Visual Basic for Applications (VBA) tool was then used to generate a neural network model and a photochemical database. This model presented R² = 0.96 and accurately predicted the response data used for testing. These results indicate the possible industrial application of the developed tool, mainly because of its simplicity, low cost, and ease of access to the program.
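As a rough Python stand-in for the VBA neural network tool described above (the coded design matrix and responses are hypothetical, not the measured TOC fractions):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Columns: coded levels of [phenol], [H2O2], [Fe2+]; y: TOC degraded fraction
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1]], dtype=float)
y = np.array([0.42, 0.28, 0.71, 0.55, 0.50, 0.35, 0.80, 0.63])  # hypothetical responses

model = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[0.0, 0.5, 0.5]]))  # predicted TOC fraction at an intermediate condition

The negative phenol effect and positive H2O2/Fe2+ effects reported in the abstract would appear as the signs of the gradients of the fitted response surface.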