896 results for "Modelagem matemática" (mathematical modeling)


Relevance:

60.00%

Publisher:

Abstract:

The usual load flow programs were, in general, developed to simulate electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms used in these formulations were based mostly on the characteristics of transmission systems, which were the main concern of engineers and researchers. The physical characteristics of transmission systems, though, are quite different from those of distribution systems. In transmission systems, voltage levels are high and the lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into account. Moreover, the loads in transmission systems have a macro nature, such as cities, neighborhoods, or large industries; these loads are generally close to balanced, which reduces the need for three-phase load flow methodologies. Distribution systems, on the other hand, present different characteristics: voltage levels are low compared with transmission levels, which practically cancels the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries supply small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high, and three-phase methodologies take on an important role. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, require a three-phase methodology in order to allow the simulation of their real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in this work to simulate the steady-state behavior of distribution systems. The Power Summation Algorithm was used as the basis for the three-phase method, since it had already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases; the earth effect is taken into account through the Carson reduction. It is important to point out that, although the loads are normally connected to the transformer secondaries, the hypothesis of wye- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was developed, allowing various types of configurations to be simulated according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting subsequent optimization processes. These parameters are obtained by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers.
After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to several feeders are presented, to give insight into their performance and accuracy.
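
As a rough illustration of the backward/forward sweep idea behind the Power Summation Algorithm, the sketch below solves a tiny single-phase radial feeder (the thesis develops the three-phase version with magnetic coupling); the feeder topology, impedances, and loads are invented for the example, not taken from the work.

```python
import numpy as np

# Hypothetical 3-node radial feeder: for each node i > 0, its upstream
# (parent) node, the series impedance of the branch feeding it (ohm),
# and its complex load (VA). All values are illustrative.
parent = {1: 0, 2: 1, 3: 2}
z = {1: 0.10 + 0.20j, 2: 0.15 + 0.30j, 3: 0.20 + 0.40j}
load = {1: 50e3 + 20e3j, 2: 30e3 + 10e3j, 3: 40e3 + 15e3j}
v_source = 13.8e3 / np.sqrt(3)          # substation phase voltage, V
order = [1, 2, 3]                       # nodes sorted from source to leaves

v = {i: v_source + 0j for i in [0] + order}
for _ in range(50):                     # sweep until voltages settle
    # backward sweep: accumulate downstream load plus branch losses
    s = {i: load[i] for i in order}
    for i in reversed(order):
        s[i] += sum(s[j] for j in order if parent[j] == i)
        s[i] += z[i] * abs(s[i] / v[i]) ** 2      # approximate branch loss
    # forward sweep: update voltages from the source toward the leaves
    v_old = dict(v)
    for i in order:
        v[i] = v[parent[i]] - z[i] * np.conj(s[i] / v_old[i])
    if max(abs(v[i] - v_old[i]) for i in order) < 1e-6:
        break

for i in order:
    print(f"node {i}: |V| = {abs(v[i]):.1f} V")
```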

Relevance:

60.00%

Publisher:

Abstract:

The present work describes the use of a mathematical tool to solve problems arising from control theory, including the identification, the analysis of the phase portrait and stability, and the temporal evolution of the current of the plant, an induction motor. System identification is an area of mathematical modeling whose objective is the study of techniques that can determine a dynamic model representing a real system. The tool used in the identification and analysis of the nonlinear dynamical system is the Radial Basis Function (RBF) network. The process, or plant, has an unknown mathematical model, but belongs to a particular class whose internal dynamics can be modeled. An analysis of the asymptotic stability of the RBF is presented as a contribution. The identification using radial basis functions is demonstrated through computer simulations on a real data set obtained from the plant.
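
A minimal sketch of RBF-based identification, under the common scheme of fixing Gaussian centers and widths and fitting only the output-layer weights by linear least squares; the centers, width, and toy input/output data below are assumptions, not the thesis plant.

```python
import numpy as np

# Toy "measured" response standing in for plant data (assumption)
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(100)

centers = np.linspace(-1, 1, 10)   # fixed RBF centers (assumption)
width = 0.3                        # common Gaussian width (assumption)

def design(x):
    # one Gaussian basis function per center
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# output-layer weights by linear least squares
w, *_ = np.linalg.lstsq(design(x), y, rcond=None)
y_hat = design(x) @ w
print("RMS identification error:", np.sqrt(np.mean((y - y_hat) ** 2)))
```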

Relevance:

60.00%

Publisher:

Abstract:

The petroleum industry, as a consequence of intense exploration and production activity, is responsible for a great part of the residues generated, which are considered toxic and pollutant to the environment. Among these is oil sludge, produced during the production, transportation, and refining phases. The purpose of this work was to develop a process to recover the oil present in oil sludge, so that the recovered oil can be used as fuel or returned to the refining plant. From preliminary tests, the most important independent variables were identified: temperature, contact time, and solvent and acid volumes. Initially, a series of parameters was determined to characterize the oil sludge. A special extractor was designed to work with oily waste. Two experimental designs were applied: fractional factorial and Doehlert. The tests were carried out in batch mode under the conditions of the applied experimental designs. The efficiency obtained in the oil extraction process was 70% on average. The oil sludge is composed of 36.2% oil, 16.8% ash, 40% water, and 7% volatile constituents. However, the statistical analysis showed that the quadratic model did not fit the process well, with a relatively low determination coefficient (60.6%); this occurred because of the complexity of the oil sludge. To obtain a model able to represent the experiments, an artificial neural network (ANN) was used, generated initially with 2, 4, 5, 6, 7, and 8 neurons in the hidden layer, 64 experimental results, and 10,000 presentations (iterations). The smallest dispersions between the experimental and calculated values were verified with 4 neurons, considering the ratio of experimental points to estimated parameters. The analysis of the average deviations of the test set divided by the respective training set showed that 2,150 presentations yielded the best parameter values. For the new model, the determination coefficient was 87.5%, which is quite satisfactory for the studied system.
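
The kind of ANN surrogate described above can be sketched as follows, assuming scikit-learn is available; the single hidden layer with 4 neurons follows the text, but the 64 "experimental" points and the response function are synthetic placeholders, not the thesis measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# 64 hypothetical runs over 4 factors (temperature, contact time,
# solvent volume, acid volume), with an invented efficiency response
rng = np.random.default_rng(1)
X = rng.uniform(size=(64, 4))
y = 0.7 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(64)

# one hidden layer with 4 neurons, as selected in the text
net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=1)
net.fit(X, y)
print("R^2 on training data:", net.score(X, y))
```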

Relevance:

60.00%

Publisher:

Abstract:

With the increase in water pollution in recent years, much progress has been made in research on the treatment of contaminated waters. For wastewaters containing highly toxic organic compounds, to which biological treatment cannot be applied, Advanced Oxidation Processes (AOP) are an alternative for the degradation of non-biodegradable and toxic organic substances, because these processes are based on the generation of the hydroxyl radical, a highly reactive species able to degrade practically all classes of organic compounds. In general, AOPs require special ultraviolet (UV) lamps inside the reactors. These lamps have a high electric power demand, which is one of the largest obstacles to the application of these processes on an industrial scale. This work involves the development of a new photochemical reactor composed of 12 low-cost black-light fluorescent lamps (SYLVANIA, black light, 40 W) as the UV radiation source. The studied process was the photo-Fenton system, a combination of ferrous ions, hydrogen peroxide, and UV radiation, employed for the degradation of a synthetic wastewater containing phenol, one of the main pollutants of the petroleum industry, as a model pollutant. Preliminary experiments were carried out to establish the operational conditions of the reactor and to assess the effects of the radiation source intensity and the lamp distribution inside the reactor. Samples were collected during the experiments and analyzed for dissolved organic carbon (DOC) content using a Shimadzu 5000A TOC analyzer. High Performance Liquid Chromatography (HPLC) was also used to identify the catechol and hydroquinone formed during the degradation of phenol. Actinometry indicated a photon flow of 9.06·10^18 photons·s^-1 for the 12 lamps switched on. A factorial experimental design was elaborated, from which it was possible to evaluate the influence of the reactant concentrations (Fe2+ and H2O2) and to determine the most favorable experimental conditions ([Fe2+] = 1.6 mM and [H2O2] = 150.5 mM). It was verified that increasing the ferrous ion concentration favors the process up to a limit, beyond which further increases have a negative effect. H2O2 exhibited a positive effect; at high concentrations, however, the degradation rate reaches a maximum. The mathematical modeling of the process was accomplished using the artificial neural network technique.
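
The factorial analysis of the reactant concentrations can be illustrated with a two-level design; the coded levels and DOC-removal responses below are invented, the point being only how main effects and the interaction are read off the design matrix.

```python
import numpy as np

# 2^2 factorial in coded units: columns are Fe2+ level and H2O2 level;
# one hypothetical % DOC-removal response per run
design = np.array([[-1, -1], [+1, -1], [-1, +1], [+1, +1]])
removal = np.array([42.0, 55.0, 60.0, 68.0])

# main effect = mean response at high level minus mean at low level
for name, col in zip(["Fe2+", "H2O2"], design.T):
    effect = removal[col == +1].mean() - removal[col == -1].mean()
    print(f"main effect of {name}: {effect:+.1f} % removal")

# interaction from the sign of the product column
prod = design.prod(axis=1)
interaction = removal[prod == +1].mean() - removal[prod == -1].mean()
print(f"Fe2+ x H2O2 interaction: {interaction:+.1f} % removal")
```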

Relevance:

60.00%

Publisher:

Abstract:

Environmental sustainability has become one of the topics of greatest interest in industry, mainly because of effluent generation. Phenols are found in the effluents of many industries, such as refineries, coal processing, pharmaceutical, plastics, paints, and pulp and paper plants. Because phenolic compounds are toxic to humans and aquatic organisms, Brazilian Federal Resolution CONAMA No. 430 of 13 May 2011 limits the maximum phenol content for discharge into freshwater bodies to 0.5 mg·L-1. In effluent treatment, liquid-liquid extraction is the most economical process for phenol recovery, because it consumes little energy; in most cases, however, it employs an organic solvent, and the high toxicity of such a compound can itself cause environmental problems. Hence the need for new methodologies that replace these solvents with biodegradable ones. Literature studies demonstrate the feasibility of removing phenolic compounds from aqueous effluents with biodegradable solvents. In the extraction technique known as "cloud point extraction", a nonionic surfactant is used as the extracting agent for the phenolic compounds. In order to optimize the phenol extraction process, this work studies the mathematical modeling and optimization of the extraction parameters and investigates the effect of the independent variables on the process. A 3² full factorial design was carried out with operating temperature and surfactant concentration as independent variables and, as dependent variables, the extraction parameters: volumetric fraction of the coacervate phase, residual concentrations of surfactant and phenol in the dilute phase after phase separation, and phenol extraction efficiency. To achieve these objectives, the work was carried out in five steps: (i) selection of literature data; (ii) use of the Box-Behnken model to obtain mathematical models describing the phenol extraction process; (iii) data analysis with STATISTICA 7.0, using analysis of variance to assess model significance and predictive power; (iv) model optimization using the response surface method; (v) validation of the mathematical models using additional measurements, from samples different from those used to build the models. The results showed that the mathematical models found are able to calculate the effect of surfactant concentration and operating temperature on each extraction parameter studied, within the boundaries used. The model optimization yielded consistent and applicable results in a simple and quick way, leading to high efficiency in process operation.
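
Steps (ii)-(iv) can be sketched as follows: fit a quadratic response-surface model to the nine runs of a 3² design and locate the optimum on a grid. The coded factors stand in for temperature and surfactant concentration, and the nine efficiency values are placeholders, not the thesis data.

```python
import numpy as np

# 3^2 design in coded units (-1, 0, +1) with hypothetical efficiencies (%)
levels = [-1, 0, 1]
T, C = np.meshgrid(levels, levels)
T, C = T.ravel(), C.ravel()
eff = np.array([70, 74, 72, 78, 85, 82, 76, 83, 80], float)

# quadratic model: b0 + b1*T + b2*C + b3*T^2 + b4*C^2 + b5*T*C
X = np.column_stack([np.ones_like(T), T, C, T**2, C**2, T * C])
b, *_ = np.linalg.lstsq(X, eff, rcond=None)

# evaluate the fitted surface on a fine grid and pick the maximum
tt, cc = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
surf = b[0] + b[1]*tt + b[2]*cc + b[3]*tt**2 + b[4]*cc**2 + b[5]*tt*cc
k = np.unravel_index(surf.argmax(), surf.shape)
print(f"predicted optimum: T = {tt[k]:+.2f}, C = {cc[k]:+.2f}, "
      f"efficiency = {surf[k]:.1f} %")
```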

Relevance:

60.00%

Publisher:

Abstract:

The oil industry, facing great economic and environmental impacts, has increasingly invested in research aiming at a more satisfactory treatment of its largest effluent, produced water. Produced water is mostly discarded at sea after only basic treatment, without reuse. This effluent contains a range of organic compounds of high toxicity that are difficult to remove, such as polycyclic aromatic hydrocarbons, salts, and heavy metals. The main objective of this work was to study the solar distillation of produced water pre-treated to remove salts and other contaminants, using a hybrid system with a pre-heater. The developed apparatus, called the solar system, consists of a solar heater and a conventional solar distillation still. The first device comprises a water tank, a flat-plate solar collector, and a thermal reservoir. The solar still is of the simple-effect type, with 1 m² of flat area and 20° of inclination. This dissertation was divided into five steps: measurements in the solar system (temperatures, distillate flow rate, and weather data); modeling and simulation of the system; study of the vapor-liquid equilibrium of a synthetic wastewater consisting of an aqueous solution of p-xylene; physical and chemical analyses of samples of the feed, distillate, and residue; and a study of the climatological variables relevant to Natal-RN. The solar system was tested separately with supply water, aqueous NaCl, and synthetic produced water. Temperatures of the thermal reservoir, water tank, and still (liquid and vapor phases) were measured every minute. Solar radiation and rainfall data were obtained from INPE (National Institute for Space Research). The solar pre-heater proved effective for the liquid systems tested. The reservoir fluid had an average temperature of 58 °C, which enabled the feed to be pre-heated before entering the still. The temperature profile in the solar still showed a behavior similar to the daily solar radiation, with temperatures near 70 °C. The distillation had an average yield of 2.4 L/day, i.e., an efficiency of 27.2%. Mathematical modeling aided the identification of the most important variables and parameters of the solar system. The study of the vapor-liquid equilibrium, based on Total Organic Carbon (TOC) analysis, indicated heteroazeotropy, with the vapor phase more concentrated in p-xylene. In the physical-chemical analyses of pH, conductivity, Total Dissolved Solids (TDS), chlorides, cations (including heavy metals), and anions, the distillate showed satisfactory results, indicating a potential for reuse. The climatological study indicates the region of Natal-RN as favorable to the operation of solar systems, although the use of auxiliary heating during periods of higher rainfall and cloud cover is recommended.
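
The reported efficiency can be checked with a back-of-the-envelope energy balance: the latent heat of the daily distillate divided by the solar energy intercepted by the 1 m² still. The distillate yield and area come from the text; the daily insolation for Natal-RN is an assumption. With these numbers the estimate lands close to the 27.2% reported above.

```python
# simple thermal-efficiency check of the solar still
latent_heat = 2.26e6        # J/kg, latent heat of vaporization of water
distillate = 2.4            # L/day ~ kg/day, from the text
area = 1.0                  # m^2 of still aperture, from the text
insolation = 5.5 * 3.6e6    # J/(m^2 day), assuming ~5.5 kWh/m^2/day

efficiency = distillate * latent_heat / (insolation * area)
print(f"thermal efficiency ~ {100 * efficiency:.1f} %")
```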

Relevance:

60.00%

Publisher:

Abstract:

Eutrophication has been listed as one of the main water pollution problems at a global level. In the Brazilian semi-arid region this problem takes on even greater proportions owing to the water scarcity characteristic of the region. Studies that promote understanding of the mechanisms responsible for the expansion and control of algal blooms are extremely important for the development of predictive eutrophication models and for reservoir management in the semi-arid region, and are essential to improving the water quality of these environments. The present study had as its main aims to evaluate the temporal pattern of the trophic state, considering the influence of nutrients (N and P) and of light availability in the water column on the development of phytoplankton biomass, and to mathematically model the changes in phosphorus and chlorophyll-a concentrations in the Cruzeta man-made lake, located in Seridó, a typical semi-arid region of Rio Grande do Norte. To this end, fortnightly monitoring was performed at 5 stations in the reservoir from March 2007 to May 2008. The concentrations of total phosphorus, total organic nitrogen, chlorophyll a, and total, fixed, and volatile suspended solids were measured, as well as the Secchi transparency and the water-column profiles of photosynthetically active radiation (PAR), temperature, pH, dissolved oxygen, and electrical conductivity. The vertical profiles showed periods of chemical and thermal stratification, especially in the rainy season, due to the increased water-column depth; nevertheless, the reservoir can be classified as warm polymictic. During the study period the reservoir was characterized as eutrophic based on the phosphorus concentrations and, most of the time, as mesotrophic based on the chlorophyll-a concentrations, according to the Thornton & Rast (1993) classification. The N:P ratios suggest N limitation; conversely, no significant linear relationships between algal biomass and nutrients (N and P) were observed in our study. A relevant finding, however, was the significant negative correlation between Kt and chlorophyll a (r² = 0.83) at the end of the 2007 drought and during the 2008 rainy season, together with the collapse of algal biomass observed at the end of the drought season (Dec/07). The equation used to simulate the change in total phosphorus was not satisfactory, and parameters able to increase the predictive power of the model still need to be included. The chlorophyll-a simulation showed a good fitting trend; however, the calibrated model parameters need to be checked and the equation subsequently validated.
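
The text does not specify the form of the phosphorus equation, so as an illustration only, here is a minimal lumped total-P balance of the classic inflow-load / flushing / net-settling type often used for reservoirs; every coefficient and forcing value below is an assumption, not a calibrated Cruzeta parameter.

```python
# dP/dt = load/V - flushing - net settling   (all values illustrative)
V = 2.0e7          # reservoir volume, m^3 (assumed)
Q = 2.0e5          # outflow, m^3/day (assumed)
W = 8.0e3          # external P load, g/day (assumed)
vs = 0.05          # apparent settling velocity, m/day (assumed)
depth = 5.0        # mean depth, m (assumed)

P = 0.06           # initial total P, g/m^3 (= mg/L), assumed
dt = 1.0           # time step, days
for day in range(365):
    dPdt = W / V - (Q / V) * P - (vs / depth) * P
    P += dt * dPdt

print(f"simulated total P after one year: {1000 * P:.1f} ug/L")
```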

Relevance:

60.00%

Publisher:

Abstract:

Urban centers in the Pitimbu watershed rely on significant groundwater sources for public supply. Studies of the Dunas-Barreiras aquifer are therefore relevant to expand knowledge about it and to help manage water resources in the region. An essential tool for this management is the numerical modeling of groundwater flow. In this work, we developed a groundwater flow model for the Pitimbu watershed using Visual Modflow, version 2.7.1, which applies the finite difference method to solve the governing equation of groundwater flow dynamics. We carried out the numerical simulation of a steady-state model for the entire basin. The model was built from geographical, geomorphological, and hydrogeological studies of the area, which defined the boundary conditions and the parameters required for the numerical calculation. Owing to the unavailability of current monitoring data for the aquifer, it was not possible to calibrate the model. However, the simulation results showed that the overall water balance approached zero, thus satisfying the equation for the three-dimensional behavior of the hydraulic head in steady state. Variations in the aquifer recharge data were made to verify the impact of this contribution on the water balance of the system, especially in the scenario in which the recharge due to drains and sinks was removed. According to the results generated by Visual Modflow, a significant lowering of the hydraulic head occurred, with drawdowns ranging from 16.4 to 82 feet. With the results obtained, it can be said that the modeling proved to be a valid tool for the management of water resources in the Pitimbu River basin and for supporting new studies.
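
A toy version of the equation such models solve: for steady, homogeneous, confined flow with no sources, the governing equation reduces to Laplace's equation, relaxed here by Jacobi iteration on a small grid with fixed-head boundaries. The grid size and boundary heads are arbitrary; real MODFLOW grids carry heterogeneous conductivities, recharge, drains, and wells.

```python
import numpy as np

# fixed-head (Dirichlet) boundaries on a 30 x 30 grid (values assumed)
h = np.zeros((30, 30))
h[:, 0] = 25.0                                    # west boundary, m
h[:, -1] = 5.0                                    # east boundary, m
h[0, :] = h[-1, :] = np.linspace(25.0, 5.0, 30)   # north/south boundaries

for _ in range(10_000):
    h_new = h.copy()
    # interior nodes: head equals the average of the four neighbours
    h_new[1:-1, 1:-1] = 0.25 * (h[2:, 1:-1] + h[:-2, 1:-1]
                                + h[1:-1, 2:] + h[1:-1, :-2])
    if np.max(np.abs(h_new - h)) < 1e-8:          # converged
        h = h_new
        break
    h = h_new

# for these boundaries the exact solution is the linear east-west gradient
print(f"head at the domain centre: {h[15, 15]:.2f} m")
```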

Relevance:

60.00%

Publisher:

Abstract:

In this work, we study and compare two percolation algorithms, one elaborated by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis as well as an experimental comparison. The work is divided into three chapters. The first presents some definitions and theorems necessary for a more formal mathematical study of percolation. The second presents the techniques used to estimate algorithm complexity: worst case, best case, and average case. We use the worst-case technique to estimate the complexity of both algorithms, so that they can be compared. The last chapter presents several characteristics of each algorithm and, through the theoretical complexity estimates and a comparison of the execution times of the most important part of each, compares these important algorithms that simulate percolation.
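
The core of the Newman-Ziff algorithm can be sketched with union-find: sites are occupied one at a time in random order and clusters are merged incrementally, so a single near-linear pass sweeps every occupation fraction. The lattice size and the checkpoints printed below are arbitrary choices for the sketch.

```python
import numpy as np

L = 64
N = L * L
parent = np.full(N, -1)        # -1 = empty site; occupied sites form a forest
size = np.zeros(N, dtype=int)  # cluster size, stored at each root

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

rng = np.random.default_rng(42)
largest = 0
checkpoints = {int(0.50 * N), int(0.59 * N), int(0.65 * N)}
for step, s in enumerate(rng.permutation(N), start=1):
    parent[s], size[s] = s, 1           # occupy site s as its own cluster
    root = s
    for d in (-1, +1, -L, +L):          # four nearest neighbours
        n = s + d
        if d in (-1, +1) and n // L != s // L:
            continue                    # no wrapping across row edges
        if 0 <= n < N and parent[n] != -1:
            rn = find(n)
            if rn != root:
                parent[rn] = root       # union: merge the two clusters
                size[root] += size[rn]
    largest = max(largest, size[root])
    if step in checkpoints:
        print(f"p = {step / N:.2f}: largest cluster spans "
              f"{largest / N:.2%} of the lattice")
```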

Relevance:

60.00%

Publisher:

Abstract:

In this work, we study the survival cure rate model proposed by Yakovlev et al. (1993), based on a structure of competing risks concurring to cause the event of interest, and the approach proposed by Chen et al. (1999), in which covariates are introduced to model the risk amount. We focus on the topic of covariate measurement error, considering the use of the corrected score method in order to obtain consistent estimators. A simulation study is carried out to evaluate the behavior of the estimators obtained by this method in finite samples. The simulation aims to identify not only the impact on the regression coefficients of the covariates measured with error (Mizoi et al. 2007) but also on the coefficients of the covariates measured without error. We also verify the adequacy of the piecewise exponential distribution for the cure rate model with measurement error. Finally, the model is applied to real data.
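
Simulation from the promotion-time cure model can be sketched as follows: each subject carries N ~ Poisson(theta(x)) latent competing causes and fails at the minimum of N latent times, with cure when N = 0; the log link for theta follows Chen et al. (1999). The parameter values, the error-free covariate, and the unit-exponential latent times are all assumptions made for the sketch (the thesis works with the piecewise exponential and with covariates measured with error).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                  # covariate, error-free in this sketch
theta = np.exp(0.5 + 0.8 * x)           # mean number of competing causes
N = rng.poisson(theta)                  # latent causes per subject
cured = N == 0                          # no cause ever produces the event

# min of N iid Exp(1) times is Exp with rate N, i.e. scale 1/N
t = np.where(cured, np.inf,
             rng.exponential(1.0, size=n) / np.maximum(N, 1))

print("empirical cure fraction:     ", cured.mean())
print("model cure fraction E[e^-theta]:", np.exp(-theta).mean())
```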

Relevance:

60.00%

Publisher:

Abstract:

In this work we present an exposition of the mathematical theory of compactly supported orthogonal wavelets in the context of multiresolution analysis. These wavelets are particularly attractive because they lead to a stable and very efficient algorithm, the Fast Wavelet Transform (FWT). One of our objectives is to develop efficient algorithms for calculating the wavelet coefficients (FWT) through Mallat's pyramid algorithm, and to discuss its connection with filter banks. We also study the concept of multiresolution analysis, the context in which wavelets can be understood and built naturally, taking an important step in the change from the mathematical universe (continuous domain) to the universe of representation (discrete domain).
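
A minimal Haar instance of Mallat's pyramid algorithm makes the filter-bank connection explicit: each level is a pair of 2-tap filters followed by downsampling by two, and longer Daubechies filters only change the taps. The signal below is a toy example.

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass (scaling) filter, Haar
g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass (wavelet) filter, Haar

def fwt(signal, levels):
    """Pyramid algorithm: recursively split into approximation + details."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        # one analysis stage of the filter bank: filter, then downsample by 2
        approx = a[0::2] * h[0] + a[1::2] * h[1]
        detail = a[0::2] * g[0] + a[1::2] * g[1]
        details.append(detail)
        a = approx
    return a, details          # coarsest approximation + detail coefficients

x = np.sin(2 * np.pi * np.arange(64) / 16)
approx, details = fwt(x, 3)
print("approximation length:", len(approx))            # 64 / 2^3 = 8
print("detail lengths:", [len(d) for d in details])    # [32, 16, 8]

# orthogonality: the transform preserves the signal energy
energy = np.sum(approx**2) + sum(np.sum(d**2) for d in details)
print("energy preserved:", np.allclose(energy, np.sum(x**2)))
```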

Relevance:

60.00%

Publisher:

Abstract:

This work presents a brief discussion of methods for estimating the parameters of the Generalized Pareto distribution (GPD). The following techniques are addressed: Moments (MOM), Maximum Likelihood (MLE), Biased Probability Weighted Moments (PWMB), Unbiased Probability Weighted Moments (PWMU), Minimum Density Power Divergence (MDPD), Median (MED), Pickands (PICKANDS), Maximum Penalized Likelihood (MPLE), Maximum Goodness-of-Fit (MGF), and Maximum Entropy (POME), the last of which is the focus of this manuscript. By way of illustration, Generalized Pareto distributions were fitted to a sequence of intraplate earthquakes that occurred in the city of João Câmara, in the northeastern region of Brazil, which was monitored continuously for two years (1987 and 1988). The MLE and POME were found to be the most efficient methods, giving basically the same mean squared errors. Based on a threshold of magnitude 1.5, the seismic risk for the city was estimated, along with the return levels for earthquakes of magnitude 1.5, 2.0, 2.5, and 3.0 and for the most intense earthquake ever registered in the city, which occurred in November 1986 with a magnitude of about 5.2.
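
A sketch of one of the compared estimators, the MLE, obtained by numerically minimizing the GPD negative log-likelihood for threshold exceedances; the sample is simulated by inversion, not the João Câmara catalogue, and the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# simulate GPD exceedances by inverting F(x) = 1 - (1 + xi*x/sigma)^(-1/xi)
rng = np.random.default_rng(0)
xi_true, sigma_true = 0.1, 0.5          # illustrative parameters
u = rng.uniform(size=500)
excesses = sigma_true / xi_true * ((1 - u) ** (-xi_true) - 1)

def nll(params):
    """GPD negative log-likelihood for xi != 0."""
    xi, sigma = params
    if sigma <= 0:
        return np.inf
    z = 1 + xi * excesses / sigma
    if np.any(z <= 0):                  # outside the support
        return np.inf
    return len(excesses) * np.log(sigma) + (1 / xi + 1) * np.sum(np.log(z))

fit = minimize(nll, x0=[0.05, 1.0], method='Nelder-Mead')
print("MLE (xi, sigma):", np.round(fit.x, 3))
```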

Relevance:

60.00%

Publisher:

Abstract:

In this work we study the survival cure rate model proposed by Yakovlev (1993), considered in a competing risks setting. Covariates are introduced to model the cure rate, and we allow some covariates to have missing values. We consider only the case in which the missing covariates are categorical, and we implement the EM algorithm via the method of weights for maximum likelihood estimation. We present a Monte Carlo simulation experiment to compare the properties of the estimators based on this method with those of the estimators under the complete-case scenario. In this experiment, we also evaluate the impact on the parameter estimates of increasing the proportion of immune and censored individuals among the non-immune ones. We demonstrate the proposed methodology with a real data set involving the time until graduation for the undergraduate course in Statistics of the Universidade Federal do Rio Grande do Norte.
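
The method of weights can be illustrated on a toy problem with one missing binary covariate: in the E-step, each incomplete record is expanded over the candidate covariate values with posterior weights, and in the M-step a weighted likelihood is maximized. To keep the sketch short, a plain Bernoulli outcome stands in for the full cure rate likelihood; all data and parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 2000
x = rng.binomial(1, 0.4, n)                 # binary covariate (assumed)
y = rng.binomial(1, expit(-0.5 + 1.2 * x))  # toy outcome
miss = rng.uniform(size=n) < 0.3            # 30% missing completely at random
x_obs = np.where(miss, -1, x)               # -1 flags a missing value

beta = np.zeros(2)                          # intercept and slope
pi = 0.5                                    # P(x = 1), also estimated
for _ in range(30):
    # E-step: posterior weight of each candidate x-value per record
    w = np.zeros((n, 2))
    for val in (0, 1):
        p = np.clip(expit(beta[0] + beta[1] * val), 1e-12, 1 - 1e-12)
        w[:, val] = np.where(y == 1, p, 1 - p) * (pi if val else 1 - pi)
    w /= w.sum(axis=1, keepdims=True)
    w[~miss] = 0.0
    w[~miss, x_obs[~miss]] = 1.0            # observed records keep weight 1

    # M-step: maximize the weighted log-likelihood over the expanded data
    def nll(b):
        total = 0.0
        for val in (0, 1):
            p = np.clip(expit(b[0] + b[1] * val), 1e-12, 1 - 1e-12)
            total += np.sum(w[:, val] * (y * np.log(p)
                                         + (1 - y) * np.log(1 - p)))
        return -total
    beta = minimize(nll, beta, method='BFGS').x
    pi = w[:, 1].mean()

print("beta estimate:", np.round(beta, 3), " P(x=1) estimate:", round(pi, 3))
```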

Relevance:

60.00%

Publisher:

Abstract:

The present work shows strategies for improving a successful evolutionary metaheuristic for solving the Asymmetric Traveling Salesman Problem: a Memetic Algorithm designed specifically for this problem. Basically, the improvement applies the optimization techniques known as Path-Relinking and Vocabulary Building. The latter is used in two different ways, in order to evaluate the effects of the improvement on the evolutionary metaheuristic. The methods were implemented in C++ and the experiments were performed on instances from the TSPLIB library, and it was possible to observe that the proposed procedures were successful in the tests performed.
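
A sketch of the Path-Relinking move (in Python, rather than the C++ of the thesis): walk from an initiating tour to a guiding tour by repeatedly moving one city into the position it occupies in the guide, keeping the best intermediate tour found. The memetic algorithm embeds moves like this in an evolutionary loop; the cost matrix and tours below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
dist = rng.uniform(1, 10, (n, n))      # asymmetric cost matrix (illustrative)
np.fill_diagonal(dist, 0)

def cost(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

start = list(rng.permutation(n))       # initiating solution
guide = list(rng.permutation(n))       # guiding solution

best, best_cost = list(start), cost(start)
current = list(start)
for pos in range(n):
    city = guide[pos]
    j = current.index(city)
    if j != pos:
        current.insert(pos, current.pop(j))   # move city to its guide position
        c = cost(current)
        if c < best_cost:                      # keep the best intermediate
            best, best_cost = list(current), c

print("relinked best cost:", round(best_cost, 2))
```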

Relevance:

60.00%

Publisher:

Abstract:

In this work we present the principal fractals, their characteristics, properties, and classification, comparing them with the elements of Euclidean geometry. We show the importance of fractal geometry in the analysis of several elements of our society. We emphasize the importance of an appropriate definition of dimension for these objects, because the definition we presently know does not seem a satisfactory one. As instruments to obtain these dimensions we present the box-counting method, the Hausdorff-Besicovitch dimension, and the scale method. We also study the percolation process in the square lattice, comparing it with percolation on the multifractal object Qmf, where we observe some differences between the two processes. We analyze the histogram of the percolating lattices versus the site occupation probability p, along with other numerical simulations. Finally, we show that we can estimate the fractal dimension of the percolation cluster and that percolation on a multifractal support is in the same universality class as standard percolation. We observe that the area of the blocks of Qmf is variable and that pc is a function of p, which is related to the anisotropy of Qmf.
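
The box-counting estimate of fractal dimension can be sketched as follows: cover the set with boxes of side 1/s, count the occupied boxes N(s), and read the dimension off the slope of log N(s) against log s. The test set here is a chaos-game sample of the Sierpinski triangle, whose exact dimension log 3 / log 2 ~ 1.585 serves as a check; the point counts and box sizes are arbitrary choices.

```python
import numpy as np

# generate a Sierpinski-triangle sample by the chaos game
rng = np.random.default_rng(0)
verts = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2]])
p = rng.uniform(size=2)
points = []
for _ in range(200_000):
    p = (p + verts[rng.integers(3)]) / 2
    points.append(p.copy())
points = np.array(points[100:])          # drop the initial transient

# count occupied boxes at several resolutions
sizes = [2 ** k for k in range(2, 8)]    # boxes per side: 4 ... 128
counts = []
for s in sizes:
    boxes = np.unique((points * s).astype(int), axis=0)
    counts.append(len(boxes))

# dimension = slope of log N(s) versus log s
slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print(f"box-counting dimension ~ {slope:.3f} "
      f"(exact: {np.log(3) / np.log(2):.3f})")
```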