17 results for Environmental objective function
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Conventional load-flow programs were, in general, developed to simulate electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms behind these formulations were based mostly on the characteristics of transmission systems, which were the main concern of engineers and researchers. The physical characteristics of transmission systems, though, are quite different from those of distribution systems. In transmission systems the voltage levels are high and the lines are generally very long, so the capacitive and inductive effects of the system have a considerable influence on the quantities of interest and must be taken into account. Also, the loads in transmission systems have a macro nature (cities, neighborhoods, or large industries) and are generally nearly balanced, which reduces the need for a three-phase load-flow methodology. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison with transmission, which almost cancels the capacitive effects of the lines. The loads, in this case, are transformers whose secondaries supply small consumers, often single-phase, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. Moreover, equipment such as voltage regulators, whose operation simultaneously involves phase and line voltages, requires a three-phase methodology in order to simulate its real behavior. For these reasons, the first part of this work develops a three-phase load-flow method to simulate the steady-state behavior of distribution systems.
The three-phase method was built on the Power Summation Algorithm, which has already been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits that account for the magnetic coupling between phases, while the earth effect is handled through the Carson reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was adopted that supports various configurations according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered; the loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow in order to support subsequent optimization. These parameters are obtained by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage-profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to several feeders are presented to give insight into their performance and accuracy.
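The backward/forward sweep at the heart of the Power Summation Algorithm can be illustrated in a simplified single-phase form. The sketch below is not the three-phase formulation of the thesis; it assumes a simple chain feeder in per-unit values, and the impedances and loads are made-up numbers:

```python
# Single-phase Power Summation sketch on a radial chain feeder:
# backward sweep accumulates downstream power plus series losses,
# forward sweep updates voltages from the source down.

def power_summation_flow(v_source, lines, loads, tol=1e-8, max_iter=50):
    """lines[i]: complex series impedance of the branch feeding bus i+1 (pu);
    loads[i]: complex power drawn at bus i+1 (pu). Returns bus voltages."""
    n = len(loads)
    v = [v_source] * (n + 1)                 # flat start, bus 0 is the source
    for _ in range(max_iter):
        s_branch = [0j] * n
        acc = 0j
        for i in reversed(range(n)):          # backward sweep
            acc += loads[i]                   # downstream power at bus i+1
            acc += lines[i] * abs(acc / v[i + 1]) ** 2   # series loss Z*|I|^2
            s_branch[i] = acc
        v_new = [v_source]
        for i in range(n):                    # forward sweep
            current = (s_branch[i] / v_new[i]).conjugate()
            v_new.append(v_new[i] - lines[i] * current)
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return v_new
        v = v_new
    return v

# Two-bus feeder with illustrative values: voltage sags along the feeder.
volts = power_summation_flow(1.0 + 0j, [0.01 + 0.02j] * 2, [0.1 + 0.05j] * 2)
```

For a loaded feeder the voltage magnitude decreases monotonically from the source toward the far end, which is the behavior the loss and voltage-deviation objective functions act on.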
Abstract:
This work deals with an on-line control strategy based on the Robust Model Predictive Control (RMPC) technique, applied to a real coupled-tanks system. The process consists of two coupled tanks and a pump that feeds liquid to the system. The control objective (a regulator problem) is to keep the tank levels at the chosen operating point even in the presence of disturbances. RMPC is a technique that allows explicit incorporation of plant uncertainty into the problem formulation. The goal is to design, at each time step, a state-feedback control law that minimizes a 'worst-case' infinite-horizon objective function subject to constraints on the control input. The existence of a feedback control law satisfying the input constraints is reduced to a convex optimization problem over linear matrix inequalities (LMIs). It is shown in this work that, for plant uncertainty described by a polytope, the feasible receding-horizon state-feedback control design is robustly stabilizing. The RMPC software was implemented in Scilab, and its communication with the coupled-tanks system is done through the OLE for Process Control (OPC) industrial protocol.
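The 'worst-case' idea can be illustrated without the LMI machinery: for a polytopic uncertain system it suffices to consider the polytope vertices, and the worst-case cost of a fixed feedback gain is the maximum cost over those vertices. The sketch below only evaluates such a cost by brute-force simulation (with made-up matrices and a truncated horizon); it does not perform the LMI-based gain synthesis used in the thesis:

```python
# Illustrative worst-case quadratic cost of a fixed state feedback u = K x
# for the polytopic family x+ = A_i x + B u, evaluated at the vertices A_i.

def worst_case_cost(A_vertices, B, K, x0, Q, R, horizon=20):
    """Q: diagonal state weights (list), R: scalar input weight."""
    def cost(A):
        x, total = list(x0), 0.0
        for _ in range(horizon):
            u = sum(K[j] * x[j] for j in range(len(x)))
            total += sum(Q[j] * x[j] ** 2 for j in range(len(x))) + R * u * u
            x = [sum(A[r][c] * x[c] for c in range(len(x))) + B[r] * u
                 for r in range(len(x))]
        return total
    return max(cost(A) for A in A_vertices)   # worst vertex dominates

# Hypothetical two-vertex polytope and a stabilizing gain.
A1 = [[0.90, 0.10], [0.00, 0.80]]
A2 = [[0.95, 0.10], [0.00, 0.85]]
wc = worst_case_cost([A1, A2], [0.1, 0.1], [-0.5, -0.5],
                     [1.0, 0.0], [1.0, 1.0], 0.1)
```

In the RMPC scheme this maximization is replaced by a single LMI feasibility condition that bounds the cost for every plant in the polytope at once.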
Abstract:
This work presents a study of Generalized Predictive Controllers with constraints and their implementation on physical plants. Three types of constraints are discussed: constraints on the rate of change of the control signal, constraints on the amplitude of the control signal, and constraints on the amplitude of the output signal (the plant response). In predictive control, the control law is obtained by minimizing an objective function. To take the constraints into account, this minimization is carried out with a method for solving constrained optimization problems. The chosen method was Rosen's algorithm (based on gradient projection). The physical plants in this study are two didactic water-level control systems: a first-order one (a single tank) and a second-order one formed by two tanks connected in cascade. The code is implemented in C++, and communication with the system is done through a data acquisition board provided by the system manufacturer.
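Rosen's method projects each descent step onto the feasible set; for simple amplitude bounds on the control signal that projection reduces to clipping into a box. The sketch below shows only this bound-constrained special case, with an illustrative quadratic objective and made-up limits, not the full GPC formulation:

```python
def projected_gradient(grad, x0, lo, hi, step=0.1, iters=200):
    """Minimize a function with gradient `grad` subject to lo <= x <= hi
    by projecting each gradient step back into the box (the special case
    of Rosen's gradient projection for amplitude constraints)."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [min(max(x[i] - step * g[i], lo[i]), hi[i]) for i in range(len(x))]
    return x

# Example: minimize (u0 - 2)^2 + (u1 + 3)^2 with control amplitude |u| <= 1.
# The unconstrained minimum (2, -3) is infeasible, so the solution lands
# on the boundary of the box.
grad = lambda u: [2 * (u[0] - 2), 2 * (u[1] + 3)]
u = projected_gradient(grad, [0.0, 0.0], [-1.0, -1.0], [1.0, 1.0])
```

Rate-of-change and output constraints lead to general linear inequalities, where the projection step requires the active-set machinery of the full algorithm rather than simple clipping.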
Abstract:
This work performs an algorithmic study of the optimization of a conformal radiotherapy treatment plan. Initially we give an overview of cancer, radiotherapy, and the physics of the interaction of ionizing radiation with matter. A proposal for the optimization of a radiotherapy treatment plan is then developed in a systematic way. We present the multicriteria problem paradigm and the concepts of Pareto optimum and Pareto dominance. A generic optimization model for radiotherapy treatment is proposed: we construct the model input, estimate the dose delivered by the radiation using the dose matrix, and define the objective function of the model. The complexity of optimization models in radiotherapy treatment is typically NP, which justifies the use of heuristic methods. We propose three distinct metaheuristics: MOGA, MOSA, and MOTS. The design of each procedure is presented, with a brief motivation, the algorithm itself, and the method for tuning its parameters. The three methods are applied to a concrete case and their performances are compared. Finally, for each method we analyze the quality of the Pareto sets, some solutions, and the respective Pareto curves.
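The Pareto-dominance relation that all three metaheuristics rely on is compact enough to state in code. A minimal sketch for minimization problems (the helper names are ours, not the thesis's):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Four candidate plans scored on two criteria (e.g. tumor dose deficit,
# healthy-tissue dose); the two trade-off points survive.
front = pareto_front([(1, 2), (2, 1), (2, 2), (3, 3)])
```

MOGA, MOSA, and MOTS differ in how they explore the search space, but all archive solutions that pass exactly this non-dominance test.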
Abstract:
Nonogram is a logic puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition and data compression, among others. The puzzle consists of determining an assignment of colors to pixels distributed in an N x M matrix that satisfies row and column constraints. A Nonogram is encoded by a vector whose elements specify the number of pixels in each row and column of a figure without specifying their coordinates. This work presents exact and heuristic approaches to solving Nonograms. Depth-first search was one of the chosen exact approaches because it is a typical example of a brute-force search algorithm that is easy to implement. Another exact approach was based on the Las Vegas algorithm, with the intention of investigating whether the randomness introduced by the Las Vegas-based algorithm is an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms. A new way of calculating the objective function is proposed. The approaches are applied to 234 instances, ranging in size from 5 x 5 to 100 x 100 and including both logical and random Nonograms.
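The row and column constraints mentioned above are easy to check mechanically: extract the run lengths of filled cells and compare them with the clue vector. A minimal sketch for black-and-white puzzles (the objective function of the thesis would instead score how far a grid is from satisfying the clues):

```python
def runs(line):
    """Run lengths of filled cells (1s) in a row or column, left to right."""
    out, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            out.append(count)
            count = 0
    if count:
        out.append(count)
    return out

def satisfies(grid, row_clues, col_clues):
    """True if every row and column of `grid` matches its clue exactly."""
    cols = list(zip(*grid))
    return (all(runs(r) == c for r, c in zip(grid, row_clues)) and
            all(runs(col) == c for col, c in zip(cols, col_clues)))

# 2x2 example: left column fully filled, bottom row fully filled.
ok = satisfies([[1, 0], [1, 1]], [[1], [2]], [[2], [1]])
```

Both the exact searches and the heuristics can use such a check: the former to prune partial assignments, the latter to count violated lines.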
Abstract:
The history-matching procedure in an oil reservoir is of paramount importance for obtaining a characterization of the reservoir parameters (static and dynamic) that leads to more accurate production forecasts. Through this process, one finds reservoir model parameters that are able to reproduce the behaviour of a real reservoir. The resulting reservoir model may then be used to predict production and support oil field management. During history matching, the reservoir model parameters are modified, and for every new set of parameters a fluid-flow simulation is performed to evaluate whether or not the new set reproduces the observations from the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir are below a certain tolerance. Determining the model parameters via history matching requires the minimisation of an objective function (the difference between observed and simulated production according to a chosen norm) in a parameter space populated by many local minima; in other words, more than one set of reservoir model parameters fits the observations. Because of this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. To reduce this ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the inverse problem associated with history matching was regularized by introducing a smoothness constraint on two parameters, permeability and porosity. This constraint carries the geological bias that these two properties vary smoothly in space.
It is therefore necessary to find the relative weight of this constraint in the objective function that stabilizes the inversion while introducing minimum bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model. This method does not require derivatives when searching for the minimum of the objective function. It is shown that the judicious introduction of the smoothness constraint into the objective function reduces the associated ambiguity and introduces minimum bias into the estimates of permeability and porosity of the semi-synthetic reservoir model.
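The structure of such a regularized objective can be sketched in a few lines: a least-squares data misfit plus a smoothness penalty, with the relative weight discussed above appearing as a single multiplier. This is an illustrative 1-D reduction (one property profile, one weight `mu`), not the thesis's actual formulation:

```python
# Regularized history-matching objective sketch:
# data misfit + mu * roughness of a 1-D property profile
# (e.g. permeability along a row of grid cells).

def objective(simulated, observed, profile, mu):
    misfit = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    roughness = sum((profile[i + 1] - profile[i]) ** 2
                    for i in range(len(profile) - 1))
    return misfit + mu * roughness

# A perfectly smooth profile pays no penalty; an oscillating one does.
smooth = objective([1.0], [1.0], [1.0, 1.0, 1.0], 0.5)
rough = objective([1.0], [1.0], [0.0, 2.0, 0.0], 0.5)
```

A derivative-free search such as COMPLEX only ever needs to evaluate this function, which is why the penalty can be added without changing the optimizer.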
Abstract:
The gravity inversion method is a mathematical process that can be used to estimate the basement relief of a sedimentary basin. However, the inverse problem in potential-field methods has neither a unique nor a stable solution, so additional information (other than gravity measurements) must be supplied by the interpreter to transform it into a well-posed problem. This dissertation presents the application of a gravity inversion method to estimate the basement relief of the onshore Potiguar Basin. The density contrast between sediments and basement is assumed to be known and constant. The proposed methodology discretizes the sedimentary layer into a grid of juxtaposed rectangular prisms whose thicknesses correspond to the depth to basement, which is the parameter to be estimated. To stabilize the inversion, I introduce constraints in accordance with the known geologic information. The method minimizes an objective function that requires the model not only to be smooth and close to the seismic-derived model, which is used as a reference model, but also to honor well-log constraints. The latter are introduced through logarithmic barrier terms in the objective function. The inversion process was applied so as to simulate different phases of the exploratory development of a basin. The methodology consisted of applying the gravity inversion in distinct scenarios: the first used only gravity data and a plain reference model; the second was divided into two cases, in which either borehole log information or the seismic model was incorporated into the process. Finally, I incorporated the basement depth generated by seismic interpretation into the inversion as a reference model and imposed depth constraints from boreholes using the primal logarithmic barrier method.
As a result, the estimated basement relief in every scenario satisfactorily reproduced the basin framework, and the incorporation of the constraints improved the definition of the basement depth. The joint use of surface gravity data, seismic imaging, and borehole logging information makes the process more robust and improves the estimate, providing a result closer to the actual basement relief. In addition, it is worth remarking that the result obtained in the first scenario already provided a very coherent basement relief when compared to the known basin framework. This is significant information when comparing the differences in cost and environmental impact between gravimetric surveys, seismic surveys, and well drilling.
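The logarithmic barrier terms mentioned above can be sketched for a single borehole depth constraint d_lo < d < d_hi: the barrier grows without bound as a prism depth approaches either limit, keeping the iterates strictly inside the feasible interval. The function below is an illustration of the mechanism only (scalar misfit, made-up bounds), not the thesis's full objective:

```python
import math

# Primal log-barrier sketch: data misfit plus barrier terms that blow up
# near the borehole-derived depth bounds; `mu` is the barrier weight,
# which is driven toward zero as the inversion proceeds.

def barrier_objective(misfit, depths, bounds, mu):
    """depths[i]: estimated depth of prism i; bounds[i] = (lo, hi)."""
    barrier = 0.0
    for d, (lo, hi) in zip(depths, bounds):
        if not lo < d < hi:
            return float('inf')       # outside the strictly feasible region
        barrier -= math.log(d - lo) + math.log(hi - d)
    return misfit + mu * barrier

# A depth near the upper bound is penalized more than one mid-interval.
mid = barrier_objective(0.0, [500.0], [(400.0, 600.0)], 0.1)
edge = barrier_objective(0.0, [599.0], [(400.0, 600.0)], 0.1)
```

Shrinking `mu` over the iterations lets the data misfit dominate while the constraints remain honored, which is the essence of the primal barrier method.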
Abstract:
This work presents a new model for the Heterogeneous p-Median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting-task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-Free Heterogeneous p-Median Problem (PFHPM), a single-objective version of the original HPM. It also eliminates the main parameter of the HPM, the penalty factor, which is responsible for weighting the terms of the objective function; adjusting this parameter controls how the model recovers the hidden category structures in the data and demands broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programming problems, from which lower bounds for the PFHPM were obtained. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions for the PFHPM, solving artificial instances generated by Monte Carlo simulation as well as real-data instances, even with limited computational resources. The statistical analyses presented in this work suggest that the new algorithm and model can recover the original category structures related to heterogeneous individuals' perceptions more accurately than the original HPM model and algorithm. Finally, an illustrative application of the PFHPM is presented, along with some new possibilities for it, such as extending the model to fuzzy environments.
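For context, the classical p-median objective that both HPM and PFHPM build on assigns each point to its nearest chosen median and sums the resulting distances. A minimal sketch (the heterogeneous variants of the thesis add per-individual structure on top of this):

```python
def p_median_cost(distances, medians):
    """Classical p-median objective: each point is served by its nearest
    chosen median; cost is the sum of those distances.
    distances[i][j] is the distance between points i and j."""
    return sum(min(row[m] for m in medians) for row in distances)

# Three points; choosing points 0 and 2 as medians leaves only point 1
# to be served at distance 1.
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]
cost = p_median_cost(dist, [0, 2])
```

A VNS for this family of problems perturbs the set of medians (shaking) and re-evaluates exactly this kind of assignment cost in its local search.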
Abstract:
Modern industrial activity has been contaminating water with phenolic compounds. These are toxic and carcinogenic substances, and it is essential to reduce their concentration in water to the tolerable level determined by CONAMA in order to protect living organisms. In this context, this work focuses on the treatment and characterization of catalysts derived from bio-char, a by-product of biomass pyrolysis (avelós and wood dust), as well as their evaluation in the photocatalytic degradation of phenol. Assays were carried out in a slurry-bed reactor that enables instantaneous measurements of temperature, pH, and dissolved oxygen. The experiments were performed under the following operating conditions: temperature of 50 °C, oxygen flow of 410 mL min-1, reagent solution volume of 3.2 L, 400 W UV lamp, 1 atm pressure, and 2-hour runs. The parameters evaluated were pH (3.0, 6.9, and 10.7), initial concentration of commercial phenol (250, 500, and 1000 ppm), catalyst concentration (0, 1, 2, and 3 g L-1), and nature of the catalyst (CAADCM, activated avelós carbon washed with dichloromethane, and CMADCM, activated wood-dust carbon washed with dichloromethane). The XRF, XRD, and BET results confirmed the presence of iron and potassium in satisfactory amounts in the CAADCM catalyst and in reduced amounts in the CMADCM catalyst, as well as the increase in surface area of the materials after chemical and physical activation. The phenol degradation curves indicate that pH has a significant effect on phenol conversion, with better results at lower pH. The optimum catalyst concentration was found to be 1 g L-1, and increasing the initial phenol concentration has a negative influence on the reaction.
A positive effect of the presence of iron and potassium in the catalyst structure was also observed: better conversions were obtained in tests conducted with the CAADCM catalyst than with the CMADCM catalyst under the same conditions. The highest conversion was achieved in the test carried out at acid pH (3.0) with an initial phenol concentration of 250 ppm in the presence of CAADCM at 1 g L-1. Liquid samples taken every 15 minutes were analyzed by liquid chromatography, identifying and quantifying hydroquinone, p-benzoquinone, catechol, and maleic acid. Finally, a reaction mechanism is proposed in which phenol is transformed in the homogeneous phase and the other species react on the catalyst surface. Applying the Langmuir-Hinshelwood model together with a mass balance yields a system of differential equations that was solved using the fourth-order Runge-Kutta method coupled to a particle-swarm optimization routine (SWARM) that minimizes a least-squares objective function to obtain the kinetic and adsorption parameters. The kinetic rate constants obtained were of order 10-3 for phenol degradation, 10-4 to 10-2 for the formation of acids, 10-6 to 10-9 for the mineralization of quinones (hydroquinone, p-benzoquinone, and catechol), and 10-3 to 10-2 for the mineralization of acids.
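The numerical side of the parameter fit can be illustrated with a fourth-order Runge-Kutta integrator applied to a simple first-order degradation law dC/dt = -kC, standing in for the full Langmuir-Hinshelwood system of the thesis. The rate constant and step count below are made-up values:

```python
# Classic RK4 integrator: four slope evaluations per step, combined with
# weights 1/6, 2/6, 2/6, 1/6.

def rk4(f, y0, t0, t1, steps):
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

k = 0.01                    # min^-1, hypothetical rate constant
# Concentration after a 120-minute run starting from 250 ppm.
c_final = rk4(lambda t, c: -k * c, 250.0, 0.0, 120.0, 1000)
```

In a parameter-estimation loop, a particle-swarm optimizer proposes candidate rate constants, the ODE system is integrated like this for each candidate, and the least-squares distance to the chromatography measurements is the objective being minimized.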
Abstract:
The objective of this work was to investigate, from the point of view of travel agents, the importance of environmental practices as a decision factor in the purchase of a tourist package. To that end, the target population was defined as the travel and tourism agencies linked to the Brazilian Association of Travel Agencies (ABAV), surveying the Brazilian travel agents working in the city of Natal in 2005. The sample was selected using simple random sampling. A total of 150 agents were effectively surveyed, with 150 questionnaires containing closed and open questions applied during November 2005. The results showed great variability in the interviewees' answers regarding the sale of tourist packages in which the customer shows concern for environmental quality. Multiple regression analysis identified concern for the environmental quality of the destination and the perceived importance of the existence of environmental practices at the destination as important factors in the decision to purchase a tourist package.
Abstract:
This work studies the addition of a metallic ion of manganese to a clay from Rio Grande do Norte state used in structural ceramics; the objective was to assess the evolution of the ceramic properties. The clay was characterized by chemical and thermal analysis and X-ray diffraction. The metallic ion was added to the clay as aqueous solutions at concentrations of 100, 150, and 200 mg/L. The specimens were molded by extrusion and fired at 850, 950, 1050, and 1150 °C. Chemical analysis was performed and the following environmental and ceramic parameters were investigated: solubility, colour, linear shrinkage (%), water absorption (%), gresification curves, apparent porosity (%), apparent specific mass (g/cm3), and flexural rupture modulus (kgf/cm2). The results showed that, with increasing concentration of the metallic ion, properties such as apparent porosity (%) and water absorption (%) decrease, while the flexural rupture modulus (kgf/cm2) increases with increasing temperature regardless of the ion concentration. The gresification curves showed that the optimum firing temperatures lie between 950 and 1050 °C. The evaluation of the properties showed that the use of this ceramic material in solid bricks and in ceramic materials with a structural filling function can be studied. The solubility results showed that the addition of the ion poses no risk to the environment.
Abstract:
Over the years, industry has been working to improve the efficiency of diesel engines. More recently, the need to reduce pollutant emissions in order to comply with stringent environmental regulations has been recognized. This has attracted great interest in research aimed at replacing petroleum-based fuels with several types of less polluting fuels, such as blends of diesel oil with vegetable-oil esters, blends of diesel fuel with vegetable oils and alcohol, emulsions, and microemulsions. The main objective of this work was the development of microemulsion systems, using nonionic surfactants from the ethoxylated nonylphenol and ethoxylated lauryl alcohol groups, with ethanol/diesel and diesel/biodiesel blends, for use in diesel engines. First, in order to select the microemulsion systems, ternary phase diagrams of the blends were obtained. The systems were composed of nonionic surfactants, water as the polar phase, and diesel fuel or diesel/biodiesel blends as the apolar phase. The microemulsion systems and blends, which represent the studied fuels, were characterized by density, viscosity, cetane number, and flash point. The effect of temperature on the stability of the microemulsion systems, the performance of the engine, and the emissions of carbon monoxide, nitrogen oxides, unburned hydrocarbons, and smoke were also evaluated for all studied blends. Tests of specific fuel consumption as a function of engine power were carried out on a diesel-cycle engine on a dynamometer bench, and the emissions were evaluated using a GreenLine 8000 analyzer. The results showed a slight increase in fuel consumption when the microemulsion systems and diesel/biodiesel blends were burned, but a reduction was observed in the emissions of nitrogen oxides, unburned hydrocarbons, smoke index, and sulfur oxides.
Abstract:
The principal zeitgeber for most species is the light-dark cycle (LD), though other environmental factors, such as food availability, temperature, and social cues, may also act. Daily adjustment of the circadian pacemaker may result from the integration of environmental photic and non-photic cues with homeostatic cues. The characterization of non-photic effects on the circadian timing system is scarce in diurnal mammals compared with nocturnal ones, especially for ecologically significant cues. Thus, we analyzed the effect of conspecific vocalizations and darkness on the circadian activity rhythm (CAR) of the diurnal primate Callithrix jacchus. For this purpose, 7 adult males were isolated in a room with controlled illumination, temperature (26.8 ± 0.2 °C) and humidity (81.6 ± 3.6%), and partial acoustic isolation. Initially they were kept under LD 12:12 (~300:2 lux) and subsequently under constant illumination (~2 lux). Two pulses of conspecific vocalizations, separated by 22 days, were applied in total darkness at 7:30 h (external time) for 1 h. They induced phase delays at circadian times (CTs) 1 and 10 and predominantly phase advances at CTs 9 and 15. After that, two dark pulses, separated by 14 days, were applied for 1 h at 7:30 h (external time). These pulses induced phase delays at CTs 2, 3, and 18, predominantly phase advances at CTs 8, 10, and 19, and no change at CT 14. However, the marmosets' CAR showed oscillations in the endogenous period and in the duration of the active phase that were influenced by vocalizations from animals outside the experimental room, which interfered with the phase responses to the pulses. Furthermore, social masking and relative coordination with the colony were observed. Therefore, the phase responses obtained in this work cannot be attributed solely to the pulses. Afterwards, pulses of conspecific vocalizations were applied in total darkness at 19:00 h (external time) for 1 h on 5 consecutive days and, after 21 days, on 30 consecutive days, in an attempt to synchronize the CAR.
No animal was synchronized by these daily pulses, although oscillations in the endogenous period were observed in all of them. This result may be due to habituation. Another possibility is the absence of social significance of the vocalizations for the animals due to their random playback, since each vocalization has a function that could be lost in a mixture of sounds. In conclusion, conspecific vocalizations induce social masking and relative coordination in the marmosets' CAR, acting as a weak zeitgeber.
Abstract:
In rodents, the suprachiasmatic nucleus (SCN) and the intergeniculate leaflet (IGL) are the main components of the circadian system. The SCN is considered the site of an endogenous biological clock because it can generate rhythm and synchronize to environmental cues (zeitgebers), and the IGL has been described as one of the main areas modulating the action of the SCN. Both receive projections from retinal ganglion cells; the projection to the SCN is called the retinohypothalamic tract (RHT). Moreover, the IGL is connected to the SCN through the geniculohypothalamic tract (GHT). In primates (including humans), the presence of a structure homologous to the IGL has not yet been demonstrated. The pregeniculate nucleus (PGN) is believed to be the answer, but nothing has been proven so far. To address this question, the objective of our study is to carry out a comparative analysis between the PGN and the IGL using immunohistochemical techniques, neural tracers, and FOS expression after dark pulses. For this, we used as an experimental model a New World primate, the common marmoset (Callithrix jacchus). Our results may contribute to filling this gap in the circadian system, since the IGL is responsible for transmitting non-photic information to the SCN and participates in the integration of photic and non-photic stimuli to adjust the function of the SCN. Finding a corresponding structure in primates would represent an important advance in the understanding of biological rhythms in these animals.