26 results for "Estimação da curva de juros com cupom zero"
at Universidade Federal do Rio Grande do Norte (UFRN)
Resumo:
This paper analyzes a real instrument with great impact on ICMS revenue: the Fiscal Voucher Emitting Equipment (ECF). It investigates the effects of the commercial automation process on the ICMS revenue of Rio Grande do Norte between 2000 and 2006. The methodology adopted was a study of quantitative and exploratory-qualitative nature, based on secondary data provided by the State Taxation Bureau (SET). In the absence of a statistical model in the existing literature on this theme, we elaborated a suitable model, with tables and graphics. To observe the effects of these programs on revenue, the comparison between ECF users and non-users over the same period proved to be of great importance. We concluded that, even though the growth rates among the activities that use the ECF had already risen in tax revenue over the years in question, from 2004 on, with the introduction of TEF, this participation grew faster, which suggests that this recent instrument has a significant impact on the State's effective revenue. We note that the amounts collected could have been even higher had adhesion to the instrument not been so low, mainly among small entrepreneurs, which may indicate fraud rooted in the system. In short, the data obtained show that the ECF and the more recent TEF significantly influenced ICMS revenue across the State over the entire period analyzed.
Resumo:
With the new discoveries of oil and gas, the exploration of fields in various geological basins, the import of other oils and the development of alternative fuels, more and more research laboratories have been evaluating and characterizing new types of petroleum and derivatives. Investment in new techniques and equipment for analyzing samples (to determine their physical and chemical properties, composition, possible contaminants and product specifications, among others) has therefore multiplied in recent years, so the development of techniques for rapid and efficient characterization is extremely important for a better economic recovery of oil. In this context, this work has two main objectives. The first is to characterize oil by thermogravimetry coupled with mass spectrometry (TG-MS) and to correlate these results with previously reported characterization data. The second is to use the technique to develop a methodology for obtaining the evolution curve of hydrogen sulfide gas in oil. Four samples were analyzed by TG-MS and X-ray fluorescence spectrometry (XRF). TG results can indicate the nature of an oil, its tendency toward coke formation, its distillation and cracking temperatures, and other features. The MS evaluations revealed the behavior of the main oil compounds with temperature, the points at which certain fractions volatilized, and the evolution of hydrogen sulfide gas, which is compared with the evolution curve obtained by Petrobras using another methodology.
Resumo:
The oscillations present in control loops can cause damage in the petrochemical industry. Canceling, or even preventing, such oscillations would save large amounts of money. Studies have identified that one of the causes of these oscillations is the nonlinearities present in industrial process actuators. This study aims to develop a methodology to remove the harmful effects of such nonlinearities. A parameter estimation method is proposed for the Hammerstein model, whose nonlinearity is represented by a dead-zone or backlash. The estimated parameters are used to construct inverse compensation models. A simulated level system, whose inflow control valve contains a nonlinearity, was used as a test platform. Results and describing-function analysis show an improvement in the system response.
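The inverse-compensation idea described above can be sketched for the dead-zone case: an inverse model pre-distorts the controller output so that the actuator nonlinearity is cancelled. This is a minimal illustration, not the thesis's estimation method; the dead-zone width `delta` and slope `m` are hypothetical parameters.

```python
def dead_zone(u, delta=0.5, m=1.0):
    """Symmetric dead-zone actuator: output is zero for |u| <= delta,
    and linear with slope m beyond the dead band."""
    if u > delta:
        return m * (u - delta)
    if u < -delta:
        return m * (u + delta)
    return 0.0

def dead_zone_inverse(v, delta=0.5, m=1.0):
    """Inverse compensator: pre-distorts the desired output v so that
    dead_zone(dead_zone_inverse(v)) == v for v != 0."""
    if v > 0:
        return v / m + delta
    if v < 0:
        return v / m - delta
    return 0.0
```

Placing `dead_zone_inverse` between the controller and the actuator makes the cascade behave (ideally) as a unit gain, which is the goal of the compensation scheme; in practice the quality of the cancellation depends on how well `delta` and `m` are estimated.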
Resumo:
Petroleum evaluation consists of analyzing oil using different methodologies, following international standards, to determine its chemical and physicochemical properties, contaminant levels, composition and, especially, its ability to generate derivatives. Many of these analyses consume a lot of time and large amounts of samples and supplies, and require organized transportation logistics, scheduling and professionals. Looking for alternatives that optimize the evaluation and enable the use of new technologies, seven samples of different centrifuged Brazilian oils, previously characterized by Petrobras, were analyzed by thermogravimetry in the 25–900 °C range using heating rates of 5, 10 and 20 °C per minute. With the experimental data obtained, characterization correlations were performed that provided: the generation of simulated true boiling point (TBP) curves; a comparison of the fractions generated with the appropriate standard cuts in temperature ranges; an approach to obtain the Watson characterization factor; and a comparison of the micro carbon residue formed. The results showed a good chance of reproducing the simulated TBP curve from thermogravimetry, taking into account the composition, density and other oil properties. The proposed correlations for the experimental characterization factor and carbon residue followed the Petrobras characterizations, showing that thermogravimetry can be used as a tool in oil evaluation because of its quick analysis and accuracy, and because it requires a minimal number of samples and consumables.
Resumo:
Ethanol is the most widely abused psychoactive drug in the world, a fact that makes it one of the main substances tested for in toxicological exams today. The development of an analytical method, or the adaptation or implementation of a known method, involves a validation process that estimates its efficiency in laboratory routine and the credibility of the method. Stability is defined as the ability of a sample to keep the initial value of a quantitative measure within specific limits for a defined period when stored under defined conditions. This study aimed to evaluate a gas chromatography method and to study the stability of ethanol in blood samples, considering the variables of storage time and temperature and the presence of a preservative, thereby checking whether the conservation and storage conditions used in this study maintain the quality of the sample and preserve the amount of analyte originally present. Blood samples were collected from 10 volunteers to evaluate the method and to study the stability of ethanol. For the evaluation of the method, known concentrations of ethanol were added to part of the samples. For the stability study, the remainder of the blood pool was placed in two containers, one containing the preservative sodium fluoride 1% plus the anticoagulant heparin and the other only heparin; ethanol was added at a concentration of 0.6 g/L, and each container was fractionated into two bottles, one stored at 4 °C (refrigerator) and the other at -20 °C (freezer). The tests were performed on the same day (time zero) and after 1, 3, 7, 14, 30 and 60 days of storage, and the assessment looked for differences in the results during storage relative to time zero. The headspace technique was used, associated with gas chromatography with an FID detector and a capillary column with a polyethylene stationary phase.
The best chromatographic conditions were: temperatures of 50 °C (column), 150 °C (injector) and 250 °C (detector), with retention times of 9.107 ± 0.026 minutes for ethanol and 8.170 ± 0.081 minutes for tert-butanol (internal standard), ethanol being properly separated from acetaldehyde, acetone, methanol and 2-propanol, which are potential interferents in its determination. The technique showed linearity in the concentration range of 0.01 to 3.2 g/L (y = 0.8051x + 0.6196; r² = 0.999). The calibration curve gave the following line equation: y = 0.7542x + 0.6545, with a linear correlation coefficient of 0.996. The average recovery was 100.2%; the intra- and inter-assay coefficients of variation were at most 7.3%; and the limits of detection and quantification were 0.01 g/L, with coefficients of variation within the allowed range. The analytical method evaluated in this study proved to be fast, efficient and practical, meeting the objective of this work satisfactorily. The stability study showed less than 20% difference between the responses obtained under the stipulated storage conditions and periods and the response obtained at time zero, and at the 5% significance level no statistical difference in ethanol concentration was observed between analyses. The results reinforce the reliability of gas chromatography for determining ethanol in blood samples, whether in the toxicological, forensic, social or clinical context.
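A calibration line such as y = 0.7542x + 0.6545 is obtained by ordinary least squares on standards of known concentration, and inverted to read a concentration back from a measured peak-area ratio. The sketch below uses the coefficients reported in the abstract, but the standard concentrations and function names are hypothetical.

```python
import numpy as np

# Hypothetical calibration standards: concentration (g/L) vs.
# ethanol / internal-standard peak-area ratio (noise-free for illustration).
conc = np.array([0.01, 0.1, 0.4, 0.8, 1.6, 3.2])
ratio = 0.7542 * conc + 0.6545          # line reported in the abstract

# Fit the calibration line y = a*x + b by ordinary least squares.
slope, intercept = np.polyfit(conc, ratio, 1)

def concentration(peak_ratio, a=slope, b=intercept):
    """Invert the calibration line: x = (y - b) / a."""
    return (peak_ratio - b) / a
```

With real standards the fitted points carry measurement noise, and the linear correlation coefficient (0.996 in the abstract) quantifies how well the line describes them.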
Resumo:
In the current literature on early childhood education, children are often conceived as subjects of rights and as concrete, singular people, marked by specificities that schools must respect: their personal wholeness, their needs for care and attention, and their abilities to learn and produce culture. Within educational practice, the routine is considered to have a decisive role in structuring time, space and activities, as well as the actions and relations of the subjects involved. From that perspective, this research analyzes the routines of zero-to-two-year-old children in an early childhood education context, in relation to their childhood specificities. Anchored in a qualitative approach, a case study was developed using daily routine observation and semi-structured interviews with six nursery teachers of a CMEI (Centro Municipal de Educação Infantil) in Natal-RN, the research field. The data analysis was based on principles of Discourse Analysis. The teachers' utterances regarding the routine and its role revealed meanings related to the control/regulation of actions (their own and the students') aimed at streamlining tasks, and to learning related to the routine itself, to time and to school practices. Thus, prospects of discipline and of teachers' exercise of power over students emerge, reducing the children's possibilities of participation. These conceptions are reflected in the daily routine of the children and their teachers. By analyzing how the routine operates in the time/space/activities frame of the CMEI, it was possible to perceive a homogenization of actions and rhythms, not only within each group of children but across the whole institution, which often creates a controlling character that contains/prevents children's initiative.
However, it was also possible to observe that in the recesses of the routine, when it is relaxed and other spaces, times and actions are provided, children have the opportunity to experience and create different ways of acting and relating to time, materials, other children and teachers, and thus have their specificities respected. We highlight the importance of reflecting on the routine in the early childhood education context, in order to understand its functions and the need for its construction to take on a multiple character that respects the plurality of situations and the singularities of children as persons.
Resumo:
This work describes the study and implementation of vector speed control for a three-phase bearingless induction machine with divided winding, 4 poles and 1.1 kW, using neural rotor flux estimation. The vector speed control operates together with the radial positioning controllers and the stator phase winding current controllers. Radial positioning uses the forces controlled by the machine's internal magnetic fields. To optimize the radial forces, a special rotor winding with independent circuits, which keeps the influence on the rotational torque low, was used. The neural flux estimation applied to the vector speed control aims to compensate for the parameter dependence of conventional estimators on machine parameter variations caused by temperature increases or rotor magnetic saturation. The implemented control system allows a direct comparison between the responses of the speed and radial positioning controllers with the machine oriented by the neural rotor flux estimator and with the conventional flux estimator. All the control is executed by a program developed in ANSI C. The DSP resources used by the system are the analog/digital converter channels, the PWM outputs, and the parallel and RS-232 serial interfaces, which are responsible, respectively, for DSP programming and for data capture by the supervisory system.
Resumo:
This work analyzes the behavior of the gas flow of plunger lift wells producing to well-testing separators on offshore production platforms, aiming at a technical procedure to estimate the gas flow during the slug production period. The motivation for this work arose from the operation by PETROBRAS of some wells equipped with the plunger lift method in the Ubarana sea field, located off the coast of Rio Grande do Norte State, where the produced fluids are measured in well-testing separators at the platform. The artificial lift method called plunger lift is used when the available reservoir energy is not high enough to overcome all the load losses needed to lift the oil from the bottom of the well to the surface continuously. This method consists, basically, of a free piston acting as a mechanical interface between the formation gas and the produced liquids, greatly increasing the well's lifting efficiency. A pneumatic control valve mounted on the flow line controls the cycles. When this valve opens, the plunger starts to move from the bottom of the well to the surface, lifting all the oil and gas above it until it reaches the well-test separator, where the fluids are measured. The well-test separator is used to measure all the volumes produced by the well during a certain period of time called a production test. In most cases, separators are designed to measure stabilized flow, in other words, reasonably constant flow, by the use of electronic level and pressure controllers (PLC) and by the assumption of a steady pressure inside the separator. With plunger lift wells, the liquid and gas flows at the surface are cyclical and unstable, which causes slugs inside the separator, mainly in the gas phase, and introduces significant errors in the measurement system (e.g., overrange errors).
The gas flow analysis proposed in this work is based on two mathematical models used together: (i) a plunger lift well model proposed by Baruzzi [1], with later modifications by Bolonhini [2], used to build a plunger lift simulator; and (ii) a two-phase (gas + liquid) separator model derived from the three-phase (gas + oil + water) separator model proposed by Nunes [3]. Based on these models and on field data collected from the well-test separator of the PUB-02 platform (Ubarana sea field), it was possible to demonstrate that the output gas flow of the separator can be estimated with reasonable precision from the control signal of the pressure control valve (PCV). Several models from the MATLAB® System Identification Toolbox were analyzed to evaluate which one best fit the data collected from the field. For model validation, the AIC criterion was used, as well as a variant of the cross-validation criterion. The ARX model fit the data best, so we also evaluated a recursive algorithm (RARX) with real-time data. The results were quite promising, indicating the viability of estimating the output gas flow rate of a plunger lift well producing to a well-test separator using only the control signal sent to the PCV.
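An ARX model of the kind selected above relates the output (here, separator gas flow) to past outputs and past inputs (the PCV control signal) and is fitted by linear least squares. The sketch below shows the general ARX(na, nb) fitting procedure on synthetic data; the model orders, signals and function names are illustrative, not the thesis's actual identification setup (which used the MATLAB toolbox).

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Fit y[k] = -a1*y[k-1] - ... - a_na*y[k-na]
                 + b1*u[k-1] + ... + b_nb*u[k-nb]
    by ordinary least squares. Returns (a, b) coefficient arrays."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        # Regressor: most recent lags first, outputs negated by convention.
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

def simulate_arx(a, b, u, y0):
    """Simulate the ARX difference equation from initial conditions y0."""
    na, nb = len(a), len(b)
    y = list(y0)
    for k in range(len(y0), len(u)):
        y.append(-sum(a[i] * y[k - 1 - i] for i in range(na))
                 + sum(b[j] * u[k - 1 - j] for j in range(nb)))
    return np.array(y)
```

A recursive variant (RARX, as evaluated in the work) updates `theta` sample by sample instead of solving one batch least-squares problem, which suits real-time data.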
Resumo:
Most state estimation algorithms based on the classical model are adequate only for transmission networks. Few algorithms have been developed specifically for distribution systems, probably because of the small amount of data available in real time: most overhead feeders have only current and voltage measurements at the medium-voltage bus-bar of the substation. Classical algorithms are therefore difficult to implement, even considering off-line acquired data as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems and to enable load transfer maneuvers, is changing network planning policy. Equipment incorporating telemetry and command modules has been installed to improve operational features, increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and load pseudo-measurements, built from the typical power factors and utilization (demand) factors of distribution transformers. This work reports the development of a new state estimation method specific to radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, going from the substation to the terminal nodes. For each section, a measurement model is built, resulting in a nonlinear overdetermined set of equations whose solution is obtained through the Gaussian normal equation. The estimated variables of one section are used as pseudo-measurements for the next section.
In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, if they exist), plus pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming that the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort; moreover, it is very efficient with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms could be adapted to provide state estimation of medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality that is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is performed at the beginning of the feeder, and this is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium-voltage bus-bar was also developed.
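The per-section solution via the Gaussian normal equation can be sketched for a linearized measurement model z = Hx + e with weights reflecting measurement confidence (real-time measurements weighted more than pseudo-measurements). This is a generic weighted-least-squares step, not the thesis's full nonlinear algorithm; the matrices and weights below are hypothetical.

```python
import numpy as np

def wls_estimate(H, z, w):
    """Solve the overdetermined model z = H x + e by the Gaussian
    normal equation: x = (H^T W H)^-1 H^T W z, with W = diag(w).
    Larger weights express higher confidence in a measurement."""
    W = np.diag(w)
    G = H.T @ W @ H                    # gain matrix of the normal equation
    return np.linalg.solve(G, H.T @ W @ z)

# Hypothetical section: 2 state variables observed by 3 measurements
# (two direct observations plus one redundant sum).
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
z = H @ x_true                         # consistent, noise-free measurements
x_hat = wls_estimate(H, z, w=np.array([1.0, 2.0, 1.0]))
```

In the section-by-section scheme, `x_hat` from one section would feed the measurement vector of the next section as pseudo-measurements; in a nonlinear model, this solve becomes one Gauss-Newton iteration on the Jacobian of the load flow equations.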
Resumo:
This work describes the study and implementation of speed control for a three-phase induction motor of 1.1 kW and 4 poles using neural rotor flux estimation. The vector speed control operates together with the stator phase winding current controller. The neural flux estimation applied to the vector speed control aims to compensate for the parameter dependence of conventional estimators on machine parameter variations caused by temperature increases or rotor magnetic saturation. The implemented control system allows a direct comparison between the responses of the speed control with the machine oriented by the neural rotor flux estimator and with the conventional flux estimator. All the control is executed by a program developed in ANSI C. The main DSP resources used by the system are the analog/digital converter channels, the PWM outputs, and the parallel and RS-232 serial interfaces, which are responsible, respectively, for DSP programming and for data capture by the supervisory system.
Resumo:
This work aims to predict the total maximum demand of a transformer that will be used in power systems to serve a Multiple Unit Consumption (MUC) building under design. In 1987, COSERN noted that the calculation of the maximum total demand for a building should differ from the one used to size the input protection extension, in order not to overestimate the transformer power. Since then there have been many changes, both in the consumption habits of the population and in electrical appliances, so this work endeavors to improve the estimation of peak demand. For the survey, identification data and electrical projects were collected for different MUCs located in Natal. In some of them, demand measurements were taken for 7 consecutive days and adjusted to a 30-minute integration interval. The maximum demand was estimated by mathematical models that calculate the desired response from a set of previously known information about the MUCs. The models tested were simple linear regression, multiple linear regression and artificial neural networks. The results calculated over the course of the study were compared and, finally, the best answer found was compared with the previously proposed model.
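One of the tested model classes, multiple linear regression, predicts the maximum demand from known MUC attributes. The sketch below is generic and the features (number of apartments, installed load) and data are hypothetical, chosen only to illustrate the fit-then-predict workflow.

```python
import numpy as np

def fit_demand_model(X, d):
    """Multiple linear regression d ~ b0 + b1*x1 + ... fitted by
    ordinary least squares (one of the model classes tested)."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, d, rcond=None)
    return beta

def predict_demand(beta, x):
    """Predicted maximum demand for a new MUC with feature vector x."""
    return beta[0] + np.dot(beta[1:], x)

# Hypothetical training data: [number of apartments, installed load (kVA)]
X = np.array([[10.0, 50.0], [20.0, 90.0], [30.0, 140.0],
              [40.0, 180.0], [15.0, 70.0]])
d = 5.0 + 0.8 * X[:, 0] + 0.2 * X[:, 1]        # synthetic demand (kVA)
beta = fit_demand_model(X, d)
```

In the actual study, the coefficients would be fitted to the 7-day measured demands (30-minute integration) and compared against the neural network models.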
Resumo:
This work proposes a new phasor estimation technique for microprocessor-based numerical relays for distance protection of transmission lines, based on the recursive least squares method and called modified random-walking least squares. Phasor estimation methods have their performance compromised mainly by the exponentially decaying DC component present in fault currents. To reduce the influence of the DC component, a morphological filter (MF) was added to the least squares method and applied before the phasor estimation process. The presented method is implemented in MATLAB® and its performance is compared with the one-cycle Fourier technique and with conventional phasor estimation, also based on a least squares algorithm. The least-squares-based methods used for comparison with the proposed one were: recursive with forgetting factor, covariance resetting and random walking. The performance analyses were carried out with synthetic signals and with signals obtained from simulations in the Alternative Transients Program (ATP). Compared with the other phasor estimation methods, the proposed method showed satisfactory results in terms of estimation speed, steady-state oscillation and overshoot. The method's performance was then analyzed under variations of the fault parameters (resistance, distance, angle of incidence and type of fault); the results did not show significant variations in performance. In addition, the apparent impedance trajectory and the estimated fault distance were analyzed, and the presented method showed better results than the one-cycle Fourier algorithm.
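The one-cycle Fourier baseline mentioned above extracts the fundamental-frequency phasor from one cycle of samples. A minimal sketch follows, assuming a signal sampled at N points per cycle; the sampling rate and signal are hypothetical. Note that this estimator is exact for a pure sinusoid but is biased by a decaying DC offset, which is precisely the problem the morphological filtering addresses.

```python
import numpy as np

def fourier_phasor(window):
    """One-cycle full Fourier phasor estimate.

    For x[n] = A*cos(2*pi*n/N + phi), returns the complex phasor
    A*exp(1j*phi) from exactly N samples (one fundamental cycle)."""
    N = len(window)
    n = np.arange(N)
    xc = (2.0 / N) * np.sum(window * np.cos(2 * np.pi * n / N))  # =  A*cos(phi)
    xs = (2.0 / N) * np.sum(window * np.sin(2 * np.pi * n / N))  # = -A*sin(phi)
    return xc - 1j * xs
```

In a relay, this window slides sample by sample, so the phasor (and hence the apparent impedance) is re-estimated continuously during the fault.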
Resumo:
The modeling of industrial processes has aided production and cost minimization by allowing the prediction of a system's future behavior, process supervision and controller design. Given the benefits provided by modeling, the first goal of this dissertation is to present a methodology for identifying nonlinear models with NARX structure, through the implementation of combined structure detection and parameter estimation algorithms. Initially, the importance of system identification in the optimization of industrial processes is emphasized, specifically the choice of a model that adequately represents the system dynamics. Next, a brief review of the steps that make up system identification is presented. Then, the fundamental methods for model structure detection (modified Gram-Schmidt) and parameter estimation (least squares and extended least squares) are presented. Using the implemented algorithms, the work also identifies two distinct industrial processes, represented by a didactic level plant, which allows level and flow control, and a simulated primary petroleum processing plant, which represents the primary treatment of petroleum that takes place on oil platforms. The dissertation ends with an evaluation of the performance of the obtained models when compared with the real systems. From this evaluation, it is possible to observe whether the identified models are capable of representing the static and dynamic characteristics of the systems presented in this dissertation.
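Once the structure detection step has selected which NARX terms to keep, the parameter estimation step reduces to linear least squares, because a NARX model is linear in its parameters. The sketch below assumes a toy model with three terms (one linear output lag, one linear input lag, one bilinear cross term); the structure and data are hypothetical, and the extended-least-squares refinement for colored noise is omitted.

```python
import numpy as np

def narx_regressors(y, u):
    """Candidate NARX terms for a toy structure:
    y[k-1], u[k-1] and the bilinear term y[k-1]*u[k-1]."""
    return np.column_stack([y[:-1], u[:-1], y[:-1] * u[:-1]])

def estimate_parameters(y, u):
    """NARX parameter estimation by ordinary least squares:
    the model is nonlinear in the signals but linear in theta."""
    Phi = narx_regressors(y, u)
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta
```

In the full methodology, the modified Gram-Schmidt procedure would rank a much larger dictionary of candidate terms before this estimation step, and extended least squares would re-estimate the parameters together with a noise model.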
Resumo:
Flow assurance has become one of the topics of greatest interest in the oil industry, mainly due to the production and transportation of oil in regions of extreme temperature and pressure. In these operations, wax deposition is a common problem in the flow of paraffinic oils, raising process costs through increased pumping energy, decreased production, increased line pressure and risk of pipeline blockage. To describe the behavior of the wax deposition phenomenon in turbulent flow of paraffinic oils under different operating conditions, we developed a simulator with an easy interface. The work was divided into four steps: (i) estimation of the properties (physical, thermal, transport and thermodynamic) of n-alkanes and paraffinic mixtures using correlations; (ii) obtaining the solubility curve and determining the wax appearance temperature by calculating the solid-liquid equilibrium of paraffinic systems; (iii) modeling the wax deposition process, comprising momentum, mass and heat transfer; and (iv) development of a graphical interface in the MATLAB® environment to allow the simulation of different flow conditions and the understanding of the effect of the variables (inlet temperature, external temperature, wax appearance temperature, oil composition and time) on the behavior of the deposition process. The results showed that the developed simulator, called DepoSim, is able to calculate the temperature profile, deposit thickness and amount of wax deposited in a simple and fast way, with results that are consistent and applicable to the operation.
Resumo:
We study the critical behavior of the one-dimensional pair contact process (PCP) using the Monte Carlo method for several lattice sizes and three different updating schemes: random, sequential and parallel. We also added a small modification to the model, called Monte Carlo with Resuscitation (MCR), which consists of resuscitating one particle when the order parameter goes to zero. This was done because it is difficult to accurately determine the critical point of the model, since the order parameter (the particle pair density) rapidly goes to zero in the traditional approach. With the MCR, the order parameter vanishes more smoothly, allowing us to use finite-size scaling to determine the critical point and the critical exponents β, ν and z. Our results are consistent with those already found in the literature for this model, showing not only that the process of resuscitating one particle does not change the critical behavior of the system, but also that it makes it easier to determine the critical point and critical exponents. This extension of the Monte Carlo method has already been used in other contact process models, leading us to believe it will be useful for studying several other non-equilibrium models.
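The PCP dynamics with random updating and the resuscitation step can be sketched as follows: a randomly chosen pair of adjacent particles either annihilates (with probability p) or creates a particle at a neighboring empty site, and whenever the lattice empties, one particle is re-inserted. This is a minimal illustration of the rules, with hypothetical parameter values; it is not the thesis's production code and does not include the finite-size-scaling analysis.

```python
import numpy as np

def pcp_mcr(L=100, p=0.2, steps=20000, seed=1):
    """1D pair contact process with random sequential updating and
    resuscitation (MCR): when the lattice empties, one particle is
    re-inserted so the simulation can continue near criticality."""
    rng = np.random.default_rng(seed)
    lattice = np.ones(L, dtype=int)            # start fully occupied
    for _ in range(steps):
        i = rng.integers(L)
        j = (i + 1) % L                        # periodic boundary conditions
        if lattice[i] and lattice[j]:          # found a particle pair
            if rng.random() < p:               # pair annihilation
                lattice[i] = lattice[j] = 0
            else:                              # pair-driven creation
                k = (i - 1) % L if rng.random() < 0.5 else (j + 1) % L
                lattice[k] = 1
        if lattice.sum() == 0:                 # MCR: resuscitate one particle
            lattice[rng.integers(L)] = 1
    return lattice

def pair_density(lattice):
    """Order parameter: density of nearest-neighbour particle pairs."""
    return np.mean(lattice * np.roll(lattice, -1))
```

A finite-size-scaling study would run this for several lattice sizes L, average `pair_density` over many realizations, and collapse the curves near the critical p to extract the exponents.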