19 results for Error correction
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This study developed software routines for a system built essentially around a digital signal processor (DSP) board and a supervisory application, whose main function was to correct the measurements produced by a turbine gas meter. The correction is based on an intelligent algorithm built from an artificial neural network. The routines were implemented both in the supervisory environment and in the DSP environment and comprise three main parts: processing, communication and supervision.
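The abstract describes the correction routine only at a high level. As a rough illustration of the idea, and not the thesis implementation, the sketch below trains a small neural network to map raw meter readings and operating conditions to corrected flow values; the layer sizes, feature choices and use of scikit-learn are all assumptions:

```python
# Illustrative sketch only: a small neural network learns to map raw
# turbine-meter readings (plus operating conditions) to reference flow values.
# The synthetic error model and every parameter here are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data: raw flow reading, temperature, pressure -> true flow.
raw_flow = rng.uniform(10.0, 100.0, size=1000)
temp = rng.uniform(15.0, 45.0, size=1000)
press = rng.uniform(1.0, 10.0, size=1000)
# Pretend the meter has a smooth systematic error that we want to undo.
true_flow = raw_flow * (1.0 + 0.02 * np.sin(raw_flow / 10.0)) - 0.05 * temp

X = np.column_stack([raw_flow, temp, press])
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, true_flow)

# Correcting a new measurement:
corrected = net.predict([[55.0, 30.0, 5.0]])
print(f"corrected flow: {corrected[0]:.2f}")
```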
Abstract:
This research investigates hedge effectiveness and the optimal hedge ratio for the futures markets of cattle, coffee, ethanol, corn and soybean. The optimal hedge ratio and hedge effectiveness are estimated through multivariate GARCH models with error correction, paying attention to the possible phenomenon of an optimal-hedge-ratio differential between the crop and intercrop periods. The optimal hedge ratio is expected to be larger in the intercrop period because of the uncertainty related to a possible supply shock (LAZZARINI, 2010). Among the futures contracts studied here, the coffee, ethanol and soybean contracts had not yet been investigated for this phenomenon, and the corn and ethanol contracts had not been the object of research dealing with dynamic hedging strategies. This paper distinguishes itself by including a GARCH model with error correction, which had never been considered in investigations of the possible optimal-hedge-ratio differential between crop and intercrop periods. Futures prices were taken from BM&FBOVESPA quotations and spot prices from the CEPEA index, at daily frequency, from May 2010 to June 2013 for cattle, coffee, ethanol and corn, and to August 2012 for soybean. Similar results were obtained for all commodities: there is a long-run relationship between the spot and futures markets; bicausality between the spot and futures markets for cattle, coffee, ethanol and corn; and unicausality from the soybean futures price to the spot price. The optimal hedge ratio was estimated under three strategies: linear regression by ordinary least squares (OLS), a diagonal BEKK-GARCH model, and a diagonal BEKK-GARCH model with an intercrop dummy. The OLS regression pointed to hedge inefficiency, given that the optimal hedge ratio it produced was too low. The second model implements a dynamic hedging strategy, capturing time variation in the optimal hedge ratio. The last strategy did not detect an optimal-hedge-ratio differential between the crop and intercrop periods; therefore, contrary to expectations, the investor does not need to increase the futures-market position during the intercrop period.
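For reference, the minimum-variance hedge ratio that underlies both the OLS and GARCH strategies can be stated compactly; the notation below is mine rather than the thesis's, with spot and futures returns written as Δs_t and Δf_t:

```latex
% Static minimum-variance hedge ratio (the quantity OLS estimates):
h^{*} = \frac{\operatorname{Cov}(\Delta s_t,\, \Delta f_t)}{\operatorname{Var}(\Delta f_t)}
% Dynamic version implied by a bivariate GARCH model, built from conditional
% moments given the information set \mathcal{F}_{t-1}:
h_t^{*} = \frac{\operatorname{Cov}(\Delta s_t,\, \Delta f_t \mid \mathcal{F}_{t-1})}
               {\operatorname{Var}(\Delta f_t \mid \mathcal{F}_{t-1})}
```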
Abstract:
Reactive oxygen species (ROS) are produced by aerobic metabolism and react with biomolecules such as lipids, proteins and DNA; in high concentrations, they lead to oxidative stress. Among ROS, singlet oxygen (1O2) is one of the main species involved in oxidative stress and one of the most reactive forms of molecular oxygen. The exposure of some dyes, such as methylene blue (MB), to visible light (MB+VL) generates 1O2, and this is the principle underlying photodynamic therapy (PDT). 1O2 and other ROS have toxic and carcinogenic effects and have been associated with ageing, neurodegenerative diseases and cancer. Oxidative DNA damage is repaired mainly by the base excision repair (BER) pathway; however, recent studies have observed the involvement of nucleotide excision repair (NER) factors in the repair of this type of lesion. One of these factors is the Xeroderma Pigmentosum complementation group A (XPA) protein, which acts with other proteins in DNA damage recognition and in the recruitment of other repair factors. Moreover, oxidative agents such as 1O2 can induce gene expression. In this context, this study aimed to evaluate the response of XPA-deficient cells to treatment with photosensitized MB. For this purpose, we analyzed cell viability and the occurrence of oxidative DNA damage in cell lines proficient and deficient in XPA after treatment with MB+VL, and evaluated the expression of this protein in proficient and complemented cells. Our results indicate an increased resistance to the treatment in complemented cells and a higher level of oxidative damage in the deficient cell lines. Furthermore, the treatment was able to modulate XPA expression up to 24 hours afterwards. These results provide direct evidence for the involvement of NER factors in the repair of oxidative damage and contribute to a better understanding of the effects of PDT on the induction of gene expression.
Abstract:
Materials engineering encompasses processes and products from several areas of engineering, making it possible to design materials that meet the needs of new products. In this context, this work studies a system composed of a cement slurry and a geopolymer slurry, which may help solve an engineering problem directly involving the exploitation of oil wells subject to loss of circulation. To correct lost circulation, the use of granular materials, fibers, reduction of the drilling-fluid or cement-slurry density, and even surface- and downhole-mixed systems have already been proposed. In this work we propose a mixed two-slurry system: the first slurry is cement-based and the second geopolymer-based. The cement-based slurry was formulated with extenders at a low density of 12.0 ppg (1.438 g/cm³) and showed strong thixotropic characteristics. Nano silica was added at concentrations of 0.5, 1.0 and 1.5 gps (66.88, 133.76 and 200.64 L/m³), along with CaCl2 at concentrations of 0.5, 1.0 and 1.5%. The second system is a geopolymer-based slurry formulated from the molar ratios 3.5 (nSiO2/nAl2O3), 0.27 (nK2O/nSiO2), 1.07 (nK2O/nAl2O3) and 13.99 (nH2O/nK2O). Finally, the two systems were mixed with a view to their application in correcting lost circulation. To characterize the raw materials, XRD, XRF, FTIR and titration analyses were performed. Both systems were characterized by tests based on API RP 10B. Compressive strength tests were conducted after curing for 24 hours, 7 days and 28 days at 58 °C for the cement-based and geopolymer-based systems. Mixability tests and microstructural characterizations (XRD, SEM and TG) were performed on the mixtures. The results showed that nano silica combined with CaCl2 modified the rheological properties of the cement slurry, and from a concentration of 1.5 gps (200.64 L/m³) it was possible to obtain stable systems. Mixing the two systems changed the microstructure of the material: favoring the geopolymer formation rate hindered hydration of the C3S phase, so the production of C-S-H phases and portlandite was reduced. The mixability tests show that, owing to the reduced setting time of the mixture, the system can be applied to plug lost-circulation zones when mixed downhole.
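As an illustration of how such oxide molar ratios constrain a geopolymer formulation, the sketch below back-calculates relative molar amounts of each oxide from three of the quoted ratios; normalizing to 1 mol of Al2O3 is an illustrative choice, not taken from the thesis:

```python
# Back-calculate relative oxide molar amounts from three of the quoted ratios
# (the fourth, nK2O/nSiO2, then over-determines the system). Normalizing to
# 1 mol of Al2O3 is purely for illustration.
n_Al2O3 = 1.0
n_SiO2 = 3.5 * n_Al2O3            # from nSiO2/nAl2O3 = 3.5
n_K2O = 1.07 * n_Al2O3            # from nK2O/nAl2O3 = 1.07
n_H2O = 13.99 * n_K2O             # from nH2O/nK2O = 13.99 -> ~14.97 mol

print(f"per mol Al2O3: SiO2={n_SiO2}, K2O={n_K2O}, H2O={n_H2O:.2f}")
```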
Abstract:
Chemical admixtures, when properly selected and dosed, play an important role in obtaining adequate slurry systems for quality primary cementing operations. They help assure the proper operation of a well and reduce the costs attributed to corrective cementing jobs. Controlling the volume lost by filtration from the slurry to permeable zones is one of the most important requirements in an operation, and it is commonly achieved with chemical admixtures such as carboxymethylcellulose (CMC). However, problems related to temperature, salt tolerance and a secondary retarding effect are commonly reported in the literature. Given this scenario, the use of an aqueous dispersion of non-ionic polyurethane was proposed to control fluid loss, given its low ionic interaction with the free ions present in slurries in the wet state. This study therefore assesses the efficiency of the polyurethane in reducing fluid loss under different temperature and pressure conditions, as well as its synergistic effect with other admixtures. The temperatures and pressures used in the laboratory tests simulate the conditions of oil wells at depths of 500 to 1200 m. The polyurethane showed resistance to thermal degradation and stability in the presence of salts. With increasing polymer concentration there was a considerable decrease in the volume lost by filtration, and this remained effective even as the temperature increased.
Abstract:
This work presents a set of intelligent algorithms whose purpose is to correct calibration errors in sensors and to reduce the required frequency of their calibrations. The algorithms were designed using artificial neural networks because of their great capacity for learning, adaptation and function approximation. Two approaches are shown. The first uses multilayer perceptron networks to approximate the various shapes of the calibration curve of a sensor that drifts out of calibration at different points in time. This approach requires knowledge of the sensor's operating time, but this information is not always available. To overcome this requirement, a second approach using recurrent neural networks was proposed. Recurrent neural networks have a great capacity for learning the dynamics of the system on which they are trained, so they can learn the dynamics of a sensor's calibration drift. Knowing the sensor's operating time or its drift dynamics, it is possible to determine how far out of calibration a sensor is and to correct its measured value, thus providing a more exact measurement. The algorithms proposed in this work can be implemented in a Foundation Fieldbus industrial network environment, which offers good device programmability through its function blocks, making it possible to apply them to the measurement process.
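As a sketch of the second approach, the toy example below trains a small recurrent (Elman-type) network to track a sensor's slowly growing calibration drift from the sequence of readings, so the drift estimate can be subtracted from the measurement. The synthetic drift model, network size and use of PyTorch are assumptions, not the thesis design:

```python
# Minimal sketch: a tiny Elman RNN learns a sensor's drift dynamics from
# sequences of readings (supervised by reference values available during
# calibration), then its drift estimate is subtracted from new readings.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic sequences: drift grows slowly with operating time.
T, N = 50, 256                                           # sequence length, batch
t = torch.linspace(0, 1, T).unsqueeze(1).repeat(1, N)    # (T, N)
true_signal = torch.sin(8 * t) + torch.randn(T, N) * 0.01
drift = 0.3 * t ** 2                                     # slow decalibration
readings = true_signal + drift

class DriftEstimator(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):                                # x: (T, N, 1)
        h, _ = self.rnn(x)
        return self.out(h)                               # drift at every step

model = DriftEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = readings.unsqueeze(-1), drift.unsqueeze(-1)
for epoch in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Corrected measurement = raw reading minus estimated drift.
corrected = readings - model(x).squeeze(-1).detach()
```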
Abstract:
This work presents an experimental study of a high-frequency power supply that feeds the plasma torch to be used in the PLASPETRO project; it consists of two static converters built with Insulated Gate Bipolar Transistors (IGBTs). The drivers used to control these switches are triggered by a Digital Signal Processor (DSP) through optical fibers, to reduce problems with electromagnetic interference (EMI). The first stage is a pre-regulator in the form of a three-phase boost AC-to-DC converter with power factor correction, which is the main subject of this work, while the second stage is the high-frequency source itself: a series-resonant inverter composed of four (4) inverter cells, each operating in soft-switching mode at around 115 kHz and alternating with one another to supply the load (plasma torch) with an alternating current at 450 kHz. The first stage has the function of providing the series-resonant inverter with a DC voltage whose value is controlled from the power supplied by the utility, and of correcting the power factor of the system as a whole. The DC bus voltage at the output of the first stage is used to control the power transferred by the inverter to the load, and it may vary from 550 VDC to a maximum of 800 VDC. A proportional-integral (PI) controller was used to regulate the DC bus voltage level, and two other PI controllers were used for the currents in order to achieve unity power factor. Computational simulations were performed to assist in sizing and in predicting performance. All the control and the communication with the supervisory stage were implemented on a DSP.
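The cascaded control structure described above (one PI loop for the DC bus voltage setting the reference for inner PI current loops) can be sketched as follows; the gains, sample time and the crude first-order stand-in for the converter are placeholders, not values from the work:

```python
# Illustrative sketch of the cascaded PI structure: an outer PI regulates the
# DC bus voltage and produces the current reference; an inner PI tracks the
# input current for unity power factor. All numbers are made-up placeholders.

class PI:
    def __init__(self, kp, ki, dt, limit):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def step(self, error):
        # Rectangular integration with clamping as a crude anti-windup.
        self.integral += self.ki * error * self.dt
        self.integral = max(-self.limit, min(self.limit, self.integral))
        out = self.kp * error + self.integral
        return max(-self.limit, min(self.limit, out))

dt = 1e-5                                     # 100 kHz control rate (assumed)
voltage_loop = PI(kp=0.5, ki=50.0, dt=dt, limit=30.0)   # outputs current ref (A)
current_loop = PI(kp=0.1, ki=200.0, dt=dt, limit=1.0)   # outputs duty cycle

v_ref, v_bus, i_in = 700.0, 550.0, 0.0        # bus target within 550-800 VDC
for _ in range(1000):
    i_ref = voltage_loop.step(v_ref - v_bus)            # outer voltage loop
    duty = max(0.0, current_loop.step(i_ref - i_in))    # inner current loop
    # A real simulation would update v_bus and i_in from converter equations;
    # this first-order stand-in only exercises the two loops.
    i_in += (duty * 400.0 - i_in) * 0.01
    v_bus += (i_in - v_bus / 100.0) * dt * 1000.0
```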
Abstract:
The use of Field Programmable Gate Arrays (FPGAs) for developing digital control strategies in power electronics applications has aroused growing interest among researchers. This interest stems from the great advantages offered by FPGAs, which include lower design effort, high performance and highly flexible prototyping. This work proposes the development and implementation of a unified one-cycle controller for a boost PFC rectifier based on an FPGA. The controller can be applied to a total of twelve converters, six inverters and six rectifiers, defined by four single-phase VSI topologies and three voltage modulation types. The topologies considered in this work are full-bridge, interleaved full-bridge, half-bridge and interleaved half-bridge, while the modulations are classified as bipolar voltage modulation (BVM), unipolar voltage modulation (UVM) and clamped voltage modulation (CVM). The design is developed and prototyped using Matlab/Simulink® together with the DSP Builder library provided by Altera®. The proposed controller was validated with simulation and experimental results.
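The one-cycle control law itself is not spelled out in the abstract; as a conceptual sketch under my own assumptions, the snippet below shows the core mechanism: the controller integrates the switched variable over each switching period and turns the switch off the instant the running cycle average reaches the reference, forcing the average to match the reference within a single cycle:

```python
# Conceptual sketch of one-cycle control (OCC). The switched level, reference
# and period are illustrative, not from the thesis or its FPGA design.

def one_cycle_duty(v_switched, v_ref, t_s, n_steps=1000):
    """Duty ratio chosen by an ideal OCC integrator/comparator in one cycle.

    v_switched: value of the switched variable while the switch is on
    v_ref:      reference for the cycle average
    t_s:        switching period
    """
    dt = t_s / n_steps
    integral = 0.0
    for k in range(n_steps):
        integral += v_switched * dt / t_s     # running cycle average so far
        if integral >= v_ref:                 # comparator trips: switch off
            return (k + 1) / n_steps
    return 1.0                                # reference not reached: full duty

# With a 400 V switched level and a 100 V cycle-average target, OCC picks d = 0.25.
d = one_cycle_duty(v_switched=400.0, v_ref=100.0, t_s=1e-5)
print(f"duty ratio = {d:.3f}")
```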
Abstract:
This research analyzes the text-correction practices of teachers at the first, second and third levels. It started from discussions with Portuguese-language teachers of the third-year classes of a public school located in the city of Assu, RN. The study draws on the theoretical postulates of Cruz (2007), Dellagnelo (1998), Oliveira (2005), Pécora (1999), Ruiz (2001), Serafini (1989), among others. The methodology is qualitative and interpretive in nature; the material consisted of reports from the teachers as well as 92 texts collected between July and August 2008. The data show that correction is configured as a practical task aimed at helping students improve their writing. The teachers correct in a mixed fashion, that is, orthographic, lexical and other corrections appear in the texts, but correction predominantly concerns the ideas, the content of the text. In this sense, the teachers attend to the value and hierarchy of the ideas discussed by the students, recognizing them as the semantic organization and sequencing of the text. All the other aspects (structural, grammatical) are important; however, in general, in the correction practice studied, the exposition of ideas occupies the most important place. The correction marks appear in the form of short notes, which also assess all the stages of the writing of the text.
Abstract:
Through a careful examination of the relationship between Zoroastrianism and the Western tradition, and a detailed and critical reading of the writings of Nietzsche, this work aims at showing to what extent the character "Zarathustra", his discourses and poetical-philosophical thoughts, and related passages from many distinct Nietzschean works, directly or indirectly reflect a philosophy that harvests contributions from the Zoroastrian tradition or its developments (in the Judeo-Greco-Christian tradition, and furthermore in the whole Western philosophical tradition). Supplied with these provisions, and with the interpretation cast upon them, Nietzschean philosophy questions the entire Western tradition of thought and proposes its replacement by a new attitude towards life. This work also intends to show how the Nietzschean Zarathustra was built up in the writings of the German philosopher, together with the idea of making the namesake of the ancient Iranian prophet (Zarathushtra or Zoroaster, the founder of Zoroastrianism) the herald of that important text that intended to bring the German language to "its highest perfection", clumping together, and leading to a prophetic-poetic climax consonant with "the meaning of the Earth", Nietzsche's key ideas about the rectification of "the most fatal of errors" and about "the death of God". An elaborate investigation was pursued into the reasons and manner of the construction of Nietzsche's Zarathustra as a mirror of its Iranian namesake (sections 1.1 to 1.6), and a survey of the works of Nietzsche suggested unquestionable relations with the Zoroastrian tradition, mostly through the Jewish, Greek or Christian repercussions of this tradition. These relations were put in context, in many framings (sections 2.1 to 2.3.2), within the ambit of "the most fatal of errors" - the creation of morals - in the very occasion of its transposition to metaphysics (Ecce Homo, "Why I Am a Destiny", 3). Through an evaluation of the possible circumstances and repercussions of "the death of God", the relations between Nietzsche's writings and the Zoroastrian tradition were investigated (sections 3.1 to 3.7), allowing this event to be understood as an essential component, and tragic outcome, of the rectification of "the most fatal of errors".
Abstract:
In this work we study the survival cure rate model proposed by Yakovlev et al. (1993), based on a structure of competing risks concurring to cause the event of interest, and the approach proposed by Chen et al. (1999), in which covariates are introduced to model the risk amount. We focus on covariates measured with error, considering the corrected score method in order to obtain consistent estimators. A simulation study evaluates the behavior of the estimators obtained by this method in finite samples. The simulation aims to identify the impact not only on the regression coefficients of the covariates measured with error (Mizoi et al., 2007) but also on the coefficients of covariates measured without error. We also verify the adequacy of the piecewise exponential distribution for the cure rate model with measurement error. Finally, applications of the model to real data are presented.
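For context, the promotion-time cure model of Yakovlev et al. (1993), with covariates introduced as in Chen et al. (1999), is usually written as below; the notation is mine, not the thesis's:

```latex
% N ~ Poisson(theta) latent competing causes, each with promotion time
% having c.d.f. F(t); the event occurs at the first promotion time.
S_{\text{pop}}(t) = P(T > t) = \exp\{-\theta\, F(t)\},
\qquad \theta = \exp(\mathbf{x}^{\top}\boldsymbol{\beta}),
% so the cure fraction is the long-run survivor probability:
\lim_{t \to \infty} S_{\text{pop}}(t) = e^{-\theta}.
```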
Abstract:
In this work, calibration models were constructed to determine the total lipid and moisture contents of powdered milk samples. For this purpose, near-infrared diffuse-reflectance spectroscopy was used in combination with multivariate calibration. Initially, the spectral data were submitted to multiplicative scatter correction (MSC) and Savitzky-Golay smoothing. The samples were then divided into subgroups by hierarchical cluster analysis (HCA) with the Ward linkage criterion. This made it possible to build partial least squares (PLS) regression models for the calibration and prediction of the total lipid and moisture contents, based on values obtained by the reference methods, Soxhlet extraction and oven drying at 105 °C, respectively. We conclude that NIR performed well for the quantification of powdered milk samples, mainly by minimizing analysis time, not destroying the samples and producing no waste. The prediction model for total lipids showed a correlation coefficient (R) of 0.9955 and an RMSEP of 0.8952, with an average error between Soxhlet and NIR of ±0.70%, while the moisture prediction model showed an R of 0.9184, an RMSEP of 0.3778 and an error of ±0.76%.
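A minimal sketch of the described pipeline (MSC, Savitzky-Golay smoothing, then PLS regression), assuming synthetic spectra and scikit-learn/SciPy; the parameter choices are illustrative, not the thesis settings:

```python
# Sketch of the preprocessing + PLS pipeline: multiplicative scatter
# correction, Savitzky-Golay smoothing, PLS regression. Synthetic data only.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
spectra = rng.random((n_samples, n_wavelengths))
lipid_content = rng.uniform(20.0, 30.0, n_samples)   # fake reference values (%)

def msc(X):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = X.mean(axis=0)
    out = np.empty_like(X)
    for i, row in enumerate(X):
        slope, intercept = np.polyfit(ref, row, 1)   # row ~ slope*ref + intercept
        out[i] = (row - intercept) / slope
    return out

X = msc(spectra)
X = savgol_filter(X, window_length=11, polyorder=2, axis=1)

pls = PLSRegression(n_components=5)
pls.fit(X, lipid_content)
predicted = pls.predict(X).ravel()
rmsec = np.sqrt(np.mean((predicted - lipid_content) ** 2))
print(f"RMSEC = {rmsec:.3f}")
```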
Abstract:
This work combines the potential of near-infrared (NIR) spectroscopy with chemometrics in order to determine the content of diclofenac tablets without destroying the sample, using ultraviolet spectroscopy, one of the official methods, as the reference method. In the construction of the multivariate calibration models, several types of preprocessing of the NIR spectral data were studied, such as scatter correction and the first derivative. The regression method used in the construction of the calibration models was PLS (partial least squares), applied to NIR spectra of a set of 90 tablets divided into two sets (calibration and prediction): 54 samples were used for calibration and 36 for prediction, since the calibration procedure used full cross-validation, which eliminates the need for a separate validation set. The models were evaluated by observing the correlation coefficient R², the RMSEC (root mean square error of calibration) and the RMSEP (root mean square error of prediction). The values predicted for the remaining 36 samples were consistent with those obtained by UV spectroscopy.
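The "full cross-validation" mentioned above can be illustrated with leave-one-out cross-validation on the calibration set, which is what removes the need for a separate validation set; the data and the number of PLS components below are assumptions:

```python
# Sketch of full (leave-one-out) cross-validation for a PLS model on the
# 54-sample calibration set. Spectra and assay values are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X_cal = rng.random((54, 200))                 # 54 calibration spectra
y_cal = rng.uniform(95.0, 105.0, 54)          # fake assay values (% of label claim)

pls = PLSRegression(n_components=4)
y_loo = cross_val_predict(pls, X_cal, y_cal, cv=LeaveOneOut()).ravel()
rmsecv = np.sqrt(np.mean((y_loo - y_cal) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```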
Abstract:
Students' mistakes, viewed from a didactic and pedagogical perspective, are a phenomenon inevitably observed in any context in which formal teaching-and-learning processes take place. Researchers have shown that such mistakes are most often seen as undesirable, frequently as a consequence of inattention or poor commitment on the part of the student, and are rarely considered didactically useful. The object of our reflections in this work is exactly those mistakes, which are born in the entrails of the teaching-and-learning process. It is our understanding that a mistake constitutes a tool that mediates knowledge and may therefore become a strong ally of the instructor's teaching tasks, and thus should be given the teacher's best consideration. Understanding mistakes in this way, we postulate that the teacher must face them as a possibility to be exploited rather than as a negative occurrence; such an attitude on the teacher's part would undoubtedly render profitable didactic situations. To deepen this understanding, we undertook a case study on the perceptions of senior undergraduate students in the Mathematics program at UFRN in the second term of 2009. The reason for this choice is that Mathematics is the field that traditionally presents the poorest records in terms of school grades. In this work we present data associated with the ENEM, the UFRN Vestibular and the undergraduate courses in Mathematics. The theoretical matrices supporting our reflections in this thesis follow the ideas proposed by Castorina (1988); Davis and Espósito (1990); Aquino (1997); Luckesi (2006); Cury (1994; 2008); Pinto (2000); and Torre (2007). To carry out the study, we applied a semi-structured questionnaire containing 14 questions, 10 of which were open questions. The questions were methodologically based on thematic analysis, one of the techniques for content analysis schemed by Bardin (1977), and the computer program Modalisa 6.0 (software designed by faculty at the University of Paris VIII) was also used. The results indicate that most of the teacher-training instructors, in their pedagogical practice, view the mistakes made by their students only as a guide for grading, and in this procedure the student is frequently labeled as guilty. The conclusive analyses therefore signal the necessity of orienting teacher-training instructors toward building a new theoretical contemplation of students' mistakes and their pedagogical potentialities, making those professionals perceive the importance of such mistakes, since they reveal gaps in the learning process and provide valuable avenues for teaching procedures.