991 results for CNPQ::CIENCIAS EXATAS E DA TERRA::MATEMATICA APLICADA E ESTATÍSTICA
Abstract:
This work performed effluent degradation studies using electrochemical treatment. The electrochemical oxidation (EO) of hydroquinone (H2Q) was carried out in acid medium on a PbO2 electrode by galvanostatic electrolysis, applying current densities of 10 and 30 mA/cm2. The H2Q concentration was monitored by differential pulse voltammetry (DPV). The experimental results showed that the performance of the galvanostatic electrolysis depends significantly on the applied current density, achieving removal efficiencies of 100% and 80% when applying 30 and 10 mA/cm2, respectively. Furthermore, the electroanalytical technique proved effective as a method for detecting H2Q. To test the efficiency of the PbO2 electrode, the electrochemical treatment was also applied to a real effluent: leachate from a landfill. The liquid leachate (600 mL of effluent) was treated in a batch electrochemical cell, with or without addition of NaCl, applying 7 mA/cm2. The efficiency of EO was assessed by the removal of thermotolerant coliforms, total organic carbon (TOC), total phosphorus and metals (copper, cobalt, chromium, iron and nickel). The results showed efficient removal of coliforms (100%), and the concentration of heavy metals was further decreased by cathodic processes. However, the TOC results were not satisfactory, with low total removal of the dissolved organic load. Because this is a complex effluent, additional tests were carried out on it to monitor a larger number of decontamination parameters (turbidity, total solids, color, conductivity, total organic carbon (TOC), and the metals barium, chromium, lithium, manganese and zinc), comparing the efficiency of two types of electrochemical treatment (EO and electrocoagulation) operated in flow mode, using a flow cell.
In the case of EO, Ti/IrO2–Ta2O5 was used as the anode, whereas in the electrocoagulation process aluminum electrodes were used, applying current densities of 10, 20 and 30 mA/cm2 in the presence and absence of NaCl as electrolyte. The results showed that EO using the Ti/IrO2–Ta2O5 anode was efficient when Cl- was present in the effluent. In contrast, flow electrocoagulation reduced the dissolved organic matter in the effluent under certain experimental conditions.
Abstract:
Climate variability and change have generated great concern worldwide; one of the major issues is global warming, which may be affecting the availability of water resources in irrigated perimeters. In the semiarid region of Northeastern Brazil there is a known predominance of drought, but not enough is known about trends in climate series of combined water loss by evaporation and transpiration (evapotranspiration). Therefore, the main objective of this study was to analyze whether there is evidence of an increase and/or decrease in the regime of reference evapotranspiration (ETo), at monthly, annual and interdecadal scales, in the irrigated-agriculture hubs of Juazeiro, BA (9°24'S, 40°26'W, 375.5 m) and Petrolina, PE (09°09'S, 40°22'W, 376 m). Daily meteorological data were provided by EMBRAPA Semiárido for the period from 01/01/1976 to 12/31/2014, and daily ETo was estimated using the standard Penman-Monteith method (EToPM) as parameterized by Smith (1991). Other, more simplified estimation methods were also calculated and compared to EToPM: Solar Radiation (EToRS), Linacre (EToL), Hargreaves and Samani (EToHS) and the Class A pan method (EToTCA). The main statistical analyses were non-parametric tests of homogeneity (run test), trend (Mann-Kendall), trend magnitude (Sen) and trend onset detection (Mann-Whitney). The adopted statistical significance was 5% and/or 1%. Analysis of Variance (ANOVA) was used to detect significant differences between interdecadal means. For comparison between the ETo methods, the correlation test (r), Student's t test and Tukey's test at the 5% significance level were used. Finally, the Willmott et al. (1985) statistics were used to evaluate the agreement index and the performance of the simplified methods against the standard method.
The main results show a decrease in the EToPM time series in the irrigated areas of Juazeiro, BA and Petrolina, PE, significant at 1% and 5% respectively, with annual magnitudes of -14.5 mm (Juazeiro) and -7.7 mm (Petrolina) and a trend onset in 1996. The methods with the best agreement with EToPM were EToRS, with very good performance in both locations, followed by EToL, with good performance in Juazeiro and median performance in Petrolina. EToHS had the worst performance (bad) in both locations. It is suggested that this decrease in EToPM may be associated with the increase in irrigated agricultural areas and with the construction of the Sobradinho reservoir upstream of the perimeters.
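The trend analysis described above can be sketched with a minimal pure-Python implementation of the Mann-Kendall test and Sen's slope estimator. This is a simplified version (no tie correction in the variance term), and the input series below is synthetic, not the EToPM data:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test plus Sen's slope on a 1-D series.

    Returns (S, Z, sen_slope). A negative Z and slope indicate a
    decreasing trend, as reported for the EToPM series.
    Simplified: the variance term omits the tie correction.
    """
    n = len(series)
    # S statistic: sum of signs of every pairwise difference
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # Sen's slope: median of all pairwise slopes
    slopes = sorted(
        (series[j] - series[i]) / (j - i)
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    m = len(slopes)
    sen = slopes[m // 2] if m % 2 else (slopes[m // 2 - 1] + slopes[m // 2]) / 2
    return s, z, sen

# Synthetic annual series with a built-in decline of about -10 units/year
eto = [1900 - 10 * t + (3 if t % 2 else -3) for t in range(30)]
s, z, sen = mann_kendall(eto)
print(s, round(z, 2), round(sen, 1))  # strongly negative S and Z; slope near -10
```

A |Z| above 1.96 corresponds to significance at the 5% level (1% requires |Z| above 2.58), which is how the annual magnitudes of -14.5 and -7.7 mm would be judged significant.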
Abstract:
Educational games can act as a complementary tool in teaching and learning, since they play an important role in the student's interaction with knowledge, encourage interrelationships between students, and motivate them in the pursuit of knowledge. This research analyzed the exams of distance-education students from the 2011.2 and 2012.2 semesters in order to catalog the mistakes on stereochemistry content made by students who attended the Chemistry of Life course, and, from the identified mistakes, to develop an educational game, "Walking through Stereochemistry," addressing that content. Stereochemistry was chosen because of the small number of works found in the literature and because it is an organic chemistry topic that generates learning difficulties, as it requires mental visualization and manipulation of molecular structures, in addition to observation and comparison skills from the students. The game was applied in the Chemistry of Life course at the Nova Cruz campus in the 2013.2 semester, with the intention of verifying the viability and applicability of this tool for developing students' motivation in the pursuit of knowledge, as well as complementing the didactic materials of distance education. An opinion questionnaire was then made available on the course page to the participants of the game, as a way of investigating their opinions about the proposed strategy. To diagnose the game's contributions to student learning, a comparative analysis was made of the stereochemistry questions in the tests applied in the 2013.2 semester to participating and non-participating students at the Nova Cruz campus, also comparing them with the tests applied at the Extremoz, Currais Novos, Lajes and Caicó campuses.
Thus the game can be considered an important resource to complement the teaching materials of distance education, because it awakened motivation for the pursuit of knowledge and contributed to the learning of stereochemistry content.
Abstract:
Hexavalent chromium is a heavy metal present in various industrial effluents that, depending on its concentration, may cause irreparable damage to the environment and to humans. In this context, this study applied electrochemical methods to determine and remove hexavalent chromium (Cr6+) in simulated wastewater. For determination, cathodic stripping voltammetry (CSV) was applied using an ultra-trace graphite working electrode, an Ag/AgCl reference electrode and a platinum counter electrode; the samples were complexed with 1,5-diphenylcarbazide and then analyzed. For the removal of Cr6+, an electrocoagulation (EC) process with Fe and Al electrodes was applied. The variables of the 2^4 factorial design used to optimize the EC process were: current density (5 and 10 mA.cm-2), temperature (25 and 60 ºC), concentration (50 and 100 ppm) and agitation rate (400 and 600 RPM). Preliminary tests confirmed the adequacy of CSV for determining the Cr6+ removed during the EC process. The Fe and Al sacrificial anodes showed satisfactory results in the EC process; however, Fe achieved complete removal in 30 min, whereas with Al it occurred at 240 min. Applying the 2^4 factorial design and response surface methodology, it was possible to optimize the EC process for removal of Cr6+ in H2SO4 solution (0.5 mol.L-1), in which temperature, with a positive effect, was the variable with the highest statistical significance compared with the other variables and interactions; in the optimization of the EC process for removal of Cr6+ in NaCl solution (0.1 mol.L-1), current density, with a positive effect, and concentration, with a negative effect, were the variables with the greatest statistical significance compared with the other variables and interactions.
The use of NaCl and Na2SO4 as supporting electrolytes showed no significant differences; however, NaCl produced faster Cr6+ removal kinetics, and increasing the NaCl concentration increased the conductivity of the solution, resulting in lower energy consumption. Evaluation of electrode wear throughout the EC processes showed that Al in H2SO4 solution (0.5 mol.L-1) undergoes anodization during the EC process, so its experimental mass loss is less than the theoretical mass loss; Fe in the same medium, in contrast, showed an experimental mass loss greater than the theoretically estimated one. This is due to the spontaneous reaction of Fe with H2SO4; when the reaction medium was NaCl or Na2SO4, the experimental mass loss approached the theoretical mass loss. Furthermore, the energy consumption of all processes involved in this study corresponded to a low operating cost, making the EC process viable for treating industrial effluents. The results were satisfactory: complete removal of Cr6+ was achieved in all processes used in this study.
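The comparison between experimental and theoretical mass loss above rests on Faraday's law, m = I·t·M/(z·F). A minimal sketch, using illustrative operating values rather than the thesis's actual experimental conditions:

```python
# Theoretical anode mass loss in electrocoagulation via Faraday's law.
F = 96485.0  # Faraday constant, C/mol

def faraday_mass_loss(current_a, time_s, molar_mass, z):
    """Theoretical dissolved anode mass (grams) for a given charge."""
    return current_a * time_s * molar_mass / (z * F)

# Example: 0.5 A applied for 30 minutes (illustrative values only)
t = 30 * 60
m_al = faraday_mass_loss(0.5, t, 26.98, 3)  # Al -> Al3+ + 3e-
m_fe = faraday_mass_loss(0.5, t, 55.85, 2)  # Fe -> Fe2+ + 2e-
print(round(m_al, 4), round(m_fe, 4))  # 0.0839 0.2605
```

An experimental loss above this theoretical value (as observed for Fe in H2SO4) signals a chemical dissolution route in parallel with the electrochemical one; a loss below it (Al in H2SO4) signals competing anodization.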
Abstract:
Diesel fuel is one of the leading petroleum products marketed in Brazil, and its quality is monitored by specialized laboratories linked to the National Agency of Petroleum, Natural Gas and Biofuels (ANP). The main assays evaluating the physicochemical properties of diesel are listed in ANP resolutions No. 65 of December 9th, 2011 and No. 45 of December 20th, 2012, which determine the specification limits for each parameter and the analysis methodologies that must be adopted. However, the prescribed methods, although well consolidated, require dedicated equipment with high acquisition and maintenance costs, as well as technical expertise to perform the assays. The development of faster, lower-cost alternative methods has been the focus of many researchers. In this perspective, this work assessed the applicability of mathematical equations from the specialized literature and of artificial neural networks (ANNs) for determining diesel fuel specification parameters. A total of 162 diesel samples with maximum sulfur contents of 50, 500 and 1800 ppm, analyzed in a specialized laboratory using the ASTM methods recommended by the ANP (810 assays in total), were used in this study. Experimental results of atmospheric distillation (ASTM D86) and density (ASTM D4052) of the diesel samples were used as basic input variables for the evaluated equations. The ANNs were applied to predict the flash point, cetane number and sulfur content (S50, S500, S1800); feed-forward backpropagation and generalized regression network architectures were tested, varying the parameters of the input matrix in order to determine the set of variables and the network type best suited to predicting the variables of interest.
The results obtained by the equations and the ANNs were compared with the experimental results using the nonparametric Wilcoxon test and Student's t test at a 5% significance level, as well as the coefficient of determination and the percentage error; an error of 27.61% was obtained for the flash point using a specific equation. The cetane number was obtained by three equations, all of which showed good correlation coefficients, especially the equation based on the aniline point, with the lowest error of 0.816%. The ANNs for predicting the flash point and the cetane number showed results considerably superior to those of the mathematical equations, with errors of 2.55% and 0.23%, respectively. Among the samples with different sulfur contents, the ANNs best predicted S1800, with an error of 1.557%. In general, feed-forward networks proved superior to generalized regression networks.
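The percentage error and coefficient of determination used in the comparison can be computed as below; the cetane values here are illustrative placeholders, not data from the study:

```python
def pct_error(pred, obs):
    """Mean absolute percentage error between predictions and measurements."""
    return 100.0 * sum(abs(p - o) / abs(o) for p, o in zip(pred, obs)) / len(obs)

def r_squared(pred, obs):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

cetane_obs = [45.0, 48.0, 50.0, 52.0]   # hypothetical measured cetane numbers
cetane_pred = [45.2, 47.9, 50.3, 51.8]  # hypothetical model predictions
print(round(pct_error(cetane_pred, cetane_obs), 3))   # 0.409
print(round(r_squared(cetane_pred, cetane_obs), 4))   # 0.9933
```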
Abstract:
Various physical systems have dynamics that can be modeled by percolation processes. Percolation is used to study issues ranging from fluid diffusion through disordered media to the fragmentation of a computer network caused by hacker attacks. A common feature of all these systems is the presence of two non-coexistent regimes associated with certain properties of the system; for example, a disordered medium may or may not allow the flow of a fluid depending on its porosity. The change from one regime to the other characterizes the percolation phase transition. The standard way of analyzing this transition uses the order parameter, a variable related to some characteristic of the system that is zero in one of the regimes and nonzero in the other. The proposal introduced in this thesis is that this phase transition can be investigated without explicit use of the order parameter, through the Shannon entropy instead. This entropy is a measure of the degree of uncertainty in the information content of a probability distribution. The proposal is evaluated in the context of cluster formation in random graphs, and the method is applied to both classical (Erdős–Rényi) percolation and explosive percolation. It is based on computing the entropy of the cluster-size probability distribution, and the results show that the critical point of the transition is related to the derivatives of the entropy. Furthermore, the difference between the smooth and abrupt character of the classical and explosive percolation transitions, respectively, is reinforced by the observation that the entropy has a maximum at the critical point of the classical transition, while no such correspondence occurs in the explosive percolation.
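The entropy-based analysis can be illustrated with a small sketch: build Erdős–Rényi graphs around the critical mean degree c = 1 and compute the Shannon entropy of the cluster-size probability distribution. This is a toy version of the method (naive O(n^2) edge sweep, small n), not the thesis's implementation:

```python
import math
import random

def er_graph_components(n, p, rng):
    """Component sizes of an Erdős–Rényi G(n, p) graph via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return list(sizes.values())

def shannon_entropy(sizes):
    """Entropy of the probability that a random node lies in each cluster."""
    total = sum(sizes)
    return -sum((s / total) * math.log(s / total) for s in sizes)

rng = random.Random(42)
n = 200
entropies = {}
# Sweep the mean degree c = p * (n - 1) through the critical point c = 1
for c in (0.5, 1.0, 2.0):
    comps = er_graph_components(n, c / (n - 1), rng)
    entropies[c] = shannon_entropy(comps)
    print(c, round(entropies[c], 3))
```

Below the transition the nodes are spread over many small clusters (high entropy); above it the giant component concentrates most of the probability mass, so the entropy drops, which is the behavior the derivative-based criterion exploits.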
Abstract:
Soft skills and teamwork practices were identified as the main deficiencies of recent graduates of computer courses. This issue led to a qualitative study aimed at investigating the challenges faced by professors of those courses in conducting, monitoring and assessing collaborative software development projects. Professors reported different challenges, including difficulties in assessing students at both the collective and individual levels. In this context, a quantitative study was conducted with the aim of mapping students' soft skills to a set of indicators that can be extracted from software repositories using data mining techniques. These indicators are intended to measure soft skills such as teamwork, leadership, problem solving and the pace of communication. A peer assessment approach was then applied in a collaborative software development course of the software engineering major at the Federal University of Rio Grande do Norte (UFRN). This research presents a correlation study between the students' soft skill scores and indicators based on mining software repositories. The study contributes by: (i) presenting professors' perceptions of the difficulties and opportunities for improving management and monitoring practices in collaborative software development projects; (ii) investigating relationships between soft skills and activities performed by students, using software repositories; (iii) encouraging the development of soft skills and the use of software repositories among software engineering students; and (iv) contributing to the state of the art of three important areas of software engineering, namely software engineering education, educational data mining and human aspects of software engineering.
Abstract:
Studies carried out in several countries have confirmed students' difficulty in explaining the causes of the seasons of the year, and most of the time their learning takes place incorrectly. The seasons are generally treated in textbooks apart from people's daily routine, based on the heliocentric system, which demands abstraction to understand the phenomenon. Given this difficulty, it is necessary to devise a teaching proposal that allows students to perceive environmental characteristics and their changes over time, as well as the seasons themselves. Thus, our goal was to work from the perspective of an observer on the terrestrial surface, that is, using the topocentric system. To that end, we constructed a didactic sequence, grounded in Ausubel's meaningful learning theory (2003) and in Moreira's critical meaningful learning theory (2010), which was applied to 9th-grade elementary school and 2nd-grade high school students at Escola Estadual Jerônimo Arantes, in Uberlândia, Minas Gerais, taking into account their previous knowledge and alternative conceptions, collected via interviews. Afterwards, to evaluate the applied methodology, we conducted new interviews, through which we observed improvement in learning about the characteristics of the seasons based on the Sun's apparent path, which we attribute to the change of observation reference frame and to the means of obtaining data on rainfall volume and average temperature in the city throughout the year. On the other hand, some points were not consolidated in learning, such as the link between winter and the rainy season and the causes of the seasons, points left to be discussed in future investigations.
Abstract:
This work presents a study of the Baars-Franklin architecture, which defines a model of computational consciousness, and its use in a mobile robot navigation task. The insertion of mobile robots into dynamic environments makes navigation tasks highly complex; to deal with constant environment changes, it is essential that the robot can adapt to this dynamism. The approach used in this work is to bring the execution of these tasks closer to how human beings react to the same conditions, by means of a model of computational consciousness. The LIDA architecture (Learning Intelligent Distribution Agent) is a cognitive system that seeks to model some human cognitive aspects, from low-level perceptions to decision making, including an attention mechanism and episodic memory. In the present work, a computational implementation of the LIDA architecture was evaluated through a case study, aiming to assess the capabilities of a cognitive approach to navigation of a mobile robot in dynamic and unknown environments, with experiments both in virtual environments (simulation) and with a real robot in a realistic environment. The study concluded that it is possible to obtain benefits from using conscious cognitive models in mobile robot navigation tasks, and presents the positive and negative aspects of this approach.
Abstract:
The large number of opinions generated by online users has brought the old "word of mouth" to the virtual world. Besides being numerous, many useful reviews are mixed with a large number of fraudulent, incomplete or duplicate ones. How, then, can we find the features that influence the number of votes an opinion receives and identify useful reviews? The opinion mining literature offers several studies and techniques capable of analyzing properties found in review texts. This work applies a methodology for evaluating the usefulness of opinions, aiming to identify which characteristics have more influence on the number of votes: basic utility features (e.g., ratings of the product and/or service, date of publication), textual features (e.g., length of words and paragraphs) and semantic features (e.g., the meaning of the words in the text). The evaluation was performed on a database extracted from TripAdvisor with opinions about hotels written in Portuguese. The results show that users pay more attention to recent opinions with higher scores for the hotel's value and location and lower scores for sleep quality, service and cleanliness. Texts with positive opinions, short words, and few adjectives and adverbs increase the chances of receiving more votes.
Abstract:
The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms "merge" similar pages detected across multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM), using different guest operating systems (Linux and Windows) and different workloads (synthetic and real), are presented herein. The results show significant performance differences between the hypervisors depending on the guest system workloads and execution time.
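A toy model of the content-based page sharing described above: pages with identical content (found by content hash) are backed by a single physical frame. This sketch ignores copy-on-write faults and scanning cost, and the page contents are illustrative:

```python
import hashlib

def share_pages(vm_pages):
    """Map identical page contents across VMs to one physical frame.

    Returns (total_virtual_pages, physical_frames_needed).
    """
    frames = {}
    total = 0
    for pages in vm_pages.values():
        for content in pages:
            total += 1
            key = hashlib.sha256(content).hexdigest()
            frames.setdefault(key, content)  # first copy becomes the shared frame
    return total, len(frames)

# Two VMs running the same guest OS share many identical pages
vm_pages = {
    "vm1": [b"kernel-text", b"libc", b"app-A-heap"],
    "vm2": [b"kernel-text", b"libc", b"app-B-heap"],
}
total, physical = share_pages(vm_pages)
print(total, physical)  # 6 virtual pages backed by 4 physical frames
```

Real implementations (e.g., KVM's Kernel Samepage Merging) scan memory incrementally and must also verify full page contents before merging, since hashes alone can collide; the saving here (2 of 6 pages) mirrors why identical guest systems benefit most.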
Abstract:
Background: the software effort estimation research area aims to improve the accuracy of estimates in software projects and activities. Aims: this study describes the development and use of a web application to collect data generated by the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of revising previous estimates when making similar estimates in a Planning Poker context. Method: software activities were estimated by computing students at the Universidade Tecnológica Federal do Paraná (UTFPR) using Planning Poker, with and without revising previous similar activities, storing data about the decision-making process. The collected data were used to investigate the impact that revising similar completed activities has on the accuracy of software effort estimates. Results: the UTFPR students were divided into 14 groups; eight of them showed an accuracy increase in more than half of their estimates, three had nearly the same accuracy in more than half of their estimates, and only three lost accuracy in more than half of their estimates. Conclusion: reviewing similar completed software activities when using Planning Poker led to more accurate software estimates in most cases and can therefore improve the software development process.
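One common way to quantify estimation accuracy is the magnitude of relative error (MRE); the abstract does not name the metric used, so the sketch below is only illustrative, with hypothetical effort values:

```python
def magnitude_relative_error(estimate, actual):
    """MRE = |actual - estimate| / actual, a standard accuracy measure
    for effort estimates (illustrative; not necessarily the study's metric)."""
    return abs(actual - estimate) / actual

# Hypothetical effort estimates (hours) for one activity, made with and
# without first revising a similar completed activity
actual = 10.0
without_review = 16.0
with_review = 12.0
print(round(magnitude_relative_error(without_review, actual), 2))  # 0.6
print(round(magnitude_relative_error(with_review, actual), 2))     # 0.2
```

Lower MRE means a more accurate estimate, so a group whose MRE drops after reviewing similar activities would count toward the "accuracy increase" outcome reported above.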