Abstract:
We propose a new approach to the reduction and abstraction of visual information for robot vision applications. Essentially, we use a multi-resolution representation combined with a moving fovea to reduce the amount of information extracted from an image. We introduce a mathematical formalization of the moving-fovea approach and mapping functions that support the use of this model. Two indexes (resolution and cost) are proposed to help choose the model variables. With this theoretical approach, it is possible to apply several filters, compute disparity, and obtain motion analysis in real time (less than 33 ms to process an image pair on a notebook with an AMD Turion Dual Core 2 GHz processor). As the main result, the moving fovea most of the time allows the robot to keep a possible region of interest visible in both images without physically moving its devices. We validate the proposed model with experimental results.
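A minimal sketch of what a multi-resolution moving-fovea representation can look like, assuming a fixed number of concentric levels that grow in coverage but keep a constant sampling grid; the level count and window size are illustrative choices, not the formalization proposed in the abstract.

```python
import numpy as np

def foveate(image, center, levels=4, window=64):
    """Build a simple multi-resolution moving-fovea representation.

    Each level covers a region around `center` that doubles in size,
    but is always downsampled to roughly `window` x `window` samples,
    so the data per level stays constant while coverage grows.
    """
    cy, cx = center
    h, w = image.shape[:2]
    pyramid = []
    for k in range(levels):
        half = (window << k) // 2              # covered region grows with the level
        y0, y1 = max(0, cy - half), min(h, cy + half)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        region = image[y0:y1, x0:x1]
        step = 2 ** k                          # coarser sampling at outer levels
        pyramid.append(region[::step, ::step])
    return pyramid

# usage: re-center the fovea on a tracked region of interest at each frame
frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
levels = foveate(frame, center=(240, 320))
print([lvl.shape for lvl in levels])
```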
Abstract:
The incorporation of industrial automation in the medical area requires mechanisms for the safe and efficient establishment of communication between biomedical devices. One solution to this problem is MP-HA (Multicycles Protocol to Hospital Automation), which lays down a network segmented by beds and coordinated by an element called the Service Provider. The goal of this work is to model this Service Provider and to carry out a performance analysis of the activities it executes in the establishment and maintenance of hospital networks.
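A minimal sketch of one way to estimate the performance of a coordinator such as the Service Provider, modelled here, purely as an assumption and not as the thesis model, as an M/M/1 queue serving connection-establishment requests from beds; arrival and service rates are hypothetical.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Return utilization, mean requests in system, and mean response time (M/M/1)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be < service rate")
    rho = arrival_rate / service_rate          # utilization
    in_system = rho / (1 - rho)                # mean number of requests in the system
    response = in_system / arrival_rate        # mean response time (Little's law)
    return rho, in_system, response

# hypothetical figures: 20 beds issuing 0.5 requests/s each, provider serves 15 req/s
rho, l, w = mm1_metrics(arrival_rate=20 * 0.5, service_rate=15.0)
print(f"utilization={rho:.2f}, in-system={l:.2f}, response={w * 1000:.1f} ms")
```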
Abstract:
Several mobile robots show non-linear behavior, mainly due to friction phenomena between the mechanical parts of the robot or between the robot and the ground. Linear models are efficient in some cases, but it is necessary to take the robot's non-linearity into consideration when precise displacement and positioning are desired. In this work, a parametric model identification procedure is proposed for a differential-drive mobile robot that considers the dead-zone in the robot actuators. The method consists of dividing the system into Hammerstein subsystems and then using the key-term separation principle to express the input-output relations, which exposes the parameters of both the linear and the non-linear blocks. The parameters are then simultaneously estimated through a recursive least squares algorithm. The results show that it is possible to identify the dead-zone thresholds together with the linear parameters.
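A simplified sketch, not the thesis's exact formulation, of recursive least-squares identification of a Hammerstein system whose static non-linearity is a dead-zone, using a key-term-style over-parameterization; first-order dynamics, the noise level, and the true parameter values are assumptions for illustration.

```python
import numpy as np

def deadzone(u, dp, dm):
    return np.where(u > dp, u - dp, np.where(u < -dm, u + dm, 0.0))

rng = np.random.default_rng(0)
N = 2000
a_true, b_true, dp_true, dm_true = 0.7, 0.5, 0.3, 0.2

u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + b_true * deadzone(u[t - 1], dp_true, dm_true) \
           + 0.01 * rng.standard_normal()

theta = np.array([0.1, 0.1, 0.0, 0.0])    # [a, b, b*dp, b*dm]
P = np.eye(4) * 1000.0
lam = 0.999                                # forgetting factor

for t in range(1, N):
    b_hat = theta[1] if abs(theta[1]) > 1e-6 else 1e-6
    dp_hat, dm_hat = theta[2] / b_hat, theta[3] / b_hat
    hp = 1.0 if u[t - 1] > dp_hat else 0.0   # switching terms use current estimates
    hm = 1.0 if u[t - 1] < -dm_hat else 0.0
    phi = np.array([y[t - 1], u[t - 1] * (hp + hm), -hp, hm])
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (y[t] - phi @ theta)
    P = (P - np.outer(k, phi @ P)) / lam     # standard RLS covariance update

a, b = theta[0], theta[1]
print(f"a={a:.3f}, b={b:.3f}, dp={theta[2] / b:.3f}, dm={theta[3] / b:.3f}")
```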
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the stages that make up a seismic study. Seismic processing, in particular, focuses on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that delivered higher storage and processing capabilities, enabling more sophisticated processing algorithms such as those that exploit parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time consuming, due to the heuristics of the mathematical algorithms and the large volume of input and output data involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
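A minimal sketch of the speedup and efficiency analysis mentioned above, together with an Amdahl's-law extrapolation to larger core counts; the run times and the parallel fraction below are hypothetical placeholders, not measurements from the thesis.

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(s, n_threads):
    return s / n_threads

def amdahl(parallel_fraction, n_threads):
    """Upper bound on speedup when only part of the code is parallelized."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_threads)

t1 = 3600.0                                   # serial RTM run time (s), illustrative
timings = {2: 1900.0, 4: 1000.0, 8: 560.0}    # illustrative parallel run times (s)

for n, tn in timings.items():
    s = speedup(t1, tn)
    print(f"{n} threads: speedup={s:.2f}, efficiency={efficiency(s, n):.2f}")

# expected ceiling if, say, 95% of the migration kernel is parallelizable
for n in (16, 64, 256):
    print(f"Amdahl bound at {n} threads: {amdahl(0.95, n):.1f}x")
```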
Abstract:
Hard metals are composites developed in 1923 by Karl Schröter, widely applied because of their high hardness, wear resistance, and toughness. They are composed of a brittle WC phase and a ductile Co phase. The mechanical properties of hardmetals are strongly dependent on the microstructure of the WC-Co and are additionally affected by the microstructure of the WC powders before sintering. An important feature is that toughness and hardness increase simultaneously with the refinement of the WC. Therefore, the development of nanostructured WC-Co hardmetal has been extensively studied. There are many methods to manufacture WC-Co hard metals, including the spray conversion process, co-precipitation, the displacement reaction process, mechanochemical synthesis, and high-energy ball milling. High-energy ball milling is a simple and efficient way of manufacturing fine nanostructured powders. In this process, the continuous impacts on the powders promote pronounced changes: the brittle phase is refined down to the nanometric scale and incorporated into the ductile matrix, while the ductile phase is deformed, re-welded, and hardened. The goal of this work was to investigate the effects of high-energy milling time on the microstructural changes of the WC-Co particulate composite, particularly on the refinement of the crystallite size and on the lattice strain. The starting powders were WC (average particle size D50 0.87 μm), supplied by Wolfram Bergbau- u. Hütten GmbH, and Co (average particle size D50 0.93 μm), supplied by H.C. Starck. A mixture of 90% WC and 10% Co was processed in a planetary ball mill for 2, 10, 20, 50, 70, 100, and 150 hours, with a ball-to-powder ratio (BPR) of 15:1 at 400 rpm. The starting powders and the milled particulate composite samples were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM) to identify phases and morphology. The crystallite size and lattice strain were measured by Rietveld's method, which allowed more precise information to be obtained about the influence of each factor on the microstructure. The results show that high-energy milling is an efficient manufacturing process for the WC-Co composite, and that the milling time has a great influence on the microstructure of the final particles, crushing the WC to nanometric size and dispersing it finely in the Co particles.
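The thesis extracts crystallite size and lattice strain via Rietveld refinement; as a lighter-weight illustration of the same two quantities, the sketch below applies a Williamson-Hall fit to XRD peak broadening, a different (simpler) technique than the one used in the work. The peak positions and widths are hypothetical, not measured values.

```python
import numpy as np

K = 0.9                     # Scherrer shape factor
lam = 0.15406               # Cu K-alpha wavelength, nm

two_theta_deg = np.array([35.6, 48.3, 64.0, 73.1])   # WC reflections (illustrative)
fwhm_deg = np.array([0.45, 0.55, 0.70, 0.80])        # peak widths (illustrative)

theta = np.radians(two_theta_deg) / 2.0
beta = np.radians(fwhm_deg)

# Williamson-Hall: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
x = 4.0 * np.sin(theta)
y = beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)

crystallite_size_nm = K * lam / intercept
lattice_strain = slope
print(f"crystallite size ~ {crystallite_size_nm:.1f} nm, strain ~ {lattice_strain:.4f}")
```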
Abstract:
To obtain process stability and a quality weld bead, an adequate parameter set is necessary: base current and time, pulse current and pulse time, because these influence the mode of metal transfer and the weld quality in pulsed MIG (MIG-P), sometimes requiring special power sources with synergic modes and external control to achieve this stability. This work aims to analyze and compare the effects of pulse parameters and droplet size on arc stability in MIG-P. Four sets of pulse parameters were analyzed: Ip = 160 A, tp = 5.7 ms; Ip = 300 A, tp = 2 ms; Ip = 350 A, tp = 1.2 ms; and Ip = 350 A, tp = 0.8 ms. Each was analyzed with three different droplet diameters: equal to, larger than, and smaller than the diameter of the wire electrode. For purposes of comparison, the same relation between the average current and the welding speed was kept, generating a constant (Im / Vs = K) for all parameters. Bead-on-plate welds were made with MIG-P at a constant contact-tip-to-workpiece distance (DBCP); subsequently, bead-on-plate welds were made on a plate inclined at 10 degrees to vary the DBCP, which made it possible to assess how MIG-P behaved in this situation and to evaluate MIG-P with adaptive control aimed at maintaining constant arc stability. High-speed filming synchronized with current and voltage acquisition (oscillograms) was also carried out for better interpretation of the transfer mechanism and better evaluation of process stability. It is concluded that parameter sets 3 and 4 exhibited greater versatility; droplet diameters equal to or slightly smaller than the wire diameter exhibited better stability due to their higher detachment frequency, and detachment from the droplet base did not harm the maintenance of the arc height.
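A minimal sketch of the averaging involved in keeping Im/Vs = K constant across the four pulse-parameter sets, using the standard mean current of a rectangular pulse waveform; the base current, base time, and the K value below are assumptions, only the pulse currents and pulse times come from the abstract.

```python
def mean_current(ip, tp, ib, tb):
    """Average current of a rectangular pulse waveform (A)."""
    return (ip * tp + ib * tb) / (tp + tb)

pulse_sets = [                 # (Ip [A], tp [ms]) for the four parameter packets
    (160, 5.7), (300, 2.0), (350, 1.2), (350, 0.8),
]
ib, tb = 60.0, 8.0             # assumed base current (A) and base time (ms)
K = 30.0                       # assumed ratio Im/Vs, A per (cm/min)

for ip, tp in pulse_sets:
    im = mean_current(ip, tp, ib, tb)
    vs = im / K                # welding speed that keeps Im/Vs constant
    print(f"Ip={ip} A, tp={tp} ms -> Im={im:.0f} A, Vs={vs:.2f} cm/min")
```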
Abstract:
Among the main challenges in industrial beer production is supplying the market at the lowest cost and with high quality, in order to meet the expectations of customers and consumers. The fermentation stage represents approximately 70% of the total time necessary for beer production, requiring strict process control to prevent it from becoming a bottleneck. This stage is responsible for the formation of a series of by-products, which make up the aroma/bouquet of the beer; some of these by-products, if produced in larger quantities, confer unpleasant taste and odor on the final product. Among the by-products formed during fermentation, total vicinal diketones are the main component, since they limit the transfer of the product to the subsequent steps, besides having a low perception threshold for the consumer and imparting undesirable taste and odor. Because of the variable quality of the main raw materials and of the process controls during fermentation, developing alternative forms of beer production without impacting total fermentation time and final product quality is a great challenge for breweries. In this work, a prior acidification of the yeast slurry was carried out with food-grade phosphoric acid, reducing the yeast pH from about 5.30 to 2.20 and changing its behavior from flocculent to powdery during fermentation. A six-fold increase was observed in the number of yeast cells in suspension in the second fermentation stage compared with fermentations using yeast without prior acidification. By altering two input variables, the temperature curve and cell multiplication, with the goal of minimizing the maximum diketone values detected in the fermenter tank, the diacetyl peak was reduced, contributing to a reduction in fermentation time and total process time. Several experiments were performed with these process changes to verify their influence on the total fermentation time and on the total vicinal diketone concentration at the end of fermentation. The best production result was a total fermentation time of 151 hours and a total vicinal diketone concentration of 0.08 ppm. The mass of yeast in suspension in the second phase of fermentation increased from 2.45 x 10^6 to 16.38 x 10^6 cells/mL, which is key to greater efficiency in reducing the total vicinal diketones in the medium, confirming that prior yeast acidification, together with control of temperature and yeast cell multiplication during fermentation, enhances diketone reduction and consequently reduces the total fermentation time while keeping the diketone concentration below the expected maximum (0.10 ppm).
Abstract:
The optimization and control of a chemical process are strongly correlated with the quantity of information that can be obtained from the system. In biotechnological processes, where the transforming agent is a cell, many variables can interfere in the process, leading to changes in the microorganism's metabolism and affecting the quantity and quality of the final product. Therefore, continuous monitoring of the variables that interfere in the bioprocess is crucial in order to act on certain variables of the system, keeping it under desirable operational conditions and under control. In general, during a fermentation process, the analysis of important parameters such as substrate, product, and cell concentrations is done off-line, requiring sampling, pretreatment, and analytical procedures. These steps therefore demand significant run time and high-purity chemical reagents. In order to implement a real-time monitoring system for a benchtop bioreactor, this study was conducted in two steps: (i) the development of software providing a communication interface between the bioreactor and a computer, based on data acquisition and recording of the process variables, namely pH, temperature, dissolved oxygen, level, foam level, agitation frequency, and the input setpoints of the operational parameters of the bioreactor control unit; (ii) the development of an analytical method using near-infrared spectroscopy (NIRS) to enable monitoring of substrate, product, and cell concentrations during a fermentation process for ethanol production with the yeast Saccharomyces cerevisiae. Three fermentation runs (F1, F2, and F3) were conducted and monitored by NIRS, with subsequent sampling for analytical characterization. The data obtained were used for calibration and validation, with pre-treatments, combined or not with smoothing filters, applied to the spectral data. The most satisfactory results were obtained when the calibration models were constructed from real samples of culture medium taken from the fermentation assays F1, F2, and F3, showing that the NIRS-based analytical method can be used as a fast and effective way to quantify cell, substrate, and product concentrations, which enables the implementation of in-situ real-time monitoring of fermentation processes.
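A minimal sketch of an NIRS calibration model of the kind described above. PLS regression and Savitzky-Golay smoothing with a first derivative are assumptions (common choices for NIR calibration, not necessarily the pre-treatments used in the work), and the spectra are synthetic stand-ins for the F1-F3 culture-medium samples.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
ethanol = rng.uniform(0, 80, n_samples)                    # g/L, reference values
peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 120) / 8.0) ** 2)
spectra = ethanol[:, None] * peak + rng.normal(0, 0.5, (n_samples, n_wavelengths))

# pre-treatment: smoothing + first derivative along the wavelength axis
pretreated = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    pretreated, ethanol, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3)
pls.fit(X_train, y_train)
pred = pls.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"RMSEP = {rmsep:.2f} g/L")
```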
Abstract:
During the salt production process, the first salt crystals formed are disposed of as industrial waste. This waste is basically composed of gypsum, calcium sulfate dihydrate (CaSO4·2H2O), known as "carago cru" or "malacacheta". After being submitted to a calcination process to produce plaster (CaSO4·0.5H2O), it can be applied in the cement industry. This work aims to optimize the time and temperature of the calcination process of the gypsum (carago) in order to obtain beta plaster meeting the specifications of the civil construction standards. The experiments involved the chemical and mineralogical characterization of the gypsum (carago) from the crystallizers and of the plaster produced in a salt industry located in Mossoró, through the following techniques: X-ray diffraction (XRD), X-ray fluorescence (XRF), thermogravimetric analysis (TG/DTG), and scanning electron microscopy (SEM) with EDS. For the optimization of the time and temperature of the calcination process, a three-level factorial design was used, with response surfaces for the compressive strength tests and setting time, according to the standard NBR-13207 (Plasters for civil construction), together with X-ray diffraction of the beta plasters (carago) obtained by calcination. The STATISTICA 7.0 software was used to fit the experimental data to a statistical model. The calcination of the gypsum (carago) was studied in the temperature range from 120 °C to 160 °C and in the time range from 90 to 210 minutes, in an oven at atmospheric pressure. It was found that, with a temperature of 160 °C and a calcination time of 210 minutes, the compressive strength tests gave values above 10 MPa, which conform to the required standard (> 8.40 MPa), and that the X-ray diffractograms show the predominance of the beta hemihydrate phase, yielding a good-quality beta plaster that complies with the standards in force and giving this by-product of the salt industry employability in civil construction.
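A minimal sketch of the response-surface fit behind a 3x3 factorial design (temperature and time versus compressive strength). The strength values below are hypothetical placeholders, and the quadratic model is a generic choice; the thesis performed this step with STATISTICA 7.0.

```python
import numpy as np

temps = np.array([120, 140, 160])        # °C
times = np.array([90, 150, 210])         # minutes
T, t = np.meshgrid(temps, times, indexing="ij")
T, t = T.ravel(), t.ravel()

strength = np.array([6.5, 7.2, 7.8,      # illustrative MPa values, one per run
                     7.4, 8.3, 9.1,
                     8.2, 9.4, 10.3])

# quadratic response surface: y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
X = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T * t])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

def predict(temp_c, time_min):
    x = np.array([1.0, temp_c, time_min, temp_c**2, time_min**2, temp_c * time_min])
    return x @ coef

print(f"predicted strength at 160 °C / 210 min: {predict(160, 210):.1f} MPa")
```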
Abstract:
The interdisciplinary nature of Astronomy makes it a field with great potential for exploring various scientific concepts. However, studies show a widespread lack of understanding of fundamental subjects, including the models that explain phenomena marking everyday life, such as the phases of the Moon. Particularly in the context of distance education, learning such models can be favored by the use of information and communication technologies. Among other possibilities, we highlight the importance of digital materials that motivate students and expand the available forms of representing phenomena and models. It is also important, however, that these materials promote the explicitation of students' conceptions, as well as interaction with the most central aspects of the astronomical model for the phenomenon. In this dissertation we present a hypermedia module aimed at learning about the phases of the Moon, developed from an investigation of the difficulties with the subject during an Astronomy course for teacher training at undergraduate level at UFRN. The tests of three semesters of the course were analyzed, also taking into account the alternative conceptions reported in the astronomy education literature. The product makes use of short texts, questions, images, and interactive animations. It emphasizes questions about the illumination of the Moon and other bodies and their relationship to the Sun, the perception from different angles of objects illuminated by a single source, the cause of the alternation between day and night, the identification of the Moon's orbit around the Earth, the occurrence of the phases as a result of the position from which the Moon is observed, and the perception of the time scales involved in the phenomenon. The module incorporated considerations obtained from interviews with students at two centers where face-to-face support is provided for students of the course, and with subjects from different pedagogical contexts. The final version of the material was used in a real learning situation, as supplementary material for the final test of the discipline. The material was analyzed by 7 students and 4 tutors, among 56 users, in the period in question. Most students considered that the so-called "Lunar Module" made a difference in their learning; the animations were considered the most prominent aspect, the images were described as stimulating and enlightening, and the text as informative and enjoyable. The analysis of these students' learning, based on their responses to questions raised in the final evaluation, suggested gains in key aspects of the understanding of the phases, but also indicated more persistent difficulties. The work leads us to conclude that it is important to seek contributions to the training of science teachers using new technologies, treating the computer as a complementary resource. The interviews that preceded the use of the module, and whether the student approached the module with questions and/or previous conflicts, made a great difference in the effective contribution of the material, indicating that it should be used with the mediation of a teacher or tutor, or through strategies that foster interactions between students. It is desirable that these interactions be associated with the recovery of the subjects' memories of previous observations and models, as well as with the stimulus to new observations of the phenomena.
Abstract:
This dissertation proposes to study the issue of withdrawal from the undergraduate physics degree at the Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte (IFRN) and to contribute suggestions for dealing with this problem. The first chapter begins with an overview of two significant problems in the Brazilian educational system: the high dropout rates in physics degrees and the lack of teachers with specific training in this science. We then discuss the relevance of this research to the area of physics teaching and justify its completion as part of a professional master's degree. Next, we present a working definition of the term withdrawal, based on the existing problem at the IFRN. In the same chapter, we make explicit the focus, the objectives, and the methodological aspects of this work. The results obtained in our investigation are presented in the next four chapters. In the second chapter, we present a brief history of the creation of the IFRN physics degree, the functioning of this course, and the formation of the 2004.2 and 2006.1 classes. We also show a kind of map of the withdrawal of the groups investigated (the dropout rate was 84.4% in both groups) and an analysis of the relationship between the curricula of each group and the number of dropouts. In the third chapter, we display descriptive statistics of the students who dropped out and find that the largest dropout occurred among students who are women, married, parents of one child, workers, who entered with a minimum age of 23 years and had completed high school at least 6 years earlier. In the fourth chapter, we present and discuss the students' reports on the causes of their dropout. From the data presented, we can say that the answer to the question "What was the main reason for your dropout?" lies mainly in personal reasons: another option of higher-level course and lack of time to devote to the course. In the fifth chapter, we show the results related to teachers' opinions about the phenomenon in question. We detected three main causes for the abandonment, according to the teachers: lack of dedication, lack of interest, and lack of integration in the course. In the sixth and final chapter, we discuss the results and present our conclusions and the proposed report, the product of this dissertation, presented as an Annex. This report mainly contains suggestions for curricular and institutional actions that can contribute to reducing dropout from the physics degree at the IFRN. The main actions suggested are: implementation of the curriculum in disciplines, implementation of programs or actions to remedy poor basic-training content, implementation of specific programs or actions for the working student, and dissemination of the IFRN physics degree in schools through seminars or workshops.
Abstract:
This dissertation aims at characterizing the practices as well as the effects of a teacher's feedback in oral interaction with students in an English language classroom in the 6th grade of a primary school in Açu/RN, Brazil. The study is based on the research of Vygotsky (1975) and Bruner (1976), who state that the learning process is constructed through interaction between a more experienced individual (teacher, parents, friends) and a learner who plays an active role as a re-constructor of knowledge. It is also based on the studies of Ur (2006) and Brookhart (2008), among other authors in Applied Linguistics, who argue that the feedback process needs to be evaluative and formative, since it interfaces with both students' autonomy and learning improvement. Our study follows qualitative, quantitative, and interpretive approaches, in which the natural environment (the classroom) is the direct source of the data, generated through field observations and note-taking as well as through the transcription of five audio-taped English classes. The study shows the following results: the teacher still seems to accept patterns of classroom interaction that correspond to the IRE process (Initiation, Response, Evaluation) in behaviorist patterns: (1) he speaks and determines the turns of speech; (2) he asks most of the questions and directs the activities most of the time; (3) his feedback presents the following types: questioning, modeling, repeated response, praise, depreciation, positive/negative, and sarcastic feedback, whose functions are to assess students' performance based on the rightness or wrongness of their responses. This implies that the feedback does not seem to help students improve in terms of acquiring knowledge, because of its normative effects/roles. It is therefore the teacher's role to give evaluative and formative feedback so that the student can advance in learning the language and constructing knowledge.
Abstract:
This work focuses mainly on classroom interaction, specifying aspects of the linguistic-discursive organization in the joint production of the teacher's and students' speech, materialized in turns, highlighting the question-answer pair in the Portuguese Language class. To achieve this objective, we drew inspiration from studies on the organization of interaction that adopted the perspective of interactional studies and the ethnographic approach in order to make knowledge explicit in teaching and learning spaces, among them the research of Galvão (1996, 2004) and Matêncio (2001). In this direction, we describe the process of classroom interaction in a public school, analyzing and interpreting the language actions carried out by the teacher and the students. Theoretically, we rely mainly on Conversation Analysis, anchored in the pioneering study by Sacks, Schegloff and Jefferson ([1974] 2003), the postulates of Marcuschi ([1986] 2007a), and the research of Kerbrat-Orecchioni (2006), among others. We set out a typology of classroom questions and answers, regarding their form and function, according to the theoretical postulates of Stubbs (1987), Araújo (2003), Fávero, Andrade and Aquino (2006), Silva (2006) and Koshik (2010). We analyze the organization of turn-taking, followed by an investigation of questions and answers in face-to-face discourse. In an attempt to understand the everyday life of those involved in the classroom setting, we adopted the ethnographic approach and the inductive method, in the perspectives of André (2010) and Chizzotti (2006). The data were generated through field research, by means of audio recordings of Portuguese Language classes, later transcribed and turned into the research corpus. The analyses showed that the interaction between teacher and students was organized in turn exchanges, most of the time controlled by the teacher, evidencing an asymmetric relationship between the participants. These turns, generally realized in the adjacent question-answer pair, revealed how the construction of knowledge takes place in the classroom. Finally, we observe that interaction in the Portuguese Language classroom is organized by intrinsically intertwined social and pedagogical aspects.
Abstract:
This research aims to understand the relationship between media, capitalism, and the appropriation of free time for leisure practices in industrial and post-industrial societies. We thus seek a conceptual framework that takes into account the kind of ideology that naturalizes the relationship of leisure with the foundations of contemporary media, and of the media only with leisure, forgetting their insertion in labor and industrial relations in society. We intend to demonstrate that every mode of production in the capitalist system entails a mode of reproduction. Methodologically, this is a first approximation, based on theoretical concerns already developed, constituting theoretical research of a bibliographic and descriptive character. The results lead us to the conclusion that the spheres of work and leisure tend to be less and less differentiated, since both remain activities of product management with the same intellective protocols, based on information and communication technology, and that, accordingly, the media favor an expansion of productive activity even during leisure time.
Abstract:
The research presented here, carried out in the domain of metaphysics, gathers presuppositions for an ontological grounding of information technology, based on the philosophy of Martin Heidegger, fundamentally on the existential analytic of Dasein in the work Being and Time. Starting from reflection on what is today, it investigates on which foundations the New Technology was erected, such that we are engaged in the project of digitization of beings which, at the same time as it destines man to the forgetting of Being, offers him the possibility of transformation. The relation between the question of Being and the question of technology is analyzed as crossing paths, and at this crossroads it becomes possible to think what technology is, what information is for Heidegger, and in what way the existential modes of Dasein are suited to characterizing man within information technology. Through this appropriation, it remains to think how it is possible to open a perspective of leading man back to the truth of Being. Finally, the structuring of the foundations makes possible the general discursive reflection: with what we occupy ourselves, how we are, and in what direction we are heading, which are, respectively, the general themes of the three chapters. The points of investigation of the first chapter are: (a) the precise characterization of Dasein, supported by considerations from Benedito Nunes, Hans-Georg Gadamer, Jacques Derrida and Rüdiger Safranski; (b) the concept of technology and its essence in Heidegger; (c) the distinction between technique and technology, supported by the thought of J. Ellul, Michel Séris, Otto Pöggeler, Michel Haar and Dominique Janicaud; (d) the concept of cybernetics in Heidegger and in Norbert Wiener; (e) the preliminary characterization of information, its etymological and philosophical analysis, Heidegger's view and the theories of Rafael Capurro; (f) the analysis of the phenomenon of the digitization of beings, considerations from Virilio, and the analysis of a concept of the virtual with Henri Bergson and Gilles Deleuze. In the second chapter, the analysis of the existentials of Dasein leads to a summary of the basic foundations for characterizing information technology as a philosophical problem. Finally, after presenting the introductory concepts that delimit the question, followed by the ontological indications and presuppositions found in Being and Time, the third chapter discusses the danger, that which saves, and serenity, the three key words of Heideggerian thought on technology that allow a concluding approach to the question.