108 results for "tempo, aspecto e modalidade"
Abstract:
We propose a new approach to the reduction and abstraction of visual information for robotic vision applications. Basically, we propose to use a multi-resolution representation in combination with a moving fovea to reduce the amount of information extracted from an image. We introduce the mathematical formalization of the moving-fovea approach and mapping functions that help to use this model. Two indexes (resolution and cost) are proposed that can be useful for choosing the model variables. With this new theoretical approach, it is possible to apply several filters, to calculate disparity and to obtain motion analysis in real time (less than 33 ms to process an image pair on a notebook with an AMD Turion Dual Core 2 GHz processor). As the main result, most of the time the moving fovea allows the robot to keep a possible region of interest visible in both images without physically moving its robotic devices. We validate the proposed model with experimental results.
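For illustration only, the sketch below builds a small multi-resolution stack of windows around a fovea point: every level halves the field of view of the previous one while keeping roughly the same number of retained pixels, so resolution grows towards the fovea. The function, its parameters and the decimation scheme are assumptions for this sketch and do not reproduce the formalization or the mapping functions proposed in the work.

```python
import numpy as np

def foveal_levels(image, fovea, n_levels=4, out_size=64):
    """Extract n_levels windows centred on the fovea; each level halves the
    field of view of the previous one and is decimated towards a common
    coarse grid, so pixel density grows as the windows approach the fovea."""
    h, w = image.shape[:2]
    fy, fx = fovea
    half = min(h, w) // 2                      # half-extent of the outermost level
    levels = []
    for _ in range(n_levels):
        y0, y1 = max(0, fy - half), min(h, fy + half)
        x0, x1 = max(0, fx - half), min(w, fx + half)
        window = image[y0:y1, x0:x1]
        step_y = max(1, int(np.ceil(window.shape[0] / out_size)))
        step_x = max(1, int(np.ceil(window.shape[1] / out_size)))
        levels.append(window[::step_y, ::step_x])
        half = max(out_size // 2, half // 2)   # shrink the field of view towards the fovea
    return levels

img = np.random.randint(0, 255, size=(480, 640), dtype=np.uint8)
for k, lv in enumerate(foveal_levels(img, fovea=(240, 320))):
    print(f"level {k}: {lv.shape[0]}x{lv.shape[1]} pixels")
```

A real implementation would move the fovea between frames and feed each level to the filtering, disparity and motion-analysis stages mentioned in the abstract.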
Abstract:
The incorporation of industrial automation into the medical field requires mechanisms for the safe and efficient establishment of communication between biomedical devices. One solution to this problem is MP-HA (Multicycles Protocol for Hospital Automation), which relies on a network segmented by beds and coordinated by an element called the Service Provider. The goal of this work is to model this Service Provider and to analyze the performance of the activities it executes in the establishment and maintenance of hospital networks.
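As a purely illustrative aid, the toy function below captures the kind of question such a performance analysis must answer: how the duration of one coordination cycle grows with the number of beds served by the Service Provider. All timing values are assumptions; the code does not implement MP-HA itself.

```python
def cycle_time_ms(n_beds, t_poll=2.0, t_process=0.5, t_overhead=1.0):
    """Time (ms) for a Service Provider to poll and process every bed once.
    All per-message times are assumed values, for illustration only."""
    return t_overhead + n_beds * (t_poll + t_process)

for n in (8, 16, 32, 64):
    print(f"{n:3d} beds -> cycle of {cycle_time_ms(n):5.1f} ms")
```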
Abstract:
The public illumination system of the city of Natal/RN presents recurring monitoring problems, since it is currently not possible to detect in real time which light bulbs remain on throughout the day, or which are off or burned out at night. These factors reduce the efficiency of the services provided and the use of energy resources, since energy, and consequently financial resources that could be applied to the public illumination system itself, is wasted. The purpose of this work is to create a prototype to replace the photoelectric relays currently used in public illumination, performing the same function as well as others: turning the light bulbs on and off remotely (control flexibility through specific supervisory algorithms), checking the light bulb status (on or off), and wireless communication with the system through the ZigBee® protocol. The development steps of this product and the tests carried out are reported as a way to validate and justify its use in public illumination.
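A minimal sketch, under assumed conventions, of how a supervisory application might encode and decode a lamp-status frame exchanged with such a prototype over the wireless link. The frame layout (lamp id, status byte, checksum) is hypothetical and is not the protocol developed in the work.

```python
import struct

def encode_status(lamp_id: int, is_on: bool) -> bytes:
    """Pack a lamp id (2 bytes), a status byte and a simple additive checksum."""
    payload = struct.pack(">HB", lamp_id, 1 if is_on else 0)
    checksum = sum(payload) & 0xFF
    return payload + bytes([checksum])

def decode_status(frame: bytes):
    """Unpack a status frame, rejecting it if the checksum does not match."""
    lamp_id, status = struct.unpack(">HB", frame[:3])
    if (sum(frame[:3]) & 0xFF) != frame[3]:
        raise ValueError("corrupted frame")
    return lamp_id, bool(status)

frame = encode_status(42, True)
print(decode_status(frame))   # -> (42, True)
```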
Abstract:
Several mobile robots show non-linear behavior, mainly due to friction phenomena between the mechanical parts of the robot or between the robot and the ground. Linear models are efficient in some cases, but it is necessary to take the robot's non-linearity into consideration when precise displacement and positioning are desired. In this work, a parametric model identification procedure for a mobile robot with differential drive that considers the dead-zone in the robot's actuators is proposed. The method consists of dividing the system into Hammerstein subsystems and then using the key-term separation principle to write the input-output relations, which expose the parameters of both the linear and non-linear blocks. The parameters are then simultaneously estimated through a recursive least-squares algorithm. The results show that it is possible to identify the dead-zone thresholds together with the linear parameters.
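A minimal sketch of the general idea, assuming a first-order linear block and a dead-zone with equal slopes: the dead-zone is rewritten with switching functions (key-term separation) so that its thresholds appear linearly in the regressor, and all parameters are then estimated simultaneously by recursive least squares. The model order, noise level and all numeric values are illustrative assumptions, not the robot model identified in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" system used only to generate data (illustrative values)
a1, b1 = 0.7, 0.5            # linear block: y(t) = a1*y(t-1) + b1*v(t-1)
m, cr, cl = 1.0, 0.3, -0.2   # dead-zone slope and right/left thresholds

def deadzone(u):
    if u > cr:
        return m * (u - cr)
    if u < cl:
        return m * (u - cl)
    return 0.0

N = 2000
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a1 * y[t-1] + b1 * deadzone(u[t-1]) + 1e-3 * rng.standard_normal()

# Recursive least squares with key-term separation.
# theta = [a1, b1*m, b1*m*cr, b1*m*cl]; the switching functions hr/hl use the
# running threshold estimates, so early samples are mis-classified but the
# estimates settle after a short transient.
theta = np.zeros(4)
P = 1e3 * np.eye(4)
for t in range(1, N):
    cr_hat = max(0.0, theta[2] / theta[1]) if theta[1] > 1e-3 else 0.0
    cl_hat = min(0.0, theta[3] / theta[1]) if theta[1] > 1e-3 else 0.0
    hr = 1.0 if u[t-1] > cr_hat else 0.0
    hl = 1.0 if u[t-1] < cl_hat else 0.0
    phi = np.array([y[t-1], u[t-1] * (hr + hl), -hr, -hl])
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k * (y[t] - phi @ theta)
    P = P - np.outer(k, phi @ P)

print("a1 =", round(theta[0], 3))
print("cr =", round(theta[2] / theta[1], 3), " cl =", round(theta[3] / theta[1], 3))
```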
Abstract:
Recent years have seen an increase in the acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of environments with massively parallel processing (MPP) and of distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that, acting as an extension of programming languages such as C, C++ and Fortran, allows parallel applications to be written. A fundamental aspect in the development of parallel applications is their performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be a rather complicated task, given the parameters and degrees of freedom involved in the implementation of a parallel application. One alternative is the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data related to the execution of the application, a stage called instrumentation. This work first presents a study of the main techniques used for collecting performance data, followed by a detailed analysis of the main available tools that can be used on Beowulf-cluster architectures running Linux on the x86 platform, with communication libraries based on MPI (Message Passing Interface) such as LAM and MPICH. This analysis is validated on parallel applications that deal with the training of perceptron neural networks using backpropagation. The conclusions show the potential and ease of use of the analyzed tools.
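As a small illustration of what instrumentation means in practice, the snippet below timestamps a computation phase and a communication phase of an MPI program. It uses the mpi4py Python bindings rather than the C API of LAM/MPICH discussed in the text, and it is manual instrumentation, not one of the tracing tools analyzed in the work.

```python
# Run with, e.g.:  mpiexec -n 4 python trace_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = np.full(1_000_000, rank, dtype=np.float64)   # per-process workload

t0 = MPI.Wtime()                                    # MPI wall-clock timer
local_sum = data.sum()                              # computation phase
t1 = MPI.Wtime()
total = comm.allreduce(local_sum, op=MPI.SUM)       # communication phase
t2 = MPI.Wtime()

print(f"rank {rank}: compute {t1 - t0:.6f} s, "
      f"allreduce {t2 - t1:.6f} s, global sum {total:.0f}")
```

Collecting and plotting such per-rank timings across many processes is essentially what the visualization tools examined in the work automate.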
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. The acquisition, processing and interpretation of seismic data are the parts that make up a seismic study. Seismic processing in particular is focused on imaging, that is, on representing the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry and also due to technological advances in hardware, which provided higher storage and digital information processing capabilities and enabled the development of more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes and other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy may be a very time-consuming process, due to the heuristics of the mathematical algorithms and the extensive amount of data input and output involved, and may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make the use of these methods unfeasible. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
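For context, the loop below is a serial numpy sketch of the second-order finite-difference update of the acoustic wave equation, the kind of kernel that dominates RTM forward and backward propagation and that the work parallelizes with OpenMP in its core. The grid, velocity model and source parameters are hypothetical illustrative values, not those of the thesis.

```python
import numpy as np

# Hypothetical grid, velocity model and source parameters (illustrative only)
nz, nx, nt = 200, 200, 500
dz = dx = 10.0                 # grid spacing (m)
dt = 1e-3                      # time step (s); v*dt/dx = 0.2, well inside stability
v = np.full((nz, nx), 2000.0)  # constant-velocity model (m/s)
f0, t0 = 25.0, 0.04            # Ricker source frequency (Hz) and delay (s)
src_z, src_x = 5, nx // 2

p_prev = np.zeros((nz, nx))
p_curr = np.zeros((nz, nx))
for it in range(nt):
    # Five-point Laplacian on the interior of the grid
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (
        (p_curr[2:, 1:-1] - 2 * p_curr[1:-1, 1:-1] + p_curr[:-2, 1:-1]) / dz**2
        + (p_curr[1:-1, 2:] - 2 * p_curr[1:-1, 1:-1] + p_curr[1:-1, :-2]) / dx**2
    )
    p_next = 2 * p_curr - p_prev + (v * dt) ** 2 * lap
    # Inject a Ricker wavelet at the source point
    arg = (np.pi * f0 * (it * dt - t0)) ** 2
    p_next[src_z, src_x] += (1 - 2 * arg) * np.exp(-arg)
    p_prev, p_curr = p_curr, p_next

print("max |p| after", nt, "steps:", float(np.abs(p_curr).max()))
```

In an OpenMP version, the spatial loops of exactly this update are the ones split across threads, which is why speedup and efficiency are measured on them.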
Abstract:
Hard metals are composites developed in 1923 by Karl Schröter, with wide application because of their high hardness, wear resistance and toughness. They are composed of a brittle phase (WC) and a ductile phase (Co). The mechanical properties of hard metals are strongly dependent on the microstructure of the WC-Co and are additionally affected by the microstructure of the WC powders before sintering. An important feature is that toughness and hardness increase simultaneously with the refining of the WC. Therefore, the development of nanostructured WC-Co hard metals has been extensively studied. There are many methods to manufacture WC-Co hard metals, including the spray conversion process, co-precipitation, the displacement reaction process, mechanochemical synthesis and high-energy ball milling. High-energy ball milling is a simple and efficient way of manufacturing fine powder with a nanostructure. In this process, the continuous impacts on the powders promote pronounced changes: the brittle phase is refined to the nanometric scale and brought into the ductile matrix, while the ductile phase is deformed, re-welded and hardened. The goal of this work was to investigate the effects of high-energy milling time on the microstructural changes of the WC-Co particulate composite, particularly the refinement of crystallite size and lattice strain. The starting powders were WC (average particle size D50 0.87 μm), supplied by Wolfram Bergbau- und Hütten GmbH, and Co (average particle size D50 0.93 μm), supplied by H.C. Starck. A mixture of 90% WC and 10% Co was milled in a planetary ball mill for 2, 10, 20, 50, 70, 100 and 150 hours, with a ball-to-powder ratio (BPR) of 15:1 at 400 rpm. The starting powders and the milled particulate composite samples were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM) to identify phases and morphology. The crystallite size and lattice strain were measured by Rietveld's method. This procedure allowed obtaining more precise information about the influence of each factor on the microstructure. The results show that high-energy milling is an efficient manufacturing process for the WC-Co composite, and that the milling time has a great influence on the microstructure of the final particles, crushing the WC to the nanometric order and finely dispersing it in the Co particles.
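As a simplified illustration of how crystallite size and lattice strain are extracted from diffraction line broadening, the sketch below applies the Williamson-Hall construction to hypothetical, instrument-corrected peak widths. The thesis itself uses Rietveld refinement, which is more rigorous; none of the numbers below are measured data.

```python
import numpy as np

# Hypothetical XRD peaks for a milled WC-Co powder (Cu K-alpha radiation)
lam = 0.15406                                   # wavelength (nm)
two_theta = np.array([35.6, 48.3, 64.0, 73.1])  # peak positions (deg)
fwhm_deg = np.array([0.45, 0.55, 0.70, 0.85])   # instrument-corrected widths (deg)

theta = np.radians(two_theta) / 2
beta = np.radians(fwhm_deg)

# Williamson-Hall: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
x = 4 * np.sin(theta)
y = beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)
D = 0.9 * lam / intercept    # crystallite size (nm), Scherrer constant K = 0.9
eps = slope                  # lattice micro-strain (dimensionless)
print(f"crystallite size ~ {D:.1f} nm, micro-strain ~ {eps:.4f}")
```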
Abstract:
Given the growing environmental crisis caused by degradation, mainly due to the use of polluting energy sources, the use of renewable energies has been growing worldwide, with emphasis on solar energy, an abundant resource available to everyone, which can be harnessed in several ways: electricity generation, food dehydration, heating, disinfection, distillation and cooking. The latter has as its primary feature the viability of clean, renewable energy for society, combating the ecological damage caused by the large-scale use of firewood for cooking food; it is well suited to tropical countries with high solar radiation and has been funded by NGOs throughout the world with the goal of reaching the low-income population. The proposed project consists of a concentrating solar cooker, working from the reflection of sunlight by a concentrator that converges it to a focal point at the bottom of the pot, producing a large amount of heat. The solar cooker under study consists of two elliptical reflecting paraboloids made from recycled scrap TV antennas, with 0.29 m² of surface area per antenna, covered by multiple 2 mm thick mirrors and mounted on a metal structure with correction for the apparent movement of the sun. This structure was built from recycled scrap metal and has a relatively low cost compared with other solar cookers, around US$ 50.00. This cost becomes negligible, since there is the great benefit of having no fuel cost for each meal, unlike the use of gas or firewood for cooking food. The tests show that the cooker reached a maximum temperature of 740 °C, boiling water in an average time of 28 minutes, cooking various types of food such as potatoes, rice and pasta in an average time of 45 minutes, and also working as a solar oven, baking pizza and meat. These cooking times do not differ much from the cooking times on a gas stove, which gives the solar cooker good consumer acceptance; furthermore, it does not release gases that can contaminate the food, as happens with the wood stove. This proves the viability of using the cooker to prepare two daily meals for a family, and its performance can still be improved with the addition of new materials, equipment and techniques.
Abstract:
To obtain process stability and a quality weld bead, an adequate set of parameters is necessary: base current and base time, pulse current and pulse time, because these influence the metal transfer mode and the weld quality in pulsed MIG (MIG-P), sometimes requiring special power sources with synergic modes and external control to achieve this stability. This work aims to analyze and compare the effects of the pulse parameters and of the droplet size on arc stability in MIG-P. Four sets of pulse parameters were analyzed: Ip = 160 A, tp = 5.7 ms; Ip = 300 A, tp = 2 ms; Ip = 350 A, tp = 1.2 ms; and Ip = 350 A, tp = 0.8 ms. Each set was analyzed with three different droplet diameters: equal to, larger than, and smaller than the diameter of the wire electrode. For purposes of comparison, the same relation between the average current and the welding speed was kept, generating a constant (Im/Vs = K) for all parameter sets. Bead-on-plate welding by simple deposition was performed for the MIG-P with a constant contact-tip-to-workpiece distance (DBCP); subsequently, bead-on-plate welding was performed on a plate inclined by 10 degrees to vary the DBCP, which made it possible to assess how the MIG-P behaved in this situation and, in addition, to evaluate the MIG-P with adaptive control, whose purpose is to maintain constant arc stability. High-speed filming synchronized with current and voltage acquisition (oscillograms) was also carried out for better interpretation of the transfer mechanism and better evaluation of the stability of the process. It is concluded that parameter sets 3 and 4 exhibited greater versatility; droplet diameters equal to or slightly smaller than the wire diameter exhibited better stability due to their higher detachment frequency; and detachment of the droplet at its base does not harm the maintenance of the arc height.
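For reference, the mean current of a pulsed waveform follows directly from the pulse and base phases; in the small, hypothetical example below only Ip and tp come from parameter set 1 of the abstract, while the base current and base time are assumed values chosen just to make the arithmetic concrete.

```python
def mean_current(Ip, tp, Ib, tb):
    """Mean current (A) of one MIG-P pulse period: time-weighted average of
    the pulse and base phases."""
    return (Ip * tp + Ib * tb) / (tp + tb)

# Ip and tp from parameter set 1; Ib and tb are assumed values for illustration.
Im = mean_current(Ip=160.0, tp=5.7, Ib=60.0, tb=10.0)
print(f"Im = {Im:.1f} A")   # fixing Im/Vs = K then ties the travel speed Vs to Im
```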
Abstract:
The proposed design provides an alternative, low-cost, box-type solar oven to be used for cooking, comprising three scrap tires so as to recycle them. The tires were coupled to each other, forming an enclosure whose bottom was covered by a parabola of multiple mirrors made from a urupema (an indigenous sieve), and the inner sides of the oven were lined with aluminum sheet painted black, obtained from beer cans, in order to increase the concentration of the solar radiation incident on the inside of the prototype studied. Two tires were attached leaving an air layer between them, with the function of thermal insulation. The third tire supports the other two and thermally insulates the bottom of the oven. Externally, a metal frame with flat mirrors was placed to reflect the incident rays into the oven, with mobility to correct for the apparent motion of the sun. Its primary feature is the viability of clean, renewable energy for society, tackling the ecological damage caused by the large-scale use of wood for cooking food. The tests show that the oven reached a maximum temperature of 123.8 °C and baked various foods, such as pizza, buns and lasagne, in an average time of 50 minutes. This proves the feasibility of using the oven, whose performance can still be improved with the addition of new materials, equipment and techniques.
Abstract:
Among the main challenges in industrial beer production is supplying the market at the lowest cost and with high quality, in order to meet the expectations of customers and consumers. The beer fermentation stage represents approximately 70% of the total time necessary for production, requiring strict process controls so that it does not become a bottleneck in beer production. This stage is responsible for the formation of a series of by-products, which are responsible for the aroma/bouquet composition of beer; some of these by-products, if produced in larger quantities, confer unpleasant taste and odor on the final product. Among the by-products formed during the fermentation stage, total vicinal diketones are the main component, since they are limiting for the transfer of the product to the subsequent steps, besides having a low perception threshold by the consumer and giving undesirable taste and odor. Due to the instability in the quality of the main raw materials and in the process controls during fermentation, developing alternative forms of beer production without impacting the total fermentation time and the final product quality is a great challenge for breweries. In this work, a prior acidification of the yeast slurry was carried out using food-grade phosphoric acid, reducing the yeast pH from about 5.30 to 2.20 and altering its character from flocculent to powdery during beer fermentation. A six-fold increase was observed in the number of yeast cells in suspension in the second fermentation stage compared with fermentations using yeast with no prior acidification. By changing two input variables, the temperature curve and cell multiplication, with the goal of minimizing the maximum diketone values detected in the fermenter tank, a reduction of the diacetyl formation peak was obtained, which contributed to reducing the fermentation time and the total process time. Several experiments were performed with these process changes in order to verify their influence on the total fermentation time and on the total vicinal diketone concentration at the end of fermentation. The best production result achieved was a total fermentation time of 151 hours and a total vicinal diketone concentration of 0.08 ppm. The mass of yeast in suspension in the second phase of fermentation increased from 2.45 x 10^6 to 16.38 x 10^6 cells/mL, which is key to greater efficiency in reducing the total vicinal diketones present in the medium, confirming that prior yeast acidification, as well as the control of temperature and of yeast cell multiplication in the fermentation process, enhances diketone reduction and consequently reduces the total fermentation time with a diketone concentration below the expected value (maximum: 0.10 ppm).
Abstract:
The optimization and control of a chemical process are strongly correlated with the amount of information that can be obtained from the system. In biotechnological processes, where the transforming agent is a cell, many variables can interfere in the process, leading to changes in the microorganism's metabolism and affecting the quantity and quality of the final product. Therefore, continuously monitoring the variables that interfere in the bioprocess is crucial in order to act on certain variables of the system, keeping it under desirable operational conditions and under control. In general, during a fermentation process, the analysis of important parameters such as substrate, product and cell concentrations is done off-line, requiring sampling, pretreatment and analytical procedures. These steps therefore require a significant run time and the use of high-purity chemical reagents. In order to implement a real-time monitoring system for a benchtop bioreactor, this study was conducted in two steps: (i) the development of software that provides a communication interface between the bioreactor and a computer, based on data acquisition and recording of the process variables (pH, temperature, dissolved oxygen, level, foam level and agitation frequency) and on the input of setpoints for the operational parameters of the bioreactor control unit; (ii) the development of an analytical method using near-infrared spectroscopy (NIRS) to enable monitoring of substrate, product and cell concentrations during a fermentation process for ethanol production using the yeast Saccharomyces cerevisiae. Three fermentation runs were conducted (F1, F2 and F3), monitored by NIRS with subsequent sampling for analytical characterization. The data obtained were used for calibration and validation, where pre-treatments, combined or not with smoothing filters, were applied to the spectral data. The most satisfactory results were obtained when the calibration models were constructed from real samples of culture medium taken from fermentation assays F1, F2 and F3, showing that the analytical method based on NIRS can be used as a fast and effective method to quantify cell, substrate and product concentrations, which enables the implementation of in-situ, real-time monitoring of fermentation processes.
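A minimal sketch of the second step's data path, assuming (for illustration only) Savitzky-Golay pre-treatment and a partial least squares (PLS) calibration, which are common choices for NIRS data. The spectra and reference values below are random stand-ins just to make the snippet runnable; they are not data from assays F1, F2 and F3, and the actual pre-treatments and regression model of the work may differ.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Stand-in data: rows are NIR spectra, y is e.g. an ethanol concentration (g/L)
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 400))        # 60 samples x 400 wavelengths
y = rng.uniform(0, 80, size=60)       # reference values from off-line analysis

# Pre-treatment: Savitzky-Golay smoothing combined with a first derivative
X_pre = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

X_cal, X_val, y_cal, y_val = train_test_split(X_pre, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5)
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
print(f"RMSEP = {rmsep:.2f} g/L")     # meaningful only with real spectra
```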
Abstract:
During the salt production process, the first salt crystals formed are disposed of as industrial waste. This waste is formed basically by gypsum, composed of calcium sulfate dihydrate (CaSO4.2H2O), known as "carago cru" or "malacacheta". After being submitted to a calcination process to produce plaster (CaSO4.0,5H2O), its application in the cement industry becomes possible. This work aims to optimize the time and temperature of the calcination process of the gypsum (carago) in order to obtain beta plaster according to the specifications of the civil construction standards. The experiments involved the chemical and mineralogical characterization of the gypsum (carago) from the crystallizers, and of the plaster produced in a salt industry located in Mossoró, through the following techniques: X-ray diffraction (XRD), X-ray fluorescence (XRF), thermogravimetric analysis (TG/DTG) and scanning electron microscopy (SEM) with EDS. For the optimization of the time and temperature of the calcination process, a three-level factorial design was used, with response surfaces for the compressive strength tests and the setting time, according to standard NBR-13207 (Plasters for civil construction), together with X-ray diffraction of the beta plasters (carago) obtained in the calcination. The STATISTICA 7.0 software was used for the calculations relating the experimental data to a statistical model. The optimization of the calcination of the gypsum (carago) covered the temperature range from 120 °C to 160 °C and times from 90 to 210 minutes, in an oven at atmospheric pressure. It was found that, with the higher values of temperature (160 °C) and calcination time (210 minutes), the compressive strength results were above 10 MPa, which conforms to the required standard (> 8.40 MPa), and the X-ray diffractograms showed the predominance of the beta hemihydrate phase, yielding a beta plaster of good quality that is in accordance with the standards in force and giving this by-product of the salt industry applicability in civil construction.
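To make the design-of-experiments step concrete, the sketch below fits a quadratic response surface to a hypothetical 3x3 factorial of calcination temperature and time versus compressive strength, using ordinary least squares in place of the STATISTICA 7.0 workflow used in the work. All numbers are illustrative, not the thesis data.

```python
import numpy as np

# Hypothetical 3x3 factorial: temperature (C), time (min), compressive strength (MPa)
T = np.array([120, 120, 120, 140, 140, 140, 160, 160, 160], dtype=float)
t = np.array([ 90, 150, 210,  90, 150, 210,  90, 150, 210], dtype=float)
y = np.array([6.1, 6.8, 7.4, 7.9, 8.8, 9.5, 8.6, 9.7, 10.4])

# Quadratic response-surface model: y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
X = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T * t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

x_new = np.array([1.0, 160.0, 210.0, 160.0**2, 210.0**2, 160.0 * 210.0])
print("predicted strength at 160 C / 210 min:", round(x_new @ coef, 2), "MPa")
```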
Abstract:
Oil exploration faces increasingly adverse circumstances every day, regarding both well depth and oil fluidity. Recently discovered reservoirs do not have their own energy to produce, or the conventional methods are not efficient enough to give these reservoirs a long productive life, due to changes in physico-chemical properties, such as viscosity, which make the displacement of the oil through the reservoir pores up to the surface increasingly complex. The present work aims to study the preparation, characterization and use of nanoemulsions obtained from microemulsified systems, with and without the presence of polymer. These systems were applied as a chemical oil-recovery method, with the purpose of obtaining greater efficiency in the volume of displaced oil. The interest in this type of system is due to its low surface tension, small droplet size and, mainly, the low percentage of active matter present in its composition. The tests carried out to characterize these systems were: physical aspect, droplet size measurements, polydispersity index, surface tension, pH and conductivity. Rheology and adsorption tests of the systems were performed with the objective of evaluating their influence on oil recovery. The recovery tests were carried out in equipment that simulates the conditions of an oil reservoir, using Botucatu sandstone rock plugs. These plugs were saturated with brine (2% KCl) and with oil from the Ubarana field of the Potiguar Basin. After these steps, conventional recovery was performed using the brine and, finally, the nanoemulsion was injected as an enhanced recovery method. The systems obtained ranged from 0% to 0.4% polymer. The droplet size tests yielded values ranging from 9.22 to 14.8 nm, showing that the nanoemulsions are within the size range inherent to this type of system. In the surface tension tests, the values were in the range of 33.6 to 39.7 dyn/cm, similar to those of microemulsions and well below the surface tension of water. The results obtained for pH and conductivity remained stable throughout the storage time, indicating the stability of the nanoemulsions studied. The enhanced recovery test using the nanoemulsion with a low percentage of active matter resulted in a displacement efficiency of 39.4%. This value, however, increased with the increase of the polymer percentage in the nanoemulsion. The oil displacement efficiency results are directly related to the increase in the viscosity of the nanoemulsions. Nanoemulsion V (0.4% polymer) is the most viscous system among those analyzed and obtained the highest percentage of displaced oil (76.7%), resulting in the highest total displacement efficiency (90%). This study showed the potential of nanoemulsified systems, with and without polymers, in enhanced oil recovery. They present some advantages over other enhanced recovery methods, such as the low percentage of active matter, the low adsorption onto the rock of the polymer dissolved in the nanoemulsion, and high recovery efficiency.
Abstract:
The National Program of Integration of Professional Education with Basic Education for Youngsters and Adults (PROEJA), in the form of technical professional education at the Ensino Médio (secondary) level, has opened a new chapter in the history of education in Brazil, making possible the integration of basic education and professional education. This new form of education, which is still in its early implementation, presents a series of challenges to be overcome. Regarding the teaching of Chemistry specifically, didactic material that matches PROEJA's specific needs is practically nonexistent. Thus, this work has the purpose of developing didactic material for the teaching of Chemistry in the Professional and Technological Education of Youngsters and Adults, in the Electronics, Technical Electronics, and Computer Maintenance and Support courses at the Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte. This material works on the chemical concepts of oxidation-reduction reactions through a thematic approach, following Freire's conceptions for the teaching of youngsters and adults.