856 results for Experimental algorithms
Abstract:
A challenge that remains in robotics is how to make a robot react in real time to visual stimuli. The traditional computer vision algorithms used for this are still very expensive, taking too long on common computer processors; even very simple operations such as image filtering or mathematical morphology can be too slow. To cut down processing time, researchers have implemented image processing algorithms in highly parallel hardware devices, with good results. Using hardware-implemented image processing techniques and a platform-oriented system built around the Nios II Processor, we propose an approach that combines hardware processing with event-based programming to simplify vision-based systems while accelerating parts of the algorithms used.
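As a software reference point for the "very simple" morphology operations mentioned above, a minimal binary erosion with a 3×3 structuring element can be sketched in pure Python. This is only an illustration of the operation's cost and shape; the function name and flat-list image layout are assumptions, and the thesis itself moves such kernels into parallel hardware.

```python
def erode(img, h, w):
    """Binary erosion with a 3x3 square structuring element.
    img: flat list of 0/1 pixels, row-major, of size h*w.
    A pixel survives only if its entire 3x3 neighbourhood is 1;
    border pixels are set to 0."""
    out = [0] * (h * w)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[(y + dy) * w + (x + dx)]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y * w + x] = 1
    return out
```

Even this toy loop touches every pixel nine times, which is exactly the kind of regular, data-parallel work that maps well onto FPGA fabric.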
Abstract:
The modeling of industrial processes has aided production and cost minimization by enabling the prediction of future system behavior, process supervision, and controller design. Given these benefits, the first objective of this dissertation is to present a methodology for identifying nonlinear models with NARX structure, based on the implementation of combined algorithms for structure detection and parameter estimation. Initially, the importance of system identification in the optimization of industrial processes is highlighted, specifically the choice of a model that adequately represents the system dynamics. Next, a brief review of the steps that make up system identification is presented. Then, the fundamental methods for model structure detection (Modified Gram-Schmidt) and parameter estimation (Least Squares and Extended Least Squares) are described. Using the implemented algorithms, the work also carries out the identification of two distinct industrial processes, represented by a didactic level plant, which allows level and flow control, and a simulated primary oil processing plant, which represents the primary oil treatment carried out on offshore platforms. The dissertation closes with an evaluation of the performance of the obtained models when compared with the real system. From this evaluation, it is possible to observe whether the identified models can represent the static and dynamic characteristics of the systems presented in this dissertation.
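The parameter-estimation step described above (Least Squares over a NARX regressor matrix) can be sketched for a toy first-order model. Everything here is illustrative: the model y(k) = a·y(k-1) + b·u(k-1) and the function name are assumptions, not the plant models or algorithms of the dissertation, which also include structure detection and extended least squares.

```python
def estimate_narx(y, u):
    """Least-squares estimate of theta = [a, b] for the toy NARX model
    y(k) = a*y(k-1) + b*u(k-1), solving the 2x2 normal equations
    (Phi^T Phi) theta = Phi^T Y directly."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(1, len(y)):
        p1, p2 = y[k - 1], u[k - 1]       # the two regressors at step k
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        r1 += p1 * y[k]; r2 += p2 * y[k]
    det = s11 * s22 - s12 * s12           # assumes persistently exciting input
    a = (s22 * r1 - s12 * r2) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b
```

With noise-free data generated by the same model, the estimates recover the true coefficients exactly; with noisy data one would move to the Extended Least Squares variant mentioned in the abstract.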
Abstract:
Navigation based on visual feedback for robots working in a closed environment can be obtained by mounting a camera on each robot (local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution: one camera, or a small number of cameras covering the whole workspace, can be shared by the entire team of robots, saving the cost of many cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber and a computer (PC) for image processing. The PC is responsible for the team's motion control based on the visual feedback, sending commands to the robots through a radio link. So that the system can unequivocally recognize each robot, each one carries a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of the robots. A major problem was labeling the color of each colored point of the image, in real time, under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method guarantees that similar pixels are clustered around a single color class. The experimental results obtained show that the position and orientation of each robot can be determined with a precision of a few millimeters. The position and orientation updates were attained in real time, analyzing 30 frames per second.
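The color-calibration idea above rests on plain K-means: cluster the RGB pixels so that each color label (for instance, the two circles on a robot's top) is represented by one centroid, recomputed as lighting drifts. A minimal sketch, assuming a deterministic first-k initialization and squared Euclidean distance in RGB; the real system's initialization and color space may differ.

```python
def kmeans(points, k, iters=20):
    """Plain K-means over RGB tuples. Initialisation uses the first k
    points for determinism (a simplifying assumption)."""
    centers = [p for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each pixel to its nearest centre (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # recompute each centre as the mean of its group; keep it if empty
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers
```

At runtime, classifying a pixel is then a nearest-centroid lookup, which is cheap enough for the 30 frames-per-second budget quoted in the abstract.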
Abstract:
The great importance of selecting the profile of an aircraft wing stems from its relevance to the aircraft's performance, influencing displacement costs (fuel consumption and flight level, for example) and flight safety conditions (response in critical conditions). The aim of this study was to examine the aerodynamic parameters that affect several types of wing profile, based on wind tunnel testing, in order to determine the aerodynamic efficiency of each one. Four wing profiles were compared, chosen from considerations about the characteristics of the model aircraft. The first has a common configuration, widely used in laboratory classes as a sort of aerodynamic standard: a symmetrical profile. The second is of the concave-convex type; the third is also concave-convex, but with a different implementation from the second; and the fourth is a plano-convex profile. Thus three different profile categories are covered, highlighting the main points of relevance to their use. The experiment used an open-circuit wind tunnel, in which the pressure distribution across the surface of each profile was analyzed. With the drag polar of each wing profile in hand, the theoretical basis of this work makes it possible to relate the aerodynamic characteristics to the expected performance of the experimental aircraft, thus creating a selection model with assured aerodynamic performance. It is believed that the approach used in this dissertation validates the results, yielding a reliable experimental alternative for aerodynamic testing of planform models.
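Relating a measured drag polar to expected performance usually goes through the aerodynamic efficiency L/D. A minimal sketch under the standard parabolic-polar assumption CD = CD0 + CL²/(π·e·AR); the coefficient values used in the test are hypothetical, not data from this study.

```python
import math

def drag_polar_cd(cl, cd0, e, ar):
    """Parabolic drag polar: CD = CD0 + CL^2 / (pi * e * AR),
    i.e. parasite drag plus lift-induced drag."""
    return cd0 + cl ** 2 / (math.pi * e * ar)

def best_efficiency(cd0, e, ar):
    """CL that maximises L/D for a parabolic polar, and the resulting L/D.
    At the optimum the induced drag equals the parasite drag, giving
    CL* = sqrt(CD0 * pi * e * AR) and (L/D)max = CL* / (2 * CD0)."""
    cl_star = math.sqrt(cd0 * math.pi * e * ar)
    return cl_star, cl_star / (2 * cd0)
```

Comparing profiles then reduces to comparing their fitted (CD0, e) pairs and the resulting (L/D)max.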
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Nonionic surfactants in aqueous solution have the property of separating into two phases: a dilute phase, with a low surfactant concentration, and a surfactant-rich phase called the coacervate. The application of this kind of surfactant in extraction processes from aqueous solutions has been increasing over time, which calls for knowledge of the thermodynamic properties of these surfactants. In this study, cloud points were determined for polyethoxylated surfactants of the nonylphenol polyethoxylate family (ethoxylation degrees 9.5, 10, 11, 12 and 13), the octylphenol polyethoxylate family (10 and 11) and polyethoxylated lauryl alcohol (6, 7, 8 and 9), varying the degree of ethoxylation. The cloud point was determined by observing the turbidity of the solution under heating at a ramp of 0.1 °C/minute; for the pressure studies, a high-pressure cell (maximum 300 bar) was used. The experimental data of the studied surfactants were fitted with the Flory-Huggins, UNIQUAC and NRTL models to describe the cloud point curves, and the influence of NaCl concentration and system pressure on the cloud point was studied. The latter parameter is important for oil recovery processes, in which surfactant solutions are used at high pressures, while the NaCl effect, by bringing the cloud point closer to room temperature, makes it possible to run processes without temperature control. The numerical method used to fit the parameters was Levenberg-Marquardt. For the Flory-Huggins model, the fitted parameters were the enthalpy of mixing, the entropy of mixing and the aggregation number. For the UNIQUAC and NRTL models, the interaction parameters aij were fitted using a quadratic dependence on temperature. The obtained parameters fitted the experimental data well (RMSD < 0.3%).
The results showed that both the ethoxylation degree and the pressure increase the cloud point, whereas NaCl decreases it.
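The Levenberg-Marquardt fitting mentioned above can be sketched in its simplest form: one scalar parameter, damped Gauss-Newton steps, and a damping factor that grows when a step fails. This is a generic illustration of the method, not the multi-parameter thermodynamic fit of the study; the model fitted in the test (a decaying exponential) is purely an assumption for demonstration.

```python
def levenberg_marquardt(f, df, xs, ys, b0, iters=50, lam=1e-3):
    """Minimal scalar Levenberg-Marquardt: fit one parameter b of the
    model f(x, b) to data (xs, ys). df is the derivative of f w.r.t. b."""
    b = b0

    def sse(b):
        return sum((y - f(x, b)) ** 2 for x, y in zip(xs, ys))

    for _ in range(iters):
        # Gauss-Newton ingredients: J^T J and J^T r for residuals r = y - f
        jtj = sum(df(x, b) ** 2 for x in xs)
        jtr = sum(df(x, b) * (y - f(x, b)) for x, y in zip(xs, ys))
        step = jtr / (jtj + lam * jtj)   # damping blends GN with gradient descent
        if sse(b + step) < sse(b):
            b += step
            lam *= 0.5   # good step: trust the quadratic model more
        else:
            lam *= 2.0   # bad step: increase damping and retry
    return b
```

In the real fit, b would be a vector (e.g. the aij coefficients with their quadratic temperature dependence) and the scalar divisions become a linear solve.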
Abstract:
Recent research has revealed that the majority of Biology teachers believe that experimental activities, as a didactical tool, would be the solution for improving the Biology teaching-learning process. There are, however, studies which signal the lack of efficiency of such practical lessons as far as building scientific knowledge is concerned. It is also said that, despite the enthusiasm on the teachers' part, such classes are rarely taught in high school. Several studies point to pedagogical difficulties, as well as to the absence of the minimal infrastructure needed in laboratories, as causes of the low frequency of experimental activities. Poor teacher preparation in planning and conducting classes, the large number of students per class, and the lack of financial incentive for teachers are other reasons to be taken into account, among others that include difficulties of an epistemological nature, that is, an unfavorable view of experimental activities on the teacher's part. Our study aimed to clarify whether this scenario is widespread in high schools throughout the state of Rio Grande do Norte, Brazil. A sample of twenty teaching institutions was used, divided into two groups: the first with five schools of the IFRN (Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte), two in Natal and three in the countryside; the second with fifteen state schools belonging to the Natal metropolitan area. The objectives of the research were to characterize the schools with respect to laboratory facilities; to identify difficulties pointed out by teachers when carrying out experimental classes; and to become familiar with the teachers' conceptions regarding Biology experimental classes.
To this end, a questionnaire with multiple-choice and essay questions was used as the data collection instrument, together with a semi-structured interview recorded on a voice recorder. The data analysis and the in loco observation allowed the conclusion that the federal schools do present better facilities for experimental activities than the state schools. Another aspect noted is that teachers at federal schools have more time available for planning experiments; they are also better paid and have access to career development, which leads to better salaries. All these advantages, however, do not translate into a significantly higher frequency of experiments when compared with state school teachers. Teachers at both federal and state schools pointed to infrastructure problems, such as the availability of reagents, equipment and consumables, as the main obstacle to carrying out experiments in Biology classes. This leads us to conclude that there may be other problems not covered by the questionnaire, such as poor ability to plan and execute experimental activities. As for conceptions of experimental activities, most interviewees showed an empiricist-inductivist view of science, possibly inherited from their academic training, and this view was reflected in the way they plan and conduct experiments with students.
Abstract:
Practical experimental work is a key objective in science education and in the context of Biology teaching. Considering the influence of textbooks on teachers' professional activity, this study set out to characterize how this kind of material orients practical experimental work and the use of measurement in that activity. To this end, the eight Biology textbook collections approved in the 2012 PNLD were analyzed. The research is descriptive and interpretative in nature, and content analysis (BARDIN, 2002) was chosen as the data collection method. The analysis of the practical experimental activities sought to characterize their epistemological nature, the implicit conception of science, their typology, the conceptual Biology content involved, how many of them require measurement, and how measurement is used in this context according to the general measuring procedure proposed by Núñez and Silva (2008). The practical experimental activities suggested a conceptual teaching epistemology, mostly rationalist in character and dominated by activities of the practical-exercise type; the need to measure is present in only a minority of them, and the general measuring procedure is used partially and implicitly in most of the practical experimental activities. We therefore propose, as a product of this study, a guide for analyzing the practical experimental activities proposed in Biology textbooks.
Abstract:
Several soil samples from Brazil were plated on agar, and various strains of actinomycetes producing antifungal antibiotics were isolated. Media were developed to elicit the biosynthesis of the antibiotics, together with methods for the rapid determination of their yield. In all, 41 strains of aerobic actinomycetes producing antifungal metabolites were isolated. Of these, 11 (26.8%) produced tetraene macrolides, 13 (31.7%) pentaene macrolides, 1 (2.4%) an oxopentaene macrolide, 1 (2.4%) a hexaene macrolide and 6 (14.6%) heptaene macrolides. The antifungal antibiotics produced by the remaining 9 active strains (21.9%) were not polyenes. The polyenes most used in the clinic today are of the tetraene (nystatin) and heptaene (amphotericin B) types. A soy-milk-based medium extraordinarily favored the elicitation of polyene biosynthesis by some strains, while for others there was no effect and for yet others it was detrimental. The yields obtained reached about 6000 U of polyene antibiotics per mL.
Abstract:
In this work, we study and compare two percolation algorithms, one elaborated by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis together with an experimental comparison. The work is divided into three chapters. The first covers the definitions and theorems needed for a more formal mathematical treatment of percolation. The second presents the techniques used to estimate algorithm complexity: worst case, best case and average case. We use the worst-case technique to estimate the complexity of both algorithms so that they can be compared. The last chapter discusses several characteristics of each algorithm and, through the theoretical complexity estimates and the comparison between the execution times of the most important part of each one, compares these two important algorithms for simulating percolation.
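The efficiency of the Newman-Ziff approach comes from sweeping all occupation numbers in a single run: bonds are added one at a time and clusters are tracked with a union-find structure, so the whole sweep costs nearly linear time. A minimal sketch of that core, assuming bond percolation and tracking only the largest cluster size (function and variable names are illustrative):

```python
def newman_ziff_bond(n_sites, bonds):
    """Add bonds one at a time and record the largest cluster size after
    each addition, using weighted union-find with path halving."""
    parent = list(range(n_sites))
    size = [1] * n_sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    largest, big = [], 1
    for a, b in bonds:
        ra, rb = find(a), find(b)
        if ra != rb:
            if size[ra] < size[rb]:        # union by size
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
            big = max(big, size[ra])
        largest.append(big)
    return largest
```

A naive algorithm that re-labels clusters from scratch at each occupation fraction pays the full labeling cost repeatedly, which is the asymptotic gap the worst-case analysis in the text quantifies.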
Abstract:
Universidade Federal do Rio Grande do Norte
Abstract:
Clustering data is a very important task in data mining, image processing and pattern recognition. One of the most popular clustering algorithms is Fuzzy C-Means (FCM). This thesis proposes a new way of calculating the cluster centers in the FCM procedure, called ckMeans, which is also applied to some variants of FCM, in particular those that use other distances. The goal of this change is to reduce the number of iterations and the processing time of these algorithms without affecting the quality of the partition, or even to improve the number of correct classifications in some cases. We also developed an algorithm based on ckMeans to manipulate interval data considering interval membership degrees. This algorithm allows the representation of the data without converting interval data into punctual data, as happens with other extensions of FCM that deal with interval data. To validate the proposed methodologies, a comparison was made between the clusterings produced by the ckMeans, K-Means and FCM algorithms (since the center calculation proposed here is similar to that of K-Means), considering three different distances and several well-known databases. The results of Interval ckMeans were also compared with those of other clustering algorithms when applied to an interval database containing the minimum and maximum monthly temperatures for a given year in 37 cities distributed across the continents.
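For reference, one iteration of the standard FCM baseline that ckMeans modifies looks as follows (1-D points and Euclidean distance, for brevity): memberships are computed from the current centers, then the centers are recomputed as membership-weighted means. This sketch shows the classical center update, not the ckMeans variant proposed in the thesis.

```python
def fcm_step(points, centers, m=2.0):
    """One iteration of standard Fuzzy C-Means on 1-D data.
    m > 1 is the fuzzifier; m = 2 is the common default."""
    c = len(centers)
    u = []
    for x in points:
        d = [abs(x - v) for v in centers]
        if any(di == 0 for di in d):          # point coincides with a centre
            u.append([1.0 if di == 0 else 0.0 for di in d])
            continue
        row = []
        for i in range(c):
            # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            denom = sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
            row.append(1.0 / denom)
        u.append(row)
    new_centers = []
    for i in range(c):
        num = sum((u[k][i] ** m) * points[k] for k in range(len(points)))
        den = sum(u[k][i] ** m for k in range(len(points)))
        new_centers.append(num / den)
    return new_centers
```

The thesis's contribution is precisely to replace this center computation with a K-Means-like one, cutting iterations and runtime.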
Abstract:
Although some individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, supply solutions that are considered efficient most of the time, experimental results obtained with large pattern sets, or with sets containing an expressive amount of irrelevant or incomplete features, show a decrease in the precision of these techniques. In other words, such techniques cannot recognize patterns efficiently in complex problems. To improve the performance and efficiency of these ML techniques, the idea arose of making several ML algorithms work jointly, giving rise to the term Multi-Classifier System (MCS). An MCS has as components different ML algorithms, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to outperform its base classifiers, the results obtained by each base classifier must present a certain diversity, that is, a difference between the results obtained by each classifier that composes the system; it makes no sense to have an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is a constant search to improve the results obtained by this type of system. Aiming at this improvement, at more consistent results, and at greater diversity among the classifiers of an MCS, methodologies have recently been investigated that are characterized by the use of weights, or confidence values. These weights can describe the importance a given classifier had when associating each pattern with a given class, and they are used, together with the classifier outputs, during the recognition (use) phase of the MCS.
There are different ways of calculating these weights, which can be divided into two categories: static weights and dynamic weights. The first category is characterized by values that do not change during the classification process, unlike the second category, whose values are modified during the classification process. In this work, an analysis is carried out to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed, in order to verify whether there is some relation between the use of weights in MCSs and different levels of diversity.
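The static-weight combination described above can be sketched very compactly: each base classifier votes for its predicted class with a fixed weight, and the class with the largest total wins. This is a generic illustration of weighted voting, not the specific combination rules evaluated in the work; a dynamic scheme would recompute the weights per input pattern instead of fixing them.

```python
def weighted_vote(predictions, weights):
    """Static-weight combination of base classifiers: sum each
    classifier's weight under the class it predicted and return
    the class with the largest total."""
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)
```

Note how the weights can overturn a plain majority: a single highly weighted classifier can outvote two weak ones, which is exactly the behavior whose benefit the analysis in this work measures.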