995 results for Monitoring and Strategic Learning
Abstract:
In this work, we propose a methodology for teaching robotics in elementary schools, based on Vygotsky's socio-historical theory. This methodology, together with the Lego Mindstorms® kit and an educational software tool (an interface for controlling and programming prototypes), forms an educational robotics system named RoboEduc. For the practical development of this work, we adopted the action-research strategy, carrying out robotics activities with children aged 8 to 10, elementary school students of the Municipal School Ascendino de Almeida. This school is located in Pitimbu, on the periphery of Natal, in the state of Rio Grande do Norte. The activities focused on understanding the construction of robotic prototypes and their programming and control. While constructing prototypes, children develop zones of proximal development (ZPDs), learning spaces that, when well used, allow individuals to construct not only scientific concepts but also abilities and capacities that are important for the social and cultural interaction of each individual and of the group. Through these practical workshops, it was possible to analyse the use of the robot as a mediating element of the teaching-learning process and the contributions that robotics may bring to teaching from the elementary levels onward.
Abstract:
We propose a new paradigm for collective learning in multi-agent systems (MAS) as a solution to the problem in which several agents acting over the same environment must learn, simultaneously, how to perform tasks based on the feedback given by each of the other agents. We introduce the proposed paradigm in the form of a reinforcement learning algorithm, naming it reinforcement learning with influence values. While learning from rewards, each agent evaluates the relation between the current state and/or the action executed at this state (its current belief) together with the reward obtained after all interacting agents perform their actions. The reward is a result of the interference of the others. Each agent considers the opinions of all its colleagues when attempting to change the values of its states and/or actions. The idea is that the system, as a whole, must reach an equilibrium in which all agents are satisfied with the obtained results, meaning that the values of the state/action pairs match the reward obtained by each agent. This dynamic way of setting the values of states and/or actions makes this new reinforcement learning paradigm the first to naturally incorporate the fact that the presence of other agents makes the environment dynamic. As a direct result, we implicitly include the internal states, the actions, and the rewards obtained by all the other agents in the internal state of each agent. This makes our proposal the first complete solution to the conceptual problem that arises when applying reinforcement learning to multi-agent systems, caused by the mismatch between the environment and agent models. Based on the proposed model, we created the IVQ-learning algorithm, which was exhaustively tested in repeated games with two, three, and four agents, as well as in stochastic games requiring cooperation and in games requiring collaboration. This algorithm proves to be a good option for obtaining solutions that guarantee convergence to the optimal Nash equilibrium in cooperative problems. The experiments performed clearly show that the proposed paradigm is theoretically and experimentally superior to the traditional approaches. Moreover, this new paradigm enlarges the set of reinforcement learning applications in MAS: besides the possibility of applying the algorithm to traditional learning problems in MAS, such as task coordination in multi-robot systems, it becomes possible to apply reinforcement learning to problems that are essentially collaborative.
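The abstract does not reproduce the IVQ-learning update rule, so the following is only a minimal illustrative sketch of Q-learning augmented with an influence term, in a stateless two-agent coordination game. The payoff matrix, the learning constants, and the choice of using each colleague's own satisfaction as its "opinion" are assumptions made for illustration, not the thesis's actual algorithm.

```python
import random

# Payoff of a two-action coordination game: the agents are rewarded when
# their joint action matches (a stand-in for the cooperative repeated
# games mentioned in the abstract).
REWARD = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): -1.0, (1, 0): -1.0}

ALPHA, BETA, EPS = 0.1, 0.05, 0.1  # learning rate, influence weight, exploration

def choose(qvals):
    """Epsilon-greedy selection over a stateless Q-table."""
    if random.random() < EPS:
        return random.randrange(len(qvals))
    return max(range(len(qvals)), key=lambda a: qvals[a])

q = [[0.0, 0.0], [0.0, 0.0]]  # one stateless Q-table per agent

for _ in range(5000):
    acts = (choose(q[0]), choose(q[1]))
    r = REWARD[acts]                       # shared reward of the joint action
    for i in (0, 1):
        j = 1 - i
        td = r - q[i][acts[i]]             # agent i's own evaluation
        influence = r - q[j][acts[j]]      # colleague j's "opinion" (assumed form)
        q[i][acts[i]] += ALPHA * td + BETA * influence

print(q)  # both agents should end up preferring the same action
```

Under these assumptions the influence term pulls each agent's value estimates toward outcomes that also satisfy its colleague, which is the equilibrium-seeking behavior the paradigm describes.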
Abstract:
The area of hospital automation has been the subject of much research, addressing relevant issues that can be automated, such as: management and control (electronic medical records, appointment scheduling, hospitalization, among others); communication (tracking of patients, staff, and materials); development of medical, hospital, and laboratory equipment; monitoring (of patients, staff, and materials); and aid to medical diagnosis (according to each specialty). This thesis presents an architecture for a patient monitoring and alert system. This architecture is based on intelligent systems techniques and is applied in hospital automation, specifically in the Intensive Care Unit (ICU), for patient monitoring in the hospital environment. The main goal of this architecture is to transform multiparameter monitor data into useful information, through the knowledge of specialists and the normal ranges of vital signs, using fuzzy logic to extract information about the clinical condition of ICU patients and provide a pre-diagnosis. Finally, alerts are dispatched to medical professionals whenever any abnormality is found during monitoring. After the validation of the architecture, the fuzzy logic inferences were applied to the training and validation of an artificial neural network for classifying the cases validated a priori with the fuzzy system.
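As a minimal sketch of the kind of fuzzy inference such an architecture performs, the snippet below fuzzifies a single vital sign (heart rate) with triangular membership functions and raises an alert when the dominant state is abnormal. The ranges, labels, and alert threshold are illustrative assumptions, not the specialists' actual knowledge base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_heart_rate(hr):
    """Fuzzify an adult heart rate (bpm) into illustrative clinical states."""
    memberships = {
        "bradycardia": tri(hr, 20, 40, 60),
        "normal":      tri(hr, 50, 75, 100),
        "tachycardia": tri(hr, 90, 130, 220),
    }
    state = max(memberships, key=memberships.get)
    # Dispatch an alert when the dominant state is abnormal, as the
    # architecture does for the medical team.
    alert = state != "normal" and memberships[state] > 0.5
    return state, memberships[state], alert

print(classify_heart_rate(128))  # ('tachycardia', 0.95, True)
```

In the full architecture, several such fuzzified signs would be combined by inference rules before the pre-diagnosis and alert are produced.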
Abstract:
Most current monitoring solutions for onshore Oil and Gas environments are based on wireless technologies. However, these solutions rely on out-of-date technological configurations, mainly because they use analog radios and inefficient communication topologies. Solutions based on digital radios, on the other hand, can be more efficient with respect to energy consumption, security, and fault tolerance. Thus, this work evaluates whether Wireless Sensor Networks, a communication technology based on digital radios, are adequate for monitoring onshore Oil and Gas wells. The percentage of successfully transmitted packets, energy consumption, communication delay, and routing techniques applied to a mesh topology are used as metrics to validate the proposal under different routing techniques, using the NS-2 network simulation tool.
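To illustrate how the packet-delivery metric can be extracted from an NS-2 run, here is a small sketch of a trace parser. It assumes the classic old-format wireless trace layout (event flag first, node as `_n_`, trace level such as AGT as the fourth field); the trace file name and packet type are hypothetical.

```python
def delivery_ratio(trace_path, pkt_type="cbr"):
    """Packet delivery ratio from an old-format NS-2 wireless trace.

    Assumes lines like: 's 10.0 _0_ AGT --- 3 cbr 512 [...]'
    (event, time, node, layer, flags, packet id, type, size).
    """
    sent, received = set(), set()
    with open(trace_path) as trace:
        for line in trace:
            f = line.split()
            if len(f) < 7 or f[3] != "AGT" or f[6] != pkt_type:
                continue  # keep only application-level events of this type
            if f[0] == "s":
                sent.add(f[5])
            elif f[0] == "r":
                received.add(f[5])
    return len(received & sent) / len(sent) if sent else 0.0

# print(delivery_ratio("mesh_aodv.tr"))  # hypothetical trace file name
```

Energy consumption and delay would be computed analogously from the energy and timestamp fields of the same trace.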
Abstract:
The use of Geographic Information Systems (GIS) has become very important in fields where detailed and precise study of earth surface features is required. Environmental protection applications are one such example, requiring GIS tools for analysis and decision-making by managers and by the communities involved in protected areas. In this specific field, a remaining challenge is to build a GIS that can be dynamically fed with data, allowing researchers and other agents to retrieve current, up-to-date information. In some cases, data are acquired in several ways and come from different sources. To address this problem, tools were implemented that include a model for spatial data treatment on the Web. The research issues involved start with the feeding and processing of environmental control data collected in loco, such as biotic and geological variables, and end with the presentation of all information on the Web. For this dynamic processing, tools were developed that make MapServer more flexible and dynamic, allowing data to be uploaded by the users themselves. Furthermore, a module that uses interpolation for spatial data analysis was also developed. A complex application that validated this research is feeding the system with data coming from coral reef regions located in the northeast of Brazil. The system was implemented using the interactivity provided by the AJAX model and resulted in a substantial contribution to efficient information access, being an essential mechanism for controlling events in environmental monitoring.
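The abstract does not specify which interpolation method the module applies; as one plausible choice, the sketch below implements inverse distance weighting (IDW) over point measurements of the kind collected in loco. The sample coordinates and values are made up for illustration.

```python
import math

def idw(samples, qx, qy, power=2.0):
    """Inverse-distance-weighted estimate at query point (qx, qy).

    samples: list of (x, y, value) field measurements, e.g. a biotic
    variable collected in loco at known coordinates.
    """
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 == 0.0:
            return v  # query point coincides with a sample
        w = 1.0 / math.pow(d2, power / 2)
        num += w * v
        den += w
    return num / den

reef_temps = [(0.0, 0.0, 27.1), (1.0, 0.0, 27.9), (0.0, 1.0, 26.8)]
print(idw(reef_temps, 0.5, 0.5))  # interpolated value inside the triangle
```

A server-side module of this kind can rasterize such estimates over a grid and hand the result to MapServer for display.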
Abstract:
The development of wireless sensor networks for control and monitoring functions has created a vibrant research scenario, covering topics from communication aspects to issues related to energy efficiency. When source sensors are endowed with cameras for visual monitoring, a new scope of challenges arises, as transmission and monitoring requirements change considerably. In particular, visual sensors collect data following a directional sensing model, altering the meaning of concepts such as vicinity and redundancy but allowing source nodes to be differentiated by their sensing relevance to the application. In this context, we propose the combined use of two differentiation strategies as a novel QoS parameter, exploring the sensing relevance of source nodes and DWT image coding. This innovative approach supports a new scope of optimizations to improve the performance of visual sensor networks at the cost of a small reduction in the overall monitoring quality of the application. Besides defining a new concept of relevance and proposing mechanisms to support its practical exploitation, we propose five different optimizations in the way images are transmitted in wireless visual sensor networks, aiming at energy saving, low-delay transmission, and error recovery. Put together, the proposed differentiation strategies and the related optimizations open a relevant research trend, in which the application's monitoring requirements are used to guide a more efficient operation of sensor networks.
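As a minimal sketch of the relevance-plus-DWT idea, the snippet below (using PyWavelets) selects which subbands of a one-level 2D DWT a source node transmits according to its sensing relevance: low-relevance nodes send only the approximation, high-relevance nodes also send detail subbands. The relevance thresholds and the subband policy are illustrative assumptions, not the thesis's five optimizations.

```python
import numpy as np
import pywt

def subbands_to_send(image, relevance):
    """Select DWT subbands according to a node's sensing relevance.

    relevance in [0, 1]: low-relevance nodes send only the LL
    approximation; higher relevance adds the detail subbands.
    """
    ll, (lh, hl, hh) = pywt.dwt2(image, "haar")  # one-level 2D Haar DWT
    packets = {"LL": ll}                          # always keep the approximation
    if relevance >= 0.5:
        packets.update({"LH": lh, "HL": hl})      # medium relevance: edge detail
    if relevance >= 0.8:
        packets["HH"] = hh                        # high relevance: full detail
    return packets

frame = np.random.rand(64, 64)                    # stand-in for a captured frame
print(sorted(subbands_to_send(frame, 0.6)))       # ['HL', 'LH', 'LL']
```

Dropping detail subbands for less relevant sources saves transmission energy at a controlled cost in reconstruction quality, which is the trade-off the abstract describes.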
Abstract:
This work surveys the problems associated with the influence of observability and radial visualization on the design of monitoring systems for networks of great magnitude and complexity, and proposes solutions to some of these problems. Using Complex Network Theory, two questions are addressed: (i) the location and number of nodes required to guarantee data acquisition capable of effectively representing the state of the network, and (ii) the design of a model for visualizing network information capable of increasing the ability to infer and understand its properties. The thesis establishes theoretical bounds for these questions and presents a study of the complexity of effective, efficient, and scalable network monitoring.
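The thesis's placement criteria are not given in the abstract; the sketch below, using networkx, illustrates the general shape of question (i): ranking nodes by a centrality measure and keeping the top few as monitoring points. The centrality measure, the scale-free test topology, and the budget are all assumptions for illustration.

```python
import networkx as nx

def pick_monitors(graph, budget):
    """Rank nodes by betweenness centrality and keep the top `budget`
    as monitoring points (one plausible placement heuristic)."""
    rank = nx.betweenness_centrality(graph)
    return sorted(rank, key=rank.get, reverse=True)[:budget]

g = nx.barabasi_albert_graph(100, 2, seed=42)   # scale-free test topology
monitors = pick_monitors(g, budget=5)
covered = set(monitors) | {v for m in monitors for v in g.neighbors(m)}
print(monitors, f"{len(covered)}/{g.number_of_nodes()} nodes within one hop")
```

In scale-free topologies a small, well-chosen monitor set can observe a large fraction of the network, which is why the location and number of acquisition nodes matter as much as their count.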
Abstract:
The monitoring of patients in hospitals is usually done either manually or semiautomatically, with the members of the healthcare team constantly visiting the patients to ascertain their health condition. The adoption of this procedure, however, compromises the quality of the monitoring, since the shortage of physical and human resources in hospitals tends to overwhelm the healthcare team, preventing its members from visiting patients with adequate frequency. Given this, many existing works in the literature propose alternatives aimed at improving this monitoring through the use of wireless networks. In these works, the network is intended only for the traffic generated by medical sensors, with no possibility of being allocated for the transmission of data from applications running on the user stations present in the hospital. In hospital automation environments, however, this is a drawback, considering that the data generated by such applications can be directly related to the patient monitoring being conducted. Thus, this thesis defines Wi-Bio as a communication protocol aimed at the establishment of IEEE 802.11 networks for patient monitoring, capable of enabling the harmonious coexistence of the traffic generated by medical sensors and by user stations. The formal specification and verification of Wi-Bio were carried out through the design and analysis of Petri net models, and its validation was performed through simulations with the Network Simulator 2 (NS2) tool. The NS2 simulations were designed to portray a real patient monitoring environment corresponding to a floor of the nursing wards sector of the University Hospital Onofre Lopes (HUOL), located in Natal, Rio Grande do Norte. Moreover, in order to verify the feasibility of Wi-Bio with respect to the wireless network standards prevailing in the market, the test scenario was also simulated under a perspective in which the network elements used the HCCA access mechanism described in the IEEE 802.11e amendment. The results confirmed the validity of the designed Petri nets and showed that Wi-Bio, in addition to outperforming HCCA on most of the analyzed items, was also able to promote an efficient integration between the data generated by medical sensors and by user applications on the same wireless network.
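The Wi-Bio Petri net models themselves are not reproduced in the abstract; as a generic sketch of the kind of state-space check used to verify such models, the snippet below exhaustively explores the reachable markings of a tiny place/transition net. The toy net (a station acquiring and releasing a shared medium) is an assumption standing in for the protocol's actual models.

```python
from collections import deque

def reachable_markings(initial, transitions):
    """Breadth-first exploration of a place/transition net.

    initial: tuple of token counts per place.
    transitions: list of (consume, produce) per-place vectors.
    """
    seen, frontier = {initial}, deque([initial])
    while frontier:
        marking = frontier.popleft()
        for consume, produce in transitions:
            if all(m >= c for m, c in zip(marking, consume)):  # enabled?
                nxt = tuple(m - c + p
                            for m, c, p in zip(marking, consume, produce))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Toy net. Places: (idle, medium_free, transmitting)
trans = [((1, 1, 0), (0, 0, 1)),   # acquire: idle + free -> transmitting
         ((0, 0, 1), (1, 1, 0))]   # release: transmitting -> idle + free
print(reachable_markings((1, 1, 0), trans))  # {(1, 1, 0), (0, 0, 1)}
```

Properties such as deadlock freedom or boundedness can then be checked directly over the enumerated marking set, which is the essence of Petri-net-based formal verification.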
Abstract:
Hospital automation is an area in constant growth. The emergence of new technologies and hardware is making processes more efficient. Nevertheless, some hospital processes are still performed manually, such as patient monitoring, which is considered critical because it involves human lives. One of the factors that should be taken into account during monitoring is the agility in detecting any abnormality in the patients' vital signs, as well as in warning the medical team involved about the anomaly. Thus, this master's thesis aims to develop an architecture to automate this process of monitoring and of reporting possible alerts to a professional, so that emergency care can be provided effectively. Mobile computing was used to improve communication, distributing messages between a central station located in the hospital and the mobile devices carried by the staff on duty.
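A minimal sketch of the central-to-mobile alert flow described here: a producer (the central station) enqueues an alert whenever a vital sign leaves its normal range, and a consumer (the on-duty professional's device) receives it. The thresholds and the in-process queue are illustrative assumptions standing in for the actual vital-sign sources and messaging infrastructure.

```python
import queue
import threading

alert_queue = queue.Queue()  # stands in for the hospital's message channel

NORMAL_RANGES = {"heart_rate": (60, 100), "spo2": (95, 100)}  # illustrative

def monitor(patient_id, sign, value):
    """Central side: enqueue an alert whenever a vital sign leaves its range."""
    low, high = NORMAL_RANGES[sign]
    if not low <= value <= high:
        alert_queue.put(f"ALERT patient={patient_id} {sign}={value}")

def mobile_device():
    """Mobile side: the on-duty professional's device consumes alerts."""
    while True:
        msg = alert_queue.get()
        if msg is None:
            break  # shutdown sentinel
        print("received on mobile:", msg)

t = threading.Thread(target=mobile_device)
t.start()
monitor("bed-07", "heart_rate", 134)  # abnormal reading triggers an alert
alert_queue.put(None)
t.join()
```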
Abstract:
One of the most important goals of bioinformatics is the ability to identify genes in uncharacterized DNA sequences in worldwide databases. Gene expression in prokaryotes starts when the RNA-polymerase enzyme interacts with DNA regions called promoters, where the main regulatory elements of the transcription process are located. Despite the improvement of in vitro techniques for molecular biology analysis, characterizing and identifying a great number of promoters in a genome is a complex task. Moreover, the main drawback is the absence of a large set of promoters from which to identify conserved patterns among species. Hence, an in silico method to predict promoters in any species is a challenge. Improved promoter prediction methods can be one step towards developing more reliable ab initio gene prediction methods. In this work, we present an empirical comparison of Machine Learning (ML) techniques, such as Naïve Bayes, Decision Trees, Support Vector Machines, Neural Networks, Voted Perceptron, PART, k-NN, and ensemble approaches (Bagging and Boosting), on the task of predicting promoters in Bacillus subtilis. To do so, we first built two data sets of promoter and non-promoter sequences: one for B. subtilis and a hybrid one. To evaluate the ML methods, a cross-validation procedure was applied. Good results were obtained with ML methods such as SVM and Naïve Bayes on the B. subtilis data set; however, we did not reach good results on the hybrid database.
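As a minimal sketch of this experimental pipeline, the snippet below featurizes DNA sequences as k-mer counts and cross-validates two of the classifiers named above with scikit-learn. The toy random sequences and labels stand in for the promoter/non-promoter data sets; the trinucleotide encoding is an assumption, since the abstract does not state how sequences were encoded.

```python
from itertools import product
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]  # 64 trinucleotides

def kmer_counts(seq):
    """Trinucleotide count vector, a simple sequence-region encoding."""
    return [sum(seq[i:i + 3] == k for i in range(len(seq) - 2)) for k in KMERS]

# Toy stand-ins for promoter / non-promoter sequences (real data would be
# upstream regions from B. subtilis).
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), 50)) for _ in range(40)]
X = np.array([kmer_counts(s) for s in seqs])
y = np.array([0] * 20 + [1] * 20)

for clf in (SVC(kernel="rbf"), GaussianNB()):
    scores = cross_val_score(clf, X, y, cv=5)  # stratified 5-fold CV
    print(type(clf).__name__, scores.mean())
```

On the random toy data the scores hover around chance; with real promoter sequences, the same loop reproduces the comparison described in the abstract.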
Abstract:
Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation. However, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of known protein sequences and the number of experimentally determined three-dimensional structures, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machines, and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking, and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, techniques for class balancing (Random Undersampling, Tomek Links, CNN, NCL, and OSS) are used to minimize this problem. To evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, pairwise, by a hypothesis test, to evaluate whether there is a statistically significant difference between them. Regarding the results obtained with the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, they did improve the classification error for the minority class. In this context, the NCL technique proved the most appropriate.
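As a minimal sketch of the class-balancing step, the snippet below applies two of the methods named above (Tomek Links and NCL), as provided by the imbalanced-learn library, to synthetic data. The synthetic set is an assumption standing in for the protein structural-class data, with the minority class playing the role of a rare structural class.

```python
from collections import Counter
from imblearn.under_sampling import NeighbourhoodCleaningRule, TomekLinks
from sklearn.datasets import make_classification

# Synthetic imbalanced data standing in for the protein data set.
X, y = make_classification(n_samples=600, n_classes=2, weights=[0.9, 0.1],
                           n_informative=5, random_state=0)
print("original:", Counter(y))

for sampler in (TomekLinks(), NeighbourhoodCleaningRule()):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))
```

Both methods remove majority-class examples near the class boundary rather than forcing exact balance, which is consistent with the abstract's finding that they help the minority class more than the global error rate.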
Abstract:
In multi-robot systems, both the control architecture and the work strategy represent challenges for researchers. It is important to have a robust architecture that can easily adapt to requirement changes, and a work strategy that allows robots to complete tasks efficiently, considering that robots interact directly in environments with humans. In this context, this work explores two approaches to robot soccer team coordination for the development of cooperative tasks, both based on a combination of imitation learning and reinforcement learning. In the first approach, we developed a control architecture; a fuzzy inference engine for recognizing situations in robot soccer games; software for narrating robot soccer games based on the inference engine; and an implementation of learning by imitation from the observation and analysis of other robotic teams. Moreover, state abstraction was efficiently implemented in reinforcement learning applied to the standard robot soccer problem. Finally, reinforcement learning was implemented in a form where actions are explored only in some states (for example, states where a specialist robot system used them), in contrast to the traditional form, where actions have to be tested in all states. In the second approach, reinforcement learning was implemented with function approximation, for which an algorithm called RBF-Sarsa(λ) was created. In both approaches, batch reinforcement learning algorithms were implemented and imitation learning was used as a seed for reinforcement learning; learning from robotic teams controlled by humans was also explored. The proposals in this work proved efficient in the standard robot soccer problem and, when implemented in other robotic systems, will allow those systems to carry out assigned tasks efficiently and effectively, with high adaptability to changes in requirements and environment.
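The RBF-Sarsa(λ) algorithm itself is not detailed in the abstract; the sketch below illustrates the generic combination it names, Sarsa(λ) with radial-basis-function features, on a one-dimensional toy task. The task, the RBF layout, and all constants are assumptions for illustration.

```python
import numpy as np

# Toy task: move along [0, 1]; reward 1 for reaching the right end.
CENTERS = np.linspace(0.0, 1.0, 10)            # RBF centers over the state space
SIGMA, ALPHA, GAMMA, LAM, EPS = 0.1, 0.1, 0.95, 0.8, 0.1
ACTIONS = (-0.05, +0.05)                       # step left / step right

def phi(x):
    """Gaussian RBF feature vector for position x."""
    return np.exp(-((x - CENTERS) ** 2) / (2 * SIGMA ** 2))

w = np.zeros((len(ACTIONS), len(CENTERS)))     # one weight vector per action
rng = np.random.default_rng(1)

def q(x, a):
    return w[a] @ phi(x)

for _ in range(200):                           # episodes
    x, z = 0.0, np.zeros_like(w)               # reset state and eligibility traces
    a = int(rng.integers(2))
    for _ in range(200):                       # steps
        x2 = min(max(x + ACTIONS[a], 0.0), 1.0)
        done = x2 >= 1.0
        r = 1.0 if done else 0.0
        a2 = (int(rng.integers(2)) if rng.random() < EPS
              else int(np.argmax([q(x2, b) for b in range(2)])))
        delta = r + (0.0 if done else GAMMA * q(x2, a2)) - q(x, a)
        z *= GAMMA * LAM                       # decay all traces
        z[a] += phi(x)                         # accumulate trace for taken action
        w += ALPHA * delta * z                 # Sarsa(lambda) update
        if done:
            break
        x, a = x2, a2

print("Q(0.5, right) > Q(0.5, left):", q(0.5, 1) > q(0.5, 0))
```

Seeding the weights from imitation data, instead of zeros, is one simple way to realize the "imitation learning as a seed for reinforcement learning" idea described above.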
Abstract:
This work presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those that allow many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques, to improve some of their performance and usability characteristics, are also common. In this work, a GA combined with a machine learning technique was analyzed and applied to antenna design. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on the knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness degree of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency-selective surfaces. In this particular work, the algorithm was developed to optimize the design of a microstrip antenna used in wireless communication systems for Ultra-Wideband (UWB) applications. The algorithm optimized two variables of the antenna geometry, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one Spline for each optimization objective, composing a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success regarding four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. At the end of the study, an increase in execution time was observed in comparison to a common GA, due to the time required by the machine learning process. On the other hand, we noticed a considerable gain in the flexibility and accuracy of the results, and a promising path indicating how the algorithm could address optimization problems with "n" variables.
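As a minimal sketch of this surrogate-fitness idea, the snippet below fits a bicubic spline (scipy's RectBivariateSpline, playing the role of the 2D Spline mentioned above) to a coarse grid of "measurements" over (Ws, Ls) and uses it as the fitness function inside a simple GA loop. The grid values, the single aggregated objective, and the GA settings are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(0)

# Illustrative "laboratory grid": fitness measured on a coarse (Ws, Ls) grid,
# with a synthetic optimum near (6, 4).
ws_grid = np.linspace(1.0, 10.0, 6)
ls_grid = np.linspace(1.0, 10.0, 6)
measured = -((ws_grid[:, None] - 6.0) ** 2 + (ls_grid[None, :] - 4.0) ** 2)

surrogate = RectBivariateSpline(ws_grid, ls_grid, measured)  # bicubic by default

def fitness(ind):
    """Estimate fitness of a (Ws, Ls) candidate from the spline surrogate."""
    return float(surrogate(ind[0], ind[1])[0, 0])

pop = rng.uniform(1.0, 10.0, size=(20, 2))         # initial (Ws, Ls) population
for _ in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]        # truncation selection
    children = parents + rng.normal(0.0, 0.3, parents.shape)  # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), 1.0, 10.0)

best = max(pop, key=fitness)
print("best (Ws, Ls):", best)  # should approach the grid optimum near (6, 4)
```

Evaluating candidates against the cheap spline instead of a full electromagnetic simulation is what buys the flexibility the abstract reports, at the cost of the initial time spent building the surrogate.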
Abstract:
This work was carried out at the Mogi-Guaçu hydroelectric power plant, owned by AES Tietê S.A. Surveys of aquatic plants and sampling of water and sediment were performed in July 2001 (dry season), November 2001 (beginning of the rainy season), and March 2002 (end of the rainy season). A total of 846 water analyses and 516 sediment analyses were carried out, evaluating 47 water characteristics and 41 sediment characteristics. Marginal and floating species were the main weeds in the reservoir, most notably Brachiaria subquadripara, Eichhornia crassipes, Polygonum lapathifolium, Panicum rivulare, Salvinia auriculata, and Pistia stratiotes. Turbidity and the consequent low light transmission through the water column prevented the development of submerged plants. The phosphorus and nitrogen contents found in both the water and the sediment were sufficient to support the growth of aquatic plants. The occurrence of marginal and floating plants was associated with the intense sedimentation process observed in the reservoir.
Abstract:
The application of composite materials, in particular fiber-reinforced plastics (FRP), has gradually gained ground over so-called conventional materials. However, challenges arise when they are applied in equipment and mechanical structures exposed to harsh environmental conditions, especially when environmental degradation due to temperature, UV radiation, and moisture affects the mechanical performance of these structures, causing irreversible structural damage such as loss of dimensional stability, interfacial degradation, loss of mass, loss of structural properties, and changes in the damage mechanism. In this context, the objective of this thesis is the development of a process for monitoring and modeling structural degradation, and the study of the physical and mechanical properties of FRP in the presence of adverse environmental conditions (ageing). The ageing mechanism is characterized by controlled environmental conditions of heated steam and ultraviolet radiation. For this research, three polymer composites were developed. The first was a lamina of polyester resin reinforced with a short E-glass fiber mat (representing the layer exposed to ageing); the other two were laminates, both with seven layers of reinforcement, one made up only of short E-glass fibers and the other a hybrid reinforced with E-glass and curaua fibers. It should be noted that both laminates have the short E-glass fiber lamina as the layer on which the ageing process is incident. Specimens were cut from these composites and submitted to accelerated environmental ageing in an ageing chamber. To study the monitoring and modeling of degradation, the lamina was exposed to the following ageing cycles: alternating cycles of UV radiation and heated steam, a cycle of UV radiation only, and a cycle of heated steam only, for a period defined by standard. The laminates underwent only the alternating cycle of UV and heated steam. At the end of the exposure period, the specimens were subjected to a structural stability assessment by means of the developed measurement of thickness variation technique (MTVT) and measurement of mass variation technique (MMVT). They were then subjected to uniaxial tensile tests, for the lamina and all the laminates, as well as three-point bending tests for the laminates. This study was followed by characterization of the fracture and of the surface degradation. Finally, a model called the Ageing Zone Diagram (AZD) was developed for monitoring and predicting the tensile strength of the composites after the ageing processes. From the results, it was observed that the degradation process occurs differently for each composite studied, although all were affected to some degree; the most aggressive ageing process was UV radiation, and the hybrid E-glass/curaua laminate was the most affected in its mechanical properties.