917 results for STACKING-FAULTS
Abstract:
The question of popular participation has been debated in Brazil since the 1980s in the search for better ways to meet the population's demands. More specifically, after the democratic opening (1985), ways began to be considered to make the population participate in decisions on the allocation of public resources. Genuine participation does not actually take place: the population is, at most, consulted, and its participation remains restricted to certain technical interests within projects, mainly in public policies for local development. Such implementation occurs through a process with its own limits (steps) that could be overcome through strategies designed for that purpose. This dissertation presents the results of research on participative practices in the city of Serrinha between 1997 and 2004, showing, through a case study of Serrinha, the process used to carry out these practices at a time and place considered a model of their application. The analyses were developed through a research model elaborated by the author, based on an extensive literature on the ideal process for implementing a participative public policy. The research took a qualitative approach, of an exploratory and descriptive nature. The researcher (the author of this dissertation) carried out all phases of the research, including the transcription of interviews recorded with a digital voice recorder. The analysis of these data showed that, although the public manager (the former mayor) had a real interest in implementing a local development process in the city, he was unable to devise the correct process for doing so. Two major faults were made. The first was the intention to use a development plan as a tool, so that the plan ended up being the driver of the supposedly participative practice, rather than the ideal model, in which the plan would be generated by popular initiative. The second was the absence of a critical education project for the population, which should have been the first step in carrying out a policy of this kind.
Abstract:
This master's dissertation presents the development of a fault detection and isolation system based on neural networks. The system is composed of two parts: an identification subsystem and a classification subsystem. Both subsystems use neural network techniques with the multilayer perceptron training algorithm. Two approaches for the identification stage were analyzed. The fault classifier uses only the residue signals produced by the identification subsystem. To validate the proposal, simulations and real experiments were carried out on a level-control system with two water reservoirs. Several faults were applied to this plant, and the proposed fault detection system exhibited very acceptable behavior. At the end of this work, we highlight the main difficulties found in real tests that do not arise when working only in simulation environments.
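The detection idea described above can be sketched in a few lines: a model predicts the plant output, and the residue (measurement minus prediction) is thresholded to flag a fault. The toy first-order tank model, gains, and threshold below are illustrative assumptions, not the dissertation's neural-network identifier.

```python
# Sketch of residue-based fault detection: a model predicts the plant
# output, and the residue (measurement minus prediction) is thresholded.
# The first-order tank model and all numeric values are assumptions.

def predict_level(level, inflow, dt=1.0, k_out=0.1):
    """One-step prediction of a single tank level (toy first-order model)."""
    return level + dt * (inflow - k_out * level)

def detect_fault(measured, predicted, threshold=0.5):
    """Flag a fault when the residue magnitude exceeds the threshold."""
    residue = measured - predicted
    return abs(residue) > threshold, residue

# Healthy step: measurement agrees with the model prediction.
pred = predict_level(level=10.0, inflow=2.0)
faulty, r = detect_fault(measured=pred + 0.1, predicted=pred)    # -> no fault
# Faulty step: e.g. a leak makes the real level drop well below prediction.
faulty2, r2 = detect_fault(measured=pred - 2.0, predicted=pred)  # -> fault
```

In the dissertation, the predictor is a trained MLP rather than an analytic model, and the classifier maps residue patterns to fault types instead of using a single threshold.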
Abstract:
At present, electricity generation from wind energy has growing importance in the world, with very large plans for future wind power installations worldwide. The increase in electricity generation from wind power thus increasingly requires studies of the interaction between wind parks and electric power systems. This work aims to implement equivalent models of synchronous wind generators to represent a wind park in the ATP program and to check the behavior of the models through simulations. Simulations with applied faults were carried out to evaluate the behavior of the system voltages for each equivalent model, comparing the results of the proposed models in order to verify whether the differences obtained allow the adoption of the simplest model.
Abstract:
Ensuring dependability requirements is essential for industrial applications, since faults may cause failures whose consequences result in economic losses, environmental damage, or harm to people. Given the relevance of the topic, this thesis proposes a methodology for the dependability evaluation of industrial wireless networks (WirelessHART, ISA100.11a, WIA-PA) at the early design phase; the proposal can also be easily adapted to the maintenance and expansion stages of a network. The proposal uses graph theory and the fault tree formalism to automatically create, from a given industrial wireless network topology, an analytical model on which dependability can be evaluated. The supported evaluation metrics are reliability, availability, MTTF (mean time to failure), importance measures of devices, redundancy aspects, and common cause failures. It must be emphasized that the proposal is independent of any particular tool for quantitatively evaluating the target metrics; however, for validation purposes, a tool widely accepted in academia (SHARPE) was used. In addition, an algorithm for generating minimal cut sets, originally applied in graph theory, was adapted to the fault tree formalism to guarantee the scalability of the methodology in industrial wireless network environments (< 100 devices). Finally, the proposed methodology was validated on typical scenarios found in industrial environments, such as star, line, cluster, and mesh topologies. Scenarios with common cause failures were also evaluated, along with best practices to guide the design of an industrial wireless network. To guarantee scalability requirements, the performance of the methodology was analyzed in different scenarios, and the results showed the applicability of the proposal to networks typically found in industrial environments.
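The analytical evaluation step above can be illustrated with a minimal fault-tree evaluator: basic events carry failure probabilities, and AND/OR gates combine them assuming independence. The tree structure and probabilities below are illustrative assumptions; real tools such as SHARPE handle far richer models.

```python
# Minimal sketch of evaluating a fault tree analytically, in the spirit
# of the methodology above. Tree and probabilities are assumptions.

def failure_prob(node):
    """Recursively evaluate the top-event probability of a fault tree.
    A node is either a float (basic-event failure probability) or a
    tuple ('AND'|'OR', [children]), assuming independent basic events."""
    if isinstance(node, float):
        return node
    gate, children = node
    probs = [failure_prob(c) for c in children]
    if gate == 'AND':               # all children must fail
        p = 1.0
        for q in probs:
            p *= q
        return p
    p_ok = 1.0                      # OR gate: at least one child fails
    for q in probs:
        p_ok *= (1.0 - q)
    return 1.0 - p_ok

# Hypothetical topology: the network fails if the gateway fails OR both
# redundant access points fail.
tree = ('OR', [0.01, ('AND', [0.05, 0.05])])
p_top = failure_prob(tree)  # 1 - (1 - 0.01) * (1 - 0.05 * 0.05)
```

A minimal-cut-set view of the same tree would list {gateway} and {AP1, AP2} as the ways the top event can occur.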
Abstract:
Industries are becoming more and more rigorous where safety is concerned, whether to avoid financial losses due to accidents and low productivity or to protect the environment. It was in view of major accidents around the world involving aircraft and industrial processes (nuclear, petrochemical, and so on) that we decided to invest in systems for fault detection and diagnosis (FDD). FDD systems can prevent eventual faults by helping operators with the maintenance and replacement of defective equipment. Nowadays, issues involving detection, isolation, diagnosis, and fault-tolerant control are gathering strength in academic and industrial environments. Based on this, in this work we discuss the importance of techniques that can assist in the development of systems for fault detection and diagnosis and propose a hybrid FDD method for dynamic systems. We present a brief history to contextualize the techniques used in working environments. Fault detection in the proposed system is based on state observers in conjunction with other statistical techniques. The main idea is to use the observer itself not only as analytical redundancy but also to generate a residue, which is then used for FDD. A signature database assists in the identification of system faults: based on signatures derived from trend analysis of the residue signal and its difference, the classification of faults is performed by a decision tree. This FDD system is tested and validated on two plants: a simulated coupled-tank plant and a didactic plant with industrial instrumentation. All results collected from those tests are discussed.
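The observer-based residue generation described above can be sketched with a scalar Luenberger-style observer: the observer tracks the plant, and the innovation (measured output minus estimated output) serves as the residue, which stays near zero while the plant is healthy and deviates when a fault appears. The scalar plant, gains, and sensor-bias fault below are illustrative assumptions.

```python
# Sketch of observer-based residue generation for FDD. The observer
# tracks the plant; the innovation y - xhat is the residue. Plant model,
# gains, and the injected sensor-bias fault are assumptions.

def simulate(steps, fault_at=None, a=0.9, b=0.5, gain=0.6):
    x, xhat, u = 0.0, 0.0, 1.0
    residues = []
    for k in range(steps):
        # Sensor-bias fault: measurement offset of +2 after `fault_at`.
        y = x + (2.0 if fault_at is not None and k >= fault_at else 0.0)
        r = y - xhat                        # residue (innovation)
        residues.append(r)
        xhat = a * xhat + b * u + gain * r  # observer update
        x = a * x + b * u                   # true plant update
    return residues

healthy = simulate(50)               # residue stays at zero
faulty = simulate(50, fault_at=25)   # residue jumps at the fault instant
```

In the dissertation, trend analysis of this residue and of its difference produces a signature that a decision tree matches against a fault database.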
Abstract:
This work presents a packet manipulation tool developed to perform tests on industrial devices that implement TCP/IP-based communication protocols. The tool was developed in the Python programming language as a Scapy extension. The tool, named IndPM (Industrial Packet Manipulator), can perform vulnerability tests on industrial network devices, run industrial protocol compliance tests, receive server replies, and use the Python interpreter to build tests. The Modbus/TCP protocol was implemented as a proof of concept. The DNP3 over TCP protocol was also implemented, but tests could not be performed because of a lack of resources. The IndPM results with the Modbus/TCP protocol reveal implementation faults in the communication module of a Programmable Logic Controller frequently used by automation companies.
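To illustrate the kind of frame such a tool manipulates, here is a stdlib-only sketch of building and parsing a Modbus/TCP request (the actual tool is a Scapy extension). The field layout follows the public Modbus/TCP specification: an MBAP header (transaction id, protocol id, length, unit id) followed by the PDU.

```python
# Sketch of Modbus/TCP framing: MBAP header + PDU, per the public spec.
# Standard library only; IndPM itself builds on Scapy.

import struct

def build_read_holding_registers(tx_id, unit, start_addr, count):
    """Function code 0x03: read `count` holding registers at `start_addr`."""
    pdu = struct.pack('>BHH', 0x03, start_addr, count)
    # MBAP: transaction id, protocol id (0 = Modbus), length of what
    # follows (unit id + PDU), unit id.
    mbap = struct.pack('>HHHB', tx_id, 0, len(pdu) + 1, unit)
    return mbap + pdu

def parse_mbap(frame):
    tx_id, proto, length, unit = struct.unpack('>HHHB', frame[:7])
    return {'tx_id': tx_id, 'proto': proto, 'length': length, 'unit': unit}

frame = build_read_holding_registers(tx_id=1, unit=0x11, start_addr=0, count=10)
hdr = parse_mbap(frame)
```

Fuzzing-style tests like IndPM's can then mutate individual fields (e.g. an inconsistent length field) and observe how a device's protocol stack reacts.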
Abstract:
This work presents a fault diagnosis system (rotor, stator, and contamination faults) for three-phase induction motors using equivalent circuit parameters and pattern recognition techniques. Fault diagnosis technology for motors is evolving and becoming increasingly important in the field of electrical machinery. Neural networks have the ability to classify non-linear relationships between signals by identifying patterns among related signals. Induction motor simulations are carried out using Matlab®/Simulink®, and faults are produced by modifying the equivalent circuit parameters. A system with multiple classifying neural networks is implemented: two neural networks receive these results and, once well trained, identify the fault pattern.
Abstract:
Nanocellulose consists of the crystalline domains obtained from renewable cellulosic sources, used to increase mechanical properties and biodegradability in polymer composites. The aim of this work was to study how high-pressure defibrillation and chemical purification affect PALF fibre morphology from the micro to the nanoscale. Microscopy techniques and X-ray diffraction were used to study the structure and properties of the prepared nanofibres and composites. Microscopy studies showed that the individualization processes used lead to a unique morphology of an interconnected web-like structure of PALF fibres. The produced nanofibres were bundles of cellulose fibres with widths ranging between 5 and 15 nm and estimated lengths of several micrometres. The percentage yield and aspect ratio of the nanofibres obtained by this technique are very high in comparison with other conventional methods. The nanocomposites were prepared by compression moulding, stacking the nanocellulose fibre mats between polyurethane films. The results showed that the nanofibrils reinforced the polyurethane efficiently: the addition of 5 wt% of cellulose nanofibrils to PU increased the strength by nearly 300% and the stiffness by 2600%. The developed composites were used to fabricate various versatile medical implants. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
In a real process, all resources used, whether physical or implemented in software, are subject to interruptions or operational compromises. In situations involving critical systems, however, any kind of problem may have major consequences. With this in mind, this work aims to develop a system capable of detecting the presence, and indicating the type, of failures that may occur in a process. To implement and test the proposed methodology, a coupled-tank system was used as a case study. The system is designed to generate a set of signals that notify the process operator and that may be post-processed, enabling changes in the control strategy or in the controller parameters. Due to the risk of damage to the sensors, actuators, and amplifiers of the real plant, the fault data set is generated computationally and the results are collected from numerical simulations of the process model. The system is composed of structures based on Artificial Neural Networks, trained offline using Matlab®.
Abstract:
Induction motors are among the most important pieces of equipment in modern industry. In many situations, however, they are subject to inadequate conditions such as high temperatures and pressures, load variations, and constant vibrations. Such conditions leave them more susceptible to failures, whether of external or internal origin, which are unwanted in the industrial process. In this context, predictive maintenance plays an important role: the timely detection and diagnosis of faults increases motor lifetime and makes it possible to reduce the costs caused mainly by production stops and corrective maintenance of the motor itself. Against this backdrop, this work proposes the design of a system able to detect and diagnose faults in induction motors from measurements of the line voltages and currents, together with the motor speed. This information is used as input to a rule-based fuzzy inference system that detects and classifies a failure from the variation of these quantities.
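A rule-based fuzzy classifier of the kind described above can be sketched in a few lines: membership functions map measurements to degrees of truth, and rules combine them (here with min as the AND operator). The membership functions and the two rules below are illustrative assumptions, not the dissertation's actual rule base.

```python
# Tiny sketch of a rule-based fuzzy fault classifier. Membership
# functions, per-unit ranges, and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(current_pu, speed_pu):
    """Return fault degrees from per-unit current and speed."""
    high_current = tri(current_pu, 1.1, 1.5, 2.0)
    low_speed = tri(speed_pu, 0.5, 0.8, 0.95)
    normal_speed = tri(speed_pu, 0.9, 1.0, 1.1)
    # Rule 1: high current AND low speed -> overload fault.
    # Rule 2: high current AND normal speed -> stator fault.
    return {
        'overload': min(high_current, low_speed),
        'stator': min(high_current, normal_speed),
    }

result = diagnose(current_pu=1.5, speed_pu=0.8)  # strongly 'overload'
```

A full Mamdani system would add fuzzy output sets and defuzzification; for fault classification, the rule with the highest firing degree is often taken directly as the diagnosis.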
Abstract:
Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used for the recognition of protein structural classes: Decision Trees, k-Nearest Neighbors, Naive Bayes, Support Vector Machines, and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking, and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, artificial class-balancing techniques (Random Undersampling, Tomek Links, CNN, NCL, and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, where the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by a hypothesis test, to evaluate whether there is a statistically significant difference between them.
Regarding the results obtained with the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for solving the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, their use did improve the classification error for the minority class. In this context, the NCL technique showed itself to be the most appropriate.
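The Voting scheme mentioned above is the simplest multi-classification system: several base classifiers predict a class and the majority label wins. In this sketch the "classifiers" are trivial stand-in functions over a one-dimensional feature (assumptions for illustration); in the dissertation they are trained ML models.

```python
# Minimal sketch of majority voting over base classifiers. The stand-in
# classifiers and the 1-D feature are illustrative assumptions.

from collections import Counter

def majority_vote(classifiers, sample):
    """Predict the label chosen by most base classifiers for `sample`."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical base classifiers predicting 'alpha' or 'beta' classes.
clf_a = lambda x: 'alpha' if x < 0.5 else 'beta'
clf_b = lambda x: 'alpha' if x < 0.7 else 'beta'
clf_c = lambda x: 'alpha' if x < 0.3 else 'beta'

pred = majority_vote([clf_a, clf_b, clf_c], sample=0.4)  # two of three vote 'alpha'
```

Stacking differs in that, instead of counting votes, a meta-classifier (Linear Regression in the StackingC variant used here) is trained on the base classifiers' outputs.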
Abstract:
This work consists of the creation of an expert system that uses production rules to detect inadequacies in the command circuits of a motor operation and command scheme known as Direct Start. Three other modules are developed alongside it: one for simulating the command diagram, one for simulating faults, and another for correcting defects in the diagram, with the objective of training professionals for better qualification in operation and maintenance. The development is carried out in such a way that the structure of the task allows the system to be extended and subsequently applied to other larger and more complex typical systems. The LabVIEW computational environment is employed to build the system.
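The production-rule mechanism at the heart of such an expert system can be sketched as a forward-chaining engine: rules of the form "conditions imply conclusion" fire repeatedly until no new fact can be derived. The rules about a direct-start command circuit below are illustrative assumptions (the actual system is built in LabVIEW).

```python
# Sketch of a forward-chaining production-rule engine. The diagnostic
# rules for a direct-start command circuit are illustrative assumptions.

def forward_chain(facts, rules):
    """Fire rules (conditions -> conclusion) until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules for diagnosing why a motor does not start.
rules = [
    (['start_pressed', 'no_contactor_click'], 'coil_circuit_open'),
    (['coil_circuit_open', 'fuse_blown'], 'replace_fuse'),
]
facts = forward_chain(['start_pressed', 'no_contactor_click', 'fuse_blown'], rules)
```

Chaining lets the second rule build on the first rule's conclusion, which is what makes the rule base easy to extend to larger command circuits.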
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the parts that constitute a seismic study. Seismic processing, in particular, is focused on imaging the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry and to hardware advances that provided greater storage and digital processing capabilities, enabling the development of more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the extensive amount of data input and output involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work carried out the parallelization of the core of a Reverse Time Migration (RTM) algorithm, using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique.
Furthermore, analyses such as speedup and efficiency were performed and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected of future processors.
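The computational core that such a parallelization targets is essentially a finite-difference wave-equation stencil applied over the grid at every time step. Below is a toy serial 1-D version in Python for illustration only; the dissertation parallelizes a full 2-D/3-D kernel with OpenMP in compiled code, and the grid size, velocity, and step counts here are assumptions.

```python
# Toy 1-D finite-difference wave-propagation step, the kind of stencil
# at the core of RTM. The inner loop over grid points is what OpenMP
# would parallelize. All numeric choices are illustrative assumptions.

def wave_step(u_prev, u_curr, c=0.5):
    """One leapfrog time step of u_tt = c^2 * u_xx (unit dx, dt)."""
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):   # interior points only (fixed boundaries)
        lap = u_curr[i - 1] - 2.0 * u_curr[i] + u_curr[i + 1]
        u_next[i] = 2.0 * u_curr[i] - u_prev[i] + (c ** 2) * lap
    return u_next

# Inject a point source in the middle of the grid and propagate it.
n = 101
u_prev = [0.0] * n
u_curr = [0.0] * n
u_curr[50] = 1.0
for _ in range(10):
    u_prev, u_curr = u_curr, wave_step(u_prev, u_curr)
```

Because each `u_next[i]` depends only on the previous two time levels, iterations of the spatial loop are independent, which is exactly what makes the kernel amenable to an OpenMP parallel-for.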
Abstract:
This work proposes a computer simulator for sucker-rod-pumped vertical wells. The simulator is able to represent the dynamic behavior of the system and to compute several important parameters, allowing easy visualization of the relevant phenomena. Using the simulator, many tests can be executed at lower cost and in less time than with experiments in real wells. The simulation uses a model based on the dynamic behavior of the rod string, represented by a second-order partial differential equation. Through this model, several common field situations can be verified. Moreover, the simulation includes 3D animations, facilitating the physical understanding of the process through better visual interpretation of the phenomena. Another important characteristic is the emulation of the main sensors used in sucker-rod pumping automation. The sensor emulation is implemented through a microcontrolled interface between the simulator and industrial controllers; by means of this interface, the controllers treat the simulator as a real well. A "fault module" was included in the simulator, incorporating the six most important faults found in sucker-rod pumping. The analysis and verification of these problems through the simulator therefore allows the user to identify situations that otherwise could be observed only in the field. The simulation of these faults receives a different treatment due to the different boundary conditions imposed on the numerical solution of the problem. Possible applications of the simulator include the design and analysis of wells, training of technicians and engineers, execution of tests on controllers and supervisory systems, and validation of control algorithms.
Abstract:
There is a growing need to develop new tools that help end users in tasks related to the design, monitoring, maintenance, and commissioning of critical infrastructures. The complexity of the industrial environment, for example, requires that these tools have flexible features in order to provide valuable data to designers during the design phases. Furthermore, it is known that industrial processes have stringent dependability requirements, since failures can cause economic losses, environmental damage, and danger to people; tools that enable the evaluation of faults in critical infrastructures could mitigate these problems. Accordingly, this work presents the development of a framework for dependability analysis of critical infrastructures. The proposal allows the modeling of a critical infrastructure, mapping its components to a Fault Tree; the mathematical model generated is then used for the dependability analysis of the infrastructure, based on the failures of the equipment and its interconnections. Finally, typical scenarios of industrial environments are used to validate the proposal.