90 results for "Árvore de falha" (fault tree)
Abstract:
Nurses in hemodialysis play an important role in implementing the nursing process within the context of a theoretical framework. Among the nursing theories, Roy's Adaptation Model stands out; it considers the person a holistic adaptive system and aims to help patients adapt to different living conditions. It is therefore believed that Roy's nursing process can guide nursing care for patients on dialysis. The study aimed to analyze the nursing diagnoses present in patients with chronic kidney disease on hemodialysis, based on Roy's theoretical model and NANDA-International. This descriptive, cross-sectional study was performed at a dialysis center in a city in northeastern Brazil, with a sample of 178 patients obtained by consecutive convenience sampling. Data collection occurred from October 2011 to February 2012 through interview and physical examination forms. Data analysis began with clinical reasoning, diagnostic judgment, and similarity relations; the data were then entered into SPSS version 16.0, generating descriptive statistics. The project was approved by the Research Ethics Committee (protocol nº 115/11, Presentation Certificate for Ethics Appreciation 0139.0.051.000-111) and was funded by the MCT/CNPq Universal call 14/2010. The results revealed that most patients were male (52.2%), married (62.9%), and residents of the metropolitan region of Natal (54.5%). The mean age was 46.6 years and the mean schooling 8.5 years. An average of 6.6 nursing diagnoses per patient was obtained, notably: risk for infection (100%), excess fluid volume (99.4%), and hypothermia (61.8%). The average number of adaptive problems was 6.4, the most common being intracellular fluid retention (99.4%), hyperkalemia (64.6%), hypothermia (61.8%), and edema (53.9%).
Twenty similarity relations were established between NANDA-International nursing diagnoses and Roy's adaptive problems, namely: risk for falls / risk for injury and potential for injury; impaired physical mobility and restricted mobility and/or coordination; dressing self-care deficit and loss of self-care ability; hypothermia and hypothermia; impaired skin integrity and impaired skin integrity; excess fluid volume and intracellular fluid retention / hyperkalemia / hypocalcemia / edema; imbalanced nutrition: less than body requirements and nutrition less than the body's needs; constipation and constipation; acute pain and acute pain; chronic pain and chronic pain; disturbed sensory perception (visual, tactile, and auditory) and impairment of a primary sense (sight, hearing, and touch); sleep deprivation and insomnia; fatigue and activity intolerance; ineffective self-health management and failure in the role; sexual dysfunction and sexual dysfunction; situational low self-esteem and low self-esteem; and diarrhea and diarrhea. We conclude that there is similarity between the typologies, although an analysis of each model was required because they establish nursing diagnoses in different ways. Moreover, the use of the nursing process, within the context of a theory and a classification system, supports care and contributes to strengthening nursing science.
Abstract:
This master's thesis presents a reliability study conducted on onshore oil fields of Petrobras in the Potiguar Basin (RN/CE), Brazil. The main objective was to build a regression model to predict the risk of failures that prevent production wells from functioning properly, using explanatory variables related to the wells, such as the lift method, the amount of water produced in the well (BSW), the gas-oil ratio (RGO), the depth of the production pump, and the operational unit of the oil field, among others. The study was based on a retrospective sample of 603 oil columns drawn from all those functioning between 2000 and 2006. Statistical hypothesis tests under a Weibull regression model fitted to the failure data allowed the selection of significant predictors, from the set considered, to explain the time to first failure in the wells.
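The Weibull lifetime modeling underlying the study can be sketched as follows: a minimal example that fits a two-parameter Weibull distribution to failure times with SciPy. The data here are synthetic, standing in for the thesis's 603 real well columns, and all parameter values are hypothetical.

```python
from scipy.stats import weibull_min

# Hypothetical first-failure times (in days) for a set of wells;
# synthetic stand-in for real field data.
times = weibull_min.rvs(1.5, scale=900.0, size=603, random_state=42)

# Fit a two-parameter Weibull (location fixed at 0), as is usual
# for lifetime data.
shape, loc, scale = weibull_min.fit(times, floc=0)

# shape < 1 suggests infant mortality; shape > 1, wear-out failures.
print(f"shape={shape:.2f}, scale={scale:.1f}")
```

A full Weibull regression, as in the thesis, would additionally let the scale parameter depend on covariates (lift method, BSW, RGO, etc.), but the fitting idea is the same.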
Abstract:
Internet applications such as media streaming, collaborative computing, and massively multiplayer games are on the rise. This leads to the need for multicast communication, but group communication support based on IP multicast has unfortunately not been widely adopted, due to a combination of technical and non-technical problems. Therefore, a number of different application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Server-centric architectures for membership management thus have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. Given this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared, bidirectional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways: first, through a simulator developed in this work, where the quality of the distribution tree was evaluated by metrics such as out-degree and path length; second, through real-life scenarios built in the ns-3 network simulator, where the protocol's performance was evaluated by metrics such as stress, stretch, time to first packet, and group reconfiguration time.
Abstract:
Ensuring dependability requirements is essential for industrial applications, since faults may cause failures whose consequences result in economic losses, environmental damage, or harm to people. Given the relevance of the topic, this thesis proposes a methodology for the dependability evaluation of industrial wireless networks (WirelessHART, ISA100.11a, WIA-PA) at the early design phase; the proposal can also be easily adapted to the maintenance and expansion stages of a network. The proposal uses graph theory and the fault tree formalism to automatically create an analytical model from a given industrial wireless network topology, on which dependability can be evaluated. The evaluation metrics supported are reliability, availability, MTTF (mean time to failure), importance measures of devices, redundancy aspects, and common cause failures. It must be emphasized that the proposal is independent of any particular tool for quantitatively evaluating the target metrics; however, for validation purposes, a tool widely accepted in academia (SHARPE) was used. In addition, an algorithm to generate minimal cut sets, originally applied in graph theory, was adapted to the fault tree formalism to guarantee the scalability of the methodology in industrial wireless network environments (< 100 devices). Finally, the proposed methodology was validated on typical scenarios found in industrial environments, such as star, line, cluster, and mesh topologies. Scenarios with common cause failures were also evaluated, along with best practices to guide the design of an industrial wireless network. To verify the scalability requirements, the performance of the methodology was analyzed in different scenarios, and the results showed the applicability of the proposal to networks typically found in industrial environments.
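The minimal-cut-set generation that such a methodology relies on can be illustrated with a toy fault tree. This is a sketch only: the gate names, basic events, and tree below are hypothetical, and production MOCUS-style implementations apply absorption during expansion rather than at the end, precisely for the scalability reasons the thesis raises.

```python
from itertools import product

# Hypothetical fault tree: each gate maps to ("AND"|"OR", [children]);
# names not in the dict are basic events (e.g. device failures).
TREE = {
    "TOP": ("OR",  ["G1", "C"]),
    "G1":  ("AND", ["A", "G2"]),
    "G2":  ("OR",  ["B", "C"]),
}

def cut_sets(node):
    """Return all cut sets of `node` as a list of frozensets."""
    if node not in TREE:                       # basic event
        return [frozenset([node])]
    gate, children = TREE[node]
    child_sets = [cut_sets(ch) for ch in children]
    if gate == "OR":                           # union of children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # AND gate: cross-product, merging one cut set per child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(node):
    """Keep only cut sets with no strict subset among the others."""
    sets_ = cut_sets(node)
    return {cs for cs in sets_ if not any(other < cs for other in sets_)}

# Minimal cut sets here are {C} and {A, B}; {A, C} is absorbed
# because {C} alone already fails the top event.
print(minimal_cut_sets("TOP"))
```

Each minimal cut set is a smallest combination of device failures that brings the network down, which is what feeds the reliability and importance computations.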
Abstract:
This dissertation proposes alternative models to allow the interconnection of the data communication networks of COSERN (Companhia Energética do Rio Grande do Norte). These networks comprise the corporate data network, based on the TCP/IP architecture, and the automation system linking remote electric energy distribution substations to the main Operation Center, based on digital radio links and using the IEC 60870-5-101 protocols. The envisaged interconnection aims to provide automation data originating from the substations with a contingency route to the Operation Center during failures or maintenance of the digital radio links. Among the models presented, the one chosen for development consists of a computational prototype based on a standard personal computer, running the Linux operating system and an application, developed in the C language, which functions as a gateway between the protocols of the TCP/IP stack and the IEC 60870-5-101 suite. The analysis, implementation, and functionality and performance tests of this model are described. During the test phase, the delay introduced by the TCP/IP network when transporting automation data was verified, in order to guarantee that it was consistent with the time constraints of the automation network. In addition, extra modules are suggested for the prototype, in order to handle issues such as security and prioritization of the automation system data while they traverse the TCP/IP network. Finally, a study was carried out aiming to integrate the two networks more completely, using the IP platform as a convergence solution for the communication subsystem of a unified network, as the most recent market tendencies for supervisory and other automation systems indicate.
Abstract:
This work presents a model of a bearingless induction machine with divided winding. The main goal is to obtain a machine model that allows a control system as simple as that used in conventional induction machines, and to understand its behavior. The same strategies used for conventional machines were applied to derive the bearingless induction machine model, which made an easier treatment of the parameters involved possible. The studied machine is adapted from a conventional induction machine: the stator windings were divided and all terminals were made available. This approach does not need an auxiliary stator winding for radial position control, which results in a more compact machine. Another issue in this machine is the variation of the inductance matrix caused by rotor displacement: the changing air gap produces variation in the magnetic flux and, consequently, in the inductances. The conventional machine model can be used for the bearingless machine when the rotor is centered, but it is not applicable under rotor displacement. The bearingless machine has two motor-bearing sets, both with four poles. It was constructed in the horizontal position, which increases the difficulty of implementation. The rotor used has peculiar characteristics; it is designed according to the stator to yield the greatest possible torque and force. It is important to observe that the current unbalance generated by the position control does not modify the machine characteristics; this only occurs due to the radial rotor displacement. The obtained results validate the work: the data acquired by a supervisory system correspond to the results foreseen in simulation, which verifies the validity of the model.
Abstract:
The evolution of semiconductor technologies has allowed devices to be developed with higher processing capability, and such components have come to be widely used in many fields. Industrial environments such as oil, mining, automotive, and hospital facilities frequently use these devices in their processes. The activities of these industries are directly related to environmental and health safety, so it is quite important that such systems have extra safety features that yield more reliability, safety, and availability. The reference model eOSI presented in this work is intended to allow the development of systems from a new perspective, one that can improve and simplify the choice of fault tolerance strategies. As a way to validate the model, an FPGA-based architecture was developed.
Abstract:
The Global Positioning System, or simply GPS, is a radionavigation system developed by the United States for military applications, but it has become very useful for civilian use. In recent decades Brazil has developed sounding rockets, and today many projects to build micro- and nanosatellites have appeared. Vehicles of this kind, called spacecraft or high-dynamics vehicles, can use GPS for autonomous location and trajectory control. Despite the huge number of GPS receivers available for civilian applications, they cannot be used in high-dynamics vehicles due to environmental issues (vibration, temperature, etc.) or their imposed dynamic operating limits. Only a few nations have the technology to build GPS receivers for spacecraft or high-dynamics vehicles, and they impose rules that hinder access to these receivers. This project intends to build a GPS receiver, install it in the payload of a sounding rocket, and collect data to verify its correct operation under flight conditions. The receiver's internal software was available in source code and was tested on a software development platform named GPS Architect. Many organizations cooperated to support this project: AEB, UFRN, IAE, INPE, and CLBI. After many phases (defining operating conditions, selecting and sourcing electronics, fabricating the printed circuit boards, assembly, and assembly tests), the receiver was installed in a VS30 sounding rocket launched at the Centro de Lançamento da Barreira do Inferno in Natal/RN. Although position data from the receiver were collected only for the first 70 seconds of flight, these data confirm the correct operation of the receiver through the comparison between its positioning data and the trajectory data from CLBI's tracking radar, named ADOUR.
Abstract:
This study uses a computational model that considers the statistical characteristics of the wind and the reliability characteristics of a wind turbine, such as failure and repair rates, representing the wind farm by a Markov process in order to determine the estimated annual energy generated, and compares it with a real case. This model can also be used in reliability studies, and provides performance indicators that help in analyzing the feasibility of setting up a wind farm, once the power curve and wind speed measurements are available. To validate the model, simulations were done using the database of the PETROBRAS wind farm in Macau. The results were very close to the real values, confirming that the model successfully reproduced the behavior of all components involved. Finally, the results of this model were compared with the annual energy estimated by modeling the wind distribution with a Weibull statistical distribution.
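For a single turbine, the core of such a model reduces to a two-state Markov chain (up/down) whose steady-state availability scales the energy estimate. A minimal sketch with hypothetical rates, not the Macau wind farm's data:

```python
# Two-state Markov model for one turbine: "up" with failure rate lam,
# "down" with repair rate mu (both per hour). Steady-state balance
# lam * P_up = mu * P_down with P_up + P_down = 1 gives A = mu/(lam+mu).
lam = 1 / 1900.0   # hypothetical MTTF of 1900 h
mu = 1 / 80.0      # hypothetical MTTR of 80 h

availability = mu / (lam + mu)

# Estimated annual energy: availability times the mean power that the
# power curve yields over the wind-speed distribution (hypothetical).
hours_per_year = 8760
mean_power_kw = 450.0
annual_energy_mwh = availability * mean_power_kw * hours_per_year / 1000

print(f"A = {availability:.4f}, E = {annual_energy_mwh:.0f} MWh/yr")
```

A full wind-farm model, as in the study, would combine many such turbine chains and the wind-state process, but the availability term enters the energy estimate in the same way.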
Abstract:
Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation. However, such conformations are very difficult to obtain experimentally in the laboratory; thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of known protein sequences and the number of experimentally determined three-dimensional structures, the demand for automated techniques for structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbors, Naive Bayes, Support Vector Machines, and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking, and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, artificial class-balancing techniques (Random Undersampling, Tomek Links, CNN, NCL, and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, where the accuracy of the classifiers is measured using the mean classification error rate on independent test sets. These means are compared pairwise by hypothesis tests, aiming to evaluate whether there is a statistically significant difference between them.
With respect to the results obtained with the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for solving the problem presented in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, their use did improve the classification error for the minority class, and in this context the NCL technique proved to be the most appropriate.
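Among the class-balancing techniques cited, Random Undersampling is the simplest: majority-class samples are discarded at random until all classes have the minority class's size. A minimal sketch, with hypothetical labels standing in for the protein structural classes:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical imbalanced labels standing in for protein classes;
# indices stand in for feature vectors.
labels = ["alpha"] * 90 + ["beta"] * 12
samples = list(range(len(labels)))

def random_undersample(samples, labels):
    """Drop majority-class samples at random until classes are balanced."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    minority_size = min(len(group) for group in by_class.values())
    kept = []
    for y, group in by_class.items():
        kept.extend((s, y) for s in random.sample(group, minority_size))
    return kept

balanced = random_undersample(samples, labels)
print(Counter(y for _, y in balanced))   # both classes now have 12 samples
```

The more elaborate techniques named in the abstract (Tomek Links, CNN, NCL, OSS) replace the random choice with distance-based rules for which majority samples to drop.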
Abstract:
This work consists of the creation of an Expert System that uses production rules to detect inadequacies in the command circuits of an operation and command system for electric motors known as Direct Start. Jointly, three other modules are developed: one for simulating the command diagram, one for simulating faults, and another for correcting defects in the diagram, with the objective of making it possible to train professionals for better qualification in operation and maintenance. The development is carried out in such a way that the structure of the task allows the system to be extended and subsequently promoted to other, larger and more complex typical systems. The LabVIEW computational environment is employed to implement the system.
Abstract:
The stability of synchronous generators connected to the power grid has been an object of study and research for years. The interest in this matter is justified by the fact that much of the electricity produced worldwide is obtained with synchronous generators. In this respect, studies have proposed conventional and unconventional control techniques, such as fuzzy logic, neural networks, and adaptive controllers, to increase the stability margin of the system during sudden failures and transient disturbances. This master's thesis presents a robust unconventional control strategy for maintaining the stability of power systems and regulating the output voltage of synchronous generators connected to the grid. The proposed control strategy comprises the integration of a sliding surface with a linear controller. This control structure is designed to prevent the power system from losing synchronism after a sudden failure and to regulate the terminal voltage of the generator after the fault. The feasibility of the proposed control strategy was experimentally tested on a 5 kVA salient-pole synchronous generator in a laboratory setup.
Abstract:
Complex network analysis is a powerful tool for the study of complex systems such as brain networks. This work aims to describe the topological changes in neural functional connectivity networks of the neocortex and hippocampus during slow-wave sleep (SWS) in animals submitted to a novel experience. Slow-wave sleep is an important sleep stage in which the electrical activity patterns of wakefulness reverberate, playing a fundamental role in memory consolidation. Despite its importance, there is a lack of studies characterizing the topological dynamics of functional connectivity networks during this sleep stage, and no studies describe the topological modifications that novelty exposure induces in these networks. We observed that several topological properties were modified after novelty exposure and that these modifications remain for a long time. Most of the changes in topological properties caused by novelty exposure are related to fault tolerance.
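A common way to probe the fault tolerance of such networks is to remove a hub (high-degree node) and measure how much of the graph stays connected. A minimal sketch on a hypothetical edge list, not the recorded neural data:

```python
from collections import defaultdict

# Hypothetical undirected functional-connectivity graph (edge list).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4),
         (4, 5), (5, 3), (3, 6), (6, 7)]

def adjacency(edges, removed=frozenset()):
    """Adjacency sets, skipping edges that touch removed nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def largest_component(adj):
    """Size of the largest connected component (iterative DFS)."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            size += 1
            stack.extend(adj[n] - seen)
        best = max(best, size)
    return best

adj = adjacency(edges)
full = largest_component(adj)                    # all 8 nodes connected
hub = max(adj, key=lambda n: len(adj[n]))        # highest-degree node
after = largest_component(adjacency(edges, removed={hub}))
print(full, hub, after)
```

Here removing the hub fragments the graph, so the largest surviving component is much smaller than the original network; tracking such quantities before and after novelty exposure is one way to quantify fault-tolerance-related topological change.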
Abstract:
The present work proposes to map and characterize the wear mechanisms of structural engineering polymers arising from sliding contact with a metallic cylindrical spindle subjected to eccentricity due to the offset between its mass and geometric centers. For this purpose, an experimental apparatus was designed and built from a balancing machine, in which the cylindrical counterbody was supported on two bearings and the polymeric specimen was placed in a holder free to move along the counterbody. The experimental tests were standardized using two bearing conditions (fixed or free) and seven different positions along the counterbody, which impose different stiffness conditions on the system. Other parameters, such as applied normal load, sliding velocity, and sliding distance, were fixed. Two structural polymers of wide everyday use, PTFE (polytetrafluoroethylene) and PEEK (poly-ether-ether-ketone), were used as specimens, with AISI 4140 alloy steel as the counterbody. The polymeric materials were characterized by thermal analysis (thermogravimetry, differential scanning calorimetry, and dynamic-mechanical analysis), hardness, and X-ray diffractometry, while the metallic material was submitted to hardness and mechanical resistance tests and metallographic analysis. During the tribological tests, the heating response was recorded with thermometers, as well as the overall vibration velocity (VGV) and the acceleration, using accelerometers. After the tests, the worn surfaces of the specimens were analyzed by Scanning Electron Microscopy (SEM) for morphological analysis and EDS spectroscopy for microanalysis. Moreover, the roughness of the counterbody was characterized before and after the tribological tests. It was observed that the tribological responses of the polymers differed as a function of their distinct molecular structures, and the predominant wear mechanisms in each polymer were identified.
The VGV of PTFE was smaller than that of PEEK under the minimum-stiffness condition, owing to the higher loss coefficient of that polymer. The wear rate of PTFE was more than an order of magnitude higher than that of PEEK. With these results, it was possible to develop a correlation between the wear rate and the parameter (E/ρ)^(1/2) (Young's modulus E, density ρ), which is proportional to the longitudinal elastic wave velocity in the material.
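As a rough illustration of the correlating parameter, (E/ρ)^(1/2) can be computed from typical handbook property values for the two polymers. The figures below are illustrative only, not the thesis's measurements:

```python
from math import sqrt

# Order-of-magnitude check of (E/rho)^(1/2), proportional to the
# longitudinal elastic wave velocity. Typical handbook values,
# not measured data: E in Pa, rho in kg/m^3.
materials = {
    "PTFE": {"E": 0.5e9, "rho": 2200.0},
    "PEEK": {"E": 3.6e9, "rho": 1300.0},
}

speeds = {name: sqrt(p["E"] / p["rho"]) for name, p in materials.items()}

for name, c in speeds.items():
    print(f"{name}: (E/rho)^0.5 = {c:.0f} m/s")
```

With these values PEEK's wave-speed parameter comes out several times larger than PTFE's, consistent with the correlation direction reported (PTFE showing the higher wear rate).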
Abstract:
Static and cyclic tests are commonly used to characterize materials in structures. Cyclic tests assess the fatigue behavior of the material, yielding the S-N curves used to construct constant-life diagrams. However, when constructed from a small number of S-N curves, these diagrams underestimate or overestimate the actual behavior of the composite, so more and more testing is needed to obtain accurate results. In this scenario, one way of reducing costs is the statistical analysis of fatigue behavior. The aim of this research was to evaluate the probabilistic fatigue behavior of composite materials. The research was conducted in three parts. The first part consists of associating the Weibull probability equation with the equations commonly used to model the S-N curve of composite materials, namely the exponential equation and the power law, and their generalizations. In the second part, the results obtained with the equation that best represents the probabilistic S-N curves were used to train a modular network at the 5% failure probability level. In the third part, a comparative study was carried out between the results obtained with the piecewise nonlinear model (PNL) and those of a modular network (MN) architecture in the analysis of fatigue behavior. For this, a database of ten materials obtained from the literature was used to assess the generalization ability of the modular network as well as its robustness. From the results, it was found that the generalized probabilistic power law best represents the probabilistic fatigue behavior of composites, and that although the MN's generalization was not robust when trained at the 5% failure level, for mean values the MN showed more accurate results than the PNL model.
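The deterministic power-law S-N model that the probabilistic versions build on can be fitted by a log-log linear regression. A minimal sketch on synthetic data; the thesis itself fits Weibull-based probabilistic variants to ten materials from the literature:

```python
import numpy as np

# Hypothetical S-N data: stress amplitude (MPa) vs cycles to failure;
# synthetic stand-in for experimental fatigue data.
S = np.array([300, 260, 230, 200, 180, 160], dtype=float)
N = np.array([1e3, 5e3, 2e4, 1e5, 4e5, 2e6])

# Power-law S-N model: S = A * N**b  <=>  log S = log A + b * log N.
b, logA = np.polyfit(np.log(N), np.log(S), 1)
A = np.exp(logA)

def cycles_at(stress):
    """Predicted life at a given stress level (inverting the model)."""
    return (stress / A) ** (1.0 / b)

print(f"S = {A:.0f} * N^{b:.3f}")
```

A probabilistic S-N curve then replaces the single fitted line with a family of curves indexed by failure probability (e.g. the 5% level used to train the modular network), each obtained from the Weibull scatter of life at fixed stress.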