846 results for Engineering, Electrical
Abstract:
Safety Instrumented Systems (SIS) are designed to prevent and/or mitigate accidents, avoiding undesirable high-potential-risk scenarios, assuring protection of people's health, protecting the environment, and avoiding costly damage to industrial equipment. The design of these systems requires formal methods for ensuring the safety requirements, but the material published in this area does not identify a consolidated procedure for this task. In this sense, this article introduces a formal method for diagnosis and treatment of critical faults based on Bayesian networks (BN) and Petri nets (PN). This approach considers diagnosis and treatment for each safety instrumented function (SIF), including a hazard and operability (HAZOP) study of the equipment or system under control. It uses BN and Behavioral Petri nets (BPN) for diagnosis and decision-making, and PN for the synthesis, modeling, and control to be implemented by a safety programmable logic controller (PLC). An application example considering the diagnosis and treatment of critical faults is presented and illustrates the proposed methodology.
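The Bayesian update at the core of such a BN-based diagnosis step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two-node network (one fault, one alarm) and all probabilities below are invented for the example.

```python
# Minimal sketch: a single Bayes-rule update for diagnosing a hypothetical
# SIF fault from one binary evidence node (an alarm). All numbers are
# illustrative assumptions, not taken from the article.

def posterior_fault(prior_fault, p_alarm_given_fault, p_alarm_given_ok):
    """P(fault | alarm) via Bayes' rule for one binary evidence node."""
    p_alarm = (p_alarm_given_fault * prior_fault
               + p_alarm_given_ok * (1.0 - prior_fault))
    return p_alarm_given_fault * prior_fault / p_alarm

# Example: a rare fault, a sensitive alarm, and some false positives.
p = posterior_fault(prior_fault=0.01,
                    p_alarm_given_fault=0.95,
                    p_alarm_given_ok=0.05)
```

Even with a sensitive alarm, the rare prior keeps the posterior fault probability modest, which is why a full BN over several evidence nodes (as in the article) is needed before committing to a treatment action.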
Abstract:
Flow pumps have been developed for classical applications in engineering and are important instruments in areas such as biology and medicine. Applications for this kind of device include blood pumps and dosage of chemical reagents in bioengineering. Furthermore, they have recently emerged as a viable thermal-management solution for cooling small-scale electronic devices. This work presents a performance study of a novel piezoelectric flow pump principle based on the use of a bimorph piezoelectric actuator inserted in a fluid (water). Piezoelectric actuators have some advantages over classical devices, such as lower noise generation and ease of miniaturization. The main objective is the characterization of this piezoelectric pump principle through computational simulations (using finite element software) and experimental tests on a manufactured prototype. Computational data, such as flow rate and pressure curves, have also been compared with experimental results for validation purposes. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This work presents the implementation of the ultrasonic shear reflectance method for viscosity measurement of Newtonian liquids using wave mode conversion from longitudinal to shear waves and vice versa. The method is based on the measurement of the complex reflection coefficient (magnitude and phase) at a solid-liquid interface. The implemented measurement cell is composed of an ultrasonic transducer, a water buffer, an aluminum prism, a PMMA buffer rod, and a sample chamber. Viscosity measurements were made in the range from 1 to 3.5 MHz for olive oil and for automotive oils (SAE 40, 90, and 250) at 15 and 22.5 degrees C, respectively. Moreover, olive oil and corn oil measurements were conducted in the range from 15 to 30 degrees C at 3.5 and 2.25 MHz, respectively. The ultrasonic measurements, in the case of the less viscous liquids, agree with the results provided by a rotational viscometer, showing Newtonian behavior. In the case of the more viscous liquids, a significant difference was obtained, showing a clear non-Newtonian behavior that cannot be described by the Kelvin-Voigt model.
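The viscosity extraction behind the shear-reflectance method can be sketched as follows, under textbook assumptions: the complex reflection coefficient at the solid-liquid interface gives the liquid's shear impedance, and for a Newtonian liquid Z² = iωρη. The sign convention for r and all numeric values below are assumptions for illustration, not the article's calibration.

```python
import cmath
import math

def shear_viscosity(r_mag, r_phase_rad, z_solid, freq_hz, rho_liquid):
    """Newtonian viscosity from the complex shear reflection coefficient
    measured at a solid-liquid interface.

    Assumes the convention r = (Z_liq - Z_sol) / (Z_liq + Z_sol) for a
    wave incident from the solid side, so Z_liq = Z_sol*(1+r)/(1-r).
    """
    r = r_mag * cmath.exp(1j * r_phase_rad)
    z_liq = z_solid * (1 + r) / (1 - r)   # shear impedance of the liquid
    omega = 2 * math.pi * freq_hz
    # Newtonian model: Z^2 = i*omega*rho*eta  =>  eta = |Z|^2 / (omega*rho)
    return abs(z_liq) ** 2 / (omega * rho_liquid)
```

For a non-Newtonian (Kelvin-Voigt) liquid the real and imaginary parts of Z² would no longer collapse to a single viscosity, which is the discrepancy the abstract reports for the more viscous oils.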
Abstract:
Systems of distributed artificial intelligence can be powerful tools in a wide variety of practical applications. Their most striking characteristic, emergent behavior, is also the one most responsible for the difficulty in designing these systems. This work proposes a tool capable of generating individual strategies for the elements of a multi-agent system, thereby providing the group with means of obtaining the desired results while working in a coordinated and cooperative manner. As an application example, a problem was taken as a basis in which a group of predators must catch a prey in a three-dimensional continuous environment. A synthesis of system strategies was implemented whose internal mechanism involves the integration of simulators through the Particle Swarm Optimization (PSO) algorithm, a swarm intelligence technique. The system was tested in several simulation settings and was able to automatically synthesize successful hunting strategies, substantiating that the developed tool can provide, as long as it works with well-elaborated patterns, satisfactory solutions to problems of a complex nature that are difficult to solve through analytical approaches. (c) 2007 Elsevier Ltd. All rights reserved.
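The PSO engine that drives such a strategy synthesis can be sketched as a plain global-best PSO. This is a generic stand-in, not the authors' simulator integration; all hyperparameters and the test function are assumptions.

```python
import random

def pso_minimize(cost, dim, iters=200, swarm=20, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Plain global-best PSO: each particle is pulled toward its own best
    position and the swarm's best position, with inertia weight w."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Sphere function as a stand-in fitness; the real fitness would come
# from simulating a hunting strategy.
best, best_cost = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In the article's setting, `cost` would be replaced by a simulation run scoring how well a candidate predator strategy captures the prey.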
Abstract:
Simulated annealing (SA) is an optimization technique that can process cost functions with arbitrary degrees of nonlinearity, discontinuity, and stochasticity, as well as arbitrary boundary conditions and constraints imposed on these cost functions. Here the SA technique is applied to the problem of robot path planning. Three representations of the path are considered: a polyline, a Bezier curve, and a spline-interpolated curve. In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate. (C) 2010 Elsevier Ltd. All rights reserved.
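The underlying SA loop can be sketched as the textbook Metropolis scheme with geometric cooling. This is a simplified stand-in for the sensitivity-adaptive variant described above: here the step size is fixed, whereas the article adapts each parameter's proposal distribution; the step size, cooling rate, and test objective are assumptions.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, alpha=0.995,
                        iters=2000, seed=0):
    """Textbook SA: Gaussian proposals, Metropolis acceptance,
    geometric cooling t <- alpha*t; tracks the best point seen."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        # Accept downhill moves always; uphill moves with Boltzmann prob.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= alpha
    return best, fbest

# Toy objective standing in for a path cost (e.g. length + obstacle
# penalty over polyline vertices); optimum at (1, 1).
best, fbest = simulated_annealing(lambda x: sum((v - 1.0) ** 2 for v in x),
                                  [4.0, -3.0])
```

In the path-planning setting, `x` would hold the free coordinates of the polyline vertices or curve control points, and `cost` would combine path length with collision penalties.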
Abstract:
Twelve samples with different grain sizes were prepared by normal grain growth and by primary recrystallization, and the hysteresis dissipated energy was measured by a quasi-static method. Results showed a linear relation between hysteresis energy loss and the inverse of grain size, here called Mager's law, for maximum inductions from 0.6 to 1.5 T, and a Steinmetz power-law relation between hysteresis loss and maximum induction for all samples. The combined effect is better described by a Mager's law whose coefficients follow the Steinmetz law.
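The combined law described above can be written as W_h(B_max, g) = (a + b/g) · B_max^n, i.e. a Mager-type 1/g dependence whose coefficients scale with the Steinmetz power of peak induction. The coefficients below are illustrative placeholders, not fitted values from the paper:

```python
# Combined Mager-Steinmetz hysteresis-loss law; a, b, n are illustrative
# stand-ins for fitted material constants (units arbitrary).

def hysteresis_loss(b_max, grain_size, a=10.0, b=500.0, n=1.7):
    """Hysteresis loss per cycle: linear in 1/grain_size (Mager) and a
    power law in peak induction b_max (Steinmetz)."""
    return (a + b / grain_size) * b_max ** n
```

A consequence of this form is that the ratio of losses at two inductions is (B1/B2)^n independently of grain size, while at fixed induction the loss grows linearly with 1/g.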
Abstract:
To explain the magnetic behavior of plastically deformed thin magnetic films (Fe and permalloy) on an elastic substrate (nitinol), it is noted that, unlike in the bulk, the dislocation density does not increase dramatically, because of the dimensional constraint. As a result, the residual stress, even though strain hardening is limited, dominates the observed magnetic behavior. Thus, with the field parallel to the stress axis, the compressive residual stress resulting from plastic deformation causes a decrease in remanence and an increase in coercivity; with the field perpendicular to the stress axis, it causes an increase in remanence and a decrease in coercivity. These elements have been inserted into the model previously developed for plastic deformation in the bulk, producing the aforementioned behavior, which has been observed experimentally in the films.
Abstract:
Before one models the effect of plastic deformation on magnetoacoustic emission (MAE), one must first treat non-180-degree domain wall motion. In this paper, we take the Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model and modify it to treat non-180-degree wall motion. We then insert a modified stress-dependent Jiles-Atherton model, which treats plastic deformation, into the modified ABBM model to treat MAE and magnetic Barkhausen noise (MBN). In fitting the dependence of these quantities on plastic deformation, we apply a model for the deformation stage in which dislocation tangles are formed, noting two chief effects: one due to the increased density of emission centers owing to increased dislocation density, and the other due to a more gentle increase in the residual stress in the vicinity of the dislocation tangles as deformation increases.
Abstract:
This paper presents the results of the in-depth study of the Barkhausen effect signal properties for the plastically deformed Fe-2%Si samples. The investigated samples have been deformed by cold rolling up to plastic strain epsilon(p) = 8%. The first approach consisted of time-domain-resolved pulse and frequency analysis of the Barkhausen noise signals whereas the complementary study consisted of the time-resolved pulse count analysis as well as a total pulse count. The latter included determination of time distribution of pulses for different threshold voltage levels as well as the total pulse count as a function of both the amplitude and the duration time of the pulses. The obtained results suggest that the observed increase in the Barkhausen noise signal intensity as a function of deformation level is mainly due to the increase in the number of bigger pulses.
Abstract:
Ti-6Al-4V thin films were grown by magnetron sputtering on a conventional austenitic stainless steel. Five deposition conditions, varying both the deposition-chamber pressure and the plasma power, were studied. Highly textured thin films were obtained, their crystallite size (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
The main scope of this work is the implementation of an MPC that integrates the control and the economic optimization of the system. The two problems are solved simultaneously through a modification of the control cost function that includes an additional term related to the economic objective. The optimizing MPC is based on a quadratic program (QP), as is the conventional MPC, and can be solved with the available QP solvers. The method was implemented in an industrial distillation system, and the results show that the approach is efficient and can be used in several practical cases. (C) 2011 Elsevier Ltd. All rights reserved.
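The idea of augmenting the control cost with an economic term can be illustrated under strong simplifying assumptions (single-input single-output, one-step horizon, no constraints, linear economic cost), in which case the combined quadratic has a closed-form minimizer. The gain and prices are invented; a real optimizing MPC would solve a constrained multivariable QP over a horizon.

```python
# One-step, unconstrained sketch of an economics-aware MPC move.
# Objective in the input increment du:
#   J(du) = (y0 + gain*du - y_sp)^2 + r*du^2 + c_econ*du
# Setting dJ/d(du) = 0 gives the closed-form move below.

def optimizing_mpc_step(y0, y_sp, gain, r=0.1, c_econ=0.0):
    """Optimal input increment trading off tracking error, move
    suppression (r), and a linear economic price c_econ on the input."""
    return (gain * (y_sp - y0) - 0.5 * c_econ) / (gain ** 2 + r)
```

With c_econ = 0 this reduces to the plain least-squares control move; a positive input price biases the move toward cheaper operation, which is exactly the trade-off the modified cost function encodes.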
Abstract:
A model predictive controller (MPC) is proposed that is robustly stable for some classes of model uncertainty and for unknown disturbances. The case considered is that of open-loop stable systems, where only the inputs and controlled outputs are measured. It is assumed that the controller will work in a scenario where target tracking is also required. The nominal infinite-horizon MPC is extended here to output feedback. The method considers an extended cost function that can be made globally convergent for any finite input horizon considered for the uncertain system, and is based on the explicit inclusion of cost-contracting constraints in the control problem. The controller handles the output-feedback case through a non-minimal state-space model built from past output measurements and past input increments. The application of the robust output-feedback MPC is illustrated through the simulation of a low-order multivariable system.
Abstract:
This work presents an alternative way to formulate the stable Model Predictive Control (MPC) optimization problem that allows the enlargement of the domain of attraction while preserving controller performance. Based on the dual MPC that uses the null local controller, the inclusion of an appropriate set of slacked terminal constraints in the control problem is proposed. As a result, the domain of attraction is unlimited for the stable modes of the system and the largest possible for the non-stable modes. Although this controller does not achieve local optimality, simulations show that the input and output performances may be comparable to those obtained with the dual MPC that uses the LQR as a local controller. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones instead of at fixed set points. One way to implement zone control is through the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement stable zone control is through an infinite-horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to assure its recursive feasibility. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is designed to maintain the outputs within their corresponding feasible zones while reaching the desired optimal input target. Simulation of a process from the oil refining industry illustrates the performance of the proposed strategy.
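The "set point as a decision variable" idea above has a simple geometric reading: for a quadratic output penalty, the optimal in-zone set point is just the predicted output projected onto the zone. The sketch below shows only that projection step, not the full robust multi-model optimization; zone bounds are made up.

```python
# Zone control, reduced to its core step: choose the set point inside
# the zone [zone_lo, zone_hi] that minimizes (y_pred - setpoint)^2.
# For a quadratic penalty this is the clip/projection below.

def zone_setpoint(y_pred, zone_lo, zone_hi):
    """Optimal in-zone set point: projection of the predicted output
    onto the zone. Zero tracking error whenever y_pred is in the zone."""
    return min(max(y_pred, zone_lo), zone_hi)
```

When the predicted output already lies in the zone, the tracking term vanishes and the controller is free to pursue the input target; the slack variable mentioned in the abstract handles the cases where even the projected set point cannot be reached.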
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input-space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test-case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed, along with a second tool that generates functional coverage models fitting exactly the PD-based input space. Both enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation-time reduction when generating stimuli with the PD-based stimuli sources (still with a conventional coverage model), and a 56% reduction when combining the stimuli sources with their corresponding, automatically generated, coverage models.
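The core idea of matching the stimulus domain to the coverage model can be sketched as follows. The transaction fields, constraints, and coverage bins below are invented for illustration and are not the Parameter Domains tool itself: the point is that invalid scenarios are excluded from the generator's domain up front, and the coverage model spans exactly the same domain, so no simulation time is spent on unreachable or irrelevant bins.

```python
import random

def gen_stimuli(n, seed=0):
    """Constrained-random stimuli: only valid (op, burst_len) pairs are
    ever generated, rather than generated freely and filtered later."""
    rng = random.Random(seed)
    valid_ops = ["read", "write"]      # constraint: no reserved opcodes
    valid_lens = [1, 2, 4, 8]          # constraint: power-of-two bursts
    return [(rng.choice(valid_ops), rng.choice(valid_lens))
            for _ in range(n)]

def coverage(stimuli):
    """Functional coverage over the same (op, len) cross domain the
    generator draws from, so every bin is reachable by construction."""
    bins = {(op, ln) for op in ("read", "write") for ln in (1, 2, 4, 8)}
    return len(set(stimuli) & bins) / len(bins)
```

Because generator and coverage model share one domain description, full coverage is reached quickly; with a coverage model wider than the generator's domain, some bins could never be hit, which is the mismatch the PD-based approach eliminates.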