432 results for Robustness
Abstract:
The study aims to identify the existence of a quality culture in Brazilian automotive dealerships certified to ISO 9001, motivated by the following research problem: is there a quality culture in these dealerships that facilitates the adoption of the quality practices supported by ISO 9001? The theoretical review covers five themes: organizational culture, quality culture, total quality management, the ISO 9001 quality management system, and the Brazilian automobile industry. Regarding methodology, the research is applied in nature, quantitative in approach, exploratory in its objectives, and uses bibliographic, documental, and survey techniques. The organizations studied were Brazilian automotive dealerships certified to ISO 9001. The research intended to cover all 80 active dealerships with ISO 9001 certification identified by the Brazilian Committee for Quality (ABNT CB-25); 32 companies participated in the survey (a 40% response rate). The questionnaire was sent to sales managers and organized into five sections: 1) introductory message, 2) manager profile, 3) reasons for implementation and benefits generated by ISO 9001, 4) adoption levels of quality practices, and 5) diagnosis of organizational culture. The questions in sections 2 and 3 were multiple choice; the remaining sections used a 5-point Likert scale. Data were analyzed with descriptive statistics, reporting frequency percentage (FP) and standard level (SL). The results showed that the surveyed dealerships have an organizational culture with very high prevalence of the "outcome orientation" and "attention to detail" cultural dimensions. The other two dimensions considered conducive to quality (innovation and teamwork/respect for people) also showed high prevalence. Based on these results, it is concluded that the organizational culture of Brazilian dealerships with ISO 9001 is quality oriented and conducive to the adoption of quality practices supported by TQM systems. However, it is important to mention that the quality culture identified is not sufficiently developed for quality practices to be adopted at optimal levels, an unfavorable scenario for dealing with highly demanding customers
Abstract:
The usual programs for load-flow calculation were, in general, developed to simulate electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms underlying those formulations were mostly based on the characteristics of transmission systems, which were the main concern of engineers and researchers. Yet the physical characteristics of transmission systems are quite different from those of distribution systems. In transmission systems the voltage levels are high and the lines are generally very long; as a result, the capacitive and inductive effects that appear in the system have a considerable influence on the quantities of interest and must be taken into account. Moreover, in transmission systems the loads have a macro nature, for example cities, neighborhoods, or large industries. These loads are generally close to balanced, which reduces the need for a three-phase load-flow methodology. Distribution systems, on the other hand, present different characteristics: the voltage levels are low compared with transmission, which practically annuls the capacitive effects of the lines. The loads in this case are transformers whose secondaries feed small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. Furthermore, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, require a three-phase methodology in order to simulate their real behavior. For these reasons, a method for three-phase load-flow calculation was first developed in the scope of this work to simulate the steady-state behavior of distribution systems. The Power Summation Algorithm was used as the basis for developing the three-phase method; this algorithm has already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly with single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases, and the earth effect is taken into account through Carson's reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used, allowing various configuration types to be simulated according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered; the loads are adjusted during the iterative process so that the current at each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow in order to support subsequent optimization processes. These parameters are obtained by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers.
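For readers unfamiliar with the Power Summation Algorithm mentioned above, the sketch below illustrates its backward/forward sweep for a single-phase radial feeder. This is only an illustration of the basic idea: the method developed in the work is three-phase, includes mutual coupling via Carson's reduction, and models regulators and measured switches. All function names, data, and per-unit values here are illustrative assumptions.

```python
# Minimal single-phase sketch of a Power Summation (backward/forward sweep)
# load flow for a radial feeder with a single main path.
import numpy as np

def power_summation(z_branch, s_load, v_source=1.0 + 0j, tol=1e-6, max_iter=50):
    """z_branch[k]: series impedance of the branch feeding node k+1 (pu).
    s_load[k]  : complex load at node k+1 (pu). Node 0 is the source."""
    n = len(s_load)
    v = np.full(n + 1, v_source, dtype=complex)          # node voltages
    for _ in range(max_iter):
        v_old = v.copy()
        # backward sweep: accumulate downstream power plus branch losses
        s_flow = np.zeros(n, dtype=complex)
        for k in range(n - 1, -1, -1):
            downstream = s_flow[k + 1] if k + 1 < n else 0
            s_k = s_load[k] + downstream                  # power leaving node k+1
            i_k = np.conj(s_k / v[k + 1])
            s_flow[k] = s_k + z_branch[k] * abs(i_k) ** 2 # add branch loss
        # forward sweep: update voltages from the source toward the feeder end
        for k in range(n):
            i_k = np.conj(s_flow[k] / v[k])
            v[k + 1] = v[k] - z_branch[k] * i_k
        if np.max(np.abs(v - v_old)) < tol:
            break
    return v

# Example: 3-branch feeder, 0.02+0.04j pu per branch, 0.1+0.05j pu per load
print(power_summation([0.02 + 0.04j] * 3, [0.1 + 0.05j] * 3))
```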
After the calculation of the sensitivity parameters is described, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented, giving insight into their performance and accuracy
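For clarity, the two objective functions described above can be written compactly as below; the symbols (set of branches B, set of nodes N, rated voltage V_nom) are notational assumptions, not necessarily the thesis' own notation:

```latex
F_{\text{loss}} = \sum_{k \in \mathcal{B}} P_{\text{loss},k},
\qquad
F_{\text{volt}} = \sum_{i \in \mathcal{N}} \left( V_i - V_{\text{nom}} \right)^2
```

where P_loss,k is the active power loss in branch k and V_i is the voltage magnitude at node i.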
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Basically, a self-adaptive scheme is used for iteratively moving the vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested with several point sets, including different shapes and sizes. The results show generated meshes very close to the objects' final shapes. We include measures of performance and discuss robustness.
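The sketch below illustrates the Kohonen-style vertex adaptation step described above: each sample point attracts its closest mesh vertex and, more weakly, that vertex's topological neighbors. Mesh refinement and the mesh operators are omitted, and the learning rates and toy data are illustrative assumptions, not the parameters of the proposed method.

```python
import numpy as np

def adapt_mesh(vertices, neighbors, points, lr=0.1, lr_nb=0.03, epochs=10):
    """vertices : (V, 3) array of mesh vertex positions.
    neighbors : list of index lists, neighbors[i] = vertices adjacent to i.
    points    : (P, 3) array of samples from the target surface."""
    v = vertices.copy()
    for _ in range(epochs):
        for p in points:
            # best matching unit: mesh vertex closest to the sample point
            bmu = np.argmin(np.linalg.norm(v - p, axis=1))
            v[bmu] += lr * (p - v[bmu])
            for nb in neighbors[bmu]:        # drag topological neighbors a little
                v[nb] += lr_nb * (p - v[nb])
    return v

# toy usage: a square of 4 vertices pulled toward noisy samples of a circle
verts = np.array([[1., 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]])
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]
ang = np.random.rand(200) * 2 * np.pi
samples = np.stack([2 * np.cos(ang), 2 * np.sin(ang), np.zeros_like(ang)], axis=1)
print(adapt_mesh(verts, nbrs, samples))
```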
Abstract:
The predictive control technique has gained, over recent years, a growing number of adepts because of the ease of tuning its parameters, the extension of its concepts to multi-input/multi-output (MIMO) systems, the fact that nonlinear process models can be linearized around an operating point and then used directly in the controller, and, mainly, because it is the only methodology that can take into account, during controller design, the limitations of the control signals and of the process output. The time-varying weighting generalized predictive control (TGPC) studied in this work is one more alternative among the several existing predictive controllers, characterized as a modification of the generalized predictive control (GPC) in which a reference model, calculated according to design parameters previously established by the designer, is used together with a new criterion function that, when minimized, provides the best parameters for the controller. Genetic algorithms are used to minimize the proposed criterion function, and the robustness of the TGPC is demonstrated through the application of performance, stability, and robustness criteria. To compare the results achieved with the TGPC controller, GPC and proportional-integral-derivative (PID) controllers are used, with all techniques applied to stable, unstable, and non-minimum-phase plants. The simulated examples are carried out using MATLAB. It is verified that the modifications implemented in the TGPC demonstrate the efficiency of this algorithm
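As background, the criterion that the GPC minimizes, and that the TGPC modifies with a reference model and time-varying weights, has the usual form below. This is a generic sketch in standard GPC notation, not the exact criterion proposed in the work:

```latex
J = \sum_{j=N_1}^{N_2} \delta(j)\,\bigl[\hat{y}(t+j \mid t) - w(t+j)\bigr]^{2}
  + \sum_{j=1}^{N_u} \lambda(j)\,\bigl[\Delta u(t+j-1)\bigr]^{2}
```

where ŷ(t+j|t) is the j-step-ahead output prediction, w the reference trajectory, Δu the control increment, N1, N2, Nu the prediction and control horizons, and δ(j), λ(j) the weighting sequences.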
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. Several approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link the auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in the choice of the correct threshold. Analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed and a new one, based on the concept of negentropy, is proposed. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes in order to obtain an expression for the probability that a point belongs to each class. Experiments with several values of Na and dt are carried out on test sets and the results are analyzed in order to study the robustness of the method and to propose heuristics for choosing the correct threshold. Throughout the work, aspects of information theory applied to the calculation of the divergences are explored, specifically the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also has an appendix presenting real applications of the proposed method
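The sketch below illustrates the cluster-linking idea described above: Na auxiliary clusters are found by a standard vector quantizer (plain k-means here), and auxiliary clusters whose centroids are closer than the threshold dt are merged by taking connected components of the linkage graph. The use of k-means and of Euclidean centroid distance are assumptions made for the sketch; the work also studies divergence-based dissimilarities.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def link_clusters(data, na, dt):
    centroids, labels = kmeans2(data, na, minit='++')
    # adjacency between auxiliary clusters whose centroids are closer than dt
    dist = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    adj = dist < dt
    # connected components by simple label propagation
    comp = np.arange(na)
    changed = True
    while changed:
        changed = False
        for i in range(na):
            for j in range(na):
                if adj[i, j] and comp[i] != comp[j]:
                    comp[i] = comp[j] = min(comp[i], comp[j])
                    changed = True
    return comp[labels]            # final class index of every data point

# Example: two well-separated groups, each covered by several auxiliary clusters
pts = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 8])
print(np.unique(link_clusters(pts, na=6, dt=4.0)))
```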
Abstract:
A 2.5D ray-tracing propagation model is proposed to predict radio loss in indoor environments. Specifically, we opted for the Shooting and Bouncing Rays (SBR) method combined with the Geometrical Theory of Diffraction (GTD). Besides line-of-sight propagation (LOS), we consider that the radio waves may undergo reflection, refraction, and diffraction (NLOS). In the SBR method, the transmitting antenna launches a bundle of rays that may or may not reach the receiver. Considering the transmitting antenna as a point, the rays are launched from this position and can reach the receiver either directly or after reflections, refractions, diffractions, or any combination of these effects. To model the environment, a database is built to record the geometrical characteristics and information on the constituent materials of the scenario. The database works independently of the simulation program, giving robustness and flexibility to model other scenarios. Each propagation mechanism is treated separately. Under line-of-sight propagation, the main contribution to the received signal comes from the direct ray, while reflected, refracted, and diffracted signals dominate when the line of sight is blocked. In this case, the transmitted signal reaches the receiver through more than one path, resulting in multipath fading. The transmission channel of a mobile system is simulated by moving either the transmitter or the receiver around the environment. The validity of the method is verified through simulations and measurements. The computed path losses are compared with values measured at 1.8 GHz. The results were obtained for the main corridor and the classrooms adjacent to it, and a reasonable agreement is observed. The numerical predictions are also compared with published data at 900 MHz and 2.44 GHz, showing good convergence
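As a generic illustration of how ray-tracing models of this kind combine the ray contributions (a textbook-style sketch, not the exact formulation of this work), the field at the receiver can be written as a coherent sum over the traced rays:

```latex
E_{\mathrm{rx}} = \sum_{i=1}^{N_{\mathrm{rays}}}
E_{0}\,\frac{e^{-jk d_{i}}}{d_{i}}
\prod_{m} R_{m}\prod_{n} T_{n}\prod_{p} D_{p}
```

where d_i is the unfolded path length of ray i, k the wavenumber, and R_m, T_n, D_p the reflection, transmission (refraction), and diffraction coefficients encountered along that ray.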
Abstract:
This work presents the performance analysis of traffic retransmission algorithms proposed for the HCCA medium access mechanism of the IEEE 802.11e standard applied to industrial environments. The nature of this kind of environment, which suffers from electromagnetic interference, together with the wireless medium of the IEEE 802.11 standard, susceptible to such interference, and the lack of retransmission mechanisms, makes it impracticable to guarantee quality of service for the real-time traffic that the IEEE 802.11e standard targets and that this environment requires. To solve this problem, this work proposes a new approach involving the creation and evaluation of retransmission algorithms in order to ensure a level of robustness, reliability, and quality of service for wireless communication in such environments. According to this approach, if a transmission error occurs, the traffic scheduler is able to manage retransmissions to recover the lost data. The proposed approach is evaluated through simulations in which the retransmission algorithms are applied to different scenarios, which are abstractions of an industrial environment; the results are obtained using a network simulator developed in-house and compared with each other to assess which of the algorithms has the better performance in a pre-defined application
Abstract:
Since equipment maintenance is the major cost factor in industrial plants, the development of fault-prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because of their low cost and great robustness; nevertheless, they are not protected from fault types such as shorted windings and broken rotor bars. Several acquisition, processing, and signal-analysis approaches are applied to improve their diagnosis, the most efficient techniques using current sensors and their signature analysis. In this dissertation, starting from these sensors, the signals are analyzed through Park's vector, which provides a good visualization capability. Because acquiring fault data is an arduous task, a methodology for database construction is developed: Park's transform in the stationary reference frame is applied to the machine model in order to solve the machine's differential equations. Fault detection requires a detailed analysis of the variables and their influences, which makes diagnosis more complex. Pattern-recognition tasks allow classification systems to be generated automatically, based on patterns and concepts in the data that in most cases are undetectable by specialists, supporting decision making. Classification algorithms with diverse learning paradigms, namely k-nearest neighbors, neural networks, decision trees, and naïve Bayes, are used for pattern recognition of the machine faults. Multi-classifier systems are used to reduce classification errors; the homogeneous algorithms Bagging and Boosting and the heterogeneous algorithms Vote, Stacking, and StackingC are examined. The results show the effectiveness of the constructed model for fault modeling, as well as the possibility of using multi-classifier algorithms for fault classification
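For reference, Park's vector used in this kind of current-signature analysis is commonly computed from the three stator currents as below; this is the standard textbook form, and the dissertation may use a different scaling:

```latex
i_{D} = \sqrt{\tfrac{2}{3}}\, i_{a} - \tfrac{1}{\sqrt{6}}\, i_{b} - \tfrac{1}{\sqrt{6}}\, i_{c},
\qquad
i_{Q} = \tfrac{1}{\sqrt{2}}\, i_{b} - \tfrac{1}{\sqrt{2}}\, i_{c}
```

For a healthy, balanced machine the locus of (i_D, i_Q) is approximately a circle centered at the origin; winding and rotor-bar faults distort this pattern, which is what the classifiers are trained to recognize.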
Abstract:
Embedded systems are widespread nowadays; an example is the Digital Signal Processor (DSP), a device with high processing power. The contribution of this work consists of implementing on a DSP the system logic for detecting leaks in real time. Among the various leak-detection methods available today, this work uses a technique based on the analysis of the pipeline pressure, employing the Wavelet Transform and Neural Networks. In this context, the DSP, in addition to performing the digital processing of the pressure signal, also communicates with a Global Positioning System (GPS), which helps to locate the leak, and with a SCADA system, sharing information. To ensure robustness and reliability in the communication between the DSP and the SCADA system, the Modbus protocol is used. As this is a real-time application, special attention is given to the response time of each of the tasks performed by the DSP. Tests and leak simulations were performed using the structure of the Laboratory of Evaluation of Measurement in Oil (LAMP) at the Federal University of Rio Grande do Norte (UFRN)
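The sketch below illustrates, off the DSP, the kind of processing chain described above: a discrete wavelet decomposition of a pressure window produces features (sub-band energies) that a small neural network classifies as leak or no leak. The wavelet family, decomposition depth, network shape, and toy data are illustrative assumptions, not the parameters of the embedded implementation.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(pressure_window, wavelet="db4", level=4):
    # energies of the approximation and detail sub-bands of the window
    coeffs = pywt.wavedec(pressure_window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# toy training data: one feature row and one label per pressure window
X = np.vstack([wavelet_features(np.random.randn(256)) for _ in range(40)])
y = np.random.randint(0, 2, size=40)          # 1 = leak, 0 = normal operation
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500).fit(X, y)
print(clf.predict(X[:5]))
```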
Abstract:
This work describes the design, implementation, and deployment of a system for industrial process control based on fuzzy logic and developed in Java, with support for industrial communication through the OPC (OLE for Process Control) protocol. Apart from the Java framework, the software is completely independent of other platforms. It provides friendly and functional tools for modeling, building, and editing complex fuzzy inference systems, and uses these systems to control a wide variety of industrial processes. The main requirements of the developed system are flexibility, robustness, reliability, and ease of expansion
Abstract:
Methods for the compensation of harmonic currents and voltages have been widely used because they reduce the harmonic distortion of the voltages or currents in a power system to acceptable levels and also compensate reactive power. The reduction of harmonics and reactive power contributes to lower losses in transmission lines and electrical machines, increases the power factor, and reduces the occurrence of overvoltages and overcurrents. The active power filter is the most efficient method for the compensation of harmonic currents and voltages, and it requires current and voltage control loops. Conventionally, the current and voltage control loops of active filters have been implemented with proportional-integral controllers. This work investigates the use of a robust adaptive control technique in the current and voltage control loops of a shunt active power filter, in order to increase robustness and improve the performance of the filter in compensating harmonics. The proposed control scheme is based on a combination of adaptive pole-placement and variable-structure control techniques. The advantages of the proposed method over conventional ones are lower total harmonic distortion and greater flexibility, adaptability, and robustness of the system. Moreover, the proposed control scheme improves the performance and the transient response of the active filter. The proposed technique was validated first with a simulation program implemented in C++ and then with experimental results obtained on a 1 kVA three-phase active filter prototype
Abstract:
Conventional control strategies used in shunt active power filters (SAPF) employ real-time instantaneous harmonic detection schemes, usually implemented with digital filters. This increases the number of current sensors in the filter structure, which results in high costs. Furthermore, these detection schemes introduce time delays that can deteriorate the harmonic-compensation performance. Differently from the conventional control schemes, this work proposes a non-standard control strategy that indirectly regulates the phase currents of the power mains. The system reference currents are generated by the dc-link voltage controller and are based on the active-power balance of the SAPF system. The reference currents are aligned with the phase angle of the mains voltage vector, which is obtained by a dq phase-locked loop (PLL). The current control strategy is implemented by an adaptive pole-placement control integrated with a variable-structure control scheme (VS-APPC). In the VS-APPC, the internal model principle (IMP) of the reference currents is used to achieve zero steady-state tracking error of the power-system currents, forcing the phase currents of the mains to be sinusoidal with low harmonic content. Moreover, the current controllers are implemented in the stationary reference frame to avoid transformations to the mains voltage vector reference coordinates. The proposed current control strategy enhances the performance of the SAPF with a fast transient response and robustness to parametric uncertainties. Experimental results are shown to demonstrate the effectiveness of the proposed SAPF control system
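The sketch below shows a synchronous-reference-frame (dq) PLL of the general kind the text uses to obtain the mains voltage angle: the measured voltages are transformed to the dq frame with the estimated angle, and a PI regulator drives the q component to zero, locking the angle to the voltage vector. The gains, sample time, and signal source are illustrative assumptions, not those of the experimental setup.

```python
import numpy as np

def srf_pll(v_abc, ts=1e-4, kp=100.0, ki=5000.0, w0=2 * np.pi * 60):
    theta, w_int = 0.0, 0.0
    angles = []
    for va, vb, vc in v_abc:
        # abc -> alpha/beta (Clarke), then rotate by the estimated angle
        v_alpha = (2 * va - vb - vc) / 3
        v_beta = (vb - vc) / np.sqrt(3)
        v_q = -v_alpha * np.sin(theta) + v_beta * np.cos(theta)
        # PI on v_q drives it to zero, aligning theta with the voltage vector
        w_int += ki * v_q * ts
        w = w0 + kp * v_q + w_int
        theta = (theta + w * ts) % (2 * np.pi)
        angles.append(theta)
    return np.array(angles)

# Example: one second of ideal 60 Hz three-phase voltages
t = np.arange(0, 1, 1e-4)
v_abc = np.stack([np.cos(2 * np.pi * 60 * t + p)
                  for p in (0, -2 * np.pi / 3, 2 * np.pi / 3)], axis=1)
theta = srf_pll(v_abc)
```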
Abstract:
In this work an indirect approach to the Dual-Mode Adaptive Robust Controller (DMARC) is proposed, combining the typical transient and robustness properties of Variable Structure Systems, more specifically of the Variable Structure Model Reference Adaptive Controller (VS-MRAC), with a smooth control signal in steady state, typical of conventional adaptive controllers such as the Model Reference Adaptive Controller (MRAC). The goal is to provide a more intuitive controller design, based on physical plant parameters such as resistances, moments of inertia, and capacitances. Furthermore, following the evolutionary line of direct controllers, an indirect version of the Binary Model Reference Adaptive Controller (B-MRAC) is proposed; the B-MRAC was the first controller attempting to act both as the MRAC and as the VS-MRAC, depending on a pre-defined fixed parameter
Abstract:
This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the Cartesian 3D position and orientation (pose) of the robot arm from an image of the robot obtained through a camera. A colored triangular label is placed on the robot manipulator's tool, and efficient heuristic rules are used to obtain the vertices of that label in the image; the tool pose is then obtained from those vertices through numerical methods. A color calibration scheme based on the k-means algorithm was implemented to guarantee the robustness of the vision system in the presence of lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose Cartesian 3D coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from images, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired tool pose and the current tool pose. Gains are applied to the error signal and the result is mapped into joint increments using the pseudoinverse of the manipulator's Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose
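The sketch below illustrates the control law described above: the pose error is scaled by gains and mapped to joint increments through the pseudoinverse of the manipulator Jacobian. The Jacobian values, gain values, and the plain subtraction used for the orientation error are illustrative assumptions made for the sketch.

```python
import numpy as np

def joint_increment(jacobian, pose_des, pose_cur, gains):
    """jacobian : (6, 5) manipulator Jacobian at the current configuration.
    pose_des, pose_cur : 6-vectors (position + orientation) of the tool.
    gains     : 6-vector of proportional gains applied to the pose error."""
    error = pose_des - pose_cur
    # map the scaled task-space error to the 5 joint increments
    return np.linalg.pinv(jacobian) @ (gains * error)

# toy usage with a random Jacobian standing in for a 5-DOF arm
J = np.random.randn(6, 5)
dq = joint_increment(J, np.ones(6), np.zeros(6), gains=np.full(6, 0.1))
print(dq)
```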