29 results for Component-based systems
Abstract:
The power transformer is a piece of electrical equipment that requires continuous monitoring and fast protection, since it is expensive and essential for a power system to perform effectively. The most common protection technique is percentage differential logic, which discriminates between internal faults and other operating conditions. Unfortunately, some operating conditions of power transformers can affect the protection behavior and the power system stability. This paper proposes a new algorithm to improve differential protection performance by using fuzzy logic and Clarke's transform. An electrical power system was modeled in the Alternative Transients Program (ATP) software to obtain the operating conditions and fault situations needed to test the developed algorithm. The results were compared to a commercial relay for validation, showing the advantages of the new method.
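For readers unfamiliar with the two building blocks named above, the sketch below shows an amplitude-invariant Clarke (alpha-beta-zero) transform and a classic percentage differential trip criterion. The slope and pickup values are illustrative placeholders, and the paper's fuzzy decision stage is not reproduced here.

```python
import numpy as np

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke (alpha-beta-zero) transform of phase currents."""
    alpha = (2 * ia - ib - ic) / 3.0
    beta = (ib - ic) / np.sqrt(3.0)
    zero = (ia + ib + ic) / 3.0
    return alpha, beta, zero

def percentage_differential(i_pri, i_sec, slope=0.3, pickup=0.2):
    """Classic percentage differential criterion on ratio-matched currents (pu).

    Trips when the operating (differential) current exceeds a slope-weighted
    restraint current plus a minimum pickup.
    """
    i_diff = np.abs(i_pri + i_sec)                  # operating quantity
    i_rest = (np.abs(i_pri) + np.abs(i_sec)) / 2.0  # restraint quantity
    return i_diff > slope * i_rest + pickup
```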
Abstract:
Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids; their distortions can be represented as a combination of the fundamental frequency, harmonics, and high-frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, an intelligent algorithm specially designed for optimization problems, was successfully implemented and tested. Two chromosome representations are used: binary and real. The results show that the proposed method is more precise than the traditional Fourier Transform, especially with the real representation of the chromosomes.
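As an illustration of the idea (not of GOOAL itself, whose operators and encoding are not detailed in the abstract), the following real-coded genetic algorithm estimates the amplitude and phase of a few harmonic orders by minimizing the squared reconstruction error; all population sizes and rates are arbitrary choices.

```python
import numpy as np
rng = np.random.default_rng(0)

F0, FS, N = 60.0, 3840.0, 128   # fundamental frequency, sampling rate, samples
t = np.arange(N) / FS
ORDERS = (1, 3, 5)              # harmonic orders to estimate

def synth(params):
    """Reconstruct a waveform from [amplitude, phase] pairs, one per order."""
    x = np.zeros_like(t)
    for (a, ph), h in zip(params.reshape(-1, 2), ORDERS):
        x += a * np.sin(2 * np.pi * h * F0 * t + ph)
    return x

def fitness(pop, target):
    return np.array([-np.mean((synth(p) - target) ** 2) for p in pop])

def ga_estimate(target, pop_size=80, gens=200, sigma=0.05):
    pop = rng.uniform(-1, 1, (pop_size, 2 * len(ORDERS)))
    for _ in range(gens):
        idx = np.argsort(fitness(pop, target))[::-1]
        elite = pop[idx[:pop_size // 2]]                  # truncation selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        w = rng.random((pop_size, 1))
        pop = w * parents[:, 0] + (1 - w) * parents[:, 1] # arithmetic crossover
        pop += rng.normal(0, sigma, pop.shape)            # Gaussian mutation
        pop[0] = elite[0]                                 # elitism keeps the best
    return pop[0].reshape(-1, 2)

# distorted test waveform: fundamental plus 3rd and 5th harmonics
target = synth(np.array([1.0, 0.1, 0.3, -0.4, 0.1, 0.8]))
print(ga_estimate(target))
```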
Abstract:
Fault resistance is a critical component of electric power system operation due to its stochastic nature. If not considered, this parameter may interfere with fault analysis studies. This paper presents an iterative fault analysis algorithm for unbalanced three-phase distribution systems that incorporates a fault resistance estimate. The proposed algorithm is composed of two sub-routines: fault resistance and bus impedance. The fault resistance sub-routine estimates the fault resistance from local fault records. The bus impedance sub-routine, based on the previously estimated fault resistance, estimates the system voltages and currents. Numeric simulations on the IEEE 37-bus distribution system demonstrate the algorithm's robustness and potential for offline applications, providing additional fault information to Distribution Operation Centers and enhancing the system restoration process. (C) 2011 Elsevier Ltd. All rights reserved.
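The alternating structure can be pictured with a deliberately simplified single-phase sketch: local fault records update the resistance estimate, and the bus impedance matrix then refreshes the network state. The 3-bus matrix and the "records" below are hypothetical numbers, not the IEEE 37-bus data.

```python
import numpy as np

# hypothetical positive-sequence bus impedance matrix (pu) of a 3-bus feeder
Zbus = np.array([[0.02 + 0.06j, 0.01 + 0.03j, 0.01 + 0.02j],
                 [0.01 + 0.03j, 0.03 + 0.08j, 0.02 + 0.05j],
                 [0.01 + 0.02j, 0.02 + 0.05j, 0.04 + 0.10j]])
V_pre = np.ones(3, dtype=complex)        # flat prefault voltage profile
k = 2                                    # faulted bus
V_rec, I_rec = 0.35 + 0.02j, 3.1 - 2.2j  # hypothetical local fault records (pu)

rf, i_f = 0.0, 0.0
for _ in range(20):
    # fault-resistance sub-routine: resistance estimate from the local records
    rf_new = (V_rec / I_rec).real
    # bus-impedance sub-routine: refresh fault current and bus voltages
    i_f = V_pre[k] / (Zbus[k, k] + rf_new)
    V = V_pre - Zbus[:, k] * i_f
    if abs(rf_new - rf) < 1e-6:   # in the full unbalanced algorithm the two
        break                     # estimates feed each other, which is what
    rf = rf_new                   # makes the iteration genuinely necessary

print(f"Rf = {rf:.4f} pu, |If| = {abs(i_f):.3f} pu, min|V| = {abs(V).min():.3f} pu")
```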
Abstract:
In this study, further improvements to the fault location problem for power distribution systems are presented. The improvements concern the consideration of the capacitive effect in impedance-based fault location methods, achieved by adopting an exact line segment model for the distribution line. The proposed developments consist of a new formulation for the fault location problem and a new algorithm that considers the line shunt admittance matrix. The equations are developed for any fault type and result in a single equation for all ground fault types and another for line-to-line faults. Results obtained with the proposed improvements are presented. To assess their performance and demonstrate how the line shunt admittance affects state-of-the-art impedance-based fault location methodologies for distribution systems, results obtained with two existing methods are also presented. The comparison shows that, in overhead distribution systems with laterals and intermediate loads, the line shunt admittance can significantly affect the response of state-of-the-art methodologies, whereas the proposed developments achieve substantial improvements by accounting for this effect.
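The shunt-admittance effect can be seen in miniature with a nominal-pi segment model (a lumped stand-in for the exact distributed-parameter model the paper adopts): the charging current drawn by Y is removed before the series voltage drop is computed. The function is a generic three-phase sketch, not the paper's formulation.

```python
import numpy as np

def propagate_segment(Vs, Is, Z, Y):
    """Carry phase voltages/currents across one line segment (nominal-pi model),
    so the shunt charging current is not mistaken for load or fault current.

    Vs, Is : complex 3-vectors at the sending end
    Z, Y   : 3x3 series impedance and shunt admittance matrices of the segment
    """
    i_series = Is - 0.5 * (Y @ Vs)   # strip the sending-end charging current
    Vr = Vs - Z @ i_series           # drop across the series branch
    Ir = i_series - 0.5 * (Y @ Vr)   # strip the receiving-end charging current
    return Vr, Ir
```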
Abstract:
Distributed control systems consist of sensors, actuators, and controllers interconnected by communication networks, and are characterized by a large number of concurrent processes. This work proposes a procedure to model and analyze communication networks for distributed control systems in intelligent buildings. The approach characterizes the control system as a discrete event system and applies coloured Petri nets as a formal method for the specification, analysis, and verification of control solutions. With this approach, we develop the models that compose the communication networks for the control systems of intelligent buildings, taking into account the relationships between the various building systems. The procedure provides a structured development of models, facilitating the specification of the control algorithm. An application example illustrates the main features of the approach.
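To make the formal ingredients concrete, here is a minimal coloured-Petri-net-style interpreter: places hold data-carrying tokens, and a transition fires when every input place offers a token satisfying its guard. The building-automation scenario is a made-up toy; real CPN tools are far richer.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Token = dict  # coloured tokens: data-carrying dictionaries

@dataclass
class Transition:
    name: str
    inputs: List[str]               # input place names
    outputs: List[str]              # output place names
    guard: Callable[[Token], bool]  # colour condition on consumed tokens

def enabled(marking: Dict[str, List[Token]], t: Transition) -> bool:
    """Enabled when every input place holds a token satisfying the guard."""
    return all(any(t.guard(tok) for tok in marking.get(p, [])) for p in t.inputs)

def fire(marking: Dict[str, List[Token]], t: Transition) -> None:
    """Consume one matching token per input place; copy the first one onward."""
    moved = None
    for p in t.inputs:
        tok = next(x for x in marking[p] if t.guard(x))
        marking[p].remove(tok)
        moved = moved or tok
    for p in t.outputs:
        marking.setdefault(p, []).append(moved)

# toy fragment: a sensor message is sent once the shared bus is free
marking = {"sensor_ready": [{"kind": "temperature", "value": 23.5}],
           "bus_free": [{"kind": "bus"}]}
send = Transition("send", ["sensor_ready", "bus_free"], ["in_transit"],
                  guard=lambda tok: True)
if enabled(marking, send):
    fire(marking, send)
print(marking)   # the sensor token has moved to 'in_transit'
```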
Abstract:
The most widely used refrigeration system is the vapor-compression system. In this cycle, the compressor is the most complex and expensive component, especially the reciprocating semihermetic type, which is often used in food product conservation. This component is very sensitive to variations in its operating conditions; if these conditions reach unacceptable levels, failures are practically inevitable. Therefore, maintenance actions should be taken to preserve good compressor performance and to avoid undesirable system stops. To achieve this, one has to evaluate the reliability of the system and/or its components. Here, reliability means the probability that equipment will perform its required functions for an established time period under defined operating conditions. One of the tools used to improve component reliability is failure mode and effect analysis (FMEA). This paper proposes FMEA as a tool to evaluate the main failures found in semihermetic reciprocating compressors used in refrigeration systems. Based on the results, some maintenance suggestions are offered.
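A common way to rank the failure modes uncovered by an FMEA is the Risk Priority Number. The sketch below applies it to a few compressor failure modes with invented ratings; the paper's actual modes and scores are not reproduced here.

```python
# Risk Priority Number: RPN = severity x occurrence x detection (each rated 1-10)
failure_modes = [
    # (failure mode,                 severity, occurrence, detection)
    ("valve plate breakage",                8,          4,         6),
    ("bearing wear",                        6,          5,         4),
    ("motor winding insulation loss",       9,          3,         7),
]
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for mode, s, o, d in ranked:
    print(f"{mode:32s} RPN = {s * o * d:4d}")   # highest RPN gets maintenance first
```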
Abstract:
This work explores the design of piezoelectric transducers based on functional material gradation, here named functionally graded piezoelectric transducers (FGPTs). Depending on the application, FGPTs must achieve several goals, essentially related to the transducer resonance frequencies, vibration modes, and excitation strength at specific resonance frequencies. Several approaches can be used to achieve these goals; this work focuses on finding the optimal material gradation of FGPTs by means of topology optimization. Three objective functions are proposed: (i) to obtain the optimal material gradation for maximizing specified resonance frequencies; (ii) to design piezoelectric resonators, where the optimal material gradation achieves desirable eigenvalues and eigenmodes; and (iii) to find the optimal material distribution that maximizes specified excitation strength. To track the desired vibration mode, a mode-tracking method based on the 'modal assurance criterion' is applied. The continuous change of piezoelectric, dielectric, and elastic properties is achieved using the graded finite element concept. The optimization algorithm is built on sequential linear programming and the concept of continuum approximation of material distribution. To illustrate the method, 2D FGPTs are designed for each objective function, and their performance is compared with that of non-graded transducers.
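The mode-tracking ingredient can be stated compactly. Below is the standard modal assurance criterion and a helper that, after a design update, picks the eigenvector most correlated with a reference mode shape; this is a generic sketch, not the paper's implementation.

```python
import numpy as np

def mac(phi_i: np.ndarray, phi_j: np.ndarray) -> float:
    """Modal Assurance Criterion between two mode-shape vectors.

    MAC = |phi_i^H phi_j|^2 / ((phi_i^H phi_i)(phi_j^H phi_j));
    1 means the shapes are parallel (same mode), 0 means orthogonal.
    """
    num = abs(np.vdot(phi_i, phi_j)) ** 2
    den = np.vdot(phi_i, phi_i).real * np.vdot(phi_j, phi_j).real
    return num / den

def track_mode(phi_ref: np.ndarray, Phi_new: np.ndarray) -> int:
    """Return the column of Phi_new whose shape best matches the reference mode."""
    scores = [mac(phi_ref, Phi_new[:, k]) for k in range(Phi_new.shape[1])]
    return int(np.argmax(scores))
```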
Abstract:
In this paper, the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The corresponding differential equation is discretized via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity, are analyzed through simulations. The results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation errors, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (with both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
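For orientation, the sketch below runs the classic Foschini-Miljanic iteration next to a logistic (Verhulst-inspired) update sharing the same fixed point at the target SINR. The gain matrix and constants are invented, and the update shown is an illustrative Euler-style discretization rather than the paper's exact recursion.

```python
import numpy as np

rng = np.random.default_rng(1)
U, TARGET, NOISE, ALPHA = 4, 2.0, 1e-3, 0.5    # users, SINR target, noise, step

G = rng.uniform(0.001, 0.05, (U, U))           # hypothetical cross-link gains
np.fill_diagonal(G, rng.uniform(0.5, 1.0, U))  # direct-link gains dominate

def sinr(p):
    signal = np.diag(G) * p
    return signal / (G @ p - signal + NOISE)

p_fm = np.full(U, 0.01)   # classic Foschini-Miljanic iterate
p_vh = np.full(U, 0.01)   # Verhulst-style iterate
for _ in range(200):
    p_fm = (TARGET / sinr(p_fm)) * p_fm
    # logistic update: power grows while SINR < target, shrinks above it
    p_vh = p_vh + ALPHA * p_vh * (1.0 - sinr(p_vh) / TARGET)

print(np.round(sinr(p_fm), 3), np.round(sinr(p_vh), 3))  # both reach the target
```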
Abstract:
Introducing a pharmaceutical product on the market involves several stages of research. The scale-up stage integrates the previous phases of development. This phase is extremely important, since many process limitations that do not appear at small scale become significant in the transposition to a large one. Since the scientific literature presents only a few reports on the characterization of emulsified systems during scale-up, this work aimed at evaluating the physical properties of non-ionic and anionic emulsions during their manufacturing phases: laboratory stage and scale-up. Prototype non-ionic (glyceryl monostearate) and anionic (potassium cetyl phosphate) emulsified systems had their physical properties evaluated by determination of droplet size (D[4,3], µm) and rheological profile. Transposition occurred from a 500 g batch to a 50,000 g batch. Semi-industrial manufacturing involved distinct conditions of agitation and homogenization intensity. Comparing the two systems, anionic emulsifiers generated systems with smaller droplet size and higher viscosity at laboratory scale. Moreover, for the concentrations tested, increasing the glyceryl monostearate emulsifier content provided formulations with better physical characteristics. For systems with potassium cetyl phosphate, droplet size increased with emulsifier concentration, suggesting inadequate stability. The scale-up provoked more significant alterations in the rheological profile and droplet size of the anionic systems than of the non-ionic ones. (C) 2008 Elsevier B.V. All rights reserved.
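The droplet-size statistic used above is the volume-weighted (De Brouckere) mean diameter. As a quick reference, computed over hypothetical count data:

```python
import numpy as np

def d43(diameters, counts=None):
    """De Brouckere mean diameter D[4,3] = sum(n d^4) / sum(n d^3)."""
    d = np.asarray(diameters, dtype=float)
    n = np.ones_like(d) if counts is None else np.asarray(counts, dtype=float)
    return (n * d ** 4).sum() / (n * d ** 3).sum()

# hypothetical droplet counts per size class (micrometres)
print(d43([1.2, 2.5, 4.0, 8.0], counts=[500, 300, 150, 50]))
```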
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of many studies in neuroscience. The complex structure of neural networks and its correlation with brain function play a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for exploring and characterizing the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROIs) and then calculating an average BOLD time series across the voxels in each cluster. Some studies have shown that the average may not be a good choice and have suggested, as an alternative, using principal component analysis (PCA) to extract the principal eigen-time series from the ROIs. In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. Its main aim is to employ multiple eigen-time series in each ROI, avoiding the temporal information loss inherent in averaging (e.g., to yield a single "representative" time series per ROI), which in turn may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a Granger causality framework for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. Using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal components from ROIs). The usefulness of the CGA approach on real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach detected significantly more connections between the ROIs than a single representative time series per ROI. (c) 2010 Elsevier Inc. All rights reserved.
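To convey the flavour of using several eigen-time series per ROI (the paper's partial-canonical-correlation machinery is more elaborate), the toy below extracts principal components from two simulated voxel blocks and scores directed influence by the drop in VAR residual variance. Every number here is synthetic.

```python
import numpy as np

def pcs(roi_voxels, k=3):
    """First k principal-component time series of a (time x voxels) ROI block."""
    X = roi_voxels - roi_voxels.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]

def lagged(Y, p):
    """Stack p lags of a (time x dims) series into a regressor matrix."""
    T = Y.shape[0]
    return np.hstack([Y[p - l - 1:T - l - 1] for l in range(p)]), Y[p:]

def granger_gain(src, dst, p=2):
    """Log drop in dst's residual variance when src's past joins the model."""
    Xd, y = lagged(dst, p)
    Xs, _ = lagged(src, p)
    def rss(X):
        A = np.hstack([np.ones((len(X), 1)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return ((y - A @ beta) ** 2).sum()
    return np.log(rss(Xd) / rss(np.hstack([Xd, Xs])))

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))                    # 3 latent sources in ROI A
roi_a = latent @ rng.normal(size=(3, 50)) + 0.1 * rng.normal(size=(200, 50))
# ROI B's voxels follow ROI A's latent sources with one time step of delay
roi_b = (np.roll(latent, 1, axis=0) @ rng.normal(size=(3, 50))
         + 0.1 * rng.normal(size=(200, 50)))

print(granger_gain(pcs(roi_a), pcs(roi_b)))   # A -> B: large gain
print(granger_gain(pcs(roi_b), pcs(roi_a)))   # B -> A: near zero
```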
Abstract:
In this work, we take advantage of association rule mining to support two types of medical systems: Content-based Image Retrieval (CBIR) systems and Computer-Aided Diagnosis (CAD) systems. For content-based retrieval, association rules are employed to reduce the dimensionality of the feature vectors that represent the images and to improve the precision of similarity queries. We refer to this association rule-based method for improving CBIR systems as Feature selection through Association Rules (FAR). To improve CAD systems, we propose the Image Diagnosis Enhancement through Association rules (IDEA) method, in which association rules suggest a second opinion or a preliminary diagnosis of a new image to the radiologist. An automatically obtained second opinion can either accelerate the diagnostic process or strengthen a hypothesis, increasing the probability that a prescribed treatment will be successful. Two new algorithms are proposed to support the IDEA method: one to pre-process low-level features and one to propose a preliminary diagnosis based on association rules. We performed several experiments to validate the proposed methods. The results indicate that association rules can be successfully applied to improve CBIR and CAD systems, enlarging the arsenal of techniques that support medical image analysis. (C) 2009 Elsevier B.V. All rights reserved.
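In miniature, the suggestion step can be pictured as mining class-labelled rules from discretized image features. The exhaustive miner below is a toy stand-in for the paper's algorithms, with made-up feature names and thresholds.

```python
from itertools import combinations
from collections import Counter

# hypothetical transactions: discretized low-level features plus a class label
transactions = [
    {"texture=coarse", "shape=spiculated", "class=malignant"},
    {"texture=coarse", "shape=round",      "class=benign"},
    {"texture=coarse", "shape=spiculated", "class=malignant"},
    {"texture=smooth", "shape=round",      "class=benign"},
]

def rules(transactions, min_support=0.4, min_confidence=0.8):
    """Mine rules {features} -> class=... by exhaustive itemset counting."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        feats = sorted(i for i in t if not i.startswith("class="))
        label = next(i for i in t if i.startswith("class="))
        for r in range(1, len(feats) + 1):
            for combo in combinations(feats, r):
                counts[combo] += 1             # antecedent occurrences
                counts[combo + (label,)] += 1  # antecedent together with label
    for itemset, sup in counts.items():
        if itemset[-1].startswith("class=") and sup / n >= min_support:
            conf = sup / counts[itemset[:-1]]
            if conf >= min_confidence:
                yield itemset[:-1], itemset[-1], sup / n, conf

for ante, label, sup, conf in rules(transactions):
    print(f"{set(ante)} -> {label}  (support={sup:.2f}, confidence={conf:.2f})")
```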
Abstract:
The aim of this research was to study spray drying as a potential means of protecting chlorophyllide from environmental conditions, extending its shelf life, and to characterize the resulting powders. Six formulations were prepared with 7.5 and 10 g of carrier agents [gum Arabic (GA), maltodextrin (MA), and soybean protein isolate (SPI)] per 100 mL of chlorophyllide solution. The powders were evaluated for morphological characteristics (SEM), particle size, water activity, moisture, density, hygroscopicity, cold water solubility, sorption isotherms, colour, and stability over 90 days. All the powders were highly soluble, with solubility values around 97%. Significantly lower hygroscopicity was observed for the GA powders, while the lowest X_m values, obtained by fitting the GAB equation to the sorption isotherms, were observed for the 7.5 g MA/100 mL samples. All formulations except formulation 1 (7.5 g SPI/100 mL of chlorophyllide) provided excellent stability to the chlorophyllide during 90 days of storage, even at room temperature.
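For reference, the GAB fitting step mentioned above has this shape: the three-parameter isotherm is regressed onto measured (water activity, moisture) pairs, with X_m the monolayer moisture content. The data points below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Xm, C, K):
    """GAB sorption isotherm: equilibrium moisture vs. water activity."""
    kaw = K * aw
    return Xm * C * kaw / ((1 - kaw) * (1 - kaw + C * kaw))

# hypothetical sorption data: water activity, moisture (g water / g solids)
aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.69, 0.75, 0.84])
X = np.array([0.021, 0.033, 0.041, 0.050, 0.060, 0.085, 0.100, 0.140])

(Xm, C, K), _ = curve_fit(gab, aw, X, p0=(0.04, 10.0, 0.9), maxfev=5000)
print(f"Xm={Xm:.4f}  C={C:.2f}  K={K:.3f}")   # Xm: monolayer moisture content
```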
Abstract:
This paper is about the use of natural language to communicate with computers. Most research pursuing this goal considers only requests expressed in English. One way to facilitate the use of several languages in natural language systems is through an interlingua, an intermediary representation of natural language information that can be processed by machines. We propose to convert natural language requests into an interlingua, the Universal Networking Language (UNL), and to execute these requests using software components. To achieve this, we propose OntoMap, an ontology-based architecture that performs the semantic mapping between UNL sentences and software components. OntoMap also performs component search and retrieval based on semantic information formalized in ontologies and rules.
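A deliberately small sketch of the mapping idea: UNL-style binary relations are parsed, the @entry headword is looked up in an ontology table, and the resolved component method is invoked. The sentence encoding, table, and component are all hypothetical; OntoMap's actual rules and retrieval are not reproduced.

```python
import re

# hypothetical ontology table mapping UNL headwords to software components
ONTOLOGY = {"book": ("ReservationComponent", "create_booking")}

class ReservationComponent:
    @staticmethod
    def create_booking(obj: str) -> str:
        return f"booking created for {obj!r}"

COMPONENTS = {"ReservationComponent": ReservationComponent}

def execute(unl: str) -> str:
    """Resolve the @entry predicate of a UNL-like sentence to a component call."""
    rels = {m[0]: (m[1], m[2])
            for m in re.findall(r"(\w+)\(([^,]+),\s*([^)]+)\)", unl)}
    entry = next(a.split(".")[0] for a, b in rels.values() if "@entry" in a)
    obj = rels["obj"][1]                    # the thing acted upon
    comp_name, method = ONTOLOGY[entry]     # the semantic mapping step
    return getattr(COMPONENTS[comp_name], method)(obj)

# "The user books a flight" in a simplified UNL-style encoding
print(execute("agt(book.@entry, user) obj(book.@entry, flight)"))
```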
Abstract:
Carbon nanotubes rank amongst the potential candidates for a new family of nanoscopic devices, in particular for sensing applications. While defects in carbon nanotubes act as binding sites for foreign species, our current level of control over the fabrication process does not allow one to choose where these binding sites will actually be positioned. In this work we present a theoretical framework for accurately calculating the electronic and transport properties of long disordered carbon nanotubes containing a large number of binding sites randomly distributed along the sample. The method combines the accuracy and functionality of ab initio density functional theory for determining the electronic structure with a recursive Green's functions method. We apply this methodology to nitrogen-rich carbon nanotubes, first considering different types of defects and then demonstrating how our simulations can aid sensor design by allowing one to compute the transport properties of realistic nanotube devices containing a large number of randomly distributed binding sites.
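The recursive Green's function ingredient can be demonstrated on the simplest possible system, a 1D tight-binding chain between two semi-infinite leads: sites are absorbed one at a time into a running surface Green's function, and the Landauer transmission follows from the end-to-end propagator. This is a generic textbook sketch (a pristine chain gives T of about 1 in-band), not the paper's ab initio workflow.

```python
import numpy as np

T_HOP, ETA = 1.0, 1e-9   # hopping integral and retarded broadening

def lead_self_energy(E, t=T_HOP):
    """Self-energy Sigma = t^2 g of a semi-infinite 1D tight-binding lead.
    The split square root keeps the retarded branch at all energies."""
    z = E + 1j * ETA
    g = (z - np.sqrt(z - 2 * t) * np.sqrt(z + 2 * t)) / (2 * t * t)
    return t * t * g

def transmission(E, onsite):
    """T(E) through a chain of on-site energies via the recursive GF left sweep."""
    z, t = E + 1j * ETA, T_HOP
    sigma = lead_self_energy(E)
    gamma = -2.0 * sigma.imag                  # lead broadening, both ends
    g = 1.0 / (z - onsite[0] - sigma)          # site 1 dressed by the left lead
    g1N = g                                    # running product giving G_{1,N}
    for eps in onsite[1:-1]:
        g = 1.0 / (z - eps - t * t * g)        # absorb the next site recursively
        g1N *= t * g
    G_NN = 1.0 / (z - onsite[-1] - t * t * g - sigma)  # close with the right lead
    G_1N = g1N * t * G_NN
    return gamma * gamma * abs(G_1N) ** 2      # Landauer/Fisher-Lee, one channel

pristine = np.zeros(8)
doped = pristine.copy(); doped[3] = 0.8   # hypothetical binding-site level shift
print(transmission(0.4, pristine), transmission(0.4, doped))
```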