22 results for Complex impedance measurements
at Universidad Politécnica de Madrid
Abstract:
An impedance-based midspan debonding identification method for RC beams strengthened with FRP strips is presented in this paper using piezoelectric ceramic (PZT) sensor-actuators. To this end, a two-dimensional electromechanical impedance model is first proposed to predict the electrical admittance of the PZT transducer bonded to the FRP strips of an RC beam. Since the impedance is measured at high frequencies, a spectral element model of the bonded PZT-FRP strengthened beam is developed. This model, in conjunction with experimental measurements from the PZT transducers, is used to present an updating methodology to quantitatively detect interfacial debonding in these kinds of structures. To improve the performance and accuracy of the detection algorithm in such a challenging problem, the structural health monitoring problem is solved with an ensemble process based on particle swarm optimization. An adaptive mesh scheme has also been developed to increase the reliability in locating the area in which debonding initiates. Predictions compared with experimental results have shown the effectiveness and potential of the proposed method to detect, at its earliest stages, a critical failure mode such as midspan debonding of the FRP strip.
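As a rough illustration of the swarm-based updating step, the sketch below (Python, with a hypothetical predicted_admittance(theta, freqs) standing in for the spectral element model and placeholder measured data) minimizes the mismatch between measured and predicted admittance with a plain particle swarm; the paper's ensemble and adaptive mesh refinements are not reproduced.

import numpy as np

def pso_update(measured, freqs, predicted_admittance, bounds,
               n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm search for a debonding parameter vector.

    The cost is the root-mean-square deviation between measured and
    model-predicted admittance signatures (an assumption; the paper's
    ensemble scheme is more elaborate).
    """
    lo, hi = np.asarray(bounds, dtype=float).T      # bounds: [(min, max), ...]
    dim = lo.size
    rng = np.random.default_rng(0)

    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities

    def cost(theta):
        pred = predicted_admittance(theta, freqs)
        return np.sqrt(np.mean(np.abs(pred - measured) ** 2))

    p_best = x.copy()
    p_cost = np.array([cost(p) for p in x])
    g_best = p_best[p_cost.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = x[improved], c[improved]
        g_best = p_best[p_cost.argmin()].copy()

    return g_best, p_cost.min()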
Abstract:
This thesis aims to develop robust and reliable damage identification methods focused on experimental structural systems, in particular reinforced concrete (RC) structures externally strengthened with fiber-reinforced polymer (FRP) strips. The failure mode of this type of structural system is critical, since it is usually due to sudden and brittle debonding of the FRP reinforcement originating from intermediate flexural cracks. Detecting the debonding in its initial stage is therefore essential to prevent future failure, which might be catastrophic. Initially, a review of the Electro-Mechanical Impedance (EMI) method is carried out in order to expose its capabilities for local damage detection. Once the appropriate technology is selected, which includes an impedance analyzer as well as novel PZT sensors for smart monitoring, an automated procedure is designed based on the impedance signatures of several lab-scale structures. On the basis that capturing impedance measurements is possible thanks to an adequately deployed PZT sensor network, the presence of damage is estimated by analyzing the results of different damage indices obtained from the literature. In order to make this process automatic, so that no a priori knowledge of the EMI method is necessary to carry out an experimental test, a Graphical User Interface has been designed, turning the impedance measurement into an easy and intuitive procedure. Damage is then assessed through the analysis of the corresponding damage indices, trying to estimate not only the damage severity but also its approximate location. The development of these tests on any kind of structure generates large amounts of data to be processed, and sometimes the information provided by damage indices is not enough to achieve a complete analysis of the structural health condition. In most cases, some damage patterns can be found in the data, but no a priori knowledge of the health condition of the structure is available. At this point, extensive research on pattern recognition techniques has been carried out, particularly on unsupervised learning, finding interesting applications in the medical field. From this investigation, a creative and innovative idea arose: to detect and track the evolution of damage in different structures as if it were a cancer propagating through a human body. In that sense, the impedance signatures are used to give intrinsic information on the health condition of the structure, so that the same clustering algorithms applied in cancer research can be applied to the problem addressed in this dissertation.
Hierarchical clustering is then applied, since it also provides a graphical display of the clustered data, including quantitative and qualitative information about damage. The performance of this approach is first investigated using three lab-scale structures: a simple aluminium beam, a bolt-jointed aluminium beam and an FRP-strengthened concrete specimen. The first one shows the performance of the method on simple single and multiple damage scenarios, so that the first conclusions can be extracted and applied to the other two experimental tests, which are designed to simulate a debonding condition on different structures. Once the impedance-based hierarchical clustering method is proven to be successful, it is applied to the structural system studied in this dissertation, RC structures externally strengthened with FRP strips, where debonding failure at the interface between the FRP and the concrete is successfully detected and classified, thus proving the feasibility of the method. Finally, as an alternative to the previous approach, a continuous monitoring procedure of the FRP-concrete interface is proposed, based on an FBG sensor network permanently deployed within that interface. In this way, strain measurements can be obtained under controlled loading conditions and then used to implement a multi-objective model updating method solved by a multi-objective expansion of the Particle Swarm Optimization (PSO) method. The feasibility of this last proposal is investigated and successfully proven on both numerical and experimental RC beams strengthened with FRP.
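As a hedged sketch of the impedance-based hierarchical clustering idea (not the thesis implementation), the Python snippet below clusters a set of impedance signatures with SciPy's Ward linkage and also computes a baseline-referenced RMSD damage index; file names and the number of clusters are placeholders.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

# signatures: (n_measurements, n_frequencies) array of impedance sweeps
signatures = np.load("impedance_signatures.npy")        # placeholder file

# Baseline-referenced RMSD damage index for each measurement
baseline = signatures[0]
rmsd = np.sqrt(np.sum((signatures - baseline) ** 2, axis=1)
               / np.sum(baseline ** 2))

# Agglomerative clustering on the raw signatures (Ward linkage, Euclidean)
Z = linkage(signatures, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")         # e.g. 3 damage states

dendrogram(Z)                                           # graphical display of the clusters
plt.xlabel("measurement"); plt.ylabel("linkage distance")
plt.show()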
Abstract:
The application of the Electro-Mechanical Impedance (EMI) method for damage detection in Structural Health Monitoring has noticeably increased in recent years. The EMI method uses piezoelectric transducers to directly measure the mechanical properties of the host structure, obtaining the so-called impedance measurement, which is highly influenced by variations in the dynamic parameters of the structure. These measurements usually contain a large number of frequency points and therefore a high number of dimensions, since each frequency point in the swept range can be considered an independent variable. That makes this kind of data hard to handle, increasing the computational cost and making the analysis substantially time-consuming. For that reason, Principal Component Analysis (PCA)-based data compression has been employed in this work in order to enhance the analysis capability of the raw data. Furthermore, a Support Vector Machine (SVM), which has been widely used in the machine learning and pattern recognition fields, has been applied in this study in order to model any pattern present in the PCA-compressed data, using only the first two principal components. Known undamaged and damaged measurements of an experimentally tested beam were used as training input data for the SVM algorithm, using as test input data the same number of cases measured on beams with unknown structural health conditions. Thus, the purpose of this work is to demonstrate how, with a few impedance measurements of a beam as raw data, its health status can be determined based on pattern recognition procedures.
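A minimal sketch of the PCA-plus-SVM pipeline described above, written with scikit-learn; file names, labels and the kernel choice are placeholders rather than the study's exact configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_train: impedance sweeps of known healthy/damaged states, y_train: labels
X_train = np.load("train_signatures.npy")      # placeholder file
y_train = np.load("train_labels.npy")          # 0 = healthy, 1 = damaged
X_test = np.load("unknown_signatures.npy")     # beams with unknown condition

# Compress each high-dimensional sweep to its first two principal
# components, then separate the classes with a support vector machine.
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=2),
                    SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(clf.predict(X_test))                     # predicted health state per beam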
Abstract:
NADPH:protochlorophyllide oxidoreductase (POR; EC 1.3.1.33) is a key enzyme for the light-induced greening of angiosperms. In barley, two POR proteins exist, termed PORA and PORB. These have previously been proposed to form higher molecular weight light-harvesting complexes in the prolamellar body of etioplasts (Reinbothe, C., Lebedev, N., and Reinbothe, S. (1999) Nature 397, 80–84). Here we report the in vitro reconstitution of such complexes from chemically synthesized protochlorophyllides (Pchlides) a and b and galacto- and sulfolipids. Low temperature (77 K) fluorescence measurements revealed that the reconstituted, lipid-containing complex displayed the same characteristics of photoactive Pchlide 650/657 as the presumed native complex in the prolamellar body. Moreover, Pchlide F650/657 was converted to chlorophyllide (Chlide) 684/690 upon illumination of the reconstituted complex with a 1-ms flash of white light. Identification and quantification of acetone-extractable pigments revealed that only the PORB-bound Pchlide a had been photoactive and was converted to Chlide a, whereas Pchlide b bound to the PORA remained photoinactive. Nondenaturing PAGE of the reconstituted Pchlide a/b-containing complex further demonstrated a size similar to that of the presumed native complex in vivo, suggesting that both complexes may be identical.
Abstract:
One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters like directivity, gain, impedance, beamwidth, efficiency, polarization, etc. must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered by radiation-absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used regardless of the weather conditions and allow measurements free from interferences. Despite all the advantages of anechoic chambers, the results obtained both from far-field measurements and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques and without requiring additional measurements. First, a thorough review of the state of the art has been made in order to give a general view of the possibilities to characterize or to reduce the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, and a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply a spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT and then to also apply a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise-ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method tries to estimate the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppress its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
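As an illustration of the iterative extrapolation idea used against the truncation error, the one-dimensional sketch below follows the Gerchberg-Papoulis scheme: the known (measured) samples are re-imposed at every iteration while the assumed spectral support is enforced; masks, sizes and the fixed iteration count are illustrative only.

import numpy as np

def gerchberg_papoulis(measured, known_mask, band_mask, n_iter=100):
    """Extrapolate a band-limited signal outside the measured window.

    measured   : complex samples, valid only where known_mask is True
    known_mask : boolean array, True on the measured (non-truncated) region
    band_mask  : boolean array in the FFT domain, True on the allowed support
    """
    x = np.where(known_mask, measured, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0.0                   # enforce the assumed support
        x = np.fft.ifft(X)
        x[known_mask] = measured[known_mask]  # re-impose the measured samples
    return x

In the planar near-field case the analogous constraint would be the finite support of the field over the AUT plane, and the loop would be stopped at the termination point studied in the thesis rather than after a fixed number of iterations.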
Abstract:
We present a technique to reconstruct the electromagnetic properties of a medium or a set of objects buried inside it from boundary measurements when applying electric currents through a set of electrodes. The electromagnetic parameters may be recovered by means of a gradient method without a priori information on the background. The shape, location and size of objects, when present, are determined by a topological derivative-based iterative procedure. The combination of both strategies allows improved reconstructions of the objects and their properties, assuming a known background.
Abstract:
This work describes the assessment of the acoustic properties of sputtered tantalum oxide films intended for use as high-impedance films of acoustic reflectors for solidly mounted resonators operating in the gigahertz frequency range. The films are grown by sputtering a metallic tantalum target under different oxygen and argon gas mixtures, total pressures, pulsed dc powers, and substrate biases. The structural properties of the films are assessed through infrared absorption spectroscopy and X-ray diffraction measurements. Their acoustic impedance is assessed by deriving the mass density from X-ray reflectometry measurements and the acoustic velocity from picosecond acoustic spectroscopy and the analysis of the frequency response of the test resonators.
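As a worked example of the final step, the acoustic impedance follows directly from the two measured quantities; the numbers below are illustrative, not values reported in the paper.

rho = 8200.0       # mass density from X-ray reflectometry, kg/m^3 (illustrative)
v_l = 4900.0       # longitudinal velocity from picosecond acoustics, m/s (illustrative)

Z = rho * v_l      # acoustic impedance, kg/(m^2*s), i.e. Rayl
print(f"Z = {Z / 1e6:.1f} MRayl")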
Abstract:
Reverberation chambers are well known for providing a random-like electric field distribution. Estimating the directivity or gain of a radiating device in such an environment therefore requires an adequate procedure and smart post-processing. In this paper, a new method is proposed for estimating the directivity of radiating devices in a reverberation chamber (RC). The method is based on the Rician K-factor, whose estimation in an RC benefits from recent improvements. The directivity estimation relies on the accurate determination of the K-factor with respect to a reference antenna. Good agreement is reported with measurements carried out in a near-field anechoic chamber (AC) using a near-field to far-field transformation.
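A hedged sketch of a Rician K-factor estimate from stirred transmission samples, using the common moment estimator; the paper's improved estimator and the subsequent directivity step relative to a reference antenna are not reproduced, and s21 is a placeholder array of complex samples over stirrer positions.

import numpy as np

s21 = np.load("s21_stirred.npy")     # complex S21 vs stirrer position (placeholder)

direct = np.mean(s21)                # unstirred (deterministic) component
stirred = s21 - direct               # stirred (Rayleigh-like) component

K = np.abs(direct) ** 2 / np.mean(np.abs(stirred) ** 2)
print("K-factor:", K, "=", 10 * np.log10(K), "dB")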
Abstract:
Complex networks have been extensively used in the last decade to characterize and analyze complex systems, and they have recently been proposed as a novel instrument for the analysis of spectra extracted from biological samples. Yet, the high number of measurements composing each spectrum, and the consequent high computational cost, make a direct network analysis unfeasible. We here present a comparative analysis of three customary feature selection algorithms, including the binning of spectral data and the use of information theory metrics. The algorithms are compared by assessing the score obtained in a classification task in which healthy subjects and people suffering from different types of cancer are to be discriminated. Results indicate that a feature selection strategy based on Mutual Information outperforms the more classical data binning, while allowing a reduction of the dimensionality of the data set by two orders of magnitude.
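A minimal sketch contrasting the two feature-selection routes discussed above on a generic spectral data set, using scikit-learn; the estimator, the number of retained features and the classifier are placeholders, not the study's exact configuration.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.load("spectra.npy")          # (n_subjects, n_spectral_points), placeholder
y = np.load("labels.npy")           # healthy vs. cancer type, placeholder

# Route 1: naive binning - average consecutive groups of spectral points
n_bins = 100
X_binned = X[:, : (X.shape[1] // n_bins) * n_bins]
X_binned = X_binned.reshape(X.shape[0], n_bins, -1).mean(axis=2)

# Route 2: keep only the points carrying the most mutual information with y
mi_pipe = make_pipeline(SelectKBest(mutual_info_classif, k=100), SVC())

print("binning            :", cross_val_score(SVC(), X_binned, y, cv=5).mean())
print("mutual information :", cross_val_score(mi_pipe, X, y, cv=5).mean())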
Abstract:
The use of computational fluid dynamic (CFD) methods to predict the power production from entire wind farms in flat and complex terrain is presented in this paper. Two full 3D Navier–Stokes solvers for incompressible flow are employed that incorporate the k–ε and k–ω turbulence models, respectively. The wind turbines (W/Ts) are modelled as momentum absorbers by means of their thrust coefficient using the actuator disk approach. The W/T thrust is estimated using the wind speed one diameter upstream of the rotor at hub height. An alternative method that employs an induction-factor-based concept is also tested. This method has the advantage of not using the wind speed at a specific distance from the rotor disk, which is a doubtful approximation when a W/T is located in the wake of another and/or the terrain is complex. To account for the underestimation of the near-wake deficit, a correction is introduced to the turbulence model: the turbulence time scale is bounded using the general "realizability" constraint for the turbulent velocities. The models are applied to two wind farms, a five-machine one located in flat terrain and a 43-machine one located in complex terrain. In the flat terrain case, the combination of the induction-factor method with the turbulence correction provides satisfactory results. In the complex terrain case, there are some significant discrepancies with the measurements, which are discussed; in this case, the induction-factor method does not provide satisfactory results.
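A small sketch of the two thrust formulations, assuming the usual one-dimensional momentum relations (Ct = 4a(1 - a)) to connect the disk-averaged velocity used by the induction-factor concept to an equivalent free-stream speed; rotor data and velocities are illustrative.

import numpy as np

rho, D = 1.225, 80.0                 # air density (kg/m^3) and rotor diameter (m), illustrative
A = np.pi * (D / 2) ** 2             # rotor disk area
Ct = 0.75                            # thrust coefficient at the operating point, illustrative

# Reference-speed method: use the velocity one diameter upstream at hub height
U_ref = 8.0                          # sampled from the CFD field (placeholder)
T_ref = 0.5 * rho * A * Ct * U_ref ** 2

# Induction-factor method: start from the local disk-averaged velocity instead
U_disk = 6.9                         # averaged over the actuator disk cells (placeholder)
a = 0.5 * (1.0 - np.sqrt(1.0 - Ct))  # axial induction from Ct = 4a(1 - a)
U_free = U_disk / (1.0 - a)          # equivalent free-stream speed
T_ind = 0.5 * rho * A * Ct * U_free ** 2

print(T_ref, T_ind)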
Abstract:
Computational fluid dynamic (CFD) methods are used in this paper to predict the power production from entire wind farms in complex terrain and to shed some light on the wake flow patterns. Two full three-dimensional Navier–Stokes solvers for incompressible fluid flow, employing k–ε and k–ω turbulence closures, are used. The wind turbines are modeled as momentum absorbers by means of their thrust coefficient through the actuator disk approach. Alternative methods for estimating the reference wind speed used in the calculation of the thrust are tested. The work presented in this paper is part of the work undertaken within the UpWind Integrated Project, which aims to develop the design tools for the next generation of large wind turbines. In this part of UpWind, the performance of wind farm and wake models is examined in a complex terrain environment where few pre-existing relevant measurements exist. The focus of the work is to evaluate the performance of CFD models in large wind farm applications in complex terrain and to examine the development of the wakes in a complex terrain environment.
Abstract:
FBGs are excellent strain sensors because of their small size and multiplexing capability. Tens to hundreds of sensors may be embedded into a structure, as has already been demonstrated. Nevertheless, they only provide strain measurements at local points, so unless the damage affects the strain readings in a distinguishable manner, damage will go undetected. This paper shows the experimental results obtained on the wing of a UAV, instrumented with 32 FBGs, before and after small damages were introduced. Principal Component Analysis (PCA), a multivariate analysis technique that reduces a complex data set to a lower dimension and reveals hidden patterns that underlie it, was able to distinguish the damage cases, even for small cracks.
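A hedged sketch of how PCA can flag damage from the 32-channel strain data, using the common squared-prediction-error formulation rather than the paper's exact algorithm; file names and the threshold are placeholders.

import numpy as np
from sklearn.decomposition import PCA

baseline = np.load("strains_healthy.npy")    # (n_samples, 32) FBG strains, placeholder
current = np.load("strains_current.npy")     # strains in the unknown state, placeholder

pca = PCA(n_components=4).fit(baseline)      # retain the dominant load patterns

def spe(x):
    """Squared prediction error: strain energy not explained by the baseline model."""
    recon = pca.inverse_transform(pca.transform(x))
    return np.sum((x - recon) ** 2, axis=1)

threshold = np.percentile(spe(baseline), 99)  # simple empirical control limit
print("damage suspected:", np.any(spe(current) > threshold))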
Abstract:
1. Canopies are complex multilayered structures comprising individual plant crowns that expose a multifaceted surface area to sunlight. Foliage arrangement and properties are the main mediators of canopy functions. The leaves act as light traps whose exposure to sunlight varies with time of day, date and latitude in a trade-off between photosynthetic light harvesting and avoidance of excessive or photoinhibitory light. To date, ecological research based upon leaf sampling has been limited by the available technology, with which data acquisition becomes labour-intensive and time-consuming given the overwhelming number of leaves involved. 2. In the present study, our goal was to develop a tool capable of measuring a sufficient number of leaves to enable analysis of leaf populations, tree crowns and canopies. We specifically tested whether a cell phone working as a 3D pointer could yield reliable, repeatable and valid leaf angle measurements with a simple gesture. We evaluated the accuracy of this method under controlled conditions, using a 3D digitizer, and we compared its performance in the field with the methods commonly used. We present an equation to estimate the potential proportion of the leaf exposed to direct sunlight (SAL) at any given time and compared the results with those obtained by means of a graphical method. 3. We found a strong and highly significant correlation between the graphical method and the equation presented. The calibration process showed a strong correlation between the results derived from the two methods, with a mean relative difference below 10%. The mean relative difference in the calculation of instantaneous exposure was below 5%. Our device performed equally well in diverse locations, where we characterized over 700 leaves in a single day. 4. The new method, involving the use of a cell phone, is much more effective than the traditional methods or digitizers when the goal is to scale up from leaf position to the performance of leaf populations, tree crowns or canopies. Our methodology constitutes an affordable and valuable tool within which to frame a wide range of ecological hypotheses and to support canopy modelling approaches.
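One plausible form of the instantaneous exposure calculation, given here as an assumption rather than the paper's equation: the potential exposure of a flat leaf is taken as the cosine of the angle between the leaf normal, defined by its pitch and azimuth, and the direction of the sun.

import numpy as np

def unit_vector(elevation_deg, azimuth_deg):
    """Unit vector from elevation above the horizon and azimuth, in degrees."""
    el, az = np.radians([elevation_deg, azimuth_deg])
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def potential_exposure(leaf_pitch, leaf_azimuth, sun_elevation, sun_azimuth):
    """Fraction of a flat leaf facing the direct solar beam (0 = edge-on or shaded side)."""
    normal = unit_vector(90.0 - leaf_pitch, leaf_azimuth)  # pitch measured from horizontal
    sun = unit_vector(sun_elevation, sun_azimuth)
    return max(0.0, float(np.dot(normal, sun)))

print(potential_exposure(30.0, 180.0, 55.0, 200.0))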
Abstract:
The impedance-based stability-assessment method has turned out to be a very effective tool, and its usage is rapidly growing in applications ranging from conventional interconnected dc/dc systems to grid-connected renewable energy systems. The results are sometimes given as a certain forbidden region in the complex plane, out of which the impedance ratio, known as the minor-loop gain, shall stay in order to ensure robust stability. This letter discusses the circle-like forbidden region occupying minimum area in the complex plane, defined by applying the maximum peak criterion, a well-known concept in control engineering. The investigation shows that the circle-like forbidden region ensures robust stability only if the impedance-based minor-loop gain is determined at the very input or output of each subsystem within the interconnected system. Experimental evidence is provided based on a small-scale dc/dc distributed system.
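A small sketch of the maximum-peak-criterion check discussed above: the minor-loop gain must stay outside the circle of radius 1/Ms centred at -1, which bounds the sensitivity peak; the impedance sweeps and the Ms value are placeholders.

import numpy as np

def satisfies_forbidden_region(Z_source, Z_load, Ms=2.0):
    """True if the minor-loop gain stays outside the circle |1 + L| = 1/Ms.

    Z_source, Z_load: complex impedance sweeps at the interconnection point,
    evaluated over the same frequency grid (placeholders).
    """
    L = Z_source / Z_load                      # impedance-based minor-loop gain
    return bool(np.all(np.abs(1.0 + L) >= 1.0 / Ms))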
Abstract:
Locating stator-winding ground faults accurately is a very difficult task. In this paper, measurements from the grounding circuit are evaluated in order to obtain information about the location of stator ground faults in synchronous generators. In power generators grounded through a high impedance, the ratio between the neutral voltage and the phase voltage provides a first estimate of the fault location. The location error incurred by using this ratio depends on the fault resistance and on the capacitance to ground of the stator winding; however, the error introduced by ignoring the fault resistance is the dominant term. This location estimate and its error have been evaluated using data from a real synchronous machine.
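A worked sketch of the first-cut location estimate mentioned above, under the idealized assumption of a uniformly distributed winding and negligible fault resistance; the numbers are illustrative.

V_phase = 8000.0     # phase-to-neutral voltage, V (illustrative)
V_neutral = 2400.0   # neutral-point displacement voltage during the fault, V (illustrative)

# Fraction of the winding between the neutral point and the fault location
location = V_neutral / V_phase
print(f"estimated fault location: {location:.0%} of the winding from the neutral")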