34 results for Electric power systems -- Quality control
at Universidad Politécnica de Madrid
Abstract:
There are many industries that use highly technological solutions to improve quality in all of their products. The steel industry is one example. Several automatic surface-inspection systems are used in the steel industry to identify various types of defects and to help operators decide whether to accept, reroute, or downgrade the material, subject to the assessment process. This paper promotes a strategy that considers all defects in an integrated fashion. It does so by managing the uncertainty about the exact position of a defect, caused by varying process conditions, by means of Gaussian additive influence functions. The relevance of the approach lies in making the different surface-inspection systems consistent and reliable with one another. The results obtained are an increase in confidence in the automatic inspection system and the ability to introduce improved prediction and advanced routing models. The prediction is provided to technical operators to support their decision-making, and helps reduce the 40 % of coils that are downgraded at the hot strip mill because of specific defects. In addition, this technology improves by 50 % the accuracy of the estimate of defect survival after the cleaning facility, compared with the former approach. The proposed technology is implemented by means of software-based, multi-agent solutions, which make possible the independent treatment of information, presentation, quality analysis, and other relevant functions.
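As a rough, hypothetical illustration of the fusion idea (the abstract does not give the paper's exact formulation; the positions, uncertainties and threshold below are invented), one Gaussian influence function per reported defect can be summed, so that reports from two inspection systems with different positional uncertainty combine into a single evidence map along the coil:

```python
import numpy as np

def gaussian_influence(positions, grid, sigma):
    """Sum one Gaussian influence function per reported defect.

    positions: defect positions along the coil (m) reported by one
    inspection system; sigma: that system's positional uncertainty (m).
    """
    return sum(np.exp(-0.5 * ((grid - p) / sigma) ** 2) for p in positions)

# Evidence map over a 100 m coil from two systems (illustrative values).
grid = np.linspace(0.0, 100.0, 2001)
combined = (gaussian_influence([12.4, 57.9], grid, sigma=0.8)     # system A
            + gaussian_influence([12.9, 58.3], grid, sigma=1.5))  # system B
# Where the combined evidence exceeds a threshold, the two reports are
# treated as the same physical defect despite their positional disagreement.
matched_regions = grid[combined > 1.0]
```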
Abstract:
Short-term variability in the power generated by large grid-connected photovoltaic (PV) plants can negatively affect power quality and network reliability. New grid codes require combining the PV generator with some form of energy storage technology in order to reduce short-term PV power fluctuations. This paper proposes an effective method to calculate, for any PV plant size and maximum allowable ramp rate, both the maximum power and the minimum energy requirements of the storage system. The general validity of this method is corroborated with extensive simulation exercises performed with one year of real 5-s data from the 500 kW inverters at the 38.5 MW Amaraleja (Portugal) PV plant and from two other PV plants located in Navarra (Spain), more than 660 km away from Amaraleja.
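The abstract does not reproduce the paper's sizing formulas, but the problem can be illustrated numerically. In the sketch below (names and structure are ours, not the paper's method), the delivered power is clamped to the allowed ramp and the storage requirements are read off the resulting battery profile:

```python
import numpy as np

def storage_requirements(p_pv, p_nom, r_max_pct_per_min, dt=5.0):
    """Worst-case battery power (kW) and energy capacity (kWh) so that the
    combined PV + storage output never ramps faster than the allowed rate.

    p_pv: PV power samples (kW) at dt-second resolution; ramp limit in %/min.
    """
    p_pv = np.asarray(p_pv, dtype=float)
    max_step = p_nom * (r_max_pct_per_min / 100.0) * (dt / 60.0)  # kW/sample
    p_out = np.empty_like(p_pv)
    p_out[0] = p_pv[0]
    for k in range(1, len(p_pv)):
        # Clamp the delivered power so every step respects the ramp limit.
        p_out[k] = np.clip(p_pv[k], p_out[k-1] - max_step, p_out[k-1] + max_step)
    p_bat = p_out - p_pv                    # battery charges (<0) / discharges (>0)
    e_bat = np.cumsum(p_bat) * dt / 3600.0  # running energy content (kWh)
    return np.abs(p_bat).max(), e_bat.max() - e_bat.min()
```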
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50 % of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them.
Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA), in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04 % from the simulation-based reference values.
A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue, we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources of each group independently, and then combines the results. In this way, the number of noise sources active at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, so as to keep the results as accurate as possible.
This Ph.D. Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when it is still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.
Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the above techniques and is publicly available. Its goal is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
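As a loose sketch of the incremental method's core idea, assuming a user-supplied fixed-point error model (`simulate` below is hypothetical; this is not HOPLITE's actual API): a greedy word-length descent evaluated with few Monte-Carlo samples in early passes, with the full sample count reserved for the final pass:

```python
import numpy as np

def noise_power(wordlengths, n_samples, rng, simulate):
    """Monte-Carlo estimate of output quantization-noise power for a given
    word-length assignment. `simulate` returns one output error sample."""
    errs = np.array([simulate(wordlengths, rng) for _ in range(n_samples)])
    return float(np.mean(errs ** 2))

def greedy_wlo(wl, noise_budget, simulate, schedule=(200, 1000, 5000)):
    """Greedy word-length descent with incrementally tightened confidence:
    early passes use few samples (relaxed confidence), later passes more."""
    rng = np.random.default_rng(0)
    for n_samples in schedule:
        improved = True
        while improved:
            improved = False
            for i in range(len(wl)):
                if wl[i] <= 1:
                    continue                 # keep at least one bit per signal
                wl[i] -= 1                   # try shaving one bit off signal i
                if noise_power(wl, n_samples, rng, simulate) > noise_budget:
                    wl[i] += 1               # budget violated: undo the move
                else:
                    improved = True
    return wl
```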
Abstract:
The confluence of three-dimensional (3D) virtual worlds with social networks requires software agents to exhibit, in addition to conversational functions, the same behaviours as human-driven avatars. In this paper, we explore the possibilities of using metabots (metaverse robots) with motion capabilities in complex virtual 3D worlds, and we put forward a learning model based on evolutionary-computation techniques for optimizing the fuzzy controllers that the metabots subsequently use to move around a virtual environment.
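The abstract leaves the learning model unspecified beyond "evolutionary computation"; as a generic, hypothetical illustration, a (mu + lambda) evolution strategy over the controller's parameter vector could look like this, with `fitness` standing in for a simulation of the metabot's motion through the virtual world:

```python
import numpy as np

def evolve_controller(fitness, n_params, pop=20, gens=100, sigma=0.1):
    """(mu + lambda) evolution strategy over the fuzzy controller's parameter
    vector (membership-function centres/widths and rule weights).
    `fitness` scores a parameter vector by simulating the metabot's motion."""
    rng = np.random.default_rng(42)
    parents = rng.uniform(-1.0, 1.0, size=(pop, n_params))
    for _ in range(gens):
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        both = np.vstack([parents, children])
        scores = np.array([fitness(p) for p in both])
        parents = both[np.argsort(scores)[-pop:]]   # keep the fittest
    return parents[-1]                              # best controller found
```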
Abstract:
In October 2002, under the auspices of Spanish Cooperation, a pilot electrification project put into operation two centralised PV-diesel hybrid systems in two different Moroccan villages. These systems currently provide a full-time energy service and supply electricity to more than a hundred families, six community buildings, street lighting and one running-water system. The electricity service looks very similar to an urban one: a single-phase AC supply (230 V/50 Hz) distributed to each dwelling through a low-voltage mini-grid, which has been designed to be fully compatible with the future arrival of the utility grid. The management of this electricity service is based on a “fee-for-service” scheme agreed between a local NGO, partner of the project, and the electricity associations created in each village, which are in charge of, among other tasks, recording the daily energy production of the systems and the monthly energy consumption of each house. This data record allows a systematic evaluation of both system performance and user energy consumption. Now, after four years of operation, this paper presents the experience of this pilot electrification project and draws lessons that can be useful for designing, managing and sizing this type of small village PV-hybrid system.
Abstract:
In this paper, we use ARIMA modelling to estimate a set of characteristics of a short-term indicator (for example, the index of industrial production), such as trends, seasonal variations, cyclical oscillations, unpredictability, and deterministic effects (such as a strike). Thus, for each sector and product (more than 1000 of them), we construct a vector of values corresponding to the above-mentioned characteristics that can be used for data editing.
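A hedged sketch of how such a characteristics vector might be computed; the ARIMA order and the specific characteristics below are illustrative choices, not the ones used in the paper:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def characteristics(series):
    """Characteristics vector for a monthly indicator (pandas Series with a
    DatetimeIndex), usable for selective data editing."""
    fit = ARIMA(series, order=(1, 1, 1),
                seasonal_order=(0, 1, 1, 12)).fit()  # airline-type model
    resid = fit.resid
    monthly_means = series.groupby(series.index.month).mean()
    return {
        "trend": float(series.diff(12).mean()),      # mean year-on-year growth
        "seasonality": float(monthly_means.max() - monthly_means.min()),
        "unpredictability": float(resid.std()),      # residual volatility
        "outliers": int((np.abs(resid) > 3 * resid.std()).sum()),  # e.g. strikes
    }
```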
Abstract:
This project deals with the design of the electrical and power-control system for a scale model of the ACLA-16 wind tunnel at the Universidad Politécnica de Madrid (UPM). The model is used to study the effect of the atmospheric boundary layer, given its importance for the loads on civil structures. First, a theoretical overview is given of what wind tunnels are, their applications, and basic concepts of the atmospheric boundary layer. The geometric design of the tunnel model is then analysed and the components required by the electrical system are detailed. In addition, a computer simulation is carried out with a CFD code (Fluent) to compare the actual experimental results with the numerical results of the simulation, in order to check whether acceptable results can be obtained by computer and thus save cost and time in test studies.
Abstract:
This project is based on the study of the power plant of a wind tunnel. A brief introduction first defines what a wind tunnel is, its purpose, the existing types, etc. A specific tunnel type was then chosen among all the possibilities and studied in detail. A shape and dimensions were defined and, after calculating the pressure losses, the power plant needed to compensate for those losses was selected, also sizing its connections from the electrical supply point. Finally, the connections for the lighting and the auxiliary services of the wind tunnel were sized.
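For orientation, the core of the power-plant sizing reduces to supplying the total pressure loss at the design flow rate; the figures below are illustrative placeholders, not the project's values:

```python
# Minimal sizing sketch: the fan must supply the total pressure loss at the
# design flow rate. All numbers are illustrative assumptions.
rho = 1.225          # air density (kg/m^3)
v_test = 30.0        # design velocity in the test section (m/s)
a_test = 1.2         # test-section area (m^2)
q = v_test * a_test  # volumetric flow rate (m^3/s)

k_total = 0.45       # sum of loss coefficients over all tunnel sections
dp = k_total * 0.5 * rho * v_test**2   # total pressure loss (Pa)

eta_fan = 0.75       # assumed fan efficiency
p_shaft = q * dp / eta_fan             # required fan shaft power (W)
print(f"Pressure loss: {dp:.0f} Pa, fan shaft power: {p_shaft/1000:.1f} kW")
```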
Abstract:
The use of wind tunnels in civil engineering is increasingly in demand owing to current urban development: the need for ever-taller buildings in which to concentrate larger populations, bridges and structures that ease the passage of alternative means of transport, the importance of the artistic aspects of construction (in addition to the functional ones), and so on. Many factors can make it necessary to test one of these structures in a wind tunnel, and there is no universal criterion for deciding whether or not to do so.
Abstract:
Wind-blown dust emission from mining or industrial waste deposits, and from vehicles travelling on unpaved roads, is a problem that affects productive activities, the environment, and the health of the people who remain in the contaminated area. In Chile, social sensitivity and environmental requirements on this issue have increased in recent years, as has the supply of different dust suppressants and application technologies. This work reviews the causes of dust emission and the technologies available in Chile for dust suppression, together with the methodologies and standards for assessing the performance of materials treated with different suppressants. In some cases it is not possible to compare performance properties such as durability, application dose, and frequency of application, among other aspects. The procedures described in the NCh3266-2012 standard allow wind erosion to be assessed in waste deposits, vacant lots, and unpaved roads, among other sites, and the performance of different types of dust suppressants to be evaluated from comparable, objective data. This makes it possible to select the most suitable suppressant, improve treatment efficiency, optimise costs, and improve production processes. Keywords: wind erosion, dust suppressant, mining waste, unpaved roads.
Abstract:
This paper proposes a method for identifying different partial discharge (PD) sources through the analysis of a collection of PD signals acquired with a PD measurement system. The method, robust and sensitive enough to cope with noisy data and external interferences, combines the characterization of each signal in the collection with a clustering procedure, the CLARA algorithm. Several features are proposed for the characterization of the signals; the wavelet variances, the frequency estimated with the Prony method, and the energy turn out to be the most relevant for the performance of the clustering procedure. The result of the unsupervised classification is a set of clusters, each containing the signals that are more similar to each other than to those in other clusters. The analysis of the classification results permits both the identification of different PD sources and the discrimination between original PD signals, reflections, noise and external interferences. The methods and graphical tools detailed in this paper have been coded and published as a contributed package of the R environment under a GNU/GPL license.
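A compact sketch of the pipeline in Python (the paper's own implementation is the published R package; here an FFT peak stands in for the Prony frequency estimate and KMeans for the CLARA k-medoids step):

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def features(sig, fs):
    """Per-pulse feature vector: wavelet variances per decomposition level,
    dominant frequency, and pulse energy."""
    coeffs = pywt.wavedec(sig, "db4", level=4)
    wav_var = [float(np.var(c)) for c in coeffs]
    spectrum = np.abs(np.fft.rfft(sig))
    f_dom = np.fft.rfftfreq(len(sig), 1.0 / fs)[np.argmax(spectrum)]
    energy = float(np.sum(sig ** 2))
    return np.array(wav_var + [f_dom, energy])

def cluster_pulses(pulses, fs, n_sources):
    # KMeans stands in here for the sampling-based k-medoids of CLARA.
    x = np.array([features(p, fs) for p in pulses])
    x = (x - x.mean(0)) / (x.std(0) + 1e-12)   # normalise features
    return KMeans(n_clusters=n_sources, n_init=10).fit_predict(x)
```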
Abstract:
This paper discusses the role of hydropower in electric power systems, in the framework of the Spanish system, which has a high penetration of non-dispatchable power sources with a clear upward trend in the coming years. The development of new hydropower facilities will most likely be based on pumped-storage plants. Hydropower is a mature and efficient technology for large-scale energy storage and therefore makes a key contribution to the integration of non-dispatchable power sources such as wind or photovoltaics. The benefits obtained from load shifting alone may be insufficient to compensate for the cost of a new plant; however, the revenue obtained can increase substantially through participation in ancillary services, which would require an appropriate design of the electricity market. The contribution of pumped-storage plants to balancing services can be extended to off-peak hours by using either variable-speed pumping or the hydraulic short-circuit configuration. The need to mitigate hydrological effects downstream of hydro plants may introduce operational constraints that could limit, to some extent, the services described above; however, the environmental effects caused by pumped-storage plants are expected to be significantly smaller.
Abstract:
Run-of-river hydropower plants usually have little or no water storage capacity, so an adequate control strategy is required to keep the water level in the pond constant. This paper presents a novel technique based on a TSK fuzzy controller to maintain a constant pond head. Its performance is investigated over a wide range of the hydro-turbine hill curve, and the results are compared with those of the PI controller discussed in [1].
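A zero-order TSK controller of the kind named above can be sketched in a few lines; the membership functions and rule consequents here are illustrative, not the tuned values from the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def tsk_gate_command(err, derr):
    """Zero-order TSK controller: rule firing strengths weight constant
    consequents (gate-opening corrections). err is the normalised pond-level
    error, derr its rate of change; all shapes are illustrative."""
    neg, zero, pos = tri(err, -2, -1, 0), tri(err, -1, 0, 1), tri(err, 0, 1, 2)
    falling, rising = tri(derr, -1, -0.5, 0), tri(derr, 0, 0.5, 1)
    rules = [
        (neg * rising,  -0.8),   # level low and rising fast -> close gate hard
        (neg,           -0.4),
        (zero,           0.0),
        (pos,            0.4),
        (pos * falling,  0.8),   # level high and falling fast -> open gate hard
    ]
    w = np.array([r[0] for r in rules])
    y = np.array([r[1] for r in rules])
    return float((w * y).sum() / (w.sum() + 1e-12))  # weighted-average output
```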
Abstract:
Due to the fast rate of peach post-harvest ripening, damage from mechanical handling, externally visible as bruises and soft areas, is a real problem that leads to early harvesting and poor fruit quality as perceived by consumers. European consumers increasingly ask for good taste and freshness in fruits and vegetables, yet these quality factors are not included in standards, nor in most producers' practices. Fruit processing and marketing centres (co-operatives) are increasingly interested in adopting quality controls in their processes. ISO 9000 procedures are being applied in some food areas, primarily by milk and meat processors, but no generalised procedures applicable to fresh-produce processes have been developed to date. All the peach and nectarine varieties harvested and handled in Murcia co-operatives and sold in a large supermarket in Madrid were analysed during the whole 1997 season (early May to late August). A total of 78 samples of 25 fruits (co-operative) or 10 fruits (market) were tested in the laboratory for mechanical, optical, chemical and tasting quality. The variability of and relationships between all these quality parameters are presented and discussed, and sampling unit sizes advisable for quality control are calculated.
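The sampling-unit-size calculation rests on the usual mean-estimation formula n = (z * sigma / E)^2; a minimal sketch with illustrative figures (not the paper's data):

```python
import math

def sample_size(sigma, tolerance, z=1.96):
    """Number of fruits per sample so that the mean of a quality parameter
    is estimated within +/- tolerance at ~95 % confidence (z = 1.96).
    sigma: the parameter's standard deviation, estimated from the survey."""
    return math.ceil((z * sigma / tolerance) ** 2)

# Example with illustrative figures: firmness with sigma = 8 N, +/- 3 N target.
print(sample_size(sigma=8.0, tolerance=3.0))   # -> 28 fruits per sample
```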