47 results for Fluid and crystallized Intelligence

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

Computed Tomography imaging is a non-invasive alternative for observing soil structures, mainly the pore space. In soil data, the pore space corresponds to empty or free space, in the sense that no solid material is present there, only fluids. Since fluid transport depends on the pore spaces in the soil, it is important to identify the regions that correspond to pores. In this paper we present a methodology to detect pore space and solid soil based on the synergy of image processing, pattern recognition and artificial intelligence. Mathematical morphology is an image-processing technique used here for image enhancement. To find groups of pixels with similar gray-level intensity, i.e. more or less homogeneous groups, a novel image sub-segmentation based on a Possibilistic Fuzzy c-Means (PFCM) clustering algorithm is used. Artificial Neural Networks (ANNs) are very efficient for demanding, large-scale, generic pattern-recognition applications; for this reason, a classifier based on an artificial neural network is finally applied to classify soil images into two classes, pore space and solid soil.
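The sub-segmentation step rests on fuzzy clustering of gray levels. Below is a minimal numpy sketch of plain fuzzy c-means on one-dimensional intensities with synthetic data; the PFCM variant used in the paper adds possibilistic typicality terms, which this sketch omits.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means on 1-D gray-level data (not the full PFCM)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m                          # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))   # update memberships from distances
        u /= u.sum(axis=0)
    return centers, u

# synthetic gray levels: dark "pore" pixels vs bright "solid" pixels
x = np.concatenate([np.full(50, 30.0), np.full(50, 200.0)])
centers, u = fuzzy_cmeans(x)
labels = u.argmax(axis=0)   # hard assignment: pore vs solid
```

On this toy input the two centers converge near 30 and 200 and the two halves of the data fall into separate clusters.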

Relevance: 100.00%

Abstract:

In this thesis, experimental research on the passive transport of physical quantities is carried out in micro-systems with immediate industrial application, using innovative methods to improve their performance by optimizing critical design parameters or finding new applications. Part of the results obtained in these experiments has been published in high-impact-factor journals belonging to the first quartile of the Journal Citation Reports (JCR). First, the effect of tip clearance in a micro-channel-based heat sink is analyzed. Leaving a gap between the channels and the top cover, so that the channels communicate with each other, generates three-dimensional effects that improve the heat extraction of the heat sink and also reduce the pressure drop caused by the fluid passing through the micro-channels, which has a great impact on the total pumping power the coolant pump must supply.

The enhancement produced, in terms of dissipated heat, in a micro-processor cooling system by upgrading the widely used fin plate with a vapour-chamber-based heat spreader containing a two-phase fluid is also analyzed. In parallel, a numerical model has been developed to optimize the dimensions of the modified fin plate, compatible with a series of design requirements in which both size and weight play a very restrictive role. In addition, the fluid-dynamic phenomena that appear downstream of a bluff body immersed in a fluid flowing through a channel with a high blockage ratio have been studied. This research experimentally confirms the existence of an intermediate regime, characterized by an oscillating closed recirculation bubble, between the well-differentiated steady closed recirculation bubble regime and the vortex shedding (Karman-street-like) regime, as a function of the Reynolds number of the incoming flow. A Particle Image Velocimetry (PIV) system has been used to obtain, analyze and post-process the fluid-dynamic data.

Finally, and as an addition to the last point, the vortex-induced vibrations (VIV) of a bluff body inside a high-blockage-ratio channel have been studied, building on the results obtained with the fixed square prism. The prism moves with simple harmonic motion over an interval of Reynolds numbers, and this movement becomes a vibration around its axial axis above a certain Reynolds number. Regarding the fluid, the vortex shedding regime is reached at Reynolds numbers lower than the critical ones found for the fixed body. Merging both movement regimes and varying the prism-to-fluid mass ratio, a map of different global states of the system is obtained. This is applicable not only as a mixing-enhancement technique but also as a method for harvesting energy from the motion of the body within the fluid.
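The link between pressure drop and pumping power mentioned in this abstract is the elementary relation P = V̇·Δp. A tiny sketch with hypothetical figures (the flow rate and pressure drop below are illustrative, not values from the thesis):

```python
def pumping_power(volume_flow_m3_s, pressure_drop_pa):
    """Ideal pumping power P = V_dot * delta_p, in watts.
    A real pump's efficiency would divide this figure."""
    return volume_flow_m3_s * pressure_drop_pa

# hypothetical micro-channel operating point: 1 mL/s of coolant across 20 kPa
p = pumping_power(1e-6, 20e3)   # ideal power in W
```

Halving the pressure drop through the tip-clearance effect described above would halve this ideal pumping power at the same flow rate.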

Relevance: 100.00%

Abstract:

This work presents a method to detect microcalcifications in regions of interest from digitized mammograms. The method is based mainly on the combination of image processing, pattern recognition and artificial intelligence. The top-hat transform, a technique based on mathematical-morphology operations, is used in this work to perform contrast enhancement of the microcalcifications in the region of interest. In order to find more or less homogeneous regions in the image, we apply a novel image sub-segmentation technique based on the Possibilistic Fuzzy c-Means clustering algorithm. From the original region of interest we extract two window-based features, mean and standard deviation, which are used in a classifier based on an artificial neural network to identify microcalcifications. Our results show that the proposed method is a good alternative for the microcalcification detection stage, which is an important part of early breast cancer detection.
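The white top-hat transform subtracts the morphological opening of an image from the image itself, leaving only bright features narrower than the structuring element, which is why it enhances small bright spots such as microcalcifications. A self-contained 1-D sketch (a synthetic spike on a flat background, not mammogram data):

```python
import numpy as np

def white_tophat_1d(signal, size):
    """White top-hat: signal minus its morphological opening
    (grayscale erosion followed by dilation with a flat window)."""
    n = len(signal)
    pad = size // 2
    s = np.pad(signal, pad, mode='edge')
    eroded = np.array([s[i:i + size].min() for i in range(n)])   # erosion
    e = np.pad(eroded, pad, mode='edge')
    opened = np.array([e[i:i + size].max() for i in range(n)])   # dilation
    return signal - opened

# flat background with one narrow bright peak (a toy "microcalcification")
sig = np.zeros(21)
sig[10] = 5.0
th = white_tophat_1d(sig, size=5)   # the opening removes the peak, so the
                                    # top-hat isolates it
```

Because the peak is narrower than the window, the opening flattens it and the top-hat output keeps the peak while suppressing the background.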

Relevance: 100.00%

Abstract:

This work investigates to what degree speakers with different verbal intelligence adapt to each other. The work is based on a corpus consisting of 100 descriptions of a short film (monologues), 56 discussions about the same topic (dialogues), and verbal intelligence scores of the test participants. Adaptation between two dialogue partners was measured using cross-referencing, the proportion of "I", "You" and "We" words, between-subject correlation, and similarity of texts. It was shown that lower-verbal-intelligence speakers repeated more nouns and adjectives from their partner and used the same linguistic categories more often than higher-verbal-intelligence speakers. In dialogues between strangers, participants with higher verbal intelligence showed a greater level of adaptation.
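One of the adaptation measures named above, the proportion of "I", "You" and "We" words, can be computed with a few lines of tokenization. A minimal sketch (the toy sentence and the naive whitespace tokenizer are illustrative assumptions, not the corpus or tooling of the study):

```python
def pronoun_proportions(text):
    """Proportion of 'I', 'you', 'we' tokens in a text, a simple
    per-speaker measure comparable across dialogue partners."""
    tokens = [t.strip('.,!?;:"').lower() for t in text.split()]
    total = len(tokens) or 1     # avoid division by zero on empty input
    return {p: tokens.count(p) / total for p in ('i', 'you', 'we')}

props = pronoun_proportions("I think we agree, but you said I was wrong.")
```

Comparing these proportions between two partners over the course of a dialogue gives one crude signal of mutual adaptation.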

Relevance: 100.00%

Abstract:

In this paper we investigate differences in the language use of speakers with different verbal intelligence when they describe the same event. The work is based on a corpus containing descriptions of a short film and verbal intelligence scores of the speakers. For analyzing the monologues and the film transcript, the number of reused words, lemmas and n-grams, the cosine similarity, and other features were calculated and compared across verbal intelligence groups. The results showed that the similarity between the monologues of higher-verbal-intelligence speakers was greater than that of lower- and average-verbal-intelligence participants. A possible explanation of this phenomenon is that candidates with higher verbal intelligence have better short-term memory. In this paper we also tested the hypothesis that differences in the vocabulary of speakers with different verbal intelligence are sufficient for good classification results. To test this hypothesis, a Nearest Neighbor classifier was trained using TF-IDF vocabulary measures. The maximum achieved accuracy was 92.86%.
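A nearest-neighbor classifier over TF-IDF vectors, as used in this paper, can be sketched in pure Python. The toy documents, labels, and the particular TF-IDF smoothing below are illustrative assumptions, not the paper's corpus or exact weighting:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for tokenized documents."""
    df = Counter(w for d in docs for w in set(d))          # document frequency
    n = len(docs)
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}       # one smoothing choice
    return [{w: tf / len(d) * idf[w] for w, tf in Counter(d).items()}
            for d in docs]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_neighbor(train_vecs, labels, query_vec):
    """1-NN by cosine similarity."""
    sims = [cosine(query_vec, v) for v in train_vecs]
    return labels[sims.index(max(sims))]

docs = [["film", "shows", "a", "man"], ["movie", "about", "a", "man"],
        ["totally", "different", "topic"]]
vecs = tfidf_vectors(docs + [["film", "about", "a", "man"]])
label = nearest_neighbor(vecs[:3], ["hi_vi", "hi_vi", "lo_vi"], vecs[3])
```

The query monologue shares vocabulary with the first two documents, so the 1-NN decision assigns it their label.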

Relevance: 100.00%

Abstract:

In this work we investigated whether there is a relationship between the dominant behaviour of dialogue participants and their verbal intelligence. The analysis is based on a corpus containing 56 dialogues and verbal intelligence scores of the test persons. All the dialogues were divided into three groups: H-H, dialogues between higher-verbal-intelligence participants; L-L, dialogues between lower-verbal-intelligence participants; and L-H, all the other dialogues. The dominance scores of the dialogue partners from each group were analysed. The analysis showed that the differences between dominance scores and verbal intelligence coefficients were positively correlated for the L-L group. The verbal intelligence scores of the test persons were also compared to other features that may reflect dominant behaviour. This analysis showed that the number of interruptions, long utterances, the number of times the floor was grabbed, the influence diffusion model, the number of agreements, and several acoustic features may be related to verbal intelligence. These features were used for the automatic classification of the dialogue partners into two groups (lower- and higher-verbal-intelligence participants); the achieved accuracy was 89.36%.
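Several of the dominance cues listed above are simple counts over a turn-annotated transcript. A sketch of such feature extraction, where the `(speaker, text, interrupted)` tuple format and the length threshold are hypothetical conventions, not the corpus's actual annotation scheme:

```python
def dominance_features(turns, long_thresh=15):
    """Count per-speaker dominance cues: long utterances, interruptions
    suffered, and floor grabs (taking over from the other speaker).
    `turns` is a list of (speaker, text, was_interrupted) tuples."""
    feats = {}
    prev = None
    for speaker, text, interrupted in turns:
        f = feats.setdefault(speaker,
                             {"long_utts": 0, "interruptions": 0, "floor_grabs": 0})
        if len(text.split()) >= long_thresh:
            f["long_utts"] += 1
        if interrupted:
            f["interruptions"] += 1
        if prev is not None and speaker != prev:
            f["floor_grabs"] += 1
        prev = speaker
    return feats

turns = [("A", "well I think the film was mostly about memory and time", False),
         ("B", "no", True),
         ("A", "let me finish", False)]
feats = dominance_features(turns, long_thresh=8)
```

Feature vectors like these, one per speaker, are the kind of input a binary classifier of lower- versus higher-verbal-intelligence participants can be trained on.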

Relevance: 100.00%

Abstract:

We propose a new Bayesian framework for automatically determining the position (location and orientation) of an uncalibrated camera from observations of moving objects and a schematic map of the passable areas of the environment. Our approach exploits static and dynamic information about the scene structure through prior probability distributions for object dynamics. The proposed approach restricts the plausible positions where the sensor can be located while taking into account the inherent ambiguity of the given setting. The framework samples from the posterior probability distribution of the camera position via data-driven MCMC, guided by an initial geometric analysis that restricts the search space. A Kullback-Leibler divergence analysis then yields the final camera position estimate while explicitly isolating ambiguous settings. The approach is evaluated in synthetic and real environments, showing satisfactory performance in both ambiguous and unambiguous settings.
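Posterior sampling of a camera position via MCMC can be illustrated with plain random-walk Metropolis on a toy 2-D posterior. This is a generic sketch, not the paper's data-driven proposal scheme, and the Gaussian "posterior" below is an assumption standing in for the real likelihood over the map:

```python
import numpy as np

def metropolis(log_post, x0, steps=5000, scale=0.5, seed=0):
    """Plain random-walk Metropolis sampler (the paper's method adds
    data-driven proposals and a geometric restriction of the space)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(steps):
        prop = x + rng.normal(scale=scale, size=x.size)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:           # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# toy posterior: camera believed to sit near (3, -1) with unit uncertainty
log_post = lambda p: -0.5 * np.sum((p - np.array([3.0, -1.0])) ** 2)
samples = metropolis(log_post, [0.0, 0.0])
est = samples[1000:].mean(axis=0)   # posterior mean after burn-in
```

With a multimodal posterior (an ambiguous setting), the sample cloud splits into clusters, which is where a divergence-based analysis of the sampled distribution becomes useful.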

Relevance: 100.00%

Abstract:

Interest in concentrating photovoltaic (CPV) systems has re-emerged in recent years, encouraged by the development of very-high-efficiency multijunction solar cells based on III-V semiconductors. These cells have led to CPV module efficiencies that practically double those of flat-panel PV and reach 35% for record modules. This thesis is devoted to the design and experimental implementation of new concepts for obtaining CPV modules that not only achieve high efficiency under standard conditions but are also tolerant enough to assembly errors, tracking errors, temperature and spectral variations that the energy they produce throughout the year is maximized. One of the first issues addressed is the design of secondary optical elements for systems whose primary optics is a Fresnel lens and which, for a fixed concentration, allow an increased acceptance angle and tolerance of the system. Several reflective and refractive secondaries have been designed and analyzed using ray tracing. In particular, using nonimaging optics and building on the single-stage design known as the 'dielectric totally internally reflecting concentrator', a secondary with a square output has been designed, fabricated and characterized; used together with a Fresnel lens, it simultaneously achieves high efficiency, concentration and acceptance. Furthermore, an alternative fabrication method has been proposed and prototyped for another of the secondaries, named the dome, consisting of the direct overmolding of silicone onto the solar cells. One characteristic that permeates all the work done in this thesis is the holistic approach to the design of CPV modules: special attention has been paid to the joint design of the solar cell and the optics to ensure that the complete system achieves the highest attainable efficiency.

In this regard, many of the optical systems developed in this thesis have been designed, characterized and optimized under the condition that the current matching among the subcells of the multijunction solar cell beneath the optics be very close to one. The antireflective coating on the cell acts, in a sense, as the interface between the optics and the cell; consequently, a method has been designed to optimize antireflective coatings that takes into account not only the broad wavelength range to which multijunction cells are sensitive but also the angular intensity distribution on the cell created by the concentrating optics. In addition, the issue of non-uniformity has been addressed by comparing the spectral and spatial irradiance distributions created by different optics (simulated by ray tracing and photographed) with the experimentally measured efficiency losses of cells illuminated by those optics. The effect of temperature on the concentrating optics has also been studied in this thesis. In particular, finite element simulations have been used to take the first steps in analyzing the deformations experienced by the facets of hybrid (glass-silicone) Fresnel lenses, as well as the change of refractive index with temperature and the influence of both effects on system performance. A model has been implemented that takes into account ambient variations, mainly temperature and the spectral content of the direct normal irradiance, as well as the thermal and spectral sensitivities of CPV systems, with the aim of maximizing the energy produced by a concentrator module throughout the year at a given location.

Chapters 5 and 6 of this book are devoted to the design, fabrication and characterization of a new photovoltaic module concept named FluidReflex, based on a single-stage reflective optic with a fluid dielectric. In this new concept, the presence of the fluid provides significant advantages: an increased concentration-acceptance product (CAP), achievable by surrounding the cell with a medium whose refractive index is greater than one; improved optical efficiency, by reducing Fresnel reflection losses at several interfaces; improved heat dissipation, since the heat concentrated near the cell is transferred by natural convection and conduction in the fluid; and improved electrical insulation. By building and measuring several elementary-unit prototypes, it has been shown that no fundamental reason prevents the practical implementation of the theoretical concept at high efficiency. An analysis of candidate fluids has proven the existence of at least two that meet all the requirements (in particular, stability under concentrated light) to form part of the FluidReflex concentrator. Finally, several pre-industrial FluidReflex module prototypes have been designed, fabricated and characterized; this required optimizing the manufacturing process of the multicavity optics in order to preserve the good optical behaviour obtained with the elementary unit. The prototypes have been measured, both in the laboratory and under real sun, analyzing the current matching of the cell illuminated by the FluidReflex concentrator under different spectral distributions of the incident radiation, as well as the excellent thermal behaviour of the module.
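The concentration-acceptance product mentioned above is commonly defined as CAP = √Cg · sin(α), where Cg is the geometric concentration and α the acceptance half-angle, and étendue conservation bounds it by the refractive index n of the medium surrounding the cell, which is why an n > 1 fluid raises the attainable limit. A tiny sketch with hypothetical figures (not values from the thesis):

```python
import math

def cap(geometric_concentration, acceptance_half_angle_deg):
    """Concentration-acceptance product: CAP = sqrt(Cg) * sin(alpha).
    Etendue conservation limits CAP to at most n, the refractive index
    of the medium in which the cell is immersed."""
    return math.sqrt(geometric_concentration) * math.sin(
        math.radians(acceptance_half_angle_deg))

# hypothetical example: a 500x concentrator with +/-1.2 deg acceptance
c = cap(500, 1.2)
```

For a cell in air the bound is CAP ≤ 1, while immersing the cell in a fluid of index n ≈ 1.4 raises the bound to about 1.4, allowing more concentration at the same acceptance angle.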

Relevance: 100.00%

Abstract:

A Loop Heat Pipe (LHP) is a heat-transfer device whose operating principle is based on the evaporation/condensation of a working fluid, which is pumped around a closed circuit by capillary forces. Thanks to their flexibility, low mass and minimal (even null) power consumption, their main application has been identified as part of the thermal-control subsystem of spacecraft. In the present work, an LHP able to operate efficiently at temperatures up to 125 °C has been developed, in line with the current tendency of satellite on-board equipment towards higher operating temperatures. In selecting the optimal design for this LHP, the compatibility between the materials and the working fluid was identified as one of the key points. To select the best combination, an extensive state-of-the-art review was carried out, together with a dedicated trade-off study that included the development of a compatibility test stand. In conclusion, the combination selected as the best candidate for an LHP operating up to 125 °C was a stainless-steel evaporator, titanium piping and ammonia as the working fluid. Along this line, a test prototype was designed and manufactured, and a simulation model was developed in EcosimPro to evaluate its performance. It was concluded that the design was suitable for the defined operating range.

Incompatibility between the working fluid and the LHP materials is linked to the generation of Non-Condensable Gases (NCG). For a more detailed study of the effects of such gases on LHP operation, the behaviour of the device was analyzed with different amounts of nitrogen injected into its compensation chamber, simulating NCG formed inside the device. The study was based on the analysis of temperatures measured experimentally at different power levels and sink temperatures, and the results were compared with the predictions of the EcosimPro model. Two main conclusions were obtained. The first is that an amount of NCG more than twice that generated by the end of life of a typical telecommunications satellite (15 years) has almost negligible effects on LHP operation. The second is that the main effect of the NCG is a decrease in thermal conductance, especially at low power levels and sink temperatures; the effect becomes more significant as the amount of gas increases. In addition, an unexpected phenomenon was observed during the test campaign for large amounts of NCG: an oscillatory behaviour, detected both in the tests and in the simulation. This effect merits further investigation, and the results obtained here can serve as the basis for that task.
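The thermal conductance whose degradation is reported above is the standard lumped figure of merit G = Q / (T_evap − T_cond). A minimal sketch with hypothetical numbers (not measurements from this test campaign):

```python
def lhp_conductance(power_w, t_evaporator_c, t_condenser_c):
    """Overall LHP thermal conductance G = Q / (T_evap - T_cond), in W/K.
    A drop in G at fixed power shows up as a larger temperature difference."""
    return power_w / (t_evaporator_c - t_condenser_c)

# hypothetical operating point: 100 W transported with a 5 K temperature drop
g = lhp_conductance(100.0, 125.0, 120.0)   # W/K
```

NCG accumulating in the compensation chamber effectively blocks part of the condenser, so the same power requires a larger temperature difference, i.e. a lower G, which is the effect measured at low powers and sink temperatures.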

Relevance: 100.00%

Abstract:

Multilayered, counterflow, parallel-plate heat exchangers are analyzed numerically and theoretically. The analysis, carried out for constant-property fluids, considers a hydrodynamically developed laminar flow and neglects longitudinal conduction both in the fluid and in the plates. The solution for the temperature field involves eigenfunction expansions that can be solved in terms of Whittaker functions using standard symbolic algebra packages, leading to analytical expressions that provide the eigenvalues numerically. It is seen that the approximate solution obtained by retaining the first two modes in the eigenfunction expansion provides an accurate representation of the temperature away from the entrance regions, especially for long heat exchangers, thereby enabling simplified expressions for the wall and bulk temperatures, local heat-transfer rate, overall heat-transfer coefficient, and outlet bulk temperatures. The agreement between the numerical and theoretical results suggests the possibility of using the analytical solutions presented herein as benchmark problems for computational heat-transfer codes.
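A quick sanity check for the outlet bulk temperatures such an analysis predicts is the textbook effectiveness-NTU relation for a counterflow exchanger. This is a standard lumped result, not the paper's eigenfunction solution:

```python
import math

def counterflow_effectiveness(ntu, cr):
    """Textbook effectiveness of a counterflow heat exchanger:
    eps = (1 - exp(-NTU(1-Cr))) / (1 - Cr*exp(-NTU(1-Cr))),
    with the balanced limit eps = NTU/(1+NTU) when Cr = 1."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

eps = counterflow_effectiveness(2.0, 0.5)   # NTU = 2, capacity ratio 0.5
```

Here NTU = UA / C_min uses the overall heat-transfer coefficient that the two-mode approximation above supplies in closed form, and the effectiveness then gives the outlet bulk temperatures directly.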

Relevance: 100.00%

Abstract:

This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm in which distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, so a high penetration rate of photovoltaic electricity is pernicious for grid stability. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields.
The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on the framework, the energy goal and the procedure to reach it are different.
In the local framework, the DSM algorithm uses only local information; it does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may seem to depart from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on maximizing the use of local energy, reducing dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption is an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all its advantages, increased self-consumption does not contribute to smoothing the aggregated consumption. The effects of local facilities on the electrical grid were studied when the DSM algorithm focuses on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it, because the algorithm considers only local variables. The results suggest that coordination between facilities is required: through this coordination, consumption should be modified taking into account other elements of the grid and seeking to smooth the aggregated consumption.
In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumer side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids, which requires a coordination mechanism between facilities. This Thesis seeks to minimize the amount of information needed for this coordination. To achieve this objective, two collective coordination techniques are used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is itself a novel approach, so this coordination objective is a contribution not only to the energy management field but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the grid consumption in proportion to the amount of energy controlled by the algorithm: the greater the amount of controlled energy, the greater the improvement in grid efficiency.
In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. They are summarized in the following features of the proposed DSM algorithm:
• Robustness: in a centralized system, a failure of the central node causes a malfunction of the whole system. Managing the grid in a distributed way implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid.
• Data privacy: the distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, so the coordination between facilities is completely anonymous.
• Scalability: the proposed DSM algorithm operates with any number of facilities, allowing new facilities to be incorporated without affecting its operation.
• Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so no central node with high computational power is needed.
• Quick deployment: the scalability and low cost of the proposed DSM algorithm allow a quick deployment; no complex deployment schedule is required.
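The coupled-oscillator coordination technique can be illustrated with the classical Kuramoto model, in which each facility carries a phase and adjusts it against the mean field of the population. The sketch below is a generic Kuramoto implementation with assumed parameters, not the thesis's actual DSM algorithm; with positive coupling the phases lock together, whereas a smoothing-oriented variant would choose the coupling so that consumption phases spread apart:

```python
import cmath
import math
import random

def kuramoto_step(phases, omegas, k, dt):
    """One explicit Euler step of the Kuramoto model. Each oscillator
    (facility) adjusts its phase toward the population mean field."""
    mf = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    r, psi = abs(mf), cmath.phase(mf)
    return [p + dt * (w + k * r * math.sin(psi - p))
            for p, w in zip(phases, omegas)]

def order_parameter(phases):
    """r in [0, 1]: 0 = phases spread out, 1 = fully synchronized."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

random.seed(0)
n = 50
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
omegas = [0.0] * n                       # identical natural frequencies
for _ in range(1000):
    phases = kuramoto_step(phases, omegas, k=1.0, dt=0.05)
# With k > 0 the facilities phase-lock (r -> 1); a load-spreading variant
# would pick the coupling so that phases repel instead.
```

Note how the coordination uses only the anonymous mean field, not any individual consumer's data, which is the property the thesis exploits for privacy.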

Resumo:

In this dissertation a new numerical method for solving Fluid-Structure Interaction (FSI) problems in a Lagrangian framework is developed, in which solids with different constitutive laws can undergo very large deformations and fluids are considered Newtonian and incompressible. First, a meshless discretization based on local maximum-entropy interpolants is introduced. This makes it possible to discretize a spatial domain without tessellation, avoiding the limitations of a mesh. The Stokes flow problem is then studied. The Galerkin meshless method based on a max-ent scheme suffers from instabilities for this problem, so stabilization techniques are discussed and analyzed, and an unconditionally stable method is finally formulated based on a Douglas-Wang stabilization. Next, a Lagrangian formulation of fluid mechanics is derived. This establishes a common framework for the fluid and solid domains, so that their interaction can be accounted for naturally. The resulting equations also require stabilization, which is provided by a technique analogous to that used for the Stokes problem. The fully Lagrangian framework for fluid/solid interaction is completed with simple point-to-point and point-to-surface contact algorithms. The method is finally validated, and numerical examples show the potential scope of applications.
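The local maximum-entropy interpolants underlying the meshless discretization can be sketched in one dimension: the shape functions maximize entropy subject to zeroth- and first-order consistency, and the Lagrange multiplier enforcing the first-order constraint is found by Newton iteration. A minimal 1D sketch (parameter names are mine; the dissertation works in higher dimensions):

```python
import math

def lme_shape_functions(x, nodes, beta, tol=1e-12, max_iter=50):
    """1D local maximum-entropy shape functions at point x.
    beta controls locality: beta -> 0 gives global max-ent interpolants,
    large beta approaches piecewise-linear interpolation."""
    d = [xa - x for xa in nodes]
    lam = 0.0
    for _ in range(max_iter):
        w = [math.exp(-beta * di * di + lam * di) for di in d]
        z = sum(w)                        # partition function (normalizer)
        p = [wi / z for wi in w]
        r = sum(pi * di for pi, di in zip(p, d))   # first-moment residual
        if abs(r) < tol:
            break
        # dr/dlam is the variance of d under p, always positive inside the hull
        j = sum(pi * di * di for pi, di in zip(p, d)) - r * r
        lam -= r / j                      # Newton update on the multiplier
    return p

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
p = lme_shape_functions(0.7, nodes, beta=4.0)
# Partition of unity and first-order consistency hold by construction:
# sum(p) == 1 and sum(p_a * x_a) == x, up to the Newton tolerance.
```

The nonnegativity of the resulting shape functions is what distinguishes max-ent interpolants from many other meshless bases.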

Resumo:

Fluid flow and fabric compaction during vacuum assisted resin infusion (VARI) of composite materials were simulated using a level set-based approach. Fluid infusion through the fiber preform was modeled using Darcy's equations for flow through a porous medium. The stress partition between the fluid and the fiber bed was included by means of Terzaghi's effective stress theory, and the fluid front was tracked during infusion using the level set method. The resulting partial differential equations for the fluid infusion and the evolution of the flow front were discretized and solved approximately by finite differences on a uniform grid. The model was validated against uniaxial VARI experiments on a [0]8 E-glass plain woven preform, with the physical parameters of the model measured independently. The experimental results (in terms of fabric thickness, pressure and fluid front evolution during filling) were in good agreement with the numerical simulations, showing the potential of the level set method to simulate resin infusion.
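For the special case of a rigid (non-compacting) preform infused at a constant pressure difference, Darcy's law admits a classical closed-form front position that is often used to sanity-check infusion simulations. A minimal sketch with illustrative parameter values (assumptions, not taken from the paper, which additionally models compaction via Terzaghi's theory):

```python
import math

def fill_front(t, K, dp, mu, phi):
    """Resin front position in 1D rectilinear infusion of a rigid preform
    (Darcy flow, constant inlet pressure difference dp):
        L(t) = sqrt(2 * K * dp * t / (mu * phi))"""
    return math.sqrt(2.0 * K * dp * t / (mu * phi))

def fill_time(L, K, dp, mu, phi):
    """Time to fill a preform of length L (inverse of fill_front)."""
    return mu * phi * L * L / (2.0 * K * dp)

# Illustrative values (assumptions): permeability [m^2], pressure [Pa],
# viscosity [Pa*s], porosity [-].
K, dp, mu, phi = 2e-10, 9e4, 0.1, 0.5
t = fill_time(0.3, K, dp, mu, phi)       # seconds to fill a 0.3 m preform
```

The square-root-in-time slowing of the front is the signature behavior any infusion solver, level set-based or otherwise, should reproduce in this limit.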

Resumo:

In this work, we use large eddy simulations (LES) and Lagrangian tracking to study the influence of gravity on particle statistics in fully developed turbulent upward/downward flow in a vertical channel and pipe at matched Kármán number. Only drag and gravity are considered in the equation of motion for the solid particles, which are assumed to have no influence on the flow field, and particle interactions with the wall are fully elastic. Our findings from the particle statistics confirm that: (i) gravity seems to modify both the quantitative and qualitative behavior of the particle distribution and of the particle velocity statistics in the wall-normal direction; (ii) in contrast, only the quantitative behavior of the streamwise particle velocity and of the root mean square of the velocity components is modified; (iii) the statistics of fluid and particles agree very well near the wall in channel and pipe flow at equal Kármán number; and (iv) pipe curvature seems to have a quantitative and qualitative influence on the particle velocity and on the particle concentration in the wall-normal direction.
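The particle equation of motion described above (drag and gravity only, one-way coupling, elastic walls) can be sketched with a simple explicit Euler integrator. The function below is an illustrative one-dimensional reduction with assumed names and parameters, not the LES code used in the study:

```python
def advance_particle(x, v, fluid_u, tau_p, g, dt, wall=1.0):
    """One explicit Euler step for a solid particle with Stokes drag and
    gravity (one-way coupling): dv/dt = (u_f - v)/tau_p + g, dx/dt = v.
    Collisions with the walls at x = +/-wall are fully elastic."""
    u_f = fluid_u(x)                     # fluid velocity seen by the particle
    v = v + dt * ((u_f - v) / tau_p + g)
    x = x + dt * v
    if abs(x) > wall:                    # mirror the position, reverse velocity
        x = (2.0 * wall - x) if x > wall else (-2.0 * wall - x)
        v = -v
    return x, v

# Settling in still fluid: the velocity relaxes to v_t = g * tau_p.
x, v = 0.0, 0.0
tau_p, g, dt = 0.05, -9.81, 1e-3         # response time [s], gravity, step
for _ in range(5000):
    x, v = advance_particle(x, v, lambda s: 0.0, tau_p, g, dt, wall=1e9)
```

Flipping the sign of g relative to the mean flow direction is what distinguishes the upward and downward configurations compared in the study.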

Resumo:

Comparison of explicit and implicit time-integration schemes for the simulation of blood flow and its interaction with the arterial wall. There are two major strategies in FSI coupling: implicit and explicit. The general difference between them is how many times data are exchanged between the fluid and solid domains within each FSI time step. In both coupling strategies, the pressure values computed in the fluid domain at each time step are exported to the solid domain, which is then analyzed with these imported forces. In contrast to the explicit coupling, the implicit approach exchanges the fluid and solid domain data several times until convergence is achieved. Although this may improve numerical stability, it increases the computational cost due to the extra data exchanges. In cardiovascular simulations, depending on the analysis objectives, one may choose an explicit or an implicit approach. In the current work, the advantage of an explicit coupling strategy is highlighted for the simulation of pulsatile blood flow in elastic arteries.
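The difference between the two coupling strategies is essentially the loop structure of each time step. The sketch below contrasts them using scalar stand-in "solvers" (pure assumptions, only to exercise the exchange logic; real FSI solvers exchange interface tractions and displacements):

```python
def explicit_step(fluid_solve, solid_solve, state, dt):
    """Explicit (loosely coupled) step: one data exchange per time step."""
    pressure = fluid_solve(state, dt)          # fluid traction on the wall
    return solid_solve(state, pressure, dt)    # wall responds once

def implicit_step(fluid_solve, solid_solve, state, dt, tol=1e-8, max_sub=50):
    """Implicit (strongly coupled) step: fluid and solid exchange data
    within the time step until the interface residual drops below tol."""
    guess = state                              # initial guess for the new state
    for _ in range(max_sub):
        pressure = fluid_solve(guess, dt)      # fluid evaluated on the guess
        new = solid_solve(state, pressure, dt)
        if abs(new - guess) < tol:             # subiterations converged
            break
        guess = new
    return new

# Toy scalar "solvers" (assumptions, only to exercise the coupling loops):
fluid = lambda disp, dt: 1.0 - 0.5 * disp      # wall position -> pressure
solid = lambda disp, p, dt: disp + dt * p      # pressure -> wall update

d_exp = explicit_step(fluid, solid, 0.0, 0.1)  # single exchange
d_imp = implicit_step(fluid, solid, 0.0, 0.1)  # subiterated to convergence
```

The extra subiterations in `implicit_step` are exactly the additional cost the abstract refers to; the explicit variant trades them for a stability constraint on the time step.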