958 results for Renewable electric energy sources
Abstract:
Helium Brayton cycles have been studied as power cycles for both fission and fusion reactors, obtaining high thermal efficiency. This paper studies several technological schemes of helium Brayton cycles applied to the HiPER reactor proposal. Since HiPER integrates technologies available in the short term, its working conditions result in a very low maximum temperature of the energy source, which limits the thermal performance of the cycle. The aim of this work is to analyze the potential of helium Brayton cycles as power cycles for HiPER. Several helium Brayton cycle configurations have been investigated with the purpose of raising the cycle thermal efficiency under the working conditions of HiPER; the effects of inter-cooling and reheating have been studied in particular. Sensitivity analyses of the key cycle parameters and component performances on the maximum thermal efficiency have also been carried out. The addition of several inter-cooling stages in a helium Brayton cycle yields a maximum thermal efficiency of over 36%, and the inclusion of a reheating process may add an increase of nearly 1 percentage point, reaching 37%. These results confirm that helium Brayton cycles are to be considered among the power cycle candidates for HiPER.
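The efficiency figures above come from a cycle analysis of the kind that can be sketched with ideal-gas relations. The snippet below is a minimal, hedged illustration of a recuperated helium Brayton cycle; all parameter values (temperatures, pressure ratio, component efficiencies) are illustrative assumptions, not numbers taken from the paper.

```python
# Hedged sketch: ideal-gas helium Brayton cycle with recuperation.
# All parameter values below are illustrative, not taken from the paper.
GAMMA = 5.0 / 3.0   # heat-capacity ratio of helium (monatomic gas)
CP = 5193.0         # J/(kg K), helium specific heat at constant pressure

def brayton_efficiency(T_min, T_max, r_p, eta_c=0.88, eta_t=0.92, eps=0.95):
    """Thermal efficiency of a recuperated Brayton cycle.

    T_min, T_max : compressor-inlet and turbine-inlet temperatures (K)
    r_p          : compressor pressure ratio
    eta_c, eta_t : isentropic efficiencies of compressor and turbine
    eps          : recuperator effectiveness
    """
    k = (GAMMA - 1.0) / GAMMA
    # Compression (1 -> 2) with isentropic efficiency eta_c
    T2s = T_min * r_p ** k
    T2 = T_min + (T2s - T_min) / eta_c
    # Expansion (3 -> 4) with isentropic efficiency eta_t
    T4s = T_max / r_p ** k
    T4 = T_max - eta_t * (T_max - T4s)
    # Recuperator preheats compressor outlet with turbine exhaust
    T2r = T2 + eps * (T4 - T2)
    q_in = CP * (T_max - T2r)
    w_net = CP * ((T_max - T4) - (T2 - T_min))
    return w_net / q_in

# Example: a low source temperature (~550 C), as in the HiPER conditions
eta = brayton_efficiency(T_min=300.0, T_max=823.0, r_p=2.0)
```

Adding inter-cooling stages lowers the average compression work, which is why the paper can push the efficiency above 36% despite the low turbine-inlet temperature.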
Abstract:
Electrodynamic tether thrusters can use the power provided by solar panels to drive a current through the tether and then use the Lorentz force to push against the Earth's magnetic field, thereby achieving propulsion without expending onboard energy sources or propellant. Practical tether propulsion depends critically on being able to extract multiamp electron currents from the ionosphere with relatively short tethers (10 km or less) and reasonably low power. We describe a new anodic design that uses an uninsulated portion of the metallic tether itself to collect electrons. Because of the efficient collection of this type of anode, electrodynamic thrusters for reboost of the International Space Station and for an upper stage capable of orbit raising, lowering, and inclination changes appear to be feasible. Specifically, a 10-km-long bare tether, using 10 kW of space station power, could save most of the propellant required for station reboost over its 10-year lifetime. The Propulsive Small Expendable Deployer System experiment is planned to test the bare-tether design in space in the year 2000 by deploying a 5-km bare aluminum tether from a Delta II upper stage to achieve up to 0.5 N of drag thrust, thus deorbiting the stage.
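The basic thrust scale quoted above follows from the Lorentz force on a current-carrying conductor, F = I L B for a tether perpendicular to the field. A minimal sketch, in which the field strength and average current are assumed values chosen to match the 0.5 N scale mentioned in the abstract:

```python
# Hedged sketch of the basic electrodynamic-tether thrust estimate F = I L B
# (straight tether, current perpendicular to the geomagnetic field).
def tether_thrust(current_a, length_m, b_field_t):
    """Lorentz force (N) on a straight tether carrying a uniform current."""
    return current_a * length_m * b_field_t

B = 3.0e-5          # T, typical low-Earth-orbit field strength (assumed)
I_avg = 3.3         # A, assumed average tether current
F = tether_thrust(I_avg, 5000.0, B)   # 5 km bare tether -> ~0.5 N
```

In reality the current varies along a bare collecting tether, so the uniform-current estimate is only an order-of-magnitude check.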
Abstract:
There is an increasing tendency to turn the current power grid, essentially unaware of variations in electricity demand and of scattered energy sources, into something capable of a degree of intelligence through tools strongly related to information and communication technologies, thus becoming the so-called Smart Grid. In fact, the Smart Grid can be considered an extensive smart system that spreads throughout any area where power is required, providing significant optimization of energy generation, storage, and consumption. However, the information that must be handled to accomplish these tasks is challenging both in complexity (semantic features, distributed systems, suitable hardware) and in quantity (consumption data, generation data, forecasting functionalities, service reporting), since the different energy beneficiaries tend to be heterogeneous, as the nature of their own activities is. This paper presents a proposal for dealing with these issues through a semantic middleware architecture that integrates different components focused on specific tasks, and shows how it is used to handle information at every level and satisfy end-user requests.
Abstract:
The conference program will cover all areas of environmental and resource economics, ranging from topics prevailing in the general debate, such as climate change, energy sources, water management, and ecosystem services evaluation, to more specialized subjects such as biodiversity conservation or persistent organic pollutants. The congress will be held at the Faculty of Economics of the University of Girona, located in Montilivi, a city quarter just a few minutes from the city center and conveniently connected by bus lines L8 and L11.
Abstract:
The exhaustion, absence, or simply the uncertainty about the size of fossil fuel reserves, added to the volatility of their prices and the growing instability of the supply chain, creates strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of this new energy carrier will depend, a priori, on controlling the risks associated with its handling and storage. Among these, an undeniable explosion hazard stands out as the main drawback of this alternative fuel.
This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. The introduction gives a general description of explosion processes and concludes that the resolution constraints make it necessary to model both turbulence and combustion. A critical review of the available methodologies for turbulence and for combustion follows, pointing out the strengths, deficiencies, and suitability of each. The review concludes that, given the existing limitations, the only viable combustion modeling strategy is to close a balance equation for the combustion progress variable with an expression for the turbulent burning velocity as a function of various parameters, that is, a turbulent flame speed model. It also concludes that the most suitable approach to turbulence is to choose between LES and RANS depending on the geometry and the resolution constraints of each particular problem.
Based on these findings, a combustion model is created within the turbulent flame speed framework. The proposed methodology overcomes the deficiencies of the available models in problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to prevent the growth of the flame brush thickness, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation that accounts for the simultaneous influence of the equivalence ratio, temperature, pressure, and steam dilution; the resulting formulation is valid over a wider range of temperature, pressure, and steam dilution than any previously available correlation. The turbulent burning velocity is obtained from correlations available in the literature; a comparison of several expressions against experimental results showed that the formulation due to Schmidt was the most adequate for the conditions studied.
Next, the role of flame instabilities in the propagation of combustion fronts is assessed. Their relevance proves significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents in nuclear power plants. A model is therefore developed to estimate the effect of instabilities, specifically the acoustic-parametric instability, on the flame propagation speed. This includes a mathematical derivation of the heuristic formulation of Bauwens et al. for the burning velocity enhancement due to flame instabilities, as well as an analysis of flame stability with respect to a cyclic velocity perturbation; the results are combined to complete the model of the acoustic-parametric instability.
The model was then applied to several problems of relevance to industrial safety, and the results were analyzed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model was validated, confirming its suitability for these problems. Finally, an in-depth analysis of the Fukushima-Daiichi catastrophe was carried out. Its objective was to determine the amount of hydrogen that exploded in reactor one, in contrast with other studies that focused on the amount of hydrogen generated during the accident. The most probable amount of hydrogen consumed during the explosion was determined to be 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause such significant damage, which illustrates the importance of this type of research. The industrial branches for which the developed model will be of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a special impact on the transport sector and on nuclear energy, both fission and fusion.
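As context for the turbulent flame speed framework discussed in this abstract, the classic Zimont closure expresses the turbulent burning velocity as u_t = A u'^{3/4} S_L^{1/2} α^{-1/4} l_t^{1/4}. The sketch below shows that published correlation only; the thesis's own improved model and its Schmidt-based selection are not reproduced here, and the input values are illustrative assumptions.

```python
# Hedged sketch of the Zimont turbulent flame speed correlation,
# u_t = A * u'^(3/4) * S_L^(1/2) * alpha^(-1/4) * l_t^(1/4).
# The constant A ~ 0.52 and the example inputs are illustrative.
def zimont_ut(u_prime, s_l, alpha, l_t, A=0.52):
    """Turbulent burning velocity (m/s) from the Zimont correlation.

    u_prime : turbulence velocity fluctuation (m/s)
    s_l     : laminar burning velocity (m/s)
    alpha   : unburnt-mixture thermal diffusivity (m^2/s)
    l_t     : integral turbulence length scale (m)
    """
    return A * u_prime ** 0.75 * s_l ** 0.5 * alpha ** -0.25 * l_t ** 0.25

# Example: lean hydrogen-air-like values (assumed, not from the thesis)
ut = zimont_ut(u_prime=2.0, s_l=0.6, alpha=2.2e-5, l_t=0.05)
```

Closures of this family feed directly into the balance equation for the combustion progress variable, which is why controlling the flame brush thickness, the deficiency the thesis addresses, matters for coarse-resolution simulations.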
Abstract:
Quercus pyrenaica is a vigorous root-resprouting species that has historically been coppiced intensively for firewood, charcoal, and wood pastures. Owing to the rural exodus and the appearance of new energy sources, coppicing was abandoned around 1970. Since then, stand stagnation has been evident in slow stem growth, branch dieback, and scarce acorn production. Finding alternative uses for these abandoned coppices is one of the biggest challenges currently facing Mediterranean silviculture, and conversion into high forest by thinning is one of the preferred alternatives. Thinning has been applied broadly but seldom tested, and without a comprehensive understanding of the causes of stand stagnation. In this PhD study, we test the hypothesis that an imbalance between above-ground and below-ground organs, the result of long-term coppicing, is the underlying cause of Q. pyrenaica decay.
In an experimental plot coppiced since at least the 12th century, genetic analyses were performed a priori to elucidate the inconspicuous clonal structure of the stand and thus evaluate how clonal size affects the functioning of these multi-stemmed trees. Clonal size negatively affected diametric stem growth, whereas root respiration rates, estimated from internal fluxes of CO2 through the xylem (FT) and from soil CO2 efflux, increased with clonal size. These results suggest a root-to-shoot (R:S) imbalance that intensifies with clonal size: the periodic removal of above-ground organs while the roots remain undisturbed leads to massive root systems that consume a large proportion of non-structural carbohydrates (NSC) in maintenance respiration, thereby constraining above-ground performance. Two multi-stemmed trees, composed of four and eight stems, were excavated and weighed; they showed R:S ratios (0.5 and 1, respectively) greater than those reported for trees of sexual origin. As in other root-resprouting species, NSC allocation to roots was favored ([NSC] > 20% in spring), and a large proportion of sapwood maintained throughout the root system (52%) stored a remarkable NSC pool (87 kg in the largest clone). In the root system of that eight-stemmed clone, dated by radiocarbon at 550 years old, 248 root connections were counted. The persistence of massive, old, and highly interconnected root systems suggests that the large amount of resources stored and consumed below ground offsets poor above-ground development with high vegetative resilience.
For a better understanding of the tree carbon budget and of NSC depletion in Q. pyrenaica clones, internal and external stem CO2 fluxes and soil CO2 effluxes were monitored, and above-ground (RS) and below-ground (RR) respiration were estimated. On a seasonal scale, RS and RR mirrored sap flow and stem growth dynamics and were determined mainly by the external CO2 fluxes, given the small contribution of FT to RS and RR (< 10% and < 2%, respectively). On a diel scale, the contribution of FT to RS increased up to 25% at times of high transpiration. The low xylem CO2 concentrations registered ([CO2] as low as 0.11%) explain the comparatively low FT, probably caused by limited xylem respiration and low resistance to radial CO2 diffusion imposed by the summer drought; the [CO2] pulses observed after the first autumn rains support this idea. Averaged over the growing season, soil CO2 efflux (39 mol CO2 day-1) was the largest respiratory flux, three and four times greater than RS (12 mol CO2 day-1) and RR (8-9 mol CO2 day-1), respectively. RR/RS ratios below one evidence an important additional weight of above-ground respiration as a tree carbon sink.
Finally, root trenching and stem girdling were tested as alternative silvicultural treatments aimed at increasing the NSC reserves in the stems of the clones. Preliminary results discourage root trenching because of the high cost likely incurred in wound closure. Stem girdling blocked NSC translocation to the roots and increased starch concentrations above the girdled zone, while the root system remained fed by the non-girdled stems of the clone. Further measurements and ancillary data are needed to verify that this positive response holds over the long term. To conclude, we highlight the need for multidisciplinary studies that allow an integral understanding of the degradation of Iberian Q. pyrenaica coppices, so that appropriate management can subsequently be applied to these abandoned stands.
Abstract:
The hypothesis that inspired this thesis holds that integrating photovoltaic components into the opaque envelope and the window shadings of office buildings at low latitudes, taking Brazil as the specific case, could increase their energy efficiency. This possibility rests on blocking a significant part of the solar irradiation incident on these buildings, thereby reducing the thermal loads for air conditioning while at the same time converting that irradiation into electricity, to the point that the investment costs are paid back in acceptable periods through the savings in energy demand. To test this hypothesis, the general objective was to analyze the integration of photovoltaic elements into roofs, opaque walls, and window shadings from the standpoint of the thermal and electrical energy balance.
The thesis first presents and analyzes the state of the art in the subjects studied and the research methodology, which is theoretical and based on calculations and simulations. Starting from a model office building located in Brazil, four case studies and a series of parameters are defined and analyzed for seven latitudes between -1.4° and -30°, separated from one another by approximately 5°. The results of more than 500 simulations are presented and discussed for the following concepts:
- solar resource, from the perspective of the irradiation available on the various capture surfaces suitable for integrating photovoltaic systems into buildings at low latitudes;
- shading analysis, to identify the vertical shading angles (VSA) for the protection of glazed openings in office buildings;
- thermal energy balance, to identify the screening effect of photovoltaic components on roofs, opaque walls, and window shadings on the cooling loads, and hence on the electricity demand;
- electrical energy balance, contrasting the thermal balance results with the energy potentially generated by the building envelopes under study;
- economic analysis, based on a mature-market price scenario for photovoltaic technology and on the net metering policy set by current Brazilian regulations, verifying the potential savings these photovoltaic systems could deliver and calculating several indicators of financial return.
In short, this research has led to conclusions that advance knowledge in this area and clarify the conditions that favor the application of photovoltaic components in building envelopes in Brazil and, to a certain extent, in other countries at equivalent latitudes.
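The economic analysis described above hinges on when cumulative bill savings repay the extra photovoltaic investment. A minimal, hedged sketch of such a discounted payback calculation; every figure below (capex, annual saving, discount rate) is an illustrative assumption, not a result of the thesis:

```python
# Hedged sketch of a discounted payback calculation: the first year in
# which cumulative discounted electricity-bill savings cover the extra
# PV investment. All figures are illustrative, not thesis results.
def discounted_payback(capex, annual_saving, rate, max_years=50):
    """Return the first year whose cumulative discounted savings
    reach capex, or None if it never pays back within max_years."""
    cumulative = 0.0
    for year in range(1, max_years + 1):
        cumulative += annual_saving / (1.0 + rate) ** year
        if cumulative >= capex:
            return year
    return None

# Example: 10,000 invested, 1,500/year saved, 6% discount rate (assumed)
years = discounted_payback(capex=10000.0, annual_saving=1500.0, rate=0.06)
# -> 9 (the undiscounted payback would be 7 years)
```

Real analyses of this kind would also fold in module degradation, tariff escalation, and the net metering rules the thesis considers.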
Abstract:
This doctoral thesis presents a comprehensive quality control procedure for photovoltaic plants, covering everything from the initial phase of estimating production expectations to the surveillance of the installation's performance once in operation. The procedure reduces the uncertainty associated with plant behaviour and increases long-term reliability, thereby optimizing performance. Photovoltaic technology has evolved enormously in recent years, and photovoltaic plants are now able to produce energy at prices fully competitive with other energy sources. This raises the demands on the performance and reliability of these facilities. Meeting those demands requires adapting the quality control procedures currently applied, as well as developing new methods that yield a more complete knowledge of the state of the plants and allow them to be kept under surveillance over time. In addition, today's tight operating margins require that production estimation methods with the lowest possible uncertainty be available during the design phase.
The quality control proposal presented in this work builds on earlier protocols oriented to the commissioning phase of a photovoltaic installation and complements them with methods applicable to the operation phase, paying particular attention to the main problems that appear in plants over their lifetime (hot spots, soiling impact, ageing...). It also incorporates a protocol for monitoring and analysing installation performance from its monitoring data, ranging from checking the validity of the recorded data itself to the detection and diagnosis of failures, and providing automated, detailed knowledge of the plants.
This procedure is oriented to facilitating operation and maintenance tasks so as to guarantee high operational availability of the installation. Returning to the initial phase of calculating production expectations, the data recorded at the plants are used to improve the methods for estimating incident irradiation, which is the component that adds the most uncertainty to the modelling process. The quality control procedure has been developed and applied at 39 large photovoltaic plants, totalling 250 MW, distributed across several European and Latin American countries.
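A standard metric behind this kind of performance surveillance is the performance ratio (PR): the plant's final energy yield divided by the yield expected from the incident irradiation. The abstract does not describe the thesis's specific method, so the following is only a minimal illustrative sketch; the function names and the 0.75 fault threshold are assumptions, not values from the work.

```python
def performance_ratio(e_ac_kwh, g_poa_kwh_m2, p_nom_kw):
    """Performance ratio: final yield over reference yield.

    e_ac_kwh      -- AC energy delivered over the period (kWh)
    g_poa_kwh_m2  -- plane-of-array irradiation over the period (kWh/m^2)
    p_nom_kw      -- nominal array power at standard test conditions (kW)
    """
    y_final = e_ac_kwh / p_nom_kw   # final yield (equivalent full-power hours)
    y_ref = g_poa_kwh_m2 / 1.0      # reference yield, G_STC = 1 kW/m^2
    return y_final / y_ref


def flag_underperformance(pr, threshold=0.75):
    """Crude fault flag: PR below a plant-specific threshold."""
    return pr < threshold


# Example: a 1 MW plant on a day with 5 kWh/m^2 of irradiation
pr = performance_ratio(e_ac_kwh=4000.0, g_poa_kwh_m2=5.0, p_nom_kw=1000.0)
# pr = (4000/1000) / 5 = 0.8 -> no fault flag
```

In practice a fixed threshold is only a first filter; seasonal temperature effects and sensor errors have to be accounted for before diagnosing a failure.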
Resumo:
Oxidation of molecular hydrogen catalyzed by [NiFe] hydrogenases is a widespread mechanism of energy generation among prokaryotes. Biosynthesis of the H2-oxidizing enzymes is a complex process subject to positive control by H2 and negative control by organic energy sources. In this report we describe a novel signal transduction system regulating hydrogenase gene (hox) expression in the proteobacterium Alcaligenes eutrophus. This multicomponent system consists of the proteins HoxB, HoxC, HoxJ*, and HoxA. HoxB and HoxC share characteristic features of dimeric [NiFe] hydrogenases and form the putative H2 receptor that interacts directly or indirectly with the histidine protein kinase HoxJ*. A single amino acid substitution (HoxJ*G422S) in a conserved C-terminal glycine-rich motif of HoxJ* resulted in a loss of H2-dependent signal transduction and a concomitant block in autophosphorylating activity, suggesting that autokinase activity is essential for the response to H2. Whereas deletions in hoxB or hoxC abolished hydrogenase synthesis almost completely, the autokinase-deficient strain maintained high-level hox gene expression, indicating that the active sensor kinase exerts a negative effect on hox gene expression in the absence of H2. Substitutions of the conserved phosphoryl acceptor residue Asp55 in the response regulator HoxA (HoxAD55E and HoxAD55N) disrupted the H2 signal-transduction chain. Unlike other NtrC-like regulators, the altered HoxA proteins still allowed high-level transcriptional activation. The data presented here suggest a model in which the nonphosphorylated form of HoxA stimulates transcription in concert with an as yet unknown global energy-responsive factor.
Resumo:
Hydroelectric power generation faces growing constraints on its expansion, directly related to environmental factors and to the limited availability of sites with economically exploitable potential. This motivates a possible source of hydroelectric energy: the hydraulic potential present in urban water distribution networks, which so far goes unexploited. This energy source can be developed by installing mini and micro hydroelectric plants in the conduits of the water distribution network. The objective of this work is to evaluate the hydroelectric potential exploitable by mini and micro plants using modelling and optimization techniques, in order to speed up and simplify the identification of potential sites and the installation of plants in the supply network. The work takes into account the many peculiarities of water distribution networks and of electro-hydraulic equipment, discussing how such generation could complement the grid during consumption peaks. It also discusses the contribution to the electricity distribution network, the logistics and implementation costs, and the types of turbines capable of exploiting the available energy potential. With the aid of hydraulic and optimization models, the positioning of generating plants in the network is evaluated, along with the possible benefits, restrictions and impediments to their use, and a methodology is developed to support the decision on whether or not a given network should be exploited for generation. This procedure and tool were developed from a case study of the water distribution system of the municipality of Piquete, in the state of São Paulo, Brazil.
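The recoverable potential at a candidate point in a distribution network comes down to flow and excess head, via P = ρ·g·Q·H·η. The abstract does not give the study's own calculation, so this is only a minimal illustrative sketch; the efficiency value and the node data are assumptions.

```python
RHO = 1000.0  # water density (kg/m^3)
G = 9.81      # gravitational acceleration (m/s^2)


def hydraulic_power_kw(flow_m3_s, excess_head_m, efficiency=0.65):
    """Electric power (kW) recoverable by a micro turbine placed where
    excess pressure would otherwise be dissipated: P = rho*g*Q*H*eta."""
    return RHO * G * flow_m3_s * excess_head_m * efficiency / 1000.0


# Rank candidate nodes (name, flow in m^3/s, excess head in m) by power
nodes = [("N1", 0.05, 30.0), ("N2", 0.02, 55.0), ("N3", 0.08, 12.0)]
ranked = sorted(nodes, key=lambda n: hydraulic_power_kw(n[1], n[2]),
                reverse=True)
# N1 yields about 9.6 kW and ranks first under these assumptions
```

A real assessment would take flows and pressures from a hydraulic simulation over the daily demand curve rather than single steady-state values.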
Resumo:
In the electric power sector, the field devoted to studying the addition of new generating plants to the system is called generation expansion planning. In this field, decisions on the siting and installation of new plants must be analysed thoroughly in order to obtain the various scenarios offered by the alternatives generated. For a number of reasons, the Brazilian generation system, predominantly hydroelectric, tends to be gradually altered by the addition of thermoelectric plants (UTEs). The UTE siting problem involves a large number of variables, and it must be possible to analyse the importance and contribution of each one. The overall objective of this work is the development of a thermoelectric plant siting model, here named SIGTE (Geographic Information System for Thermoelectric Generation), which integrates the functionality of GIS (Geographic Information System) tools with multicriteria decision methods. Starting from a global view of the study area, the spatial components of the problem (location of municipalities, transport modes, transmission lines of different voltages, environmental preservation areas, etc.) can be represented more realistically, and environmental criteria can be included in the analysis. In addition, SIGTE allows new decision variables to be added without compromising the approach. The model was applied to the State of São Paulo, but it is clearly usable for other systems or regions once the corresponding databases are updated. The model is intended to assist developers interested in building a plant, as well as government agencies responsible for evaluating and granting (or denying) installation and operation licences.
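In its simplest form, combining GIS layers with a multicriteria decision method reduces to a weighted sum of normalized criterion scores per candidate site. A minimal sketch follows; the criteria names, weights and scores are illustrative assumptions, not the SIGTE model itself.

```python
def weighted_score(scores, weights):
    """Weighted-sum multicriteria score.

    scores  -- criterion -> normalized score in [0, 1] (1 = best)
    weights -- criterion -> weight; weights must sum to 1
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)


# Hypothetical criteria derived from GIS layers, normalized per site
weights = {"grid_distance": 0.4, "fuel_transport": 0.3, "environment": 0.3}
sites = {
    "A": {"grid_distance": 0.9, "fuel_transport": 0.5, "environment": 0.6},
    "B": {"grid_distance": 0.4, "fuel_transport": 0.8, "environment": 0.9},
}
best = max(sites, key=lambda s: weighted_score(sites[s], weights))
# Site A scores 0.69 vs 0.67 for B, so A ranks first under these weights
```

Richer multicriteria methods (e.g. AHP-derived weights or outranking approaches) replace the fixed weights here, but the per-site aggregation over GIS-derived criteria is the same idea.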
Resumo:
Integrated master's thesis, Energy and Environmental Engineering, Universidade de Lisboa, Faculdade de Ciências, 2016
Resumo:
In recent weeks, Rosneft, a Russian state-owned oil company, has signed co-operation agreements with three Western corporations: America's ExxonMobil, Italy's Eni, and Norway's Statoil. In exchange for access to Russian oil fields on the continental shelf as minority shareholders, these Western investors will finance and carry out exploration there. They will also offer Rosneft technology transfer, staff exchanges and the purchase of shares in their assets outside Russia (for example in the North Sea or in South America). Rosneft's deals with Western energy companies prove that the Russian government is resuming the policy of a controlled opening-up of the Russian energy sector to foreign investors, a policy it initiated in 2006. So far, investors have been given access to the Russian electric energy sector and some onshore gas fields. The agreements signed now also allow them to work on the Russian continental shelf. This process is being closely supervised by the Russian government, which has enabled the Kremlin to maintain full control of the sector. The primary goal of this policy is to attract modern technologies and capital to Russia and to gain access to foreign assets, since this will help Russian corporations reinforce their positions in international markets. The signing of the above agreements does not guarantee that production will commence. These are high-risk projects. It remains uncertain whether crude can be extracted from those fields and whether their development will be cost-effective. According to estimates, the Russian Arctic shelf holds approximately 113 billion tonnes of hydrocarbons. The development of these fields, including building the necessary infrastructure, may consume over US$500 billion within 30 years. Furthermore, the legal regulations currently in force in Russia do not guarantee that foreign investors will have a share in the output from these fields.
Without foreign support, Russian companies are unlikely to cope with such technologically complicated and extremely expensive investments. In the most optimistic scenario, oil production in the Russian Arctic may commence in fifteen to twenty years at the earliest.