906 results for Thermoelectric power plants


Relevance: 80.00%

Abstract:

Growing environmental and health concerns, combined with the possibility of exploiting waste as a valuable energy resource, have led to the exploration of alternative methods for final waste disposal. In this context, the energy conversion of Municipal Solid Waste (MSW) in Waste-To-Energy (WTE) power plants is increasing throughout Europe, both in number of plants and in capacity, furthered by legislative directives. Due to the heterogeneous nature of waste, the energy conversion process differs in several respects from that of a conventional fossil fuel power plant. In fact, as a consequence of well-known corrosion problems, the thermodynamic efficiency of WTE power plants typically ranges between 25% and 30%. The new Waste Framework Directive 2008/98/EC promotes the production of energy from waste by introducing an energy efficiency criterion (the so-called “R1 formula”) to evaluate the recovery status of a plant. The aim of the Directive is to drive WTE facilities to maximize energy recovery and the utilization of waste heat, in order to substitute for energy produced by conventional fossil-fuel-fired power plants. This calls for novel approaches to maximizing the conversion of MSW into energy. In particular, the idea of an integrated configuration made up of a WTE plant and a Gas Turbine (GT) arises, driven by the desire to eliminate or at least mitigate the limitations affecting the WTE conversion process that bound the thermodynamic efficiency of the cycle. The aim of this Ph.D. thesis is to investigate, from a thermodynamic point of view, integrated WTE-GT systems that share the steam cycle, share the flue gas paths, or combine both. The analysis carried out investigates and defines the logic governing the match between the plants in terms of steam production and steam turbine power output as a function of the thermal powers introduced.
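For reference, the R1 criterion named above can be written out explicitly. The sketch below follows the formula given in Annex II of Directive 2008/98/EC; the plant figures in the example are invented for illustration.

```python
def r1_efficiency(elec_gj, heat_gj, e_fuel_gj, e_imported_gj, e_waste_gj):
    """R1 energy-efficiency indicator from Annex II of Directive 2008/98/EC.

    Ep weights exported electricity by 2.6 and exported heat by 1.1; the
    factor 0.97 accounts for losses in bottom ash and radiation. All inputs
    are annual energy amounts in GJ.
    """
    ep = 2.6 * elec_gj + 1.1 * heat_gj
    return (ep - (e_fuel_gj + e_imported_gj)) / (0.97 * (e_waste_gj + e_fuel_gj))

# Illustrative CHP plant (numbers invented):
r1 = r1_efficiency(elec_gj=400_000, heat_gj=150_000,
                   e_fuel_gj=20_000, e_imported_gj=10_000, e_waste_gj=1_500_000)
print(f"R1 = {r1:.2f}")  # plants permitted after 2008 need R1 >= 0.65
```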

Relevance: 80.00%

Abstract:

This work originates from the objectives, and the related implementing measures, of the 2006 reform of the sugar CMO (Common Market Organisation), and specifically from the national plan for the rationalization and conversion of sugar-beet production approved by MIPAF in 2007. The study concerns the conversion of the Finale Emilia (MO) sugar factory, owned by the sugar-beet group Co.Pro.B, into a plant generating electricity and heat by direct combustion of agricultural biomass. The plant is fed mainly by the dedicated cultivation of fiber sorghum (Sorghum bicolor), supplemented with agro-forestry resources. The study shows that 4,400 hectares of fiber sorghum, with an annual production of about 97,000 t of product at 75% dry matter, are needed to feed the biomass plant. The objective is to assess the impact of the new energy crop on the agricultural district and on farm economics. The methodology is based on the simulation of farm-level linear programming models that introduce fiber sorghum as an energy crop into the optimal plan of the farms considered. The models were calibrated on RICA (FADN) farms in order to reproduce real average crop allocations for three representative size classes: small farms up to 20 ha, medium farms from 20 to 50 ha, and large farms over 50 ha. The area entering the crop plans at farm level, when scaled up according to the representativeness of the farms in the study area, is insufficient to meet the supply requirements of the biomass plant: with this uptake, the cultivated area in the district reaches about 2,500 hectares against the 4,400 needed by the plant. The study therefore shows that a higher incentive, of about 80-90 €/ha, is needed to meet the required crop area at the territorial level. At these levels, the availability of the energy crop in the district amounts to about 9,500 hectares.
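As a rough illustration of the farm-level linear programming approach described above (not the thesis's actual models), the sketch below lets a hypothetical 20 ha farm choose its crop mix with and without a per-hectare incentive on fiber sorghum; all margins and constraints are invented.

```python
from scipy.optimize import linprog

def optimal_plan(sorghum_incentive_eur_ha):
    # Gross margins in EUR/ha (hypothetical): wheat, sugar beet, fiber sorghum.
    c = [-650.0, -900.0, -(600.0 + sorghum_incentive_eur_ha)]  # linprog minimizes
    A_ub = [[1, 1, 1],   # total land: 20 ha
            [0, 1, 0]]   # rotation cap on sugar beet
    b_ub = [20.0, 6.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    return res.x  # hectares of (wheat, beet, sorghum)

for incentive in (0.0, 85.0):
    wheat, beet, sorghum = optimal_plan(incentive)
    print(f"incentive {incentive:5.1f} EUR/ha -> sorghum {sorghum:.1f} ha "
          f"(wheat {wheat:.1f}, beet {beet:.1f})")
```

With these invented margins, sorghum enters the optimal plan only once the incentive lifts its margin above the best alternative, which mirrors the threshold behaviour reported in the study.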

Relevance: 80.00%

Abstract:

The evolution of power electronic components and the consequent development of static converters have made high energy efficiency attainable, both in electric drives and in the transmission and distribution of electrical energy. Energy efficiency is a very important issue in the current historical context, since an extremely high energy demand is being met mainly from non-renewable energy sources. The introduction of static converters has enabled a remarkable increase in the exploitation of renewable energy sources: consider, for example, inverters for photovoltaic plants or back-to-back converters for wind applications. As the power of a converter increases, so does its operating voltage: the voltage limits of IGBTs, the power electronic components most widely used in static converters, make structural modifications necessary when the voltage exceeds certain values. In medium and high voltage, multilevel structures are typically employed. Several multilevel configurations exist: in this work the existing structures are compared and the possibilities offered by the innovative Modular Multilevel Converter architecture, known as MMC, are evaluated. The most widespread structures at present are the Diode Clamped and the Cascaded converters. The former is not modular, since it requires a dedicated design for each number of voltage levels. The latter is modular, but requires separate, independent supplies for each module. The MMC structure is modular and needs a single DC-bus supply, but the presence of the capacitors demands particular attention in the design of the control technique, as in the Diode Clamped case. One possible application of the MMC converter is HVDC transmission, which has attracted growing interest in recent years.
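To make the capacitor-control issue mentioned above concrete, the sketch below shows the widely used sorting-based capacitor-voltage balancing for one MMC arm; the submodule count, voltages, and currents are illustrative, and real controllers add modulation and circulating-current control on top of this selection step.

```python
# Sorting-based capacitor-voltage balancing for one MMC arm with N submodules.
def select_submodules(cap_voltages, n_on, arm_current):
    """Return indices of the n_on submodules to insert this switching period.

    If the arm current charges the inserted capacitors (i > 0), insert the
    least-charged modules; if it discharges them (i < 0), insert the
    most-charged, so all capacitor voltages converge to the same value.
    """
    order = sorted(range(len(cap_voltages)), key=lambda i: cap_voltages[i],
                   reverse=(arm_current < 0))
    return order[:n_on]

caps = [2.01, 1.97, 2.05, 1.99, 2.02, 1.98]  # kV, 6 submodules (illustrative)
print(select_submodules(caps, n_on=3, arm_current=+120.0))  # charging: lowest first
print(select_submodules(caps, n_on=3, arm_current=-120.0))  # discharging: highest first
```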

Relevance: 80.00%

Abstract:

Beside the traditional paradigm of "centralized" power generation, a new concept of "distributed" generation is emerging, in which the user becomes a prosumer. During this transition, Energy Storage Systems (ESS) can provide multiple services and features that are necessary for a higher quality of the electrical system and for the optimization of non-programmable Renewable Energy Source (RES) power plants. An ESS prototype was designed, developed, and integrated into a renewable energy production system in order to create a smart microgrid and consequently manage the energy flow efficiently and intelligently as a function of the power demand. The produced energy can be fed into the grid, supplied directly to the load, or stored in batteries. The microgrid comprises a 7 kW wind turbine (WT) and a 17 kW photovoltaic (PV) plant. The load is given by the electrical utilities of a cheese factory. The ESS consists of two subsystems: a Battery Energy Storage System (BESS) and a Power Control System (PCS). With the aim of sizing the ESS, a Remote Grid Analyzer (RGA) was designed, realized, and connected to the wind turbine, the photovoltaic plant, and the switchboard. Afterwards, different electrochemical storage technologies were studied and, taking into account the load requirements of the cheese factory, the most suitable solution was identified in the high-temperature Na-NiCl2 battery technology. The data acquired from all electrical utilities provided a detailed load analysis, indicating an optimal storage size of a 30 kW battery system. Moreover, a container was designed and realized to house the BESS and PCS, meeting all requirements and safety conditions. Furthermore, a smart control system was implemented to handle the different applications of the ESS, such as peak shaving or load levelling.
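A minimal sketch of the peak-shaving application mentioned above, assuming a simple threshold rule; the grid threshold, time step, and the 30 kWh energy capacity are placeholders (the text specifies a 30 kW system, not an energy rating).

```python
# Threshold-based peak shaving: discharge when net load exceeds the grid
# threshold, recharge from renewable surplus (negative net load).
def peak_shaving(net_load_kw, threshold_kw=25.0, capacity_kwh=30.0,
                 soc_kwh=15.0, dt_h=0.25):
    grid = []
    for p in net_load_kw:
        if p > threshold_kw and soc_kwh > 0:            # shave the peak
            discharge = min(p - threshold_kw, soc_kwh / dt_h)
            soc_kwh -= discharge * dt_h
            grid.append(p - discharge)
        elif p < 0 and soc_kwh < capacity_kwh:          # store renewable surplus
            charge = min(-p, (capacity_kwh - soc_kwh) / dt_h)
            soc_kwh += charge * dt_h
            grid.append(p + charge)
        else:
            grid.append(p)
    return grid

profile = [10, 18, 32, 40, 28, -5, -12, 8]              # kW per 15-min step
print([round(g, 1) for g in peak_shaving(profile)])     # peaks clipped to 25 kW
```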

Relevance: 80.00%

Abstract:

The growing share of electricity from renewable energy sources requires a dynamic concept to compensate for peak-load periods and supply gaps in wind and solar power. Thanks to their high energetic availability and the storability of biogas, biogas plants can provide a flexible energy supply and, via a power-to-gas process, can also prevent an overload of the electricity grid during short-term power surpluses. Demand-driven operation of biogas plants, however, places high demands on the microbiology in the reactor, which has to adapt to frequently changing process conditions such as the organic loading rate of the reactor. Real-time monitoring of the fermentation process is therefore indispensable in order to detect disturbances in the microbial fermentation pathways early and to counteract them adequately.

Previous microbial population analyses have been limited to laborious molecular-biological investigations of the fermentation substrate, whose results are therefore available to the operator only with a delay. In this work, a laser absorption spectrometer for continuous measurement of the carbon isotope ratios of methane was tested for the first time at a research biogas plant. Isotope ratios varying with the organic loading rate and the process conditions could be measured. Using isolates from the investigated reactor, it was first shown that each methanogenic pathway (hydrogenotrophic, acetoclastic and methylotrophic) leaves a characteristic natural isotope signature in the biogas, so that the currently dominant methanogenic reactions can be identified from the isotope ratios in the biogas.

Through the use of 13C- and 2H-isotope-labelled substrates in pure and mixed cultures and batch reactors, together with HPLC and GC analyses of the metabolic products, several previously unknown carbon fluxes in bioreactors were identified, which in turn can affect the measured isotope ratios in the biogas. In this way the formation of methanol and of its microbial degradation products up to the final CH4 formation was reconstructed for the first time in an agricultural biogas plant on the basis of five isolates, and the occurrence of methylotrophic methanogenesis pathways was demonstrated. In addition, molecular-biological methods detected methane-oxidizing bacteria of numerous unknown species in the reactor, whose presence had not been expected given the low O2 content of biogas plants.

By constructing a synthetic DNA strand containing the binding sequences for eleven specific primer pairs, a new method was established with which a large number of microbial target organisms can be quantified by real-time PCR using a single uniform copy standard. Weekly qPCR analyses of fermenter samples over 70 days showed that the isotope ratios in the biogas are significantly influenced by the composition of the reactor microbiota. Besides the currently dominant methanogenesis pathways, it was also possible to identify several bacterial reactions, such as syntrophic acetate oxidation, acetogenesis and sulfate reduction, from the δ13C (CH4) values, demonstrating the high potential of continuous isotope measurement for process analytics in biogas plants.
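For readers unfamiliar with the notation, the carbon isotope ratios discussed above are reported in the standard delta notation relative to the VPDB standard; the sketch below computes δ13C from raw 13C/12C ratios, with sample values chosen only to mimic the contrast between methanogenic pathways.

```python
R_VPDB = 0.0112372  # 13C/12C ratio of the Vienna Pee Dee Belemnite standard

def delta13C(r_sample):
    """delta13C in per mil relative to VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Illustrative ratios only: hydrogenotrophic CH4 is typically more strongly
# 13C-depleted than acetoclastic CH4.
for label, r in [("hydrogenotrophic CH4", 0.010520),
                 ("acetoclastic CH4", 0.010653)]:
    print(f"{label}: delta13C = {delta13C(r):+.1f} per mil")
```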

Relevance: 80.00%

Abstract:

Large parts of the world are subject to one or more natural hazards, such as earthquakes, tsunamis, landslides, tropical storms (hurricanes, cyclones and typhoons), coastal inundation and flooding, and virtually the entire world is at risk of man-made hazards. In recent decades, rapid population growth and economic development in hazard-prone areas have greatly increased the potential of multiple hazards to cause damage and destruction of buildings, bridges, power plants, and other infrastructure, posing a grave danger to communities and disrupting economic and societal activities. Although an individual hazard is significant in many parts of the United States (U.S.), in certain areas more than one hazard may pose a threat to the constructed environment. In such areas, structural design and construction practices should address multiple hazards in an integrated manner to achieve structural performance that is consistent with owner expectations and general societal objectives. The growing interest in and importance of multiple-hazard engineering has been recognized recently. This has spurred the evolution of multiple-hazard risk-assessment frameworks and the development of design approaches, which have paved the way for future research toward sustainable construction of new and improved structures and retrofitting of existing ones. This report provides a review of the literature and the current state of practice for assessment, design, and mitigation of the impact of multiple hazards on structural infrastructure. It also presents an overview of future research needs related to the multiple-hazard performance of constructed facilities.

Relevance: 80.00%

Abstract:

In 2009 and 2010 a study was conducted on the Hiawatha National Forest (HNF) to determine whether whole-tree harvest (WTH) of jack pine would deplete soil nutrients in the very coarse-textured Rubicon soil. WTH is restricted on Rubicon sand in order to preserve soil fertility, but the increasing construction of biomass-fueled power plants is expected to increase the demand for forest biomass. The specific objectives of this study were to estimate the biomass and nutrient content of above- and below-ground tree components in mature jack pine (Pinus banksiana) stands growing on a coarse-textured, low-productivity soil; to determine pools of total C and N and exchangeable soil cations in Rubicon sand; and to compare the possible impacts of conventional stem-only harvest (CH) and WTH on soil nutrient pools and the implications for productivity of subsequent rotations. Four even-aged jack pine stands on Rubicon soil were studied. Allometric equations were used to estimate above-ground biomass and nutrients, and soil samples from each stand were taken for physical and chemical analysis. Results indicate that WTH will result in cation deficits in all stands, with exceptionally large Ca deficits occurring in two stands. Where a deficit does not occur, the cation surplus is small, and chemical weathering and atmospheric deposition are not anticipated to replace the removed cations. CH will result in a surplus of cations and will likely not cause productivity declines during the first rotation. However, even under CH the surplus is small, and chemical weathering and atmospheric deposition will not supply enough cations for the second rotation.
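A minimal sketch of the nutrient-budget logic behind these results, assuming a generic allometric power law; the coefficients, Ca concentrations, and soil pool below are invented for illustration, not the study's values.

```python
def biomass_kg(dbh_cm, a=0.10, b=2.4):
    """Allometric dry biomass of one tree component: M = a * DBH^b."""
    return a * dbh_cm ** b

stems_per_ha = 1200
dbh = 15.0                                   # cm, stand average (hypothetical)
stem_kg = biomass_kg(dbh) * stems_per_ha     # stem biomass, kg/ha
crown_kg = 0.25 * stem_kg                    # branches + foliage, kg/ha

ca_g_per_kg = {"stem": 0.8, "crown": 2.9}    # Ca concentrations (hypothetical)
removal = {
    "stem-only harvest (CH)": stem_kg * ca_g_per_kg["stem"] / 1000.0,
    "whole-tree harvest (WTH)": (stem_kg * ca_g_per_kg["stem"]
                                 + crown_kg * ca_g_per_kg["crown"]) / 1000.0,
}
soil_pool_kg_ha = 100.0                      # exchangeable Ca pool (hypothetical)
for scenario, kg_ha in removal.items():
    status = "surplus" if kg_ha < soil_pool_kg_ha else "deficit"
    print(f"{scenario}: removes {kg_ha:.0f} kg Ca/ha -> {status}")
```

Because crowns are far richer in cations than stemwood, WTH removes disproportionately more Ca than CH for a modest gain in biomass, which is the mechanism behind the deficits reported above.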

Relevance: 80.00%

Abstract:

To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as “biomass”). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations has significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and tapping unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass based on a set of evaluation criteria, such as accessibility to biomass, railway/road transportation network, water body and workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price. Intellectual Merit The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combing with the GIS-based approach as a precursor, have not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass fired or co-fired power plants, or torrefaction/pelletization operations. Additionally, the research results of this research will continue to be disseminated internationally through publications in journals, such as Biomass and Bioenergy, and Renewable Energy, and presentations at conferences, such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). 
There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
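As an illustration of the GIS-based screening step, the toy sketch below keeps only candidate sites that pass every evaluation criterion; the site names, attribute values, and thresholds are hypothetical.

```python
# Multi-criteria screening of candidate biofuel facility sites: a site is
# retained only if it passes every threshold test. All data are invented.
sites = {
    "site_A": {"biomass_kt_50mi": 420, "km_to_rail": 3.0, "km_to_water": 1.2, "workforce": 8000},
    "site_B": {"biomass_kt_50mi": 150, "km_to_rail": 0.8, "km_to_water": 0.4, "workforce": 2500},
    "site_C": {"biomass_kt_50mi": 510, "km_to_rail": 12.0, "km_to_water": 2.5, "workforce": 15000},
}
criteria = [
    ("biomass_kt_50mi", lambda v: v >= 300),   # enough feedstock within 50 miles
    ("km_to_rail",      lambda v: v <= 5.0),   # rail access
    ("km_to_water",     lambda v: v >= 0.5),   # buffer from water bodies
    ("workforce",       lambda v: v >= 5000),  # labor availability
]
candidates = [s for s, attrs in sites.items()
              if all(ok(attrs[k]) for k, ok in criteria)]
print(candidates)  # only site_A passes all screens
```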

Relevance: 80.00%

Abstract:

Excessive Cladophora growth in the Great Lakes has led to beach fouling and the temporary closure of nuclear power plants, and has been associated with avian botulism and the persistence of human pathogens. As the growth-limiting nutrient for Cladophora, phosphorus is the appropriate target for management efforts. Dreissenids (zebra and quagga mussels) have the ability to capture particulate-phase phosphorus (otherwise unavailable to Cladophora) and release it in a soluble, available form. The significance of this potential nutrient source is, in part, influenced by the interplay between phosphorus flux from the mussel bed and turbulent mixing in establishing the phosphorus levels to which Cladophora is exposed. It is hypothesized that under quiescent conditions phosphorus will accumulate near the sediment-water interface, setting up vertical phosphorus gradients and favorable conditions for resource delivery to Cladophora. These gradients would be eliminated under conditions of wind mixing, reducing the significance of the dreissenid-mediated nutrient contribution. Soluble reactive phosphorus (SRP) levels were monitored over dreissenid beds (densities on the order of 350 and 3,000 mussels·m⁻²) at a site 8 m deep in Lake Michigan. Monitoring was based on the deployment of Modified Hesslein Samplers, which collected samples for SRP analysis over a distance of 34 cm above the bottom at 2.5 cm intervals. Deployment intervals were established to capture a wind regime (calm, windy) that persisted for an interval consistent with the sampler equilibration time of 7 hours. Results indicate that increased mussel density leads to an increased concentration boundary layer; increased wind speed leads to entrainment of the concentration boundary layer; and increased duration of quiescent periods leads to an increased concentration boundary layer. This concentration boundary layer is of ecological significance and forms in the region inhabited by Cladophora.

Relevance: 80.00%

Abstract:

The work described in this thesis had two objectives. The first objective was to develop a physically based computational model that could be used to predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level. The second objective was to determine how the secondary phase inclusions observed in Pb1-xSnxTe alloys made by consolidating mechanically alloyed elemental powders impact the ability of the material to harvest waste heat and generate electricity in the 400 K to 700 K temperature range.

The motivation for this work was that, though the promise of this alloy as an unusually efficient thermoelectric power generator material in the 400 K to 700 K range had been demonstrated in the literature, methods to reproducibly control and subsequently optimize the material's thermoelectric figure of merit remain elusive. Mechanical alloying, though not typically used to fabricate these alloys, is a potential method for cost-effectively engineering these properties. Given that there are deviations from crystalline perfection in mechanically alloyed material, such as secondary phase inclusions, the question arises as to whether these defects are detrimental to thermoelectric function or, alternatively, whether they enhance the thermoelectric function of the alloy.

The hypothesis formed at the onset of this work was that the small secondary phase SnO2 inclusions observed to be present in the mechanically alloyed Pb1-xSnxTe would increase the thermoelectric figure of merit of the material over the temperature range of interest. It was proposed that the increase in the figure of merit would arise because the inclusions would not reduce the electrical conductivity to as great an extent as the thermal conductivity. If this were true, then the experimentally measured electronic conductivity in mechanically alloyed Pb1-xSnxTe alloys with these inclusions would not be less than that expected in alloys without them, while the portion of the thermal conductivity not due to charge carriers (the lattice thermal conductivity) would be less than what would be expected from alloys without inclusions. Furthermore, it would be possible to approximate the observed changes in the electrical and thermal transport properties using existing physical models for the scattering of electrons and phonons by small inclusions.

The approach taken to investigate this hypothesis was to first experimentally characterize the mobile carrier concentration at room temperature, along with the extent and type of secondary phase inclusions present, in a series of three mechanically alloyed Pb1-xSnxTe alloys with different Sn content. Second, the physically based computational model was developed. This model was used to determine what the electronic conductivity, Seebeck coefficient, total thermal conductivity, and the portion of the thermal conductivity not due to mobile charge carriers would be in these particular Pb1-xSnxTe alloys if there were no secondary phase inclusions. Third, the electronic conductivity, Seebeck coefficient, and total thermal conductivity were experimentally measured for these three alloys, with inclusions present, at elevated temperatures. The model predictions for electrical conductivity and Seebeck coefficient were directly compared to the experimental elevated-temperature electrical transport measurements.
The computational model was then used to extract the lattice thermal conductivity from the experimentally measured total thermal conductivity. This lattice thermal conductivity was then compared to what would be expected from the alloys in the absence of secondary phase inclusions. Secondary phase inclusions were determined by X-ray diffraction analysis to be present in all three alloys to a varying extent. The inclusions were found not to significantly degrade electrical conductivity at temperatures above ~400 K in these alloys, though they do dramatically impact electronic mobility at room temperature. It is shown that, at temperatures above ~400 K, electrons are scattered predominantly by optical and acoustical phonons rather than by an alloy scattering mechanism or by the inclusions. The experimental electrical conductivity and Seebeck coefficient data at elevated temperatures were found to be within ~10% of what would be expected for material without inclusions. The inclusions were also not found to reduce the lattice thermal conductivity at elevated temperatures. The experimentally measured thermal conductivity data were found to be consistent with the lattice thermal conductivity that would arise from two scattering processes: phonon-phonon scattering (Umklapp scattering) and the scattering of phonons by the disorder induced by the formation of a PbTe-SnTe solid solution (alloy scattering). As opposed to the case of electrical transport, the alloy scattering mechanism in thermal transport is shown to be a significant contributor to the total thermal resistance. An estimate of the extent to which the mean free time between phonon scattering events would be reduced by the presence of the inclusions is consistent with the above analysis of the experimental data.

The first important result of this work was the development of an experimentally validated, physically based computational model that can be used to predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level. This model will be critical in future work as a tool to determine, first, the highest thermoelectric figure of merit one can expect from this alloy system at a given temperature and, second, the optimum Sn content and doping level to achieve this figure of merit. The second important result of this work is the determination that the secondary phase inclusions observed to be present in the Pb1-xSnxTe made by mechanical alloying do not keep the material from having the same electrical and thermal transport at elevated temperatures that would be expected from "perfect" single-crystal material. The analytical approach described in this work will be critical in future investigations to predict how changing the size, type, and volume fraction of secondary phase inclusions can be used to influence thermal and electrical transport in this materials system.
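For context, the transport quantities discussed above combine into the dimensionless figure of merit ZT = S²σT/κ, and the lattice thermal conductivity is conventionally extracted by subtracting the electronic contribution estimated from the Wiedemann-Franz law, κₑ = LσT. The sketch below uses illustrative numbers, not the thesis's measurements.

```python
L0 = 2.44e-8  # W*Ohm/K^2, degenerate-limit Lorenz number

def zt(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k**2 * sigma_s_per_m * temp_k / kappa_w_per_mk

def kappa_lattice(kappa_total, sigma_s_per_m, temp_k, lorenz=L0):
    """Lattice part of kappa after subtracting kappa_e = L * sigma * T."""
    return kappa_total - lorenz * sigma_s_per_m * temp_k

T = 600.0                              # K
S, sigma, kappa = 220e-6, 4.0e4, 1.6   # V/K, S/m, W/(m K) -- illustrative
print(f"ZT({T:.0f} K) = {zt(S, sigma, kappa, T):.2f}")
print(f"kappa_lattice = {kappa_lattice(kappa, sigma, T):.2f} W/(m K)")
```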

Relevance: 80.00%

Abstract:

A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuels produced from various lignocellulosic biomass types, such as wood, forest residues, and agricultural residues, have the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling methods. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential biofuel facility locations for biofuel production from forest biomass. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, a railroad transportation network, a state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, a population census, biomass production, and no co-location with co-fired power plants. The resulting candidate sites served as inputs for the simulation and optimization models, which were built around key supply activities including biomass harvesting/forwarding, transportation, and storage. The onsite storage built at the biorefinery served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and the inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to remain consistent with cost. Compared with the optimization model, the simulation model represents a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level was tracked all year round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility is set up with an upper bound of 50 MGY and a lower bound of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
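A minimal sketch of the facility-location formulation described above (not the actual MPL model): binary open/close decisions, a 30-50 MGY size window for open sites, and biomass shipments chosen to meet an annual fuel demand at minimum delivered cost. All data are hypothetical; PuLP is used here for brevity.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

areas, sites = ["h1", "h2", "h3"], ["s1", "s2"]
supply = {"h1": 400, "h2": 550, "h3": 300}            # kt biomass/yr available
cost = {("h1", "s1"): 18, ("h1", "s2"): 26, ("h2", "s1"): 22,
        ("h2", "s2"): 15, ("h3", "s1"): 30, ("h3", "s2"): 19}  # $/t delivered
kt_per_mgy = 9.0                                       # kt biomass per MGY of fuel

m = LpProblem("biofuel_siting", LpMinimize)
open_site = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
size = {s: LpVariable(f"size_{s}", 0, 50) for s in sites}          # MGY
ship = {k: LpVariable(f"ship_{k[0]}_{k[1]}", 0) for k in cost}     # kt/yr

m += lpSum(cost[k] * ship[k] for k in cost)                        # delivered cost
for s in sites:
    m += size[s] >= 30 * open_site[s]                              # 30-50 MGY if open
    m += size[s] <= 50 * open_site[s]
    m += lpSum(ship[(a, s)] for a in areas) == kt_per_mgy * size[s]
for a in areas:
    m += lpSum(ship[(a, s)] for s in sites) <= supply[a]
m += lpSum(size[s] for s in sites) >= 60                           # MGY demand

m.solve()
print({s: (int(open_site[s].value()), round(size[s].value(), 1)) for s in sites})
```

Because no single site may exceed 50 MGY, the 60 MGY demand forces two sites open, each at its 30 MGY lower bound, which is the kind of demand-driven siting behaviour the abstract reports.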

Relevance: 80.00%

Abstract:

Stubacher Sonnblickkees (SSK) is located in the Hohe Tauern Range (Eastern Alps) in the south of Salzburg Province (Austria), in the Oberpinzgau region in the upper Stubach Valley. The glacier sits at the main Alpine crest and faces east, starting at elevations close to 3050 m; in the 1980s it terminated at 2500 m a.s.l. It had an area of 1.7 km² at that time, compared with 1 km² in 2013. The glacier can be classified as a slope glacier, i.e. the relief is covered by a relatively thin ice sheet and there is no regular glacier tongue. The rough subglacial topography makes for a complex surface topography, with various concave and convex patterns. The main reason for selecting this glacier for mass balance observations (as early as 1963) was to verify how the mass balance methods and conclusions derived during the more or less pioneering phase of glaciological investigations in the 1950s and 1960s could be applied to a complex glacier like SSK. The decision was influenced by the fact that close to the SSK there was the Rudolfshütte, a hostel of the Austrian Alpine Club (OeAV), newly constructed in the 1950s to replace the old hut dating from 1874. The new Alpenhotel Rudolfshütte, which was run by the Slupetzky family from 1958 to 1970, was the base station for the long-term observation; the cable car to Rudolfshütte, operated by the Austrian Federal Railways (ÖBB), was a logistic advantage. Another factor in choosing SSK as a glaciological research site was the availability of discharge records of the catchment area from the Austrian Federal Railways, who had turned the nearby lake Weißsee ('White Lake'), a former natural lake, into a reservoir for their hydroelectric power plants. In terms of regional climatic differences between the Central Alps in Tyrol and the Hohe Tauern, the latter experiences significantly higher precipitation, so one could expect new insights into the different responses of the two glaciers SSK and Hintereisferner (Ötztal Alps), where a mass balance series went back to 1952. In 1966 another mass balance series, with an additional focus on runoff recordings, was initiated at Vernagtferner, near Hintereisferner, by the Commission of the Bavarian Academy of Sciences in Munich. The usual and necessary link to climate and climate change was provided by a weather station newly founded (by Heinz and Werner Slupetzky) at the Rudolfshütte in 1961, which ran until 1967. Along with an extension and enlargement to the so-called Alpine Center Rudolfshütte of the OeAV, a climate observatory (suggested by Heinz Slupetzky) has been operating without interruption since 1980 under the responsibility of ZAMG and the Hydrological Service of Salzburg, providing long-term meteorological observations. The weather station is supported by the Berghotel Rudolfshütte (in 2004 the OeAV sold the hotel to a private owner) with accommodation and facilities. Direct yearly mass balance measurements were started in 1963, at first for 3 years as part of a thesis project. In 1965 the project was incorporated into the Austrian glacier measurement sites within the International Hydrological Decade (IHD) 1965-1974 and was afterwards extended via the International Hydrological Program (IHP) 1975-1981. During both periods the main financial support came from the Hydrological Survey of Austria. After 1981, funds were provided by the Hydrological Service of the Federal Government of Salzburg.
The research was conducted from 1965 onwards by Heinz Slupetzky of the (former) Department of Geography of the University of Salzburg. These activities received better recognition when the High Alpine Research Station of the University of Salzburg was founded in 1982, which brought in additional funding from the University. With recent changes concerning Rudolfshütte, however, it became unfeasible to keep the research station going; fortunately, at least the weather station at Rudolfshütte is still operating. In the pioneer years of the mass balance recordings at SSK, the main goal was to understand the influence of the complicated topography on the ablation and accumulation processes. With frequent strong southerly winds (foehn) on the one hand, and precipitation coming in with storms from the north to northwest on the other, snow drift is an important factor on the undulating glacier surface. This results in less snow cover in convex zones and maximum accumulation in concave or flat areas. As a consequence of the accentuated topography, characteristic ablation and accumulation patterns can be observed during the summer season every year, and these have been regularly observed for many decades. The process of snow depletion (Ausaperung) runs through a series of stages (described by the AAR) every year. The sequence of stages until the end of the ablation season depends on the weather conditions in a balance year. One needs a strongly negative mass balance year at the beginning of glacier measurements to find out the regularities; 1965, the second year of observation, brought a very positive mass balance with very little ablation but heavy accumulation. To date it is the year with the absolute maximum positive balance in the entire mass balance series since 1959, probably since 1950. The highly complex ablation patterns required a large number of ablation stakes at the beginning of the research, and it took several years to develop a clearer idea of the density of measurement points necessary to ensure high accuracy. A great number of snow pits and probing profiles (and additional measurements at crevasses) were necessary to map the accumulation patterns. Mapping the snow depletion, especially at the end of the ablation season, when it delineates the equilibrium line, provides some of the main basic data for drawing contour lines of mass balance and for calculating the total mass balance (on a regular-shaped valley glacier there might be an equilibrium line following a contour line of elevation, separating the accumulation area from the ablation area, but not at SSK). An example: in 1969/70, 54 ablation stakes and 22 snow pits were used on the 1.77 km² glacier surface. In the course of the study, the consistency of the accumulation and ablation patterns could be used to reduce the number of measurement points. At the SSK the stratigraphic system, i.e. the natural balance year, is used instead of the usual hydrological year. From 1964 to 1981, the yearly mass balance was calculated by direct measurements. Based on these records of 17 years, a regression analysis between the specific net mass balance and the ratio of ablation area to total area (AAR) has been used since then. The basic requirement is mapping the maximum snow depletion at the end of each balance year. There was the advantage of Heinz Slupetzky's detailed local and long-term experience, which ensured homogeneity of the series against individual influences on the mass balance calculations.
Verification took place as often as possible by means of independent geodetic methods, i.e. monoplotting, aerial and terrestrial photogrammetry, and more recently the application of PHOTOMODELLER and laser scanning. The semi-direct mass balance determinations used at SSK were tentatively compared with data from periods of mass/volume change, yielding promising first results on the reliability of the method. In recent years, re-analyses of mass balance series have been conducted by the World Glacier Monitoring Service, and this will be done at SSK too. The methods developed at SSK also contribute to another objective, much discussed in the 1960s within the community, namely to achieve time- and labour-saving methods that ensure the continuation of long-term mass balance series. The regression relations were used to extrapolate the mass balance series back to 1959; the maximum depletion could be reconstructed from photographs for those years. R. Günther (1982) calculated the mass balance series of SSK back to 1950 by analysing the correlation between meteorological data and the mass balance; he found a high statistical relation between measured and modelled mass balance figures for SSK. In spite of the complex glacier topography, interesting empirical experience was gained from the mass balance data sets, giving a better understanding of the characteristics of the glacier type, mass balance and mass exchange. It turned out that there are distinct relations between the specific net balance, net accumulation (defined as Bc/S) and net ablation (Ba/S) and the AAR, resulting in characteristic so-called 'turnover curves'. The diagram of SSK represents the type of a glacier without a glacier tongue. Between 1964 and 1966, a basic method was developed, starting from the idea that instead of measuring for years to cover the range between extreme positive and extreme negative yearly balances, one could record the AAR/snow depletion (Ausaperung) during one or two summers. The new method was applied on Cathedral Massif Glacier, a cirque glacier in British Columbia, Canada, with the same area as the Stubacher Sonnblickkees, during the summers of 1977 and 1978. It returned exactly the expected relations, e.g. mass turnover curves, as found on SSK. The SSK was mapped several times at scales of 1:5000 to 1:10000. Length variations have been measured since 1960 within the OeAV glacier length measurement programme. Between 1965 and 1981, there was a mass gain of 10 million cubic metres. With a time lag of 10 years, this resulted in an advance until the mid-1980s. Since 1982 there has been a distinct mass loss, amounting to 35 million cubic metres by 2013. In recent years, the glacier has disintegrated faster, forced by the formation of a periglacial lake at the glacier terminus and also by the outcrops of rocks (typical for the slope glacier type), which have accelerated the meltdown. The formation of this lake is well documented. The glacier has retreated by some 600 m since 1981. Since August 2002, a runoff gauge installed by the Hydrographical Service of Salzburg has recorded the discharge of the main part of SSK at the outlet of the new Unterer Eisboden See. The annual reports, submitted from 1982 on as a contractual obligation to the Hydrological Service of Salzburg, document the ongoing processes on the one hand, and on the other report the mass balance of SSK and outline the climatological drivers, based mainly on the met data of the Rudolfshütte observatory.
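A minimal sketch of the semi-direct method described above: regress the measured specific net balance on the AAR (the ratio of ablation area to total area mapped at maximum snow depletion, as defined in the text), then estimate the balance of later years from the mapped AAR alone. The year-by-year pairs below are invented for illustration, not SSK data.

```python
import numpy as np

# Ablation-area fraction at maximum depletion vs. measured specific net
# balance for the calibration years (invented values).
aar = np.array([0.15, 0.30, 0.45, 0.60, 0.75, 0.90])
b_n = np.array([800, 350, -50, -450, -900, -1350])   # mm w.e. per year

slope, intercept = np.polyfit(aar, b_n, 1)           # least-squares line
print(f"b_n = {slope:.0f}*AAR {intercept:+.0f} mm w.e.")

aar_observed = 0.70                                   # mapped in a later year
print(f"estimated balance: {slope * aar_observed + intercept:.0f} mm w.e.")
```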
There is an additional focus on estimating the annual water balance in the catchment area of the lake. Certain preconditions for the water balance equation are met in the area: runoff is recorded by the ÖBB power stations; the mass balance of the now approximately 20% glaciated area (mainly the Sonnblickkees) is measured; and the change in the snow and firn patches and their water content is estimated as well as possible (nowadays laser scanning and ground radar are available to measure the snow pack). There is a network of three precipitation gauges plus the recordings at Rudolfshütte. Evaporation is of minor importance. The long-term annual mean runoff depth in the catchment area is around 3,000 mm/year. The precipitation gauges have measured deficits between 10% and 35%, on average probably 25% to 30%. That means that the real precipitation in the Weißsee catchment area (at elevations between 2,250 and 3,000 m) is in the order of 3,200 to 3,400 mm a year. The mass balance record of SSK was the first one established in the Hohe Tauern region (since 1983 part of the Hohe Tauern National Park in Salzburg) and is one of the longest measurement series worldwide. Great efforts are under way to continue the series, to safeguard it against interruption, and to guarantee long-term monitoring of the mass balance and volume change of SSK (until the glacier is completely gone, which seems realistic in the near future as a result of ongoing global warming). Heinz Slupetzky, March 2014

Relevance: 80.00%

Abstract:

Finding adequate materials to withstand the demanding conditions in future fusion and fission reactors is a real challenge in the development of these technologies. Structural materials need to sustain high irradiation doses and temperatures that will change the microstructure over time. A better understanding of the changes produced by irradiation will allow for a better choice of materials, ensuring safer and more reliable future power plants. High-Cr ferritic/martensitic steels head the list of structural materials due to their high resistance to swelling and corrosion. However, it is well known that these alloys present an embrittlement problem, which could be caused by the presence of irradiation-induced defects, as these defects act as obstacles to dislocation motion. Therefore, the mechanical response of these materials will depend on the type of defects created during irradiation. In this work, we address a study of the effect Cr concentration has on single interstitial defect formation energies in FeCr alloys.
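For concreteness, formation energies of this kind are typically obtained from total energies of periodic supercells. A minimal sketch, with placeholder energies rather than actual DFT results:

```python
# For a self-interstitial in an N-atom supercell, the formation energy is
#   E_f = E(defect cell, N+1 atoms) - (N+1)/N * E(perfect cell, N atoms).
def interstitial_formation_energy(e_defect_ev, e_bulk_ev, n_atoms):
    """Formation energy of a single self-interstitial, in eV."""
    return e_defect_ev - (n_atoms + 1) / n_atoms * e_bulk_ev

E_bulk = -1097.34   # eV, perfect 128-atom bcc Fe(Cr) supercell (placeholder)
E_sia  = -1101.52   # eV, same cell plus one dumbbell interstitial (placeholder)
print(f"E_f = {interstitial_formation_energy(E_sia, E_bulk, 128):.2f} eV")
```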

Relevance: 80.00%

Abstract:

Nowadays, computer simulators are becoming basic tools for education and training in many engineering fields. In the nuclear industry, the role of simulation in the training of nuclear power plant operators is also recognized as being of the utmost relevance. As an example, the International Atomic Energy Agency sponsors the development of nuclear reactor simulators for education and arranges the supply of such simulation programs. Aware of this, in 2008 Gas Natural Fenosa, a Spanish gas and electric utility that owns and operates nuclear power plants and promotes university education in the nuclear technology field, provided the Department of Nuclear Engineering of Universidad Politécnica de Madrid with the Interactive Graphic Simulator (IGS) of the "José Cabrera" (Zorita) nuclear power plant, an industrial facility whose commercial operation ceased definitively in April 2006. It is a state-of-the-art, full-scope, real-time simulator that was used for training and qualification of the operators of the plant control room, as well as to understand and analyse the plant dynamics and to develop, qualify and validate its emergency operating procedures.