907 results for Large-scale system


Relevance: 90.00%

Abstract:

Urban developments have exerted immense pressure on wetlands. Urban areas are normally centers of commercial activity and continue to attract migrants in large numbers in search of employment from different areas. As a result, habitations keep coming up in natural areas and flood plains. This is happening in various Indian cities and towns, where large habitations are coming up in low-lying areas, often encroaching even on drainage channels. In some cases, houses are constructed even on top of nallahs and drains. In the case of Kochi the situation is even worse, as the base of the urban development itself stands on a completely reclaimed island. The topography and geology also demanded further land reclamation as the city developed into an agglomerative cluster. Cochin is a coastal settlement interspersed with a large backwater system and fringed on the eastern side by laterite-capped low hills, from which a number of streams drain into the backwater system. The ridge line of the eastern low hills provides a well-defined watershed delimiting the Cochin basin, which helps to confine the environmental parameters within a physical limit. This leads to the obvious conclusion that, if physiography alone is considered, the western flatland is ideal for urban development. However, developing it would result in serious environmental deterioration, since the flatland consists mainly of wetland, and making land available would require large-scale filling of these wetlands, which include shallow mangrove-fringed water sheets, paddy fields, Pokkali fields and the estuary. The urban boundaries of Cochin are expanding fast, with a consequent over-stretching of the existing fabric of basic amenities and services. Urbanisation leads to the transformation of agricultural land into built-up areas, with concomitant problems of water supply, drainage, garbage and sewage disposal. Many of the environmental problems of Cochin are hydrologic in origin, such as water-logging and floods, sedimentation and pollution in the water bodies, and shoreline erosion.

Relevance: 90.00%

Abstract:

This is an attempt to understand the important factors that control the occurrence, development and hydrochemical evolution of groundwater resources in sedimentary multi-aquifer systems. The primary objective of this work is an integrated study of the hydrogeology and hydrochemistry with a view to elucidating the hydrochemical evolution of groundwater resources in the aquifer systems. The study is taken up in a typical coastal sedimentary aquifer system evolved under a fluvio-marine environment in the coastal area of Kerala, known as Kuttanad. The present study has been carried out to understand the aquifer systems, their interrelationships and their evolution in the Kuttanad area of Kerala. The multi-aquifer systems in the Kuttanad basin were formed from sediments deposited under fluvio-marine and fluvial depositional environments, and the marine transgressions and regressions of the geological past, together with palaeo-climatic conditions, influenced the hydrochemical environment in these aquifers. The evolution of groundwater and the hydrochemical processes involved in the formation of the present-day water quality are elucidated from hydrochemical studies and from information derived from the aquifer geometry and hydraulic properties. The Kuttanad area comprises three types of aquifer systems: a phreatic aquifer underlain by a Recent confined aquifer, followed by Tertiary confined aquifers. These systems were formed by the deposition of sediments under fluvio-marine and fluvial environments. The study of the hydrochemical and hydraulic properties of the three aquifer systems proved that they are separate entities. The phreatic aquifers in the area have low hydraulic gradients and high rejected recharge. The Recent confined aquifer has very poor hydraulic characteristics, and recharge to this aquifer is very low. The Tertiary aquifer system is the most potential freshwater aquifer system in the area, and groundwater flow in this aquifer converges towards the central part of the study area (Alleppey town) due to large-scale pumping for water supply from this aquifer system. Mixing of waters and anthropogenic interference are the dominant processes modifying the hydrochemistry of the phreatic aquifers, whereas leaching of salts and cation exchange are the dominant processes modifying the hydrochemistry of groundwater in the confined aquifer system of the Recent alluvium. Two significant chemical reactions modifying the hydrochemistry in the Recent aquifers are the oxidation of iron in ferruginous clays, which contributes hydrogen ions, and the decomposition of organic matter in the aquifer system, which consumes hydrogen ions. The hydrochemical environment is entirely different in the Tertiary aquifers, as the groundwater in this aquifer system is palaeo water that evolved during various marine transgressions and regressions; these waters are being modified by leaching of salts, cation exchange and chemical reactions under a strongly reducing environment. It is proved that the salinity observed in the groundwater of the Tertiary aquifers is not due to seawater mixing or intrusion, but to dissolution of salts from the clay formations and to ion exchange processes. Fluoride contamination in this aquifer system lacks a regional pattern and is more or less site-specific in nature. The lowering of piezometric heads in the Tertiary aquifer system has developed as a consequence of large-scale pumping over a long period. Hence, pumping from this aquifer system has to be regulated as a groundwater management strategy. Pumping from the Tertiary aquifers with high-capacity pumps leads to well failures and to mixing of saline water from the brackish zones; such mixing zones are identified by the hydrochemical studies. This is the major contamination of the Tertiary aquifer system and requires immediate attention. The use of pumps above 10 HP capacity in wells tapping the Tertiary aquifers should be discouraged for the sustainable development of these aquifers. The recharge areas need to be identified precisely so that the aquifer systems can be recharged through artificial means.

Relevance: 90.00%

Abstract:

The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data demands automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and it arises for various reasons. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric variables. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star, so one way to identify the type of a variable star and classify it is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is the application of mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to support the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (assuming some underlying distribution for the data) and non-parametric methods (assuming no statistical model such as a Gaussian). Many of the parametric methods are based on variations of discrete Fourier transforms, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, caused by regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial." Derekas et al. (2007) and Deb et al. (2010) state: "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification." It would benefit the variable star community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
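
As an illustration of the period-search and phase-folding steps described above, here is a minimal sketch using the Lomb-Scargle implementation in astropy (a related periodogram, not the exact methods evaluated in the thesis); the light curve is synthetic and all numbers are hypothetical.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 200.0, 500))   # unevenly spaced observation times (days)
true_period = 2.5
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0.0, 0.02, t.size)

# Periodogram over an automatically chosen frequency grid
frequency, power = LombScargle(t, mag, 0.02).autopower()
best_period = 1.0 / frequency[np.argmax(power)]

# Folding the light curve on the recovered period gives the phased light curve
phase = (t / best_period) % 1.0
print(f"recovered period: {best_period:.4f} d (true: {true_period} d)")
```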

Relevance: 90.00%

Abstract:

This thesis develops various computer models, calculation procedures and methods to support the integration of large amounts of wind power into the electrical power supply. The simulation model for the simultaneously fed-in wind power generates aggregated output time series for arbitrarily composed groups of wind turbines, based on wind and power data measured in the recent past. This model provides essential base data for analysing wind power feed-in, including for future scenarios. Various statistical analyses and illustrative visualizations are developed for investigating the effects of wind power feed-in from large-area turbine clusters in the gigawatt range. The model developed within this work for calculating the currently fed-in wind power from online-measured power data of representative wind farms provides valuable information for the power and frequency control performed by grid operators. The associated procedures for determining representative sites and verifying their representativeness form the basis for accurately mapping the wind power feed-in of larger supply areas from only a few power measurements at wind farms. A further valuable tool for the optimal integration of wind energy into the electrical power supply are the prediction models, which determine the wind power feed-in to be expected in the short to medium term. Building on previous research, two models based on artificial neural networks are presented that provide the expected time course of wind power for grid regions and control zones, using either measured power data or forecast meteorological parameters. The combination of the online model and the short-term and day-ahead prediction models in one software package offers an attractive complete solution for integrating wind power into the control centres of grid operators. The interfaces developed and the modular structure of the program allow simple and fast implementation in any system environment. Based on the capabilities of the online and prediction models, operating strategies are treated for wind farms aggregated into gigawatt-scale clusters, which are intended to enable an integration of the planned offshore wind farms that is optimal from ecological and economic points of view as well as with regard to security of supply.
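
For illustration only, the following sketch trains a small feed-forward network to map recent measured feed-in values to the next time step, loosely in the spirit of the ANN-based prediction models described above; it is not the thesis' implementation, and the data, window length and network size are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic normalized feed-in series standing in for measured wind power data
power = np.cumsum(rng.normal(0.0, 0.05, 2000)) % 1.0

window = 6                                   # last 6 measurements as input features
X = np.array([power[i:i + window] for i in range(len(power) - window)])
y = power[window:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-200], y[:-200])                # train on all but the last 200 steps
print("test R^2:", model.score(X[-200:], y[-200:]))
```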

Relevance: 90.00%

Abstract:

The growing world population entails a higher energy demand, which must, however, respect the principles of sustainable development. Today's centralized supply of electrical energy is dominated by a few generating plants based on fossil primary energy sources and nuclear power, which supply the spatially distributed consumers reliably and economically through a structured supply system. The electricity networks contain no storage capacity worth mentioning, so the energy and power requested by consumers must be covered by the power plants at all times. Driven by the liberalization of the energy markets and the demanded reduction of Luxembourg's energy dependence, the supply is undergoing a transition towards greater energy efficiency and increased use of intermittent renewable energy sources. The electrical energy generated from wind power is stored in high-performance lead-acid accumulators erected in rural areas near the wind power plants. The time-shifted feed-in of this stored energy, as refined electrical power during load peaks, into the 20 kV supply network of CEGEDEL constitutes the innovation in Luxembourg's electricity supply. The analysis is thus limited to the regional, relatively small-scale integration of wind power into the electrical power supply of the Grand Duchy of Luxembourg. The regional integration of wind power is placed in the foreground of the investigation; supra-regional balancing effects via high-voltage lines of the 230/400 kV systems are left out of consideration. Providing electrical peak power close to the consumers also reduces the transmission costs from distant peak-load power plants, and the expansion of power plant capacity can be postponed. Greenhouse gas emissions from thermal power plants are partly reduced. Profitability calculations for hybrid plants composed of wind power plants and high-performance lead-acid accumulators provide further information on the use of these decentralized storage units as partners of a sustainable energy supply in rural areas. The investigated feed-in of renewable peak power can also be transferred to developing countries that lack central power plant capacity and distribution networks.
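
The time-shifted feed-in idea lends itself to a compact illustration. The following sketch, with entirely hypothetical profiles and ratings, charges a storage unit from wind power off-peak and discharges it into the grid during load peaks; it is a simplification, not the system studied above.

```python
# Peak shaving with a wind-charged battery; all numbers are hypothetical.
def peak_shaving(wind_kw, load_kw, capacity_kwh, power_kw, dt_h=1.0):
    soc = 0.0                        # battery state of charge in kWh
    grid_feed = []
    for w, l in zip(wind_kw, load_kw):
        if l < 0.8 * max(load_kw):   # off-peak: charge the battery from wind
            charge = min(w, power_kw, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            grid_feed.append(w - charge)
        else:                        # peak hours: discharge into the grid
            discharge = min(power_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid_feed.append(w + discharge)
    return grid_feed

wind = [300, 400, 350, 200, 100, 150]    # kW, hypothetical hourly values
load = [500, 550, 900, 1000, 950, 600]   # kW
print(peak_shaving(wind, load, capacity_kwh=800, power_kw=400))
```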

Relevance: 90.00%

Abstract:

The realization of 'ecological networks' is linked with hopes of halting the loss of biological diversity. Plans for establishing such interconnected systems are emerging both at the pan-European level (Pan-European Ecological Network, PEEN) and in the individual states. In federal Germany, small-scale habitat network plans are drawn up at the state level; first concepts exist for the national habitat network. The present work addresses these supra-local, strategically preparatory planning levels. The aims of the network are the preservation of populations, particularly of endangered species, and the enabling of dispersal and migration. Owing to the lack of data on species and populations, the concepts and models of population ecology cannot readily be transferred to the supra-local planning levels. According to the objectives stated above, however, the planning of networks should be guided by the requirements of the species that depend on habitat connectivity. The aim of this work was to develop a practicable GIS-based planning aid that integrates as much ecological knowledge as possible under the condition of limited information availability. As a foundation, the global, European-international and national framework conditions and requirements for establishing such networks are first compiled in overview form. The PEEN strategies deserve emphasis here, as they demand the integration of ecological content, particularly by considering spatial-functional relationships. A comprehensive analysis of the state-wide habitat network plans in Germany revealed sometimes considerable differences between the state plans, which currently make it impossible to assemble a coherent national concept. Not all states have state-wide habitat network plans, and state concepts that base the planned network on species requirements exist only in rudimentary form. Furthermore, a targeted suitability assessment of existing GIS-based connectivity models and concepts was carried out, taking into account the data regularly available in Germany. As no integrative rule-based approaches existed, the vector-based algorithm HABITAT-NET was developed. It works with 'requirement types' with respect to habitat connectivity, which stand in for different ecological groups of (target) species with terrestrial dispersal. Dispersal capacity is combined with a coarse typology of habitat affinity. The most important input data are the respective (potential) habitats of the species of a requirement type and the surrounding land use. In building 'habitat networks' (Part I), graded 'functional and connecting spaces' are generated and linked into a spatial system. Subsequently, the current fragmentation of the networks by transport routes can be identified in order to derive priority sections for reconnection (Part II). In parallel, the concept of unfragmented functional spaces (UFR) is devised, which makes it possible to indicate habitat fragmentation at the landscape level. Finally, the suitability of the results as a small-scale target framework, validation tests, comparisons with existing network plans, and various parameter settings in the GIS algorithm are discussed.
Possible applications are explained, for example, for habitat network and landscape planning, spatial planning, strategic environmental assessment, transport route planning, support of the habitat corridor concept, coherence of the NATURA 2000 protected area system, and the development of environmental information systems. A retrospective and outlook is finally combined with a statement of further research needs.
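
The core buffering step of such a vector-based connectivity algorithm can be illustrated compactly. The sketch below assumes the shapely library; the geometries, dispersal distance and merging rule are simplified assumptions for illustration, not the actual HABITAT-NET rules. It dilates habitat patches by a dispersal distance and merges overlapping zones into connected networks.

```python
from shapely.geometry import Point
from shapely.ops import unary_union

habitats = [Point(0, 0).buffer(100),       # habitat patches as polygons (m)
            Point(350, 0).buffer(80),
            Point(2000, 0).buffer(120)]

dispersal_m = 150                           # dispersal capacity of the species group
zones = [h.buffer(dispersal_m) for h in habitats]
network = unary_union(zones)                # merge overlapping dispersal zones

# Patches whose dispersal zones touch end up in one connected component.
n_components = len(network.geoms) if network.geom_type == "MultiPolygon" else 1
print("number of separate habitat networks:", n_components)
```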

Relevance: 90.00%

Abstract:

Social bookmark tools are rapidly emerging on the Web. In such systems users set up lightweight conceptual structures called folksonomies. Currently, these systems provide relatively little structure. In this paper we discuss how association rule mining can be adopted to analyze and structure folksonomies, and how the results can be used for ontology learning and for supporting emergent semantics. We demonstrate our approach on a large-scale dataset stemming from an online system.
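
To illustrate the idea, the following minimal sketch mines two-element association rules from hypothetical tag assignments; the data, thresholds and brute-force enumeration are illustrative simplifications rather than the algorithm used in the paper.

```python
from itertools import combinations
from collections import Counter

# Each "transaction" is the tag set a user assigned to one resource.
posts = [{"web", "semantic", "ontology"},
         {"web", "semantic"},
         {"web", "folksonomy"},
         {"semantic", "ontology"}]

min_support, min_conf = 0.4, 0.6
n = len(posts)
item_count = Counter(tag for p in posts for tag in p)
pair_count = Counter(frozenset(c) for p in posts for c in combinations(sorted(p), 2))

# Rules like {web} -> {semantic} suggest candidate concept relations.
for pair, cnt in pair_count.items():
    if cnt / n >= min_support:
        a, b = tuple(pair)
        for x, y in ((a, b), (b, a)):
            conf = cnt / item_count[x]
            if conf >= min_conf:
                print(f"{{{x}}} -> {{{y}}}  support={cnt/n:.2f} confidence={conf:.2f}")
```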

Relevance: 90.00%

Abstract:

As the number of resources on the web exceeds by far the number of documents one can track, it becomes increasingly difficult to remain up to date on one's own areas of interest. The problem becomes more severe with the increasing fraction of multimedia data, from which it is difficult to extract a conceptual description of the contents. One way to overcome this problem is social bookmark tools, which are rapidly emerging on the web. In such systems, users set up lightweight conceptual structures called folksonomies and thus overcome the knowledge acquisition bottleneck. As more and more people participate in the effort, the use of a common vocabulary becomes more and more stable. We present an approach for discovering topic-specific trends within folksonomies. It is based on a differential adaptation of the PageRank algorithm to the triadic hypergraph structure of a folksonomy. The approach allows for any kind of data, as it does not rely on the internal structure of the documents; in particular, this makes it possible to consider different data types in the same analysis step. We run experiments on a large-scale real-world snapshot of a social bookmarking system.
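
The differential-ranking idea can be sketched as follows: run a PageRank-style weight spreading on the folksonomy graph once with a uniform preference vector and once with a topic preference, and rank by the difference. The graph below is a simplified binary projection of the triadic structure, and the damping factor and numbers are hypothetical; this is not the paper's exact algorithm.

```python
import numpy as np

nodes = ["user1", "user2", "python", "web", "doc1", "doc2"]
edges = [(0, 2), (0, 4), (2, 4),         # user1 tagged doc1 with "python"
         (1, 3), (1, 5), (3, 5)]         # user2 tagged doc2 with "web"

A = np.zeros((6, 6))
for i, j in edges:                       # undirected co-occurrence weights
    A[i, j] = A[j, i] = 1.0
P = A / A.sum(axis=0)                    # column-stochastic transition matrix

def pagerank(pref, d=0.7, iters=100):
    w = np.full(6, 1 / 6)
    for _ in range(iters):
        w = d * P @ w + (1 - d) * pref   # spread weight, pull back to preference
    return w

baseline = pagerank(np.full(6, 1 / 6))
topic = np.zeros(6); topic[2] = 1.0      # preference on the tag "python"
differential = pagerank(topic) - baseline
print(dict(zip(nodes, np.round(differential, 3))))
```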

Relevance: 90.00%

Abstract:

This thesis investigates the emerging future possibilities, strengths and weaknesses of combined heat and power (CHP). This is done against the background of climate change, the integration of rising shares of renewable energies into power generation, and the resulting challenge of shaping a secure and sustainable electricity supply. The focus is on diesel engine CHP and the use of renewable fuels. It is assumed that the transition to purely renewable power generation in Germany will involve substantial use of the high potential of wind energy, which is low-cost and environmentally friendly but extremely fluctuating in output. Combined heat and power with diesel engines was examined as a decentralized integration tool. Owing to its great flexibility and its high efficiency at comparatively small ratings, it matches the requirements of simultaneous decentralized heat use very well. The dissertation examines and describes the boundary conditions of diesel engine CHP. Building on this, different models of wind integration through CHP are developed, and the balancing of wind power generation by CHP is simulated in diverse variations. In addition, decentralized CHP plants are considered with regard to coordinated joint operation and to optimal sizing for balancing wind energy. The topic of environmental impacts is discussed for the described context of renewable energies and CHP. It is shown that the approaches used today for evaluating CHP distort the results. In contrast, the so-called output method was presented as a life cycle assessment method that, unlike the other methods, introduces no distorting assumptions into the impact assessment and thus remains an unambiguous and purely scientific evaluation. This provides the basis for evaluating the different technologies and scenarios and for placing CHP in the context of energy generation. Using the output method it is shown computationally, among other things, that coupled electricity and heat generation in CHP plants is indeed the optimal use of the renewable fuels biogas and vegetable oil with respect to resource input, greenhouse gas savings and exergy production. The question of where the bioenergy required for power generation by diesel CHP plants can come from was also examined. It is established that the agricultural area usable in Germany would suffice to cover only part of the power generation. Domestic biogas and sustainably produced imported vegetable oil, which should largely be grown on degraded soils, can supply the necessary fuel energy. To produce sufficient vegetable oil abroad, an agricultural area of 6 to 12 million hectares is required. The result is that fully balancing the wind energy residual load by CHP running on renewable fuels is sensible and feasible. This wind-CHP-DSM system should be complemented by a power grid that uses hydropower for most of the control energy tasks and that enables large-area balancing of renewable energies across Europe and the neighbouring regions.
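
A minimal sketch of the balancing principle simulated in the thesis might look as follows: the aggregated CHP cluster serves whatever residual load the fluctuating wind feed-in leaves uncovered, within its rated range. All numbers are hypothetical.

```python
# Residual-load dispatch for an aggregated CHP cluster; hypothetical values.
def chp_dispatch(load_mw, wind_mw, chp_max_mw):
    residual = [l - w for l, w in zip(load_mw, wind_mw)]
    # CHP covers the positive residual load, capped at its rated power.
    return [min(max(r, 0.0), chp_max_mw) for r in residual]

load = [60, 65, 70, 68, 55]      # MW, hypothetical hourly demand
wind = [40, 10, 5, 30, 70]       # MW, fluctuating wind feed-in
print(chp_dispatch(load, wind, chp_max_mw=50))   # -> [20, 50, 50, 38, 0.0]
```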

Relevance: 90.00%

Abstract:

The global power supply faces several severe and fundamental threats, in particular a steadily increasing power demand, diminishing and degrading fossil and nuclear energy resources, very harmful greenhouse gas emissions, significant energy injustice and a structurally misbalanced ecological footprint. Photovoltaic (PV) power systems are analysed in various respects, focusing on economic and technical considerations of supplemental and substitutional power supply to the constrained conventional power system. To infer the most relevant system approach for PV power plants, the several solar resources available for PV systems are compared. By combining the different solar resources and their respective economics, two major PV systems are identified as being very competitive in almost all regions of the world. The experience curve concept is used as a key technique for developing scenario assumptions on economic projections for the decade of the 2010s. The main drivers for cost reductions in PV systems are the learning rate and the production growth rate; thus several relevant aspects are discussed, such as research and development investments, the technical PV market potential, different PV technologies and the energetic sustainability of PV. Three major market segments for PV systems are identified: off-grid PV solutions, decentralised small-scale on-grid PV systems (several kWp) and large-scale PV power plants (tens of MWp). Mainly by applying the 'grid-parity' and 'fuel-parity' concepts on a per-country, local market and conventional power plant basis, the global economic market potential for all major PV system segments is derived. The PV power plant hybridization potential of all relevant power technologies and the global power plant structure are analyzed regarding technical, economic and geographical feasibility. Key success criteria for hybrid PV power plants are discussed and comprehensively analysed for all adequate power plant technologies, i.e. oil-, gas- and coal-fired power plants, wind power, solar thermal power (STEG) and hydro power plants. For the 2010s, detailed global demand curves are derived for hybrid PV-fossil power plants on a per power plant, per country and per fuel type basis. The fundamental technical and economic potentials for hybrid PV-STEG, hybrid PV-wind and hybrid PV-hydro power plants are considered. The global resource availability for PV and wind power plants is excellent; thus knowing whether hybrid PV-wind power plants behave competitively or complementarily on a local basis is of utmost relevance. The complementarity of hybrid PV-wind power plants is confirmed. As a consequence, almost no reduction of the global economic PV market potential needs to be expected, and more complex power system designs on the basis of hybrid PV-wind power plants are feasible. The final target of implementing renewable power technologies into the global power system is a nearly 100% renewable power supply. Besides balancing facilities, storage options are needed, in particular for seasonal power storage. Renewable power methane (RPM) offers respective options. A comprehensive global and local analysis is performed for a hybrid PV-Wind-RPM combined cycle gas turbine power system. Such a power system design might be competitive and could offer solutions for nearly all current energy system constraints, including the heating and transportation sectors and even the chemical industry.
Summing up, hybrid PV power plants become very attractive, and PV power systems will very likely evolve together with wind power into the major and final source of energy for mankind.
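
The 'grid-parity' comparison mentioned above reduces to checking whether the levelized cost of electricity (LCOE) of a PV system falls below the local retail power price. A minimal sketch follows; all inputs are hypothetical.

```python
# LCOE via the capital recovery factor; units and numbers are hypothetical.
def lcoe(capex_per_kwp, opex_frac, wacc, lifetime_yr, yield_kwh_per_kwp):
    # Capital recovery factor annualizes the upfront investment.
    crf = wacc * (1 + wacc) ** lifetime_yr / ((1 + wacc) ** lifetime_yr - 1)
    annual_cost = capex_per_kwp * (crf + opex_frac)
    return annual_cost / yield_kwh_per_kwp    # cost per kWh

cost = lcoe(capex_per_kwp=1500, opex_frac=0.015, wacc=0.06,
            lifetime_yr=25, yield_kwh_per_kwp=1400)
retail_price = 0.25                           # EUR/kWh, hypothetical
print(f"LCOE = {cost:.3f} EUR/kWh -> grid parity: {cost < retail_price}")
```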

Relevance: 90.00%

Abstract:

Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, methods for the identification and analysis of such subsystems were developed using approaches from machine learning, physics and graph theory. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and this approach was shown to score significantly higher than conventional methods when benchmarked against existing databases. Moreover, a mathematical treatment was developed for simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined to analyze their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a nonintuitive fashion, while the separable subsystems were of strong biological significance. It was demonstrated that the separability property found was not due to incomplete or biased data, but is likely to reflect biological structure.
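
A minimal sketch of the NMF step, assuming scikit-learn and random placeholder data in place of real gene expression measurements: the matrix X (genes x samples) is factored into non-negative W (gene memberships) and H (subsystem activity profiles).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 20))            # 100 genes x 20 expression samples (placeholder)

model = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)           # genes x subsystems
H = model.components_                # subsystems x samples

# Genes loading strongly on the same component form a candidate subsystem.
subsystem_of = W.argmax(axis=1)
print("genes per subsystem:", np.bincount(subsystem_of, minlength=5))
```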

Relevance: 90.00%

Abstract:

We present a low-cost and easily deployed infrastructure for location-aware computing that is built using standard Bluetooth® technologies and personal computers. Mobile devices are able to determine their location to room-level granularity with existing Bluetooth technology, and to even greater resolution with the use of the recently adopted Bluetooth 1.2 specification, all while maintaining complete anonymity. Various techniques for improving the speed and resolution of the system are described, along with their tradeoffs in privacy. The system is trivial to implement on a large scale: our network covering 5,000 square meters was deployed by a single student over the course of a few days at a cost of less than US$1,000.
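
Room-level inference in such a system can be sketched very simply: assign the mobile device to the room of the base station that sees it, or of the strongest responder when signal strength is available (as with the Bluetooth 1.2 inquiry-with-RSSI mode). The station layout and scan results below are hypothetical, not the paper's implementation.

```python
# Room-level location from a Bluetooth scan; layout and values are hypothetical.
def locate(visible_stations, station_room, rssi=None):
    if not visible_stations:
        return None
    if rssi:   # with signal strength (e.g. Bluetooth 1.2 inquiry with RSSI)
        best = max(visible_stations, key=lambda s: rssi[s])
    else:      # plain inquiry: any responding station gives room granularity
        best = visible_stations[0]
    return station_room[best]

station_room = {"bs-101": "room 101", "bs-102": "room 102"}
print(locate(["bs-101", "bs-102"], station_room,
             rssi={"bs-101": -48, "bs-102": -71}))   # -> room 101
```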

Relevance: 90.00%

Abstract:

The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
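
The feature-scale pressure split can be illustrated with a simple linear pad-deformation assumption: the pad presses harder on raised features, with a pressure difference proportional to the local step height, subject to force balance. The constants, units and the linear model below are illustrative assumptions, not the paper's calibrated model.

```python
# Feature-scale pressure split under a linear pad-elasticity assumption;
# p in kPa, step height in nm, pad stiffness in kPa/nm -- all hypothetical.
def feature_pressures(p_avg, step_height_nm, pad_stiffness, area_frac_high):
    # Pressure difference carried by the step; the cap keeps the
    # recessed-area pressure non-negative (pad lift-off limit).
    dp = min(pad_stiffness * step_height_nm, p_avg / max(area_frac_high, 1e-9))
    p_high = p_avg + dp * (1 - area_frac_high)   # pressure on raised Cu features
    p_low = p_avg - dp * area_frac_high          # pressure over recessed areas
    return p_high, p_low                         # area-weighted mean equals p_avg

print(feature_pressures(p_avg=14.0, step_height_nm=500.0,
                        pad_stiffness=0.01, area_frac_high=0.5))
```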


Relevance: 90.00%

Abstract:

Fueled by ever-growing genomic information and rapid developments in proteomics (the large-scale analysis of proteins), mapping the functional roles of proteins has become one of the most important disciplines for characterizing complex cell function. For building functional linkages between biomolecules, and for providing insight into the mechanisms of biological processes, the last decade witnessed the exploration of combinatorial and chip technologies for the detection of biomolecules in a high-throughput and spatially addressable fashion. Among the various techniques developed, progress in protein chip technology has been rapid. Recently we demonstrated a new platform called the "spatially addressable protein array" (SAPA) to profile ligand-receptor interactions. To optimize the platform, the present study investigated various parameters, such as the surface chemistry and the role of additives, for achieving high-density and high-throughput detection with minimal nonspecific protein adsorption. In summary, the present poster will address some of the critical challenges in protein microarray technology and the process of fine-tuning to achieve an optimum system for solving real biological problems.