996 results for Special operations
Abstract:
In this dissertation we first present a generalization of the usual Sturm-Liouville problems with symmetric solutions and describe a more comprehensive class. We then introduce some new classes of orthogonal polynomials and special functions that can be derived from this symmetric generalization. As a special consequence of this generalization, we introduce a polynomial system with four free parameters and show that this system contains almost all classical symmetric orthogonal polynomials, such as the Legendre polynomials, the Chebyshev polynomials of the first and second kind, the Gegenbauer polynomials, the generalized Gegenbauer polynomials, the Hermite polynomials, the generalized Hermite polynomials, as well as two further new finite systems of orthogonal polynomials. All of these polynomials can be expressed directly through the newly introduced system. Furthermore, we determine all standard properties of the new system, in particular an explicit representation, a second-order differential equation, a generic orthogonality relation, and a generic three-term recurrence. We also use this extension to generalize the associated Legendre functions, which have many applications in physics and engineering, and we show that this generalization preserves both the orthogonality property and the orthogonality interval. In a further chapter of the dissertation we study in detail the standard properties of finite systems of orthogonal polynomials that arise from the usual Sturm-Liouville theory, and we show that they are orthogonal with respect to Fisher's F-distribution, the inverse gamma distribution, and the generalized t-distribution. In the next part of the dissertation we consider a four-parameter generalization of Student's t-distribution. We show that this distribution converges to the normal distribution as the sample size tends to infinity. A similar generalization of Fisher's F-distribution converges to the chi-square distribution. In the final part of the dissertation we introduce some new sequences of special functions that have applications in solving the classical potential equation, the heat equation, and the wave equation in spherical coordinates. Finally, we describe two new classes of rational orthogonal hypergeometric functions, and using the Fourier transform and Parseval's identity we show that they form finite orthogonal systems with weight functions of gamma type.
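For orientation, the generic structures named above take a standard textbook form for any symmetric orthogonal polynomial family; the sketch below shows that pattern only, not the dissertation's specific four-parameter normalization:

```latex
% Monic symmetric orthogonal polynomials P_n: because the weight is
% even, the three-term recurrence has no constant term B_n.
P_{n+1}(x) = x\,P_n(x) - C_n\,P_{n-1}(x),
\qquad P_{-1}(x) = 0, \quad P_0(x) = 1,
% with orthogonality against an even weight w(x) = w(-x) on (-a,a):
\int_{-a}^{a} P_m(x)\,P_n(x)\,w(x)\,dx = h_n\,\delta_{mn}.
```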
Abstract:
The focus of this work is the application of functionalized microcantilevers with integrated bimorph actuation and piezoresistive detection as chemical gas sensors for the fast, portable, and low-cost detection of various volatile substances. Particular attention is paid to improving cantilever performance by operating the devices in special modes. A further focus is the investigation of specific sorption interactions and the application of innovative functional coatings, which have a significant effect on sensor selectivity.
Abstract:
q-analysis is a special discretization of calculus on a lattice forming a geometric sequence; it finds broad application in quantum physics in particular, but is also of great importance in the theory of q-orthogonal polynomials and special functions. The mathematical objects from the q-world under consideration usually have a rather complicated structure, so it is natural to treat them with computer algebra systems. This dissertation presents algorithms for q-holonomic functions and q-hypergeometric series. All algorithms are implemented in the Maple package qFPS, which is an integral part of the work. After the foundations are laid in the first two chapters, the third chapter presents algorithms with which q-holonomic recurrence equations for a q-holonomic function can be set up from knowledge of its q-shifts. Operations on q-holonomic recurrences are treated as well. The fourth chapter describes efficient methods for determining polynomial, rational, and q-hypergeometric solutions of q-holonomic recurrences. The fifth chapter deals with q-hypergeometric power series with respect to special polynomial bases. We formulate a new algorithm which, given a q-holonomic recurrence equation of a q-hypergeometric series with a nontrivial expansion point, determines the corresponding q-holonomic recurrence equation for the coefficients. Moreover, we give a new algorithm which, conversely, determines from a q-holonomic recurrence equation for the coefficients a q-holonomic recurrence equation for the series, and which is useful for setting up q-holonomic recurrences for certain generalized q-hypergeometric functions. With the formulation of the q-Taylor theorem we finally have all the ingredients to obtain the main result of this work, the q-analogue of the FPS algorithm. Wolfram Koepf's FPS algorithm from 1992 determines, for a given holonomic function, the corresponding hypergeometric series. We extend the algorithm so that even linear combinations of q-hypergeometric power series can be determined.
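As background, the basic q-operators referred to above have the following standard definitions (general q-calculus facts, not specific to the qFPS package):

```latex
% q-shift operator and Jackson q-derivative on the geometric
% lattice x, qx, q^2 x, \dots
(S_q f)(x) = f(qx),
\qquad
(D_q f)(x) = \frac{f(qx) - f(x)}{(q - 1)\,x}, \quad x \neq 0.
% A series \sum_k a_k x^k is q-hypergeometric when the term ratio
% a_{k+1}/a_k is a rational function of q^k.
```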
Abstract:
The principal objective of this paper is to develop a methodology for the formulation of a master plan for renewable-energy-based electricity generation in The Gambia, Africa. Such a master plan aims to develop and promote renewable sources of energy as an alternative to conventional forms of energy for generating electricity in the country. A tailor-made methodology for the preparation of a 20-year renewable energy master plan focused on electricity generation is proposed, followed, and verified throughout the present dissertation as it is applied to The Gambia. The main input data for the proposed master plan are (i) an energy demand analysis and forecast over 20 years and (ii) a resource assessment for different renewable energy alternatives, including their related power supply options. The energy demand forecast is based on a mix of top-down and bottom-up methodologies. The results provide important data on future requirements for (primary) energy sources. The electricity forecast is separated into projections at the sent-out level and at the end-user level. On the supply side, solar, wind, and biomass are investigated as sources of energy in terms of their technical potential and economic benefits for The Gambia. Other criteria, i.e. environmental and social ones, are not considered in the evaluation. Diverse supply options are proposed and technically designed based on the assessed renewable energy potential. This process includes the evaluation of the different available conversion technologies and concludes with the dimensioning of power supply solutions, taking into consideration technologies which are applicable and appropriate under the special conditions of The Gambia. The balance of these two inputs (demand and supply) gives a quantitative indication of the substitution potential of renewable energy generation alternatives in primarily fossil-fuel-based electricity generation systems, as well as of the fuel savings due to the deployment of renewable resources. Afterwards, the identified renewable energy supply options are ranked according to the outcomes of an economic analysis. Based on this ranking, among other considerations, a 20-year investment plan, broken down into five-year investment periods, is prepared; it consists of individual renewable energy projects for electricity generation, basically on-grid renewable energy applications. Finally, a priority project from the master plan portfolio is selected for deeper analysis. Since solar PV is the most relevant proposed technology, a PV power plant integrated into the fossil-fuel-powered main electrical system in The Gambia is considered as the priority project. This project is analysed for economic competitiveness under current conditions, in addition to a sensitivity analysis with regard to future oil and new-technology market conditions.
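To illustrate the kind of economic ranking described above, a levelized cost of electricity (LCOE) comparison is one common approach; the helper and all figures below are hypothetical and are not taken from the master plan itself:

```python
# Illustrative sketch: rank hypothetical supply options by levelized
# cost of electricity (LCOE). All numbers are invented.

def lcoe(capex, opex_per_year, energy_per_year_mwh, lifetime_years, rate):
    """LCOE = discounted lifetime cost / discounted lifetime energy."""
    cost = capex + sum(
        opex_per_year / (1 + rate) ** t for t in range(1, lifetime_years + 1)
    )
    energy = sum(
        energy_per_year_mwh / (1 + rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    return cost / energy  # currency units per MWh

options = {
    "solar_pv": lcoe(1_200_000, 15_000, 1_800, 20, 0.08),
    "wind":     lcoe(1_500_000, 40_000, 2_600, 20, 0.08),
    "biomass":  lcoe(2_000_000, 90_000, 5_200, 20, 0.08),
}
for name, value in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: {value:.0f} per MWh")
```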
Abstract:
Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues and, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and impreciseness. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. The second case study is concerned with an approach for user activity recognition which serves as baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
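To make the fusion step concrete, a minimal sketch of Dempster's rule of combination for two mass functions over a small frame of discernment might look as follows; the frame and mass values are invented for illustration, and the dissertation's actual fusion infrastructure is of course richer:

```python
from itertools import product

# Minimal sketch of Dempster's rule of combination. Focal elements
# are frozensets over a frame of discernment; assigning mass to the
# whole frame models partial ignorance.

def combine(m1, m2):
    """Combine two mass functions via Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize by the non-conflicting mass (1 - K).
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

frame = frozenset({"walking", "sitting", "standing"})
m_accel = {frozenset({"walking"}): 0.6, frame: 0.4}
m_gyro = {frozenset({"walking", "standing"}): 0.7, frame: 0.3}
print(combine(m_accel, m_gyro))
```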
Abstract:
Numerous studies have demonstrated an effect of probable climate change on the hydrosphere's different subsystems. In the 21st century a global and regional redistribution of water is to be expected, and it is very likely that extreme weather phenomena will occur more frequently. From a global perspective the flood situation will worsen. In contrast to these findings, the classical approach of flood frequency analysis provides terms like "mean flood recurrence interval". For this analysis to be valid, however, the distribution parameters must be stationary, which implies that the flood frequencies are constant in time. Newer approaches employ extreme value distributions with time-dependent parameters, but this implies discarding the traditional terminology that has been used to date in engineering hydrology. On the regional scale, climate change affects the hydrosphere in various ways. The question is therefore whether in central Europe the classical approach of flood frequency analysis is no longer usable and whether the traditional terminology should be replaced. In the present case study, hydro-meteorological time series of the Fulda catchment area (6930 km²), upstream of the gauging station Bonaforth, are analyzed for the period 1960 to 2100. First, a distributed catchment area model (SWAT2005) is built, calibrated, and finally validated. The Edertal reservoir is additionally regulated by feedback control of the catchment's output during low-water periods. Due to this complexity, a special modeling strategy was necessary: the study area is divided into three SWAT basin models, and an additional physically based reservoir model is developed. To further improve the streamflow predictions of the SWAT model, a correction by an artificial neural network (ANN) has been tested successfully, which opens a new way to improve hydrological models. With this extension, the calibration and validation of the SWAT model for the Fulda catchment area are improved significantly. After calibration of the model against observed 20th-century streamflow, the SWAT model is driven by high-resolution climate data of the regional model REMO, using the IPCC scenarios A1B, A2, and B1, to generate future runoff time series for the 21st century for the various sub-basins in the study area. In a second step, flood time series HQ(a) are derived from the 21st-century runoff time series (scenarios A1B, A2, and B1). These flood projections are then extensively tested with regard to stationarity, homogeneity, and statistical independence. All these tests indicate that the SWAT-predicted 21st-century trends in the flood regime are not significant. Within the projected period, the members of the flood time series are shown to be stationary and independent events. Hence, the classical stationary approach of flood frequency analysis can still be used within the Fulda catchment area, notwithstanding the fact that some regional climate change has been predicted using the IPCC scenarios. It should be noted, however, that the present results are not transferable to other catchment areas. Finally, a new method is presented that enables the calculation of extreme flood statistics even if the flood time series is non-stationary and also exhibits short- and long-term persistence. This method, called Flood Series Maximum Analysis here, enables the calculation of maximum design floods for a given risk or safety level and time period.
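As an illustration of the stationarity testing mentioned above, a Mann-Kendall trend test on an annual flood series is one standard choice; the abstract does not name the specific tests used, so this is an assumption, and the series below is synthetic:

```python
import math
import random

# Illustrative Mann-Kendall trend test on a synthetic annual-maximum
# flood series HQ(a). This is a standard stand-in, not necessarily the
# test used in the study; the variance formula assumes no ties.

def mann_kendall_z(series):
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

random.seed(42)
hq = [300 + random.gauss(0, 50) for _ in range(90)]  # synthetic HQ(a), m^3/s
z = mann_kendall_z(hq)
print(f"Z = {z:.2f}; |Z| < 1.96 -> no significant trend at the 5% level")
```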
Abstract:
This work describes the use of Particle Image Velocimetry (PIV) for the analysis of self-excited flow phenomena and the evaluation procedure required for it. To investigate such mechanisms, which appear in turbo compressors as rotating instabilities, data sets are used that were obtained from experimental investigations on an annular compressor stator. Rotating instabilities are time-dependent flow phenomena that can occur in compressor cascades at high aerodynamic loads. Because the phase information is missing, this unsteady flow cannot be captured with conventional PIV systems. The Kármán vortex street and rotating instabilities both constitute self-excited flow processes. This similarity is exploited to demonstrate the functionality of the procedure using the Kármán vortex street. Visualizing the vortex transport by PIV requires a special procedure, since no external signal is available to define the phase angle of this self-excited flow. The methodology is based on coupling the PIV technique with hot-wire anemometry. A simultaneous, temporally highly resolved hot-wire measurement makes it possible to assign a phase angle to the instants of the PIV images. To this end, the hot-wire signal is analyzed with an FFT procedure in order to group the PIV images according to their phase angles; the recorded images are marked on the time axis of the hot-wire measurements. A systematic analysis of the hot-wire signal in the vicinity of the PIV measurement provides data for determining the fundamental frequency and makes it possible to assign a phase angle to the marked PIV position. The velocity components resulting from the PIV images of one class are then averaged. From the resulting images of each class, the two-dimensional time-dependent velocity field is obtained, in which the vortex motion of the Kármán vortex street becomes visible. In subsequent investigations, time signals from measurements in an annular compressor cascade are analyzed. It turns out that additional filter functions are required. The final result makes clear that the transfer of the method developed on the Kármán vortex street succeeds only partially and that further research is required.
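A minimal sketch of the phase-assignment idea, assuming a synthetic hot-wire signal with one dominant frequency; the actual evaluation chain uses filtering and windowing not shown here:

```python
import numpy as np

# Sketch: assign phase angles to PIV snapshot instants from a
# simultaneously recorded hot-wire signal. The signal and PIV
# timestamps are synthetic.

fs = 10_000.0                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 123.0 * t) + 0.1 * rng.standard_normal(t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
k0 = 1 + np.argmax(np.abs(spectrum[1:]))        # dominant bin, skipping DC
f0 = freqs[k0]                                  # fundamental frequency
phi0 = np.angle(spectrum[k0])                   # its phase at t = 0

# Phase of the fundamental at each PIV exposure time, in [0, 2*pi).
piv_times = np.array([0.1037, 0.2518, 0.4091])  # synthetic PIV instants
phases = (2 * np.pi * f0 * piv_times + phi0) % (2 * np.pi)

# Group snapshots into 8 phase classes for ensemble averaging.
classes = np.floor(phases / (2 * np.pi / 8)).astype(int)
print(f"f0 = {f0:.1f} Hz, classes = {classes}")
```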
Abstract:
The possibility to develop automatically running models which can capture some of the most important factors driving the urban climate would be very useful for many planning aspects. With the help of such modelled climate data, the creation of the typically used "Urban Climate Maps" (UCM) can be accelerated and facilitated. This work describes the development of a special ArcGIS software extension, along with two supporting databases, to achieve this functionality. At present, the lack of comparability between different UCMs and the imprecise planning advice, together with the significant technical problems of manually creating conventional maps, are central issues. Inflexibility and static behaviour also reduce the maps' practicality. Experience shows that planning processes are more productive when new planning parameters can be entered directly via the existing work surface, so that the impact of a data change is mapped immediately where possible. In addition to the direct climate figures, information from other planning areas (such as regional characteristics and developments) has to be taken into account in creating the UCM as well. Taking all these requirements into consideration, an automated calculation process for urban climate impact parameters will make the creation of homogeneous UCMs considerably more efficient.
Abstract:
The challenge of reducing carbon emissions and achieving emission targets by 2050 has become a key energy strategy for each country. The automotive industry, as an important part of implementing these energy requirements, is conducting research to meet both energy requirements and customer requirements. Modern energy requirements call for clean, green, and renewable energy; customer requirements call for economy, reliability, and long lifetime. Given increasing market requirements and a growing customer base, EVs and PHEVs are becoming more and more important for automotive manufacturers. EVs and PHEVs normally contain two key parts: the battery pack and the power electronics composed of critical components. A rechargeable battery is an important element for achieving cost competitiveness; it is mainly used to store energy and to provide continuous energy to drive an electric motor. To recharge the battery and drive the electric motor, the power electronics group is the essential bridge that converts between the different energy types. Modern power electronics offers many different topologies, such as non-isolated and isolated power converters, which can be used for charging the battery. One of the most widely used topologies is the multiphase interleaved power converter, primarily because of its prominent advantages: it is frequently employed to obtain optimal dynamic response, high efficiency, and compact converter size. Concerning its usage, many detailed investigations regarding topology, control strategy, and devices have been carried out. In this thesis, the core research is to investigate several branches of this field in terms of issue analysis and optimization approaches for building the magnetic component. The work starts with the reasons for developing EVs and PHEVs and an overview of different possible topologies with respect to specific application requirements. Because of its low component count, high reliability, high efficiency, and the absence of special safety requirements, the non-isolated multiphase interleaved converter is selected as the basic research topology of the W-charge project, in order to investigate its advantages and its potential when using optimized magnetic components. Subsequently, all proposed aspects and approaches are investigated and analyzed in detail in order to verify the constraints and advantages of using integrated coupled inductors. Furthermore, a digital controller concept and a novel tapped-inductor topology are proposed for multiphase power converters and electric vehicle applications.
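For intuition about why interleaving helps, here is a back-of-the-envelope estimate of per-phase inductor current ripple for an ideal non-isolated buck stage; all component values are invented, and the thesis converter is not specified at this level of detail:

```python
# Back-of-the-envelope ripple estimate for one phase of an ideal
# non-isolated buck converter; values are invented for illustration.

V_in = 400.0      # V, DC-link voltage
V_out = 350.0     # V, battery-side voltage
L = 100e-6        # H, per-phase inductance
f_sw = 50e3       # Hz, per-phase switching frequency
n_phases = 4

D = V_out / V_in                                  # ideal buck duty cycle
ripple_per_phase = V_out * (1 - D) / (L * f_sw)   # A, peak-to-peak

print(f"duty cycle D = {D:.3f}")
print(f"per-phase ripple = {ripple_per_phase:.2f} A pk-pk")
# Interleaving shifts the N phase currents by 360/N degrees, so the
# summed output ripple is partially cancelled and its effective
# frequency rises to N * f_sw.
print(f"effective output ripple frequency = {n_phases * f_sw / 1e3:.0f} kHz")
```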
Abstract:
General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focused on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task-dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs, is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA, while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. Our focus MATRIX design point is based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single-chip MATRIX array can deliver over 10 Gop/s (8-bit ops). On sample image processing tasks, we show that MATRIX yields 10-20x the computational density of conventional processors. Understanding the cost structure of RP-space helps us identify these intermediate architectural points and may provide useful insight more broadly in guiding our continual search for robust and efficient general-purpose computing structures.
Abstract:
Object recognition in the visual cortex is based on a hierarchical architecture, in which specialized brain regions along the ventral pathway extract object features of increasing levels of complexity, accompanied by greater invariance in stimulus size, position, and orientation. Recent theoretical studies postulate that a non-linear pooling function, such as the maximum (MAX) operation, could be fundamental in achieving such invariance. In this paper, we are concerned with neurally plausible mechanisms that may be involved in realizing the MAX operation. Four canonical circuits are proposed, each based on neural mechanisms that have previously been discussed in the context of cortical processing. Through simulations and mathematical analysis, we examine the relative performance and robustness of these mechanisms. We derive experimentally verifiable predictions for each circuit and discuss the respective physiological considerations.
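One standard soft approximation of the MAX pooling discussed above is softmax-weighted pooling, which approaches the true maximum as a sharpness parameter grows; it is used here purely as an illustration and is not necessarily one of the four circuits proposed in the paper:

```python
import math

# Softmax-weighted pooling:
#   y = sum_i x_i * exp(q * x_i) / sum_i exp(q * x_i).
# As q -> infinity this approaches max(x); q = 0 gives the mean.

def soft_max_pool(responses, q):
    weights = [math.exp(q * x) for x in responses]
    total = sum(weights)
    return sum(x * w for x, w in zip(responses, weights)) / total

afferents = [0.2, 0.9, 0.5]   # hypothetical afferent firing rates
for q in (0, 1, 10, 100):
    print(f"q = {q:>3}: pooled = {soft_max_pool(afferents, q):.3f}")
```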
Abstract:
Key operations for Authors of QM Perception Assessments
Abstract:
Exam questions and solutions in LaTeX
Abstract:
Exam questions and solutions in PDF