Abstract:
Salient-pole brushless alternators coupled to IC engines are extensively used as stand-by power supply units for meeting industrial power demands. The design of such generators demands a high power-to-weight ratio, high efficiency and low cost per kVA output. Moreover, performance characteristics of such machines like voltage regulation and short-circuit ratio (SCR) are critical when these machines are put into parallel operation, and alternators for critical applications like defence and aerospace demand very low harmonic content in the output voltage. While designing such alternators, accurate prediction of machine characteristics, including total harmonic distortion (THD), is essential to minimize development cost and time. The total harmonic distortion in the output voltage of alternators should be as low as possible, especially when powering very sophisticated and critical applications. The output voltage waveform of a practical AC generator is a replica of the space distribution of the flux density in the air gap, and several factors such as the shape of the rotor pole face, core saturation, slotting and the style of coil disposition make the realization of a sinusoidal air-gap flux wave impossible. These flux harmonics introduce undesirable effects on alternator performance, such as high neutral current due to triplen harmonics, voltage distortion, noise, vibration, excessive heating and extra losses resulting in poor efficiency, which in turn necessitate de-rating of the machine, especially when connected to non-linear loads. As an important control unit of the brushless alternator, the excitation system and its dynamic performance have a direct impact on the alternator's stability and reliability. 
The thesis explores the design and implementation of an excitation system utilizing the third-harmonic flux in the air gap of brushless alternators, using an additional auxiliary winding, wound for 1/3rd pole pitch, embedded into the stator slots and electrically isolated from the main winding. In the third-harmonic excitation system, the combined effect of two auxiliary windings, one with 2/3rd pitch and another third-harmonic winding with 1/3rd pitch, is used to ensure good voltage regulation without an electronic automatic voltage regulator (AVR), and also reduces the total harmonic content in the output voltage cost-effectively. The design of the third-harmonic winding by analytic methods demands accurate calculation of the third-harmonic flux density in the air gap of the machine. However, precise estimation of the amplitude of the third-harmonic flux in the air gap of a machine by conventional design procedures is difficult due to the complex geometry of the machine and the non-linear characteristics of the magnetic materials. As such, prediction of the field parameters by conventional design methods is unreliable, and hence virtual prototyping of the machine is done to enable accurate design of the third-harmonic excitation system. In the design and development cycle of electrical machines, it is recognized that the use of analytical and experimental methods followed by expensive and inflexible prototyping is time-consuming and no longer cost-effective. Due to advancements in computational capabilities over recent years, finite element method (FEM) based virtual prototyping has become an attractive alternative to well-established semi-analytical and empirical design methods, as well as to the still popular trial-and-error approach followed by costly and time-consuming prototyping. Hence, by virtually prototyping the alternator using FEM, the important performance characteristics of the machine are predicted. 
The design of the third-harmonic excitation system is done with the help of results obtained from the virtual prototype of the machine. The third-harmonic excitation (THE) system is implemented in a 45 kVA experimental machine and experiments are conducted to validate the simulation results. Simulation and experimental results show that by utilizing the third-harmonic flux in the air gap of the machine for excitation purposes during loaded conditions, the triplen harmonic content in the output phase voltage is significantly reduced. The prototype machine with the third-harmonic excitation system designed and developed based on FEM analysis proved to be economical due to its simplicity, and has the added advantage of reduced harmonics in the output phase voltage.
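The THD figure the abstract keeps returning to is straightforward to compute once the harmonic amplitudes of the output voltage are known. A minimal sketch, with illustrative amplitudes rather than the thesis's measurements:

```python
import math

def thd(harmonics):
    """Total harmonic distortion of a voltage waveform.

    harmonics: list of RMS amplitudes [V1, V2, V3, ...] where V1 is the
    fundamental. THD = sqrt(sum of squares of harmonics >= 2) / V1.
    """
    fundamental = harmonics[0]
    distortion = math.sqrt(sum(v * v for v in harmonics[1:]))
    return distortion / fundamental

# Illustrative waveform: 3rd harmonic at 4% and 5th at 3% of a 230 V fundamental
print(thd([230.0, 0.0, 9.2, 0.0, 6.9]))  # ≈ 0.05, i.e. 5% THD
```

The triplen-harmonic suppression reported above would show up here as a smaller third entry of the list, and hence a lower THD.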
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and resulting land-use patterns. An essential methodology to study and quantify such interactions is provided by the adoption of land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and also to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition. In particular on the regional scale, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable efficient development, testing and use of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over the responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting-language interpreter is embedded in SITE. 
The integration of sub-models can be achieved via the scripting language or by using a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, like model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. In addition, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace-gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area could mainly be characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component. The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function typically is a map-comparison algorithm capable of comparing a simulation result to a reference map. 
Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map-comparison measure as the objective function. The time period for the calibration ranged from 1981 to 2002. For this period, respective reference land-use maps were compiled. It could be shown that an efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
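For readers unfamiliar with the figure-of-merit map-comparison measure used as the calibration objective, the common form counts correctly predicted change against missed and spurious change. A minimal sketch of that idea, with toy maps of our own; SITE's actual implementation is not reproduced here:

```python
def figure_of_merit(observed, simulated, initial):
    """Figure of merit = hits / (hits + misses + false alarms).

    Maps are flat lists of land-use classes; a cell counts as 'changed'
    when its class differs from the initial map.
    """
    hits = misses = false_alarms = 0
    for ini, obs, sim in zip(initial, observed, simulated):
        obs_changed = obs != ini
        sim_changed = sim != ini
        if obs_changed and sim_changed and obs == sim:
            hits += 1                 # change predicted, correct category
        elif obs_changed and not sim_changed:
            misses += 1               # real change the model missed
        elif sim_changed and not obs_changed:
            false_alarms += 1         # change predicted where none occurred
        elif obs_changed and sim_changed:
            misses += 1               # change predicted, but wrong category:
            false_alarms += 1         # counts against the model twice
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0

initial   = [0, 0, 0, 0]
observed  = [1, 0, 1, 0]   # what the reference map shows
simulated = [1, 1, 0, 0]   # what the model predicted
print(figure_of_merit(observed, simulated, initial))  # 1 hit, 1 miss, 1 false alarm -> 1/3
```

A genetic algorithm would then search the model-parameter space to maximize this score against the 1981-2002 reference maps.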
Abstract:
Lasers play an important role in medical, sensing and data-storage devices. This thesis focuses on the design, technology development, fabrication and characterization of hybrid ultraviolet vertical-cavity surface-emitting lasers (UV VCSELs) with an organic laser-active material and inorganic distributed Bragg reflectors (DBRs). Multilayer structures with different layer thicknesses, refractive indices and absorption coefficients of the inorganic materials were studied using theoretical model calculations. During the simulations, structure parameters such as materials and thicknesses were varied. This procedure was repeated several times during the design optimization process, including feedback from technology and characterization. Two types of VCSEL devices were investigated. The first is an index-coupled structure consisting of bottom and top DBR dielectric mirrors. In the space between them is the cavity, which includes the active region and defines the spectral gain profile. In this configuration the maximum electric field is concentrated in the cavity and can destroy the chemical structure of the active material. The second type of laser is a so-called complex-coupled VCSEL. In this structure the active material is placed not only in the cavity but also in parts of the DBR structure. The simulations show that such a distribution of the active material reduces the pumping power required to reach the lasing threshold. High efficiency is achieved by substituting the active material for the high-refractive-index dielectric in the periods closer to the cavity. The inorganic materials for the DBR mirrors were deposited by Plasma-Enhanced Chemical Vapor Deposition (PECVD) and Dual Ion Beam Sputtering (DIBS) machines. Extensive optimization of the technological processes was performed. All processes are carried out in clean rooms of Class 1 and Class 10000. 
The optical properties and the thicknesses of the layers are measured in situ by spectroscopic ellipsometry and spectroscopic reflectometry. The surface roughness is analyzed by atomic force microscopy (AFM), and images of the devices are taken with a scanning electron microscope (SEM). The silicon dioxide (SiO2) and silicon nitride (Si3N4) layers deposited by the PECVD machine show defects in the material structure and have higher absorption in the ultraviolet range compared to ion beam deposition (IBD). This results in low reflectivity of the DBR mirrors and also degrades the optical properties of the VCSEL devices. However, PECVD has the advantage that the stress in the layers can be tuned and compensated, in contrast to IBD at the moment. A sputtering machine Ionsys 1000, produced by the Roth & Rau company, is used for the deposition of silicon dioxide (SiO2), silicon nitride (Si3N4), aluminum oxide (Al2O3) and zirconium dioxide (ZrO2). The chamber is equipped with main (sputter) and assist ion sources. The dielectric materials were optimized by introducing additional oxygen and nitrogen into the chamber. DBR mirrors with different material combinations were deposited. The measured optical properties of the fabricated multilayer structures show excellent agreement with the results of theoretical model calculations. The layers deposited by sputtering show high compressive stress. As the active region, a novel organic material with spiro-linked molecules is used. Two different materials were evaporated using a dye evaporation machine in the clean room of the department Makromolekulare Chemie und Molekulare Materialien (mmCmm). The Spiro-Octopus-1 organic material has its maximum emission at the wavelength λemission = 395 nm and the Spiro-Pphenal at λemission = 418 nm. Both have a high refractive index and can be combined with low-refractive-index materials like silicon dioxide (SiO2). 
The sputtering method shows excellent optical quality of the deposited materials and high reflection of the multilayer structures. The bottom DBR mirrors for all VCSEL devices were deposited by the DIBS machine, whereas the top DBR mirror was deposited either by PECVD or by a combination of PECVD and DIBS. The fabricated VCSEL structures were optically pumped by a nitrogen laser at the wavelength λpumping = 337 nm. The emission was measured with a spectrometer. Emission from the VCSEL structures at wavelengths of 392 nm and 420 nm is observed.
Abstract:
Optical spectroscopy is a very important measurement technique with high potential for numerous applications in industry and science. Low-cost, miniaturized spectrometers, for example, are needed especially for modern sensor systems and "smart personal environments", which are used above all in energy technology, metrology, safety and security, IT and medical technology. Among all miniaturized spectrometers, one of the most attractive miniaturization approaches is the Fabry-Pérot filter. In this approach, the combination of a Fabry-Pérot (FP) filter array and a detector array can function as a microspectrometer. Each detector corresponds to a single filter, in order to detect the very narrow band of wavelengths transmitted by that filter. An array of FP filters is used in which each filter selects a different spectral filter line. The spectral position of each wavelength band is defined by the individual cavity height of the filter. The arrays were designed with filter sizes limited only by the array dimensions of the individual detectors. However, existing Fabry-Pérot filter microspectrometers require complicated fabrication steps for structuring the 3D filter cavities with different heights, which are not cost-efficient for industrial production. To reduce cost while retaining the outstanding advantages of the FP filter structure, a new method for fabricating miniaturized FP filters by nanoimprint technology is developed and presented. In this case, the multiple cavity-fabrication steps are replaced by a single step that uses the high vertical resolution of 3D nanoimprint technology. Because nanoimprint technology is used, the FP-filter-based miniaturized spectrometer is called a nanospectrometer. 
A static nanospectrometer consists of a static FP filter array on a detector array (see Fig. 1). Each FP filter in the array consists of a bottom distributed Bragg reflector (DBR), a resonance cavity and a top DBR. The top and bottom DBRs are identical and consist of periodically alternating thin dielectric layers of materials with high and low refractive index. The optical thickness of each dielectric thin-film layer contained in the DBR corresponds to a quarter of the design wavelength. Each FP filter is assigned to a defined area of the detector array. This area can comprise individual detector elements or groups of them. Accordingly, the lateral geometries of the cavities are built to match the corresponding detectors. The lateral and vertical dimensions of the cavities are defined precisely by 3D nanoimprint technology. The cavities differ by only a few nanometers in the vertical direction. The precision of the cavity in the vertical direction is an important factor influencing the accuracy of the spectral position and the transmittance of the filter's transmission line.
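The quarter-wave condition mentioned above fixes each DBR layer's physical thickness once the design wavelength and refractive index are chosen. A small sketch; the material indices are illustrative assumptions, not values from the thesis:

```python
def quarter_wave_thickness(design_wavelength_nm, refractive_index):
    """Physical thickness of a quarter-wave DBR layer: d = lambda / (4 * n),
    so that the optical thickness n * d equals a quarter of the design
    wavelength."""
    return design_wavelength_nm / (4.0 * refractive_index)

# Illustrative pair of materials (assumed indices, e.g. SiO2 ~1.46 vs. a
# high-index dielectric ~2.4) for a 550 nm design wavelength:
print(quarter_wave_thickness(550, 1.46))  # ≈ 94 nm (low-index layer)
print(quarter_wave_thickness(550, 2.4))   # ≈ 57 nm (high-index layer)
```

The few-nanometer cavity-height steps noted above sit on top of such quarter-wave stacks, which is why vertical imprint precision directly controls the spectral position of each filter line.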
Abstract:
In this work, the investigation of QD formation and the fabrication of QD-based semiconductor lasers for telecom applications are presented. InAs QDs grown on AlGaInAs lattice-matched to InP substrates are used to fabricate lasers operating at 1.55 µm, which is the central wavelength for long-distance data transmission. This wavelength is used due to its minimum attenuation in standard glass fibers. The incorporation of QDs in this material system is more complicated than for InAs QDs in the GaAs system. Due to the smaller lattice mismatch, the formation of circular QDs, elongated QDs and quantum wires is possible. The influence of different growth conditions, such as the growth temperature, beam equivalent pressure and amount of deposited material, on the formation of the QDs is investigated. It has already been demonstrated that the formation process of QDs can be changed by the arsenic species: the formation of more round-shaped QDs was observed during growth with As2, while As4 yields dash-like QDs. In this work only As2 was used for the QD growth. Different growth parameters were investigated to optimize the optical properties, like the photoluminescence linewidth, and to implement those QD ensembles into laser structures as the active medium. By implementing those QDs into laser structures, a full width at half maximum (FWHM) of 30 meV was achieved. Another part of the research includes the investigation of the influence of the layer design of lasers on their lasing properties. QD lasers were demonstrated with a modal gain of more than 10 cm-1 per QD layer. Another achievement is large-signal modulation with a maximum data rate of 15 Gbit/s. The implementation of optimized QDs in the laser structure allows the modal gain to be increased up to 12 cm-1 per QD layer. 
A reduction of the waveguide layer thickness leads to a shorter transport time of the carriers into the active region, and as a result a data rate of up to 22 Gbit/s was achieved, which is so far the highest digital modulation rate obtained with any 1.55 µm QD laser. The implementation of etch-stop layers into the laser structure provides the possibility to fabricate feedback gratings with well-defined geometries for the realization of DFB lasers. These DFB lasers were fabricated using a combination of dry and wet etching. Single-mode operation at 1.55 µm with a high side-mode suppression ratio of 50 dB was achieved.
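The feedback gratings behind such DFB lasers are sized from the standard Bragg condition, Lambda = m * lambda / (2 * n_eff). A rough illustration; the effective index here is an assumed round number, not the thesis's value:

```python
def bragg_grating_period(wavelength_nm, n_eff, order=1):
    """Grating period satisfying the Bragg condition
    Lambda = m * lambda / (2 * n_eff) for a DFB laser emitting at
    'wavelength_nm' in a waveguide with effective index n_eff."""
    return order * wavelength_nm / (2.0 * n_eff)

# Illustrative: 1.55 µm emission, assumed effective index ~3.2 for an
# InP-based waveguide -> a first-order grating period around 240 nm
print(bragg_grating_period(1550, 3.2))
```

The well-defined etch depth provided by the etch-stop layers matters because the grating's coupling strength, and hence the side-mode suppression, depends on how precisely this geometry is realized.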
Abstract:
We discuss a formulation for active example selection in function learning problems. This formulation is obtained by adapting Fedorov's optimal experiment design to the learning problem. We specifically show how to analytically derive example selection algorithms for certain well-defined function classes. We then explore the behavior and sample complexity of such active learning algorithms. Finally, we view object detection as a special case of function learning and show how our formulation reduces to a useful heuristic for choosing examples that reduce the generalization error.
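The optimal-experiment-design idea can be made concrete for the simplest function class, a straight-line fit: query the candidate point where the predictive variance of the current least-squares model is largest. A minimal sketch under that assumption; the function names and toy data are ours, not the paper's:

```python
def predictive_variance(xs_labeled, x_candidate):
    """Variance (up to the noise factor sigma^2) of the least-squares
    prediction at x_candidate for the model y = a + b*x:
    phi^T (X^T X)^{-1} phi with features phi(x) = (1, x)."""
    s0 = len(xs_labeled)
    s1 = sum(xs_labeled)
    s2 = sum(x * x for x in xs_labeled)
    det = s0 * s2 - s1 * s1          # determinant of the 2x2 matrix X^T X
    inv = ((s2 / det, -s1 / det),
           (-s1 / det, s0 / det))    # closed-form 2x2 inverse
    phi = (1.0, x_candidate)
    return sum(phi[i] * inv[i][j] * phi[j] for i in range(2) for j in range(2))

def select_next(xs_labeled, candidates):
    """Fedorov-style active selection: query where the model is least sure."""
    return max(candidates, key=lambda x: predictive_variance(xs_labeled, x))

# With labels already at x=0 and x=1, the most informative candidate is the
# one farthest from the labeled region:
print(select_next([0.0, 1.0], [0.25, 0.5, 2.0]))  # 2.0
```

Richer function classes change the feature map phi and the matrix algebra, but the selection principle — maximize the variance the next label would remove — stays the same.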
Abstract:
The memory hierarchy is the main bottleneck in modern computer systems, as the gap between processor and memory speed continues to grow. The situation in embedded systems is even worse. The memory hierarchy consumes a large amount of chip area and energy, which are precious resources in embedded systems. Moreover, embedded systems have multiple design objectives, such as performance, energy consumption and area. Customizing the memory hierarchy for specific applications is a very important way to take full advantage of limited resources to maximize performance. However, traditional custom memory hierarchy design methodologies are phase-ordered: they separate application optimization from memory hierarchy architecture design, which tends to result in locally optimal solutions. In traditional hardware-software co-design methodologies, much of the work has focused on utilizing reconfigurable logic to partition the computation, whereas utilizing reconfigurable logic in the memory hierarchy design is seldom addressed. In this paper, we propose a new framework for designing the memory hierarchy of embedded systems. The framework takes advantage of flexible reconfigurable logic to customize the memory hierarchy for specific applications. It combines application optimization and memory hierarchy design to obtain a globally optimal solution. Using the framework, we performed a case study to design a new software-controlled instruction memory that showed promising potential.
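Frameworks of this kind typically evaluate each candidate memory hierarchy by replaying an application's address trace through a cache model. A minimal direct-mapped sketch of that evaluation step; the parameters and trace are illustrative, not from the paper:

```python
def direct_mapped_hits(addresses, num_lines, line_size):
    """Count hits of a direct-mapped cache for a byte-address trace.

    This is the kind of quick per-candidate evaluation a design-space
    exploration loop would run while customizing the hierarchy.
    """
    tags = [None] * num_lines
    hits = 0
    for addr in addresses:
        block = addr // line_size       # which memory block this byte is in
        index = block % num_lines       # direct mapping: one possible line
        if tags[index] == block:
            hits += 1
        else:
            tags[index] = block         # miss: fill the line
    return hits

# Toy trace: two lines of 8 bytes each; addresses 0 and 4 share a block,
# so the trace [0, 4, 8, 0] produces 2 hits
print(direct_mapped_hits([0, 4, 8, 0], num_lines=2, line_size=8))  # 2
```

Sweeping num_lines and line_size (and associativity, in a fuller model) against the same trace is the basic mechanism for comparing candidate hierarchies on performance before weighing energy and area.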
Abstract:
This paper describes a new reliable method, based on modal interval analysis (MIA) and set inversion (SI) techniques, for the characterization of solution sets defined by quantified constraint satisfaction problems (QCSP) over continuous domains. The presented methodology, called quantified set inversion (QSI), can be used over a wide range of engineering problems involving uncertain nonlinear models. Finally, an application to parameter identification is presented.
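Set inversion over continuous domains is commonly implemented with a SIVIA-style bisection: classify each box as inside, outside, or undecided, and split the undecided ones. A one-dimensional sketch of plain (unquantified) set inversion, to give the flavor of what QSI builds on; the function and intervals are our toy example, and modal intervals and quantifiers are not modeled here:

```python
def sivia(f_range, x_lo, x_hi, y_lo, y_hi, eps):
    """Basic 1-D set inversion: partition [x_lo, x_hi] into boxes that are
    inside / on the boundary of {x : f(x) in [y_lo, y_hi]}, given an
    inclusion function f_range returning interval bounds of f on a box."""
    inside, boundary = [], []
    stack = [(x_lo, x_hi)]
    while stack:
        lo, hi = stack.pop()
        f_lo, f_hi = f_range(lo, hi)
        if f_hi < y_lo or f_lo > y_hi:
            continue                      # box entirely outside the solution set
        if y_lo <= f_lo and f_hi <= y_hi:
            inside.append((lo, hi))       # box entirely inside
        elif hi - lo < eps:
            boundary.append((lo, hi))     # undecided but too small to split
        else:
            mid = (lo + hi) / 2.0
            stack += [(lo, mid), (mid, hi)]
    return inside, boundary

# Invert f(x) = x^2 over [0, 2] for the image interval [0, 1]; the exact
# answer is [0, 1].  (This inclusion function is exact for lo >= 0.)
sq = lambda lo, hi: (lo * lo, hi * hi)
inside, boundary = sivia(sq, 0.0, 2.0, 0.0, 1.0, 0.05)
```

QSI extends this classification to handle universally and existentially quantified variables via modal interval arithmetic, which is what the paper contributes.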
Abstract:
In this session we look at how to use noun-verb parsing to identify the building blocks of a problem, so that we can start to create object-oriented solutions. We also look at some of the challenges of software engineering and the processes that software engineers use to meet them, and finally we take a look at some more Design Patterns that may help us reuse well-known and effective solutions in our own designs.
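As a toy illustration of noun-verb parsing (the problem statement and every name below are invented for this sketch): from "The library lends books to members", the nouns suggest candidate classes and the verb suggests a method linking them:

```python
class Book:
    """Noun 'book' -> a class."""
    def __init__(self, title):
        self.title = title

class Member:
    """Noun 'member' -> a class; members hold the books lent to them."""
    def __init__(self, name):
        self.name = name
        self.books = []

class Library:
    """Noun 'library' -> a class; the verb 'lends' -> a method on it."""
    def lend(self, book, member):
        member.books.append(book)

lib = Library()
ana = Member("Ana")
lib.lend(Book("Design Patterns"), ana)
print(ana.books[0].title)  # Design Patterns
```

The parse only yields candidates; deciding which nouns deserve to be classes, and where each verb really belongs, is the design work the session goes on to discuss.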
Abstract:
The application of polymer-matrix composite materials reinforced with long fibers (FRP, Fiber Reinforced Plastic) is growing steadily owing to their good specific properties and design flexibility. One of the largest consumers is the aerospace industry, since the application of these materials brings clear economic and environmental benefits. When composite materials are used in structural components, a design program is started in which physical tests and analysis techniques are combined. The development of reliable analysis tools that make it possible to understand the mechanical behavior of the structure, and to replace many, though not all, of the physical tests, is of clear interest. Susceptibility to damage from out-of-plane impact loads is one of the most important aspects considered during the design process of composite structures. The lack of knowledge of the effects of impact on these structures is a factor limiting the use of these materials. Therefore, the development of virtual mechanical test models to analyze the impact resistance of a structure is of great interest, and even more so the prediction of the residual strength after impact. In this regard, the present work covers a wide range of analyses of low-velocity impact events on monolithic, flat, rectangular laminated composite plates with conventional stacking sequences. Given that the main objective of the present work is the prediction of the residual compressive strength, different tasks are carried out to support the proper analysis of the problem. 
The topics developed are: the analytical description of the impact; the design and execution of an experimental test plan; the formulation and implementation of constitutive models to describe the material behavior; and the development of virtual tests based on finite element models in which the implemented constitutive models are used.
Abstract:
Around the 1990s, antibodies devoid of light chains were discovered in the family Camelidae, in which the variable domain consists solely of heavy chains (VHH) together with two constant domains (CH2 and CH3). These fragments became known as nanobodies, not only because of their small size and flexibility, but also because they represent a new generation of therapeutic antibodies, which offer several advantages over conventional antibodies: they are not immunogenic and have high thermal and chemical stability, among many other inherent characteristics. Their applications are diverse: they can be used in medical treatment and diagnosis, in drug delivery and in vaccine development. One of the molecular technologies most widely used for the cloning and expression of nanobodies is phage display, which can be divided into two approaches: the phage vector system and the phagemid vector system. The most commonly used phage vectors are filamentous bacteriophages, such as M13, capable of infecting gram-negative bacteria such as Escherichia coli. It is a powerful and promising biotechnological tool, particularly prominent in the field of medicine.
Abstract:
Active queue management (AQM) policies are router queue management policies that allow for the detection of network congestion, the notification of such occurrences to the hosts on the network borders, and the adoption of a suitable control policy. This paper proposes the adoption of a fuzzy proportional-integral (FPI) controller as an active queue manager for Internet routers. The analytical design of the proposed FPI controller is carried out in analogy with a proportional-integral (PI) controller, which has recently been proposed for AQM. A genetic algorithm is proposed for tuning the FPI controller parameters with respect to optimal disturbance rejection. In the paper, the FPI controller design methodology is described and the results of a comparison with random early detection (RED), tail drop and the PI controller are presented.
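The PI controller that serves as the design baseline updates a packet-drop probability from the queue-length error at each sampling instant. A minimal sketch of that update law; the gains and reference queue length are illustrative placeholders, not the paper's tuned values:

```python
class PIAQM:
    """PI active-queue-management sketch: the drop probability is driven by
    the deviation of the instantaneous queue length from a reference.

    p(k) = p(k-1) + a * e(k) - b * e(k-1), with e(k) = q(k) - q_ref.
    Gains a, b and q_ref below are illustrative, not the paper's values.
    """
    def __init__(self, q_ref, a=1.8e-5, b=1.7e-5):
        self.q_ref = q_ref
        self.a, self.b = a, b
        self.p = 0.0            # current packet-drop probability
        self.prev_err = 0.0

    def update(self, q):
        err = q - self.q_ref
        self.p += self.a * err - self.b * self.prev_err
        self.p = min(1.0, max(0.0, self.p))   # clamp to a valid probability
        self.prev_err = err
        return self.p

ctrl = PIAQM(q_ref=200.0)
# A persistently over-full queue drives the drop probability upward:
print(ctrl.update(400.0), ctrl.update(400.0))
```

The fuzzy PI controller of the paper replaces the fixed gains with a fuzzy inference over the error and its change, which is what the genetic algorithm then tunes.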
Abstract:
This investigation deals with the question of when a particular population can be considered disease-free. The motivation is the case of BSE, where specific birth cohorts may represent distinct disease-free subpopulations. The specific objective is to develop a statistical approach suitable for documenting freedom from disease, in particular freedom from BSE in birth cohorts. The approach is based upon a geometric waiting-time distribution for the occurrence of positive surveillance results and formalizes the relationship between design prevalence, cumulative sample size and statistical power. The simple geometric waiting-time model is further modified to account for the diagnostic sensitivity and specificity associated with the detection of disease. This is exemplified for BSE using two different models for the diagnostic sensitivity. The model is furthermore modified so that a set of different values for the design prevalence in the surveillance streams can be accommodated (prevalence heterogeneity), and a general expression for the power function is developed. For illustration, numerical results for BSE suggest that currently (data status September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with probability (power) of 0.8746 or 0.8509, depending on the choice of model for the diagnostic sensitivity.
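In its simplest form, the geometric waiting-time argument gives a closed-form power: if each tested animal is positive with probability p * Se (design prevalence times diagnostic sensitivity), the chance of at least one positive among n animals is 1 - (1 - p * Se)^n. A sketch with illustrative numbers, not the Danish cohort data of the thesis:

```python
def surveillance_power(design_prevalence, sensitivity, n_tested):
    """Power to detect disease present at the design prevalence: the
    probability of >= 1 positive among n_tested animals, from the
    geometric waiting-time argument: 1 - (1 - p * Se)^n."""
    per_animal = design_prevalence * sensitivity
    return 1.0 - (1.0 - per_animal) ** n_tested

# Illustrative: design prevalence 1 in 10,000, test sensitivity 0.9,
# 100,000 animals tested -> power close to 1
print(surveillance_power(1e-4, 0.9, 100_000))
```

The thesis's refinements (sensitivity models, prevalence heterogeneity across surveillance streams) modify the per-animal detection probability, but the power expression keeps this same structure.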