918 results for multi-layer dielectric thin film


Relevance:

100.00%

Publisher:

Abstract:

This paper describes the development of a polyimide/SU-8 catheter-tip MEMS gauge pressure sensor. Finite element analysis was used to investigate critical parameters impacting the device design and sensing characteristics. The sensing element of the device was fabricated by polyimide-based micromachining on a flexible membrane, using embedded thin-film metallic wires as piezoresistive elements. A chamber containing this flexible membrane was sealed using an adapted SU-8 bonding technique. The device was evaluated experimentally and its overall performance compared with a commercial silicon-based pressure sensor. Furthermore, use of the device was demonstrated by measuring blood pressure and heart rate in vivo.
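As a rough illustration of the piezoresistive read-out principle (the abstract does not give the bridge configuration or gauge factor, so GF, epsilon and V_ex below are generic symbols, not values from this work):

    \[ \frac{\Delta R}{R} = GF\,\varepsilon, \qquad V_{\mathrm{out}} \approx \frac{V_{\mathrm{ex}}}{4}\,\frac{\Delta R}{R} \quad \text{(quarter-bridge read-out)} \]

where GF is the gauge factor of the thin-film metal wire, epsilon the membrane strain at the gauge location and V_ex the bridge excitation voltage.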

Relevance:

100.00%

Publisher:

Abstract:

Propagation of localized orientational waves, as imaged by Brewster angle microscopy, is induced by low-intensity linearly polarized light inside confined axisymmetric smectic-C domains in a photosensitive molecular thin film at the air/water interface (Langmuir monolayer). Results from numerical simulations of a model that couples photoreorientational effects and long-range elastic forces are presented. Differences are stressed between our scenario and the paradigmatic wave phenomena in excitable chemical media.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, aggregate migration patterns during fluid concrete castings are studied through experiments, a dimensionless approach and numerical modeling. The experimental results obtained on two beams show that gravity-induced migration primarily affects the coarsest aggregates, resulting in a decrease of the coarse-aggregate volume fraction with horizontal distance from the pouring point and in a puzzling vertical multi-layer structure. The origin of this multi-layer structure is discussed and analyzed with the help of numerical simulations of free-surface flow. Our results suggest that it originates in the non-Newtonian nature of fresh concrete and that increasing the casting rate should decrease the magnitude of gravity-induced particle migration. (C) 2012 Elsevier Ltd. All rights reserved.
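The non-Newtonian behaviour invoked here is commonly described, for fresh concrete, by a yield-stress (Bingham) law; the relation below is a generic illustration of that description, not the specific constitutive model used in the paper:

    \[ \dot{\gamma} = 0 \;\; \text{for } \tau \le \tau_0, \qquad \tau = \tau_0 + \mu_p\,\dot{\gamma} \;\; \text{for } \tau > \tau_0 \]

where tau_0 is the yield stress and mu_p the plastic viscosity. Roughly, a coarse particle of diameter d and density mismatch Delta-rho can settle in the suspending mortar at rest only if the gravitational stress Delta-rho g d exceeds a value of the order of the yield stress tau_0.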

Relevance:

100.00%

Publisher:

Abstract:

Effects of polyolefins, neoprene, styrene-butadiene-styrene (SBS) block copolymers, styrene-butadiene rubber (SBR) latex, and hydrated lime on two asphalt cements were evaluated. Physical and chemical tests were performed on a total of 16 binder blends. Asphalt concrete mixes were prepared and tested with these modified binders and two aggregates (crushed limestone and gravel), each at three asphalt content levels. Properties evaluated on the modified binders (original and thin-film oven aged) included: viscosity at 25 °C, 60 °C and 135 °C with capillary tube and cone-plate viscometers, penetration at 5 °C and 25 °C, softening point, force ductility and elastic recovery at 10 °C, dropping ball test, tensile strength, and toughness and tenacity tests at 25 °C. From these, the penetration index, the viscosity-temperature susceptibility, the penetration-viscosity number, the critical low-temperature long-loading-time stiffness, and the cracking temperature were calculated. In addition, the binders were studied with x-ray diffraction, reflected fluorescence microscopy, and high-performance liquid chromatography techniques. Engineering properties evaluated on the 72 asphalt concrete mixes containing additives included: Marshall stability and flow, Marshall stiffness, voids properties, resilient modulus, indirect tensile strength, permanent deformation (creep), and effects of moisture by vacuum-saturation and Lottman treatments. Pavement sections of varied asphalt concrete thicknesses and containing different additives were compared to control mixes in terms of structural responses and pavement lives for different subgrades. Although all of the additives tested improved at least one aspect of the binder/mixture properties, no additive was found to improve all the relevant binder/mixture properties at the same time. On the basis of overall considerations, the optimum beneficial effects can be expected when the additives are used in conjunction with softer-grade asphalts.
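Of the binder indices listed, the penetration index is the most widely quoted; one standard form (Pfeiffer and Van Doormaal), computed from the penetration at 25 °C (pen25, in 0.1 mm) and the ring-and-ball softening point SP (°C), is given here for reference, although the report does not state which variant was used:

    \[ PI = \frac{1952 - 500\,\log_{10}(\mathrm{pen}_{25}) - 20\,SP}{50\,\log_{10}(\mathrm{pen}_{25}) - SP - 120} \]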

Relevance:

100.00%

Publisher:

Abstract:

When mixing asphalt in a thin film and at high temperatures, as in the production of asphalt concrete, it has been shown that asphalt will harden, essentially due to two factors: (1) loss of volatiles and (2) oxidation. The degree of hardening, as expressed by percent loss in penetration, varied from as low as 7% to about 57% depending on mixing temperatures, aggregate types, gradation, asphalt content, penetration and other characteristics of the asphalts used. Methods used to predict hardening during mixing include the loss-on-heating and thin-film oven tests, with the latter showing better correlation with the field findings. However, information on other physical and chemical changes that may occur as a result of mixing in the production of hot-mix asphalt concrete is limited. The purpose of this research project was to ascertain the changes in asphalt cement properties, both physical and chemical, during the mixing operation and to determine whether one or more of the several tests of asphalt cements was sensitive enough to indicate these changes.
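For reference, the percent loss in penetration used above as the measure of hardening is simply the relative drop in penetration across the mixing operation:

    \[ \text{penetration loss (\%)} = \frac{\mathrm{pen}_{\text{before mixing}} - \mathrm{pen}_{\text{after mixing}}}{\mathrm{pen}_{\text{before mixing}}} \times 100 \]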

Relevance:

100.00%

Publisher:

Abstract:

Due to the hazardous nature of chemical asphalt extraction agents, nuclear gauges have become an increasingly popular method of determining the asphalt content of a bituminous mix. This report details the results of comparisons made between intended, tank stick, extracted, and nuclear asphalt content determinations. A total of 315 sets of comparisons were made on samples that represented 110 individual mix designs and 99 paving projects. All samples were taken from 1987 construction projects. In addition to the comparisons made, seventeen asphalt cement samples were recovered for determination of penetration and viscosity. Results were compared to similar tests performed on the asphalt assurance samples in an attempt to determine the amount of asphalt hardening that can be expected due to the hot mix process. Conclusions of the report are: 1. Compared to the reflux extraction procedure, nuclear asphalt content gauges determine the asphalt content of bituminous mixes with much greater accuracy and comparable precision. 2. As a means of determining asphalt content, the nuclear procedure should be used as an alternative to chemical extractions whenever possible. 3. Based on penetration and viscosity results, softer-grade asphalts undergo a greater degree of hardening due to hot mix processing than do harder grades, and asphalt viscosity changes caused by the mixing process are subject to much more variability than are changes in penetration. 4. Based on changes in penetration and viscosity, the Thin Film Oven Test provides a reasonable means of estimating how much asphalt hardening can be anticipated due to exposure to the hot mix processing environment.

Relevance:

100.00%

Publisher:

Abstract:

Lasers are essential tools for cell isolation and monolithic interconnection in thin-film silicon photovoltaic technologies. Laser ablation of transparent conductive oxides (TCOs), amorphous silicon structures and back-contact removal are standard processes in industry for monolithic device interconnection. However, achieving material ablation with minimal debris and a small heat-affected zone remains one of the main difficulties in reducing costs and improving device efficiency. In this paper we present recent results on laser ablation of photovoltaic materials using excimer and UV wavelengths of diode-pumped solid-state (DPSS) laser sources. We discuss results concerning UV ablation of different TCOs and thin-film silicon (a-Si:H and nc-Si:H), focusing our study on ablation threshold measurements and process-quality assessment using advanced optical microscopy techniques. In this way we show the advantages of using UV wavelengths to minimize the thermal damage characteristic of ns-regime laser irradiation at longer wavelengths. Additionally, we include preliminary results on selective ablation of film-on-film structures irradiated from the film side (direct-writing configuration), including the problem of selective ablation of ZnO films on a-Si:H layers. In this way we demonstrate the potential of UV wavelengths from fully commercial laser sources as an alternative to the standard back-scribing process in device fabrication.
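Ablation thresholds of this kind are often extracted, for a Gaussian beam, from the linear relation between the squared crater diameter and the logarithm of the pulse energy (Liu's method); the short Python sketch below illustrates that generic procedure with hypothetical data, since the abstract does not describe the exact fitting protocol used:

    import numpy as np

    # Hypothetical measurements: pulse energies (uJ) and ablated crater diameters (um)
    energy_uJ = np.array([2.0, 3.0, 4.5, 6.8, 10.0])
    diameter_um = np.array([8.1, 11.0, 13.4, 15.6, 17.8])

    # Liu's method for a Gaussian beam: D^2 = 2 * w0^2 * ln(E / E_th)
    slope, intercept = np.polyfit(np.log(energy_uJ), diameter_um**2, 1)
    w0_um = np.sqrt(slope / 2.0)               # 1/e^2 beam radius
    E_th_uJ = np.exp(-intercept / slope)       # threshold pulse energy
    F_th = 2.0 * E_th_uJ / (np.pi * w0_um**2)  # peak threshold fluence, uJ/um^2

    print(f"w0 = {w0_um:.2f} um, E_th = {E_th_uJ:.2f} uJ, F_th = {F_th*100:.2f} J/cm^2")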

Relevance:

100.00%

Publisher:

Abstract:

Résumé: This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to be applied to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to the geographical coordinates. Moreover, they are well suited to being implemented as decision support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data including geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated both according to the traditional geostatistical approach, with experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work consists of a software collection, Machine Learning Office. This collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to provide a user-friendly and easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense machine learning can be considered a subfield of artificial intelligence. It is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to understand the presence of spatial patterns described, at least, by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
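As an illustration of the GRNN mentioned above: the model reduces to Nadaraya-Watson kernel regression with a single smoothing parameter sigma. The sketch below is a minimal generic implementation for spatial data, not the Machine Learning Office code itself, and the toy data are invented for the example:

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=1.0):
        # General Regression Neural Network (Nadaraya-Watson kernel regression).
        # X_train: (n, d) training coordinates (e.g. x, y and geo-features)
        # y_train: (n,) measured values; X_query: (m, d) prediction locations
        # sigma: kernel bandwidth, the single GRNN smoothing parameter
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
        return (w @ y_train) / w.sum(axis=1)   # weighted average of observations

    # Toy usage: interpolate a noisy field sampled at random 2-D locations
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    grid = np.array([[2.5, 5.0], [7.5, 5.0]])
    print(grnn_predict(X, y, grid, sigma=0.5))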

Relevance:

100.00%

Publisher:

Abstract:

A contract for Project HR-20 "Treating Loess, Fine Sands and Soft Limestones with Liquid Binders" of the Iowa Highway Research Board was awarded in December, 1951, to the Iowa Engineering Experiment Station of Iowa State University as its Project 295-S. By 1954 the studies of the fine materials and asphalts had progressed quite well, and a method of treating the fine materials, called the atomization process, had been applied. A study was begun in 1954 to see if some of the problems of the atomization process could be solved with the use of foamed asphalt. Foamed asphalt has several advantages. The foaming of asphalt increases its volume, reduces its viscosity, and alters its surface tension so that it will adhere tenaciously to solids. Foamed asphalt displaces moisture from the surface of a solid and coats it with a thin film. Foamed asphalt can permeate deeply into damp soils. In the past these unusual characteristics were considered nuisances to be avoided if possible.

Relevance:

100.00%

Publisher:

Abstract:

Electrocaloric cooling, based on the ability of a material to change its temperature when an electric field is applied under adiabatic conditions, is a relatively new and challenging direction of ferroelectrics research. In this work we report analytical, simulation and experimental data for BaSrTiO3 thin-film and bulk ceramic samples. A detailed discussion of the theoretical basis of the electrocaloric effect is included. The experimental and computational results presented exemplify a rational approach to the problem of solid-state cooler construction.
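For context, the adiabatic electrocaloric temperature change is commonly estimated indirectly from polarization data via a Maxwell relation; a generic form of that expression (not necessarily the exact formulation used in this work) is:

    \[ \Delta T = -\frac{1}{\rho}\int_{E_1}^{E_2} \frac{T}{c_E}\left(\frac{\partial P}{\partial T}\right)_{E} \mathrm{d}E \]

where rho is the density, c_E the specific heat at constant field, and P(T, E) the polarization measured at each field.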

Relevance:

100.00%

Publisher:

Abstract:

The literature part reviewed nanofiltration, reverse osmosis and electrodialysis techniques for the purification of solutions. Nanofiltration and reverse osmosis can separate dissolved substances of low molar mass from the solvent by means of a thin membrane. In nanofiltration and reverse osmosis the driving force is pressure, which must exceed the osmotic pressure of the solution. In electrodialysis the driving force is an electric potential difference; the technique exploits the ability of ions or molecules to conduct electricity. With electrodialysis, uncharged and charged components of a solution can be separated from each other by means of an electrically conductive membrane. In the experimental part, a concentrated aqueous urea solution was filtered with nanofiltration and reverse osmosis membranes, studying the effect of pressure, temperature and concentration on flux and retention. The aim was to recover urea as the product in the permeate and to separate the impurities into the retentate. The impurity concentrations of the permeates were compared with the limit values of the product specification. The filtrations were carried out at Lappeenranta University of Technology with a DSS Labstak M20 filtration unit. The membranes used were NF1, NF2, NF270, NF, NF90, Desal-5 DK, OPMN-P 70 and TFC ULP. The nanofiltration membranes NF2 and NF270 gave the best combination of flux and separation when purifying the urea solution. As the pressure increased, the retention of the membranes improved. As the temperature increased, the flux improved, although the accelerating decomposition of urea on approaching 40 °C must be taken into account. The durability of the membranes in urea filtration could not be verified by these experiments.
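To illustrate the pressure requirement stated above, the osmotic pressure of a non-dissociating solute such as urea can be estimated with the van't Hoff relation pi = cRT. The numbers below are an illustrative back-of-the-envelope calculation (strictly valid for dilute solutions), not values from the experiments:

    R = 8.314    # J/(mol K)
    T = 298.15   # K (25 degC)
    c = 1000.0   # mol/m^3, i.e. 1 mol/L of urea (illustrative concentration)

    pi_pa = c * R * T  # van't Hoff osmotic pressure in Pa
    print(f"osmotic pressure ~ {pi_pa/1e5:.1f} bar")  # ~24.8 bar; applied pressure must exceed this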

Relevance:

100.00%

Publisher:

Abstract:

Résumé: Water is often considered an ordinary substance since it is very common in nature. In fact it is the most remarkable of all substances. Without water, life on Earth would not exist. Water is the major component of the living cell, typically forming 70 to 95% of cellular mass, and it provides an environment for innumerable organisms since it covers 75% of the Earth's surface. Water is a simple molecule made of two hydrogen atoms and one oxygen atom. Its small size seems at odds with the subtlety of its physical and chemical properties. Among these, the fact that, at the triple point, liquid water is denser than ice is particularly remarkable. Despite its special importance in the life sciences, water is systematically removed from biological specimens examined by electron microscopy. The reason is that the high vacuum of the electron microscope requires the biological specimen to be solid. For 50 years the science of electron microscopy has addressed this problem, resulting today in numerous preparation techniques in routine use. Typically these techniques consist of fixing the sample (chemically or by freezing) and replacing its water content with a soft plastic that is transformed into a rigid block by polymerisation. The specimen block is cut into thin sections (about 50 nm) with an ultramicrotome at room temperature. In general, these techniques introduce several artefacts, mainly due to the removal of water. To avoid these artefacts, the specimen can be frozen, cut and observed at low temperature. However, liquid water crystallises upon freezing, resulting in severe damage. Ideally, liquid water is solidified into a vitreous state. Vitrification consists of cooling water so rapidly that ice crystals have no time to form. A breakthrough took place when the vitrification of pure water was discovered experimentally. This discovery opened the way to cryo-microscopy of biological suspensions in thin vitrified films. We have worked to extend the technique to thick specimens. To do so, biological samples must be vitrified, cryo-sectioned into vitreous sections and observed in a cryo-electron microscope. This technique, called cryo-electron microscopy of vitrified sections (CEMOVIS), is now considered the best way to preserve the ultrastructure of biological tissues and cells in a state very close to the native state. Recently, this technique has become a practical method providing excellent results. It has, however, important limitations, the most important of which is certainly due to cutting artefacts. These artefacts are the consequence of the nature of the vitreous material and of the fact that vitreous sections cannot be floated on a liquid, as is the case for plastic sections cut at room temperature. The aim of this work was to improve our understanding of the cutting process and of cutting artefacts. We have thus found optimal conditions to minimise or prevent these artefacts. An improved model of the cutting process and a redefinition of cutting artefacts are proposed.
The results obtained under these conditions are presented and compared with the results obtained with conventional methods. Abstract: Water is often considered to be an ordinary substance since it is transparent, odourless, tasteless and very common in nature. As a matter of fact it can be argued that it is the most remarkable of all substances. Without water life on Earth would not exist. Water is the major component of cells, typically forming 70 to 95% of cellular mass, and it provides an environment for innumerable organisms to live in, since it covers 75% of the Earth's surface. Water is a simple molecule made of two hydrogen atoms and one oxygen atom, H2O. The small size of the molecule stands in contrast with its unique physical and chemical properties. Among those, the fact that, at the triple point, liquid water is denser than ice is especially remarkable. Despite its special importance in life science, water is systematically removed from biological specimens investigated by electron microscopy. This is because the high vacuum of the electron microscope requires that the biological specimen be observed in dry conditions. For 50 years the science of electron microscopy has addressed this problem, resulting in numerous preparation techniques presently in routine use. Typically these techniques consist in fixing the sample (chemically or by freezing) and replacing its water by plastic, which is transformed into a rigid block by polymerisation. The block is then cut into thin sections (c. 50 nm) with an ultramicrotome at room temperature. Usually, these techniques introduce several artefacts, most of them due to water removal. In order to avoid these artefacts, the specimen can be frozen, cut and observed at low temperature. However, liquid water crystallizes into ice upon freezing, thus causing severe damage. Ideally, liquid water is solidified into a vitreous state. Vitrification consists in solidifying water so rapidly that ice crystals have no time to form. A breakthrough took place when vitrification of pure water was discovered. Since this discovery, the thin-film vitrification method has been used with success for the observation of biological suspensions of small particles. Our work was to extend the method to bulk biological samples that have to be vitrified, cryo-sectioned into vitreous sections and observed in a cryo-electron microscope. This technique is called cryo-electron microscopy of vitreous sections (CEMOVIS). It is now believed to be the best way to preserve the ultrastructure of biological tissues and cells very close to the native state for electron microscopic observation. Recently, CEMOVIS has become a practical method achieving excellent results. It has, however, some severe limitations, the most important of them certainly being due to cutting artefacts. They are the consequence of the nature of the vitreous material and the fact that vitreous sections cannot be floated on a liquid, as is the case for plastic sections cut at room temperature. The aim of the present work has been to improve our understanding of the cutting process and of cutting artefacts, thus finding optimal conditions to minimise or prevent these artefacts. An improved model of the cutting process and redefinitions of cutting artefacts are proposed. Results obtained with CEMOVIS under these conditions are presented and compared with results obtained with conventional methods.

Relevance:

100.00%

Publisher:

Abstract:

The markets of biomass for energy are developing rapidly and becoming more international. A remarkable increase in the use of biomass for energy needs parallel and positive development in several areas, and there will be plenty of challenges to overcome. The main objective of the study was to clarify the alternative future scenarios for the international biomass market until the year 2020 and, based on the scenario process, to identify the underlying steps needed towards a vital, working and sustainable biomass market for energy purposes. Two scenario processes were conducted for this study. The first was carried out with a group of Finnish experts and the second involved an international group. A heuristic, semi-structured approach, including the use of preliminary questionnaires as well as manual and computerised group support systems (GSS), was applied in the scenario processes. The scenario processes reinforced the picture of the future of international biomass and bioenergy markets as a complex and multi-layered subject. The scenarios estimated that the biomass market will develop and grow rapidly as well as diversify in the future. The results of the scenario process also opened up new discussion and provided new information and collective views of experts for the purposes of policy makers. An overall view resulting from this scenario analysis is that enormous opportunities relate to the utilisation of biomass as a resource for global energy use in the coming decades. The scenario analysis shows the key issues in the field: global economic growth including the growing need for energy, environmental forces in the global evolution, possibilities of technological development to solve global problems, capabilities of the international community to find solutions for global issues, and the complex interdependencies of all these driving forces. The results of the scenario processes provide a starting point for further research analysing the technological and commercial aspects related to the scenarios and foreseeing the scales and directions of biomass streams.

Relevance:

100.00%

Publisher:

Abstract:

Design aspects of the Transversally Laminated Anisotropic (TLA) Synchronous Reluctance Motor (SynRM) are studied and the machine's performance is analysed in comparison with the Induction Motor (IM). The SynRM rotor structure is designed and manufactured for a 30 kW, four-pole, three-phase squirrel-cage induction motor stator. Both the IM and the SynRM were supplied by a sensorless Direct Torque Controlled (DTC) variable-speed drive. Attention is also paid to estimating the power range in which the SynRM may compete successfully with a same-size induction motor. A technical comparison of loss reduction between the IM and the SynRM in variable-speed drives is made. The Finite Element Method (FEM) is used to analyse the number, location and width of the flux barriers used in a multiple-segment rotor, seeking a high saliency ratio and a high motor torque. A comparison between different FEM calculations is given to analyse SynRM performance. The possibility of taking the effect of iron losses into account with FEM is studied. Comparison between the calculated and measured values shows that the design methods are reliable. A new application of the IEEE 112 measurement method is developed and used especially for the determination of stray-load losses in laboratory measurements. The study shows that, with some special measures, the efficiency of the TLA SynRM is equivalent to that of a high-efficiency IM. The power factor of the SynRM at rated load is smaller than that of the IM; however, at lower partial loads this difference decreases, and the SynRM probably achieves a better power factor than the IM. The large rotor inductance ratio of the SynRM allows good estimation of the rotor position, which is very advantageous for designing a rotor-position-sensorless motor drive. Using the FEM-designed multi-layer transversally laminated rotor with damper windings, it is possible to design a direct-on-line (network-driven) motor without degrading the motor efficiency or power factor compared to the performance of the IM.
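Two textbook relations behind the saliency-ratio discussion above, shown here only for reference (the thesis derives its own FEM-based figures), are the reluctance torque and the maximum power factor of an idealised SynRM:

    \[ T_e = \frac{3}{2}\,p\,(L_d - L_q)\,i_d\,i_q, \qquad \cos\varphi_{\max} = \frac{\xi - 1}{\xi + 1}, \quad \xi = \frac{L_d}{L_q} \]

where p is the number of pole pairs and L_d, L_q the d- and q-axis inductances; a high saliency ratio xi improves both the torque per ampere and the attainable power factor, which is why the number, location and width of the flux barriers are optimised with FEM.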