841 results for physically-based model
Abstract:
Calcium fluoride (CaF2) is one of the key lens materials in deep-ultraviolet microlithography because of its transparency at 193 nm and its nearly perfect optical isotropy. Its physical and chemical properties make it well suited for lens fabrication, and a key feature is its extreme laser stability.
After exposing CaF2 to 193 nm laser irradiation at high fluences, a loss in optical performance is observed, which is related to radiation-induced defect structures in the material. The initial rapid damage process is well understood as the formation of radiation-induced point defects; after long irradiation times of up to two months, however, permanent damage of the crystals is observed. Based on experimental results, these permanent radiation-induced defect structures are identified as metallic Ca colloids.
The properties of point defects in CaF2 and their stabilization in the crystal bulk are calculated with density functional theory (DFT). Because both the stabilization of the point defects and the formation of metallic Ca colloids are diffusion-driven processes, the diffusion coefficients for the vacancy (F center) and the interstitial (H center) in CaF2 are determined with the nudged elastic band method. The optical properties of Ca colloids in CaF2 are obtained from Mie theory, and their formation energy is determined.
Based on the experimental observations and the theoretical description of radiation-induced point defects and defect structures, a diffusion-based model for laser-induced material damage in CaF2 is proposed, which also includes a mechanism for annealing of laser damage.
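As a minimal sketch of how a migration barrier from a nudged-elastic-band calculation translates into a defect diffusion coefficient, the standard transition-state/random-walk relation D = g·a²·ν₀·exp(−E_m/k_BT) can be evaluated as below. The numerical values are illustrative placeholders, not results from the thesis.

```python
# Hedged sketch: Arrhenius-type diffusion coefficient for an isotropic hopping
# point defect, D = g * a^2 * nu0 * exp(-Em / (kB * T)).
# Barrier, hop distance and attempt frequency below are hypothetical examples.
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def diffusion_coefficient(barrier_ev, hop_distance_m, attempt_freq_hz,
                          temperature_k, geometry_factor=1.0 / 6.0):
    """Diffusion coefficient (m^2/s) from a single migration barrier."""
    return (geometry_factor * hop_distance_m**2 * attempt_freq_hz
            * math.exp(-barrier_ev / (K_B * temperature_k)))

# Hypothetical: 0.7 eV barrier, ~2.7 Angstrom hop, 1e13 Hz attempt frequency, 300 K
print(diffusion_coefficient(0.7, 2.7e-10, 1e13, 300.0))
```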
Abstract:
This work focuses on the legal analysis of the model of network-based cooperation among the national authorities of the Member States within the framework of the EU Area of Freedom, Security and Justice (Spazio LSG), with the aim of assessing its contribution, prospects and potential. The discussion is divided into two parts, preceded by a brief theoretical introduction centred on the notion of a network and its legal significance. The first part reconstructs the development of network-based cooperation, highlighting both the circumstantial factors and the legal and structural factors underlying the "networking" of the justice and security sectors. In particular, some critical remarks are developed concerning the operation of the legal instruments implementing the principle of mutual recognition and of those applying the principle of availability of information. The purpose is to highlight the obstacles that frequently prevent cooperation procedures from reaching a successful outcome and to understand the potential and the critical issues arising from the use of the network in the actual application of those procedures. The second part focuses on the analysis of the main networks active in the fields of justice and security, paying particular attention to their respective operating mechanisms. It is divided into two distinct sections dealing with (a) networks supporting the application of judicial-assistance procedures and mutual-recognition instruments, and (b) networks operating in the field of information cooperation, which facilitate the exchange of operational and technical information in crime prevention and law enforcement, especially in the protection of the licit economy. The work concludes by reconstructing the features of a European network model and the role it plays in the exercise of the European Union's competences in matters of justice and security.
Abstract:
Persons affected by Down syndrome show a heterogeneous phenotype that includes developmental defects and cognitive and haematological disorders. Premature, accelerated aging and the consequent development of age-associated diseases such as Alzheimer's disease (AD) seem to be the cause of the higher mortality of DS persons late in life. Down syndrome is caused by the complete or partial trisomy of chromosome 21, but it is not clear whether the molecular alterations of the disease are triggered by the specific functions of a limited number of genes on chromosome 21 or by the disruption of genetic homeostasis due to the presence of a trisomic chromosome. As epigenomic studies can help to shed light on this issue, here we used the Infinium HumanMethylation450 BeadChip to analyse blood DNA methylation patterns of 29 persons affected by Down syndrome (DSP), using their healthy siblings (DSS) and mothers (DSM) as controls. In this way we obtained a family-based model that allowed us to monitor possible confounding effects on DNA methylation patterns deriving from genetic and environmental factors. We showed that defects in DNA methylation map to genes involved in developmental, neurological and haematological pathways. These genes are enriched on chromosome 21 but also localize in the rest of the genome, suggesting that the trisomy of specific genes on chromosome 21 induces a cascade of events that engages many genes on other chromosomes and results in a global alteration of genomic function. We also analysed the methylation status of three target regions localized at the promoter (Ribo) and at the 5' sequences of the 18S and 28S regions of the rDNA, identifying differentially methylated CpG sites. In conclusion, we identified an epigenetic signature of Down syndrome in blood cells that supports a link between developmental defects and the disease phenotype, including segmental premature aging.
Abstract:
Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performances in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of additional information made available when calibrating forest models with a Bayesian approach. In Chapter 2 we applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types representing a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT) to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainty, and parameter estimation. Overall, the Bayesian technique proved to be an excellent and versatile tool for calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
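A minimal sketch of the calibration idea, not the Prelued or HYDRALL code itself: a toy model is calibrated against synthetic observations by random-walk Metropolis-Hastings MCMC, yielding posterior parameter means and spreads as the uncertainty estimate. Model, data, priors and tuning constants here are invented for illustration.

```python
# Hedged sketch of Bayesian calibration via Metropolis-Hastings MCMC.
import numpy as np

rng = np.random.default_rng(0)

def model(params, drivers):
    a, b = params
    return a * drivers / (b + drivers)            # toy saturating response curve

drivers = np.linspace(1.0, 20.0, 50)              # synthetic driver (e.g. radiation)
observed = model((10.0, 5.0), drivers) + rng.normal(0.0, 0.5, drivers.size)

def log_posterior(params, sigma=0.5):
    a, b = params
    if not (0.0 < a < 50.0 and 0.0 < b < 50.0):   # flat prior on a bounded box
        return -np.inf
    residuals = observed - model(params, drivers)
    return -0.5 * np.sum((residuals / sigma) ** 2)

chain, current = [], np.array([1.0, 1.0])
current_lp = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.2, size=2)   # random-walk proposal
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:         # accept/reject step
        current, current_lp = proposal, lp
    chain.append(current.copy())

posterior = np.array(chain[5000:])                      # discard burn-in
print(posterior.mean(axis=0), posterior.std(axis=0))    # estimates and uncertainty
```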
Abstract:
We consider stochastic individual-based models for the social behaviour of groups of animals. In these models the trajectory of each animal is given by a stochastic differential equation with interaction. The social interaction is contained in the drift term of the SDE. We consider a global aggregation force and a short-range repulsion force. The repulsion range and strength are rescaled with the number of animals N. We show that as N tends to infinity the stochastic fluctuations disappear and a smoothed version of the empirical process converges uniformly towards the solution of a nonlinear, nonlocal partial differential equation of advection-reaction-diffusion type. The rescaling of the repulsion in the individual-based model implies that the corresponding term in the limit equation is local, while the aggregation term is non-local. Moreover, we discuss the effect of a predator on the system and derive an analogous convergence result. The predator acts as a repulsive force. Different laws of motion for the predator are considered.
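The following is a hedged sketch, under invented choices of kernels and scalings, of simulating such an interacting-particle SDE by the Euler-Maruyama scheme: each "animal" feels a global attraction towards the group centre and a pairwise short-range repulsion whose range shrinks with N, plus additive noise. It is not the thesis model, only an illustration of the structure.

```python
# Hedged sketch: Euler-Maruyama simulation of N interacting individuals.
import numpy as np

rng = np.random.default_rng(1)
N, dim = 200, 2
dt, steps, sigma = 0.01, 500, 0.1
x = rng.normal(0.0, 1.0, (N, dim))            # initial positions

def drift(x):
    center = x.mean(axis=0)
    attraction = -(x - center)                # global aggregation toward the mean
    eps = N ** -0.5                           # illustrative N-dependent repulsion range
    diff = x[:, None, :] - x[None, :, :]
    dist2 = (diff ** 2).sum(-1) + 1e-12
    kernel = np.exp(-dist2 / eps ** 2)        # short-range repulsion kernel
    np.fill_diagonal(kernel, 0.0)
    repulsion = (diff * kernel[..., None]).sum(axis=1) / N
    return attraction + repulsion / eps ** 2

for _ in range(steps):
    x += drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)

print(x.mean(axis=0), x.std(axis=0))          # crude summary of the final configuration
```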
Abstract:
Registration of pharmaceuticals requires a comprehensive analysis of their genotoxic potential. Because of the multitude of genotoxic mechanisms and the resulting types of damage, the ICH guideline S2(R1) "Guidance on genotoxicity testing and data interpretation for pharmaceuticals intended for human use" defines a tiered test design to identify all genotoxic compounds. The standard test battery is of limited use in early drug development because of its low throughput and the limited amount of compound available. In addition, in vitro genotoxicity tests in mammalian cells have a relatively low specificity. A complete safety assessment requires in vivo carcinogenicity testing; these test systems, however, are costly and time-consuming. New research approaches therefore aim at improving predictivity and at capturing the genotoxic potential already in the early phase of drug development. High content imaging (HCI) technology offers an approach to improve throughput compared with the standard test battery. In addition, a cell-based model has the advantage of generating data relatively quickly while requiring only small amounts of compound. HCI-based test systems therefore allow testing in the early phase of pharmaceutical drug development. The aim of this study was to develop a new, specific and sensitive HCI-based in vitro test system for genotoxins and progenotoxins using HepG2 cells. Because of their limited metabolic capacity, a combined system of HepG2 cells and a metabolic activation system was established for testing progenotoxic compounds. Based on previous genome-wide expression profiling (Boehme et al., 2011) and a literature search, the following nine DNA damage response proteins were selected as putative markers of compound-induced genotoxicity: p-p53 (Ser15), p21, p-H2AX (Ser139), p-Chk1 (Ser345), p-ATM (Ser1981), p-ATR (Ser428), p-CDC2 (Thr14/Tyr15), GADD45A and p-Chk2 (Thr68). The expression or activation of these proteins was determined by HCI 48 h after treatment with the (pro-)genotoxic compounds (cyclophosphamide, 7,12-dimethylbenz[a]anthracene, aflatoxin B1, 2-acetylaminofluorene, methyl methanesulfonate, actinomycin D, etoposide) and the non-genotoxic compounds (D-mannitol, phenformin hydrochloride, progesterone). The best classification was achieved using the following five of the original nine putative marker proteins: p-p53 (Ser15), p21, p-H2AX (Ser139), p-Chk1 (Ser345) and p-ATM (Ser1981). In a second part of this work, the five selected proteins were tested with compounds recommended by the European Centre for the Validation of Alternative Methods (ECVAM) for assessing the performance of new or modified in vitro genotoxicity tests. This new test system achieved a sensitivity of 80% and a specificity of 86%, resulting in a predictivity of 84%. The synergistic effect of these five proteins allows genotoxic compounds that induce DNA damage by a wide variety of mechanisms to be identified with a high success rate.
In summary, a highly predictive test system with metabolic activation was generated for a broad spectrum of potentially genotoxic compounds. Owing to its high throughput, short turnaround time and low compound requirement, it is suitable for compound prioritisation and selection during lead optimisation and, in addition, provides mechanistic clues to the genotoxic mode of action of the test compound.
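For clarity, the performance figures quoted above (80% sensitivity, 86% specificity, 84% predictivity) follow the usual confusion-matrix arithmetic sketched below. The counts are illustrative placeholders chosen only to give ratios of that order, not the actual ECVAM compound calls.

```python
# Hedged sketch of the assay performance arithmetic from a confusion matrix.
def assay_performance(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                    # genotoxins correctly flagged
    specificity = tn / (tn + fp)                    # non-genotoxins correctly cleared
    predictivity = (tp + tn) / (tp + fn + tn + fp)  # overall concordance
    return sensitivity, specificity, predictivity

# Hypothetical counts: 16 true positives, 4 false negatives, 18 true negatives, 3 false positives
print(assay_performance(tp=16, fn=4, tn=18, fp=3))
```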
Abstract:
A characteristic neuropathological hallmark of Alzheimer's disease (AD), the most common form of human dementia, is the appearance of senile plaques in the brains of patients. The neurotoxic A-beta peptide is the main component of these deposits. One contribution to the pathologically increased A-beta generation is the shifted balance of expression of the proteases BACE-1 and ADAM10, which compete for APP, in favour of the beta-secretase BACE-1. This dissertation aimed to identify molecular mechanisms that contribute to a pathologically altered balance of APP cleavage and thus to the onset and progression of AD. Furthermore, compounds were to be identified that can restore the physiological balance of APP processing by modulating the gene expression of one of the two proteases and are thus potentially useful therapeutically.
In a screening of 704 transcription factors, 23 factors were obtained that influenced the ratio of ADAM10 to BACE-1 promoter activity. Two of these molecular factors were examined in more detail with respect to their mechanism of action: the transcription factor X box binding protein-1 (XBP-1), which regulates the so-called unfolded protein response (UPR), increased ADAM10 expression in cell culture experiments. The amount of this factor was significantly reduced in AD patients compared with healthy age-matched controls. In contrast, the senescence-associated transcription factor T box 2 (Tbx2) reduced the amount of ADAM10 in SH-SY5Y cells. Expression of this factor itself was increased in post-mortem cortex tissue from AD patients. In addition to the transcription factors, in a collaboration with the Helmholtz Zentrum München three microRNAs (miRNA 103, 107 and 1306) that reduce the expression of human ADAM10 were predicted bioinformatically and validated experimentally.
This work thus identified endogenous factors that regulate the amount of ADAM10 and are therefore potentially involved in the development of the disturbed homeostasis of APP processing. AD should consequently be understood, also with respect to A-beta-mediated pathology, as a multifactorial disease in which different regulators can contribute to disturbed APP processing and thus to pathologically increased A-beta generation.
A pharmacological increase in ADAM10 gene expression would lead to the release of neuroprotective APPs-alpha and, at the same time, to reduced A-beta generation. A further aim of this work was therefore to evaluate compounds with therapeutic potential for increasing ADAM10 expression. From a library of 640 FDA-approved drugs, 23 compounds were identified that significantly increased the amount of ADAM10 while the expression of BACE-1 and APP remained unaffected. In collaboration with the Institute of Pathology (Johannes Gutenberg University Mainz), a cell-culture-based model was established to investigate the ability of the potential candidate compounds to permeate the blood-brain barrier (BBB). Of the 23 drugs, nine were characterised in this model as able to cross the BBB. These remaining drugs thus fulfil the basic requirements for an AD therapeutic.
Besides APP, ADAM10 cleaves a large number of other substrates with diverse cellular functions. For example, the cell adhesion molecule neuroligin-1 (NL-1), which is processed by ADAM10, regulates the synaptic function of excitatory neurons. For this reason, assessing potential therapy-related side effects is very important. During a research stay at the University of Tokyo, a retinoid-induced increase in ADAM10 in primary rat cortical neurons was observed to lead not only to increased alpha-secretory APP processing but also to increased cleavage of NL-1. This suggests that treatment with the retinoid acitretin affects not only APP cleavage by ADAM10 but also, via the cleavage of NL-1, the regulation of glutamatergic neurons. These findings should be analysed further in a suitable Alzheimer animal model in order to establish a safe therapeutic approach based on increased ADAM10 gene expression.
Abstract:
Experimental measurements are used to characterize the anisotropy of flow stress in extruded magnesium alloy AZ31 sheet during uniaxial tension tests at temperatures between 350°C and 450°C and strain rates ranging from 10⁻⁵ to 10⁻² s⁻¹. The sheet exhibits lower flow stress and higher tensile ductility when loaded with the tensile axis perpendicular to the extrusion direction than when loaded parallel to the extrusion direction. This anisotropy is found to be grain size, strain rate, and temperature dependent, but only weakly dependent on texture. A microstructure-based model (D. E. Cipoletti, A. F. Bower, P. E. Krajewski, Scr. Mater., 64 (2011) 931–934) is used to explain the origin of the anisotropic behavior. In contrast to room-temperature behavior, where anisotropy is principally a consequence of the low resistance to slip on the basal slip system, the elevated-temperature anisotropy is found to be caused by the grain structure of the extruded sheet. The grains are elongated parallel to the extrusion direction, leading to a lower effective grain size perpendicular to the extrusion direction. As a result, grain boundary sliding occurs more readily when the material is loaded perpendicular to the extrusion direction.
Abstract:
Soil erosion models and soil erosion risk maps are often used as indicators to assess potential soil erosion in order to assist policy decisions. This paper presents the scientific basis of the soil erosion risk map of Switzerland and its application in policy and practice. Linking a USLE/RUSLE-based model approach (AVErosion), founded on multiple flow algorithms and the unit contributing area concept, with an extremely precise, high-resolution digital terrain model (2 m × 2 m grid) in a GIS allows for a realistic assessment of the potential soil erosion risk on single plots, i.e. uniformly and comprehensively for the agricultural area of Switzerland (862,579 ha in the valley area and the lower mountain regions). National, small-scale soil erosion prognosis has thus reached a level of detail heretofore possible only in smaller catchment areas or on single plots. Validation was carried out using soil loss data from field mappings of soil erosion damage in long-term monitoring of different test areas. 45% of the evaluated agricultural area of Switzerland was classified as low potential erosion risk, 12% as moderate potential erosion risk, and 43% as high potential erosion risk. However, many of the areas classified as high potential erosion risk are located at the transition from the valley to the mountain zone, where much of the land is used as permanent grassland, which drastically lowers its current erosion risk. The soil erosion risk map serves on the one hand to identify and prioritise the high-risk areas and on the other hand to promote awareness among farmers and authorities. It was published on the internet and will be made available to the authorities in digital form. It is intended as a tool for simplifying and standardising enforcement of the legal framework for soil erosion prevention in Switzerland. The work therefore provides a successful example of cooperation between science, policy and practice.
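At the core of any USLE/RUSLE-based approach such as AVErosion is the multiplicative soil loss equation A = R·K·LS·C·P. The sketch below evaluates it for illustrative factor values; these are placeholders, not values from the Swiss map or from AVErosion itself.

```python
# Hedged sketch of the (R)USLE soil loss relation A = R * K * LS * C * P.
def usle_soil_loss(r, k, ls, c, p):
    """Mean annual soil loss A from rainfall erosivity R, soil erodibility K,
    slope length/steepness factor LS, cover-management factor C and
    support practice factor P (units depend on the factor system used)."""
    return r * k * ls * c * p

# Hypothetical factors: moderately erosive rainfall, loamy soil, moderate slope, arable cover
print(usle_soil_loss(r=80.0, k=0.30, ls=1.5, c=0.15, p=1.0))  # ~5.4 t/(ha*yr)
```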
Abstract:
The effect of shot particles on the high-temperature, low-cycle fatigue of a hybrid fiber/particulate metal-matrix composite (MMC) was studied. Two hybrid composites with the general composition A356/35% SiC particle/5% fiber, one containing shot and one without, were tested. It was found that shot particles acting as stress concentrators had little effect on the fatigue performance. Fibers with a high silica content appeared more likely to debond from the matrix. Final failure of the composite was found to occur preferentially in the matrix. SiC particles fractured progressively during fatigue testing, leading to higher stress in the matrix and final failure by matrix overload. A continuum-mechanics-based model was developed to predict failure in fatigue based on the tensile properties of the matrix and particles. By accounting for matrix yielding and recovery, composite creep, and the particle strength distribution, failure of the composite was predicted.
Abstract:
Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems with well defined vegetation and soil characteristics. Development of an all-encompassing definition for riparian ecotones, because of their high variability, is challenging. However, there are two primary factors that all riparian ecotones are dependent on: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have utilized fixed width buffers, but this methodology has proven to be inadequate as it only takes the watercourse into consideration and ignores critical geomorphology, associated vegetation and soil characteristics. Our approach offers advantages over other previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the water course that can distinguish the 50-year flood plain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Database (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year plain; and land use/cover characteristics associated with the delineated riparian zones. The model utilizes spatial data readily available from Federal and State agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5 and 10m), and positional inaccuracies with the National Hydrography Dataset (NHD) streams layer on the boundary placement of the delineated variable width riparian ecotones area. The result of this study is a robust and automated GIS based model attached to ESRI ArcMap software to delineate and classify variable-width riparian ecotones.
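As a rough, illustrative stand-in for the flood-plain criterion that seeds such a variable-width delineation (not the thesis ArcMap model), the sketch below flags DEM cells that lie within an assumed 50-year flood height above the nearest stream cell, using a toy grid and a hypothetical flood height.

```python
# Hedged sketch: flag cells within a flood height above the nearest stream cell.
import numpy as np
from scipy.ndimage import distance_transform_edt

dem = np.array([[5.0, 4.0, 3.5, 4.2],
                [4.5, 3.0, 2.8, 3.9],
                [4.8, 2.5, 2.6, 3.7],
                [5.2, 4.1, 3.2, 4.5]])             # toy elevation grid (m)
stream = np.zeros(dem.shape, dtype=bool)
stream[2, 1] = stream[2, 2] = stream[1, 2] = True  # toy stream cells

# Distance transform of the non-stream cells also returns, for every cell,
# the row/column indices of its nearest stream cell.
_, nearest = distance_transform_edt(~stream, return_indices=True)
nearest_stream_elev = dem[nearest[0], nearest[1]]

flood_height = 1.0                                  # hypothetical 50-year flood height (m)
riparian = (dem - nearest_stream_elev) <= flood_height
print(riparian.astype(int))
```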
Abstract:
Heterogeneous materials are ubiquitous in nature and as synthetic materials. These materials provide unique combinations of desirable mechanical properties emerging from their heterogeneities at different length scales. Future structural and technological applications will require the development of advanced lightweight materials with superior strength and toughness. Cost-effective design of advanced high-performance synthetic materials by tailoring their microstructure is the challenge facing the materials design community. Prior knowledge of structure-property relationships for these materials is imperative for optimal design, so understanding such relationships for heterogeneous materials is of primary interest. Furthermore, computational burden is becoming a critical concern in several areas of heterogeneous materials design, and computationally efficient, accurate predictive tools are therefore highly desirable. In the present study we focus mainly on the mechanical behavior of soft cellular materials and of a tough biological material, the mussel byssus thread. Cellular materials exhibit microstructural heterogeneity through an interconnected network of a single material phase, whereas the mussel byssus thread comprises two distinct material phases. A robust numerical framework is developed to investigate the micromechanisms behind the macroscopic response of both of these materials. Using this framework, the effect of microstructural parameters on the stress state of cellular specimens during the split Hopkinson pressure bar test is addressed. A Voronoi-tessellation-based algorithm has been developed to simulate the cellular microstructure. The micromechanisms (microinertia, microbuckling and microbending) governing the macroscopic behavior of cellular solids are investigated thoroughly with respect to various microstructural and loading parameters. To understand the origin of the high toughness of the mussel byssus thread, a Genetic Algorithm (GA) based optimization framework has been developed; it is found that the two material phases (collagens) of the mussel byssus thread are optimally distributed along the thread. These applications demonstrate that the presence of heterogeneity in the system demands high computational resources for simulation and modeling. A High Dimensional Model Representation (HDMR) based surrogate modeling concept is therefore proposed to reduce the computational complexity, and its applicability is demonstrated in failure envelope construction and in multiscale finite element techniques. It is observed that surrogate-based models can capture the behavior of complex material systems with sufficient accuracy. The computational algorithms presented in this thesis further pave the way for accurate prediction of the macroscopic deformation behavior of various classes of advanced materials from their measurable microstructural features at a reasonable computational cost.
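The first step of such a framework, generating a Voronoi-tessellation-based cellular microstructure from random seed points, can be sketched as follows. This uses SciPy's Voronoi routine as an assumed stand-in for the thesis algorithm and is only illustrative.

```python
# Hedged sketch: a random 2D Voronoi tessellation as a model cellular microstructure.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(42)
seeds = rng.uniform(0.0, 1.0, size=(50, 2))       # 50 random cell nuclei in a unit square
vor = Voronoi(seeds)

# Each finite Voronoi region is one "cell" of the microstructure; its ridges are the walls.
finite_regions = [vor.regions[i] for i in vor.point_region
                  if -1 not in vor.regions[i] and len(vor.regions[i]) > 0]
print(f"{len(finite_regions)} closed cells out of {len(seeds)} seeds")
```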
Abstract:
Time-averaged discharge rates (TADR) were calculated for five lava flows at Pacaya Volcano (Guatemala), using an adapted version of a previously developed satellite-based model. Imagery acquired during periods of effusive activity between the years 2000 and 2010 was obtained from two sensors of differing temporal and spatial resolutions: the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Geostationary Operational Environmental Satellites (GOES) Imager. A total of 2873 MODIS and 2642 GOES images were searched manually for volcanic "hot spots". MODIS imagery, with its superior spatial resolution, produced better results than GOES imagery, so only MODIS data were used for the quantitative analyses. Spectral radiances were transformed into TADR via two methods: first, by best-fitting some of the parameters of the TADR estimation model (i.e. density, vesicularity, crystal content, temperature change) to match flow volumes previously estimated from ground surveys and aerial photographs, and second, by measuring those parameters from lava samples to make independent estimates. A relatively stable relationship was defined using the second method, which suggests the possibility of estimating lava discharge rates in near-real time during future volcanic crises at Pacaya.
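A hedged sketch of the kind of thermal-proxy conversion such satellite-based models rest on (a Harris-style heat budget, TADR = Q_rad / [ρ(c_p ΔT + φ c_L)]) is given below. All parameter values are generic placeholders, not the calibrated Pacaya values, and the exact formulation of the thesis model may differ.

```python
# Hedged sketch: time-averaged discharge rate from thermally radiated power.
def tadr_from_radiant_power(q_rad_w, density=2600.0, vesicularity=0.2,
                            specific_heat=1150.0, delta_t=250.0,
                            crystallisation=0.45, latent_heat=3.5e5):
    """TADR in m^3/s from radiant power in W (all lava properties are placeholders)."""
    rho = density * (1.0 - vesicularity)          # bulk density corrected for vesicles
    heat_per_m3 = rho * (specific_heat * delta_t + crystallisation * latent_heat)
    return q_rad_w / heat_per_m3

print(tadr_from_radiant_power(5e9))               # ~5 m^3/s for 5 GW of radiated power
```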
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to its reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients, and component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple; however, due to their nonlinear and frequency-dependent behavior, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Transformer modeling is thus not a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard specifies how to measure and calculate the parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including core hysteresis, the λ-i saturation characteristic, capacitive effects, and the frequency dependency of winding resistance and core loss. Steady-state excitation as well as de-energization and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods for cases where the available data and measurements are incomplete; the accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. The theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
Abstract:
This document demonstrates the methodology used to create an energy- and conductance-based model for power electronic converters. The work is intended as a replacement for voltage- and current-based models, which have limited applicability to the network nodal equations. Conductance-based modeling allows direct application of the load differential equations to the bus admittance matrix (Y-bus) with a unified approach. When applied directly to the Y-bus, the system becomes much easier to simulate, since the state variables do not need to be transformed. The proposed transformation applies to loads, sources, and energy storage systems and is useful for DC microgrids. Transformed state models of a complete microgrid are compared to experimental results and show that the models accurately reflect the system's dynamic behavior.
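As a minimal illustration of the nodal formulation that conductance-based models plug into (not the thesis converter models themselves), the sketch below assembles a Y-bus for a small DC network from branch conductances and solves the nodal equation Y v = i for the non-reference bus voltages. The network, conductances and injections are hypothetical.

```python
# Hedged sketch: build a bus admittance matrix and solve the nodal equations.
import numpy as np

# branches as (from_bus, to_bus, conductance in S); bus 0 is the reference/ground
branches = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]
n_bus = 3

Y = np.zeros((n_bus, n_bus))
for a, b, g in branches:
    Y[a, a] += g          # self-conductance terms on the diagonal
    Y[b, b] += g
    Y[a, b] -= g          # mutual terms off the diagonal
    Y[b, a] -= g

i_inj = np.array([0.0, 1.5, -0.5])    # hypothetical nodal current injections (A)

# Ground bus 0 and solve the reduced system for the remaining bus voltages.
v = np.zeros(n_bus)
v[1:] = np.linalg.solve(Y[1:, 1:], i_inj[1:])
print(v)                               # node voltages relative to bus 0
```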