954 results for Eigenfunctions and fundamental solution


Relevance:

100.00%

Publisher:

Abstract:

Traditional lime mortar is composed of hydrated lime, sand and water. Besides these constituents it may also contain additives intended to modify the fresh mortar's properties and/or to improve the hardened mortar's strength and durability. Various additives were already used in the earliest civilizations to enhance mortar quality; among the organic additives, linseed oil was one of the most common. From the literature we know that it was used as early as the Roman period to reduce the water permeability of a mortar, but the mechanism and the technology, e.g. the effects of different dosages, are not clearly explained. Only a few works have studied the effect of oil experimentally. Knowing the function of oil in historical mortars is important for designing a new compatible repair mortar. Moreover, the addition of linseed oil could increase the sometimes insufficient durability of lime-based mortars used for repair, and it could be a natural alternative to synthetic additives. In the present study, the effect of linseed oil on the properties of six different lime-based mortars has been investigated. The mortar compositions were selected with respect to the composition of historical mortars, but mortars used in modern restoration practice were also tested. Oil was added in two different concentrations: 1% and 3% by weight of binder. The addition of 1% linseed oil proved to have a positive effect on the mortars' properties. It improves the mechanical characteristics and limits water absorption into the mortar without significantly affecting the total open porosity or decreasing the degree of carbonation. On the other hand, the 3% addition of linseed oil makes the mortar almost hydrophobic, but it markedly decreases the mortar's strength. All types of tested lime-based mortars with the oil addition showed significantly decreased water and salt-solution absorption by capillary rise. The addition of oil also decreases the proportion of pores that are easily accessible to water. Furthermore, mortars with linseed oil showed significantly improved resistance to salt crystallization and freeze-thaw cycles. On the basis of the obtained results, the addition of 1% linseed oil can be taken into consideration in the design of mortars meant to repair or replace historic mortars.

Relevance:

100.00%

Publisher:

Abstract:

The 'stuffed high-quartz' β-eucryptite (LiAlSiO4) is known for its exceptional anisotropic Li-ion conductivity and its near-zero thermal expansion. The temperature-dependent phase sequence of β-eucryptite was investigated, in particular the modulated phase. Its satellite reflections are considerably broadened compared to the normal reflections and overlap with one another as well as with the 'a-reflections' lying between them to form triplets. Existing standard procedures for diffraction data collection were unsuitable for separating the triplet intensities, so a novel procedure, 'axial q-scans', was developed. Intensities were extracted serially and automatically from 2000 profiles with the newly developed least-squares program GKLS and successfully scaled to standard data. The use of broadened reflection profiles proved to be admissible. The broadening was attributed to a reduced long-range order of the modulation of only 11 to 16 periods (analysed with the lattice function); this corresponds to an unusual diffraction-angle dependence of the reflection widths and correlates with the typical antiphase-domain diameters reported by other authors. A reduced Si/Al order is regarded as the cause of the small domain sizes and limited long-range order, as well as of properties such as the a/c ratios, thermal expansion coefficients, ionic conductivity, structure type and transformation temperatures. Changes in the SiO2 content, the temperature or the Si/Al order have similar effects on some of these properties. The averaged structure of the modulated phase was reliably determined for the first time, the role of Li was characterized, doubts about the hexagonal symmetry of β-eucryptite were dispelled, and the determination of the modulated structure itself was largely prepared.
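
As a generic illustration of the kind of profile decomposition involved (this is not the GKLS program itself, and the peak model, parameters and variable names are invented for the example), a least-squares separation of three overlapping reflections in a one-dimensional scan could be sketched as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def triplet(x, a1, c1, a2, c2, a3, c3, w, bg):
    """Three Gaussians of common width w on a constant background bg."""
    g = lambda a, c: a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return g(a1, c1) + g(a2, c2) + g(a3, c3) + bg

# Synthetic overlapping "triplet" profile (satellite / a-reflection / satellite).
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
true_profile = triplet(x, 120, -0.25, 300, 0.0, 150, 0.25, w=0.12, bg=20)
counts = rng.poisson(true_profile).astype(float)

# Least-squares fit; integrated intensities follow from amplitude and width.
p0 = [100, -0.3, 250, 0.0, 100, 0.3, 0.1, 10]
popt, _ = curve_fit(triplet, x, counts, p0=p0)
a1, c1, a2, c2, a3, c3, w, bg = popt
for name, a in zip(("satellite -q", "a-reflection", "satellite +q"), (a1, a2, a3)):
    print(f"{name:>14s}: integrated intensity ~ {a * w * np.sqrt(2 * np.pi):8.1f}")
```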

Relevance:

100.00%

Publisher:

Abstract:

The demand for hyperpolarized 3He in medicine and in fundamental physics research has risen steadily over the last 10-15 years, both with respect to the available quantities and to the required degree of nuclear spin polarization. At the same time, solutions for polarization-preserving storage and transport had to be found and adapted to each application. As a result, this work presents a self-contained overall concept that can provide both the quantities required for clinical applications and the highest polarization for fundamental physics research. Several independent polarimetry methods yielded mutually consistent results and, besides being further developed themselves, could be used for a reliable characterization of the new system as well as of the transport cells and transport boxes. The polarization is produced by metastability-exchange optical pumping at a pressure of 1 mbar. Without gas flow, values of P = 84% are reached; in flow operation the achievable polarization drops to P ≈ 77%. The 3He can then be compressed to several bar largely without polarization losses and transported to the respective experiments. Through consistent further development of almost all components of the polarization unit presented here, a polarization of Pmax = 77% at the outlet of the apparatus can now be achieved at a flow of 0.8 bar·l/h. This scales linearly with the flow, so that at 3 bar·l/h the polarization still lies at about 60%. The improvements to the lasers, the optics, the compression unit, the intermediate storage and the gas purification carried out within this work were essential for reaching these polarizations. Besides the use of a new fiber-laser system, the high gas purity and the long-lived compression unit are key to this performance. Since autumn 2001 the system has already produced more than 2000 bar·l of highly polarized 3He and has thereby enabled numerous interdisciplinary experiments and investigations. Thanks to improvements to the transport boxes that already existed as prototypes, and to the extensive suppression of wall relaxation in the transport vessels based on new insights into its causes, polarization-preserving transport over large distances no longer poses a problem. In uncoated 1-litre flasks made of aluminosilicate glasses, storage times of T1 > 200 h are now routinely achieved. Within the European research project "Polarized Helium to Image the Lung", 70 bar·l of 3He were delivered to Sheffield (UK) in 19 shipments and 100 bar·l to Copenhagen (DK) in 13 shipments by aircraft. In summary, it was shown that the problems of producing nuclear spin polarization of 3He, of storing and transporting the polarized gas, and of using it in clinical diagnostics and fundamental physics experiments have largely been solved, and that the overall concept has created the prerequisites for general applications in these fields.
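
As a back-of-the-envelope illustration (not a result quoted from the work), the polarization loss during a shipment can be estimated from the quoted storage time by assuming purely exponential wall relaxation; the 24 h transport duration below is a hypothetical example value:

\[
P(t) = P_0\, e^{-t/T_1}
\;\;\Longrightarrow\;\;
\frac{P(24\,\mathrm{h})}{P_0} = e^{-24/200} \approx 0.89
\quad\text{for } T_1 = 200\,\mathrm{h},
\]

i.e. roughly 90% of the initial polarization would survive a one-day transport in such a storage cell.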

Relevance:

100.00%

Publisher:

Abstract:

In this work we are concerned with the analysis and numerical solution of Black-Scholes type equations arising in the modeling of incomplete financial markets and an inverse problem of determining the local volatility function in a generalized Black-Scholes model from observed option prices. In the first chapter a fully nonlinear Black-Scholes equation which models transaction costs arising in option pricing is discretized by a new high order compact scheme. The compact scheme is proved to be unconditionally stable and non-oscillatory and is very efficient compared to classical schemes. Moreover, it is shown that the finite difference solution converges locally uniformly to the unique viscosity solution of the continuous equation. In the next chapter we turn to the calibration problem of computing local volatility functions from market data in a generalized Black-Scholes setting. We follow an optimal control approach in a Lagrangian framework. We show the existence of a global solution and study first- and second-order optimality conditions. Furthermore, we propose an algorithm that is based on a globalized sequential quadratic programming method and a primal-dual active set strategy, and present numerical results. In the last chapter we consider a quasilinear parabolic equation with quadratic gradient terms, which arises in the modeling of an optimal portfolio in incomplete markets. The existence of weak solutions is shown by considering a sequence of approximate solutions. The main difficulty of the proof is to infer the strong convergence of the sequence. Furthermore, we prove the uniqueness of weak solutions under a smallness condition on the derivatives of the covariance matrices with respect to the solution, but without additional regularity assumptions on the solution. The results are illustrated by a numerical example.
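
As a concrete point of reference for the numerical part, the sketch below solves the plain linear Black-Scholes equation for a European call with a standard Crank-Nicolson finite-difference scheme. It is only a second-order baseline under simple Dirichlet boundary conditions, not the unconditionally stable high-order compact scheme developed in the thesis, and all grid sizes, parameter values and the function name bs_crank_nicolson are illustrative assumptions.

```python
import numpy as np

def bs_crank_nicolson(S_max=300.0, K=100.0, r=0.05, sigma=0.2, T=1.0, M=200, N=200):
    """Crank-Nicolson solver for the linear Black-Scholes PDE (European call).

    A standard second-order baseline scheme, not the high-order compact
    scheme analyzed in the thesis.
    """
    S = np.linspace(0.0, S_max, M + 1)       # asset-price grid
    dS = S_max / M
    dt = T / N
    V = np.maximum(S - K, 0.0)               # payoff at maturity

    i = np.arange(1, M)                       # interior nodes
    # L V = 0.5 sigma^2 S^2 V_SS + r S V_S - r V, discretized with central differences
    a = 0.5 * sigma**2 * S[i]**2 / dS**2 - 0.5 * r * S[i] / dS    # sub-diagonal
    b = -sigma**2 * S[i]**2 / dS**2 - r                            # diagonal
    c = 0.5 * sigma**2 * S[i]**2 / dS**2 + 0.5 * r * S[i] / dS    # super-diagonal

    # Build (I - dt/2 L) and (I + dt/2 L) as dense matrices for clarity.
    L = np.zeros((M - 1, M - 1))
    L[np.arange(M - 1), np.arange(M - 1)] = b
    L[np.arange(1, M - 1), np.arange(M - 2)] = a[1:]
    L[np.arange(M - 2), np.arange(1, M - 1)] = c[:-1]
    A = np.eye(M - 1) - 0.5 * dt * L
    B = np.eye(M - 1) + 0.5 * dt * L

    for n in range(N):                        # march in time-to-maturity tau
        tau = (n + 1) * dt
        rhs = B @ V[1:M]
        # Dirichlet boundaries: V(0, tau) = 0, V(S_max, tau) ~ S_max - K e^{-r tau}
        upper_new = S_max - K * np.exp(-r * tau)
        upper_old = S_max - K * np.exp(-r * n * dt)
        rhs[-1] += 0.5 * dt * c[-1] * (upper_old + upper_new)
        V[1:M] = np.linalg.solve(A, rhs)
        V[0], V[M] = 0.0, upper_new
    return S, V

if __name__ == "__main__":
    S, V = bs_crank_nicolson()
    print("price at S = 100:", np.interp(100.0, S, V))
```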

Relevance:

100.00%

Publisher:

Abstract:

Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. For this, a complete renormalization scheme for models that allow for fermion mixing is indispensable. The correct treatment of unstable particles makes this task difficult, and no satisfactory general solution can yet be found in the literature. In the present work, we study the renormalization of the fermionic Lagrangian density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second is an application to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave-function renormalization constants are determined such that the subtracted propagator is diagonal on-shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion differ from those that should renormalize the outgoing fermion and the incoming antifermion, and they are not related by hermiticity, as one would desire. Instead of defining field renormalization constants identical to the wave-function renormalization ones, we differentiate the two by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections of external legs. In the case of Majorana fermions, the constraints on the model are less restrictive, and one has a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counterterm for fermion mixing matrices cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge-parameter dependence in its counterterm. Therefore, we propose to determine the mixing-matrix counterterm by fixing the complete correction terms for a physical process to experimental measurements. As an example, we calculate the decay rate of a top quark and of a heavy neutrino. In each of the chosen models we provide sample calculations that can easily be extended to other theories.
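
For orientation, the on-shell identification of mass and width from the full propagator mentioned above is conventionally expressed through its complex pole; the display below is a standard schematic form of that condition, not a formula quoted from the thesis:

\[
S(p) = \frac{i}{\slashed{p} - m_0 - \Sigma(\slashed{p})},
\qquad
\Bigl[\slashed{p} - m_0 - \Sigma(\slashed{p})\Bigr]\Big|_{\slashed{p}\, =\, M - \frac{i}{2}\Gamma} = 0,
\]

with the real and imaginary parts of the pole giving the physical mass M and the decay width Γ, while the wave-function renormalization constants are then fixed by requiring the subtracted propagator to be diagonal on-shell.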

Relevance:

100.00%

Publisher:

Abstract:

Many industries and academic institutions share the vision that an appropriate use of information originating from the environment may add value to services in multiple domains and may help humans deal with the growing information overload which often seems to jeopardize our lives. It is also clear that information sharing and mutual understanding between software agents may impact complex processes in which many actors (humans and machines) are involved, leading to relevant socio-economic benefits. Starting from these two inputs, architectural and technological solutions to enable "environment-related cooperative digital services" are explored here. The proposed analysis starts from the consideration that our environment is physical space, where diversity is a major value. On the other hand, diversity is detrimental to common technological solutions and is an obstacle to mutual understanding. An appropriate environment abstraction and a shared information model are needed to provide the required levels of interoperability in our heterogeneous habitat. This thesis reviews several approaches to supporting environment-related applications and intends to demonstrate that smart-space-based, ontology-driven, information-sharing platforms may become a flexible and powerful solution to support interoperable services in virtually any domain and even in cross-domain scenarios. It also shows that semantic technologies can be fruitfully applied beyond the representation of application-domain knowledge. For example, semantic modeling of Human-Computer Interaction may support interaction interoperability and the transformation of interaction primitives into actions, and the thesis shows how smart-space-based platforms driven by an interaction ontology may enable natural and flexible ways of accessing resources and services, e.g. with gestures. An ontology for computational flow execution has also been built to represent abstract computation, with the goal of exploring new ways of scheduling computation flows with smart-space-based semantic platforms.
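
As a minimal, generic illustration of the ontology-driven information-sharing idea (not the actual platform used in the thesis), the sketch below publishes a device description as RDF triples and retrieves it with a SPARQL query using the rdflib library; all names and URIs are invented for the example.

```python
from rdflib import Graph, Literal, Namespace, RDF

# A shared, in-memory "smart space": in a real deployment this graph would
# live in a semantic information broker accessible to all agents.
space = Graph()
EX = Namespace("http://example.org/smartspace#")

# Agent A publishes what it knows about a lamp in the environment.
space.add((EX.lamp1, RDF.type, EX.Lamp))
space.add((EX.lamp1, EX.locatedIn, EX.livingRoom))
space.add((EX.lamp1, EX.state, Literal("off")))

# Agent B, which shares the same ontology, queries the space without
# knowing anything about Agent A's implementation.
query = """
PREFIX ex: <http://example.org/smartspace#>
SELECT ?device ?state WHERE {
    ?device a ex:Lamp ;
            ex:locatedIn ex:livingRoom ;
            ex:state ?state .
}
"""
for device, state in space.query(query):
    print(f"{device} is currently {state}")
```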

Relevance:

100.00%

Publisher:

Abstract:

Assuming that the parameters of a rainfall-runoff model are spatially invariant can be a practical and valid solution when one wants to estimate the water-resource availability of an area. Hydrological simulation is indeed a widely adopted tool, but it presents some critical issues, linked above all to the need to calibrate the model parameters. If one opts for spatially distributed models, which are useful because they can account for the spatial variability of the phenomena contributing to runoff formation, the problem is usually the large number of parameters involved. By assuming that some of them are homogeneous in space, i.e. that they take the same value over the different catchments, it is possible to reduce the overall number of parameters that need calibration. This assumption is verified on a statistical basis by estimating the parameter uncertainty by means of an MCMC algorithm; the parameter distributions turn out to be compatible, to varying degrees, across the catchments considered. When the goal is then to estimate the water-resource availability of ungauged catchments, the parameter-invariance hypothesis becomes even more important; this problem is usually tackled with lengthy parameter-regionalization analyses. Here, instead, a cross-calibration procedure is proposed, which is carried out using the information coming from the gauged catchments most similar to the site of interest. The aim is to reach a fair compromise between the disadvantage of assuming constant model parameters over the gauged catchments and the benefit of introducing, step by step, new and important information from the gauged catchments involved in the analysis. The results demonstrate the usefulness of the proposed methodology: in the validation phase on the catchment treated as ungauged, a good agreement between the simulated and observed discharge series can be achieved.
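
The following toy sketch, written only to illustrate the kind of MCMC-based parameter-uncertainty estimation mentioned above (it is not the model or algorithm of the thesis), uses a random-walk Metropolis sampler to recover the posterior of a single runoff-coefficient parameter of a trivial rainfall-runoff relation; all names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": runoff = c * rainfall, with c the only parameter to calibrate.
rainfall = rng.uniform(0.0, 50.0, size=200)
c_true, noise_sd = 0.35, 2.0
observed = c_true * rainfall + rng.normal(0.0, noise_sd, size=rainfall.size)

def log_posterior(c):
    # Flat prior on [0, 1], Gaussian likelihood for the residuals.
    if not 0.0 <= c <= 1.0:
        return -np.inf
    residuals = observed - c * rainfall
    return -0.5 * np.sum((residuals / noise_sd) ** 2)

# Random-walk Metropolis sampler.
n_iter, step = 20000, 0.02
chain = np.empty(n_iter)
c_current, lp_current = 0.5, log_posterior(0.5)
for i in range(n_iter):
    c_prop = c_current + step * rng.normal()
    lp_prop = log_posterior(c_prop)
    if np.log(rng.uniform()) < lp_prop - lp_current:
        c_current, lp_current = c_prop, lp_prop
    chain[i] = c_current

burn = chain[n_iter // 2:]          # discard the first half as burn-in
print(f"posterior mean = {burn.mean():.3f}, 95% interval = "
      f"({np.quantile(burn, 0.025):.3f}, {np.quantile(burn, 0.975):.3f})")
```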

Relevance:

100.00%

Publisher:

Abstract:

In the present doctoral work, the water-like solvent liquid ammonia is used to prepare the kinetically unstable coinage-metal silyls in high yields. Various base-free potassium silanides, which dissolve well in liquid ammonia without appreciable protolysis, served as starting compounds. In contrast to conventional syntheses carried out in organic solvents, the readily accessible coinage-metal halides can be employed instead of alkoxides. At a 1:1 stoichiometry, depending on the steric demand of the silyl residue, either the cyclic forms or the ammoniates of the dimeric copper and silver silyls are obtained, whereas two equivalents of base-free potassium silanide and one equivalent of coinage-metal halide lead to the homologous homocuprates, -argentates and -aurates. In addition, the influence of the differing steric demand of the silyl ligands and of the bound coinage metal on the structural parameters is examined. In the second part of this work, the aurate KAuHyp2 (Hyp = Si(SiMe3)3) prepared in this way is treated with trimethylchlorosilane in various organic solvents. Depending on the stoichiometry, the reaction time, the temperature and the solvent used, a variety of new anionic gold silyl complexes are obtained for the first time; one example is the compound [K2(toluene)2][Au4Hyp4], which possesses an Au4 tetrahedral skeleton with four terminal hypersilyl ligands. Of particular interest is the reduction of the gold. The silicon-silicon bond cleavages and bond metatheses observed at room temperature are remarkable, as seen for instance in the compound [K][Au5(Si(SiMe3)2)6]. These novel and structurally interesting anionic gold silyls are discussed in more detail in this work.

Relevance:

100.00%

Publisher:

Abstract:

The throttling and control of solid-propellant rocket propulsion systems (Solid Rocket Motors) has always been one of the main issues associated with this type of motor. Since there is no direct control whatsoever over the combustion process of the solid grain, the prediction of the internal ballistics has always been the main tool used both to define the optimal motor configuration at the design stage and to analyse any anomalies encountered experimentally. Local variations in the propellant structure, internal defects or heterogeneities in the chamber conditions can give rise to alterations of the local burning rate of the propellant and, consequently, to experimental pressure and thrust profiles that differ from those predicted theoretically. Many of the codes currently in use offer a rather simplified approach to the problem, mostly resorting to semi-empirical correction factors (HUMP factors), without reconstructing the heterogeneity of the propellant performance in a more realistic way. This thesis therefore proposes a new approach to the numerical prediction of the performance of solid-propellant systems, through the development of a new simulation code named ROBOOST (ROcket BOOst Simulation Tool). Drawing on concepts and techniques from computer graphics, this new code is able to reconstruct the surface-regression process of the grain point by point, using a moving triangular mesh. Local variations in the burning rate can thus easily be reproduced, and the internal ballistics is computed by coupling a non-stationary 0D model with a quasi-stationary 1D model. The work was carried out in collaboration with the company Avio Space Division, and the new code has been successfully applied to the Zefiro 9 motor.
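
A minimal sketch of the core geometric idea, surface regression of a triangulated grain surface obtained by displacing mesh vertices along their normals at a locally varying burning rate, is given below. It is only a schematic illustration (a single tetrahedron, uniform time step, invented burning-rate function), not code from ROBOOST.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals of a triangular mesh."""
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        n = np.cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0])
        for i in (i0, i1, i2):
            normals[i] += n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

def regress_surface(vertices, faces, burn_rate, dt):
    """Move every vertex along its outward normal by r(x) * dt."""
    normals = vertex_normals(vertices, faces)
    rates = np.array([burn_rate(v) for v in vertices])[:, None]
    return vertices + rates * dt * normals

# Tiny example: a tetrahedral surface regressing with a burning rate that
# varies linearly along x, mimicking a local propellant heterogeneity.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]   # outward-oriented triangles
rate = lambda p: 5e-3 * (1.0 + 0.2 * p[0])              # m/s, purely illustrative

for step in range(3):
    verts = regress_surface(verts, faces, rate, dt=0.1)
print(verts)
```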

Relevance:

100.00%

Publisher:

Abstract:

Computer-assisted translation (or computer-aided translation, CAT) is a form of language translation in which a human translator uses computer software to facilitate the translation process. Machine translation (MT) is the automated process by which a computerized system produces a translated text or speech from one natural language to another. Both are leading and promising technologies in the translation industry; it therefore seems important that translation students and professional translators become familiar with these relatively new types of technology. When used together, these two types of systems might not only reduce translation time but also lead to further improvements in the field of translation technologies. The dissertation consists of four chapters. The first surveys the chronological development of MT and CAT tools, the emergence of pre-editing, post-editing and controlled language, and the latest frontiers in this sector. The second provides a general overview of the four main CAT tools that are used nowadays and tested herein. The third chapter is dedicated to the experiments conducted in order to analyse and evaluate the performance of the four integrated systems that are the core subject of this dissertation. Finally, the fourth chapter deals with the issue of terminological equivalence in interlinguistic translation. The purpose of this dissertation is not to provide an objective and definitive solution to the complex issues that arise at any time in the field of translation technologies, an aim that is still far from being achieved, but to supply information about the limits and the potential of those instruments which are now essential to any professional translator.

Relevance:

100.00%

Publisher:

Abstract:

When designing metaheuristic optimization methods, there is a trade-off between application range and effectiveness. For large real-world instances of combinatorial optimization problems, out-of-the-box metaheuristics often fail, and optimization methods need to be adapted to the problem at hand. Knowledge about the structure of high-quality solutions can be exploited by introducing a so-called bias into one of the components of the metaheuristic used. These problem-specific adaptations make it possible to increase search performance. This thesis analyzes the characteristics of high-quality solutions for three constrained spanning tree problems: the optimal communication spanning tree problem, the quadratic minimum spanning tree problem and the bounded diameter minimum spanning tree problem. Several relevant tree properties that should be explored when analyzing a constrained spanning tree problem are identified. Based on the insights gained into the structure of high-quality solutions, efficient and robust solution approaches are designed for each of the three problems. Experimental studies analyze the performance of the developed approaches compared with the current state of the art.
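
To make the notion of a "bias" concrete, the sketch below samples random spanning trees with a Kruskal-style construction in which low-weight edges are preferentially (but not deterministically) chosen. This is a generic illustration of biasing one tree-construction component of a metaheuristic, not one of the algorithms developed in the thesis, and the bias strength beta is an invented parameter.

```python
import math
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def biased_spanning_tree(n, edges, beta=2.0, rng=random):
    """Sample a spanning tree of an n-node graph.

    edges: list of (u, v, weight). Edges are drawn one at a time with
    probability proportional to exp(-beta * weight) and accepted if they
    do not close a cycle (union-find check), until n - 1 edges are chosen.
    beta = 0 gives unbiased random trees; large beta approaches the MST.
    """
    parent = list(range(n))
    remaining = list(edges)
    tree = []
    while len(tree) < n - 1 and remaining:
        weights = [math.exp(-beta * w) for (_, _, w) in remaining]
        (u, v, w) = rng.choices(remaining, weights=weights, k=1)[0]
        remaining.remove((u, v, w))
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                      # edge does not close a cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Small example: complete graph on 5 nodes with random weights.
rng = random.Random(1)
nodes = range(5)
edges = [(u, v, round(rng.random(), 2)) for u in nodes for v in nodes if u < v]
print(biased_spanning_tree(5, edges, beta=3.0, rng=rng))
```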

Relevance:

100.00%

Publisher:

Abstract:

A simple dependency of the contact angle θ on velocity or surface tension has been predicted for the wetting and dewetting behavior of simple liquids. According to hydrodynamic theory, this dependency was described by Cox and Voinov as θ ∼ Ca^(1/3) (Ca: capillary number). For more complex liquids such as surfactant solutions, this prediction does not follow directly.

Here I present a rotating-drum setup for studying wetting/dewetting processes of surfactant solutions on the basis of velocity-dependent contact angle measurements. With this new setup I showed that surfactant solutions do not follow the predicted Cox-Voinov relation but show a stronger dependence of the contact angle on surface tension. All surfactants, independent of their charge, showed this deviation from the prediction, so that electrostatic interactions could be excluded as the cause. Instead, I propose the formation of a surface tension gradient close to the three-phase contact line as the main reason for the strong decrease of the contact angle with increasing surfactant concentration. Surface tension gradients are formed not only locally close to the three-phase contact line but also globally along the air-liquid interface, owing to the continuous creation/destruction of the interface by the drum moving out of/into the liquid. By systematically hindering the equilibration routes of the global gradient along the interface and/or through the bulk, I was able to show that the setup geometry is also important for the wetting/dewetting of surfactant solutions. Furthermore, surface properties such as the roughness or chemical homogeneity of the wetted/dewetted substrate influence the wetting/dewetting behavior of the liquid, i.e. the three-phase contact line is pinned differently on rough/smooth or homogeneous/inhomogeneous surfaces. Altogether I showed that the wetting/dewetting of surfactant solutions did not depend on the surfactant type (anionic, cationic or non-ionic) but on the surfactant concentration and strength, the setup geometry, and the surface properties.

Surfactants influence not only the wetting/dewetting behavior of liquids but also the impact behavior of drops on free-standing films or solutions. In a further part of this work, I dealt with the stability of the air cushion between drop and film/solution. To allow coalescence between drop and substrate, the air cushion has to vanish. In the presence of surfactants, the drainage of the air is slowed down owing to a change in the boundary condition from slip to no-slip, i.e. coalescence is suppressed or slowed down in the presence of surfactant.
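
For reference, the Cox-Voinov prediction mentioned above is usually written in the following standard form (a textbook expression, not a formula quoted from the abstract), with η the liquid viscosity, U the contact-line velocity, γ the surface tension, θ_e the equilibrium contact angle and L/λ a macroscopic-to-microscopic cut-off ratio:

\[
\theta_d^{\,3} \;\simeq\; \theta_e^{\,3} \,\pm\, 9\,\mathrm{Ca}\,\ln\frac{L}{\lambda},
\qquad
\mathrm{Ca} = \frac{\eta U}{\gamma},
\]

so that for small equilibrium angles θ_d ∼ Ca^(1/3), the scaling quoted above.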

Relevance:

100.00%

Publisher:

Abstract:

Large-scale simulations and analytical theory have been combined to obtain the nonequilibrium velocity distribution, f(v), of randomly accelerated particles in suspension. The simulations are based on an event-driven algorithm, generalized to include friction. They reveal strongly anomalous but largely universal distributions which are independent of volume fraction and collision processes, suggesting that a one-particle model should capture all the essential features. We have formulated this one-particle model and solved it analytically in the limit of strong damping, where we find that f(v) decays as 1/v over multiple decades, eventually crossing over to a Gaussian decay for the largest velocities. Many-particle simulations and the numerical solution of the one-particle model agree for all values of the damping.
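
Purely as an illustration of the one-particle picture (the event-driven many-particle algorithm and the specific driving of the paper are not reproduced here), the sketch below integrates a damped particle that receives random velocity kicks at random times and histograms the resulting speeds; the kick statistics and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 5.0          # damping rate (strong-damping regime)
kick_rate = 1.0      # mean number of random accelerations per unit time
kick_size = 1.0      # typical magnitude of a velocity kick
dt = 1e-3
n_steps = 500_000

v = 0.0
speeds = np.empty(n_steps)
for i in range(n_steps):
    # Deterministic damping between kicks.
    v -= gamma * v * dt
    # Poisson-distributed random "accelerations" (kicks).
    if rng.random() < kick_rate * dt:
        v += kick_size * rng.normal()
    speeds[i] = abs(v)

# Log-binned histogram of |v|: in this toy model the distribution develops a
# broad power-law-like region before a faster cutoff at the largest speeds.
bins = np.logspace(-4, 1, 40)
hist, edges = np.histogram(speeds, bins=bins, density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    print(f"{lo:9.2e} - {hi:9.2e}: {h:10.3e}")
```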

Relevance:

100.00%

Publisher:

Abstract:

Linear programs, or LPs, are often used in optimization problems, such as improving manufacturing efficiency or maximizing the yield from limited resources. The most common method for solving LPs is the simplex method, which will yield a solution, if one exists, but over the real numbers. From a purely numerical standpoint it will be an optimal solution, but quite often we desire an optimal integer solution. A linear program in which the variables are also constrained to be integers is called an integer linear program, or ILP. The focus of this report is to present a parallel algorithm for solving ILPs. We discuss a serial algorithm using a breadth-first branch-and-bound search to check the feasible solution space, and then extend it into a parallel algorithm using a client-server model. In the parallel mode, the search may not be truly breadth-first, depending on the solution time for each node in the solution tree. Our search takes advantage of pruning, often resulting in super-linear improvements in solution time. Finally, we present results from sample ILPs, describe a few modifications to enhance the algorithm and improve solution time, and offer suggestions for future work.
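
A compact serial sketch of the breadth-first branch-and-bound idea described above is given below: it relaxes the integrality constraints, solves each LP relaxation with scipy.optimize.linprog, branches on a fractional variable by tightening its bounds, and prunes nodes whose relaxation cannot beat the incumbent. It illustrates the general technique only and is not the report's implementation (which is parallel and client-server based); the instance at the bottom is an invented toy example.

```python
from collections import deque
import math

import numpy as np
from scipy.optimize import linprog

def solve_ilp(c, A_ub, b_ub, bounds, tol=1e-6):
    """Minimize c @ x s.t. A_ub @ x <= b_ub, bounds, x integer (serial BFS B&B)."""
    best_val, best_x = math.inf, None
    queue = deque([bounds])                      # each node is a list of (lo, hi) bounds
    while queue:
        node_bounds = queue.popleft()            # breadth-first order
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds, method="highs")
        if not res.success or res.fun >= best_val - tol:
            continue                             # infeasible or pruned by bound
        # Find the first variable whose relaxed value is fractional.
        frac = [i for i, xi in enumerate(res.x) if abs(xi - round(xi)) > tol]
        if not frac:                             # integral solution: new incumbent
            best_val, best_x = res.fun, np.round(res.x)
            continue
        i = frac[0]
        lo, hi = node_bounds[i]
        # Branch: x_i <= floor(x_i*)  and  x_i >= ceil(x_i*)
        left, right = list(node_bounds), list(node_bounds)
        left[i] = (lo, math.floor(res.x[i]))
        right[i] = (math.ceil(res.x[i]), hi)
        queue.extend([left, right])
    return best_val, best_x

# Toy instance: maximize 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6,  x, y >= 0 integer.
val, x = solve_ilp(c=[-5, -4], A_ub=[[6, 4], [1, 2]], b_ub=[24, 6],
                   bounds=[(0, None), (0, None)])
print("optimum:", -val, "at", x)
```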