940 results for linear approximation method
Abstract:
Relaxor ferroelectrics are of scientific interest both because of their potential technical applications and, from the standpoint of fundamental research, as an example of disordered systems. Despite numerous experimental studies, however, the microscopic origins of their properties remain unresolved. In this work, the relaxor ferroelectric lead magnesium niobate-lead titanate (PMN-10PT) was investigated by means of linear and nonlinear dielectric spectroscopy. By excitation with strong alternating electric fields, the method of nonresonant dielectric hole burning made it possible to select, frequency-selectively, individual spectral regions from the broadened relaxation spectrum and to follow their re-equilibration separately. The experimental results showed that a long-lived dynamic heterogeneity of the dipolar reorientation exists. Owing to their pronounced nonergodic behavior, relaxor ferroelectrics exhibit strong aging effects. The investigation of the aging behavior of the dielectric susceptibility showed that a memory of the configuration adopted at an aging temperature persists, provided the temperature is lowered or raised by only a few degrees after incomplete isothermal aging. In addition, the polarization induced by stochastic dielectric excitation with electric fields that represented white noise to a very good approximation was investigated. By forming the cross-correlation function between field and polarization, the impulse response function of the system could be computed. The experimental results on the relaxor ferroelectric PMN-10PT can be explained very well by a model of a disordered ferroelectric whose domain walls, owing to disorder, are pinned at so-called pinning centers.
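As a hedged illustration of the cross-correlation step described above (not the author's code; the white-noise input, sampling length, and exponential response below are invented for the sketch):

```python
# Sketch: recovering an impulse response from white-noise excitation via
# cross-correlation. For a zero-mean white input x with variance sigma^2
# driving a linear system y = h * x, the cross-correlation
# R_xy(k) = E[x(n) y(n+k)] satisfies R_xy(k) = sigma^2 h(k).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)                  # white-noise "field" (synthetic)

h_true = np.exp(-np.arange(64) / 10.0)      # invented relaxation-like response
y = np.convolve(x, h_true)[:n]              # "polarization" = linear response

lags = len(h_true)
h_est = np.array([np.dot(x[:n - k], y[k:]) for k in range(lags)]) / (n * x.var())

print(np.max(np.abs(h_est - h_true)))       # small -> good reconstruction
```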
Abstract:
With increasing attention to the assessment of safety in existing Dutch bridges and viaducts, the aim of the present thesis is to study, through finite element modeling and continuous comparison with experimental results, the in-service response of elements that compose these infrastructures, i.e. reinforced concrete slabs subjected to concentrated loads. These elements are characterized by shear-dominated behavior and failure, whose modeling is computationally challenging because of their brittle behavior combined with various 3D effects. The thesis focuses on Sequentially Linear Analysis (SLA), a finite element solution technique that is an alternative to classical nonlinear analyses based on incremental and iterative approaches. The advantage of SLA is that it avoids the well-known convergence problems of nonlinear analyses by directly specifying a damage increment, in the form of a reduction of stiffness and strength in a particular finite element, instead of a load or displacement increment. The comparison between the results of two laboratory tests on reinforced concrete slabs and those obtained by SLA demonstrated in both cases the robustness of the method, in terms of the accuracy of the load-displacement diagrams, the distribution of stress and strain, and the representation of the cracking pattern and the shear failure mechanisms. Variations of the most important model parameters were performed, pointing out the strong influence of the fracture energy and of the chosen shear retention model on the solutions. Finally, a comparison between SLA and the nonlinear Newton-Raphson method was carried out, showing the greater reliability of SLA in the evaluation of ultimate loads and displacements, together with a significant reduction in computational time.
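As a hedged, minimal illustration of the sequentially linear idea (a toy parallel-spring system, not the thesis's 3D concrete model; all stiffnesses, strengths, and the saw-tooth factor are invented), each cycle runs one linear analysis, scales the load until one element reaches its strength, and then damages that element:

```python
# Toy Sequentially Linear Analysis: parallel springs under a common
# displacement. Each "saw-tooth" cycle: (1) linear solve for a unit load,
# (2) scale the load so exactly one spring reaches its current strength,
# (3) damage that spring by reducing stiffness and strength.
import numpy as np

k = np.array([4.0, 3.0, 2.0])        # spring stiffnesses (invented)
ft = np.array([2.0, 1.8, 1.5])       # spring strengths (invented)
reduction = 0.5                      # saw-tooth stiffness/strength factor

history = []                         # (load, displacement) curve
for step in range(12):
    K = k.sum()
    if K <= 1e-9:
        break
    u_unit = 1.0 / K                 # displacement under a unit load
    f_unit = k * u_unit              # spring forces under a unit load
    lam = np.min(ft / f_unit)        # critical load multiplier
    crit = np.argmin(ft / f_unit)    # spring that fails first
    history.append((lam, lam * u_unit))
    k[crit] *= reduction             # damage increment instead of
    ft[crit] *= reduction            # a load/displacement increment

for load, disp in history:
    print(f"load={load:.3f}  displacement={disp:.3f}")
```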
Abstract:
This thesis investigates Decomposition and Reformulation as a way to solve integer linear programming problems. This method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, is applicable as a black-box solution algorithm, and works as a complement and alternative to the usual solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP, several decompositions can be defined depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation), otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in the solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
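As a hedged sketch of the column-generation loop described above (a textbook cutting-stock instance, not the thesis's generic framework; the roll width, piece sizes, and demands are invented, and the dual extraction assumes SciPy's HiGHS interface, `res.ineqlin.marginals`):

```python
# Column generation for a toy cutting-stock problem: the master LP selects
# how often each cutting pattern is used; the pricing problem (a knapsack)
# searches for a new pattern with negative reduced cost.
import numpy as np
from scipy.optimize import linprog

W = 10                                  # stock roll width (invented)
sizes = [3, 4, 5]                       # piece widths (invented)
demand = np.array([9.0, 7.0, 5.0])      # required pieces (invented)

# Initial columns: one pattern per piece type, packed greedily.
patterns = [np.eye(len(sizes))[i] * (W // s) for i, s in enumerate(sizes)]

def price(duals):
    # Unbounded knapsack via dynamic programming: find the pattern a >= 0
    # maximizing duals . a subject to sizes . a <= W.
    best = [0.0] * (W + 1)
    take = [None] * (W + 1)          # last piece added at capacity c
    for c in range(1, W + 1):
        best[c] = best[c - 1]        # option: leave one width unit unused
        for i, s in enumerate(sizes):
            if s <= c and best[c - s] + duals[i] > best[c]:
                best[c] = best[c - s] + duals[i]
                take[c] = i
    pattern, c = np.zeros(len(sizes)), W
    while c > 0:
        if take[c] is None:
            c -= 1
        else:
            pattern[take[c]] += 1
            c -= sizes[take[c]]
    return pattern, best[W]

for _ in range(50):
    A = np.column_stack(patterns)
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  method="highs")
    duals = -res.ineqlin.marginals   # duals of the covering rows (>=)
    pattern, value = price(duals)
    if value <= 1.0 + 1e-9:          # reduced cost 1 - value >= 0: stop
        break
    patterns.append(pattern)

print("LP bound on rolls needed:", res.fun)
```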
Abstract:
In the present dissertation, a simple concept serves to systematize the search for new materials with high spin polarization. This concept is based on two semi-empirical models. First, the Slater-Pauling rule can be used to estimate magnetic moments; this model is supported by electronic-structure calculations. The second model emerges in particular for the Co2YZ Heusler compounds when their magnetic properties are compared: for these compounds, an apparently linear dependence of the Curie temperature appears when it is plotted as a function of the magnetic moment. Motivated by these models, the Heusler compound Co2FeSi was re-examined in detail with respect to its geometric and magnetic structure. The methods employed were powder X-ray diffraction, EXAFS spectroscopy, X-ray absorption and Mössbauer spectroscopy, high- and low-temperature magnetometry, XMCD and DSC. The measurements showed that Co2FeSi is the material with the highest magnetic moment (6 μ_B) and the highest Curie temperature (1100 K) both in the class of Heusler compounds and in the class of half-metallic ferromagnets. In addition, all experimental results are supported by detailed electronic-structure calculations. The same concepts were used to predict the properties of the Heusler compound Co2Cr1-xFexAl. The electronic structure and the spectroscopic properties were calculated with the fully relativistic Korringa-Kohn-Rostoker method, using coherent potential approximations to account for the random distribution of Cr and Fe atoms as well as random disorder. Magnetic effects were included through spin-dependent potentials within the local spin-density approximation. The structural and chemical properties of the quaternary Heusler compound Co2Cr1-xFexAl were measured on powder and bulk samples. Long-range order was investigated by powder X-ray diffraction and neutron diffraction, while short-range order was resolved by EXAFS spectroscopy. The magnetic structure of powder and bulk samples was measured by 57Fe Mössbauer spectroscopy. The chemical composition was analyzed by XPS. The results of these methods were compared in order to gain insight into the differences between surface and bulk properties, as well as into the occurrence of disorder in such compounds. Additionally, XMCD was measured at the L3,2 edges of Co, Fe and Cr to determine the element-specific magnetic moments. Calculations and measurements both show an increase of the magnetic moment with increasing Fe content. Resonant soft X-ray photoemission and high-energy hard X-ray photoemission were used to investigate the density of occupied states in Co2Cr0.6Fe0.4Al. This work also introduces a further, new compound from the class of Heusler compounds: Co2CrIn is L21 ordered, as powder X-ray diffraction measurements show. Its magnetic properties were determined by magnetometry. Co2CrIn is a soft magnet with a saturation magnetization of 1.2 μ_B at 5 K. In contrast to the Co2YZ Heusler compounds mentioned above, Co2CrIn is not a half-metallic ferromagnet.
Furthermore, this work presents a rule for predicting half-metallic completely compensated ferrimagnets in the class of Heusler compounds. This concept results from combining the Slater-Pauling rule with the Kübler rule, which states that Mn on the Y position tends toward a highly localized magnetic moment. Using this new concept, half-metallic completely compensated ferrimagnetism is predicted for several candidates in the class of Heusler compounds. The applicability of the concept is confirmed by electronic-structure calculations.
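As a hedged illustration of the Slater-Pauling estimate invoked above (the m = N_v − 24 form commonly used for full Heusler compounds X2YZ; the snippet is illustrative, not the thesis's machinery, and the valence counts are ordinary group numbers):

```python
# Slater-Pauling estimate for full Heusler compounds X2YZ:
# magnetic moment per formula unit m = N_v - 24 (in Bohr magnetons),
# where N_v is the total number of valence electrons.
VALENCE = {"Co": 9, "Fe": 8, "Cr": 6, "Mn": 7, "Si": 4, "Al": 3,
           "In": 3, "Ga": 3}

def slater_pauling_moment(*elements):
    """Moment in mu_B for a full Heusler formula unit, m = N_v - 24."""
    return sum(VALENCE[e] for e in elements) - 24

# Co2FeSi: N_v = 2*9 + 8 + 4 = 30, so m = 6 mu_B, matching the measured
# 6 mu_B quoted above.
print(slater_pauling_moment("Co", "Co", "Fe", "Si"))   # -> 6

# Complete compensation (m = 0) requires exactly 24 valence electrons,
# e.g. the 24-electron composition Cr2CoGa (illustrative check):
print(slater_pauling_moment("Cr", "Cr", "Co", "Ga"))   # -> 0
```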
Abstract:
In this work, electromagnetic pion production was investigated in a one-loop calculation up to and including order four within manifestly Lorentz-invariant chiral perturbation theory, assuming isospin symmetry of the strong interaction. For this purpose, algorithms for calculating the pion-production amplitude were developed on the basis of the Mathematica package FeynCalc. Up to and including order four, a total of 105 Feynman diagrams contribute, which can be divided into 20 tree diagrams and 85 loop diagrams. Of the 20 tree diagrams, 16 are classified as pole terms and four as contact graphs; among the loop diagrams, 50 contribute from third order on and 35 from fourth order on. In the one-photon-exchange approximation, the pion-production amplitude can be parametrized as a product of the polarization vector of the (virtual) photon and the transition current matrix element, where the latter contains all strong-interaction dependences and is thus where chiral perturbation theory enters. The polarization vector, in contrast, depends on the leptonic vertex and the photon propagator and is known from QED. Furthermore, the transition current matrix element can be decomposed into six gauge-invariant amplitudes, each of which, within isospin symmetry, can in turn be decomposed into three isospin amplitudes. Linear combinations of these isospin amplitudes finally allow the physical amplitudes to be described. The one-loop integrals appearing in this calculation were evaluated numerically with the program LoopTools. Tensorial integrals were first decomposed following the method of Passarino and Veltman. Since the results obtained in this way do not, in general, satisfy the chiral power counting, the corresponding renormalization was carried out using the reformulated infrared regularization; to this end, a procedure was developed that determines the subtraction terms automatically. The isospin amplitudes finally obtained were implemented in the program MAID. As a test (with results up to order three), the s-wave multipoles E_{0+} and L_{0+} were computed in the threshold region with this program. The results were compared both with experimental data and with the results of the "classical" MAID, showing generally good agreement within the uncertainties.
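As a hedged restatement of the one-photon-exchange factorization described above (standard electroproduction conventions; the symbols are mine and not necessarily the thesis's notation: k and k' are the incoming and outgoing lepton momenta and q = k − k' the virtual-photon momentum):

```latex
% Amplitude in the one-photon-exchange approximation: the leptonic side
% (polarization vector of the virtual photon, known from QED) multiplies
% the hadronic transition current matrix element M^mu, which carries all
% strong-interaction dependence.
\mathcal{M} = \varepsilon_\mu \, \mathcal{M}^\mu ,
\qquad
\varepsilon_\mu = \frac{e}{q^2}\, \bar{u}(k')\, \gamma_\mu\, u(k) .
```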
Abstract:
The Factorization Method localizes inclusions inside a body from measurements on its surface. Without a priori knowing the physical parameters inside the inclusions, the points belonging to them can be characterized using the range of an auxiliary operator. The method relies on a range characterization that relates the range of the auxiliary operator to the measurements and is only known for very particular applications. In this work we develop a general framework for the method by considering symmetric and coercive operators between abstract Hilbert spaces. We show that the important range characterization holds if the difference between the inclusions and the background medium satisfies a coerciveness condition which can immediately be translated into a condition on the coefficients of a given real elliptic problem. We demonstrate how several known applications of the Factorization Method are covered by our general results and deduce the range characterization for a new example in linear elasticity.
Abstract:
In various imaging problems the task is to use the Cauchy data of the solutions to an elliptic boundary value problem to reconstruct the coefficients of the corresponding partial differential equation. Often the examined object has known background properties but is contaminated by inhomogeneities that cause perturbations of the coefficient functions. The factorization method of Kirsch provides a tool for locating such inclusions. In this paper, the factorization technique is studied in the framework of coercive elliptic partial differential equations of divergence type. It has previously been demonstrated that the factorization algorithm can reconstruct the support of a strictly positive (or negative) definite perturbation of the leading order coefficient, or, if that remains unperturbed, the support of a strictly positive (or negative) perturbation of the zeroth order coefficient. In this work we show that these two types of inhomogeneities can, in fact, be located simultaneously. Unlike in the earlier articles on the factorization method, our inclusions may have disconnected complements, and we also weaken some other a priori assumptions of the method. Our theoretical findings are complemented by two-dimensional numerical experiments that are presented in the framework of the diffusion approximation of optical tomography.
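As a hedged numerical sketch of the range test underlying the method (the Picard-series form of the criterion; the symmetric operator, its spectrum, and the test vectors below are synthetic stand-ins, not the paper's optical-tomography setup):

```python
# Factorization-method indicator via the Picard criterion: a point z lies
# inside the inclusion iff the test vector g_z is in the range of |M|^(1/2),
# where M is the (symmetric) difference between perturbed and background
# boundary measurement operators. Discretely, the series
#   sum_k |<g_z, v_k>|^2 / |lambda_k|
# over eigenpairs (lambda_k, v_k) of M stays small inside, blows up outside.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic symmetric "measurement difference" with decaying spectrum.
n = 60
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
lam = 0.9 ** np.arange(n)                 # invented eigenvalue decay
M = Q @ np.diag(lam) @ Q.T

eigvals, eigvecs = np.linalg.eigh(M)

def indicator(g):
    """Larger values suggest g is in ran(|M|^(1/2)), i.e. z inside."""
    coeffs = eigvecs.T @ g
    picard = np.sum(coeffs**2 / np.abs(eigvals))
    return 1.0 / picard

g_inside = M @ rng.standard_normal(n)     # by construction in the range
g_outside = rng.standard_normal(n)        # generic vector: series is large
print(indicator(g_inside), indicator(g_outside))
```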
Abstract:
The assessment of safety in existing bridges and viaducts led the Ministry of Public Works of the Netherlands to finance a specific campaign aimed at the study of the response of the elements of these infrastructures. This activity is therefore focused on investigating the behaviour of reinforced concrete slabs under concentrated loads, adopting finite element modeling and comparison with experimental results. These elements are characterized by shear-dominated behaviour and failure, whose modeling is computationally challenging because of the brittle behavior combined with three-dimensional effects. The numerical modeling of the failure is studied through Sequentially Linear Analysis (SLA), a finite element method that is an alternative to traditional incremental and iterative approaches. The comparison between the two numerical techniques, carried out on one of the experimental tests executed on reinforced concrete slabs, represents one of the first such comparisons in a three-dimensional setting. The advantage of SLA is that it avoids the well-known convergence problems of typical nonlinear analyses by directly specifying a damage increment, in the form of a reduction of stiffness and strength in a particular finite element, instead of a load or displacement increment on the whole structure. For the first time, particular attention has been paid to specific aspects of the slabs, such as accurate modeling of the constraints and the sensitivity of the solution to the mesh density. This detailed analysis of the main parameters proved a strong influence of the tensile fracture energy, the mesh density and the chosen model on the solution, in terms of the force-displacement diagram, the distribution of crack patterns and the shear failure mode. SLA showed great potential, but it requires further development in two modeling aspects, namely load conditions (constant and proportional loads) and the softening behaviour of brittle materials (like concrete) in three dimensions, in order to widen its horizons in these new contexts of study.
Abstract:
A two-dimensional model to analyze the distribution of magnetic fields in the airgap of PM electrical machines is studied. A numerical algorithm for nonlinear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry, whose basic element can be duplicated, any topology of winding distribution can be modeled. In comparison with a FEA, it permits a reduction in computing time and allows the parameter values to be changed directly in a user interface, without re-designing the model. Output torque and radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for the calculation of radial forces in multiphase bearingless surface-mounted permanent magnet synchronous motors (SPMSM) is presented. It predicts the amplitude and direction of the force as functions of the torque current, the levitation current and the rotor position. It is based on the space-vector method, which also allows the machine to be analyzed during transients. The calculations are carried out by expanding the analytical functions in Fourier series, taking all possible interactions between stator and rotor mmf harmonic components into account and, since the electrical and geometrical quantities of the machine are parametrized, allowing their effects to be analyzed. The model is implemented in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model, where one end of the motor shaft is constrained to simulate the presence of a mechanical bearing, while the other is free, supported only by the radial forces developed by the interacting magnetic fields, so as to realize a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
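As a hedged aside on the space-vector method named above (only the standard three-phase space-vector transform, with invented currents; the thesis's force model itself is not reproduced here):

```python
# Standard three-phase space vector: i = (2/3)(i_a + a*i_b + a^2*i_c),
# with a = exp(j*2*pi/3). The complex vector encodes amplitude and
# direction in the machine cross-section, which is what makes the method
# convenient for tracking radial-force direction during transients.
import numpy as np

a = np.exp(1j * 2 * np.pi / 3)

def space_vector(ia, ib, ic):
    return (2.0 / 3.0) * (ia + a * ib + a**2 * ic)

# Balanced sinusoidal currents at one instant (invented values):
theta = 0.4                       # electrical angle, rad
I = 5.0                           # amplitude, A
ia = I * np.cos(theta)
ib = I * np.cos(theta - 2 * np.pi / 3)
ic = I * np.cos(theta + 2 * np.pi / 3)

v = space_vector(ia, ib, ic)
print(abs(v), np.angle(v))        # -> amplitude I, direction theta
```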
Abstract:
The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, for tight control of systematic uncertainties.

To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.

To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations. The results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E that was seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of the slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication of a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
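As a hedged illustration of the standard Rosenbluth technique mentioned above (synthetic points at a single invented Q^2; this is not the thesis's fitting code):

```python
# Rosenbluth separation: at fixed Q^2 the reduced cross section is linear
# in the virtual-photon polarization epsilon,
#   sigma_red = tau * G_M^2 + epsilon * G_E^2,   tau = Q^2 / (4 M_p^2),
# so a straight-line fit gives G_E^2 (slope) and tau*G_M^2 (intercept).
import numpy as np

Mp = 0.938                       # proton mass, GeV
Q2 = 0.3                         # (GeV/c)^2, invented working point
tau = Q2 / (4 * Mp**2)

GE_true, GM_true = 0.75, 2.05    # invented "true" form factors
eps = np.linspace(0.2, 0.95, 8)
rng = np.random.default_rng(2)
sigma_red = tau * GM_true**2 + eps * GE_true**2
sigma_red *= 1 + 0.002 * rng.standard_normal(eps.size)   # 0.2% noise

slope, intercept = np.polyfit(eps, sigma_red, 1)
print("G_E =", np.sqrt(slope), " G_M =", np.sqrt(intercept / tau))
```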
Abstract:
The seismic behaviour of one-storey asymmetric structures has been studied since the 1970s by a number of research studies, which identified the coupled nature of the translational-torsional response of this class of systems, leading to severe displacement magnifications at the perimeter frames and therefore to a significant increase of the local peak seismic demand on the structural elements with respect to that of equivalent non-eccentric systems (Kan and Chopra 1987). These studies identified the fundamental parameters (such as the fundamental period T_L, the normalized eccentricity e and the torsional-to-lateral frequency ratio Ω_θ) governing the torsional behavior of in-plan asymmetric structures, as well as trends of behavior. It has been clearly recognized that asymmetric structures characterized by Ω_θ > 1, referred to as torsionally stiff systems, behave quite differently from structures with Ω_θ < 1, referred to as torsionally flexible systems. Previous research works by some of the authors proposed a simple closed-form estimate of the maximum torsional response of one-storey elastic systems (Trombetti et al. 2005 and Palermo et al. 2010), leading to the so-called "Alpha method" for the evaluation of the displacement magnification factors at the corner sides. The present paper provides an upgrade of the "Alpha method" that removes the assumption of linear elastic response of the system. The main objective is to evaluate how the excursion of the structural elements into the inelastic field (due to the reaching of the yield strength) affects the displacement demand of one-storey in-plan asymmetric structures. The system proposed by Chopra and Goel in 2007, which is claimed to be able to capture the main features of the nonlinear response of in-plan asymmetric systems, is used to perform a large parametric analysis varying all the fundamental parameters of the system, including the inelastic demand, by varying the force reduction factor from 2 to 5. Magnification factors for different force reduction factors are proposed, and comparisons with the results obtained from linear analysis are provided.
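As a hedged sketch of the governing parameter Ω_θ for the one-storey model discussed above (a textbook two-DOF translation-rotation idealization with invented properties, not the paper's parametric study; the stiffness matrix form is one common convention for a system with static eccentricity e):

```python
# One-storey asymmetric system: coupled lateral/torsional equations with
# mass m, rotary inertia m*r^2, lateral stiffness k, torsional stiffness
# k_theta (about the center of rigidity), and static eccentricity e.
# About the center of mass:  M = diag(m, m r^2),
#                            K = [[k, k e], [k e, k_theta + k e^2]].
# The uncoupled frequency ratio Omega_theta = omega_theta / omega_lateral
# separates torsionally stiff (>1) from torsionally flexible (<1) systems.
import numpy as np
from scipy.linalg import eigh

m, r = 100.0, 4.0          # mass [t], radius of gyration [m] (invented)
k, k_theta = 5.0e4, 1.2e6  # lateral [kN/m], torsional [kNm/rad] (invented)
e = 0.8                    # static eccentricity [m] (invented)

M = np.diag([m, m * r**2])
K = np.array([[k, k * e],
              [k * e, k_theta + k * e**2]])

w2 = eigh(K, M, eigvals_only=True)       # coupled squared frequencies
omega_lat = np.sqrt(k / m)
omega_tor = np.sqrt(k_theta / (m * r**2))
print("Omega_theta =", omega_tor / omega_lat)   # > 1: torsionally stiff
print("coupled frequencies [rad/s]:", np.sqrt(w2))
```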
Abstract:
The purpose of this thesis is the atomic-scale simulation of the crystal-chemical and physical (phonon, energetic) properties of some strategically important minerals for structural ceramics, biomedical and petrological applications. These properties affect the thermodynamic stability and rule the mineral-environment interface phenomena, with important economic, (bio)technological, petrological and environmental implications. The minerals of interest belong to the family of phyllosilicates (talc, pyrophyllite and muscovite) and apatite (OHAp), chosen for their importance in industrial and biomedical applications (structural ceramics) and petrophysics. In this thesis work we have applied quantum mechanical methods, formulas and knowledge to the resolution of mineralogical problems ("Quantum Mineralogy"). The chosen theoretical approach is Density Functional Theory (DFT) with the hybrid functional B3LYP, together with periodic boundary conditions to limit the portion of the mineral under analysis to the crystallographic cell. The crystalline orbitals were simulated by linear combinations of Gaussian-type functions (GTO). The dispersive forces, which are important for the structural determination of phyllosilicates and not properly considered in the pure DFT method, have been included by means of a semi-empirical correction. The phonon and mechanical properties were also calculated. The equation of state, both in athermal conditions and over a wide temperature range, has been obtained by means of variations in the volume of the cell and the quasi-harmonic approximation. Some thermochemical properties of the minerals (isochoric and isobaric heat capacity) were calculated because of their considerable practical importance. For the first time, three-dimensional charts of these properties at different pressures and temperatures are provided. Hydroxylapatite has been studied from the standpoint of its structural and phonon properties because of its biotechnological role; in fact, biological apatite represents the inorganic phase of vertebrate hard tissues. Numerous carbonated (hydroxyl)apatite structures were modelled by QM to cover the broadest spectrum of possible biological structural variations for bioceramics applications.
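As a hedged illustration of the equation-of-state step described above (a third-order Birch-Murnaghan fit to synthetic energy-volume points; the thesis does not specify its EOS form here, so this is a common choice, not necessarily the one used, and all numbers are invented):

```python
# Third-order Birch-Murnaghan equation of state,
#   E(V) = E0 + (9 V0 B0 / 16) * [ x^3 B0' + x^2 (6 - 4 (V0/V)^(2/3)) ],
#   with x = (V0/V)^(2/3) - 1,
# fitted to energy-volume points such as those produced by varying the
# cell volume in athermal DFT calculations.
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    eta = (V0 / V) ** (2.0 / 3.0)
    x = eta - 1.0
    return E0 + 9.0 * V0 * B0 / 16.0 * (x**3 * Bp + x**2 * (6.0 - 4.0 * eta))

# Synthetic E(V) data (invented parameters, small noise).
rng = np.random.default_rng(3)
V = np.linspace(120.0, 180.0, 15)                   # Angstrom^3
E = birch_murnaghan(V, -250.0, 150.0, 0.5, 4.2)     # B0 in eV/Angstrom^3
E += 1e-4 * rng.standard_normal(V.size)

popt, _ = curve_fit(birch_murnaghan, V, E, p0=(-250.0, 150.0, 0.4, 4.0))
E0, V0, B0, Bp = popt
print(f"V0={V0:.2f} A^3  B0={B0 * 160.2177:.1f} GPa  B0'={Bp:.2f}")
```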
Abstract:
In this thesis we develop further the functional renormalization group (RG) approach to quantum field theory (QFT) based on the effective average action (EAA) and on the exact flow equation that it satisfies. The EAA is a generalization of the standard effective action that interpolates smoothly between the bare action for k → ∞ and the standard effective action for k → 0. In this way, the problem of performing the functional integral is converted into the problem of integrating the exact flow of the EAA from the UV to the IR. The EAA formalism deals naturally with several different aspects of a QFT. One aspect is related to the discovery of non-Gaussian fixed points of the RG flow that can be used to construct continuum limits. In particular, the EAA framework is a useful setting to search for asymptotically safe theories, i.e. theories valid up to arbitrarily high energies. A second aspect in which the EAA reveals its usefulness is non-perturbative calculations: the exact flow that it satisfies is a valuable starting point for devising new approximation schemes. In the first part of this thesis we review and extend the formalism; in particular, we derive the exact RG flow equation for the EAA and the related hierarchy of coupled flow equations for the proper vertices. We show how standard perturbation theory emerges as a particular way to iteratively solve the flow equation if the starting point is the bare action. Next, we explore both technical and conceptual issues by means of three different applications of the formalism: to QED, to general non-linear sigma models (NLσM) and to matter fields on curved spacetimes. In the main part of this thesis we construct the EAA for non-abelian gauge theories and for quantum Einstein gravity (QEG), using the background field method to implement the coarse-graining procedure in a gauge-invariant way. We propose a new truncation scheme where the EAA is expanded in powers of the curvature or field strength. Crucial to the practical use of this expansion is the development of new techniques to manage functional traces, such as the algorithm proposed in this thesis, which makes it possible to project the flow of all terms in the EAA that are analytic in the fields. As an application we show how the low-energy effective action for quantum gravity emerges as the result of integrating the RG flow. In any treatment of theories with local symmetries that introduces a reference scale, the question of preserving gauge invariance along the flow becomes predominant. In the EAA framework this problem is dealt with by the use of the background field formalism, at the cost of enlarging the theory space where the EAA lives to the space of functionals of both fluctuation and background fields. In this thesis we study how the identities dictated by the symmetries are modified by the introduction of the cutoff, and we study so-called bimetric truncations of the EAA that contain both fluctuation and background couplings. In particular, we confirm the existence of a non-Gaussian fixed point for QEG, which is at the heart of the Asymptotic Safety scenario in quantum gravity, in the enlarged bimetric theory space where the running of the cosmological constant and of Newton's constant is influenced by fluctuation couplings.
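As a hedged restatement of the exact flow equation for the EAA invoked above (the standard Wetterich form; the notation may differ from the thesis's):

```latex
% Exact functional RG flow of the effective average action Gamma_k:
% STr is a supertrace over field indices and momenta, Gamma_k^(2) is the
% second functional derivative, R_k the IR regulator, and t = ln k.
\partial_t \Gamma_k
  = \tfrac{1}{2}\, \mathrm{STr}\!\left[
      \bigl(\Gamma_k^{(2)} + R_k\bigr)^{-1} \, \partial_t R_k
    \right].
```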
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is, in some sense, the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods, we obtain significant improvements in the running time, especially on the large instances.
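As a hedged sketch of the "exact arithmetic only at critical places" idea (verifying a floating-point candidate point exactly with rationals; a toy system, not the thesis's algorithm):

```python
# Certify feasibility of A x <= b: take a candidate x from a floating-point
# LP solve, then verify the constraints in exact rational arithmetic so
# rounding errors cannot produce a wrong "feasible" answer.
from fractions import Fraction

A = [[2, 1], [-1, 3], [0, -1]]           # toy integer data
b = [10, 6, 0]

x_float = [2.5, 1.25]                    # candidate from a float LP solve

def certify_feasible(A, b, x):
    # Fraction(2.5) is exact: binary floats convert to rationals losslessly.
    xq = [Fraction(v) for v in x]
    return all(
        sum(Fraction(aij) * xj for aij, xj in zip(row, xq)) <= Fraction(bi)
        for row, bi in zip(A, b)
    )

print(certify_feasible(A, b, x_float))   # True -> provably feasible
```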
Abstract:
In technical design processes in the automotive industry, digital prototypes rapidly gain importance because they allow for the detection of design errors in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With the explicit construction of the swept volume, an engineer gets evidence of how the shape of components that come too close has to be modified.

In this thesis a concept for the approximation of the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation is conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. In this work we will show that the one-sided Hausdorff distance is the adequate measure for the error of the approximation when the intended usage is clearance checks, continuous collision detection and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh that approximates the outer boundary of a swept volume. Both algorithms are two-phased: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness.

The benchmarks for our tests include, among others, real-world scenarios from the automotive industry.

Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize meshes resulting from our implementations.
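As a hedged illustration of the error measure adopted above (one-sided Hausdorff distance between sampled surfaces via a k-d tree; the point sets are invented, and real meshes would be sampled far more carefully):

```python
# One-sided Hausdorff distance d(A -> B) = max_{a in A} min_{b in B} |a - b|:
# how far the approximation A can stray from the true boundary B. Note the
# asymmetry -- d(A -> B) and d(B -> A) generally differ.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)

# Invented point samples of two surfaces: a sphere and a slightly larger one.
def sphere_points(radius, n=5000):
    p = rng.standard_normal((n, 3))
    return radius * p / np.linalg.norm(p, axis=1, keepdims=True)

B = sphere_points(1.00)          # "exact" swept-volume boundary
A = sphere_points(1.05)          # conservative approximation, slightly outside

d_AB = cKDTree(B).query(A)[0].max()            # one-sided distance A -> B
print("one-sided Hausdorff d(A->B) ~", d_AB)   # ~0.05 for these spheres
```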