935 results for Constrained Local Models, Non-rigid Face Alignment, Active Appearance Models
Abstract:
The study of dissolution and growth processes at solid-liquid interfaces under non-hydrostatic stress is essential for understanding deformation processes taking place in the Earth. Among these processes, pressure solution is one of the most important ductile deformation mechanisms, from diagenesis up to low- to medium-grade metamorphic conditions. So far, however, little is known about which mechanical, physical, or chemical potential-energy gradients drive pressure solution. It is generally assumed that pressure solution is driven either by differences in crystal-plastic strain energy or by differences in normal stress at grain boundaries. Differences in elastic strain energy, by contrast, are considered too small to make a significant contribution and are therefore neglected as possible driving forces for pressure solution. On the other hand, recent experimental and theoretical studies have shown that elastic strain can in fact strongly influence the dissolution and growth mechanisms of crystals in solution. Since the deformation mechanisms prevailing in the Earth's crust operate predominantly within the elastic deformation regime of rocks, it is very important to improve our understanding of the effects caused by elastic strain and to define its role during deformation by pressure solution. The present work deals with experiments in which the effect of compressive mechanical stress on the dissolution and growth of single crystals of various highly soluble, elastic/brittle salts was investigated. These salts were chosen as analogues of rock-forming minerals such as quartz and calcite. The influence of stress on the development of surface microstructures in an undersaturated solution was studied on potassium alum. Dissolution grooves (20-40 µm wide, 10-40 µm deep, and spaced 20-80 µm apart) developed in the regions where the stress in the crystal was greatest; they disappeared again as soon as the crystal was unloaded. These grooves developed parallel to low-index crystallographic directions and sub-perpendicular to the trajectories of maximum local compressive stress. The size of the dissolution grooves depended on the local surface stress, the surface energy, and the degree of undersaturation of the aqueous solution. The microstructural evolution of the crystal surfaces agreed well with the theoretical predictions based on the models of Heidug & Leroy (1994) and Leroy & Heidug (1994). The influence of stress on the dissolution rate was investigated on sodium chlorate single crystals. It was found that stressed crystals dissolve faster than unstressed ones. The experimentally observed increase in the dissolution rate of the stressed crystals was one to two orders of magnitude higher than theoretically expected. The dissolution rate increased linearly with stress, and the increase was larger the more undersaturated the solution. In addition, the effect of stress on crystal growth was investigated on potassium alum and potassium dihydrogen phosphate single crystals. The growth rate of the {100} and {110} faces of potassium alum was strongly reduced under stress.
For all these results, the surface roughness of the crystals played a key role by causing a non-homogeneous stress distribution on the crystal surface. The results show that elastic strain can play a significant role during pressure solution and can cause significant deformation in the upper crust at stresses lower than commonly assumed. It follows that elastic stress must be taken into account when microphysical deformation models are developed.
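For orientation, the kind of thermodynamic driving force weighed in this abstract can be written schematically as follows; this is an indicative Gibbs-type expression in the spirit of Heidug & Leroy (1994), not the exact formula used in the thesis:

```latex
% Schematic driving force for dissolution at a stressed solid-fluid interface:
\begin{equation*}
  \Delta\mu \;=\; v_s\!\left[(\sigma_n - p_f) + \Delta f_{\mathrm{el}} + \gamma\,\kappa\right],
\end{equation*}
% v_s: molar volume of the solid; \sigma_n: normal stress at the interface;
% p_f: fluid pressure; \Delta f_{\mathrm{el}}: elastic strain-energy density;
% \gamma: surface energy; \kappa: local surface curvature.
```

The three terms correspond to the candidate driving forces discussed above: normal-stress differences, elastic strain energy, and surface energy.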
Abstract:
This doctoral thesis investigates the behaviour of complex fluids under shear, in particular the influence of shear flows on structure formation. To this end, a model of such fluids is designed and used in molecular dynamics simulations. First, the equilibrium properties of this model are examined; among other things, the location of the order-disorder transition from the isotropic to the lamellar phase of the dimers is determined. The influence of shear flows on this lamellar phase is then investigated and compared with analytical theories. Shearing a parallel lamellar phase causes a reorientation of the director in the flow direction. This reduces the layer thickness with increasing shear rate and, above a threshold, leads to undulations. Comparable behaviour is also found in lamellar systems that are pulled in the direction of the director; however, the type of bifurcation turns out to differ in the two cases. Under shear, a transition from lamellae in parallel orientation to perpendicular orientation is found, and the shear stress is observed to be lower in the perpendicular orientation than in the parallel one. Under certain conditions this leads to the occurrence of shear bands, which is also observed in the simulations. With a simple model it has thus been possible to reproduce many aspects of the behaviour of complex fluids; structure formation evidently depends only partly on the local properties of the molecules.
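As a pointer to how a shear flow is typically imposed in such simulations, here is a minimal sketch of Lees-Edwards (sliding-brick) boundary conditions in Python; the thesis model itself (dimer mesogens, force field, thermostat) is not reproduced here, and the function name and array layout are illustrative assumptions:

```python
import numpy as np

def lees_edwards_wrap(pos, vel, L, shear_rate, t):
    """Wrap particles across Lees-Edwards boundaries: crossing the y-boundary
    shifts x by the accumulated image offset and changes the x-velocity by
    the relative image velocity, imposing a mean linear shear flow."""
    offset = (shear_rate * L * t) % L        # current x-offset of the images
    up, dn = pos[:, 1] >= L, pos[:, 1] < 0.0
    pos[up, 0] -= offset; vel[up, 0] -= shear_rate * L
    pos[dn, 0] += offset; vel[dn, 0] += shear_rate * L
    pos[up, 1] -= L
    pos[dn, 1] += L
    pos[:, 0] %= L                           # ordinary periodic wrap in x
    return pos, vel
```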
Abstract:
The aim of this thesis was to investigate the synthesis of enantiomerically enriched heterocycles and dehydro-β-amino acid derivatives which can be used as scaffolds or intermediates of biologically active compounds, in particular as novel αvβ3 and α5β1 integrin ligands. The starting materials of all the compounds synthesized here are alkylideneacetoacetates. Alkylidene derivatives are very useful compounds: they are usually employed as unsaturated electrophiles and have the advantage of introducing different kinds of functionality that may be further elaborated. In chapter 1, the regio- and stereoselective allylic amination of pure carbonates is presented. The reaction proceeds under uncatalyzed or palladium-catalyzed conditions and affords enantiopure dehydro-β-amino esters that are useful precursors of biologically active compounds. Chapter 2 illustrates the synthesis of substituted isoxazolidines and isoxazolines via Michael addition followed by intramolecular hemiketalisation. The investigation of the effect of Lewis acid catalysis on the regioselectivity of the addition is also reported. Isoxazolidines and isoxazolines are interesting heterocyclic compounds that may be regarded as unusual constrained β-amino acids or as furanose mimetics. The synthesis of unusual cyclic amino acid precursors, which may be envisaged as proline analogues and as scaffolds for the design of bioactive peptidomimetics, is presented in chapter 3. The synthesis of 2-substituted-3,4-dehydropyrrole derivatives starting from allylic carbonates via a two-step allylic amination/ring-closing metathesis (RCM) protocol is carried out. The reaction was optimized by testing different Grubbs' catalysts and carbamate nitrogen protecting groups. Moreover, in view of a future application of these dehydro-β-amino acids as the central core of peptidomimetics, the malonate chain was also used to protect nitrogen prior to RCM. Finally, chapter 4 presents the synthesis of two novel classes of integrin antagonists, one derived from the dehydro-β-amino acids prepared as described in chapter 1, the other built on the isoxazolidines synthesized in chapter 2 as a rigid constrained core. Since these compounds are promising RGD mimetics for αvβ3 and α5β1 integrins, they have been submitted to biological assays, and to interpret their different affinities for the αvβ3 receptor on a molecular basis, docking studies were performed using the Glide program.
Abstract:
In this work we are concerned with the analysis and numerical solution of Black-Scholes type equations arising in the modeling of incomplete financial markets and an inverse problem of determining the local volatility function in a generalized Black-Scholes model from observed option prices. In the first chapter a fully nonlinear Black-Scholes equation which models transaction costs arising in option pricing is discretized by a new high order compact scheme. The compact scheme is proved to be unconditionally stable and non-oscillatory and is very efficient compared to classical schemes. Moreover, it is shown that the finite difference solution converges locally uniformly to the unique viscosity solution of the continuous equation. In the next chapter we turn to the calibration problem of computing local volatility functions from market data in a generalized Black-Scholes setting. We follow an optimal control approach in a Lagrangian framework. We show the existence of a global solution and study first- and second-order optimality conditions. Furthermore, we propose an algorithm that is based on a globalized sequential quadratic programming method and a primal-dual active set strategy, and present numerical results. In the last chapter we consider a quasilinear parabolic equation with quadratic gradient terms, which arises in the modeling of an optimal portfolio in incomplete markets. The existence of weak solutions is shown by considering a sequence of approximate solutions. The main difficulty of the proof is to infer the strong convergence of the sequence. Furthermore, we prove the uniqueness of weak solutions under a smallness condition on the derivatives of the covariance matrices with respect to the solution, but without additional regularity assumptions on the solution. The results are illustrated by a numerical example.
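As a baseline illustration of finite-difference option pricing, here is a minimal sketch of a plain explicit scheme for the linear Black-Scholes equation; this is not the high-order compact scheme for the nonlinear transaction-cost equation developed in the thesis, only the standard starting point against which such schemes are compared:

```python
import numpy as np

def bs_call_fd(K=100.0, r=0.05, sigma=0.2, T=1.0, S_max=300.0, M=300, N=40000):
    """Explicit finite differences for the linear Black-Scholes PDE
    (European call), marching backwards from the payoff at maturity."""
    dS, dt = S_max / M, T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(S - K, 0.0)                       # terminal payoff
    for n in range(1, N + 1):
        delta = (V[2:] - V[:-2]) / (2.0 * dS)        # dV/dS
        gamma = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dS**2
        V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * gamma
                         + r * S[1:-1] * delta - r * V[1:-1])
        V[0], V[-1] = 0.0, S_max - K * np.exp(-r * n * dt)   # boundaries
    return S, V
```

Unlike the unconditionally stable compact scheme of the thesis, this explicit scheme only works under a restrictive time-step limit of order dS²/(σ²S_max²).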
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale and high-detail applications. The two models were first applied to several numerical cases, to test the reliability and accuracy of different model versions. Then, the most effective versions were applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
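To make the diffusive (zero-inertia) approach concrete, here is a minimal sketch of one explicit update of a 2D diffusive-wave model with Manning friction; it is a generic illustration, not the CA2D or IFD-GGA flux formulation:

```python
import numpy as np

def face_flux(eta_up, eta_dn, h_up, h_dn, dx, n_man=0.03):
    """Manning flux per unit width across a cell face (positive = up->dn),
    driven by the water-surface slope, with upwind flow depth."""
    s = (eta_up - eta_dn) / dx
    hf = np.where(s > 0.0, h_up, h_dn)
    return np.sign(s) * hf**(5.0 / 3.0) * np.sqrt(np.abs(s)) / n_man

def diffusive_wave_step(h, z, dx, dt):
    """One explicit finite-volume step: h water depth, z bed elevation.
    dt must satisfy a CFL-like limit for this scheme to stay stable."""
    eta = z + h                                  # water surface elevation
    qx = face_flux(eta[:, :-1], eta[:, 1:], h[:, :-1], h[:, 1:], dx)
    qy = face_flux(eta[:-1, :], eta[1:, :], h[:-1, :], h[1:, :], dx)
    h = h.copy()
    h[:, :-1] -= dt / dx * qx;  h[:, 1:] += dt / dx * qx
    h[:-1, :] -= dt / dx * qy;  h[1:, :] += dt / dx * qy
    return np.maximum(h, 0.0)                    # keep depths non-negative
```

Each face flux enters the two adjacent cells with opposite signs, so mass is conserved by construction, which is what makes finite-volume schemes attractive for large-scale flood simulation.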
Abstract:
In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov Chain Monte Carlo sampling method (rj-MCMC). It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. I show that this new code allows overcoming major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that the algorithm is able to produce a robust and reliable tomography without the need to make subjective a priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties of the model parameters. Therefore, it is very suitable for investigating the velocity structure in regions that lack accurate a priori information. Synthetic tests also reveal that the absence of regularization constraints allows extracting more information from the observed data, and that the velocity structure can be detected also in regions where the density of rays is low and standard linearized codes fail. I also present high-resolution Vp and Vp/Vs models for two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in the data fit compared with the models obtained from the same data set with linearized inversion codes.
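The core of the trans-dimensional idea can be sketched in a few lines. The following toy example fits a piecewise-constant 1D model to data with a variable number of cells; drawing birth values from the (uniform) prior makes the acceptance probability collapse to the likelihood ratio (Green, 1995). It is a schematic stand-in, not the LET code, which works on travel times and earthquake locations:

```python
import numpy as np

rng = np.random.default_rng(1)

def loglike(xs, vs, dx, dy, sigma=0.1):
    """Gaussian log-likelihood of a piecewise-constant model: nodes xs
    (sorted, xs[0] = left edge), values vs, observations (dx, dy)."""
    pred = vs[np.searchsorted(xs, dx, side='right') - 1]
    return -0.5 * np.sum((pred - dy) ** 2) / sigma**2

def rjmcmc_step(xs, vs, dx, dy, vmin=0.0, vmax=10.0, kmax=30):
    """One Reversible Jump step: birth, death, or value perturbation."""
    ll0 = loglike(xs, vs, dx, dy)
    move = rng.choice(('birth', 'death', 'perturb'))
    if move == 'birth' and len(xs) < kmax:
        xnew = rng.uniform(xs[0], dx.max())
        i = np.searchsorted(xs, xnew)
        xs1 = np.insert(xs, i, xnew)
        vs1 = np.insert(vs, i, rng.uniform(vmin, vmax))   # draw from prior
    elif move == 'death' and len(xs) > 1:
        i = rng.integers(1, len(xs))            # never remove the left edge
        xs1, vs1 = np.delete(xs, i), np.delete(vs, i)
    else:
        xs1, vs1 = xs, vs.copy()
        vs1[rng.integers(len(vs))] += rng.normal(0.0, 0.2)
    if np.log(rng.random()) < loglike(xs1, vs1, dx, dy) - ll0:
        return xs1, vs1
    return xs, vs
```

Averaging the sampled models then yields both an estimate of the structure and, crucially, the parameter uncertainties emphasized above.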
Abstract:
The diagnosis, grading and classification of tumours have benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types due to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves vs. time is a very simple yet operator-dependent procedure, so more objective approaches have been developed to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time-series data is used for tissue classification. The main issue with these schemes is that they have no direct interpretation in terms of physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression, applied to regions of interest selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, nonlinear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine-learning techniques for the segmentation and classification of breast lesions.
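As an illustration of the model-based route, here is a minimal pixelwise fit of the standard Tofts model (the classic bi-compartmental description); the biexponential arterial input function and its parameters below are placeholder assumptions, not values from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 300.0, 2.0)                              # time, s
cp = 3.0 * (np.exp(-0.01 * t) + 0.3 * np.exp(-0.001 * t))   # assumed AIF

def tofts(t, Ktrans, ve):
    """Standard Tofts model, C_t(t) = Ktrans * Int cp(u) e^{-(Ktrans/ve)(t-u)} du,
    evaluated as a discrete causal convolution."""
    dt = t[1] - t[0]
    kernel = np.exp(-(Ktrans / ve) * t)
    return Ktrans * dt * np.convolve(cp, kernel)[:len(t)]

def fit_pixel(ct):
    """Nonlinear regression of one pixel's concentration curve for (Ktrans, ve)."""
    popt, _ = curve_fit(tofts, t, ct, p0=(0.1, 0.3),
                        bounds=([1e-4, 1e-3], [2.0, 1.0]))
    return popt
```

The dependence of such fits on the initial solution p0 and on noise is exactly the computational drawback noted above.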
Abstract:
The Thermodynamic Bethe Ansatz analysis is carried out for the extended-CP^N class of integrable 2-dimensional Non-Linear Sigma Models related to the low energy limit of the AdS_4xCP^3 type IIA superstring theory. The principal aim of this program is to obtain a further non-perturbative consistency check of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct we eventually obtain a novel class of TBA models which fits in the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator, and the field content of the underlying CFT. The knowledge of these physical quantities has made it possible to conjecture a perturbed CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
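For the numerical side, the heart of a TBA computation is a fixed-point iteration of coupled integral equations. The sketch below solves a generic single-node equation and extracts the effective central charge; the actual kernel (and the multi-node structure of the extended-CP^N systems) depends on the proposed S-matrix and is left as an input here:

```python
import numpy as np

def c_eff(r, phi, th_max=20.0, n=2001, iters=500):
    """Iterate eps(th) = r cosh(th) - Int dth'/(2 pi) phi(th - th') L(th'),
    with L = ln(1 + e^{-eps}), and return
    c_eff(r) = (3/pi^2) Int dth r cosh(th) L(th)."""
    th = np.linspace(-th_max, th_max, n)
    dth = th[1] - th[0]
    K = phi(th[:, None] - th[None, :])       # kernel matrix phi(th - th')
    drive = r * np.cosh(th)
    eps = drive.copy()                       # start from the driving term
    for _ in range(iters):
        L = np.logaddexp(0.0, -eps)          # stable ln(1 + e^{-eps})
        eps = drive - (K @ L) * dth / (2.0 * np.pi)
    L = np.logaddexp(0.0, -eps)
    return 3.0 / np.pi**2 * np.sum(drive * L) * dth
```

Tracking c_eff(r) as r tends to 0 is what gives access to the UV data (effective central charge, perturbing dimension) mentioned above.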
Abstract:
Non-small-cell lung cancer (NSCLC) represents the leading cause of cancer death worldwide; 5-year survival is about 16% for patients diagnosed with advanced lung cancer, and about 70-90% when the disease is diagnosed and treated at earlier stages. The treatment of NSCLC has changed in recent years with the introduction of targeted agents, such as gefitinib and erlotinib, which have dramatically changed the natural history of NSCLC patients carrying specific mutations in the EGFR gene, or crizotinib, for patients with the EML4-ALK translocation. However, such patients represent only about 15-20% of all NSCLC patients; for the remaining individuals conventional chemotherapy is still the standard choice, but the response rate to this type of treatment is only about 20%. The development of new drugs and new therapeutic approaches is therefore needed to improve patient outcomes. In this project we aimed to analyse the antitumoral activity of two compounds with the ability to inhibit histone deacetylases (ACS 2 and ACS 33), derived from valproic acid and conjugated with H2S, in human cancer cell lines derived from NSCLC tissues. We showed that ACS 2 is the more promising agent: it showed strong antitumoral and pro-apoptotic activity, inducing membrane depolarization, cytochrome c release, and caspase-3 and -9 activation. It was able to reduce the invasive capacity of cells through inhibition of metalloproteinase expression, and to induce reduced chromatin condensation. This last characteristic is probably responsible for the observed high synergistic activity in combination with cisplatin. In conclusion, our results highlight the potential role of the ACS 2 compound as a new therapeutic option for NSCLC patients, especially in combination with cisplatin. If validated in in vivo models, this compound would be a worthy candidate for phase I clinical trials.
Abstract:
In this work the turbulence in the atmospheric boundary layer under convective conditions is modelled. To this aim, the equations that describe the atmospheric motion are expressed through Reynolds averages and therefore require closures. This work consists of modifying the TKE-l closure used in the BOLAM (Bologna Limited Area Model) forecast model. In particular, the single-column model extracted from BOLAM is used and modified to obtain three further closure schemes: a non-local term is added to the flux-gradient relations used to close the second-order moments present in the evolution equation of the turbulent kinetic energy, so that the flux-gradient relations become more suitable for simulating an unstable boundary layer. Finally, the results obtained from the single-column model and from the three new schemes are compared with each other and with the observations provided by the well-known GABLS2 case from the literature.
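A minimal sketch of such a non-local (counter-gradient) correction to a flux-gradient relation, in the spirit of Deardorff (1972), is given below; the actual formulation and coefficient used in the modified BOLAM schemes may differ:

```python
import numpy as np

def heat_flux_nonlocal(theta, z, K_h, gamma=7e-4):
    """Flux-gradient relation with a counter-gradient term:
    w'theta' = -K_h * (dtheta/dz - gamma). With gamma > 0 the scheme can
    carry an upward heat flux even where dtheta/dz >= 0, as observed in
    convective boundary layers."""
    return -K_h * (np.gradient(theta, z) - gamma)
```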
Abstract:
In this thesis, a systematic analysis of the bar B to X_s gamma photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, m_b - 2E_gamma = O(Lambda_QCD), while the jet has a large energy but small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass m_b, the hadronic energy scale Lambda_QCD, and an intermediate scale sqrt{Lambda_QCD m_b} associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at O(1/m_b) whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty on the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the bar B to X_s gamma branching fraction, defined with a cut E_gamma > 1.6 GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5%. As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of -0.6% < A_CP^SM < 2.8% is possible for the asymmetry, if resolved photon contributions are taken into account.
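Schematically, the factorization described above has the following structure (normalization, scale dependence, and convolution variables suppressed; the precise formula is derived in the thesis):

```latex
\begin{equation*}
  \frac{d\Gamma}{dE_\gamma}
  \;\sim\; H(m_b)\; J\!\big(\sqrt{m_b\Lambda_{\rm QCD}}\big) \otimes S(\Lambda_{\rm QCD})
  \;+\; \frac{1}{m_b}\sum_i \bar J_i \otimes s_i \,,
\end{equation*}
% first term: direct photon contributions (hard, jet, and soft/shape functions);
% second term: resolved photon contributions, convolutions of jet functions
% \bar J_i with subleading shape functions s_i.
```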
Abstract:
In recent years it has become increasingly important to handle credit risk. Credit risk is the risk associated with the possibility of bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but before that time the counterparty defaults, at maturity the payment cannot be effectively performed, so the owner of the contract loses it entirely or in part. This means that the payoff of the derivative, and consequently its price, depends on the underlying of the basic derivative and on the risk of bankruptcy of the counterparty. To value and hedge credit risk in a consistent way, one needs to develop a quantitative model. We have studied analytical approximation formulas and numerical methods, such as the Monte Carlo method, in order to calculate the price of a bond. We have illustrated how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we have compared the second- and third-order approximations of the bond and call price with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate. We have provided numerical examples that illustrate the effectiveness and versatility of our methods. We have used Wolfram Mathematica and Matlab.
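As a small illustration of the Monte Carlo side of such computations, the sketch below prices a zero-coupon bond under a CIR short rate with a full-truncation Euler scheme; it is a generic stand-in, since the thesis works with the JDCEV model, where default risk also enters the discounting:

```python
import numpy as np

def cir_bond_price_mc(r0=0.03, kappa=0.5, theta=0.04, sigma=0.1,
                      T=1.0, n_steps=250, n_paths=100_000, seed=0):
    """P(0,T) = E[exp(-int_0^T r_t dt)] by Monte Carlo under
    dr = kappa*(theta - r) dt + sigma*sqrt(r) dW (full-truncation Euler)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += r * dt                    # accumulate the discount rate
        rp = np.maximum(r, 0.0)               # full truncation at zero
        r = r + kappa * (theta - rp) * dt \
            + sigma * np.sqrt(rp * dt) * rng.standard_normal(n_paths)
    return float(np.exp(-integral).mean())
```

Since the CIR bond price is known in closed form, a run like this also serves as an accuracy benchmark against which series approximations can be compared.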
Towards the 3D attenuation imaging of active volcanoes: methods and tests on real and simulated data
Abstract:
The purpose of my PhD thesis has been to address the problem of retrieving a three-dimensional attenuation model of volcanic areas. To this purpose, I first elaborated a robust strategy for the analysis of seismic data, performing several synthetic tests to assess the applicability of the spectral ratio method to our purposes. The results of the tests allowed us to conclude that: 1) the spectral ratio method gives reliable differential attenuation (dt*) measurements in smooth velocity models; 2) a short signal time window has to be chosen to perform the spectral analysis; 3) the frequency range over which spectral ratios are computed greatly affects the dt* measurements. Furthermore, a refined approach for the application of the spectral ratio method has been developed and tested; through this procedure, the effects of heterogeneities of the propagation medium on the seismic signals may be removed. The tested data analysis technique was applied to the real active seismic SERAPIS database, providing a dataset of dt* measurements that was used to obtain a three-dimensional attenuation model of the shallowest part of the Campi Flegrei caldera. Then, a linearized, iterative, damped attenuation tomography technique was tested and applied to the selected dataset. The tomography, with a resolution of 0.5 km in the horizontal directions and 0.25 km in the vertical direction, allowed us to image important features in the offshore part of the Campi Flegrei caldera. High-Qp bodies are immersed in a high-attenuation body (Qp = 30), which is well correlated with low Vp and high Vp/Vs values and is interpreted as a layer of saturated marine and volcanic sediments. The high-Qp anomalies, instead, are interpreted as the effect either of cooled lava bodies or of a CO2 reservoir. A pseudo-circular high-Qp anomaly was also detected and interpreted as the buried rim of the NYT caldera.
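The arithmetic behind point 3) is worth spelling out: with attenuation, the amplitude spectrum decays as exp(-pi f t*), so the log spectral ratio of two comparable records is linear in frequency with slope -pi*dt*. A minimal sketch follows (no spectral smoothing or SNR weighting, both of which matter in practice):

```python
import numpy as np

def differential_tstar(sig1, sig2, dt, fband=(2.0, 15.0)):
    """Estimate dt* from ln[A1(f)/A2(f)] = const - pi*f*dt*:
    window, FFT, and fit a line over the chosen frequency band."""
    n = len(sig1)
    f = np.fft.rfftfreq(n, dt)
    a1 = np.abs(np.fft.rfft(sig1 * np.hanning(n)))
    a2 = np.abs(np.fft.rfft(sig2 * np.hanning(n)))
    m = (f >= fband[0]) & (f <= fband[1])
    slope, _ = np.polyfit(f[m], np.log(a1[m] / a2[m]), 1)
    return -slope / np.pi
```

Changing fband shifts the fitted slope whenever the spectra are not ideal, which is precisely the sensitivity reported above.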
Abstract:
Time series are ubiquitous. The acquisition and processing of continuously measured data is found in all areas of the natural sciences, medicine, and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, demands exceptionally fast algorithms in theory and practice. This thesis therefore deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries, or the unsupervised extraction of prototypical building blocks in time series make heavy use of these alignments, hence the need for fast implementations. The thesis is organized into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It comprises a new computation scheme for local alignment scores under the z-normalized Euclidean distance, usable on any parallel hardware with support for fast Fourier transforms, and a SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations compute one to two orders of magnitude faster than established methods.

Second, two linear-time approximations for the elastic alignment of subsequences are investigated. On the one hand, a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization are treated. On the other hand, a new local distance measure is introduced, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance to trends on the measurement axis and to uniform scaling on the time axis. An extension of GEM to multi-shape segmentation is also discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series, on the basis of which distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Memory-efficient representations and group-compatible extensions of elastic measures are also discussed.
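To make the first contribution concrete, the statistics trick at the core of UCR-suite-style subsequence search can be sketched in a few lines of Python; this is the CPU arithmetic only, while the CUDA port described above additionally uses FFT-based dot products and a cascade of lower bounds:

```python
import numpy as np

def znorm_sq_dists(ts, q):
    """Squared z-normalized Euclidean distance between query q and every
    length-m subsequence of ts, with O(n) running window statistics."""
    m = len(q)
    qz = (q - q.mean()) / q.std()              # z-normalized query
    c1 = np.cumsum(np.concatenate(([0.0], ts)))
    c2 = np.cumsum(np.concatenate(([0.0], ts * ts)))
    mu = (c1[m:] - c1[:-m]) / m                # window means
    var = (c2[m:] - c2[:-m]) / m - mu * mu
    sig = np.sqrt(np.maximum(var, 1e-12))      # window standard deviations
    dots = np.correlate(ts, qz, mode='valid')  # sliding dot products
    return 2.0 * m - 2.0 * dots / sig          # uses sum(qz) = 0

# best matching subsequence: int(np.argmin(znorm_sq_dists(ts, q)))
```

Replacing np.correlate with an FFT-based correlation is what makes the dot products cheap for long series, and that is the scheme the CUDA port parallelizes.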
Abstract:
The first chapter of this work aims to provide a brief overview of the history of our Universe in the context of string theory, considering inflation as its possible application to cosmological problems. We then discuss type IIB string compactifications, introducing the study of the inflaton, a scalar field that is a candidate to drive inflation. The Large Volume Scenario (LVS) is studied in the second chapter, paying particular attention to the stabilisation of the Kähler moduli, which are four-dimensional gravitationally coupled scalar fields that parameterise the size of the extra dimensions. Moduli stabilisation is the process through which these particles acquire a mass and can become promising inflaton candidates. The third chapter is devoted to the study of Fibre Inflation, an interesting inflationary model derived within the context of LVS compactifications. The fourth chapter tries to extend the slow-roll region of the scalar potential by taking larger values of the field φ, with the purpose of studying in detail deviations of the cosmological observables that can better reproduce current experimental data. Finally, we present a slight modification of Fibre Inflation based on a different compactification manifold. This new model produces larger tensor modes, with a spectral index in good agreement with the data released in February 2015 by the Planck satellite.
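For context, the slow-roll analysis alluded to above reduces to a small computation once the scalar potential is fixed. The sketch below evaluates the standard slow-roll observables for a Fibre-Inflation-like plateau potential; the potential's normalization and the field value are illustrative assumptions, not the thesis's results:

```python
import numpy as np

def slow_roll_observables(V, phi, h=1e-4):
    """n_s and r from a potential V(phi) in reduced Planck units:
    eps = (V'/V)^2 / 2,  eta = V''/V,  n_s = 1 - 6 eps + 2 eta,  r = 16 eps."""
    V0 = V(phi)
    Vp = (V(phi + h) - V(phi - h)) / (2.0 * h)      # numerical V'
    Vpp = (V(phi + h) - 2.0 * V0 + V(phi - h)) / h**2
    eps, eta = 0.5 * (Vp / V0) ** 2, Vpp / V0
    return 1.0 - 6.0 * eps + 2.0 * eta, 16.0 * eps

# Assumed Fibre-Inflation-like plateau (overall scale drops out of n_s and r):
V = lambda p: 3.0 - 4.0 * np.exp(-p / np.sqrt(3.0)) + np.exp(-4.0 * p / np.sqrt(3.0))
ns, r = slow_roll_observables(V, 6.0)
print(ns, r)   # for this potential and field value: n_s ~ 0.97, r ~ 0.005
```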