971 results for FORMALISM
Abstract:
This paper describes a logic-based formalism for qualitative spatial reasoning with cast shadows (Perceptual Qualitative Relations on Shadows, or PQRS) and presents results of a mobile robot qualitative self-localisation experiment using this formalism. Shadow detection was accomplished by mapping the images from the robot’s monocular colour camera into an HSV colour space and then thresholding on the V dimension. We present results of self-localisation using two methods for obtaining the threshold automatically: in one method the images are segmented according to their grey-scale histograms; in the other, the threshold is set according to a prediction about the robot’s location, based upon a qualitative spatial reasoning theory about shadows. This theory-driven threshold search and the qualitative self-localisation procedure are the main contributions of the present research. To the best of our knowledge this is the first work that uses qualitative spatial representations both to perform robot self-localisation and to calibrate a robot’s interpretation of its perceptual input.
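A minimal sketch of the V-channel thresholding step described above, assuming OpenCV; the function name, the Otsu fallback and the dark-pixel criterion are illustrative choices, not the authors' code:

```python
import cv2
import numpy as np

def detect_shadows(bgr_image, v_threshold=None):
    """Segment candidate shadow pixels by thresholding the V channel of HSV.

    If no threshold is given, one is derived from the grey-level histogram
    via Otsu's method (one of the two strategies discussed in the abstract);
    the paper's second strategy instead predicts the threshold from the
    qualitative theory about shadows.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    if v_threshold is None:
        v_threshold, _ = cv2.threshold(v, 0, 255,
                                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Shadow pixels are dark, i.e. fall below the V threshold.
    shadow_mask = (v < v_threshold).astype(np.uint8) * 255
    return shadow_mask, v_threshold
```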
Abstract:
We have completed a high-contrast direct imaging survey for giant planets around 57 debris disk stars as part of the Gemini NICI Planet-Finding Campaign. We achieved median H-band contrasts of 12.4 mag at 0.″5 and 14.1 mag at 1″ separation. Follow-up observations of the 66 candidates with projected separation <500 AU show that all of them are background objects. To establish statistical constraints on the underlying giant planet population based on our imaging data, we have developed a new Bayesian formalism that incorporates (1) non-detections, (2) single-epoch candidates, (3) astrometric and (4) photometric information, and (5) the possibility of multiple planets per star to constrain the planet population. Our formalism allows us to include in our analysis the previously known β Pictoris and HR 8799 planets. Our results show at 95% confidence that <13% of debris disk stars have a ≥5 M_Jup planet beyond 80 AU, and <21% of debris disk stars have a ≥3 M_Jup planet outside of 40 AU, based on hot-start evolutionary models. We model the population of directly imaged planets as d²N/(dM da) ∝ m^α a^β, where m is planet mass and a is orbital semi-major axis (with a maximum value a_max). We find that β < –0.8 and/or α > 1.7. Likewise, we find that β < –0.8 and/or a_max < 200 AU. For the case where the planet frequency rises sharply with mass (α > 1.7), this occurs because all the planets detected to date have masses above 5 M_Jup, but planets of lower mass could easily have been detected by our search. If we ignore the β Pic and HR 8799 planets (should they belong to a rare and distinct group), we find that <20% of debris disk stars have a ≥3 M_Jup planet beyond 10 AU, and β < –0.8 and/or α < –1.5. Likewise, β < –0.8 and/or a_max < 125 AU. Our Bayesian constraints are not strong enough to reveal any dependence of the planet frequency on stellar host mass. Studies of transition disks have suggested that about 20% of stars are undergoing planet formation; our non-detections at large separations show that planets with orbital separation >40 AU and planet masses >3 M_Jup do not carve the central holes in these disks.
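The power-law population model quoted above, d²N/(dM da) ∝ m^α a^β truncated at a_max, can be turned into a predicted number of detectable planets per star by integrating over the part of the (m, a) plane a survey can reach. A minimal sketch, with a crude step-function completeness, arbitrary grid ranges and an assumed host fraction f_planet standing in for the paper's actual Bayesian machinery:

```python
import numpy as np

def expected_detections(alpha, beta, a_max, f_planet,
                        m_grid=np.linspace(1.0, 13.0, 200),    # planet mass, M_Jup
                        a_grid=np.linspace(5.0, 500.0, 400)):  # semi-major axis, AU
    """Expected detectable planets per star for dN/(dm da) ∝ m^alpha * a^beta.

    `f_planet` is the assumed fraction of stars hosting one planet in the grid
    range; the completeness map below (everything with m >= 3 M_Jup beyond
    40 AU is detected) is a placeholder, not the survey's real contrast curve.
    """
    m, a = np.meshgrid(m_grid, a_grid, indexing="ij")
    pdf = m**alpha * a**beta
    pdf[a > a_max] = 0.0                      # truncate the distribution at a_max
    dm, da = m_grid[1] - m_grid[0], a_grid[1] - a_grid[0]
    pdf /= pdf.sum() * dm * da                # normalise to a probability density
    completeness = ((m >= 3.0) & (a >= 40.0)).astype(float)
    return f_planet * (pdf * completeness).sum() * dm * da

# e.g. a steeply rising mass function truncated at 200 AU:
print(expected_detections(alpha=1.7, beta=-0.8, a_max=200.0, f_planet=0.2))
```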
Abstract:
Spin systems in the presence of disorder are described by two sets of degrees of freedom, associated with orientational (spin) and disorder variables, which may be characterized by two distinct relaxation times. Disordered spin models have been mostly investigated in the quenched regime, which is the usual situation in solid state physics, and in which the relaxation time of the disorder variables is much larger than the typical measurement times. In this quenched regime, disorder variables are fixed, and only the orientational variables are duly thermalized. Recent studies in the context of lattice statistical models for the phase diagrams of nematic liquid-crystalline systems have stimulated interest in going beyond the quenched regime. The phase diagrams predicted by these calculations for a simple Maier-Saupe model turn out to be qualitatively different from the quenched case if the two sets of degrees of freedom are allowed to reach thermal equilibrium during the experimental time, which is known as the fully annealed regime. In this work, we develop a transfer matrix formalism to investigate annealed disordered Ising models on two hierarchical structures, the diamond hierarchical lattice (DHL) and the Apollonian network (AN). The calculations follow the same steps used for the analysis of simple uniform systems, which amounts to deriving proper recurrence maps for the thermodynamic and magnetic variables in terms of the generations of the construction of the hierarchical structures. In this context, we may consider different kinds of disorder, and different types of ferromagnetic and anti-ferromagnetic interactions. In the present work, we analyze the effects of dilution, which are produced by the removal of some magnetic ions. The system is treated in a “grand canonical” ensemble. The introduction of two extra fields, related to the concentration of two different types of particles, leads to higher-rank transfer matrices as compared with the formalism for the usual uniform models. Preliminary calculations on a DHL indicate that there is a phase transition for a wide range of dilution concentrations. Ising spin systems on the AN are known to be ferromagnetically ordered at all temperatures; in the presence of dilution, however, there are indications of a disordered (paramagnetic) phase at low concentrations of magnetic ions.
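For orientation, the recurrence-map idea in its simplest setting: the uniform (undiluted, zero-field) Ising model on the b = 2 diamond hierarchical lattice, where tracing out the inner spins of each diamond gives a closed map for t = tanh(J/k_BT). The diluted, annealed case studied in the abstract involves higher-rank transfer matrices and extra fields; the sketch below, under that simplifying assumption, only illustrates the generation-by-generation recursion and how its unstable fixed point locates the critical temperature:

```python
import numpy as np

def dhl_map(t):
    """One renormalisation step for the uniform Ising model on the diamond
    hierarchical lattice: two bonds in series (t -> t**2), then two such
    paths in parallel (couplings add), giving t' = 2 t^2 / (1 + t^4)."""
    return 2.0 * t**2 / (1.0 + t**4)

def critical_point(tol=1e-12):
    """Locate the unstable fixed point t* by bisection: initial conditions
    above t* flow to 1 (ordered phase), below t* they flow to 0."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        t = mid
        for _ in range(60):           # iterate the map over 60 generations
            t = dhl_map(t)
        if t > 0.5:                   # flowed towards the ordered phase
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t_star = critical_point()
print("t* =", t_star, "  K_c = J/(k_B T_c) =", np.arctanh(t_star))
```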
Abstract:
In molecular and atomic devices the interaction between electrons and ionic vibrations plays an important role in electronic transport. The electron-phonon coupling can cause the loss of the electron's phase coherence, the opening of new conductance channels and the suppression of purely elastic ones. From the technological viewpoint, phonons might restrict the efficiency of electronic devices by energy dissipation, causing heating, power loss and instability. The state of the art in electron transport calculations consists in combining ab initio calculations via Density Functional Theory (DFT) with the Non-Equilibrium Green's Function (NEGF) formalism. In order to include electron-phonon interactions, one needs in principle to include a self-energy scattering term in the open-system Hamiltonian which takes into account the effect of the phonons on the electrons and vice versa. Nevertheless, this term can be obtained approximately by perturbative methods. In the First Born Approximation one considers only the first-order terms of the electronic Green's function expansion. In the Self-Consistent Born Approximation, the interaction self-energy is calculated with the perturbed electronic Green's function in a self-consistent way. In this work we describe how to incorporate the electron-phonon interaction into the SMEAGOL program (Spin and Molecular Electronics in Atomically Generated Orbital Landscapes), an ab initio code for electronic transport based on the combination of DFT and NEGF. This provides a tool for calculating the transport properties of material-specific systems, particularly in molecular electronics. Preliminary results will be presented, showing the effects produced by considering the electron-phonon interaction in nanoscale devices.
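The self-consistent Born iteration can be sketched on a toy model. The code below is not SMEAGOL code: it assumes a small device Hamiltonian H, a single combined (energy-independent, for brevity) lead self-energy, and a local Einstein phonon of energy ħω coupled diagonally with strength λ, and it keeps only a crude retarded "Fock-like" Born term, dropping the Hartree and lesser-Green's-function contributions that a full SCBA implementation requires. It is meant only to show the structure of the loop in which Σ_ph is rebuilt from the dressed G until convergence.

```python
import numpy as np

def retarded_G(E, H, sigma_leads, sigma_ph, eta=1e-6):
    """Retarded Green's function G = [(E + i*eta) I - H - Sigma_leads - Sigma_ph]^-1."""
    n = H.shape[0]
    return np.linalg.inv((E + 1j * eta) * np.eye(n) - H - sigma_leads - sigma_ph)

def scba_self_energy(E_grid, H, sigma_leads, lam=0.05, hbar_omega=0.02,
                     n_bose=0.0, n_iter=30, mix=0.5):
    """Self-consistent Born loop for a local Einstein phonon (toy sketch).

    Only the retarded 'Fock-like' term
        Sigma_ph(E) ~ lam^2 [ (n_B + 1) G(E - hbar*omega) + n_B G(E + hbar*omega) ]
    is kept, restricted to the diagonal (local coupling); energy shifts are
    rounded to the nearest grid point.
    """
    nE, n = len(E_grid), H.shape[0]
    shift = int(round(hbar_omega / (E_grid[1] - E_grid[0])))
    sigma_ph = np.zeros((nE, n, n), dtype=complex)
    for _ in range(n_iter):
        G = np.array([retarded_G(E, H, sigma_leads, sigma_ph[i])
                      for i, E in enumerate(E_grid)])
        new = np.zeros_like(sigma_ph)
        for i in range(nE):
            lo, hi = max(i - shift, 0), min(i + shift, nE - 1)
            new[i] = lam**2 * ((n_bose + 1) * np.diag(np.diag(G[lo]))
                               + n_bose * np.diag(np.diag(G[hi])))
        sigma_ph = mix * new + (1 - mix) * sigma_ph   # linear mixing for stability
    return sigma_ph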
Abstract:
Atomic physics plays an important role in determining the evolution stages in a wide range of laboratory and cosmic plasmas. Therefore, the main contribution to our ability to model, infer and control plasma sources is the knowledge of the underlying atomic processes. Of particular importance are reliable low-temperature dielectronic recombination (DR) rate coefficients. This thesis provides systematically calculated DR rate coefficients of lithium-like beryllium and sodium ions via ∆n = 0 doubly excited resonant states. The calculations are based on complex-scaled relativistic many-body perturbation theory in an all-order formulation within the single- and double-excitation coupled-cluster scheme, including radiative corrections. Comparison of DR resonance parameters (energy levels, autoionization widths, radiative transition probabilities and strengths) between our theoretical predictions and the heavy-ion storage ring experiments (CRYRING-Stockholm and TSR-Heidelberg) shows good agreement. The intruder state problem is a principal obstacle for general application of the coupled-cluster formalism to doubly excited states. Thus, we have developed a technique designed to avoid the intruder state problem. It is based on a convenient partitioning of the Hilbert space and a reformulation of the conventional set of pair equations. The general aspects of this development are discussed, and the effectiveness of its numerical implementation (within the non-relativistic framework) is selectively illustrated on autoionizing doubly excited states of helium.
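For context, the resonance parameters listed above feed into the DR rate coefficient through the standard isolated-resonance expression (a textbook formula quoted here for orientation, not taken from this thesis):

$$
\alpha^{\mathrm{DR}}(T_e) \;\simeq\; \left(\frac{2\pi\hbar^{2}}{m_e k_B T_e}\right)^{3/2}
\frac{1}{2 g_i}\sum_{d} g_d\, e^{-E_d/k_B T_e}\,
\frac{A_a(d)\, A_r(d)}{A_a(d) + A_r(d)},
$$

where $E_d$, $g_d$, $A_a(d)$ and $A_r(d)$ are the energy, statistical weight, autoionization rate and radiative stabilization rate of the doubly excited resonance $d$, and $g_i$ is the statistical weight of the initial ion. Accurate low-temperature rates therefore hinge on the low-lying $E_d$ and on the widths, which is why the comparison with the storage-ring data matters.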
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter into the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) are best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, within the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters and the energetics of turbulence in the ICM from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For these reasons we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are of interest for achieving our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that, during a merger, a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter allow us to derive the evolution of the probability to form Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies and allows one to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility to test our model expectations with the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the formation process of clusters, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last “geometrical” M_H–R_H correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (P_R–R_H, P_R–M_H, P_R–T, P_R–L_X, . . . ) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
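To make the frequency cut-off argument concrete: the characteristic synchrotron frequency radiated by an electron of Lorentz factor γ in a field B is roughly ν ≈ (3/2) γ² ν_g, with ν_g = eB/(2π m_e c) the gyrofrequency. A back-of-the-envelope sketch using this standard formula (the numbers are illustrative and not taken from the thesis):

```python
def synchrotron_peak_frequency(gamma, B_microGauss):
    """Characteristic synchrotron frequency (Hz) for an electron of Lorentz
    factor gamma in a field B (microGauss): nu ~ 1.5 * gamma**2 * nu_g,
    with nu_g = e*B / (2*pi*m_e*c) ~ 2.8 Hz per microGauss."""
    nu_g = 2.8 * B_microGauss          # gyrofrequency in Hz
    return 1.5 * gamma**2 * nu_g

# Electrons re-accelerated up to gamma ~ 1e4 in a ~1 microGauss field radiate near
print(synchrotron_peak_frequency(1.0e4, 1.0) / 1e6, "MHz")   # ~ 420 MHz
# so whether a Halo shows up at 1.4 GHz or only at 150 MHz (LOFAR) hinges on how
# far the turbulent acceleration can push the cut-off Lorentz factor.
```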
Abstract:
The aim of this thesis is to go through different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi exploiting for each one a different technique.
We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) w.r.t. the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing Machines, namely grammars of types 1, 2 and 3 in the Chomsky Hierarchy.
We then move to asynchronous languages and we study full abstraction for two Linda-like languages. Linda can be considered as the asynchronous version of CCS plus a shared memory (a multiset of elements) that is used for storing messages. After having defined a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions in order to identify different traces which do not correspond to different behaviours.
Since the ability of one of the two variants considered to recognise multiple occurrences of messages in the store (which accounts for an increase of expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language. Indeed, we show that if we restrict to rules whose head contains at most n atoms, we can generate a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n).
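To give an operational flavour of what "multi-headed" buys, here is a toy multiset-rewriting interpreter in which one rule may consume n atoms from the store at once. It is not CHR syntax and not the thesis's encodings; it only illustrates that a 2-headed rule tests for the simultaneous presence of two atoms, something a 1-headed rule cannot express at this level:

```python
from collections import Counter

def applicable(store, heads):
    """True if the multiset `store` contains every atom required by `heads`."""
    return all(store[a] >= n for a, n in Counter(heads).items())

def rewrite(store, rules, max_steps=100):
    """Repeatedly apply the first applicable rule (heads, body): remove the
    head atoms from the store and add the body atoms."""
    store = Counter(store)
    for _ in range(max_steps):
        for heads, body in rules:
            if applicable(store, heads):
                store.subtract(Counter(heads))
                store.update(Counter(body))
                store += Counter()          # drop zero/negative counts
                break
        else:
            return store                    # no rule applies: final store
    return store

# A 2-headed rule: two 'token' atoms are merged into one 'pair'.
rules = [(("token", "token"), ("pair",))]
print(rewrite(["token"] * 5, rules))        # Counter({'pair': 2, 'token': 1})
```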
Abstract:
Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) w.r.t. a goal, and to provide results such as “the goal is derivable from the KB (of the theory)”. In order to achieve this goal we need to analyse different logics and choose the one that best meets our needs. In logic, usually, we try to determine whether a given conclusion is logically implied by a set of assumptions T (theory). However, when we deal with logic programming we need an efficient algorithm in order to find such implications. In this work we use a logic rather similar to human logic. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion depending on not definitely true premises belonging to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). Such applications are useful in the legal domain, especially if they offer an implementation of an argumentation framework that provides a formal modelling of the game. Roughly speaking, let the theory be the set of laws, let a key claim be the conclusion that one of the parties wants to prove (and the other one wants to defeat), and add dynamic assertion of rules, namely facts put forward by the parties; then we can play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the different strategies performed by the players. Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but we need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a Meta-level containing different Meta-evaluators. The first has been explained above, the second one is needed to perform the game model, and the last one will be used to change game execution and tree derivation strategies.
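As a rough illustration of what "defeasible" means operationally, here is a drastically simplified propositional sketch: facts, defeasible rules, and a superiority relation between rules with conflicting heads. It omits strict rules, defeaters, team defeat and the full proof conditions of Nute's logic, and it is in no way the Meta-level interpreter developed in the thesis; the Tweety example at the end is the usual textbook case.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly_provable(goal, facts, rules, superior, seen=frozenset()):
    """Toy propositional defeasible provability (sketch only).

    rules    : dict rule_name -> (list_of_premises, head)
    superior : set of pairs (stronger_rule, weaker_rule)
    """
    if goal in facts:
        return True
    if goal in seen:                       # crude cycle guard
        return False
    seen = seen | {goal}
    applicable = lambda r: all(
        defeasibly_provable(p, facts, rules, superior, seen)
        for p in rules[r][0])
    for r, (_, head) in rules.items():
        if head != goal or not applicable(r):
            continue
        attackers = [s for s, (_, h) in rules.items()
                     if h == negate(goal) and applicable(s)]
        if all((r, s) in superior for s in attackers):
            return True                    # every applicable attacker is beaten
    return False

# Tweety: birds fly, penguins don't, and the penguin rule is stronger.
facts = {"bird", "penguin"}
rules = {"r_fly":   (["bird"], "flies"),
         "r_nofly": (["penguin"], "~flies")}
superior = {("r_nofly", "r_fly")}
print(defeasibly_provable("flies", facts, rules, superior))    # False
print(defeasibly_provable("~flies", facts, rules, superior))   # True
```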
Abstract:
In recent years, new precision experiments have become possible with the high-luminosity accelerator facilities at MAMI and JLab, supplying physicists with precision data sets for different hadronic reactions in the intermediate energy region, such as pion photo- and electroproduction and real and virtual Compton scattering. By means of the low-energy theorem (LET), the global properties of the nucleon (its mass, charge, and magnetic moment) can be separated from the effects of the internal structure of the nucleon, which are effectively described by polarizabilities. The polarizabilities quantify the deformation of the charge and magnetization densities inside the nucleon in an applied quasistatic electromagnetic field. The present work is dedicated to developing a tool for the extraction of the polarizabilities from these precise Compton data with minimum model dependence, making use of the detailed knowledge of pion photoproduction by means of dispersion relations (DR). Due to the presence of t-channel poles, the dispersion integrals for two of the six Compton amplitudes diverge. Therefore, we have suggested to subtract the s-channel dispersion integrals at zero photon energy ($\nu=0$). The subtraction functions at $\nu=0$ are calculated through DR in the momentum transfer t at fixed $\nu=0$, subtracted at t=0. For this calculation, we use the information about the t-channel process, $\gamma\gamma\to\pi\pi\to N\bar{N}$. In this way, four of the polarizabilities can be predicted using the unsubtracted DR in the s-channel. The other two, $\alpha-\beta$ and $\gamma_\pi$, are free parameters in our formalism and can be obtained from a fit to the Compton data. We present the results for unpolarized and polarized RCS observables and indicate an enhanced sensitivity to the nucleon polarizabilities in the energy range between the pion production threshold and the $\Delta(1232)$ resonance. Furthermore, we extend the DR formalism to virtual Compton scattering (radiative electron scattering off the nucleon), in which the concept of the polarizabilities is generalized to the case of a virtual initial photon by introducing six generalized polarizabilities (GPs). Our formalism provides predictions for the four spin GPs, while the two scalar GPs $\alpha(Q^2)$ and $\beta(Q^2)$ have to be fitted to the experimental data at each value of $Q^2$. We show that at energies between the pion threshold and the $\Delta(1232)$ resonance position, the sensitivity to the GPs can be increased significantly, as compared to low energies, where the LEX is applicable. Our DR formalism can be used for analysing VCS experiments over a wide range of energy and virtuality $Q^2$, which allows one to extract the GPs from VCS data in different kinematics with a minimum of model dependence.
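For reference, the structure of the subtraction described above can be written as a generic once-subtracted fixed-t dispersion relation for a Compton amplitude (conventions follow standard RCS analyses and are quoted here as an assumption rather than copied from this particular work):

$$
\operatorname{Re} A_i(\nu,t) \;=\; A_i^{\mathrm{pole}}(\nu,t)
+ \Big[A_i(0,t) - A_i^{\mathrm{pole}}(0,t)\Big]
+ \frac{2}{\pi}\,\nu^{2}\,\mathcal{P}\!\int_{\nu_{\mathrm{thr}}}^{\infty}
d\nu'\,\frac{\operatorname{Im}_s A_i(\nu',t)}{\nu'\,\big(\nu'^{2}-\nu^{2}\big)},
$$

with the subtraction functions $A_i(0,t)$ themselves evaluated through a dispersion relation in $t$ at fixed $\nu=0$, subtracted at $t=0$, where the $t$-channel input $\gamma\gamma\to\pi\pi\to N\bar{N}$ enters.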
Abstract:
Piezoelectrics present an interactive electromechanical behaviour that, especially in recent years, has generated much interest, since it renders these materials suitable for use in a variety of electronic and industrial applications such as sensors, actuators, transducers and smart structures. Both mechanical and electric loads are generally applied to these devices and can cause high concentrations of stress, particularly in the proximity of defects or inhomogeneities, such as flaws, cavities or included particles. A thorough understanding of their fracture behaviour is crucial in order to improve their performance and avoid unexpected failures. Therefore, a considerable number of research works have addressed this topic in the last decades. Most of the theoretical studies on this subject find their analytical background in the complex variable formulation of plane anisotropic elasticity. This theoretical approach has its origins in the pioneering works of Muskhelishvili and Lekhnitskii, who obtained the solution of the elastic problem in terms of independent analytic functions of complex variables. In the present work, the expressions of stresses and elastic and electric displacements are obtained as functions of complex potentials through an analytical formulation which is the application to the piezoelectric static case of an approach introduced for orthotropic materials to solve elastodynamic problems. This method can be considered an alternative to other formalisms currently used, such as Stroh’s formalism. The equilibrium equations are reduced to a first-order system involving a six-dimensional vector field. After that, a similarity transformation is induced to reach three independent Cauchy-Riemann systems, so justifying the introduction of the complex variable notation. Closed-form expressions of the near-tip stress and displacement fields are therefore obtained. In the theoretical study of cracked piezoelectric bodies, the issue of assigning consistent electric boundary conditions on the crack faces is of central importance and has been addressed by many researchers. Three different boundary conditions are commonly accepted in the literature: the permeable, the impermeable and the semipermeable (“exact”) crack model. This thesis takes all three models into consideration, comparing the results obtained and analysing the effects of the choice of boundary condition on the solution. The influence of load biaxiality and of the application of a remote electric field has been studied, pointing out that both can affect to a varying extent the stress fields and the angle of initial crack extension, especially when non-singular terms are retained in the expressions of the electro-elastic solution. Furthermore, two different fracture criteria are applied to the piezoelectric case, and their outcomes are compared and discussed. The work is organized as follows: Chapter 1 briefly introduces the fundamental concepts of Fracture Mechanics. Chapter 2 describes plane elasticity formalisms for an anisotropic continuum (Eshelby-Read-Shockley and Stroh) and introduces, for the simplified orthotropic case, the alternative formalism we want to propose. Chapter 3 outlines the Linear Theory of Piezoelectricity, its basic relations and electro-elastic equations. Chapter 4 introduces the proposed method for obtaining the expressions of stresses and elastic and electric displacements, given as functions of complex potentials. The solution is obtained in closed form and non-singular terms are retained as well.
Chapter 5 presents several numerical applications aimed at estimating the effect of load biaxiality, of the electric field and of the assumed permittivity of the crack. Through the application of fracture criteria, the influence of the above-listed conditions on the response of the system, and in particular on the direction of crack branching, is thoroughly discussed.
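For orientation, the generic structure of the near-tip fields that such closed-form complex-potential solutions reproduce is the usual inverse square-root singularity with coupled mechanical and electric intensity factors (standard piezoelectric fracture notation, with the electric displacement intensity factor commonly written $K_{IV}$; the non-singular terms retained in this thesis are omitted here):

$$
\sigma_{ij}(r,\theta) \simeq \frac{1}{\sqrt{2\pi r}}
\Big[K_{I}\, f^{I}_{ij}(\theta) + K_{II}\, f^{II}_{ij}(\theta) + K_{IV}\, f^{IV}_{ij}(\theta)\Big],
\qquad
D_{i}(r,\theta) \simeq \frac{1}{\sqrt{2\pi r}}
\Big[K_{I}\, g^{I}_{i}(\theta) + K_{II}\, g^{II}_{i}(\theta) + K_{IV}\, g^{IV}_{i}(\theta)\Big],
$$

where the angular functions $f$ and $g$ depend on the electro-elastic constants and on the electric boundary conditions (permeable, impermeable or semipermeable) assumed on the crack faces.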
Abstract:
In this work, multi-step resonance ionization was employed for spectroscopy in gadolinium and samarium and, for gadolinium, was further developed for analytical investigations. The range of application of RIMS with continuous-wave and pulsed lasers on complex atoms was thereby considerably extended. Samarium and gadolinium belong to the lanthanides; owing to their complicated electron configurations they exhibit an interesting atomic spectrum. In samarium, the first of at most three resonant transitions was investigated with respect to isotope shift and hyperfine-structure splitting, a search for Rydberg series as unperturbed as possible was carried out just below the first ionization limit, and from the convergence of these series the ionization potential of 154Sm was determined isotope-selectively to be IP = 45519.30793(43) cm-1. Samarium and gadolinium possess a complex continuum structure characterized by narrow and strong autoionizing resonances. Data from earlier investigations of the gadolinium continuum structure were systematically evaluated in this work and supplemented by our own measurements. For the theoretical description of the line profiles of interfering autoionizing states, in addition to Fano profiles an approach from nuclear physics, the K-matrix formalism, was used together with a corresponding simulation program. Application to selected spectral regions in samarium and gadolinium shows good reproduction of the line shapes. In addition, the applicability of pulsed lasers for trace analysis was investigated in this work, and the attainability of the specifications required for medical applications was demonstrated.
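The ionization potential quoted above follows from the convergence of Rydberg series: term energies E_n = IP − R_M/(n − δ)² are fitted and extrapolated to n → ∞. A minimal sketch of such a fit, with synthetic example numbers rather than the measured Sm data, and assuming scipy is available:

```python
import numpy as np
from scipy.optimize import curve_fit

RYDBERG = 109737.0   # Rydberg constant in cm^-1 (R_infinity; the mass-corrected
                     # value for a given isotope differs slightly)

def rydberg_energy(n, ip, delta, R=RYDBERG):
    """Term energy of a Rydberg level: E_n = IP - R / (n - delta)^2  (cm^-1)."""
    return ip - R / (n - delta) ** 2

def fit_series(n_values, energies, R=RYDBERG):
    """Least-squares fit of (IP, quantum defect delta) to term energies;
    the series limit IP is the n -> infinity convergence point."""
    popt, _ = curve_fit(lambda n, ip, d: rydberg_energy(n, ip, d, R),
                        np.asarray(n_values, float), np.asarray(energies, float),
                        p0=[float(max(energies)) + 100.0, 2.5])
    return popt   # [IP, delta]

# Synthetic example: a series converging to 45519.3 cm^-1 with delta = 2.7
ns = np.arange(20, 60)
e_obs = rydberg_energy(ns, 45519.3, 2.7)
print(fit_series(ns, e_obs))   # recovers approximately [45519.3, 2.7]
```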
Abstract:
Synthetic biology is a young field of applied research aiming to design and build artificial biological devices useful for human applications. How synthetic biology emerged in past years, and how the development of the Registry of Standard Biological Parts aimed to introduce one practical starting solution for applying the basics of engineering to molecular biology, is presented in Chapter 1 of the thesis. The same chapter recalls how biological parts can make up a genetic program, describes the molecular cloning technique useful for this purpose, and gives an overview of the mathematical modelling adopted to describe gene circuit behaviour. Although the design of gene circuits has become feasible, the increasing complexity of gene networks calls for a rational approach to the design of gene circuits. A bottom-up approach was proposed, suggesting that the behaviour of a complicated system can be predicted from the features of its parts. The option to use modular parts in large-scale networks will be facilitated by a detailed and shared characterization of their functional properties. Such a prediction requires well-characterized mathematical models of the parts and of how they behave when assembled together. In Chapter 2, the feasibility of the bottom-up approach in the design of a synthetic program in Escherichia coli bacterial cells is described. The rational design of gene networks is, however, far from being established. The synthetic biology approach can use the mathematical formalism to identify biological information not assessable with experimental measurements. In this context, Chapter 3 describes the design of a synthetic sensor for identifying molecules of interest inside eukaryotic cells. The Registry of Standard Parts collects standard and modular biological parts. To spread the use of BioBricks, the iGEM competition was started. The ICM Laboratory, where Francesca Ceroni completed her Ph.D., participated with teams of students, and Chapter 4 summarizes the projects developed.
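As one example of the kind of ODE modelling mentioned above, a generic Hill-repression transcription unit can be written in a few lines; the parameter names and values are illustrative and are not those used in the thesis:

```python
import numpy as np
from scipy.integrate import odeint

def repressed_gene(y, t, alpha=10.0, K=1.0, n=2.0, delta_m=0.5,
                   k_p=5.0, delta_p=0.1, repressor=2.0):
    """mRNA (m) and protein (p) dynamics of a unit repressed by a constant
    repressor concentration, with Hill-type transcription:
        dm/dt = alpha / (1 + (repressor/K)^n) - delta_m * m
        dp/dt = k_p * m - delta_p * p
    """
    m, p = y
    dm = alpha / (1.0 + (repressor / K) ** n) - delta_m * m
    dp = k_p * m - delta_p * p
    return [dm, dp]

t = np.linspace(0.0, 50.0, 500)
trajectory = odeint(repressed_gene, [0.0, 0.0], t)
print("protein level at t = 50:", trajectory[-1, 1])
```

Characterizing a part then amounts to estimating parameters such as alpha, K and n from measurements, so that the same model can be reused when the part is composed into a larger circuit.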
Abstract:
In this work from the field of few-nucleon physics, the newly developed Lorentz Integral Transform (LIT) method is applied to the study of nuclear photoabsorption and electron scattering off light nuclei. The LIT method makes it possible to perform exact calculations without an explicit determination of the final states in the continuum. The problem is reduced to the solution of a bound-state-like equation in which the final-state interaction is fully taken into account. The LIT equation is solved by means of an expansion in hyperspherical harmonic functions, whose convergence is accelerated by the use of an effective interaction within the hyperspherical formalism (EIHH). In this work, the first microscopic calculation of the total photoabsorption cross section below the pion production threshold for 6Li, 6He and 7Li is presented. The calculations are performed with central semirealistic NN interactions that partially simulate the tensor force, since the binding energies of the deuteron and of the three-body nuclei are correctly reproduced. The photoabsorption cross section of 6Li shows only one giant dipole resonance, whereas 6He exhibits two distinct peaks corresponding to the breakup of the halo and of the alpha core. The comparison with experimental data shows that the addition of a P-wave interaction improves the agreement considerably. For 7Li only one giant dipole resonance is found, in good agreement with the available experimental data. Concerning electron scattering, the calculation of the longitudinal and transverse response functions of 4He in the quasi-elastic region for intermediate values of the momentum transfer is presented. A non-relativistic model is used for the charge and current operators. The calculations are performed with semirealistic interactions, and a gauge-invariant current is obtained through the introduction of a meson-exchange current. The effect of the two-body current on the transverse response functions is investigated. Preliminary results are shown and compared with the available experimental data.
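For reference, the transform at the heart of the method is the standard LIT definition, with $\hat O$ the excitation operator, $R(\omega)$ the response function and $|\Psi_0\rangle$ the ground state:

$$
L(\sigma_R,\sigma_I) \;=\; \int d\omega\,\frac{R(\omega)}{(\omega-\sigma_R)^{2}+\sigma_I^{2}}
\;=\; \langle \tilde\Psi \,|\, \tilde\Psi \rangle,
\qquad
\big(\hat H - E_0 - \sigma_R - i\,\sigma_I\big)\,|\tilde\Psi\rangle \;=\; \hat O\,|\Psi_0\rangle .
$$

Because $\sigma_I \neq 0$ and the source term is localized, $|\tilde\Psi\rangle$ is normalizable, so the right-hand equation is the bound-state-like problem mentioned above; $R(\omega)$ is then recovered by inverting the transform.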
Abstract:
Preferences are present in many real-life situations, but it is often difficult to quantify them by giving a precise value. Sometimes preference values may be missing because of privacy reasons, or because they are expensive to obtain or to produce. In some other situations the user of an automated system may have only a vague idea of what he wants. In this thesis we considered the general formalism of soft constraints, where preferences play a crucial role, and we extended such a framework to handle both incomplete and imprecise preferences. In particular, we provided new theoretical frameworks to handle such kinds of preferences. By admitting missing or imprecise preferences, solving a soft constraint problem becomes a different task. In fact, the new goal is to find solutions which are the best ones independently of the precise values the preferences may have. With this in mind we defined two notions of optimality: the possibly optimal solutions and the necessarily optimal solutions, the latter being optimal no matter which precise values are assigned to the missing or imprecise preferences. We provided several algorithms, based on both systematic and local search approaches, to find such kinds of solutions. Moreover, we also studied the impact of our techniques on a specific class of problems (the stable marriage problems), where imprecision and incompleteness have a specific meaning and, up to now, have been tackled with different techniques. In the context of the classical stable marriage problem we developed a fair method to randomly generate stable marriages of a given problem instance. Furthermore, we adapted our techniques to solve stable marriage problems with ties and incomplete lists, which are known to be NP-hard, obtaining good results both in terms of the size of the returned marriage and in terms of the steps needed to find a solution.
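For context, the classical stable-marriage setting that the thesis builds on can be stated in a few lines of code. This is textbook man-proposing Gale-Shapley with complete, strictly ordered lists; the thesis's contributions concern fairly sampling among all stable matchings and handling ties and incomplete lists, which this sketch does not do:

```python
def gale_shapley(men_prefs, women_prefs):
    """Man-proposing Gale-Shapley: returns a stable matching {woman: man}
    for complete, strictly ordered preference lists."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                   # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}  # index of next woman to propose to
    engaged = {}                             # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers the new proposer
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                       # rejected; will propose again
    return engaged

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))   # {'x': 'b', 'y': 'a'}
```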
Abstract:
This work deals with Eduard Study (1862-1930), one of the German geometers around the turn of the century, who on the one hand left his mark on his time through his contacts with Klein, Hilbert, Engel, Lie, Gordan, Halphen, Zeuthen, Einstein, Hausdorff and Weyl, and who on the other hand was as famous as he was notorious for his biting and stylistically polished critiques. Since Study worked on a great variety of mathematical topics, we first introduce the areas of nineteenth-century geometry in which he worked (analytic and synthetic geometry in the sense of Monge, Poncelet, Plücker and Reye; invariant theory of the Clebsch-Gordan type; the enumerative geometry of Chasles and Halphen; the works of Lie and Grassmann; line geometry; as well as axiomatics and the foundational crisis). Central points of the biography that follows are his Habilitation under Klein on the Chasles conjecture; his dispute with Zeuthen about it, as one of the debates of the Mathematische Annalen (from which, historically, he did not emerge as the winner, although mathematically he should have, as we shall see from van der Waerden's solution of the problem); and his controversies, as an established professor in Bonn, with Engel about Lie, with Weyl about invariant theory, with numerous philosophical schools about the problem of space, with Pasch's axiomatics, with Hilbert's formalism, and with Brouwer's and Weyl's intuitionism.