376 results for Collinear factorization
Abstract:
The complex nature of the nucleon-nucleon interaction and the wide range of systems covered by the roughly 3000 known nuclides lead to a multitude of effects observed in nuclear structure. Among the most prominent is the occurrence of shell closures at so-called "magic numbers", which are explained by the nuclear shell model. Although the shell model has been in service for several decades, it is still constantly extended and improved. For this process of extension, fine adjustment and verification, it is important to have experimental data on nuclear properties, especially at crucial points such as the vicinity of shell closures. This is the motivation for the work performed in this thesis: the measurement and analysis of nuclear ground-state properties of the isotopic chain 100−130Cd by collinear laser spectroscopy.

The experiment was conducted at ISOLDE/CERN using the collinear laser spectroscopy apparatus COLLAPS. It is the continuation of a run on neutral atomic cadmium from A = 106 to A = 126 and extends the measured isotopes to even more exotic species. The required gain in sensitivity is achieved mainly by using a radiofrequency cooler and buncher for background reduction and by using the strong 5s 2S1/2 → 5p 2P3/2 transition in singly ionized Cd. The latter requires a continuous-wave laser system at a wavelength of 214.6 nm, which was developed during this thesis. Fourth-harmonic generation from an infrared titanium-sapphire laser is achieved by two successive cavity-enhanced second-harmonic generation stages, producing deep-UV laser light at powers up to about 100 mW.

The acquired data on the Z = 48 Cd isotopes, which have one proton pair less than the Z = 50 shell closure at tin, cover the isotopes from N = 52 up to N = 82 and therefore almost the complete region between the neutron shell closures N = 50 and N = 82. The isotope shifts and hyperfine structures of these isotopes were recorded, and the magnetic dipole moments, electric quadrupole moments, spins and changes in mean-square charge radii are extracted. Among other features, the data reveal an extremely linear behaviour of the quadrupole moments of the I = 11/2− isomeric states and a parabolic development of the differences in mean-square nuclear charge radii between ground and isomeric states. The development of the charge radii between the shell closures is smooth, exhibits a regular odd-even staggering, and can be described and interpreted within the model of Zamick and Talmi.
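As a quick consistency check on the frequency-quadrupling scheme just described (a sketch; only the 214.6 nm figure is taken from the abstract):

```latex
% Two cascaded second-harmonic generation (SHG) stages halve the wavelength twice,
% so the Ti:sapphire fundamental must sit at four times the target UV wavelength:
\lambda_{\mathrm{UV}} = \frac{\lambda_{\mathrm{IR}}}{4}
\quad\Longrightarrow\quad
\lambda_{\mathrm{IR}} = 4 \times 214.6\ \mathrm{nm} = 858.4\ \mathrm{nm}.
```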
Abstract:
This thesis deals with the development and improvement of linearly scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. This formalism exploits the fact that the Hamiltonian and density matrices are sparse due to localization, reducing the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane exposed to extreme pressure (about 100 GPa) and extreme temperature (2000 - 8000 K). In the simulation, methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations give no indication of diamond formation and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also addressed. This results in a new formula for symmetric positive definite matrices, which generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing zeros of functions. It is proven that the order of convergence is always at least quadratic, and that adaptive tuning of a parameter q leads to better results in all cases.
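As a concrete illustration of the kind of scheme the abstract alludes to, below is a minimal NumPy sketch of the classical Newton-Schulz-type fixed-point iteration for inverse p-th roots of symmetric positive definite matrices. The thesis's generalized formula and its adaptive parameter q are not reproduced here, and the initial-guess scaling is our own assumption.

```python
import numpy as np

def inv_pth_root(A, p, tol=1e-12, max_iter=100):
    """Approximate A**(-1/p) for symmetric positive definite A via the
    Newton-Schulz-type iteration X <- X (I + (I - A X^p) / p), which
    converges quadratically when the spectrum of A X0^p lies in (0, 2)."""
    n = A.shape[0]
    I = np.eye(n)
    # Scale the initial guess so the eigenvalues of A X0^p fall into (0, 1].
    X = I / np.linalg.norm(A, 2) ** (1.0 / p)
    for _ in range(max_iter):
        R = I - A @ np.linalg.matrix_power(X, p)   # residual I - A X^p
        if np.linalg.norm(R) < tol:
            break
        X = X @ (I + R / p)   # only matrix products: no diagonalization
    return X

# Usage: the residual ||A X^p - I|| should be near machine precision.
A = np.diag([1.0, 2.0, 5.0])
X = inv_pth_root(A, p=3)
print(np.linalg.norm(A @ np.linalg.matrix_power(X, 3) - np.eye(3)))
```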
Abstract:
This thesis deals with the forward and inverse theory of transient eddy current problems. Transient excitation currents induce electromagnetic fields, which generate so-called eddy currents in conductive objects. In the case of slowly varying fields, this interaction can be described by the eddy current equation, an approximation to Maxwell's equations. It is a linear partial differential equation of mixed parabolic-elliptic type with non-smooth coefficient functions. The forward problem consists of determining the electric field as a distributional solution of the equation, given the excitation and the coefficient functions describing the surroundings. Conversely, the fields can be measured with measurement coils. The goal of the inverse problem is to extract from these measurements information about conductive objects, i.e. about the coefficient function that describes them. In this thesis a variational solution theory is presented and the well-posedness of the equation is discussed. Building on this, the behaviour of the solution for vanishing conductivity is studied, and the linearizability of the equation without a conductive object in the direction of the appearance of a conductive object is shown. To regularize the equation, modifications are proposed which yield a fully parabolic or a fully elliptic problem, respectively; these are justified by proving convergence of the corresponding solutions. Finally, it is shown that, under the assumption of otherwise homogeneous background parameters, conductive objects can be uniquely localized from the measurements. For this purpose the linear sampling method and the factorization method are applied.
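For orientation, a common statement of the transient eddy current approximation referenced above is sketched below; the notation (σ for conductivity, μ for permeability, J_e for the excitation current) is ours and may differ from the thesis's conventions.

```latex
% Eddy current approximation to Maxwell's equations for the electric field E:
% parabolic where the conductivity satisfies sigma > 0, elliptic where sigma = 0.
\partial_t\bigl(\sigma E\bigr)
  + \operatorname{curl}\bigl(\mu^{-1}\operatorname{curl} E\bigr)
  = -\partial_t J_e
  \qquad \text{in } \Omega \times (0,T).
```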
Abstract:
The growing use of high-throughput analysis systems for studying the physiological and metabolic state of the body has shown that proper nutrition and good physical fitness are key factors for health. The increase in the average age of the population highlights the importance of strategies for countering age-related diseases. A healthy diet is the first means of prevention for many diseases, so understanding how food affects the human body is of fundamental importance. In this thesis we addressed the characterization of Dual-energy X-ray Absorptiometry (DXA) radiographic imaging systems. After establishing a suitable methodology for processing DXA data on a group of healthy non-obese subjects, PCA revealed several emergent properties from the interpretation of the principal components in terms of the body-composition variables returned by DXA. The first components can be associated with macroscopic indices of body description (such as BMI and WHR). These components are surprisingly stable as the subjects' age, sex and nationality vary. Metabolic analysis data, obtained through Magnetic Resonance Spectroscopy (MRS) on urine samples, are available for about one thousand elderly subjects (from five European countries) aged between 65 and 79 and free of serious diseases; body-composition data are also available for these subjects. The Non-negative Matrix Factorization (NMF) algorithm was used to express the MRS spectra as combinations of basis factors interpretable as individual metabolites. The factors found are stable, so subjects' metabolic spectra are composed of the same pattern of metabolites regardless of nationality. Through a single-blind analysis, high correlations were found between the body-composition variables and the subjects' metabolic state. This suggests the possibility of deriving subjects' body composition from their metabolic state.
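As an illustration of the NMF step described above, here is a minimal scikit-learn sketch; the matrix dimensions, number of components and initialization are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np
from sklearn.decomposition import NMF

# X: one row per subject, one column per MRS spectral bin (all non-negative).
rng = np.random.default_rng(0)
X = rng.random((1000, 512))  # placeholder for real urine MRS spectra

# Factorize X ~= W @ H with W, H >= 0: rows of H act as basis spectra
# (candidate single-metabolite patterns), W holds per-subject loadings.
model = NMF(n_components=10, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)   # (subjects x factors) loadings
H = model.components_        # (factors x bins) basis spectra
print(W.shape, H.shape)
```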
Abstract:
Many lossless compression methods are based on the ideas that the Israeli researchers Abraham Lempel and Jacob Ziv presented in 1977 in the article "A Universal Algorithm for Sequential Data Compression". This thesis describes the LZ77 factorization method introduced by Lempel and Ziv and presents the fundamental data structures for its implementation. Two algorithms that compute LZ77, CPS1 and CPS2, are also described. Finally, using the experimental data collected by Al-Haffedh et al. in "A Comparison of Index-Based Lempel-Ziv LZ77 Factorization Algorithms" [2012], the algorithms described are compared in terms of space and time.
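To fix ideas, a naive quadratic-time sketch of the LZ77 factorization follows; the index-based algorithms CPS1 and CPS2 discussed in the thesis compute the same factorization far more efficiently using suffix-array-based indexes.

```python
def lz77_factorize(s):
    """Naive LZ77 factorization: each factor is either the longest prefix of
    the remaining suffix that also starts at an earlier position (the match
    may overlap the current position), or a single fresh character."""
    factors, i, n = [], 0, len(s)
    while i < n:
        best_len, best_pos = 0, -1
        for j in range(i):                        # candidate earlier start
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_pos = l, j
        if best_len == 0:
            factors.append(s[i])                  # literal factor
            i += 1
        else:
            factors.append((best_pos, best_len))  # (source position, length)
            i += best_len
    return factors

print(lz77_factorize("abaababa"))  # ['a', 'b', (0, 1), (0, 3), (1, 2)]
```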
Abstract:
We calculate the set of O(\alpha_s) corrections to the double differential decay width d\Gamma_{77}/(ds_1 \, ds_2) for the process \bar{B} \to X_s \gamma \gamma originating from diagrams involving the electromagnetic dipole operator O_7. The kinematical variables s_1 and s_2 are defined as s_i=(p_b - q_i)^2/m_b^2, where p_b, q_1, q_2 are the momenta of the b quark and the two photons. While the (renormalized) virtual corrections are worked out exactly for a certain range of s_1 and s_2, we retain in the gluon bremsstrahlung process only the leading power w.r.t. the (normalized) hadronic mass s_3=(p_b-q_1-q_2)^2/m_b^2 in the underlying triple differential decay width d\Gamma_{77}/(ds_1 ds_2 ds_3). The double differential decay width, based on this approximation, is free of infrared and collinear singularities when combining virtual and bremsstrahlung corrections. The corresponding results are obtained analytically. When retaining all powers in s_3, the sum of virtual and bremsstrahlung corrections contains uncanceled 1/\epsilon singularities (due to collinear photon emission from the s-quark); absorbing these requires concepts that go beyond perturbation theory, such as parton fragmentation functions of a quark or a gluon into a photon, which is beyond the scope of our paper.
Abstract:
The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning.
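A minimal rate-model sketch of the mechanism proposed above: task-specific top-down signals scale the gain of, and recruit inhibition onto, a V1-like unit while the feedforward and lateral weights stay fixed. All parameter values are illustrative assumptions, not fitted values from the model.

```python
import numpy as np

def v1_response(ff_input, lateral_input, top_down_gain=1.0, top_down_inh=0.0):
    """Firing rate of a V1-like unit: fixed feedforward and lateral drive,
    modulated only by top-down gain and top-down recruited inhibition."""
    drive = ff_input + lateral_input - top_down_inh
    return top_down_gain * np.maximum(drive, 0.0)   # rectified-linear rate

# Same stimulus, two task contexts: line detection vs. brightness discrimination.
stim_ff, lateral = 1.0, 0.6   # lateral drive from collinear neighbours
print(v1_response(stim_ff, lateral))                                        # default task
print(v1_response(stim_ff, lateral, top_down_gain=1.5, top_down_inh=0.6))  # trained task
```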
Abstract:
To quickly localize defects, we want our attention to be focussed on relevant failing tests. We propose to improve defect localization by exploiting dependencies between tests, using a JUnit extension called JExample. In a case study, a monolithic white-box test suite for a complex algorithm is refactored into two traditional JUnit-style test suites and into JExample. Of the three refactorings, JExample reports five times fewer defect locations and slightly better performance (8-12%), while having similar maintenance characteristics. Compared to the original implementation, JExample greatly improves maintainability due to the improved factorization following accepted test quality guidelines. As such, JExample combines the benefits of test chains with the test quality aspects of JUnit-style testing.
Abstract:
We evaluate the most important tree-level contributions connected with the b \to u \bar{u} d \gamma transition to the inclusive radiative decay \bar{B} \to X_d \gamma using fragmentation functions. In this framework the singularities arising from collinear photon emission from the light quarks (u, \bar{u}, and d) can be absorbed into the (bare) quark-to-photon fragmentation function. We use as input the fragmentation function extracted by the ALEPH group from the two-jet cross section measured at the large electron positron (LEP) collider, where one of the jets is required to contain a photon. To get the quark-to-photon fragmentation function at the fragmentation scale \mu_F \sim m_b, we use the evolution equation, which we solve numerically. We then calculate the (integrated) photon energy spectrum for b \to u \bar{u} d \gamma related to the operators P_{1,2}^u. For comparison, we also give the corresponding results when using nonzero (constituent) masses for the light quarks.
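For reference, the leading-order evolution equation for the quark-to-photon fragmentation function has the schematic form below (our notation; the paper's conventions may differ):

```latex
% LO evolution of the quark-to-photon fragmentation function D_{q\to\gamma}:
% an inhomogeneous pointlike term plus the usual QCD convolution.
\frac{\partial D_{q\to\gamma}(x,\mu_F)}{\partial \ln \mu_F^2}
  = \frac{\alpha\, e_q^2}{2\pi}\, P_{\gamma q}(x)
  + \frac{\alpha_s}{2\pi} \int_x^1 \frac{dz}{z}\,
      P_{qq}(z)\, D_{q\to\gamma}\!\left(\frac{x}{z},\mu_F\right),
\qquad
P_{\gamma q}(x) = \frac{1+(1-x)^2}{x}.
```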
Abstract:
We study electroweak Sudakov effects in single W, Z and γ production at large transverse momentum using soft-collinear effective theory. We present a factorized form of the cross section near the partonic threshold with both QCD and electroweak effects included and compute the electroweak corrections arising at different scales. We analyze their size relative to the QCD corrections as well as the impact of strong-electroweak mixing terms. Numerical results for the vector-boson cross sections at the Large Hadron Collider are presented.
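Schematically, a partonic-threshold factorization of this kind reads as follows; this is a generic sketch in our notation, and the paper's precise definitions of the hard, beam, jet and soft functions may differ:

```latex
% Schematic factorization near the partonic threshold for V + jet production:
% a hard function times beam, jet and soft functions, convolved in the
% threshold variable; large logarithms are resummed by evolving each
% function from its natural scale.
\frac{d\sigma}{dp_T} \;\sim\; H(p_T,\mu)\,
  \bigl[\, B_a \otimes B_b \otimes J \otimes S \,\bigr](\mu).
```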
Abstract:
We obtain the next-to-next-to-leading order corrections to transverse-momentum spectra of W, Z and Higgs bosons near the partonic threshold. In the threshold limit, the electroweak boson recoils against a low-mass jet and all radiation is either soft, or collinear to the jet or the beam directions. We extract the virtual corrections from known results for the relevant two-loop four-point amplitudes and combine them with the soft and collinear two-loop functions as defined in Soft-Collinear Effective Theory. We have implemented these results in the public code PeTeR and present numerical results for the threshold-resummed cross section of W and Z bosons at next-to-next-to-next-to-leading logarithmic accuracy, matched to next-to-leading fixed-order perturbation theory. The two-loop corrections lead to a moderate increase in the cross section and reduce the scale uncertainty by about a factor of two. The corrections are significantly larger for Higgs production.
Abstract:
In the recently proposed framework of hard pion chiral perturbation theory, the leading chiral logarithms are predicted to factorize with respect to the energy dependence in the chiral limit. We have scrutinized this assumption in the case of the vector and scalar pion form factors F_{V,S}(s) by means of standard chiral perturbation theory and dispersion relations. We show that this factorization property is valid for the elastic contribution to the dispersion integrals for F_{V,S}(s) but is violated starting at three loops, when the inelastic four-pion contributions arise.
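For concreteness, the dispersive setup referenced above can be summarized as follows (a standard once-subtracted dispersion relation in our notation; the paper's subtraction scheme may differ):

```latex
% Once-subtracted dispersion relation for a pion form factor F(s) with F(0) = 1.
% In the elastic region the spectral function is saturated by the two-pion
% intermediate state (Watson's theorem): Im F(s) = e^{-i\delta(s)} \sin\delta(s)\, F(s).
F(s) = 1 + \frac{s}{\pi} \int_{4M_\pi^2}^{\infty}
  \frac{ds'}{s'}\, \frac{\operatorname{Im} F(s')}{s' - s - i\epsilon}.
```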
Abstract:
In e^+e^- event shape studies at LEP, two different measurements were sometimes performed: a "calorimetric" measurement using both charged and neutral particles and a "track-based" measurement using just charged particles. Whereas calorimetric measurements are infrared and collinear safe, and therefore calculable in perturbative QCD, track-based measurements necessarily depend on nonperturbative hadronization effects. On the other hand, track-based measurements typically have smaller experimental uncertainties. In this paper, we present the first calculation of the event shape "track thrust" and compare it to measurements performed at ALEPH and DELPHI. This calculation is made possible through the recently developed formalism of track functions, which are nonperturbative objects describing how energetic partons fragment into charged hadrons. By incorporating track functions into soft-collinear effective theory, we calculate the distribution for track thrust with next-to-leading logarithmic resummation. Due to a partial cancellation between nonperturbative parameters, the distributions for calorimeter thrust and track thrust are remarkably similar, a feature also seen in LEP data.
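For reference, thrust is defined as below, with track thrust obtained by restricting the sums to charged particles (our schematic rendering, not the paper's exact track-function-based definition):

```latex
% Thrust: maximize the summed projections of the momenta onto a unit axis n.
% Track thrust uses the same expression with i running over charged tracks only.
T = \max_{\hat{n}} \frac{\sum_i |\vec{p}_i \cdot \hat{n}|}{\sum_i |\vec{p}_i|}.
```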
Abstract:
We use a fracture mechanics model to study subcritical propagation and coalescence of single and collinear oil-filled cracks during conversion of kerogen to oil. The subcritical propagation distance, propagation duration, crack coalescence and excess oil pressure in the crack are determined using the fracture mechanics model together with the kinetics of kerogen-oil transformation. The propagation duration for the single crack is governed by the transformation kinetics whereas the propagation duration for the multiple collinear cracks may vary by two orders of magnitude depending on initial crack spacing. A large amount of kerogen (>90%) remains unconverted when the collinear cracks coalesce and the new, larger cracks resulting from coalescence will continue to propagate with continued kerogen-oil conversion. The excess oil pressure on the crack surfaces drops precipitously when the collinear cracks are about to coalesce, and crack propagation duration and oil pressure on the crack surfaces are strongly dependent on temperature. Citation: Jin, Z.-H., S. E. Johnson, and Z. Q. Fan (2010), Subcritical propagation and coalescence of oil-filled cracks: Getting the oil out of low-permeability source rocks, Geophys. Res. Lett., 37, L01305, doi:10.1029/2009GL041576.
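As background to the model above, the mode-I stress intensity factor for a single internally pressurized crack and a standard power-law subcritical growth rule take the schematic form below; these are generic fracture mechanics relations in our notation, and the paper's specific coupling to kerogen-oil transformation kinetics is not reproduced:

```latex
% Mode-I stress intensity factor for a crack of half-length a loaded by a
% uniform internal (excess oil) pressure p, and a power-law subcritical
% crack growth rule with empirical constants A and n:
K_I = p \sqrt{\pi a},
\qquad
\frac{da}{dt} = A \left( \frac{K_I}{K_{IC}} \right)^{n},
\qquad K_I < K_{IC}.
```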
Abstract:
We calculate the O(\alpha_s) corrections to the double differential decay width d\Gamma_{77}/(ds_1 ds_2) for the process \bar{B} \to X_s \gamma\gamma, originating from diagrams involving the electromagnetic dipole operator O_7. The kinematical variables s_1 and s_2 are defined as s_i=(p_b-q_i)^2/m_b^2, where p_b, q_1, q_2 are the momenta of the b quark and the two photons. We introduce a nonzero mass m_s for the strange quark to regulate configurations where the gluon or one of the photons becomes collinear with the strange quark, and we retain terms which are logarithmic in m_s while discarding terms which vanish in the limit m_s \to 0. When combining virtual and bremsstrahlung corrections, the infrared and collinear singularities induced by soft and/or collinear gluons drop out. By our cuts the photons do not become soft, but one of them can become collinear with the strange quark. This implies that a single logarithm of m_s survives in the final result. In principle, the configurations with collinear photon emission could be treated using fragmentation functions. In a related work we find that similar results can be obtained when simply interpreting the m_s appearing in the final result as a constituent mass. We do so in the present paper and vary m_s between 400 and 600 MeV in the numerics. This work extends a previous paper by us, where only the leading power terms with respect to the (normalized) hadronic mass s_3=(p_b-q_1-q_2)^2/m_b^2 were taken into account in the underlying triple differential decay width d\Gamma_{77}/(ds_1 ds_2 ds_3).