332 results for incoherent correlator
Abstract:
The photons scattered by the Compton effect can be used to characterize the physical properties of a given sample, due to the influence that the electron density exerts on the number of scattered photons. However, scattering measurements involve experimental and physical factors that must be carefully analyzed to predict the uncertainty in the detection of Compton photons. This paper presents a method for the optimization of the geometrical parameters of an experimental arrangement for Compton scattering analysis, based on their relation to the energy and incident flux of the X-ray photons. In addition, the tool enables statistical analysis of the displayed information and includes a coefficient of variation (CV) measurement for a comparative evaluation of the physical parameters of the model established for the simulation. (C) 2012 Elsevier B.V. All rights reserved.
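As a rough illustration of the CV comparison mentioned above, the following sketch computes the coefficient of variation for simulated photon counts; the Poisson toy data and the count level are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy example: repeated Compton photon counts for one detector configuration.
# The mean count of 5000 is an illustrative assumption, not a value from the paper.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5000, size=200)

cv = counts.std(ddof=1) / counts.mean()  # coefficient of variation = sigma / mu
print(f"CV = {cv:.4f}")  # ~1/sqrt(5000) ≈ 0.014 for pure Poisson counting statistics
```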
Abstract:
In this paper, we present a method to order low temperature (LT) self-assembled ferromagnetic In1-xMnxAs quantum dots (QDs) grown by molecular beam epitaxy (MBE). The ordered In1-xMnxAs QDs were grown on top of a non-magnetic In0.4Ga0.6As/GaAs(100) QD multi-layered structure. The modulation of the chemical potential due to the stacking provides a nucleation center for the LT In1-xMnxAs QDs. Under particular surface morphology and growth conditions, the In1-xMnxAs QDs align along chain-like lines. This work also reports the characterization of QDs grown on plain GaAs(100) substrates, as well as of the ordered structures, as a function of Mn content and growth temperature. The substitutional Mn incorporation in the InAs lattice and the conditions for obtaining coherent and incoherent structures are discussed on the basis of a comparison between Raman spectroscopy and x-ray analysis. Ferromagnetic behavior was observed for all structures at 2 K. We found that the magnetic moment axis changes from [110] for In1-xMnxAs over GaAs to [1-10] for the ordered In1-xMnxAs grown over the GaAs template. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4745904]
Abstract:
In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Lambda = 0 is exactly equal to 1. Using an alternative, volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.
Abstract:
In this thesis the use of widefield imaging techniques and VLBI observations with a limited number of antennas is explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple, smaller datasets if today's imaging algorithms are to be used to image them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques. Errors have been found at the level of one part in 1.1 million; these are unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13 gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI has been explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam for both homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibration and imaging of wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator. Using existing software, it was possible to characterise the network speed, particularly for MPI applications. The capabilities of the DiFX software correlator, running on this cluster, were measured for a range of observation parameters and were shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array has been explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and science which could be achieved. Results from a 22 GHz calibrator survey are also presented: 21 out of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
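The phase-centre shifting step described above amounts, in its textbook form, to multiplying each visibility by a direction-dependent phase factor. The sketch below assumes the usual measurement-equation sign convention and purely illustrative baseline and offset values; it does not reproduce the thesis' high-accuracy procedure.

```python
import numpy as np

def shift_phase_centre(vis, u, v, w, dl, dm):
    """Rotate visibilities to a new phase centre offset by direction
    cosines (dl, dm) from the current one.  Sign convention follows the
    common e^{-2*pi*i*(u*l + v*m + w*(n-1))} measurement equation; the
    thesis' own procedure (and its accuracy refinements) is not reproduced.
    """
    dn = np.sqrt(1.0 - dl**2 - dm**2) - 1.0
    phase = -2.0 * np.pi * (u * dl + v * dm + w * dn)
    return vis * np.exp(1j * phase)

# Illustrative use on random data (not real VLBI visibilities):
rng = np.random.default_rng(1)
n = 1000
u, v, w = (rng.uniform(-1e6, 1e6, n) for _ in range(3))   # baselines in wavelengths
vis = rng.normal(size=n) + 1j * rng.normal(size=n)
shifted = shift_phase_centre(vis, u, v, w, dl=1e-4, dm=-5e-5)
```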
Abstract:
Array seismology is a useful tool to perform a detailed investigation of the Earth's interior. By exploiting the coherence properties of the wavefield, seismic arrays are able to extract directivity information and to increase the amplitude ratio of coherent signal to incoherent noise. The Double Beam Method (DBM), developed by Krüger et al. (1993, 1996), is one possible approach for a refined seismic investigation of the crust and mantle using seismic arrays. The DBM is based on a combination of source and receiver arrays, leading to a further improvement of the signal-to-noise ratio and a reduced error in the location of coherent phases. Previous DBM work has addressed mantle and core/mantle resolution (Krüger et al., 1993; Scherbaum et al., 1997; Krüger et al., 2001). An implementation of the DBM is presented at large scale in 2D (Italian data set for the Mw = 9.3 Sumatra earthquake) and at crustal scale in 3D, as proposed by Rietbrock & Scherbaum (1999), applying a revised version of the Source Scanning Algorithm (SSA; Kao & Shan, 2004). In the 2D application, the propagation of the rupture front in time has been computed. In the 3D application, the study volume (20 x 20 x 33 km^3), the data set, and the source-receiver configurations are those of the KTB-1994 seismic experiment (Jost et al., 1998). We used 60 short-period seismic stations (200-Hz sampling rate, 1-Hz sensors) arranged in 9 small arrays deployed in two concentric rings of about 1 km (A-arrays) and 5 km (B-array) radius. The coherence values of the scattering points have been computed in the crustal volume, for a finite time window along all array stations, given the hypothesized origin time and source location. The resulting images can be seen as a (relative) joint log-likelihood of any point in the subsurface that has contributed to the full set of observed seismograms.
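As a rough illustration of the array processing underlying beam methods such as the DBM, the sketch below implements plain plane-wave delay-and-sum beamforming for a single small array. The station coordinates, slowness, and sampling values are assumptions, and the combination of source and receiver beams that defines the double-beam scheme is omitted.

```python
import numpy as np

def delay_and_sum(traces, coords, slowness, dt):
    """Plane-wave delay-and-sum beam for one small array.

    traces   : (n_stations, n_samples) seismograms
    coords   : (n_stations, 2) station offsets in km relative to the array centre
    slowness : (sx, sy) horizontal slowness vector in s/km
    dt       : sampling interval in s
    Returns the stacked beam trace.  A double-beam scheme would combine such
    beams over both source and receiver arrays; that step is omitted here.
    """
    n_sta, n_samp = traces.shape
    beam = np.zeros(n_samp)
    for tr, (x, y) in zip(traces, coords):
        delay = slowness[0] * x + slowness[1] * y   # plane-wave delay in s
        shift = int(round(delay / dt))              # nearest-sample shift
        beam += np.roll(tr, -shift)                 # advance the trace (wrap-around ignored)
    return beam / n_sta
```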
Abstract:
In this work, time-resolved photoemission electron microscopy (TR-PEEM) was developed for the in-situ investigation of ultrafast dynamic processes in thin, microstructured magnetic films during a rapidly changing external magnetic field. The experiment is based on XMCD (X-ray magnetic circular dichroism) contrast, using the circularly polarized light of synchrotron radiation sources (the electron storage rings BESSY II (Berlin) and ESRF (Grenoble)) for dynamic imaging of magnetic domains during ultrafast magnetization processes. The method developed here successfully combines high spatial and temporal resolution (below 55 nm and 15 ps, respectively). Using this method it could be shown that the magnetization dynamics in large Permalloy microstructures (40 µm x 80 µm and 20 µm x 80 µm, 40 nm thick) is accompanied by incoherent rotation of the magnetization and the formation of time-dependent transient domains which block the magnetization reversal process. New, distinct differences were found between the magnetic response of a given thin-film microstructure to a pulsed external magnetic field and the quasi-static case. This concerns the appearance of transient spatio-temporal domain patterns and particular fine structures within these patterns that do not occur in the quasi-static case. Examples of such domain patterns in Permalloy microstructures of various shapes and sizes were investigated and discussed. In particular, the rapid broadening of domain walls as a consequence of the precessional magnetization process, the formation of transient domain walls and transient vortices, and the appearance of a striped domain phase due to the incoherent rotation of the magnetization were discussed. Furthermore, the method was used to investigate standing spin waves in ultrathin (16 µm x 32 µm, 10 nm thick) Permalloy microstructures. An induced magnetic moment was found in a rectangular microstructure oriented perpendicular to the periodic excitation field. This phenomenon was interpreted as a "self-trapping" spin-wave mode. It was shown that a forced normal mode stabilizes itself by displacing a 180° Néel wall. When the system is excited just below its resonance frequency, the magnetization distribution adapts such that as much as possible of the energy injected by the excitation field remains in the system. Above a certain threshold, the spin-wave mode near the resonance frequency exerts an effective force perpendicular to the 180° Néel wall; this force arises in the center of the microstructure and is compensated by the stray-field-induced force. As an additional capability, the stray fields of magnetic microstructures were determined quantitatively during the dynamic processes and the precise temporal profile of the stray field was investigated. It was shown that the time-resolved photoemission electron microscope can be used as an ultrafast, surface-sensitive magnetometer.
Abstract:
This thesis proposes design methods and test tools for optical systems intended for use in an industrial environment, where not only precision and reliability but also ease of use is important. The approach has been conceived to be as general as possible, although in the present work the design of a portable device for automatic identification applications has been studied, because this doctorate was funded by Datalogic Scanning Group s.r.l., a world-class producer of barcode readers. The main functional components of the complete device are the electro-optical imaging, illumination, and pattern generator systems. For the electro-optical imaging system, a characterization tool and an analysis tool have been developed to check whether the desired performance of the system has been achieved. Moreover, two design tools for optimizing the imaging system have been implemented. The first optimizes just the core of the system, the optical part, improving its performance while ignoring all other contributions, and generates a good starting point for the optimization of the whole complex system. The second tool optimizes the system taking into account its behavior with a model as close as possible to reality, including optics, electronics, and detection. For the illumination and pattern generator systems, two tools have been implemented. The first allows the design of free-form lenses, described by an arbitrary analytical function and excited by an incoherent source, and is able to provide custom illumination conditions for all kinds of applications. The second tool is a new method to design Diffractive Optical Elements excited by a coherent source for large pattern angles using the Iterative Fourier Transform Algorithm. Validation of the design tools has been obtained, whenever possible, by comparing the performance of the designed systems with that of fabricated prototypes. In other cases, simulations have been used.
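To illustrate the kind of iteration the Iterative Fourier Transform Algorithm performs, here is a minimal Gerchberg-Saxton-style sketch for a phase-only element. The grid size, the target pattern, and the uniform illumination are assumptions, and the thesis' extensions for large pattern angles are not reproduced.

```python
import numpy as np

def ifta_phase_only(target_amplitude, illumination, n_iter=100):
    """Gerchberg-Saxton-style Iterative Fourier Transform Algorithm sketch
    for a phase-only diffractive element under coherent illumination."""
    phase = 2 * np.pi * np.random.default_rng(0).random(illumination.shape)
    field = illumination * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amplitude * np.exp(1j * np.angle(far))   # impose target amplitude
        field = np.fft.ifft2(far)
        field = illumination * np.exp(1j * np.angle(field))   # impose illumination amplitude
    return np.angle(field)                                    # DOE phase profile

# Illustrative use: uniform illumination and a simple square target (assumed, not from the thesis).
N = 256
illum = np.ones((N, N))
target = np.zeros((N, N))
target[96:160, 96:160] = 1.0
doe_phase = ifta_phase_only(target, illum, n_iter=50)
```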
Abstract:
Ultra-relativistic heavy ions generate strong electromagnetic fields which offer the possibility to study γ-γ and γ-nucleus processes at the LHC in so-called ultra-peripheral collisions (UPC). The photoproduction of J/ψ vector mesons in UPC is sensitive to the gluon distribution of the interacting nuclei. In this thesis the study of coherent and incoherent J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV is described. The J/ψ has been measured via its leptonic decay in the rapidity range -0.9 < y < 0.9. The cross sections for coherent and incoherent J/ψ production are given. The results are compared to theoretical models of J/ψ production, and the coherent cross section is found to be in good agreement with those models that include nuclear gluon shadowing consistent with the EPS09 parametrization. In addition, the cross section for the process γγ → e+e− has been measured and found to be in agreement with the STARLIGHT Monte Carlo predictions. The analysis has been published by the ALICE Collaboration in the European Physical Journal C, with one of its main plots depicted on the front cover of the November 2013 issue.
Abstract:
This dissertation aims to deepen the understanding of exciton transport in organic semiconductors, as used in light-emitting diodes or solar cells. Using computer simulations, the transport of excitons in amorphous and crystalline organic materials was described, starting at the microscopic level, where quantum-mechanical processes take place, up to the macroscopic level, at which physically measurable quantities such as the diffusion coefficient can be extracted. The model is based on incoherent electronic energy transfer. In this framework, exciton transport is treated as a hopping process, which was simulated with kinetic Monte Carlo methods. The required quantum-mechanical transfer rates between the molecules were computed from the molecular structure of the solid phases. The transfer rates can be factorized into an electronic coupling element and the Franck-Condon-weighted density of states. The focus of this work was, on the one hand, to evaluate the methods suitable for computing the transfer rates and, on the other hand, to simulate the hopping transport and provide an atomistic interpretation of the macroscopic transport properties of the excitons. Of the three organic systems investigated, aluminium tris(8-hydroxyquinoline) served for the comprehensive validation of the approach. It was shown that strongly simplified models such as Marcus theory often reproduce the transfer rates, and thus the transport behavior of the excitons, qualitatively correctly. The usually much larger diffusion constants of singlet compared to triplet excitons originate from the longer range of the singlet coupling elements, which produces a more highly branched network. The time-dependent diffusion coefficient shows subdiffusive behavior at short observation times. For singlet excitons this behavior usually crosses over to normal diffusion within the exciton lifetime, whereas triplet excitons reach the normal regime considerably more slowly. The more strongly anomalous behavior of the triplet excitons is attributed to an inhomogeneous distribution of the transfer rates. When comparing with experimentally determined diffusion constants, this anomalous behavior must be taken into account. Overall, simulated and experimental diffusion constants for the test system agreed well. The modeling procedure should therefore be suitable for characterizing exciton transport in new organic semiconductor materials.
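A minimal sketch of the two ingredients named above, Marcus-type transfer rates and a kinetic Monte Carlo hopping walk, is given below. The constants, units, and the site network are illustrative assumptions rather than the dissertation's actual workflow.

```python
import numpy as np

def marcus_rate(J, dE, lam, T):
    """Marcus transfer rate (in 1/s, energies in eV) for electronic coupling J,
    site-energy difference dE, and reorganization energy lam at temperature T."""
    kB = 8.617e-5                       # Boltzmann constant in eV/K
    hbar = 6.582e-16                    # reduced Planck constant in eV*s
    pref = 2 * np.pi / hbar * J**2 / np.sqrt(4 * np.pi * lam * kB * T)
    return pref * np.exp(-(dE + lam) ** 2 / (4 * lam * kB * T))

def kmc_hop(rates_from, n_steps, rng):
    """Minimal kinetic Monte Carlo walk on a site network.
    rates_from[i] -> (neighbour indices, hop rates out of site i)."""
    site, t = 0, 0.0
    for _ in range(n_steps):
        nbrs, rates = rates_from[site]
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # waiting time before the next hop
        site = rng.choice(nbrs, p=rates / total)   # choose the destination site
    return site, t
```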
Abstract:
One of the fundamental interactions in the Standard Model of particle physics is the strong force, which can be formulated as a non-abelian gauge theory called Quantum Chromodynamics (QCD). In the low-energy regime, where the QCD coupling becomes strong and quarks and gluons are confined to hadrons, a perturbative expansion in the coupling constant is not possible. However, the introduction of a four-dimensional Euclidean space-time lattice allows for an ab initio treatment of QCD and provides a powerful tool to study the low-energy dynamics of hadrons. Some hadronic matrix elements of interest receive contributions from diagrams including quark-disconnected loops, i.e. disconnected quark lines from one lattice point back to the same point. The calculation of such quark loops is computationally very demanding, because it requires knowledge of the all-to-all propagator. In this thesis we use stochastic sources and a hopping parameter expansion to estimate such propagators. We apply this technique to study two problems which rely crucially on the calculation of quark-disconnected diagrams, namely the scalar form factor of the pion and the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon. The scalar form factor of the pion describes the coupling of a charged pion to a scalar particle. We calculate the connected and the disconnected contribution to the scalar form factor for three different momentum transfers. The scalar radius of the pion is extracted from the momentum dependence of the form factor. The use of several different pion masses and lattice spacings allows for an extrapolation to the physical point. The chiral extrapolation is done using chiral perturbation theory ($\chi$PT). We find that our pion mass dependence of the scalar radius is consistent with $\chi$PT at next-to-leading order. Additionally, we are able to extract the low energy constant $\ell_4$ from the extrapolation, and our result is in agreement with results from other lattice determinations. Furthermore, our result for the scalar pion radius at the physical point is consistent with a value that was extracted from $\pi\pi$-scattering data. The hadronic vacuum polarization (HVP) is the leading-order hadronic contribution to the anomalous magnetic moment $a_\mu$ of the muon. The HVP can be estimated from the correlation of two vector currents in the time-momentum representation. We explicitly calculate the corresponding disconnected contribution to the vector correlator. We find that the disconnected contribution is consistent with zero within its statistical errors. This result can be converted into an upper limit for the maximum contribution of the disconnected diagram to $a_\mu$ by using the expected time-dependence of the correlator and comparing it to the corresponding connected contribution. We find the disconnected contribution to be smaller than $\approx 5\%$ of the connected one. This value can be used as an estimate for a systematic error that arises from neglecting the disconnected contribution.
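The disconnected loops mentioned above require traces of the inverse Dirac operator. The sketch below shows the generic stochastic-source estimator with Z2 noise vectors, using a small dense matrix as a stand-in for the actual lattice Dirac operator; the hopping parameter expansion used in the thesis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the lattice Dirac operator: a small, well-conditioned random matrix.
n = 200
D = np.eye(n) + 0.05 * rng.normal(size=(n, n))

# Stochastic trace estimate: Tr(D^{-1}) ≈ (1/N) * sum_i eta_i^T D^{-1} eta_i
# with Z2 noise vectors eta (components +1 or -1).
n_src = 50
est = 0.0
for _ in range(n_src):
    eta = rng.choice([-1.0, 1.0], size=n)
    x = np.linalg.solve(D, eta)          # solve D x = eta, i.e. x = D^{-1} eta
    est += eta @ x
est /= n_src

exact = np.trace(np.linalg.inv(D))       # feasible only for this toy-sized matrix
print(f"stochastic: {est:.2f}   exact: {exact:.2f}")
```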
Abstract:
We study a homogeneously driven granular fluid of hard spheres at intermediate volume fractions and focus on time-delayed correlation functions in the stationary state. Inelastic collisions are modeled by incomplete normal restitution, allowing for efficient simulations with an event-driven algorithm. The incoherent scattering function Fincoh(q,t) is seen to follow time-density superposition with a relaxation time that increases significantly as the volume fraction increases. The statistics of particle displacements is approximately Gaussian. For the coherent scattering function S(q,ω), we compare our results to the predictions of generalized fluctuating hydrodynamics, which takes into account that temperature fluctuations decay either diffusively or with a finite relaxation rate, depending on wave number and inelasticity. For sufficiently small wave number q we observe sound waves in the coherent scattering function S(q,ω) and the longitudinal current correlation function Cl(q,ω). We determine the speed of sound and the transport coefficients and compare them to the results of kinetic theory.
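For reference, the incoherent (self) intermediate scattering function can be computed directly from particle trajectories. The sketch below uses a single time origin and a single wave vector, with random-walk trajectories standing in for the event-driven simulation data; a production analysis would average over time origins and wave-vector directions.

```python
import numpy as np

def incoherent_scattering(positions, q):
    """Self (incoherent) intermediate scattering function F_incoh(q, t).

    positions : (n_frames, n_particles, 3) unwrapped coordinates
    q         : (3,) wave vector
    Returns F(t) for lag times t = 0 .. n_frames-1 with the origin fixed at frame 0.
    """
    disp = positions - positions[0]              # displacement from the initial frame
    phase = disp @ np.asarray(q)                 # q . dr for every frame and particle
    return np.exp(1j * phase).mean(axis=1).real  # average over particles

# Illustrative use on random-walk trajectories (not actual granular-fluid data):
rng = np.random.default_rng(2)
traj = np.cumsum(rng.normal(scale=0.05, size=(500, 256, 3)), axis=0)
F = incoherent_scattering(traj, q=[2 * np.pi, 0.0, 0.0])
```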
Abstract:
The fracture properties of high-strength spray-formed Al alloys were investigated, with consideration of the effects of elemental additions such as zinc, manganese, and chromium and the influence of the addition of SiC particulate. Fracture resistance values between 13.6 and 25.6 MPa(m)^1/2 were obtained for the monolithic alloys in the T6 and T7 conditions, respectively. The alloys with SiC particulate compared well and achieved fracture resistance values between 18.7 and 25.6 MPa(m)^1/2. The spray-formed materials exhibited a loss in fracture resistance (KI) compared to ingot metallurgy 7075 alloys but had an improved performance compared to high-solute powder metallurgy alloys of similar composition. Characterization of the fracture surfaces indicated predominantly intergranular decohesion, possibly facilitated by the presence of incoherent particles at the grain boundary regions and by the large strength differential between the matrix and precipitate zone. It is believed that at the slip band-grain boundary intersection, particularly in the presence of large dispersoids and/or inclusions, microvoid nucleation would be significantly enhanced. Differences in fracture surfaces between the alloys in the T6 and T7 conditions were observed and are attributed to inhomogeneous slip distribution, which results in strain localization at grain boundaries. The best overall combination of fracture resistance properties was obtained for alloys with minimum amounts of chromium and manganese additions.
Abstract:
Magnetic resonance spectroscopy enables insight into the chemical composition of spinal cord tissue. However, spinal cord magnetic resonance spectroscopy has rarely been applied in clinical work due to technical challenges, including strong susceptibility changes in the region and the small cord diameter, which distort the lineshape and limit the attainable signal-to-noise ratio. Hence, extensive signal averaging is required, which increases the likelihood of static magnetic field changes caused by subject motion (respiration, swallowing), cord motion, and scanner-induced frequency drift. To avoid incoherent signal averaging, it would be ideal to perform frequency alignment of individual free induction decays before averaging. Unfortunately, this is not possible due to the low signal-to-noise ratio of the metabolite peaks. In this article, frequency alignment of individual free induction decays is demonstrated to improve spectral quality by using the high signal-to-noise ratio water peak from non-water-suppressed proton magnetic resonance spectroscopy via the metabolite cycling technique. Electrocardiography (ECG)-triggered point resolved spectroscopy (PRESS) localization was used for data acquisition with metabolite cycling or water suppression for comparison. A significant improvement in the signal-to-noise ratio and a decrease of the Cramér-Rao lower bounds of all metabolites are attained by using metabolite cycling together with frequency alignment, as compared to water-suppressed spectra, in 13 healthy volunteers.
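A minimal sketch of the frequency-alignment idea, assuming the water peak dominates each spectrum: estimate the water frequency of every free induction decay and demodulate it before averaging. Peak interpolation, phase alignment, and the actual metabolite-cycling reconstruction are omitted.

```python
import numpy as np

def align_fids(fids, dt, f_water_nominal=0.0):
    """Frequency-align free induction decays using the dominant (water) peak.

    fids : (n_averages, n_points) complex FIDs acquired without water suppression
    dt   : dwell time in s
    The water offset of each average is taken from the maximum of its spectrum
    and removed by complex demodulation before summing.
    """
    n_avg, n_pts = fids.shape
    freqs = np.fft.fftfreq(n_pts, d=dt)
    t = np.arange(n_pts) * dt
    aligned = np.zeros_like(fids)
    for i, fid in enumerate(fids):
        spec = np.fft.fft(fid)
        f_water = freqs[np.argmax(np.abs(spec))]   # estimated water peak frequency
        aligned[i] = fid * np.exp(-2j * np.pi * (f_water - f_water_nominal) * t)
    return aligned.mean(axis=0)
```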
Abstract:
Measuring shallow seismic sources provides a way to reveal processes that cannot be directly observed, but the correct interpretation and value of these signals depend on the ability to distinguish source from propagation effects. Furthermore, seismic signals produced by a resonating source can look almost identical to those produced by impulsive sources but modified along the path. Distinguishing these two phenomena can be accomplished by examining the wavefield with small-aperture arrays or by recording seismicity near the source when possible. We examine source and path effects in two different environments: Bering Glacier, Alaska, and Villarrica Volcano, Chile. Using three 3-element seismic arrays near the terminus of the Bering Glacier, we have identified and located both terminus calving and iceberg breakup events. We show that automated array analysis provided a robust way to locate icequake events using P waves. This analysis also showed that arrivals within the long-period codas were incoherent across the small-aperture arrays, demonstrating that these codas, previously attributed to crack resonance, were in fact a result of a complicated path rather than a source effect. At Villarrica Volcano, seismometers deployed from near the vent to ~10 km away revealed that a several-cycle long-period source signal recorded at the vent appeared elongated in the far field. We used data collected from the stations nearest the vent to invert for the repetitive seismic source and found it corresponded to a shallow force within the lava lake oriented N75°E and dipping 7° from horizontal. We also used this repetitive signal to search the data for additional seismic and infrasonic properties, including seismic-acoustic delay times, volcano acoustic-seismic ratios and energies, event frequency, and real-time seismic amplitude measurements. These calculations revealed lava lake level and activity fluctuations consistent with lava lake level changes inferred from the persistent infrasonic tremor.
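One of the quantities listed above, the seismic-acoustic delay time, can be estimated from co-located records by cross-correlation. The sketch below assumes equal-length, synchronized traces and omits the envelope processing and windowing typically used in practice.

```python
import numpy as np

def seismic_acoustic_delay(seismic, acoustic, dt):
    """Delay (in s) between co-located seismic and infrasound records,
    estimated from the peak of their normalized cross-correlation."""
    s = (seismic - seismic.mean()) / seismic.std()
    a = (acoustic - acoustic.mean()) / acoustic.std()
    xcorr = np.correlate(a, s, mode="full")
    lag = np.argmax(xcorr) - (len(s) - 1)   # samples by which the acoustic signal lags the seismic one
    return lag * dt
```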
Abstract:
The bulk viscosity of thermalized QCD matter at temperatures above a few hundred MeV could be significantly influenced by charm quarks, because their contribution arises four perturbative orders before purely gluonic effects. In an attempt to clarify the challenges of a lattice study, we determine the relevant imaginary-time correlator (of massive scalar densities) up to NLO in perturbation theory and compare with existing data. We find discrepancies much larger than in the vector channel; this may hint, apart from the importance of taking a continuum limit, at larger non-perturbative effects in the scalar channel. We also recall how a transport peak related to the scalar density spectral function encodes non-perturbative information concerning the charm quark chemical equilibration rate close to equilibrium.