964 results for PHOTON STATISTICS
Abstract:
The present work describes the development of a new body-counter system based on HPGe detectors, installed at the IVM of KIT. The goal, which was achieved, was to improve the ability to detect internal contaminations in the human body, especially those involving low-energy emitters and multiple nuclides. The development of the system began with the characterisation of the detectors purchased for this specific task, continued with the optimisation of the different desired measurement configurations, and ended with the installation and verification of the results. New software was developed to handle the new detectors.
Abstract:
The Spin-Statistics theorem states that the statistics of a system of identical particles is determined by their spin: particles of integer spin are bosons (i.e. obey Bose-Einstein statistics), whereas particles of half-integer spin are fermions (i.e. obey Fermi-Dirac statistics). Since the original proof by Fierz and Pauli, it has been known that the connection between spin and statistics follows from the general principles of relativistic Quantum Field Theory. In spite of this, there are different approaches to Spin-Statistics, and it is not clear whether the theorem holds under assumptions that are different from, and even less restrictive than, the usual ones (e.g. Lorentz covariance). Additionally, in Quantum Mechanics there is a deep relation between indistinguishability and the geometry of the configuration space, as clearly illustrated by Gibbs' paradox. Therefore, for many years efforts have been made to find a geometric proof of the connection between spin and statistics. Recently, various proposals have been put forward that attempt to derive the Spin-Statistics connection from assumptions different from the ones used in the relativistic, quantum field theoretic proofs. Among these, the one due to Berry and Robbins (BR), based on the postulation of a certain single-valuedness condition, has caused renewed interest in the problem. In the present thesis, we consider the problem of indistinguishability in Quantum Mechanics from a geometric-algebraic point of view. An approach is developed to study configuration spaces Q having a finite fundamental group, which allows us to describe different geometric structures of Q in terms of spaces of functions on the universal cover of Q. In particular, it is shown that the space of complex continuous functions over the universal cover of Q admits a decomposition into C(Q)-submodules, labelled by the irreducible representations of the fundamental group of Q, that can be interpreted as the spaces of sections of certain flat vector bundles over Q. With this technique, various results pertaining to the problem of quantum indistinguishability are reproduced in a clear and systematic way. Our method is also used to give a global formulation of the BR construction. As a result of this analysis, it is found that the single-valuedness condition of BR is inconsistent. Additionally, a proposal aiming at establishing the Fermi-Bose alternative within our approach is made.
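A worked equation may make the decomposition just described concrete. The following schematic statement uses standard notation assumed here, not quoted from the thesis: for a configuration space Q with finite fundamental group G = pi_1(Q) and universal cover Q~,

\[
  C(\tilde{Q}) \;\cong\; \bigoplus_{\rho \in \hat{G}} C(\tilde{Q})_{\rho},
  \qquad
  C(\tilde{Q})_{\rho} \;\cong\; \Gamma\!\big(\tilde{Q} \times_{\rho} \mathbb{C}^{d_{\rho}}\big),
\]

where each isotypic component is a C(Q)-module that can be read as the space of continuous sections of the flat vector bundle associated with the irreducible representation rho of dimension d_rho. For N identical particles, pi_1(Q) = S_N, and the two one-dimensional representations (trivial and alternating) correspond to the Bose and Fermi sectors.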
Abstract:
Neuronal circuits in the retina analyze images according to qualitative aspects such as color or motion before the information is transmitted to higher visual areas of the brain. One example, studied over the last four decades, is the detection of motion direction by ‘direction selective’ neurons. Recently, the starburst amacrine cell, one type of retinal interneuron, has emerged as an essential player in the computation of direction selectivity. In this study, the mechanisms underlying the computation of direction selective calcium signals in starburst cell dendrites were investigated using whole-cell electrical recordings and two-photon calcium imaging. Analysis of the somatic electrical responses to visual stimulation and pharmacological agents indicated that the directional signal (i) is not computed presynaptically to starburst cells or by inhibitory network interactions, and is thus computed via a cell-intrinsic mechanism, which (ii) depends upon the differential, i.e. direction selective, activation of voltage-gated channels. Optically measuring dendritic calcium signals as a function of somatic voltage suggests (iii) a difference in resting membrane potential between the starburst cell’s soma and its distal dendrites. In conclusion, it is proposed that the mechanism underlying direction selectivity in starburst cell dendrites relies on intrinsic properties of the cell, particularly on the interaction of spatio-temporally structured synaptic inputs with voltage-gated channels, and on their differential activation due to a somato-dendritic difference in membrane potential.
Abstract:
Throughout the twentieth century statistical methods have increasingly become part of experimental research. In particular, statistics has made quantification processes meaningful in the soft sciences, which had traditionally relied on activities such as collecting and describing diversity rather than timing variation. The thesis explores this change in relation to agriculture and biology, focusing on analysis of variance and experimental design, the statistical methods developed by the mathematician and geneticist Ronald Aylmer Fisher during the 1920s. The role that Fisher’s methods acquired as tools of scientific research, side by side with the laboratory equipment and the field practices adopted by research workers, is here investigated bottom-up, beginning with the computing instruments and the information technologies that were the tools of the trade for statisticians. Four case studies show under several perspectives the interaction of statistics, computing and information technologies, giving on the one hand an overview of the main tools – mechanical calculators, statistical tables, punched and index cards, standardised forms, digital computers – adopted in the period, and on the other pointing out how these tools complemented each other and were instrumental for the development and dissemination of analysis of variance and experimental design. The period considered is the half-century from the early 1920s to the late 1960s, the institutions investigated are Rothamsted Experimental Station and the Galton Laboratory, and the statisticians examined are Ronald Fisher and Frank Yates.
Abstract:
It is common to hear a strange short sentence: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, and all of them depend on the topic considered. In this thesis I discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are obtained by shaping the second-order statistics of antipodal sequences involved in an intermediate signal processing stage. The first topic is in the area of analog-to-digital conversion and is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression, allowing a signal to be acquired directly in compressed form. In this thesis, after an ample description of the CS methodology and its related architectures, I present a new approach that achieves high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option to produce and decode UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple access interference (MAI) and narrowband interference (NBI) on a simple matched filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences minimizing MAI only.
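As a minimal sketch of the CS acquisition stage just described (all names, sizes, and the recovery routine are illustrative assumptions, not taken from the thesis), a sparse signal can be acquired with antipodal (+1/-1) sensing sequences and recovered with a greedy solver:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5            # signal length, measurements (m << n), sparsity

    # k-sparse test signal with random support and amplitudes.
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

    # Antipodal (+1/-1) sensing waveforms; shaping their second-order
    # statistics (row correlations) is the design knob discussed above.
    A = rng.choice([-1.0, 1.0], size=(m, n))

    y = A @ x                        # acquisition and compression in one step

    # Illustrative recovery via orthogonal matching pursuit.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print("support recovered:", set(support) == set(np.flatnonzero(x)))

Here a plain i.i.d. antipodal matrix is used; the approach described in the abstract would instead shape the correlation profile of these sequences to match the class of input signals.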
Abstract:
In this thesis two related topics are investigated: the first stages of the process of massive star formation, examining the physical conditions and properties of massive clumps in different evolutionary stages and their CO depletion; and the influence that high-mass stars have on the nearby material and on star formation activity. I characterise the gas and dust temperature, mass and density of a sample of massive clumps, and analyse the variation of these properties from quiescent clumps, without any sign of active star formation, to clumps likely hosting a zero-age main-sequence star. I briefly discuss CO depletion and recent observations of several molecular species, tracers of Hot Cores and/or shocked gas, in a subsample of these clumps. The issue of CO depletion is addressed in more detail in a larger sample consisting of the brightest sources in the ATLASGAL survey: using a radiative transfer code, I investigate how the depletion changes from dark clouds to more evolved objects, and compare its evolution to what happens in the low-mass regime. Finally, I derive the physical properties of the molecular gas in the photon-dominated region adjacent to the HII region G353.2+0.9 in the vicinity of Pismis 24, a young, massive cluster containing some of the most massive and hottest stars known in our Galaxy. I derive the IMF of the cluster and study the star formation activity in its surroundings. Much of the data analysis is done with a Bayesian approach; a separate chapter is therefore dedicated to the concepts of Bayesian statistics.
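For reference, the core relation behind the Bayesian approach mentioned above, in generic notation not specific to this thesis: the posterior for model parameters theta (e.g. clump temperature and density) given data d combines likelihood and prior,

\[
  p(\theta \mid d) \;=\; \frac{p(d \mid \theta)\, p(\theta)}{\int p(d \mid \theta')\, p(\theta')\, \mathrm{d}\theta'} .
\]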
Abstract:
In July 2009, an experiment was performed for the first time at the Mainz Microtron (MAMI) in which a polarised 3He target was probed with photons in the energy range from 200 to 800 MeV. The goal of this experiment was to test the Gerasimov-Drell-Hearn (GDH) sum rule on the neutron. Owing to the spin structure of 3He, the data obtained with the polarised 3He target provide, in comparison with the already existing deuteron data, a complementary and more direct access to the neutron. The total helicity-dependent photoabsorption cross section was measured using an energy-tagged beam of circularly polarised photons impinging on the longitudinally polarised 3He target. The reaction products were detected with the Crystal Ball (4π solid-angle coverage), TAPS (as a forward wall), and a threshold Cherenkov detector (online veto for the reduction of electromagnetic events). The design and construction of the various components of the 3He experimental setup were a central part of this dissertation and are described in detail in the present work. The detector system and the analysis methods were tested by measuring the unpolarised, total and inclusive photoabsorption cross section on liquid hydrogen; the results showed good agreement with previously published data. Preliminary results for the unpolarised total photoabsorption cross section, as well as for the helicity-dependent difference between photoabsorption cross sections on 3He, are presented and compared with different theoretical models.
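For reference, the Gerasimov-Drell-Hearn sum rule tested by the experiment has the standard form, for a target of mass m and anomalous magnetic moment kappa,

\[
  \int_{\nu_{\mathrm{thr}}}^{\infty} \frac{\sigma_{3/2}(\nu) - \sigma_{1/2}(\nu)}{\nu}\,\mathrm{d}\nu
  \;=\; \frac{2\pi^{2}\alpha}{m^{2}}\,\kappa^{2},
\]

where sigma_{3/2} and sigma_{1/2} are the total photoabsorption cross sections for photon helicity parallel and antiparallel to the target spin, and nu_thr is the photoproduction threshold.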
Abstract:
This thesis focused on the investigation of the linear optical properties of novel two-photon absorbers for biomedical applications. Substituted imidazole and imidazopyridine derivatives, as well as organic dendrimers, were studied as potential fluorophores for two-photon bioimaging. The results showed superior luminescence properties for sulphonamido imidazole derivatives compared to other substituted imidazoles. The luminescence properties of imidazo[1,2-a]pyridines showed a strong dependence on the substitution pattern: substitution at the imidazole ring led to a higher fluorescence yield than substitution at the pyridine ring. Bis-imidazo[1,2-a]pyridines of donor-acceptor-donor type were examined; those dimerized at the C3 position had better luminescence properties than those dimerized at C5, displaying high emission yields and sizeable 2PA cross sections. Phosphazene-based dendrimers with fluorene branches and cationic charges on the periphery were also examined. Due to aggregation phenomena in polar solvents, the dendrimers showed a significant loss of luminescence with respect to the fluorene chromophore model. An improved design based on more rigid chromophores yields enhanced luminescence properties which, combined with large 2PA cross sections, make these compounds valuable as fluorophores in bioimaging. A photophysical study of several ketocoumarin initiators, designed for the fabrication of small-dimension prostheses by two-photon polymerization (2PP), was carried out. The compounds showed low emission yields, indicative of a high population of the triplet excited state, which is the active state in producing the reactive species. Their efficiency in 2PP was proved by the fabrication of microstructures, and their biocompatibility was tested in the collaborators' laboratory. In the framework of 2PA photorelease of drugs, three fluorene-based dyads were investigated. They were designed to release gamma-aminobutyric acid via two-photon-induced electron transfer. The experimental data in polar solvents showed a fast electron transfer followed by an almost equally fast back electron transfer, which indicates a poor optimization of the system.
Abstract:
The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy to solve the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT is built to apply the described strategy in a semiautomatic way. The application of UMESTRAT is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model, studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows the evaluation of this contribution for the K, L and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energies at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle and energy, for all elements with Z = 1–92 in the energy range 1–150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of bremsstrahlung is created, and a new bremsstrahlung kernel is developed which allows the introduction of this contribution into the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
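As a schematic statement of the maximum entropy strategy mentioned above (generic notation, assumed rather than quoted from the work): among all discrete spectra p consistent with the measured data d, one selects the one maximizing the Shannon entropy,

\[
  \max_{p}\; S[p] = -\sum_i p_i \ln p_i
  \quad\text{subject to}\quad
  \sum_i A_{ki}\, p_i = d_k, \qquad \sum_i p_i = 1,
\]

whose solution takes the exponential form

\[
  p_i \;\propto\; \exp\!\Big(-\sum_k \lambda_k A_{ki}\Big),
\]

with the Lagrange multipliers lambda_k fixed by the data constraints.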
Abstract:
Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, one knows from astronomical observations that it accounts only for around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic ray observations and by attempts to solve questions of the SM like the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM by kinetic mixing. This allows for a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, for example in electron-scattering experiments, which are a versatile tool for exploring various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation, which is widely used to design such experimental setups, to the calculation of the signal cross section is investigated. Next, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to make predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of the complementary experiments. In the last part, a feasibility study for probing the hidden photon model by rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to find bounds for the parameter space from existing data and to estimate the reach of future experiments.
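The kinetic mixing referred to above is conventionally parametrised by a single small parameter epsilon; in standard notation (not quoted from the thesis), with X_mu the hidden photon field and X_{mu nu} its field strength,

\[
  \mathcal{L} \;\supset\; -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
  \;-\; \tfrac{1}{4} X_{\mu\nu} X^{\mu\nu}
  \;-\; \tfrac{\epsilon}{2}\, F_{\mu\nu} X^{\mu\nu}
  \;+\; \tfrac{m_{\gamma'}^{2}}{2}\, X_{\mu} X^{\mu},
\]

so that, after diagonalising the kinetic terms, the hidden photon couples to the electromagnetic current with strength epsilon*e, which is what makes searches in electron-scattering experiments possible.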
Abstract:
In high-energy teletherapy, VMC++ is known to be a very accurate and efficient Monte Carlo (MC) code. In principle, the MC method is also a powerful dose calculation tool in other areas of radiation oncology, e.g., brachytherapy or orthovoltage radiotherapy. However, VMC++ has not been validated for the low-energy range of such applications. This work aims at the validation of the VMC++ MC code for photon beams in the energy range between 20 and 1000 keV.
Comparison of Monte Carlo collimator transport methods for photon treatment planning in radiotherapy
Abstract:
The aim of this work was a Monte Carlo (MC) based investigation of the impact of different radiation transport methods in the collimators of a linear accelerator on photon beam characteristics, dose distributions, and efficiency. It is thereby investigated whether different simplifications of the radiation transport can be used in certain clinical situations in order to save calculation time.
Abstract:
To evaluate the capability of spectral computed tomography (CT) to improve the characterization of cystic high-attenuation lesions in a renal phantom, and to test the hypothesis that spectral CT improves the differentiation between cystic renal lesions with high protein content and those that have undergone hemorrhage or malignant contrast-enhancing transformation.