992 results for RELATIVISTIC JETS
Abstract:
The origin of the extragalactic gamma-ray background (EGRB) is still an open question, even nearly forty years after its discovery. The emission could originate either from truly diffuse processes or from unresolved point sources. Although the majority of the 271 point sources detected by EGRET (Energetic Gamma Ray Experiment Telescope) are unidentified, of the identified sources, blazars are the dominant candidates. Therefore, unresolved blazars may be considered the main contributor to the EGRB, and many studies have been carried out to understand their distribution, evolution and contribution to the EGRB. Considering that gamma-ray emission comes mostly from jets of blazars and that the jet emission decreases rapidly with increasing jet to line-of-sight angle, it is not surprising that EGRET was not able to detect many large inclination angle active galactic nuclei (AGNs). Though Fermi could only detect a few large inclination angle AGNs during the first three months of its survey, it is expected to detect many such sources in the near future. Since non-blazar AGNs are expected to have higher density as compared to blazars, these could also contribute significantly to the EGRB. In this paper, we discuss contributions from unresolved discrete sources including normal galaxies, starburst galaxies, blazars and off-axis AGNs to the EGRB.
Abstract:
The accretion disk around a compact object is a nonlinear general relativistic system involving magnetohydrodynamics. Naturally, the question arises whether such a system is chaotic (deterministic) or stochastic (random), which might be related to the associated transport properties, whose origin is still not confirmed. Earlier, the black hole system GRS 1915+105 was shown to exhibit low-dimensional chaos in certain temporal classes. However, so far such nonlinear phenomena have not been studied thoroughly for neutron stars, which are unique for their magnetospheres and kHz quasi-periodic oscillations (QPOs). On the other hand, it was argued that the QPO is a result of nonlinear magnetohydrodynamic effects in accretion disks. If a neutron star exhibits a chaotic signature, then what is its chaotic/correlation dimension? We analyze RXTE/PCA data of the neutron stars Sco X-1 and Cyg X-2, along with the black hole Cyg X-1 and the unknown source Cyg X-3, and show that while Sco X-1 and Cyg X-2 are low-dimensional chaotic systems, Cyg X-1 and Cyg X-3 are stochastic sources. Based on our analysis, we argue that Cyg X-3 may be a black hole.
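The correlation dimension mentioned above is conventionally estimated with the Grassberger-Procaccia algorithm: embed the time series in a delay space and fit the power-law scaling of the correlation sum. A minimal sketch (toy data, not the RXTE light curves) might look like:

```python
import numpy as np

def correlation_dimension(x, m=3, tau=1):
    """Grassberger-Procaccia estimate: embed the series in m dimensions
    with delay tau, then fit the scaling C(r) ~ r^D of the correlation sum."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    diff = emb[:, None, :] - emb[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
    # Fit in the small-r scaling region (5th-30th percentile of distances)
    r = np.logspace(np.log10(np.percentile(dists, 5)),
                    np.log10(np.percentile(dists, 30)), 10)
    C = np.array([(dists < ri).mean() for ri in r])
    return np.polyfit(np.log(r), np.log(C), 1)[0]

# A limit cycle (pure sine) is a one-dimensional attractor, so D should come
# out near 1 -- a low-dimensional signature; white noise would instead fill
# the embedding space and give D near the embedding dimension m.
t = np.linspace(0, 40 * np.pi, 1000)
D = correlation_dimension(np.sin(t))
```

In practice the method is applied to the detrended light curve, and the estimate is only trusted if it saturates as the embedding dimension m is increased.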
Abstract:
Characteristics of the entrainment process in plane mixing layers, and its changes with compressibility and heat release, were studied using temporal DNS with simultaneous fluid-packet tracking. The convective Mach numbers of the simulations are 0.15, 0.7 and 1.1. The Reynolds number is quite high (between 11 000 and 37 000 based on layer width and velocity difference), and is above the mixing transition. The study agrees with recent findings in round jets: first, the engulfed fluid volume and its growth rate are both very small compared with the volume of the turbulent region and its growth rate, respectively. Second, the process most often occurs close to the turbulent-nonturbulent boundaries. A new finding is that both compressibility and heat release retard the entrainment process, so that it takes an O(1) time for vorticity or scalar levels to grow even after growth has been initiated. This delay is manifested as the fall in mixing-layer growth rates as compressibility and heat-release levels increase.
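For reference, the convective Mach number quoted above is the standard compressibility parameter for a two-stream mixing layer; for free-stream velocities $U_1$, $U_2$ and sound speeds $a_1$, $a_2$ it reads

```latex
M_c = \frac{U_1 - U_2}{a_1 + a_2}
```

so that $M_c = 0.15$ corresponds to an essentially incompressible layer and $M_c = 1.1$ to a strongly compressible one.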
Abstract:
An accretion flow is necessarily transonic around a black hole. However, around a neutron star it may or may not be transonic, depending on the inner disk boundary conditions influenced by the neutron star. I will discuss the various transonic behaviors of the disk fluid in a general relativistic (or pseudo-general-relativistic) framework. I will show that there are four types of sonic/critical point that can form in an accretion disk, and how the fluid properties, including the location of the sonic points, vary with the angular momentum of the compact object, which controls the overall disk dynamics and outflows.
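As an illustration of what fixes a sonic point (a Newtonian spherical toy, not the general relativistic disk framework of the talk): for polytropic Bondi accretion, regularity of dv/dr at the radius where the inflow speed equals the sound speed gives the sonic radius in closed form.

```python
# Newtonian polytropic Bondi accretion: at the sonic (critical) point the
# inflow speed equals the local sound speed, and regularity of dv/dr there
# requires c_s^2 = GM / (2 r_c); combined with the Bernoulli constant this
# yields r_c = GM (5 - 3*gamma) / (4 c_inf^2) for 1 < gamma < 5/3,
# where c_inf is the sound speed of the gas at rest at infinity.
def bondi_sonic_radius(GM, c_inf, gamma):
    return GM * (5.0 - 3.0 * gamma) / (4.0 * c_inf ** 2)

def sonic_sound_speed(GM, r_c):
    return (GM / (2.0 * r_c)) ** 0.5

r_c = bondi_sonic_radius(GM=1.0, c_inf=1.0, gamma=1.4)  # r_c = 0.2 in these units
```

In a rotating disk flow the same regularity condition is applied to the radial momentum equation with centrifugal and pressure terms, which is what allows multiple critical points to appear.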
Abstract:
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled cycle of simulation-to-data analysis. Typically, a Geant4 computer experiment is used to understand test beam measurements. Another aspect of this thesis is thus a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, a full CMS detector description, and event reconstruction. Using the ROOT data analysis framework we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
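The thesis builds its tagger inside ROOT; purely as a sketch of the underlying idea (toy Gaussian "signal" and "background" features, not CMS data or the thesis's network), a one-hidden-layer network trained by gradient descent can learn to separate two classes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy kinematic features: 'signal' and 'background' as shifted 2-D Gaussians
X = np.vstack([rng.normal(2.0, 1.0, (200, 2)),    # signal events
               rng.normal(0.0, 1.0, (200, 2))])   # background events
y = np.concatenate([np.ones(200), np.zeros(200)])

# One hidden layer (tanh) + sigmoid output, full-batch gradient descent
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, 4);      b2 = 0.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # predicted signal probability
    g = (p - y) / len(y)                          # cross-entropy gradient at output
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)           # backpropagate through tanh
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(0)

accuracy = ((p > 0.5) == y).mean()                # well above 0.5 for separated classes
```

In the real analysis the inputs would be reconstructed jet and event observables, and the network output threshold is chosen to optimize signal significance rather than raw accuracy.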
Abstract:
New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, like the Sun, are already relatively well-known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although it is likely that the formation of high-mass stars shares many properties with that of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large portions across molecular clouds, and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths.
For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but also in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages. 
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence, but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) the core evolution in the Orion B9 star-forming region appears to be dynamic, and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions, suggesting that high-mass stars form in a similar way to their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.
Abstract:
The mechanism by which outflows and plausible jets are driven from black hole systems still remains observationally elusive. Nevertheless, several lines of observational evidence and deeper theoretical insights reveal that accretion and outflow/jet are strongly correlated. We model an advective disk-outflow coupled dynamics, incorporating the vertical flux explicitly. The interconnected dynamics of outflow and accretion essentially upholds the conservation laws. We investigate the properties of the disk-outflow surface and its strong dependence on the rotation parameter of the black hole. The energetics of the disk outflow depend strongly on the mass, accretion rate, and spin of the black hole. The model clearly shows that the outflow power extracted from the disk increases strongly with the spin of the black hole, implying that the power of the observed astrophysical jets corresponds proportionally to the spin of the central object. In the case of blazars (BL Lacs and flat spectrum radio quasars, FSRQs), most of their emission is believed to originate from their jets. It is observed that BL Lacs are less luminous than FSRQs. The luminosity might be linked to the power of the jet, which in turn suggests that the nuclear regions of BL Lac objects harbor relatively slowly spinning black holes compared to those of FSRQs. If extreme gravity is the source that powers strong outflows and jets, then the spin of the black hole, perhaps, might be the fundamental parameter accounting for the observed astrophysical processes in an accretion powered system.
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. It took nearly 30 years after the first detection before the first evidence of fluctuations in the microwave background was presented. At present, high precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. The progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during a fraction of the first second after the Big Bang. -- This thesis is concerned with high precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
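The destriping approach mentioned above models the time-ordered data (TOD) as a pixelized sky plus a constant baseline offset per data chunk, and solves for the offsets by least squares. A minimal numpy sketch (toy scan, noiseless apart from the offsets, not the thesis's pipeline) is:

```python
import numpy as np

rng = np.random.default_rng(1)
npix, nbase, chunk = 10, 5, 40
nsamp = nbase * chunk
sky = rng.normal(0, 1, npix)                 # true map
pix = rng.integers(0, npix, nsamp)           # pointing: pixel hit by each sample
base = np.repeat(np.arange(nbase), chunk)    # baseline chunk of each sample
offsets = rng.normal(0, 5, nbase)            # 1/f-like baseline offsets
tod = sky[pix] + offsets[base]               # time-ordered data

P = np.eye(npix)[pix]                        # pointing matrix (nsamp x npix)
F = np.eye(nbase)[base]                      # baseline matrix (nsamp x nbase)

# Z projects out the part of a timestream explained by a sky map:
# Z = I - P (P^T P)^-1 P^T, applied here via per-pixel binned averages.
def Z(v):
    return v - P @ (np.bincount(pix, weights=v, minlength=npix) /
                    np.bincount(pix, minlength=npix))

# Solve the destriping normal equations (F^T Z F) a = F^T Z d by least squares
a = np.linalg.lstsq(np.column_stack([Z(F[:, j]) for j in range(nbase)]),
                    Z(tod), rcond=None)[0]
# Bin the baseline-cleaned TOD into the destriped map estimate
mhat = (np.bincount(pix, weights=tod - a[base], minlength=npix) /
        np.bincount(pix, minlength=npix))
# Recovery is exact up to a global offset degeneracy between map and baselines
```

With white noise added, the same estimator remains the maximum-likelihood solution in the limit of offset-only baselines, and the residual noise in mhat is what the thesis's noise-estimation methods characterize.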
Abstract:
This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of the hadronic tau lepton decays, i.e. tau-jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, the tau-jet identification is the single most important event selection criterion to separate the tiny Higgs boson signal from a large number of background events. The tau-jet identification is studied with methods based on a signature of a low charged track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons, and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from the W->taunu decays. Since many of these identification methods rely on the reconstruction of charged particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a more recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons.
The Higgs boson(s), whose existence has not yet been experimentally verified, are a part of the standard model and its most popular extensions. They are a manifestation of a mechanism which breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe for finding out properties of the microcosm of particles and their interactions in the energy scales beyond the standard model of particle physics.
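Schematically, the cut-based tau-jet identification described above reduces to a chain of boolean criteria applied to each reconstructed jet. The thresholds and variable names below are hypothetical, for illustration only; the actual CMS selection values differ.

```python
# Hypothetical jet records and thresholds; the real CMS cuts differ.
def is_tau_jet(jet):
    return (jet["n_charged_tracks"] in (1, 3)       # low track multiplicity (1- or 3-prong)
            and jet["signal_cone_fraction"] > 0.9   # decay products in a narrow cone
            and jet["isolation_pt_sum"] < 1.0       # isolated in the surrounding cone (GeV)
            and jet["flight_path_sig"] > 0.0        # non-zero tau lepton flight path
            and not jet["matched_electron"]         # veto electrons...
            and not jet["matched_muon"]             # ...and muons
            and jet["mass"] < 1.78)                 # below the tau lepton mass (GeV/c^2)

tau_like_jet = {"n_charged_tracks": 1, "signal_cone_fraction": 0.97,
                "isolation_pt_sum": 0.4, "flight_path_sig": 2.5,
                "matched_electron": False, "matched_muon": False, "mass": 0.8}
qcd_jet = {"n_charged_tracks": 7, "signal_cone_fraction": 0.55,
           "isolation_pt_sum": 11.0, "flight_path_sig": 0.0,
           "matched_electron": False, "matched_muon": False, "mass": 9.5}
# tau_like_jet passes every criterion; qcd_jet fails several
```

The power of the selection comes from the fact that QCD jets, which vastly outnumber real tau jets, typically fail several of these criteria at once.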
Abstract:
When heated to high temperatures, the behavior of matter changes dramatically. The standard model fields go through phase transitions, where the strongly interacting quarks and gluons are liberated from their confinement to hadrons, and the Higgs field condensate melts, restoring the electroweak symmetry. The theoretical framework for describing matter at these extreme conditions is thermal field theory, combining relativistic field theory and quantum statistical mechanics. For static observables the physics is simplified at very high temperatures, and an effective three-dimensional theory can be used instead of the full four-dimensional one via a method called dimensional reduction. In this thesis dimensional reduction is applied to two distinct problems, the pressure of electroweak theory and the screening masses of mesonic operators in quantum chromodynamics (QCD). The introductory part contains a brief review of finite-temperature field theory, dimensional reduction and the central results, while the details of the computations are contained in the original research papers. The electroweak pressure is shown to converge well to a value slightly below the ideal gas result, whereas the pressure of the full standard model is dominated by the QCD pressure with worse convergence properties. For the mesonic screening masses a small positive perturbative correction is found, and the interpretation of dimensional reduction on the fermionic sector is discussed.
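Schematically, dimensional reduction integrates out the non-static Matsubara modes (with effective masses of order $\pi T$), leaving a three-dimensional theory for the static modes whose couplings are fixed by matching; for the gauge coupling, at leading order,

```latex
g_3^2 = g^2(\bar{\mu})\, T + \mathcal{O}(g^4 T),
```

with the higher-order matching corrections computed perturbatively at the scale $\bar{\mu} \sim 2\pi T$.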
Abstract:
We present three measurements of the top-quark mass in the lepton plus jets channel with approximately 1.9 fb-1 of integrated luminosity collected with the CDF II detector using quantities with minimal dependence on the jet energy scale. One measurement exploits the transverse decay length of b-tagged jets to determine a top-quark mass of 166.9+9.5-8.5 (stat) +/- 2.9 (syst) GeV/c2, and another the transverse momentum of electrons and muons from W-boson decays to determine a top-quark mass of 173.5+8.8-8.9 (stat) +/- 3.8 (syst) GeV/c2. These quantities are combined in a third, simultaneous mass measurement to determine a top-quark mass of 170.7 +/- 6.3 (stat) +/- 2.6 (syst) GeV/c2.
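The quoted simultaneous combination is a joint likelihood fit, but a naive inverse-variance average of the two individual results (symmetrizing the quoted asymmetric statistical errors and ignoring correlations, so only an illustration) already lands close to it:

```python
# Naive illustration only: symmetrized errors, correlations ignored.
def combine(measurements):
    """Inverse-variance weighted average of (value, sigma) pairs."""
    w = [1.0 / s ** 2 for _, s in measurements]
    m = sum(wi * v for wi, (v, _) in zip(w, measurements)) / sum(w)
    return m, (1.0 / sum(w)) ** 0.5

m1 = (166.9, (9.0 ** 2 + 2.9 ** 2) ** 0.5)   # decay-length result, stat symmetrized to 9.0
m2 = (173.5, (8.85 ** 2 + 3.8 ** 2) ** 0.5)  # lepton-pT result, stat symmetrized to 8.85
m, sigma = combine([m1, m2])                 # ~170 GeV/c^2, close to the quoted 170.7
```

The residual difference from the quoted 170.7 +/- 6.8 (total) reflects the correlations and the full likelihood shape that the simultaneous fit accounts for.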
Abstract:
We report on a search for the standard-model Higgs boson in ppbar collisions at sqrt(s) = 1.96 TeV using an integrated luminosity of 2.0 fb^-1. We look for production of the Higgs boson decaying to a pair of bottom quarks in association with a vector boson V (W or Z) decaying to quarks, resulting in a four-jet final state. Two of the jets are required to have secondary vertices consistent with B-hadron decays. We set the first 95% confidence level upper limit on the VH production cross section with V(->qq/qq')H(->bb) decay for Higgs boson masses of 100-150 GeV/c^2 using data from run II at the Fermilab Tevatron. For m_H = 120 GeV/c^2, we exclude cross sections larger than 38 times the standard-model prediction.
Abstract:
We present a search for the Higgs boson in the process $q\bar{q} \to ZH \to \ell^+\ell^- b\bar{b}$. The analysis uses an integrated luminosity of 1 fb$^{-1}$ of $p\bar{p}$ collisions produced at $\sqrt{s} =$ 1.96 TeV and accumulated by the upgraded Collider Detector at Fermilab (CDF II). We employ artificial neural networks both to correct jets mismeasured in the calorimeter, and to distinguish the signal kinematic distributions from those of the background. We see no evidence for Higgs boson production, and set 95% CL upper limits on $\sigma_{ZH} \cdot {\cal B}(H \to b\bar{b}$), ranging from 1.5 pb to 1.2 pb for a Higgs boson mass ($m_H$) of 110 to 150 GeV/$c^2$.
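A 95% CL upper limit of the counting-experiment kind can be illustrated with a toy (the numbers below are invented, and the actual analysis uses binned kinematic distributions, not a single counting bin): given n observed events and an expected background b, the classical limit is the signal s at which the Poisson probability of observing n or fewer events drops to 5%.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return math.exp(-mu) * sum(mu ** k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, b, cl=0.95):
    """Classical upper limit on s: solve P(N <= n_obs | s + b) = 1 - cl by bisection."""
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + b) > 1.0 - cl:
            lo = mid      # CDF still too large: need more signal
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_up = upper_limit(n_obs=3, b=2.0)   # toy numbers, not the analysis's
```

Modern analyses replace this with CLs or Bayesian constructions over the full discriminant distribution, but the logic of excluding cross sections that would have produced too many events is the same.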
Abstract:
A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
Abstract:
We present a measurement of the electric charge of the top quark using $\ppbar$ collisions corresponding to an integrated luminosity of 2.7~fb$^{-1}$ at the CDF II detector. We reconstruct $\ttbar$ events in the lepton+jets final state and use kinematic information to determine which $b$-jet is associated with the leptonically- or hadronically-decaying $t$-quark. Soft lepton taggers are used to determine the $b$-jet flavor. Along with the charge of the $W$ boson decay lepton, this information permits the reconstruction of the top quark's electric charge. Out of 45 reconstructed events with $2.4\pm0.8$ expected background events, 29 are reconstructed as $\ttbar$ with the standard model $+$2/3 charge, whereas 16 are reconstructed as $\ttbar$ with an exotic $-4/3$ charge. This is consistent with the standard model and excludes the exotic scenario at 95\% confidence level. This is the strongest exclusion of the exotic charge scenario and the first to use soft leptons for this purpose.