877 results for GALAXIES, CLUSTERING


Relevance:

20.00%

Publisher:

Abstract:

A sizeable fraction of the Universe's volume is dominated by almost empty space. Alongside the luminous filamentary structures that make up the cosmic web, there are vast, smooth regions that have remained outside the Cosmology spotlight during the past decades: cosmic voids. Although essentially devoid of matter, voids enclose fundamental information about the cosmological framework and have gradually become an effective and competitive cosmological probe. In this Thesis work we present fundamental results on the cosmological exploitation of voids. We focused on the number density of voids as a function of their radius, known as the void size function, developing an effective pipeline for its cosmological usage. We proposed a new parametrisation of the most widely used theoretical void size function to model voids identified in the distribution of biased tracers (i.e. dark matter haloes, galaxies and galaxy clusters), a step of fundamental importance for extending the analysis to real survey data. We then applied the methodology we built to study voids in alternative cosmological scenarios. Firstly, we exploited voids with the aim of breaking the degeneracies between cosmological scenarios characterised by modified gravity and the inclusion of massive neutrinos. Secondly, we analysed voids in the perspective of the Euclid survey, focusing on the constraining power of the void abundance on dynamical dark energy models with massive neutrinos. Moreover, we explored other void statistics, such as void profiles and clustering (i.e. the void-galaxy and the void-void correlation), providing cosmological forecasts for the Euclid mission. We finally focused on probe combination, highlighting the remarkable potential of the joint analysis of multiple void statistics and of the combination of the void size function with different cosmological probes.
Our results show the fundamental role of void analysis in constraining the parameters of the cosmological model and pave the way for future studies on this topic.
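As a minimal illustration of the void size function mentioned above — the number density of voids per logarithmic radius bin — the following sketch measures it from a catalogue of void radii (the radii, volume and binning here are synthetic placeholders, not the Thesis data):

```python
import numpy as np

def void_size_function(radii, volume, bins):
    """Estimate dN/dlnR per unit volume from a catalogue of void radii.

    radii  : array of effective void radii (e.g. Mpc/h)
    volume : survey or simulation volume in the same units cubed
    bins   : array of bin edges in radius
    """
    counts, edges = np.histogram(radii, bins=bins)
    dlnR = np.diff(np.log(edges))               # logarithmic bin widths
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    return centres, counts / (volume * dlnR)

# toy catalogue: 1000 voids with radii between 5 and 50 Mpc/h in a (100 Mpc/h)^3 box
rng = np.random.default_rng(0)
radii = rng.uniform(5.0, 50.0, 1000)
centres, vsf = void_size_function(radii, volume=1e6, bins=np.geomspace(5, 50, 11))
```

A theoretical model (e.g. a re-parametrised Vdn-like abundance, in the spirit of the Thesis) would then be fitted to these binned densities.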


Dynamical models of stellar systems represent a powerful tool to study their internal structure and dynamics, to interpret the observed morphological and kinematical fields, and to support numerical simulations of their evolution. We present a method especially designed to build axisymmetric Jeans models of galaxies, treated as stationary and collisionless stellar systems. The aim is the development of a rigorous and flexible modelling procedure for multicomponent galaxies, composed of different stellar and dark matter distributions and a central supermassive black hole. The stellar components, in particular, are intended to represent different galaxy structures, such as discs, bulges and halos, and can therefore have different structural (density profile, flattening, mass, scale-length), dynamical (rotation, velocity dispersion anisotropy), and population (age, metallicity, initial mass function, mass-to-light ratio) properties. The theoretical framework supporting the modelling procedure is presented, with the introduction of a suitable nomenclature, and its numerical implementation is discussed, with particular reference to the numerical code JASMINE2, developed for this purpose. We propose an approach for efficiently scaling the contributions in mass, luminosity, and rotational support of the different matter components, allowing for fast and flexible explorations of the model parameter space. We also present different methods for the computation of the gravitational potentials associated with the density components, chosen for their easier numerical tractability. A few galaxy models are studied, showing the internal and projected structural and dynamical properties of multicomponent galaxies, with a focus on axisymmetric early-type galaxies with complex kinematical morphologies.
The application of galaxy models to the study of initial conditions for hydrodynamical and $N$-body simulations of galaxy evolution is also addressed, in particular making it possible to investigate the large number of interesting combinations of the parameters which determine the structure and dynamics of complex multicomponent stellar systems.
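For reference, the Jeans equations underlying this kind of axisymmetric modelling, in their standard semi-isotropic form (whether JASMINE2 adopts exactly this closure is an assumption here):

```latex
% Semi-isotropic axisymmetric Jeans equations, with \sigma_R = \sigma_z \equiv \sigma
% and \Phi the total potential (stars + dark matter + central black hole):
\frac{\partial(\rho\,\sigma^{2})}{\partial z} = -\,\rho\,\frac{\partial\Phi}{\partial z},
\qquad
\frac{\partial(\rho\,\sigma^{2})}{\partial R}
  + \frac{\rho\left(\sigma^{2}-\overline{v_{\varphi}^{2}}\right)}{R}
  = -\,\rho\,\frac{\partial\Phi}{\partial R}.
```

Given $\rho$ and $\Phi$ for each component, the first equation is integrated in $z$ for $\sigma^2$, and the second then yields $\overline{v_{\varphi}^{2}}$, whose split between ordered rotation and azimuthal dispersion is a modelling choice.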


Understanding how Active Galactic Nuclei (AGN) shape galaxy evolution is a key challenge of modern astronomy. In the framework where black hole (BH) and galaxy growth are linked, AGN feedback must be tackled both at its “causes” (e.g. AGN-driven winds) and at its “effects” (the alteration of the gas reservoir in AGN hosts). The most informative cosmic time is z~1-3, at the peak of AGN activity and galaxy buildup, the so-called cosmic noon. The aim of this thesis is to provide new insights into some key questions that remain open in this research field: i) What are the properties of AGN-driven sub-pc scale winds at z>1? ii) Are AGN-driven winds effective in influencing the life of galaxies? iii) Do AGN directly impact the star formation (SF) and gas content of their hosts? I first address AGN feedback “caught in the act” by studying ultra-fast outflows (UFOs), X-ray AGN-driven winds, in gravitationally lensed quasars. I build the first statistically robust sample of high-z AGN not preselected on the basis of AGN-driven winds. I derive a first estimate of the high-z UFO detection fraction and measure, for the first time, the UFO duty cycle of a single high-z quasar. I then address the “effects” of AGN feedback on the life of host galaxies. If AGN influence galaxy growth, they will reasonably impact the molecular gas reservoir first, and SF as a consequence. Through a comparative study of the molecular gas content in cosmic-noon AGN hosts and matched non-active galaxies (i.e., galaxies not hosting an AGN), I find that the host galaxies of more regular AGN (not selected to be the most luminous) are generally similar to non-active galaxies. However, I report the possibility of a luminosity effect regulating the efficiency with which AGN might impact galaxy growth.


Radio galaxies (RGs) are extremely relevant for addressing important unknowns concerning the interplay among black hole accretion, radio jets, and the environment. In the classical scheme, their accretion rate and the ejection of relativistic jets are directly linked: efficient accretion (HERG) is associated with powerful edge-brightened jets (FRIIs), while inefficient accretion (LERG) is associated with weak edge-darkened jets (FRIs). The observation of RGs with an inefficient engine associated with edge-brightened radio emission (FRII-LERGs) broke this scheme. FRII-LERGs constitute a suitable population to explore how accretion and ejection are linked and to evaluate the environment's role in shaping jets. To this aim, we performed a multiwavelength study of different RG catalogs spanning from Jy to mJy flux densities. At first, we investigated the X-ray properties of a sample of 51 FRIIs belonging to the 3CR catalog at z<0.3. Two hypotheses were invoked to explain the behavior of FRII-LERGs: evolution from classical FRIIs, or the role of the environment. Next, we explored the mJy sky by studying the optical-radio properties of hundreds of RGs at z<0.15 (the Best & Heckman 2012 sample). FRII-LERGs appear more similar to the old FRI-LERGs than to the young FRII-HERGs. These results point towards an evolutionary scenario; however, the required changes in nuclear time scales, stellar population aging, and kpc-to-Mpc radio structure modification do not agree. The role of the Mpc-scale environment was then investigated. The Wen et al. 2015 galaxy cluster sample, built by exploiting the SDSS survey, allowed us to explore the habitat of 7219 RGs at z<0.3. Most RGs are found to live outside clusters. For these sources, differences among RG classes are still present. Thus, the environment is not the key parameter, and the possibility of intrinsic differences was reconsidered: we speculate that different black hole properties (spin and magnetic field at its horizon) could determine the observed spread in jet luminosity.


In this Thesis, we present a series of works that encompass the fundamental steps of cosmological analyses based on galaxy clusters, spanning from mass calibration to deriving cosmological constraints through counts and clustering. Firstly, we focus on the 3D two-point correlation function (2PCF) of the galaxy cluster sample by Planck Collaboration XXVII (2016). The masses of these clusters are expected to be underestimated, as they are derived from a scaling relation calibrated through X-ray observations. We derived a mass bias which disagrees with simulation predictions, but is consistent with that derived by Planck Collaboration VI (2020). Furthermore, in this Thesis we analyse the counts and the 2PCF of the photometric galaxy cluster sample developed by Maturi et al. (2019), based on the third data release of KiDS (KiDS-DR3, de Jong et al. 2017). We derived constraints on fundamental cosmological parameters which are consistent with, and competitive in terms of uncertainties with, other state-of-the-art cosmological analyses. Then, we introduce a novel approach to establishing galaxy colour-redshift relations for cluster weak-lensing analyses, regardless of the specific photometric bands in use. This method optimises the selection completeness of cluster background galaxies while maintaining a defined purity threshold. Based on the galaxy sample by Bisigello et al. (2020), we calibrated two colour selections, one relying on the ground-based griz bands, and the other including the griz and Euclid YJH bands. In addition, we present preliminary work on the weak-lensing mass calibration of the clusters detected by Maturi et al. (in prep.) in the fourth data release of KiDS (KiDS-1000, Kuijken et al. 2019). This mass calibration will enable cosmological analyses based on cluster counts and clustering, from which we expect remarkable improvements over the results derived from KiDS-DR3.


Investigations of the large-scale structure of our Universe provide extremely powerful tools to shed light on some of the open issues of the currently accepted Standard Cosmological Model. Until recently, constraining cosmological parameters from cosmic voids was almost infeasible, because the amount of data in void catalogues was not enough to ensure statistically relevant samples. The increasingly wide and deep fields of present and upcoming surveys have turned cosmic voids into promising probes, despite the fact that a unique and generally accepted definition for them is still lacking. In this Thesis we address the two-point statistics of cosmic voids, in a first attempt to model its features for cosmological purposes. To this end, we implement an improved version of the void power spectrum presented by Chan et al. (2014). We have been able to build an exceptionally robust method to tackle void clustering statistics, by proposing a functional form that is entirely based on first principles. We extract our data from a suite of high-resolution N-body simulations in both the LCDM and alternative modified gravity scenarios. To accurately compare the data to the theory, we calibrate the model by accounting for a free parameter in the void radius that enters the theory of void exclusion. We then constrain the cosmological parameters by means of a Bayesian analysis. As long as the modified gravity effects are limited, our model is a reliable method to constrain the main LCDM parameters. By contrast, it cannot be used to model void clustering in the presence of stronger modifications of gravity. In future work, we will further develop our analysis of void clustering statistics by testing our model on large, high-resolution simulations and on real data, also addressing void clustering in the halo distribution. Finally, we also plan to combine these constraints with those from other cosmological probes.
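The void two-point statistics discussed above can be estimated from a catalogue by simple pair counting; the natural estimator below (DD/RR − 1) is a minimal stand-in for the estimators actually used in such analyses, run here on toy data in a unit box:

```python
import numpy as np

def two_point_cf(data, randoms, r_edges):
    """Natural estimator xi(r) = (DD/RR) * norm - 1 from 3D positions."""
    def auto_pairs(a):
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        d = d[np.triu_indices(len(a), k=1)]      # keep unique pairs only
        return np.histogram(d, bins=r_edges)[0].astype(float)
    dd, rr = auto_pairs(data), auto_pairs(randoms)
    # normalise by the number of pairs in each catalogue
    norm = (len(randoms) * (len(randoms) - 1)) / (len(data) * (len(data) - 1))
    return dd / rr * norm - 1.0

# sanity check: an unclustered catalogue correlated against itself gives xi = 0
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, (200, 3))
xi = two_point_cf(pts, pts, np.linspace(0.1, 0.9, 9))
```

For real void catalogues one would use a Landy-Szalay-type estimator and a much larger random catalogue, but the binning logic is the same.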


The goal of this thesis work is to describe both the process required to create synthetic observations of simulated Milky Way-like galaxies in the 21 cm emission line of neutral hydrogen (HI), and the analysis needed to effectively compare the generated output with observations of real galaxies. First, the quantum theory underlying the 21 cm HI emission is described, illustrating the importance of this emission line in astronomy and how fundamental information about the sources of this radiation can be obtained from observational data. The work then focuses on the use of the MARTINI software to create synthetic observations of the 21 cm line for a simulated galaxy with Milky Way-like properties, generated using the SMUGGLE numerical model. Finally, we briefly describe the analysis of the synthetic data produced and their comparison with data from real observations of galaxies with similar properties, in order to obtain a qualitative assessment of the quality of the SMUGGLE model employed in the numerical simulation.
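A basic quantity recovered from 21 cm data of the kind MARTINI produces is the HI column density; in the optically thin limit, N_HI ≈ 1.823 × 10^18 ∫ T_B dv (cm⁻², with T_B in K and v in km/s). A minimal sketch with a synthetic Gaussian line profile (the line parameters are invented):

```python
import numpy as np

def hi_column_density(t_b, dv):
    """Optically thin HI column density, N_HI = 1.823e18 * sum(T_B) * dv  [cm^-2].

    t_b : brightness temperature per velocity channel [K]
    dv  : channel width [km/s]
    The 21 cm line itself sits at a rest frequency of 1420.405 MHz.
    """
    return 1.823e18 * np.sum(t_b) * dv

# toy spectrum: Gaussian line, 5 K peak, 10 km/s dispersion, 1 km/s channels
v = np.arange(-100.0, 101.0, 1.0)
t_b = 5.0 * np.exp(-0.5 * (v / 10.0) ** 2)
n_hi = hi_column_density(t_b, dv=1.0)   # ~2.3e20 cm^-2
```

Summing this per spectrum across a synthetic data cube yields the HI column density map that is typically compared with real interferometric observations.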


The ATLAS experiment, like the other experiments operating at the Large Hadron Collider, produces Petabytes of data every year, which must then be archived and processed. The experiments have also committed to making these data accessible worldwide. The Worldwide LHC Computing Grid (WLCG) was designed to meet these needs, combining the computing power and storage capacity of more than 170 sites spread across the world. Most WLCG sites have developed storage management technologies, which also handle user requests and data transfers. These systems record their activity in logfiles, rich in information that helps operators pinpoint a problem when the system malfunctions. In view of the higher data flow expected in the coming years, work is ongoing to make these sites even more reliable, and one possible way to do so is to develop a system able to analyse log files autonomously and detect the anomalies that precede a malfunction. To build such a system, the most suitable method for log-file analysis must first be identified. This thesis studies an approach that uses artificial intelligence to analyse the logfiles, specifically an approach based on the K-means clustering algorithm.
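As a toy illustration of the K-means approach studied here, the sketch below clusters log lines via bag-of-words count vectors and a plain NumPy K-means (the log messages and vocabulary are invented; a real system would ingest the site's actual logfiles and use richer features):

```python
import numpy as np
from collections import Counter

def vectorise(lines, vocab):
    """Bag-of-words count vectors for log lines over a fixed vocabulary."""
    X = np.zeros((len(lines), len(vocab)))
    for i, line in enumerate(lines):
        counts = Counter(line.lower().split())
        for j, word in enumerate(vocab):
            X[i, j] = counts[word]
    return X

def kmeans(X, k, n_iter=50, seed=0):
    """Plain K-means: random init, assign to nearest centroid, recompute."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):               # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

logs = ["transfer ok file a", "transfer ok file b",
        "error timeout on request", "error timeout on disk"]
vocab = sorted({w for line in logs for w in line.split()})
labels = kmeans(vectorise(logs, vocab), k=2)
```

With k = 2 the "transfer" lines and the "error" lines land in different clusters; anomaly detection would then flag lines far from every learned centroid.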


The internal dynamics of elliptical galaxies in clusters depends on many factors, including the environment in which the galaxy is located. In addition to strong encounters with other galaxies, we can also consider the gravitational interaction with the ubiquitous Cluster Tidal Field (CTF). As recognized in many studies, one possible way in which the CTF affects the dynamics of galaxies inside the cluster is that they may start oscillating as “rigid bodies” around their equilibrium positions in the field, with the periods of these oscillations curiously similar to those of stellar orbits in the outer parts of galaxies. Resonances between the two motions are hence expected, and this phenomenon could significantly contribute to the formation of the Intracluster Stellar Population (ISP), whose presence is abundantly confirmed by observations. In this thesis work, we propose to study the motion of an elliptical galaxy, modelled as a rigid body, in the CTF, especially when its center of mass traces a quasi-circular orbit in the cluster gravitational potential. This case extends and generalizes previous models and findings, proceeding towards a much more realistic description of galaxy motion. In addition, the presence of a further oscillation, namely that of the entire galaxy along its orbit, will possibly increase the probability of resonances and, consequently, raise the rate of ISP production to nearly the observed values. Thus, after reviewing the dynamics of a rigid body in a generic force field, we will assess some physically relevant studies and report their main results, discussing their implications for our problem. We will conclude our discussion by focusing on the more realistic scenario of an elliptical galaxy whose center of mass moves on a quasi-circular orbit in a spherically symmetric potential. The derivation of the fundamental equations of motion will serve as the basis for future modelling and discussions.
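Schematically, the “rigid body” libration invoked above obeys a pendulum-like equation; the form below is the standard gravity-gradient result for planar oscillations of a body held at radius $r$ in a spherical potential, written with illustrative symbols ($I_1 > I_3$ principal moments of inertia) rather than the Thesis's own notation:

```latex
% Planar libration angle \theta of a rigid body in a tidal field of strength
% \omega_t^2; schematic form, not the Thesis's exact equations of motion.
\ddot{\theta} = -\frac{3}{2}\,\omega_{\rm t}^{2}\,\frac{I_1 - I_3}{I_3}\,\sin 2\theta,
\qquad
\omega_{\rm t}^{2} \equiv \frac{G\,M(r)}{r^{3}}.
```

For small angles this gives an oscillation frequency $\omega_{\rm osc} = \omega_{\rm t}\sqrt{3\,(I_1 - I_3)/I_3}$, of the same order as the orbital frequencies of stars in the galaxy outskirts — hence the expected resonances.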


A substantial upgrade of the LHC is expected in the coming years, which plans to increase the integrated luminosity by a factor of 10 with respect to the current value. This parameter is proportional to the number of collisions per unit time. Consequently, the computational resources required at all levels of event reconstruction will grow considerably. For this reason, the CMS collaboration began some years ago to explore the possibilities offered by heterogeneous computing, i.e. the practice of distributing the computation between CPUs and dedicated accelerators such as graphics cards (GPUs). One difficulty of this approach is the need to write, validate, and maintain different code for each device on which it must run. This thesis explores the possibility of using SYCL to translate event-reconstruction code so that it runs efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that complies with the ISO C++ standard. This study focuses on porting CLUE, an algorithm for clustering calorimetric energy deposits, using oneAPI, the SYCL implementation supported by Intel. The algorithm was first translated in its standalone version, mainly to gain familiarity with SYCL and to ease performance comparisons with the existing versions. In this case, performance is very close to that of native CUDA code on the same hardware. To validate the physics, the algorithm was integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to those of the other implementations while, in terms of computational performance, in some cases SYCL produces code faster than other abstraction layers adopted by CMS, making it an interesting option for the future of heterogeneous computing in high-energy physics.
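CLUE belongs to the family of density-peak clustering algorithms: each hit gets a local density, each non-peak point is attached to its nearest higher-density neighbour, and sufficiently dense, isolated peaks are promoted to cluster seeds. The Python sketch below illustrates that logic only — it is not the CMS implementation (which is tiled C++/CUDA/SYCL code), and the parameter names merely echo CLUE's dc, rho_c, delta_c:

```python
import numpy as np

def clue_like(points, dc, rho_c, delta_c):
    """Simplified density-peak clustering in the spirit of CLUE (no tiling,
    no outlier handling): density within dc, seeds where density and
    separation are both high, followers chained to nearest higher neighbour."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1          # neighbours within dc (excl. self)
    nh = np.full(n, -1)                     # nearest higher-density neighbour
    delta = np.full(n, np.inf)              # distance to that neighbour
    for i in range(n):
        higher = np.where((rho > rho[i]) | ((rho == rho[i]) & (np.arange(n) < i)))[0]
        if len(higher):
            j = higher[np.argmin(d[i, higher])]
            nh[i], delta[i] = j, d[i, j]
    labels = np.full(n, -1)
    seeds = np.where((rho >= rho_c) & (delta > delta_c))[0]
    for c, s in enumerate(seeds):
        labels[s] = c
    # propagate seed labels down the density ordering (peaks first)
    for i in np.argsort(-rho, kind="stable"):
        if labels[i] < 0 and nh[i] >= 0:
            labels[i] = labels[nh[i]]
    return labels

# toy check: two well-separated blobs should come out as two clusters
rng = np.random.default_rng(2)
blob_a = rng.normal((0.0, 0.0), 0.5, (20, 2))
blob_b = rng.normal((10.0, 10.0), 0.5, (20, 2))
labels = clue_like(np.vstack([blob_a, blob_b]), dc=1.5, rho_c=3, delta_c=3)
```

The porting question studied in the thesis is precisely how to express the density and nearest-higher loops as device kernels once, via SYCL, instead of maintaining separate CPU and CUDA versions.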


In this thesis, I aim to study the evolution with redshift of the gas mass fraction of a sample of 53 sources (from z ∼ 0.5 to z > 5) serendipitously detected in ALMA band 7 as part of the ALMA Large Program to INvestigate C II at Early Times (ALPINE). First, I used the SED-fitting software CIGALE, which implements energy balance between the optical and the far-infrared parts of the spectrum, to produce a best-fit template of my sources and to estimate some of their physical properties, such as the star formation rate (SFR), the total infrared luminosity and the total stellar mass. Then, exploiting the tight correlation found by Scoville et al. (2014) between the ISM molecular gas mass and the rest-frame 850 μm luminosity, I used the latter, extrapolated from the best-fit template with a code that I wrote in Python, as a tracer of the molecular gas. For my sample, I then derived the most important physical properties, such as molecular gas mass, gas mass fraction, specific star formation rate and depletion timescale, which allowed me to better categorize the sources and place them within the evolutionary history of the Universe. I also fitted my sources, via another code I wrote in Python, with a general modified blackbody (MBB) model taken from the literature (Gilli et al. 2014; D'Amato et al. 2020), to have a direct method of comparison with similar galaxies. What is evident at the end of this work is that the methods used to derive the physical quantities of the sources are consistent with each other, and that these in turn are in good agreement with what is found in the literature.
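The gas-mass step described above reduces to a single conversion; the sketch below uses α_850 ≈ 6.7 × 10^19 erg s⁻¹ Hz⁻¹ M⊙⁻¹ (the fiducial Scoville et al. calibration — treat the exact value, and the example luminosity, as assumptions rather than the thesis's numbers):

```python
# Fiducial Scoville et al. calibration of the L_850 -> M_mol conversion;
# an assumed value, not taken from the thesis itself.
ALPHA_850 = 6.7e19   # erg s^-1 Hz^-1 Msun^-1

def molecular_gas_mass(l_850):
    """M_mol [Msun] from the rest-frame 850 um specific luminosity [erg/s/Hz]."""
    return l_850 / ALPHA_850

def depletion_time(m_gas, sfr):
    """t_dep [yr] = M_gas / SFR, with SFR in Msun/yr."""
    return m_gas / sfr

# hypothetical source: L_850 chosen so that M_mol = 1e10 Msun
l_850 = 6.7e29
m_gas = molecular_gas_mass(l_850)          # 1e10 Msun
t_dep = depletion_time(m_gas, sfr=100.0)   # 1e8 yr for SFR = 100 Msun/yr
```

Dividing M_mol by (M_mol + M_*) from the SED fit then gives the gas mass fraction whose redshift evolution the thesis traces.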


A recent integral-field spectroscopic (IFS) survey, the MASSIVE survey (Ma et al. 2014), observed the 116 most massive (MK < −25.3 mag, stellar mass M∗ > 10^11.6 M⊙) early-type galaxies (ETGs) within 108 Mpc, out to radii as large as 40 kpc, corresponding to ∼ 2 − 3 effective radii (Re). One of the major findings of the MASSIVE survey is that the galaxy sample is split nearly equally among three groups showing three different velocity dispersion profiles σ(R) outside a radius of ∼ 5 kpc (falling, flat and rising with radius). The purpose of this thesis is to model the kinematic profiles of six ETGs included in the MASSIVE survey and representative of the three observed σ(R) shapes, with the aim of investigating their dynamical structure. Models for the chosen galaxies are built using the numerical code JASMINE (Posacki, Pellegrini, and Ciotti 2013). The code produces models of axisymmetric galaxies, based on the solution of the Jeans equations for a multicomponent gravitational potential (supermassive black hole, stars and dark matter halo). In order to obtain a good agreement between the kinematics derived from the Jeans equations and the σ and rotation velocity V observed by MASSIVE (Veale et al. 2016, 2018), I derived constraints on the dark matter distribution and orbital anisotropy. This work suggests a trend of the dark matter amount and distribution with the shape of the velocity dispersion profile in the outer regions: the models of galaxies with flat or rising velocity dispersion profiles show higher dark matter fractions fDM both within 1 Re and within 5 Re. Orbital anisotropy alone cannot account for the different observed trends of σ(R) and has a minor effect compared to variations of the mass profile. Galaxies with similar stellar mass M∗ that show different velocity dispersion profiles (from falling to rising) are successfully modelled with a variation of the halo mass Mh.


Garlic is a spice and a medicinal plant; hence, there is an increasing interest in developing new varieties with different culinary properties or with a high content of nutraceutical compounds. Phenotypic traits and dominant molecular markers are predominantly used to evaluate the genetic diversity of garlic clones. However, 24 codominant SSR markers specific to garlic are available in the literature, fostering germplasm research. In this study, we genotyped 130 garlic accessions from Brazil and abroad using 17 polymorphic SSR markers to assess their genetic diversity and structure. This is the first attempt to evaluate a large set of accessions maintained by Brazilian institutions. A high level of redundancy was detected in the collection (50% of the accessions represented eight haplotypes). However, the non-redundant accessions presented high genetic diversity. We detected on average five alleles per locus, a Shannon index of 1.2, HO of 0.5, and HE of 0.6. A core collection was defined with 17 accessions, covering 100% of the alleles with minimum redundancy. Overall FST and D values indicate a strong genetic structure among accessions. Two major groups, identified by both model-based (Bayesian approach) and hierarchical clustering (UPGMA dendrogram) techniques, were coherent with the classification of accessions according to maturity time (growth cycle): early/late and midseason accessions. Assessing the genetic diversity and structure of garlic collections is the first step towards efficient management and conservation of accessions in genebanks, as well as towards future genetic studies and the improvement of garlic worldwide.
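The per-locus diversity statistics quoted above (number of alleles, Shannon index, HO, HE) follow directly from allele frequencies; a minimal sketch with invented diploid genotypes for one SSR locus:

```python
import numpy as np

def locus_stats(genotypes):
    """Diversity statistics for one locus from diploid genotypes,
    each given as a tuple of two allele labels."""
    alleles = [a for g in genotypes for a in g]
    labels, counts = np.unique(alleles, return_counts=True)
    p = counts / counts.sum()                           # allele frequencies
    shannon = -np.sum(p * np.log(p))                    # Shannon index I
    he = 1.0 - np.sum(p ** 2)                           # expected heterozygosity
    ho = np.mean([g[0] != g[1] for g in genotypes])     # observed heterozygosity
    return len(labels), shannon, ho, he

# hypothetical genotypes for five accessions at one locus
genotypes = [("A", "B"), ("A", "A"), ("B", "C"), ("A", "C"), ("B", "B")]
n_alleles, shannon, ho, he = locus_stats(genotypes)
```

Averaging these quantities over the 17 loci yields summary values of the kind reported in the abstract (though note that garlic is clonally propagated, so HO mainly reflects fixed heterozygosity rather than random mating).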


Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature in DNA strand break (SB) yields and in the selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28 keV-220 keV were superimposed on a geometric DNA model composed of 2.7 × 10^6 nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy ETH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy transfer-based SB definitions give SB yields similar to the definition based on energy deposit when ETH ≈ 10.79 eV, but deviate significantly for higher ETH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and on their implementations.
The authors show that, for the four studied models, yields differing by up to 54% for SSBs and by up to 32% for DSBs are expected, as a function of the incident electron energy and of the models being compared. MCTS simulations make it possible to compare direct DNA damage types and complexities induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms, such as the DNA model, the dose distribution, the SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
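The clustering of SBs into SSBs and DSBs described above can be sketched as a one-dimensional grouping by inter-break distance; the 10 bp threshold and the break list below are illustrative choices, not the authors' exact algorithm:

```python
def classify_breaks(breaks, d_max=10):
    """Group strand breaks into SSB/DSB clusters.

    breaks : list of (position_bp, strand) tuples, strand in {1, 2}
    Two breaks join the same cluster when separated by <= d_max bp along
    the DNA; a cluster touching both strands counts as a DSB, otherwise
    as an SSB (possibly a complex one with several same-strand breaks).
    """
    breaks = sorted(breaks)
    clusters, current = [], [breaks[0]]
    for b in breaks[1:]:
        if b[0] - current[-1][0] <= d_max:
            current.append(b)
        else:
            clusters.append(current)
            current = [b]
    clusters.append(current)
    ssb = sum(1 for c in clusters if len({s for _, s in c}) == 1)
    dsb = sum(1 for c in clusters if len({s for _, s in c}) == 2)
    return ssb, dsb

# toy track: opposite-strand pair at ~100 bp, lone break, same-strand pair
breaks = [(100, 1), (105, 2), (300, 1), (700, 1), (703, 1)]
ssb, dsb = classify_breaks(breaks)
```

The paper's point is precisely that the counts returned depend on d_max and on the SB definition feeding the break list, which is why these choices must be reported explicitly.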