12 results for GRAVITATIONAL LENSING: WEAK
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This work considers the reconstruction of strong gravitational lenses from their observed effects on the light distribution of background sources. After reviewing the formalism of gravitational lensing and the most common and relevant lens models, new analytical results on the elliptical power-law lens are presented, including new expressions for the deflection, potential, shear and magnification, which naturally lead to a fast numerical scheme for practical calculation. The main part of the thesis investigates lens reconstruction with extended sources by means of the forward reconstruction method, in which the lenses and sources are given by parametric models. The numerical realities of the problem make it necessary to find targeted optimisations for the forward method, in order to make it feasible for general application to modern, high-resolution images. The result of these optimisations is presented in the Lensed algorithm. Subsequently, a number of tests for general forward reconstruction methods are created to decouple the influence of the source from the lens reconstruction, in order to objectively demonstrate the constraining power of the reconstruction. The final chapters on lens reconstruction contain two sample applications of the forward method. One is the analysis of images from a strong lensing survey. Such surveys today contain ~100 strong lenses, and much larger sample sizes are expected in the future, making it necessary to quickly and reliably analyse catalogues of lenses with a fixed model. The second application deals with the opposite situation of a single observation that is to be confronted with different lens models, where the forward method allows for natural model building. This is demonstrated using an example reconstruction of the "Cosmic Horseshoe".
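For reference, the quantities named above (deflection, potential, shear, magnification) are tied together by the standard lensing formalism that the thesis reviews; these are textbook definitions, not the thesis's new power-law expressions:

```latex
% Lens equation: a source at \beta is imaged at positions \theta satisfying
\beta = \theta - \alpha(\theta), \qquad \alpha = \nabla\psi ,
% where \psi is the lensing potential. The local image distortion is
A = \frac{\partial\beta}{\partial\theta}
  = \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix},
% with convergence \kappa = \tfrac{1}{2}\nabla^2\psi and shear (\gamma_1,\gamma_2).
% The magnification is the inverse Jacobian determinant:
\mu = \frac{1}{\det A} = \frac{1}{(1-\kappa)^2 - \gamma_1^2 - \gamma_2^2}.
```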
An appendix presents an independent work on the use of weak gravitational lensing to investigate theories of modified gravity which exhibit screening in the non-linear regime of structure formation.
Abstract:
21 cm cosmology opens an observational window onto previously unexplored cosmological epochs such as the Epoch of Reionization (EoR), the Cosmic Dawn and the Dark Ages, using powerful radio interferometers such as the planned Square Kilometre Array (SKA). Among the many applications that can potentially improve our understanding of standard cosmology, we study the promising opportunity of measuring weak gravitational lensing with 21 cm radiation as the source. We performed this study in two different cosmological epochs: at a typical EoR redshift and subsequently at a post-EoR redshift. We show how the lensing signal can be reconstructed using a three-dimensional optimal quadratic lensing estimator in Fourier space, using a single frequency band or combining measurements from multiple frequency bands. To this purpose, we implemented a simulation pipeline capable of dealing with issues that cannot be treated analytically. Considering the current SKA plans, we studied the performance of the quadratic estimator at typical EoR redshifts, for different survey strategies, comparing two thermal noise models for the SKA-Low array. The simulations we performed take into account the beam of the telescope and the discreteness of the visibility measurements. We found that an SKA-Low interferometer in its phase 1 should obtain high-fidelity images of the underlying mass distribution only if several bands are stacked together, covering a redshift range from z=7 to z=11.5. SKA-Low phase 2, modelled so as to improve the sensitivity of the instrument by almost an order of magnitude, should be capable of providing good-quality images even when the signal is detected within a single frequency band. Considering also the serious effect that foregrounds could have on these detections, we discuss the limits of these results and the possibility these models provide of measuring an accurate lensing power spectrum.
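The multi-band stacking idea can be illustrated in miniature. The sketch below is not the thesis's quadratic estimator pipeline; it only shows the final combination step under the simplifying assumption that each frequency band yields an independent, unbiased convergence estimate with known noise variance, in which case inverse-variance weighting is optimal:

```python
import numpy as np

def combine_bands(estimates, variances):
    """Inverse-variance-weighted combination of independent per-band
    lensing estimates (illustrative stand-in for stacking frequency bands).

    estimates : list of arrays, one reconstructed convergence map per band
    variances : list of per-band noise variances (scalars)
    Returns the combined map and the variance of the combined estimate.
    """
    w = np.array([1.0 / v for v in variances])          # inverse-variance weights
    maps = np.stack(estimates)                          # shape (n_bands, ...)
    combined = np.tensordot(w, maps, axes=1) / w.sum()  # weighted average over bands
    combined_var = 1.0 / w.sum()                        # noise variance shrinks as bands stack
    return combined, combined_var
```

Stacking N equally noisy bands reduces the noise variance by a factor N, which is the qualitative reason phase 1 needs several bands where the more sensitive phase 2 can work within one.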
Abstract:
In this Thesis we present our work on the analysis of galaxy clusters through their X-ray emission and the gravitational lensing effect that they induce. Our research was mainly aimed at verifying and possibly explaining the observed mismatch between the galaxy cluster mass distributions estimated through two of the most promising techniques, namely X-ray and gravitational lensing analyses. Moreover, it is well established that combined, multi-wavelength analyses are extremely effective in addressing and explaining the open issues in astronomy; however, in order to follow this approach, it is crucial to test the reliability and the limitations of the individual analysis techniques. In this Thesis we also assess the impact of some factors that could affect both the X-ray and the strong lensing analyses.
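As background for the X-ray side of the comparison, cluster X-ray masses conventionally rest on the hydrostatic equilibrium estimate (a standard relation, not a result of this thesis):

```latex
% Hydrostatic mass from the gas density profile n(r) and temperature T(r):
M(<r) = -\frac{k_B\, T(r)\, r}{G\, \mu m_p}
        \left( \frac{d\ln n}{d\ln r} + \frac{d\ln T}{d\ln r} \right),
% where \mu m_p is the mean particle mass of the intracluster gas.
% Lensing masses require no equilibrium assumption, so departures from
% hydrostatic equilibrium are one candidate origin of the X-ray/lensing
% mass mismatch discussed above.
```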
Abstract:
Weak lensing experiments such as the future ESA-approved mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in real data. In this Thesis, we present a method to simulate observations, producing realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters in photometric galaxy catalogues, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with mass above 10^14.2 solar masses with a low rate of false detections.
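The discarded-fraction calculation can be sketched in a few lines. This is a simplified, per-object version (the thesis's actual criterion may apply to the scatter of the sample rather than to individual errors); it assumes a mock catalogue with known true and measured redshifts:

```python
import numpy as np

def discard_fraction(z_true, z_phot, sigma_req=0.05):
    """Fraction of a mock photo-z sample that fails the precision
    requirement |z_phot - z_true| < sigma_req * (1 + z), with z the
    measured redshift, as in the 0.05(1+z) criterion quoted above.

    z_true, z_phot : arrays of true and measured redshifts
    Returns the fraction of objects that would have to be discarded.
    """
    err = np.abs(z_phot - z_true)
    bad = err >= sigma_req * (1.0 + z_phot)   # objects exceeding the tolerance
    return bad.mean()                         # fraction to discard
```

Running this on catalogues simulated with and without, e.g., u-band photometry quantifies how much each ancillary band reduces the discarded fraction.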
Abstract:
Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was designed to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was designed to compare the properties of low-luminosity sources to those of higher-luminosity ones and, thus, was also used to test emission mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies.
Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, and in particular the photon index.
Abstract:
Until recently the debate on the ontology of spacetime had only philosophical significance, since, from a physical point of view, General Relativity had been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of Einstein's equations which are mathematically different and related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one was corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models considered by the authors, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on assumptions of realism about theories and entities, the two authors conclude that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame), and unlike the points of homogeneous Newtonian space, they are embedded in a rich and complex non-local holistic structure provided by the "ontic part" of the metric field.
Therefore, according to the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality"; but the inevitably relational elements involved in the physical detection of point-events in the absence of matter (highlighted by the "ontic part" of the metric field, the DO) depend closely on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, with features common both to the absolutist and substantivalist tradition and to the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least at the outset, independent of the previous ones: an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping in mind the results achieved by the work of Lusanna and Pauri and following their underlying philosophical assumptions, we intend partially to converge with their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered independently of the empirical necessities of measurement theory. Observing General Relativity from this perspective, we find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure.
The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact have neither a tensorial nor, therefore, a geometric nature, but is analysable at three levels: a first one, called the potential level (which the theory identifies with the components of the metric tensor); a second one, the connection level (whose elements determine the forces acting on masses and, as such, offer a level of description related to the one that Newtonian gravitation provides in terms of components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on this "third level" presents an immediate advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatments analysed, leads to the identification of the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exerts on extended domains of the spacetime geometrical structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence towards Lusanna and Pauri's affirmation of the existence of a holistic, non-local and relational structure on which the quantitatively identified properties of point-events depend (in addition to their own intrinsic detection), even if obtained from different considerations, is realised, in our opinion, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri).
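The decomposition behind this "third level" is a standard result of differential geometry: in four dimensions the Riemann tensor splits into its trace parts (Ricci tensor and scalar, fixed locally by matter through Einstein's equations) and its trace-free part, the Weyl tensor, which can be non-zero even in vacuum:

```latex
% Ricci decomposition of the Riemann tensor in n = 4 dimensions:
R_{abcd} = C_{abcd}
  + \left( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \right)
  - \tfrac{1}{3}\, R\, g_{a[c}\, g_{d]b},
% where C_{abcd} is the Weyl tensor, R_{ab} the Ricci tensor and R the
% Ricci scalar. In empty spacetime R_{ab} = 0, so R_{abcd} = C_{abcd}:
% all the curvature of a vacuum spacetime resides in the Weyl tensor.
```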
In the end, matter, regarded as the physical counterpart of spacetime curvature, whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, returning to the approach of Lusanna and Pauri, it affects the evolution of the DO and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think it is possible to see the holistic, relational and non-local structure of spacetime also through the "behaviour" of the Weyl tensor in terms of the Riemann tensor. This "behaviour", which leads to geometrical effects of curvature, is characterised from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that the action of matter extends indefinitely. Finally, we think that the characteristic relationality of the spacetime structure should be identified in this "primitive level of organisation" of spacetime.
Abstract:
In the literature on philosophical practices, despite the crucial role that argumentation plays in these activities, no specific argumentative theory has ever been proposed to assist the facilitator in conducting philosophical dialogue and to enhance students' critical thinking skills. The dissertation starts from a cognitive perspective that challenges the classic Cartesian notion of rationality by focusing on the limits and biases of human reasoning. An argumentative model (WRAT, Weak Reasoning Argumentative Theory) is then outlined in order to respond to the needs of philosophical dialogue. After justifying the claim that this learning activity, among other inductive methodologies, is the most suitable for critical thinking education, I inquire into the specific goal of 'arguing' within this context by means of the tools provided by Speech Act Theory: the speaker's intention is to construct new knowledge by questioning her own and others' beliefs. The proposed model is theorized on this assumption, starting from which its goals and, in turn, the related norms are pinpointed. In order to include all the epistemic attitudes required to accomplish the complex task of arguing in philosophical dialogue, I needed to integrate two opposed cognitive accounts, Dual Process Theory and the Evolutionary Approach, which, although they provide incompatible descriptions of reasoning, can be integrated to provide a normative account of argumentation. The model, apart from offering a theoretical contribution to argumentation studies, is designed to be applied to the Italian educational system, in particular to classes in technical and professional high schools belonging to the newly created Inventio network.
This initiative is one of the outcomes of the research project by the same name, which also includes an original Syllabus, research seminars, a monitoring action and publications focused on introducing philosophy, in the form of workshop activities, into technical and professional schools.
Abstract:
This Thesis explores two novel and independent cosmological probes, Cosmic Chronometers (CCs) and Gravitational Waves (GWs), to measure the expansion history of the Universe. CCs provide direct and cosmology-independent measurements of the Hubble parameter H(z) up to z~2. In parallel, GWs provide a direct measurement of the luminosity distance without requiring additional calibration, thus yielding a direct measurement of the Hubble constant H0=H(z=0). This Thesis extends the methodologies of both of these probes to maximize their scientific yield. This is achieved by accounting for the interplay of cosmological and astrophysical parameters to derive them jointly, study possible degeneracies, and ultimately minimize potential systematic effects. As a legacy value, this work also provides interesting insights into galaxy evolution and compact binary population properties. The first part presents a detailed study of intermediate-redshift passive galaxies as CCs, with a focus on the selection process and the study of their stellar population properties using specific spectral features. From their differential aging, we derive a new measurement of the Hubble parameter H(z) and thoroughly assess potential systematics. In the second part, we develop a novel methodology and pipeline to obtain joint cosmological and astrophysical population constraints using GWs in combination with galaxy catalogs. This is applied to GW170817 to obtain a measurement of H0. We then perform realistic forecasts to predict joint cosmological and astrophysical constraints from black hole binary mergers for upcoming gravitational wave observatories and galaxy surveys. Using these two probes we provide an independent reconstruction of H(z), with direct measurements of H0 from GWs and of H(z) up to z~2 from CCs, and demonstrate that they can be powerful independent probes to unveil the expansion history of the Universe.
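The cosmic-chronometer method rests on a simple relation: since the scale factor is a = 1/(1+z), the Hubble rate follows directly from the differential age of passive galaxies,

```latex
H(z) = \frac{\dot a}{a} = -\frac{1}{1+z}\,\frac{dz}{dt}
     \;\approx\; -\frac{1}{1+z}\,\frac{\Delta z}{\Delta t},
% so measuring the age difference \Delta t between passive galaxies
% separated by a small redshift interval \Delta z yields H(z) directly,
% without assuming a cosmological model; this is the "differential aging"
% referred to in the abstract.
```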
Abstract:
In this Thesis, we present a series of works that encompass the fundamental steps of cosmological analyses based on galaxy clusters, spanning from mass calibration to the derivation of cosmological constraints through counts and clustering. Firstly, we focus on the 3D two-point correlation function (2PCF) of the galaxy cluster sample of Planck Collaboration XXVII (2016). The masses of these clusters are expected to be underestimated, as they are derived from a scaling relation calibrated through X-ray observations. We derived a mass bias which disagrees with simulation predictions, consistent with what was derived by Planck Collaboration VI (2020). Furthermore, in this Thesis we analyse the cluster counts and the 2PCF, respectively, of the photometric galaxy cluster sample developed by Maturi et al. (2019), based on the third data release of KiDS (KiDS-DR3, de Jong et al. 2017). We derived constraints on fundamental cosmological parameters which are consistent, and competitive in terms of uncertainties, with other state-of-the-art cosmological analyses. Then, we introduce a novel approach to establishing galaxy colour-redshift relations for cluster weak-lensing analyses, regardless of the specific photometric bands in use. This method optimises the completeness of the selection of cluster background galaxies while maintaining a defined purity threshold. Based on the galaxy sample of Bisigello et al. (2020), we calibrated two colour selections, one relying on the ground-based griz bands, and the other including the griz and Euclid YJH bands. In addition, we present preliminary work on the weak-lensing mass calibration of the clusters detected by Maturi et al. (in prep.) in the fourth data release of KiDS (KiDS-1000, Kuijken et al. 2019). This mass calibration will enable cosmological analyses based on cluster counts and clustering, from which we expect remarkable improvements over the results derived in KiDS-DR3.