7 results for Deep overbite
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Quasars and AGN play an important role in many aspects of modern cosmology. Of particular interest is the interplay between AGN activity and the formation and evolution of galaxies and structures. Studies of nearby galaxies have revealed that most (and possibly all) galaxy nuclei contain a super-massive black hole (SMBH), and that between a third and a half of them show some evidence of activity (Kormendy and Richstone, 1995). The discovery of a tight relation between black hole mass and the velocity dispersion of the host galaxy suggests that the growth of SMBHs and the evolution of their host galaxies are linked. In this context, studying the evolution of AGN through the luminosity function (LF) is fundamental to constraining theories of galaxy and SMBH formation and evolution. Recently, many theories have been developed to describe the physical processes possibly responsible for a common formation scenario for galaxies and their central black holes (Volonteri et al., 2003; Springel et al., 2005a; Vittorini et al., 2005; Hopkins et al., 2006a), and an increasing number of observations in different bands are focused on collecting ever larger quasar samples. Many issues, however, remain not yet fully understood. In the context of the VVDS (VIMOS-VLT Deep Survey), we collected and studied an unbiased sample of spectroscopically selected faint type-1 AGN with a unique and straightforward selection function. Indeed, the VVDS is a large, purely magnitude-limited spectroscopic survey of faint objects, free of any morphological and/or color preselection. We studied the statistical properties of this sample and its evolution up to redshift z ~ 4. Because of the contamination of the AGN light by the host galaxies at the faint magnitudes explored by our sample, we observed that a significant fraction of the AGN in our sample would be missed by the UV-excess and morphological criteria usually adopted for the pre-selection of optical QSO candidates.
If not properly taken into account, this failure to select particular sub-classes of AGN could, in principle, affect some of the conclusions drawn from AGN samples based on these selection criteria. The absence of any pre-selection in the VVDS gives us a very complete sample of AGN, including objects with unusual colors and continuum shapes. The VVDS AGN sample in fact shows redder colors than expected from comparison with, for example, the color track derived from the SDSS composite spectrum. In particular, the faintest objects have on average redder colors than the brightest ones. This can be attributed both to a large fraction of dust-reddened objects and to significant contamination from the host galaxy. We tested these possibilities by examining the global spectral energy distribution of each object using, in addition to the U, B, V, R and I-band magnitudes, the UV GALEX and IR Spitzer bands, and fitting it with a combination of AGN and galaxy emission, also allowing for the possibility of extinction of the AGN flux. We found that for 44% of our objects the contamination from the host galaxy is not negligible, and this fraction decreases to 21% if we restrict the analysis to a bright subsample (M_1450 < -22.15). Our estimated integral surface density at I_AB < 24.0 is 500 AGN per square degree, the highest surface density of a spectroscopically confirmed sample of optically selected AGN. We derived the B-band luminosity function for 1.0 < z < 3.6 using the 1/Vmax estimator. Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint part of the luminosity function up to high redshift. A comparison of our data with the 2dF sample at low redshift (1 < z < 2.1) shows that the VVDS data cannot be well fitted by the pure luminosity evolution (PLE) models derived from previous optically selected samples.
Qualitatively, this appears to be because our data suggest an excess of faint objects at low redshift (1.0 < z < 1.5) with respect to these models. By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b) and testing a number of different evolutionary models, we find that the model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity-dependent density evolution (LDDE) model, similar to those derived from the major X-ray surveys. Such a parameterization allows the redshift of the AGN density peak to change as a function of luminosity, thus fitting the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find, for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift for lower luminosity objects. The position of this peak moves from z ~ 2.0 for MB < -26.0 to z ~ 0.65 for -22 < MB < -20. This result, already found in a number of X-ray selected AGN samples, is consistent with a scenario of "AGN cosmic downsizing", in which the density of the more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of the Universe (i.e. at higher redshift) than that of low-luminosity AGN, which reaches its maximum later (i.e. at lower redshift). This behavior has long been claimed for elliptical galaxies and is not easy to reproduce in the hierarchical cosmogonic scenario, where more massive Dark Matter Halos (DMH) form on average later, by merging of less massive halos.
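The 1/Vmax estimator used above weights each object by the inverse of the maximum comoving volume within which it would remain above the survey flux limit, so that the space density in a magnitude bin is the sum of 1/Vmax over the objects in that bin. A minimal sketch with hypothetical values (real Vmax values require assuming a cosmology and k-corrections, which are omitted here):

```python
import numpy as np

# Hypothetical sample: for each AGN, its absolute magnitude and the
# maximum comoving volume (Mpc^3) within which it would still be
# brighter than the survey magnitude limit. Values are made up.
absolute_mag = np.array([-23.1, -23.4, -22.8, -24.0, -23.2])
v_max = np.array([2.0e7, 3.5e7, 1.2e7, 8.0e7, 2.5e7])  # Mpc^3

# 1/Vmax estimator: space density in a magnitude bin is the sum of
# 1/Vmax over the objects in that bin, divided by the bin width.
bin_edges = np.array([-24.5, -23.5, -22.5])
bin_width = 1.0

phi = np.zeros(len(bin_edges) - 1)
for mag, vm in zip(absolute_mag, v_max):
    idx = np.searchsorted(bin_edges, mag) - 1
    if 0 <= idx < len(phi):
        phi[idx] += 1.0 / (vm * bin_width)

print(phi)  # space density per Mpc^3 per mag in each bin
```

Each object contributes more to the counts the smaller the volume over which it is detectable, which corrects for the fact that intrinsically faint objects are only seen nearby.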
Abstract:
Because of its particular position and complex geological history, the Northern Apennines have been considered a natural laboratory for several kinds of investigation. It is nevertheless difficult to combine all the knowledge about the Northern Apennines into a unique picture that explains the structural and geological emplacement that produced them. The main goal of this thesis is to put together all the information on the deformation of this region - in the crust and at depth - and to describe a geodynamical model that accounts for it. To do so, we analyzed the pattern of deformation in the crust and in the mantle. In both cases the deformation was studied using information recovered from earthquakes, although with different techniques. The shallower deformation was studied using seismic moment tensor information. For this purpose we used the method described by Arvidsson and Ekstrom (1998), which, by including surface waves in the inversion [and not only body waves, as in the Centroid Moment Tensor method (Dziewonski et al., 1981)], allows seismic source parameters to be determined for earthquakes with magnitudes as small as 4.0. We applied this tool to the Northern Apennines and, through this activity, built up the Italian CMT dataset (Pondrelli et al., 2006) and computed the pattern of seismic deformation using the Kostrov (1974) method on a regular grid of 0.25-degree cells. We obtained a map of lateral variations of the pattern of seismic deformation at different depth layers, taking into account the fact that shallow earthquakes (within 15 km depth) occur everywhere in the region, while most events with deeper hypocenters (15-40 km) occur only in the outer part of the belt, on the Adriatic side. For the analysis of the deep deformation, i.e. that occurring in the mantle, we used the anisotropy information characterizing the structure below the Northern Apennines.
Seismic anisotropy is a property of the Earth that, in the crust, is due to the presence of aligned fluid-filled cracks or to alternating isotropic layers with different elastic properties, while in the mantle its most important cause is the lattice-preferred orientation (LPO) of mantle minerals such as olivine. Olivine is a highly anisotropic mineral that tends to align its fast crystallographic axis (a-axis) parallel to the asthenospheric flow, as a response to the finite strain induced by geodynamic processes. The seismic anisotropy pattern of a region is measured using the shear-wave splitting phenomenon (the seismological analogue of optical birefringence). Here we apply the Sileny and Plomerova (1996) approach to teleseismic earthquakes recorded at stations located in the study region. The results are analyzed in terms of their lateral and vertical variations, to better define the Earth structure beneath the Northern Apennines. We find two anisotropic domains, a Tuscany and an Adria one, with a pattern of seismic anisotropy that varies laterally in a way similar to the seismic deformation. Moreover, beneath the Adriatic region the distribution of the splitting parameters is so complex as to require a dedicated analysis. We therefore applied the code of Menke and Levin (2003) to our data, which searches for different models of structures with multilayer anisotropy. We found that the structure beneath the Po Plain is probably even more complicated than expected. On the basis of the results obtained for this thesis, together with those from previous works, we suggest that the slab roll-back which created the Apennines and opened the Tyrrhenian Sea evolved differently at the northern boundary of the Northern Apennines than in its southern part. In particular, the trench retreat developed primarily south of our study region, with an eastward roll-back.
In the northern portion of the orogen, after a first stage during which the retreat was perpendicular to the trench, it became oblique with respect to the structure.
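The Kostrov (1974) summation mentioned above relates the summed moment tensors of the earthquakes inside a cell to the average seismic strain rate, eps_ij = sum(M_ij) / (2*mu*V*T). A minimal sketch; the shear modulus, cell geometry, catalogue time span and moment tensors below are all illustrative, not taken from the thesis dataset:

```python
import numpy as np

# Kostrov-style summation: the average seismic strain rate in a cell is
# proportional to the sum of the moment tensors of the events inside it.
MU = 3.0e10                                       # shear modulus, Pa
CELL_VOLUME = (0.25 * 111e3) ** 2 * 15e3          # ~0.25 deg cell, 15 km thick (m^3)
TIME_SPAN = 30 * 365.25 * 86400                   # 30-year catalogue, seconds

# Hypothetical deviatoric moment tensors (N*m) in a local x,y,z frame
moment_tensors = [
    np.diag([1.0e16, -1.0e16, 0.0]),
    np.diag([5.0e15, -5.0e15, 0.0]),
]

M_sum = sum(moment_tensors)
strain_rate = M_sum / (2 * MU * CELL_VOLUME * TIME_SPAN)
print(strain_rate[0, 0])  # extension rate along x, in 1/s
```

Mapping the eigenvectors of the summed tensor cell by cell is what produces the lateral-variation maps described in the abstract.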
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at estimating planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. Once solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based on a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link of the order of 3×10^-15 at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute a value of 8×10^-15 to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay.
In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapor content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded with media calibration systems capable of calibrating at least both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique supporting both deep-space probe navigation and radio science experiments. After an introduction to deep-space tracking techniques, observables and error sources, Chapter 2 investigates the troposphere path delay in depth, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 analyzes the status and performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of the GNSS software (S/W) developed to estimate troposphere calibrations for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken to evaluate its performance; a description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of the microwave radiometers to be used to support radio science experiments. The analysis was carried out on radiometric measurements from the ESA/ESTEC instruments installed at Cabauw (NL), compared with the requirements of MORE.
Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
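The Allan deviation figures quoted above (3×10^-15 and 8×10^-15 at 1000 s) refer to the standard two-sample statistic on fractional-frequency data. A minimal non-overlapping implementation, run on synthetic white noise (the data are illustrative, not actual tracking residuals):

```python
import numpy as np

def allan_deviation(y, tau0, m):
    """Non-overlapping Allan deviation of fractional-frequency data y,
    sampled every tau0 seconds, at averaging time m*tau0."""
    n = len(y) // m
    # Average the data in consecutive blocks of m samples
    blocks = y[: n * m].reshape(n, m).mean(axis=1)
    diffs = np.diff(blocks)
    # Two-sample variance: half the mean squared difference of adjacent means
    return np.sqrt(0.5 * np.mean(diffs ** 2))

# Synthetic white frequency noise at the 1e-13 level (illustrative only)
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)

# For white frequency noise the Allan deviation falls off as 1/sqrt(tau)
for m in (1, 10, 100):
    print(m, allan_deviation(y, tau0=1.0, m=m))
```

Averaging the link over longer integration times suppresses white frequency noise, which is why the requirements are quoted at a specific integration time (1000 s).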
Abstract:
In this study new tomographic models of Colombia were calculated, using the seismicity recorded by the Colombian seismic network during the period 2006-2009. In this time period, the improvement of the seismic network yields more stable hypocentral results than older datasets and allows the computation of new 3D Vp and Vp/Vs models. The final dataset consists of 10813 P- and 8614 S-arrival times associated with 1405 earthquakes. Tests with synthetic data and a resolution analysis indicate that the velocity models are well constrained in central, western and southwestern Colombia to a depth of 160 km; the resolution is poor in northern Colombia and close to Venezuela, due to a lack of seismic stations and seismicity. The tomographic models and the relocated seismicity indicate the existence of E-SE subducting Nazca lithosphere beneath central and southern Colombia. North-south changes in the Wadati-Benioff zone, in the Vp and Vp/Vs patterns and in the volcanism show that the downgoing plate is segmented by E-W directed slab tears, suggesting the presence of three sectors. Earthquakes in the northernmost sector represent most of the Colombian seismicity and are concentrated in the 100-170 km depth interval, beneath the Eastern Cordillera. Here massive dehydration is inferred, resulting from a delay in the eclogitization of a thickened oceanic crust in a flat-subduction geometry. In this sector a cluster of intermediate-depth seismicity (the Bucaramanga Nest) is present beneath the elbow of the Eastern Cordillera, interpreted as the result of a massive and highly localized dehydration phenomenon caused by a hyper-hydrous oceanic crust. The central and southern sectors, although different in Vp pattern, show, conversely, a continuous, steep and more homogeneous Wadati-Benioff zone with overlying volcanic areas. Here a "normal-thickness" oceanic crust is inferred, allowing gradual and continuous metamorphic reactions to take place with depth and enabling fluid migration towards the mantle wedge.
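The Vp/Vs models above rest on paired P- and S-arrival times; the classical single-event consistency check is the Wadati diagram, in which S-P time differences grow linearly with P travel time with slope Vp/Vs - 1. A minimal sketch on noise-free synthetic times (the station times are invented, not from the Colombian catalogue):

```python
import numpy as np

# Wadati-diagram sketch for one earthquake recorded at several stations.
# Synthetic, noise-free times (s) generated for an assumed Vp/Vs = 1.78.
p_travel = np.array([5.0, 8.0, 12.0, 20.0, 30.0])
s_minus_p = (1.78 - 1.0) * p_travel

# Fit a line through the (P travel time, S-P time) points;
# the slope recovers Vp/Vs - 1.
slope = np.polyfit(p_travel, s_minus_p, 1)[0]
vp_vs = 1.0 + slope
print(f"Estimated Vp/Vs: {vp_vs:.2f}")
```

In real data the scatter about this line reflects picking errors and lateral velocity variations, which is what the 3D Vp/Vs inversion resolves.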
Abstract:
In this thesis we have presented two deep, overlapping 1.4 GHz and 345 MHz surveys of the Lockman Hole field taken with the Westerbork Synthesis Radio Telescope. We extracted a catalogue of ~6000 radio sources from the 1.4 GHz mosaic, down to a flux limit of ~55 μJy, and a catalogue of 334 radio sources, down to a flux limit of ~4 mJy, from the inner 7 sq. degree region of the 345 MHz image. The extracted catalogues were used to derive the source number counts at 1.4 GHz and at 345 MHz. The source counts were found to be fully consistent with previous determinations; in particular, the 1.4 GHz source counts derived from the present sample provide one of the most statistically robust determinations in the flux range 0.1 < S < 1 mJy. During the commissioning program of the LOFAR telescope, the Lockman Hole field was observed at 58 MHz and 150 MHz. The 150 MHz LOFAR observation is particularly relevant, as it allowed us to obtain the first LOFAR flux-calibrated high-resolution image of a deep field. From this image we extracted a preliminary source catalogue down to a flux limit of ~15 mJy (~10σ), which can be considered complete down to 20-30 mJy. A spectral index study of the mJy sources in the Lockman Hole region was performed using the available catalogues (1.4 GHz, 345 MHz and 150 MHz) and a deep 610 MHz source catalogue available in the literature (Garn et al. 2008, 2010).
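A two-point spectral index between any pair of the catalogues above follows directly from the flux densities at the two frequencies, with the convention S ∝ ν^α. A minimal sketch (the example source is hypothetical):

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, with the convention S ~ nu**alpha."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# Hypothetical source: 10 mJy at 150 MHz, 3 mJy at 1.4 GHz
alpha = spectral_index(10.0, 150e6, 3.0, 1400e6)
print(f"alpha = {alpha:.2f}")  # negative alpha: steep, synchrotron-like spectrum
```

Comparing such indices across the 150 MHz, 345 MHz, 610 MHz and 1.4 GHz catalogues is what allows flat- and steep-spectrum populations to be separated.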
Abstract:
Despite the many issues faced in the past, the evolutionary trend of silicon has kept its constant pace, and today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely restrict the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in the NTC regime, where memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by the workload requirements. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture.
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
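The energy-efficiency argument behind near-threshold operation comes largely from the quadratic dependence of dynamic switching energy on supply voltage (E_dyn ∝ C·V²); leakage, frequency and reliability effects, neglected in this sketch, reduce the net gain. A back-of-the-envelope calculation with illustrative voltages (the specific values are assumptions, not figures from the thesis):

```python
# Back-of-the-envelope dynamic-energy scaling under voltage reduction.
# E_dyn per switched capacitance scales as V^2; all numbers are illustrative.
V_NOMINAL = 1.1   # volts, a typical super-threshold supply
V_NTC = 0.5       # volts, a near-threshold operating point

energy_ratio = (V_NOMINAL / V_NTC) ** 2
print(f"Dynamic energy reduction: {energy_ratio:.1f}x")
```

Dynamic energy alone gives roughly a 5x reduction in this example; the remaining gain toward the tenfold figure quoted above comes from operating closer to the minimum-energy point, at the cost of the reliability and variability issues the thesis addresses.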
An Integrated Transmission-Media Noise Calibration Software For Deep-Space Radio Science Experiments
Abstract:
The thesis describes the implementation of calibration, format-translation and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulation, performance and software implementation. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by specific subroutines. Specific attention has been reserved for the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both the sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables proved to be an essential operation to perform on radiometric data in order to meet the increasingly demanding error-budget requirements of modern deep-space missions. A completely autonomous and all-around propagation-media calibration software suite is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described S/W is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC, TSAC and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
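The dominant (hydrostatic, "dry") component of the tropospheric path delay handled by such calibration software can be modeled from surface pressure alone, for example with the Saastamoinen formula; it is the smaller, variable wet component that motivates GNSS processing and radiometers. A minimal sketch of the zenith hydrostatic delay (the station parameters are illustrative):

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_rad, height_m):
    """Saastamoinen zenith hydrostatic delay (metres) from surface pressure,
    station latitude and height."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028e-3 * height_m
    )

# Example: a sea-level station at 45 deg latitude, standard pressure
zhd = zenith_hydrostatic_delay(1013.25, math.radians(45.0), 0.0)
print(f"ZHD = {zhd:.3f} m")  # ~2.3 m at zenith; the wet delay adds ~0.05-0.4 m
```

The ~1-2% uncalibrated residual quoted for the standard surface-meteorology approach thus corresponds to a few centimetres of path delay, which is why sub-centimetre calibration requires GNSS or radiometric techniques.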