14 results for ESTIMATOR

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 20.00%

Abstract:

In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, well suited to estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps, as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic analyses present in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo analysis. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (the declination correction) in NVSS, we can safely use most of the information contained in this survey. By contrast, we neglect the noise in temperature, since WMAP is already cosmic variance dominated on large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: the first with a constant bias $b$ and the second with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model using different likelihood prescriptions. When using just the cross-correlation between WMAP7 and NVSS maps at 1.8° resolution, we show that $\Omega_\Lambda$ is about 70% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ confidence level.
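A minimal numerical sketch of the quadratic scheme underlying the QML method (not the BolISW code itself): the pixel covariance is taken to be linear in a few band powers, the estimator matrices are built from the inverse covariance, and the Fisher matrix deconvolves the band couplings. All matrices and band powers below are synthetic.

```python
import numpy as np

# Toy QML band-power estimation: assume the pixel covariance is linear in
# the band powers, C = sum_b p_b * P_b + N, with synthetic basis matrices P_b.
rng = np.random.default_rng(0)
npix, nbands = 8, 3
P = []
for _ in range(nbands):
    A = rng.standard_normal((npix, npix))
    P.append(A @ A.T)                        # symmetric positive semi-definite basis
p_true = np.array([2.0, 1.0, 0.5])           # fiducial band powers (arbitrary)
N = 0.1 * np.eye(npix)                       # noise covariance
C = sum(p * Pb for p, Pb in zip(p_true, P)) + N
Cinv = np.linalg.inv(C)

# Quadratic estimator matrices E_b = (1/2) C^-1 P_b C^-1 and Fisher matrix
E = [0.5 * Cinv @ Pb @ Cinv for Pb in P]
F = np.array([[0.5 * np.trace(Cinv @ P[a] @ Cinv @ P[b])
               for b in range(nbands)] for a in range(nbands)])

# Ensemble average of the data quadratic form: <x^T E_b x> = tr(C E_b).
# Subtracting the noise bias and deconvolving with F^-1 returns p_true,
# which is the unbiasedness property of the QML estimator.
y = np.array([np.trace(C @ Eb) - np.trace(N @ Eb) for Eb in E])
p_hat = np.linalg.solve(F, y)
```

By working with ensemble averages instead of a random map, the sketch checks the unbiasedness identity exactly rather than up to sample variance.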

Relevance: 10.00%

Abstract:

Quasars and AGN play an important role in many aspects of modern cosmology. Of particular interest is the interplay between AGN activity and the formation and evolution of galaxies and structures. Studies of nearby galaxies revealed that most (and possibly all) galaxy nuclei contain a super-massive black hole (SMBH) and that between a third and half of them show some evidence of activity (Kormendy and Richstone, 1995). The discovery of a tight relation between black hole mass and the velocity dispersion of the host galaxy suggests that the growth of SMBHs and the evolution of their host galaxies are linked together. In this context, studying the evolution of AGN through the luminosity function (LF) is fundamental to constrain theories of galaxy and SMBH formation and evolution. Recently, many theories have been developed to describe physical processes possibly responsible for a common formation scenario for galaxies and their central black holes (Volonteri et al., 2003; Springel et al., 2005a; Vittorini et al., 2005; Hopkins et al., 2006a), and an increasing number of observations in different bands are focused on collecting larger and larger quasar samples. Many issues, however, remain not fully understood. In the context of the VVDS (VIMOS-VLT Deep Survey), we collected and studied an unbiased sample of spectroscopically selected faint type-1 AGN with a unique and straightforward selection function. Indeed, the VVDS is a large, purely magnitude-limited spectroscopic survey of faint objects, free of any morphological and/or color preselection. We studied the statistical properties of this sample and its evolution up to redshift z ~ 4. Because of the contamination of the AGN light by their host galaxies at the faint magnitudes explored by our sample, we observed that a significant fraction of AGN in our sample would be missed by the UV-excess and morphological criteria usually adopted for the preselection of optical QSO candidates.
If not properly taken into account, this failure in selecting particular sub-classes of AGN could, in principle, affect some of the conclusions drawn from samples of AGN based on these selection criteria. The absence of any preselection in the VVDS leads to a very complete sample of AGN, including objects with unusual colors and continuum shapes. The VVDS AGN sample in fact shows redder colors than expected by comparison, for example, with the color track derived from the SDSS composite spectrum. In particular, the faintest objects have on average redder colors than the brightest ones. This can be attributed both to a large fraction of dust-reddened objects and to a significant contamination from the host galaxy. We have tested these possibilities by examining the global spectral energy distribution of each object using, in addition to the U, B, V, R and I-band magnitudes, also the UV-GALEX and IR-Spitzer bands, and fitting it with a combination of AGN and galaxy emission, allowing also for the possibility of extinction of the AGN flux. We found that for 44% of our objects the contamination from the host galaxy is not negligible, and this fraction decreases to 21% if we restrict the analysis to a bright subsample (M_1450 < -22.15). Our estimated integral surface density at I_AB < 24.0 is 500 AGN per square degree, which represents the highest surface density of a spectroscopically confirmed sample of optically selected AGN. We derived the luminosity function in the B band for 1.0 < z < 3.6 using the 1/Vmax estimator. Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint part of the luminosity function up to high redshift. A comparison of our data with the 2dF sample at low redshift (1 < z < 2.1) shows that the VVDS data cannot be well fitted with the pure luminosity evolution (PLE) models derived by previous optically selected samples.
Qualitatively, this appears to be due to the fact that our data suggest an excess of faint objects at low redshift (1.0 < z < 1.5) with respect to these models. By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b) and testing a number of different evolutionary models, we find that the model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity-dependent density evolution (LDDE) model, similar to those derived from the major X-ray surveys. Such a parameterization allows the redshift of the AGN density peak to change as a function of luminosity, thus fitting the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find, for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift going to lower-luminosity objects. The position of this peak moves from z ~ 2.0 for M_B < -26.0 to z ~ 0.65 for -22 < M_B < -20. This result, already found in a number of X-ray selected samples of AGN, is consistent with a scenario of "AGN cosmic downsizing", in which the density of more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of the Universe (i.e. at higher redshift) than that of low-luminosity ones, which reaches its maximum later (i.e. at lower redshift). This behavior has long been claimed to be present in elliptical galaxies and is not easy to reproduce in the hierarchical cosmogonic scenario, where more massive Dark Matter Halos (DMH) form on average later by merging of less massive halos.
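The 1/Vmax estimator used for the luminosity function can be sketched in its simplest, Euclidean form: each source contributes the reciprocal of the maximum volume within which it would still pass the flux limit. Real analyses use comoving volumes and the survey selection function; the luminosities and flux limit below are made up.

```python
import numpy as np

# Schematic Euclidean 1/Vmax estimator for a flux-limited sample.
F_lim = 1e-14                                # assumed survey flux limit
L = np.array([1e44, 1e44, 1e45])             # made-up source luminosities
d_max = np.sqrt(L / (4.0 * np.pi * F_lim))   # max distance at which F >= F_lim
V_max = (4.0 / 3.0) * np.pi * d_max**3       # volume in which each source is detectable
phi = np.sum(1.0 / V_max)                    # summed space-density contribution of the bin
```

Brighter sources are detectable in larger volumes, so they receive smaller 1/Vmax weights; faint sources, visible only nearby, are up-weighted accordingly.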

Relevance: 10.00%

Abstract:

In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models where the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a lack of continuity of the solution and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object, which I call the regularized posterior distribution and propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which this prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set, I apply this Bayesian nonparametric methodology to different estimation problems. First, I apply the estimator to deconvolution and to hazard rate, density and regression estimation.
Then, I consider the estimation of an instrumental regression, which is useful in microeconometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.

Relevance: 10.00%

Abstract:

This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It is focused on the optimization of the physical layer of digital communication systems and provides innovations for both multi-carrier and single-carrier systems. For the former, we first addressed the problem of capacity in the presence of several impairments. Moreover, we extended the concept of the Single Frequency Network to the satellite scenario, and then introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems we proposed a method to optimize constellation design in the presence of strong distortion, such as the nonlinear distortion introduced by a satellite's on-board high-power amplifier. We then developed a method to calculate the bit/symbol error rate for a given constellation, achieving improved accuracy with respect to the traditional union bound at no additional complexity. Finally, we designed a low-complexity SNR estimator which saves half of the multiplications required by the ML estimator while achieving similar estimation accuracy.
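The thesis's low-complexity estimator is not reproduced here; as a generic illustration of blind SNR estimation, the classic second- and fourth-order moment (M2M4) estimator exploits the fact that, for a constant-modulus constellation in complex Gaussian noise, signal power equals sqrt(2*M2^2 - M4). The SNR value and sample size below are arbitrary.

```python
import numpy as np

# M2M4 moment-based SNR estimator on synthetic BPSK in complex AWGN
# (a standard textbook estimator, not the one designed in the thesis).
rng = np.random.default_rng(2)
n = 200_000
snr_true = 10.0                              # assumed linear SNR
s = rng.choice([-1.0, 1.0], size=n)          # unit-power BPSK symbols
sigma2 = 1.0 / snr_true                      # noise power for unit signal power
noise = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(n)
                                 + 1j * rng.standard_normal(n))
y = s + noise
m2 = np.mean(np.abs(y) ** 2)                 # second-order moment: S + N
m4 = np.mean(np.abs(y) ** 4)                 # fourth-order moment: S^2 + 4SN + 2N^2
S_hat = np.sqrt(max(2.0 * m2 ** 2 - m4, 0.0))
snr_hat = S_hat / (m2 - S_hat)               # estimated linear SNR
```

The estimator needs no pilot symbols: both moments are computed directly from the received samples.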

Relevance: 10.00%

Abstract:

This thesis introduces new processing techniques for computer-aided interpretation of ultrasound images with the purpose of supporting medical diagnosis. In terms of practical application, the goal of this work is the improvement of current prostate biopsy protocols by providing physicians with a visual map, overlaid on ultrasound images, marking regions potentially affected by disease. As far as analysis techniques are concerned, the main contribution of this work to the state of the art is the introduction of deconvolution as a pre-processing step in the standard ultrasonic tissue characterization procedure, to improve the diagnostic significance of ultrasonic features. This thesis also includes some innovations in ultrasound modeling, in particular the employment of a continuous-time autoregressive moving-average (CARMA) model for ultrasound signals, a new maximum-likelihood CARMA estimator based on exponential splines, and the definition of CARMA parameters as new ultrasonic features able to capture scatterer concentration. Finally, concerning the clinical usefulness of the developed techniques, the main contribution of this research is showing, through a study based on medical ground truth, that a reduction in the number of sampled cores in standard prostate biopsy is possible while preserving the diagnostic power of the current clinical protocol.
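Deconvolution as a pre-processing step can be sketched with a basic frequency-domain Wiener filter, a generic stand-in rather than the spline-based CARMA machinery of the thesis. The pulse shape, the sparse "reflectivity" and the regularization constant below are all made up.

```python
import numpy as np

# Wiener deconvolution of a synthetic ultrasound-like trace (illustrative).
rng = np.random.default_rng(3)
n = 256
x = np.zeros(n)
x[[40, 100, 180]] = [1.0, 0.6, 0.8]          # sparse synthetic reflectivity
t = np.arange(-16, 16)
h = np.exp(-t**2 / 18.0)                     # schematic smooth pulse
h_pad = np.zeros(n)
h_pad[:32] = h
H = np.fft.fft(h_pad)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))  # blurred trace (circular convolution)
y += 0.001 * rng.standard_normal(n)          # measurement noise
lam = 1e-4                                   # Wiener regularization constant
x_hat = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y)
                            / (np.abs(H) ** 2 + lam)))
```

The regularization term lam keeps the filter bounded at frequencies where the pulse spectrum is weak; the deconvolved trace restores the location and relative amplitude of the scatterers that the pulse had smeared out.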

Relevance: 10.00%

Abstract:

The term Congenital Nystagmus (Early Onset Nystagmus or Infantile Nystagmus Syndrome) refers to a pathology characterised by an involuntary movement of the eyes, which often seriously reduces a subject's vision. Congenital Nystagmus (CN) is a specific kind of nystagmus within the wider classification of infantile nystagmus, which can best be recognized and classified by means of a combination of clinical investigation and motility analysis; in some cases, eye movement recording and analysis are indispensable for diagnosis. However, the interpretation of eye movement recordings still lacks complete reliability; hence, new analysis techniques and the precise identification of concise parameters directly related to visual acuity are necessary to further support physicians' decisions. To this aim, an index computed from eye movement recordings and related to the visual acuity of a subject is proposed in this thesis. This estimator is based on two parameters: the time spent by a subject effectively viewing a target (foveation time, Tf) and the standard deviation of eye position (SDp). Moreover, since previous studies have shown that visual acuity largely depends on SDp, a pilot data collection study was also conducted with the purpose of identifying any slow rhythmic component in the eye position and of characterising SDp in more detail. The results are presented in this thesis. In addition, some oculomotor system models are reviewed and a new approach to those models, i.e. the recovery of periodic orbits of the oculomotor system in patients with CN, is tested on real patient data. In conclusion, the results obtained within this research allow us to completely and reliably characterise the slow rhythmic component sometimes present in the eye position recordings of CN subjects and to better classify the different kinds of CN waveforms. These findings can support clinicians in therapy planning and treatment outcome evaluation.
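The two parameters behind the index can be sketched on a synthetic eye-position trace. The waveform, target position and foveation window below are made up, and clinical foveation detection also gates on eye velocity, which is omitted here.

```python
import numpy as np

# Schematic computation of foveation time (Tf) and eye-position standard
# deviation (SDp) on a synthetic trace (real analyses also use velocity).
fs = 500.0                                   # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
pos = np.sin(2.0 * np.pi * 4.0 * t)          # synthetic oscillation (deg)
target = 0.0                                 # target position (deg)
window = 0.5                                 # assumed foveation window (deg)
foveating = np.abs(pos - target) < window    # samples close enough to the target
Tf = foveating.sum() / fs                    # total foveation time (s)
SDp = pos.std()                              # standard deviation of eye position (deg)
```

For this pure sinusoid the eye is within the window exactly one third of the time, so Tf is about 0.66 s out of the 2 s trace.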

Relevance: 10.00%

Abstract:

We propose an extension of the approach provided by Kluppelberg and Kuhn (2009) for inference on second-order structure moments. As in that work, we adopt a copula-based approach instead of assuming a normal distribution for the variables, thus relaxing the equality-in-distribution assumption. A new copula-based estimator for structure moments is investigated. The methodology of Kluppelberg and Kuhn (2009) is also extended by considering the copulas associated with the family of Eyraud-Farlie-Gumbel-Morgenstern distribution functions (Kotz, Balakrishnan, and Johnson, 2000, Equation 44.73). Finally, a comprehensive simulation study and an application to real financial data are performed in order to compare the different approaches.
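The EFGM family admits a closed-form conditional CDF, which makes simulation straightforward, and its Spearman's rho equals theta/3. The sketch below (theta and sample size arbitrary) samples the copula by conditional inversion and checks the rank correlation empirically.

```python
import numpy as np

# Sampling from an Eyraud-Farlie-Gumbel-Morgenstern (EFGM) copula,
#   C(u, v) = u v (1 + theta (1 - u)(1 - v)),
# by inverting the conditional CDF C(v | u) = v + theta (1 - 2u) v (1 - v).
rng = np.random.default_rng(4)
theta, n = 0.8, 100_000
u = rng.uniform(size=n)
w = rng.uniform(size=n)
a = theta * (1.0 - 2.0 * u)
# Solve a v^2 - (1 + a) v + w = 0 for the root in [0, 1] (v = w when a = 0).
disc = np.sqrt((1.0 + a) ** 2 - 4.0 * a * w)
v = np.where(np.abs(a) < 1e-12, w, (1.0 + a - disc) / (2.0 * a))

# Empirical Spearman's rho via rank correlation; for EFGM it equals theta / 3.
ru = np.argsort(np.argsort(u))
rv = np.argsort(np.argsort(v))
rho = np.corrcoef(ru, rv)[0, 1]
```

EFGM copulas only reach |rho| <= 1/3, which is why they are mainly used to model weak dependence.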

Relevance: 10.00%

Abstract:

In the last couple of decades we have witnessed a reappraisal of spatial design-based techniques. The information on the spatial location of the individuals of a population has usually been used to develop efficient sampling designs. This thesis aims at offering a new technique for inference on both individual values and global population values, able to employ the spatial information available before sampling at the estimation level, by rewriting a deterministic interpolator within a design-based framework. The resulting point estimator of the individual values is treated both for finite spatial populations and for continuous spatial domains, while the theory on the estimator of the global population value covers the finite population case only. A fairly broad simulation study compares the results of the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation methods, in order to control different aspects of the spatial structure. The simulation outcomes point out that the proposed point estimator behaves almost like the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of the spatial information substantially improves design-based spatial inference on individual values.
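The abstract does not specify which deterministic interpolator is rewritten; as a generic example of the kind of interpolator involved, inverse-distance weighting (IDW) predicts an unsampled value as a distance-weighted average of the sampled ones. The locations and values below are made up.

```python
import numpy as np

# Inverse-distance weighting (IDW), a classic deterministic spatial
# interpolator, shown as a generic example (not necessarily the thesis's).
def idw(xs, ys, zs, x0, y0, p=2.0):
    d = np.hypot(xs - x0, ys - y0)           # distances to the sampled sites
    if np.any(d == 0.0):
        return zs[np.argmin(d)]              # exact at a sampled location
    w = 1.0 / d ** p                         # weights decay with distance
    return np.sum(w * zs) / np.sum(w)

# Made-up unit-square sample: four corner sites with known values.
xs = np.array([0.0, 1.0, 0.0, 1.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])
zs = np.array([1.0, 2.0, 3.0, 4.0])
z_center = idw(xs, ys, zs, 0.5, 0.5)         # equidistant sites -> plain mean
```

Unlike kriging, IDW needs no variogram model, which is what makes such interpolators natural candidates for a design-based rewriting.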

Relevance: 10.00%

Abstract:

This dissertation investigates corporate governance and dividend policy in banking. This topic has recently attracted the attention of numerous scholars all over the world and currently remains one of the most discussed topics in banking. The core of the dissertation is constituted by three papers. The first paper generalizes the main achievements in the relevant field using a meta-analysis approach. The second paper provides an empirical analysis of the effect of banking corporate governance on dividend payout. Finally, the third paper empirically investigates the effect of the government bailouts of 2007-2010 on the corporate governance and dividend policy of banks. The dissertation uses a new hand-collected data set with information on corporate governance, ownership structure and compensation structure for a sample of listed banks from 15 European countries over the period 2005-2010. The empirical papers employ econometric approaches such as the within-group model, the difference-in-differences technique, and the propensity score matching method based on the nearest neighbor matching estimator. The main empirical results may be summarized as follows. First, we provide evidence that CEO power and connection to government are associated with lower dividend payout ratios. This result supports the view that banking regulators are predominantly concerned about the safety of the bank, and powerful bank CEOs can afford to distribute low payout ratios at the expense of minority shareholders. Next, we find that the government bailouts of 2007-2010 changed the banks' ownership structure and helped to keep lending by bailed-out banks at the pre-crisis level. Finally, we provide robust evidence of increased control over the banks that received government money. These findings show the important role of government in overcoming the consequences of the banking crisis, and the high quality of governance of public bailouts in European countries.
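Of the econometric tools listed, difference-in-differences is the simplest to sketch: the treatment effect is the coefficient on the treated-times-post interaction in an OLS regression. The panel below is entirely fabricated (it is not the dissertation's bank data).

```python
import numpy as np

# Difference-in-differences on a fabricated unit-level panel, estimated as
# the OLS coefficient on the treated x post interaction; all data simulated.
rng = np.random.default_rng(5)
n = 400
treat = (np.arange(n) % 2).astype(float)         # half the units treated
post = ((np.arange(n) // 2) % 2).astype(float)   # half the observations post-period
effect_true = 2.0                                # made-up treatment effect
y = (1.0 + 0.5 * treat + 0.3 * post
     + effect_true * treat * post
     + 0.5 * rng.standard_normal(n))
X = np.column_stack([np.ones(n), treat, post, treat * post])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
did = beta[3]                                    # difference-in-differences estimate
```

The group and period dummies absorb level differences and common time shocks, so the interaction isolates the treatment effect under the parallel-trends assumption.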

Relevance: 10.00%

Abstract:

The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to supply a review of the main tools of spatial econometrics and to show an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of availability of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite sample properties of the proposed estimators are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed-effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit-root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has found only limited space so far, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland value in the Midwestern U.S.A. over the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
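The degree of spatial dependence that such models exploit is commonly gauged with Moran's I, computed from a spatial weights matrix W (a standard diagnostic, shown here as background; the abstract does not mention it explicitly). The lattice and pattern below are a textbook example.

```python
import numpy as np

# Moran's I on a 4x4 rook-contiguity lattice. A checkerboard pattern is the
# textbook case of perfect negative spatial autocorrelation (I = -1).
side = 4
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[k, ii * side + jj] = 1.0   # rook neighbours

z = np.array([(-1.0) ** (i + j) for i in range(side) for j in range(side)])
z = z - z.mean()                             # deviations from the mean
S0 = W.sum()                                 # total weight
I = (n / S0) * (z @ W @ z) / (z @ z)         # Moran's I statistic
```

Every neighbouring pair has opposite sign here, so the cross-product term equals -S0 and I comes out exactly -1; positively clustered data would push I towards +1.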

Relevance: 10.00%

Abstract:

The dissertation contains five parts: an introduction, three major chapters, and a short conclusion. The first chapter starts from a survey and discussion of the literature on corporate law and financial development. The commonly used methods in these cross-sectional analyses are biased, as legal origins are no longer valid instruments; hence, model uncertainty becomes a salient problem. The Bayesian Model Averaging algorithm is applied to test the robustness of the empirical results in Djankov et al. (2008). The analysis finds that their constructed legal index is not robustly correlated with most of the various stock market outcome variables. The second chapter looks into the effects of minority shareholder protection in the corporate governance regime on entrepreneurs' ex ante incentives to undertake an IPO. Most of the current literature focuses on the beneficial effect of minority shareholder protection on valuation, while overlooking its private costs for the entrepreneur's control. As a result, the entrepreneur trades off the costs of monitoring against the benefits of cheap sources of finance when minority shareholder protection improves. The theoretical predictions are empirically tested using panel data and the GMM-sys estimator. The third chapter investigates corporate law and corporate governance reform in China. The corporate law in China regards shareholder control as the means to the end of pursuing the interests of stakeholders, which is inefficient. The chapter combines recent developments in theories of the firm, i.e. the team production theory and the property rights theory, to address this problem. The enlightened shareholder value approach, which emphasizes the long-term valuation of the firm, should be adopted as the objective of listed firms. In addition, a move from the mandatory division of power between the shareholder meeting and the board meeting to a default regime is proposed.
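Bayesian Model Averaging over linear regressions is often approximated with BIC-based model weights; the chapter's exact algorithm may differ from this common approximation. The sketch below simulates data in which only the first of three candidate regressors matters and computes posterior inclusion probabilities.

```python
import numpy as np
from itertools import chain, combinations

# BIC-approximated Bayesian Model Averaging over all subsets of three
# candidate regressors; data are simulated, only x1 truly matters.
rng = np.random.default_rng(6)
n = 200
X = rng.standard_normal((n, 3))              # candidate regressors x1, x2, x3
y = 2.0 * X[:, 0] + rng.standard_normal(n)   # true model uses x1 only

models = list(chain.from_iterable(combinations(range(3), k) for k in range(4)))
bics = []
for m in models:
    Xm = np.column_stack([np.ones(n)] + [X[:, j] for j in m])
    beta, _, _, _ = np.linalg.lstsq(Xm, y, rcond=None)
    rss = np.sum((y - Xm @ beta) ** 2)
    bics.append(n * np.log(rss / n) + Xm.shape[1] * np.log(n))
bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                 # approximate posterior model weights

# Posterior inclusion probability of each regressor: summed weight of the
# models that contain it.
pip = np.array([sum(wi for wi, m in zip(w, models) if j in m)
                for j in range(3)])
```

Instead of committing to a single specification, inference is averaged over all of them, which is exactly the protection against model uncertainty the chapter invokes.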

Relevance: 10.00%

Abstract:

The present thesis focuses on the problem of robust output regulation for minimum-phase nonlinear systems by means of identification techniques. Given a controlled plant and an exosystem (an autonomous system that generates the possible references or disturbances), the control goal is to design a regulator able to process the only measurement available, i.e. the error/output variable, in order to make it vanish asymptotically. In this context, such a regulator can be designed following the well-known "internal model principle", which states that the regulation objective can be achieved by embedding a replica of the exosystem model in the controller structure. The main problem arises when the exosystem model is affected by parametric or structural uncertainties: in this case it is not possible to reproduce the exact behavior of the exogenous system in the regulator, and hence the control goal cannot be achieved. In this work, the idea is to address the problem by developing a general framework in which a standard regulator coexists with an estimator able to guarantee (when possible) the best estimate of all the uncertainties present in the exosystem, in order to give "robustness" to the overall control loop.
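A minimal instance of estimating an uncertain exosystem parameter from the only available measurement: recovering an unknown disturbance frequency from the spectrum of the regulation error. This is a schematic, offline stand-in (the thesis's estimators adapt online within the control loop); the frequency and signal are made up.

```python
import numpy as np

# Schematic: estimate an uncertain exosystem frequency from the measured
# regulation error, so the internal model could be tuned to it.
fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
f_true = 3.3                                 # uncertain exosystem frequency (Hz)
err = np.sin(2.0 * np.pi * f_true * t)       # error dominated by the unrejected mode
spec = np.abs(np.fft.rfft(err))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
f_hat = freqs[np.argmax(spec)]               # peak of the error spectrum
```

Once the frequency is identified, the internal model (an oscillator at f_hat embedded in the controller) can reject the disturbance asymptotically.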

Relevance: 10.00%

Abstract:

In 3D human movement analysis performed using stereophotogrammetric systems and skin markers, bone pose can only be estimated in an indirect fashion. During a movement, soft tissue deformations make the markers move with respect to the underlying bone, generating the soft tissue artefact (STA). STA has devastating effects on bone pose estimation, and its compensation remains an open question. The aim of this PhD thesis was to contribute to the solution of this crucial issue. Modelling STA using measurable, trial-specific variables is a fundamental prerequisite for its removal from marker trajectories. Two STA model architectures are proposed. Initially, a thigh marker-level artefact model is presented, in which STA is modelled as a linear combination of the joint angles involved in the movement. This model was calibrated using ex-vivo and in-vivo invasive STA measures. The considerable number of model parameters led to the definition of STA approximations. Three definitions were proposed to represent STA as a series of modes: individual marker displacements, marker-cluster geometrical transformations (MCGT), and skin envelope shape variations. Modes were selected using two criteria: one based on modal energy and one based on an a priori choice of modes. The MCGT allows the selection of either rigid or non-rigid STA components. It was also empirically demonstrated that only the rigid component affects joint kinematics, regardless of the amplitude of the non-rigid one. Therefore, a cluster-level model of the rigid component of thigh and shank STA was then defined. An acceptable trade-off between STA compensation effectiveness and number of parameters can be obtained, improving joint kinematics accuracy.
The obtained results lead to two main potential applications: the proposed models can generate realistic STAs for simulation purposes, to compare different skeletal kinematics estimators; and, more importantly, focusing only on the STA rigid component, the model attains a satisfactory STA reconstruction with fewer parameters, facilitating its incorporation in a pose estimator.
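The marker-level architecture (artefact as a linear combination of joint angles) reduces to ordinary least squares once calibration data are available. The joint-angle trajectories, true coefficients and noise level below are synthetic stand-ins for the invasive calibration measures.

```python
import numpy as np

# Calibrating a linear soft-tissue-artefact model d = q A (artefact
# displacement as a linear combination of joint angles) by least squares.
rng = np.random.default_rng(7)
nframes = 300
q = np.column_stack([
    np.sin(np.linspace(0.0, 2.0 * np.pi, nframes)),   # e.g. a flexion angle (rad)
    np.cos(np.linspace(0.0, 2.0 * np.pi, nframes)),   # e.g. an abduction angle (rad)
])
A_true = np.array([[3.0], [-1.5]])           # made-up artefact coefficients (mm/rad)
d = q @ A_true + 0.1 * rng.standard_normal((nframes, 1))  # measured displacement (mm)
A_hat, _, _, _ = np.linalg.lstsq(q, d, rcond=None)
```

With the coefficients calibrated, the model can predict the artefact from the joint angles of a new trial and subtract it from the marker trajectories.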

Relevance: 10.00%

Abstract:

21 cm cosmology opens an observational window onto previously unexplored cosmological epochs, such as the Epoch of Reionization (EoR), the Cosmic Dawn and the Dark Ages, using powerful radio interferometers such as the planned Square Kilometer Array (SKA). Among the many applications which can potentially improve the understanding of standard cosmology, we study the promising opportunity offered by measuring the weak gravitational lensing sourced by 21 cm radiation. We performed this study in two different cosmological epochs, at a typical EoR redshift and subsequently at a post-EoR redshift. We show how the lensing signal can be reconstructed using a three-dimensional optimal quadratic lensing estimator in Fourier space, using a single frequency band or combining measurements from multiple frequency bands. To this purpose, we implemented a simulation pipeline capable of dealing with issues that cannot be treated analytically. Considering the current SKA plans, we studied the performance of the quadratic estimator at typical EoR redshifts, for different survey strategies, comparing two thermal noise models for the SKA-Low array. The simulations we performed take into account the beam of the telescope and the discreteness of visibility measurements. We found that an SKA-Low interferometer should obtain high-fidelity images of the underlying mass distribution in its phase 1 only if several bands are stacked together, covering a redshift range from z = 7 to z = 11.5. The SKA-Low phase 2, modeled so as to improve the sensitivity of the instrument by almost an order of magnitude, should be capable of providing good-quality images even when the signal is detected within a single frequency band. Considering also the serious effect that foregrounds could have on these detections, we discuss the limits of these results and the possibility, offered by these models, of measuring an accurate lensing power spectrum.