10 results for Event study
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Which event study methods are best in non-U.S. multi-country samples? Nonparametric tests, especially the rank and generalized sign, are better specified and more powerful than common parametric tests, especially in multi-day windows. The generalized sign test is the best statistic but must be applied to buy-and-hold abnormal returns for correct specification. Market-adjusted and market-model methods with local market indexes, without conversion to a common currency, work well. The results are robust to limiting the samples to situations expected to be problematic for test specification or power. Applying the tests that perform best in simulation to merger announcements produces reasonable results.
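The generalized sign test recommended above compares the fraction of positive event-window returns against the sign frequency observed in the estimation period. A minimal Python sketch of that idea, applied to buy-and-hold abnormal returns as the abstract prescribes (this is my own illustrative implementation, not the author's code; all function and variable names are assumptions):

```python
import math
import numpy as np

def generalized_sign_test(event_bhar, est_ar):
    """Generalized sign test on event-window buy-and-hold abnormal
    returns (BHARs), with the null sign frequency estimated from the
    estimation window (illustrative shapes and names).

    event_bhar : (n_firms,) BHAR over the event window
    est_ar     : (n_firms, n_days) estimation-window abnormal returns
    """
    event_bhar = np.asarray(event_bhar)
    n = len(event_bhar)
    # p_hat: fraction of positive abnormal returns in the estimation
    # period; under the null, event-window signs follow this frequency
    p_hat = float(np.mean(np.asarray(est_ar) > 0))
    w = int(np.sum(event_bhar > 0))          # positive BHARs observed
    z = (w - n * p_hat) / math.sqrt(n * p_hat * (1 - p_hat))
    return z, math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

A positive z indicates more positive BHARs than the estimation-period baseline predicts; anchoring p_hat to the estimation window (rather than assuming 0.5) is what distinguishes the generalized sign test from the plain sign test.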
Abstract:
The dissertation consists of four papers that aim at providing new contributions in the field of macroeconomics, monetary policy and financial stability. The first paper proposes a new Dynamic Stochastic General Equilibrium (DSGE) model with credit frictions and a banking sector to study the pro-cyclicality of credit and the role of different prudential regulatory frameworks in affecting business cycle fluctuations and in restoring macroeconomic and financial stability. The second paper develops a simple DSGE model capable of evaluating the effects of large purchases of treasuries by central banks. This theoretical framework is employed to evaluate the impact on yields and the macroeconomy of large purchases of medium- and long-term government bonds recently implemented in the US and UK. The third paper studies the effects of ECB communications about unconventional monetary policy operations on the perceived sovereign risk of Italy over the last five years. The empirical results are derived from both an event-study analysis and a GARCH model, which uses Italian long-term bond futures to disentangle expected from unexpected policy actions. The fourth paper proposes a DSGE model with an endogenous term structure of interest rates, which is able to replicate the stylized facts regarding the yield curve and the term premium in the US over the period 1987:3-2011:3, without compromising its ability to match macro dynamics.
Abstract:
The papers included in this thesis deal with a few aspects of insurance economics that have seldom been addressed in the applied literature. In the first paper I apply, for the first time, the tools of the economics of crime to study the determinants of fraud, using data on Italian provinces. The contributions to the literature are manifold:
- The price of insurance is positively correlated with the propensity to defraud.
- Social norms constrain fraudulent behavior, but their strength is curtailed in economic downturns.
- I apply a simple extension of the Random Coefficient model, which allows for the presence of time-invariant covariates and asymmetries in the impact of the regressors.
The second paper assesses how the evolution of macroprudential regulation of insurance companies has been reflected in their equity prices. I employ a standard event-study methodology, deriving the definition of the "control" and "treatment" groups from what is implied by the regulatory framework. The main results are:
- Markets care about the evolution of the legislation. Their perception has shifted from an initial positive assessment of a possible implicit "too big to fail" subsidy to a more negative one related to its cost in terms of stricter capital requirements.
- The size of this phenomenon is positively related to the leverage, size and geographical location of the insurance companies.
The third paper introduces a novel methodology to forecast non-life insurance premiums and profitability as functions of macroeconomic variables, using the simultaneous-equation framework traditionally employed in macroeconometric models and a simple theoretical model of insurance pricing to derive a long-term relationship between premiums, claims expenses and short-term rates. The model is shown to provide better forecasts of premiums and profitability than the single-equation specifications commonly used in applied analysis.
Abstract:
This thesis takes two perspectives on political institutions. On the one hand, it examines the long-run effects of institutions on cultural values. On the other, I study the strategic communication of politicians, pivotal actors inside those institutions, and its determinants. The first chapter provides evidence for the legacy of feudalism, a set of labor-coercion and migration restrictions, on interpersonal distrust. I combine administrative data on the feudal system in the Prussian Empire (1816–1849) with geo-localized survey data from the German Socio-Economic Panel (1980–2020). Using OLS and mover specifications, I show that areas with strong historical exposure to feudalism have lower levels of interpersonal trust today. The second chapter builds a novel dataset that includes the Twitter handles of 18,000+ politicians and 61+ million tweets from 2008–2021 from all levels of government. I find substantial partisan differences in Twitter adoption, Twitter activity and audience engagement. Using established tools for measuring ideological polarization, I provide evidence that online polarization follows similar trends to offline polarization, at comparable magnitude, and reaches unprecedented heights in 2018 and 2021. I also develop a new tool to demonstrate a marked increase in affective polarization. The third chapter tests whether politicians disseminate distortive messages when exposed to bad news. Specifically, I study the diffusion of misleading communication by pro-gun politicians in the aftermath of mass shootings. I exploit the random timing of mass shootings and analyze half a million tweets between 2010–2020 in an event-study design. I develop and apply state-of-the-art text analysis tools to show that pro-gun politicians seek to decrease the salience of a mass shooting through distraction and try to alter voters' belief formation by misrepresenting its causes.
Abstract:
In the present thesis a thorough multiwavelength analysis of a number of galaxy clusters known to be experiencing a merger event is presented. The bulk of the thesis consists of the analysis of deep radio observations of six merging clusters, which host extended radio emission on the cluster scale. A composite optical and X–ray analysis is performed in order to obtain a detailed and comprehensive picture of the cluster dynamics and possibly derive hints about the properties of the ongoing merger, such as the mass ratio, geometry and time scale involved. The combination of the high-quality radio, optical and X–ray data allows us to investigate the implications of the ongoing merger for the cluster radio properties, focusing on the phenomenon of cluster-scale diffuse radio sources, known as radio halos and relics. A total of six merging clusters was selected for the present study: A3562, A697, A209, A521, RXCJ 1314.4–2515 and RXCJ 2003.5–2323. All of them were known, or suspected, to possess extended radio emission on the cluster scale, in the form of a radio halo and/or a relic. High-sensitivity radio observations were carried out for all clusters using the Giant Metrewave Radio Telescope (GMRT) at low frequency (i.e. ≤ 610 MHz), in order to test for the presence of a diffuse radio source and/or analyse in detail the properties of the hosted extended radio emission. For three clusters, the GMRT information was combined with higher-frequency data from Very Large Array (VLA) observations. A re–analysis of the optical and X–ray data available in the public archives was carried out for all sources. Proprietary deep XMM–Newton and Chandra observations were used to investigate the merger dynamics in A3562. Thanks to our multiwavelength analysis, we were able to confirm the existence of a radio halo and/or a relic in all clusters, and to connect their properties and origin to the reconstructed merging scenario for most of the investigated cases.
• The existence of a small-size and low-power radio halo in A3562 was successfully explained in the theoretical framework of the particle re–acceleration model for the origin of radio halos, which invokes the re–acceleration of pre–existing relativistic electrons in the intracluster medium by merger–driven turbulence.
• A giant radio halo was found in the massive galaxy cluster A209, which has likely undergone a past major merger and is currently experiencing a new merging process in a direction roughly orthogonal to the old merger axis. A giant radio halo was also detected in A697, whose optical and X–ray properties may be suggestive of a strong merger event along the line of sight. Given the cluster mass and the kind of merger, the existence of a giant radio halo in both clusters is expected in the framework of the re–acceleration scenario.
• A radio relic was detected at the outskirts of A521, a highly dynamically disturbed cluster which is accreting a number of small mass concentrations. A possible explanation for its origin requires the presence of a merger–driven shock front at the location of the source. The spectral properties of the relic may support such an interpretation and require a Mach number M ≲ 3 for the shock.
• The galaxy cluster RXCJ 1314.4–2515 is exceptional and unique in hosting two peripheral relic sources, extending on the Mpc scale, and a central small-size radio halo. The existence of these sources requires the presence of an ongoing energetic merger. Our combined optical and X–ray investigation suggests that a strong merging process between two or more massive subclumps may be ongoing in this cluster. Thanks to forthcoming optical and X–ray observations, we will reconstruct in detail the merger dynamics and derive its energetics, to be related to the energy necessary for particle re–acceleration in this cluster.
• Finally, RXCJ 2003.5–2323 was found to possess a giant radio halo.
This source is among the largest, most powerful and most distant (z = 0.317) halos imaged so far. Unlike other radio halos, it shows a very peculiar morphology with bright clumps and filaments of emission, whose origin might be related to the relatively high redshift of the hosting cluster. Although very little optical and X–ray information is available about the cluster dynamical stage, the results of our optical analysis suggest the presence of two massive substructures which may be interacting with the cluster. Forthcoming observations in the optical and X–ray bands will allow us to confirm the expected high merging activity in this cluster. Throughout the present thesis a cosmology with H0 = 70 km s^−1 Mpc^−1, Ωm = 0.3 and ΩΛ = 0.7 is assumed.
Abstract:
In this work a multidisciplinary study of the December 26th, 2004 Sumatra earthquake has been carried out. We have investigated both the effect of the earthquake on the Earth's rotation and the stress field variations associated with the seismic event. In the first part of the work we have quantified the effects of the water mass redistribution associated with the propagation of a tsunami wave on the Earth's pole path and on the length of day (LOD), and applied our modeling results to the tsunami following the 2004 giant Sumatra earthquake. We compared the simulated variations of the instantaneous rotation axis with some preliminary (and as yet unconfirmed) instrumental evidence of a pole path perturbation registered just after the occurrence of the earthquake, which showed a step-like discontinuity that cannot be attributed to the effect of a seismic dislocation. Our results show that the perturbation induced by the tsunami on the instantaneous rotation pole is indeed characterized by a step-like discontinuity, compatible with the observations, but its magnitude turns out to be almost one hundred times smaller than the detected one. The LOD variation induced by the water mass redistribution turns out to be insignificant, because the total effect is smaller than current measurement uncertainties. In the second part of this thesis we modeled the coseismic and postseismic stress evolution following the Sumatra earthquake. By means of a semi-analytical, viscoelastic, spherical model of global postseismic deformation and a numerical finite-element approach, we performed an analysis of the stress diffusion following the earthquake in the near and far field of the mainshock source. We evaluated the stress changes due to the Sumatra earthquake by projecting the Coulomb stress onto the sequence of aftershocks taken from various catalogues in a time window spanning about two years, and finally analyzed the spatio-temporal pattern.
The analysis performed with the semi-analytical and the finite-element modeling gives a complex picture of the stress diffusion in the area under study after the Sumatra earthquake. We believe that the results obtained with the analytical method suffer heavily from the restrictions imposed on the hypocentral depths of the aftershocks in order to obtain convergence of the harmonic series of the stress components. By contrast, we imposed no such constraints on the numerical method, so we expect its results to give a more realistic description of the stress variation pattern.
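The projection of Coulomb stress onto aftershock mechanisms described in this abstract rests on the standard Coulomb failure criterion. A minimal sketch with the usual sign convention and an assumed effective friction coefficient of 0.4 (the abstract does not state the parameter values actually used):

```python
def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
    """Coulomb failure stress change resolved on a receiver fault:

        dCFS = d(tau) + mu' * d(sigma_n)

    delta_shear  : shear stress change in the slip direction (MPa)
    delta_normal : normal stress change, positive in extension, so
                   unclamping (dCFS > 0) promotes failure (MPa)
    mu_eff       : effective friction coefficient; 0.4 is a common
                   assumption, not a value taken from the thesis
    """
    return delta_shear + mu_eff * delta_normal
```

A positive dCFS over an aftershock's location and mechanism means the mainshock brought that fault closer to failure, which is the quantity whose spatio-temporal pattern the abstract analyzes.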
Abstract:
The study of the bio-recognition phenomena behind a biological process is nowadays considered a useful tool to understand physiological mechanisms in depth, allowing the discovery of novel biological targets and the development of new lead candidates. Moreover, understanding this kind of phenomena can be helpful in characterizing the absorption, distribution, metabolism, elimination and toxicity properties of a new drug (ADMET parameters). Recent estimates show that about half of all drugs in development fail to make it to the market because of ADMET deficiencies; thus a rapid determination of ADMET parameters in the early stages of drug discovery would save money and time, making it possible to select the best compounds and to eliminate the failures early. The monitoring of drug binding to plasma proteins is becoming essential in the field of drug discovery to characterize drug distribution in the human body. Human serum albumin (HSA) is the most abundant protein in plasma, playing a fundamental role in the transport of drugs, metabolites and endogenous factors; the study of the binding mechanism to HSA has therefore become crucial to the early characterization of the pharmacokinetic profile of new potential leads. Furthermore, most of the distribution experiments carried out in vivo are performed on animals. Hence it is interesting to determine the binding of new compounds to albumins from different species, to evaluate the reliability of extrapolating to humans the distribution data obtained in animals. It is clear how the characterization of interactions between proteins and drugs creates a growing need for methodologies to study each specific molecular event. A wide variety of biochemical techniques have been applied to this purpose. High-performance liquid affinity chromatography, circular dichroism and optical biosensors are three techniques that can elucidate the interaction of a new drug with its target and with other proteins that could interfere with ADMET parameters.
Abstract:
Despite the scientific achievements of the last decades in the astrophysical and cosmological fields, the majority of the Universe's energy content is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with large target mass and low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate the possibility of such a detector reaching its goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the level of background has been found fully acceptable for the experiment's purposes: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section limit has been found at 1.87 × 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with agreement to within less than a factor of two. Such a result is entirely acceptable considering the intrinsic differences between the two statistical methods. Thus, in this PhD thesis it has been proven that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
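The Maximum Gap method mentioned above (Yellin's construction) sets an upper limit from the largest gap between observed events, with no background subtraction. A minimal sketch from the published formula (my own illustrative code, not the thesis analysis; it ignores the exact kx = mu edge case):

```python
import math

def c0(x, mu):
    """Yellin's C0(x, mu): probability that, for an expected signal of
    mu events, the largest gap between events (both measured in
    expected-event units) is smaller than x."""
    m = int(mu // x)
    total = 0.0
    for k in range(m + 1):
        total += ((k * x - mu) ** k * math.exp(-k * x) / math.factorial(k)
                  * (1.0 + k / (mu - k * x)))
    return total

def mu_limit(f, cl=0.90):
    """Upper limit on mu at confidence level cl when the largest gap
    contains a fraction f of the total expected signal (x = f * mu).
    C0(f*mu, mu) rises monotonically with mu, so bisection works."""
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if c0(f * mid, mid) < cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Sanity check: with no observed events the whole exposure is one gap (f → 1), and the limit reduces to the familiar Poisson zero-event bound mu = ln 10 ≈ 2.30 at 90% CL.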
Abstract:
In the near future, the LHC experiments will continue to be upgraded as the LHC luminosity increases from the design value of 10^34 cm^-2 s^-1 to 7.5 × 10^34 cm^-2 s^-1 with the HL-LHC project, to reach 3000 fb^-1 of accumulated statistics. After the end of a period of data collection, CERN will face a long shutdown to improve overall performance by upgrading the experiments and implementing more advanced technologies and infrastructures. In particular, ATLAS will upgrade parts of the detector, the trigger, and the data acquisition system. It will also implement new strategies and algorithms for processing and transferring the data to the final storage. This PhD thesis presents a study of a new pattern recognition algorithm to be used in the trigger system, the software designed to provide the information necessary to select physical events from background data. The idea is to use the well-known Hough Transform as an algorithm for detecting particle trajectories. The effectiveness of the algorithm has already been validated in the past, independently of particle physics applications, to detect generic shapes in images. Here, a software emulation tool is proposed for the hardware implementation of the Hough Transform, to reconstruct tracks in the ATLAS Trigger and Data Acquisition system. Until now, it has never been implemented in electronics in particle physics experiments, and as a hardware implementation it would provide overall latency benefits. A comparison between the simulated data and the physical system was performed on a Xilinx UltraScale+ FPGA device.
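The core of the Hough Transform for track finding is an accumulator vote: each detector hit votes for every line it could lie on, and collinear hits pile up in one bin. A minimal standalone sketch of the (rho, theta) voting scheme (an illustrative toy, not the ATLAS emulation or its FPGA implementation; all names and binnings are my assumptions):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=200.0):
    """Line-detecting Hough transform: each (x, y) hit votes for all
    (theta, rho) bins satisfying rho = x*cos(theta) + y*sin(theta).
    Peaks in the accumulator correspond to candidate straight tracks."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        # map rho in [-rho_max, rho_max] onto integer accumulator bins
        bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[valid], bins[valid]] += 1
    return acc, thetas

# collinear hits on the line y = x + 10 all vote into one common bin
pts = [(x, x + 10.0) for x in range(0, 50, 5)]
acc, thetas = hough_lines(pts)
peak = np.unravel_index(np.argmax(acc), acc.shape)
```

The accumulator peak recovers the line's normal angle (3π/4 for y = x + 10); this bin-and-vote structure is what maps naturally onto parallel hardware such as an FPGA, since every hit's votes are independent.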
Abstract:
In high-energy hadron collisions, the production at parton level of heavy-flavour quarks (charm and bottom) is described by perturbative Quantum Chromodynamics (pQCD) calculations, given the hard scale set by the quark masses. However, in hadron-hadron collisions, predictions for the heavy-flavour hadrons eventually produced entail knowledge of the parton distribution functions, as well as an accurate description of the hadronisation process. The latter is taken into account via fragmentation functions measured at e$^+$e$^-$ colliders or in ep collisions, but several observations in LHC Run 1 and Run 2 data challenged this picture. In this dissertation, I studied charm hadronisation in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ALICE experiment at the LHC, making use of a large-statistics data sample collected during LHC Run 2. The production of heavy flavour in this collision system will be discussed, along with various hadronisation models implemented in commonly used event generators, which try to reproduce the experimental data, taking into account the unexpected LHC results on the enhanced production of charmed baryons. The role of multiple parton interactions (MPI) will also be presented, and how it affects the total charm production as a function of multiplicity. The ALICE apparatus will be described before moving to the experimental results, which concern the measurement of the relative production rates of the charm hadrons $\Sigma_c^{0,++}$ and $\Lambda_c^+$, which allow us to study the hadronisation mechanisms of charm quarks and to constrain different hadronisation models. Furthermore, the analysis of D mesons ($D^{0}$, $D^{+}$ and $D^{*+}$) as a function of charged-particle multiplicity and spherocity will be shown, investigating the role of multi-parton interactions.
This research is relevant both in its own right and for the mission of the ALICE experiment at the LHC, which is devoted to the study of the Quark-Gluon Plasma.