13 results for Earnings and dividend announcements, high frequency data, information asymmetry

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna



Abstract:

This dissertation investigates corporate governance and dividend policy in banking. The topic has recently attracted the attention of numerous scholars around the world and remains one of the most discussed issues in banking. The core of the dissertation consists of three papers. The first paper generalizes the main findings in this field of study using a meta-analysis approach. The second paper provides an empirical analysis of the effect of banking corporate governance on dividend payout. Finally, the third paper empirically investigates the effect of the 2007-2010 government bailouts on the corporate governance and dividend policy of banks. The dissertation uses a new hand-collected data set with information on corporate governance, ownership structure and compensation structure for a sample of listed banks from 15 European countries over the period 2005-2010. The empirical papers employ econometric approaches such as the within-group model, the difference-in-differences technique, and propensity score matching based on the nearest-neighbor matching estimator. The main empirical results may be summarized as follows. First, we provide evidence that CEO power and connections to the government are associated with lower dividend payout ratios. This result supports the view that banking regulators are primarily concerned with the safety of the bank, and that powerful bank CEOs can afford to distribute low payout ratios at the expense of minority shareholders. Next, we find that the 2007-2010 government bailouts changed banks' ownership structures and helped keep lending by bailed-out banks at pre-crisis levels. Finally, we provide robust evidence of increased control over the banks that received government money. These findings show the important role of government in overcoming the consequences of the banking crisis and the high quality of governance of public bailouts in European countries.
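
A minimal sketch of propensity score matching with a nearest-neighbor estimator of the kind mentioned above (illustrative Python, not the dissertation's actual code; the file, column names and covariates are assumptions):

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical columns: 'bailed' (treatment), covariates, and 'payout' (outcome).
X_cols = ["size", "leverage", "roa"]          # assumed covariate names
banks = pd.read_csv("banks_panel.csv")        # assumed input file

# 1) Estimate propensity scores with a logistic regression.
ps_model = LogisticRegression(max_iter=1000).fit(banks[X_cols], banks["bailed"])
banks["ps"] = ps_model.predict_proba(banks[X_cols])[:, 1]

treated = banks[banks["bailed"] == 1]
control = banks[banks["bailed"] == 0]

# 2) Match each treated bank to its nearest control on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]

# 3) Average treatment effect on the treated (difference in mean outcomes).
att = treated["payout"].mean() - matched_control["payout"].mean()
print(f"ATT estimate (dividend payout): {att:.4f}")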


Abstract:

This dissertation presents the theory and the work that led to the construction of a high-voltage, high-frequency arbitrary-waveform voltage generator. The generator has been specifically designed to supply power to a wide range of plasma actuators. The system was completely designed, manufactured and tested at the Department of Electrical, Electronic and Information Engineering of the University of Bologna. The generator structure is based on the single-phase cascaded H-bridge multilevel topology and comprises 24 elementary units connected in series to form the typical staircase output voltage waveform of a multilevel converter. The generator can produce a total of 49 voltage levels. Each level is 600 V, making the peak-to-peak output voltage equal to 28.8 kV. The large number of levels provides high output-voltage resolution and thus the possibility of generating arbitrary waveforms. The maximum operating frequency is 20 kHz. A study of the relevant literature shows that this is the first time a cascaded multilevel converter of such dimensions has been constructed. Isolation and control challenges had to be solved to realize the system. The biggest problem of current power-supply technology for plasma actuators is load matching. Resonant converters, the most widely used power supplies, are seriously affected by this problem. The manufactured generator completely solves this issue, providing a consistent output voltage independently of the connected load. This is very important when executing tests and comparing results, because all measurements should be comparable and not dependent on matching issues. The use of a multilevel converter to power a plasma actuator is a real technological breakthrough that has provided, and will continue to provide, very significant experimental results.
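
For reference, the level count and voltage swing quoted above follow from the textbook relations for a symmetric cascaded H-bridge with N series-connected cells of equal cell voltage (here N = 24 and V_cell = 600 V):

\[
N_{\text{levels}} = 2N + 1 = 2\cdot 24 + 1 = 49, \qquad
V_{pp} = 2N\,V_{\text{cell}} = 2\cdot 24\cdot 600\ \text{V} = 28.8\ \text{kV}.
\]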


Abstract:

In this thesis we explore and discuss the properties of the gamma-ray sources included in the first Fermi-LAT catalog of sources above 10 GeV (1FHL), considering both blazars and the non-negligible fraction of still unassociated gamma-ray sources (UGS, 13%). We perform a statistical analysis of a complete sample of hard gamma-ray sources included in the 1FHL catalog, mostly composed of HSP blazars, and we present new VLBI observations of the faintest members of the sample. The new VLBI data, complemented by an extensive archival search for the brighter sources, are essential to gather as large a sample as possible for assessing the significance of the correlation between the radio and very-high-energy (E > 100 GeV) emission bands. After characterizing the statistical properties of HSP blazars and UGS, we take a complementary approach, focusing on an intensive multi-frequency VLBI and gamma-ray observing campaign carried out on one of the closest and most remarkable HSP blazars, Markarian 421.
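
As an illustration only (not the statistical procedure actually used in the thesis), the significance of a correlation between radio and very-high-energy fluxes can be gauged with a simple permutation test; radio_flux.txt and vhe_flux.txt are hypothetical files of matched per-source measurements:

import numpy as np

rng = np.random.default_rng(0)
radio_flux = np.loadtxt("radio_flux.txt")   # assumed input, one value per source
vhe_flux = np.loadtxt("vhe_flux.txt")       # assumed input, same source ordering

# Observed Pearson correlation between the two bands.
r_obs = np.corrcoef(radio_flux, vhe_flux)[0, 1]

# Null distribution: shuffle one band to break any physical association.
n_perm = 10_000
r_null = np.array([
    np.corrcoef(rng.permutation(radio_flux), vhe_flux)[0, 1]
    for _ in range(n_perm)
])
p_value = np.mean(np.abs(r_null) >= np.abs(r_obs))
print(f"r = {r_obs:.3f}, permutation p-value = {p_value:.4f}")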


Abstract:

In this thesis, I investigated the evolution of the high-redshift (z > 3) AGN population by collecting data from some of the major Chandra and XMM-Newton surveys. The final sample (141 sources) is one of the largest selected at z > 3 in the X-rays and is characterised by a very high redshift completeness (98%). I derived the spectral slopes and obscurations through a spectral analysis and assessed the high-z evolution by deriving the luminosity function and the number counts of the sample. The best representation of the AGN evolution is a pure density evolution (PDE) model: the AGN space density is found to decrease by a factor of 10 from z = 3 to z = 5. I also found that about 50% of AGN are obscured by large column densities (log NH > 23). By comparing these data with those in the Local Universe, I found a positive evolution of the obscured AGN fraction with redshift, especially for luminous (log LX > 44) AGN. I also studied the gas content of z < 1 AGN-hosting galaxies and compared it with that of inactive galaxies. For the first time, I applied to AGN a method for deriving the gas mass previously used only for inactive galaxies. AGN are found to live preferentially in gas-rich galaxies. On the one hand, this result can help us understand the AGN triggering mechanisms; on the other hand, it explains why AGN are preferentially hosted by star-forming galaxies.
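
For reference, in a pure density evolution model only the normalisation of the luminosity function evolves while its shape stays fixed; in the usual parameterisation (not necessarily the exact form adopted in the thesis), the reported decline reads

\[
\Phi(L_X, z) = e_d(z)\,\Phi(L_X, z_0), \qquad \frac{e_d(z=5)}{e_d(z=3)} \approx \frac{1}{10}.
\]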


Abstract:

This thesis consists of three self-contained papers. In the first paper I analyze the labor supply behavior of Bologna pizza delivery vendors. Recent influential papers analyze the labor supply behavior of taxi drivers (Camerer et al., 1997; Crawford and Meng, 2011) and suggest that reference-dependent preferences have an important influence on drivers' labor-supply decisions. Unlike previous papers, I am able to identify an exogenous and transitory change in labor demand. Using high frequency data on orders and rainfall as an exogenous demand shifter, I consistently find that reference-dependent preferences play no role in their labor supply decisions and that the behavior of pizza vendors is perfectly consistent with the predictions of the standard model of labor supply. In the second paper, I investigate how the voting behavior of Members of Parliament is influenced by the Members seated nearby. By exploiting the random seating arrangements in the Icelandic Parliament, I show that being seated next to Members of a different party increases the probability of not being aligned with one's own party. Using the exact spatial orientation of the peers, I provide evidence supporting the hypothesis that interaction is the main channel explaining these results. In the third paper, I estimate the trade flows that would have occurred between the UK and Europe if the UK had joined the Euro. As an alternative to the standard log-linear gravity equation, I employ the synthetic control method. I show that aggregate trade flows between Britain and Europe would have been 13% higher if the UK had adopted the Euro.
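
For reference, the standard log-linear gravity equation mentioned as the conventional alternative is usually specified as (textbook form, not the exact regression estimated in the paper)

\[
\ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln d_{ij} + \gamma\,\mathrm{EMU}_{ij} + \varepsilon_{ij},
\]

where T_ij is bilateral trade, Y_i and Y_j are the partners' GDPs, d_ij is their distance (with beta_3 expected to be negative), and EMU_ij is a common-currency dummy.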


Abstract:

The internet and digital technologies have revolutionized the economy. Regulating the digital market has become a priority for the European Union. While promoting innovation and development, EU institutions must ensure that the digital market maintains a competitive structure. Among the numerous elements characterizing the digital sector, users' data are particularly important. Digital services are centered around personal data, the accumulation of which has contributed to the concentration of market power in the hands of a few large providers. As a result, data-driven mergers and data-related abuses have gained a central role for the purposes of EU antitrust enforcement. In light of these considerations, this work aims at assessing whether EU competition law is well suited to address data-driven mergers and data-related abuses of dominance. These conducts are of crucial importance to the maintenance of competition in the digital sector, insofar as the accumulation of users' data constitutes a fundamental competitive advantage. To begin with, Part 1 addresses the specific features of the digital market and their impact on the definition of the relevant market and the assessment of dominance by antitrust authorities. Secondly, Part 2 analyzes the EU's case law on data-driven mergers to verify whether merger control is well suited to address these concentrations. Thirdly, Part 3 discusses abuses of dominance in the phase of data collection and the legal frameworks applicable to these conducts. Fourthly, Part 4 focuses on access to essential datasets and the indirect effects of anticompetitive conducts on rivals' ability to access users' information. Finally, Part 5 discusses differential pricing practices implemented online on the basis of personal data. As will be argued, the combination of efficient competition law enforcement and the desirable adoption of a specific regulation seems to be the best solution to face the challenges raised by data-related dominance.


Abstract:

This work is part of a project promoted by the Emilia-Romagna Region that aims at encouraging research activities to support the innovation strategies of the regional economic system through the exploitation of new data sources. To this end, a database containing administrative data is provided by the Municipality of Bologna, obtained by linking data from the Register Office of the Municipality with fiscal data from the tax returns submitted to the Revenue Agency and released by the Ministry of Economy and Finance for the period 2002-2017. The main purpose of the project is the analysis of the medium-term financial and distributional trends in the income of citizens residing in the Municipality of Bologna. Exploiting this innovative data source allows us to analyse the dynamics of income at the municipal level, overcoming the lack of information in official survey-based statistics. We investigate these trends by building inequality indicators and by examining the persistence of in-work poverty. Our results provide important information for improving the effectiveness and equity of welfare policies at the local level, and for guiding the distribution of economic and social support and urban redevelopment interventions across different areas of the Municipality.
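
As a minimal sketch of one of the inequality indicators that such a database makes computable, the Gini coefficient over individual incomes can be obtained as follows (illustrative Python, not the project's actual pipeline; the file and column names are assumptions):

import numpy as np
import pandas as pd

def gini(x: np.ndarray) -> float:
    """Gini coefficient of a non-negative income array."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard formula based on the cumulative shares of the ordered incomes.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

incomes = pd.read_csv("incomes.csv")["taxable_income"]   # assumed file and column
print(f"Gini coefficient: {gini(incomes.values):.3f}")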


Abstract:

Machine learning is widely adopted to decode multivariate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) have outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state of the art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and their architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features remain largely unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting in a data-driven way the neural signatures underlying the different decoded brain or behavioral states. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses these limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal and frequency domains, and proved better than canonical EEG analyses at highlighting and enhancing the relevant neural features related to P300 and motor states. Remarkably, these analyses could in perspective be used to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
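
A minimal sketch of a light, compact CNN for multi-channel EEG decoding in the spirit described above (a generic PyTorch illustration, not one of the architectures developed in the thesis; channel count, window length and class count are placeholders):

import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """Small CNN: temporal convolution, depthwise spatial convolution, pooling, linear classifier."""
    def __init__(self, n_channels=32, n_samples=500, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal filtering along the time axis.
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12), bias=False),
            nn.BatchNorm2d(8),
            # Depthwise spatial filtering across EEG channels.
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(16 * (n_samples // 8), n_classes)

    def forward(self, x):           # x: (batch, 1, n_channels, n_samples)
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = CompactEEGNet()
dummy = torch.randn(4, 1, 32, 500)  # batch of 4 hypothetical EEG trials
print(model(dummy).shape)           # torch.Size([4, 2])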


Abstract:

For many years, RF and analog integrated circuits were mainly developed in bipolar and compound semiconductor technologies because of their better performance. In recent years, advances in CMOS technology have allowed analog and RF circuits to be built in CMOS as well, but using CMOS instead of bipolar technology in RF applications has brought more issues in terms of noise. Noise cannot be completely eliminated; it ultimately limits the accuracy of measurements and sets a lower bound on how small a signal can be detected and processed in an electronic circuit. One kind of noise that affects MOS transistors much more than bipolar ones is low-frequency noise. In MOSFETs, low-frequency noise is mainly of two kinds: flicker or 1/f noise and random telegraph signal (RTS) noise. The objective of this thesis is to characterize and model low-frequency noise by studying RTS and flicker noise under both constant and switched bias conditions. The effect of different biasing schemes on both RTS and flicker noise has been investigated in the time and frequency domains.
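
For reference, the standard textbook spectral forms of the two noise types characterised in the thesis are (general expressions, not fitted results from this work)

\[
S_{\text{flicker}}(f) = \frac{K}{f^{\gamma}}, \qquad \gamma \approx 1,
\]

for flicker noise, while a two-level RTS of amplitude ΔI with mean high- and low-state times τ_h and τ_l has the Lorentzian spectrum

\[
S_{\text{RTS}}(f) = \frac{4\,(\Delta I)^{2}}{(\tau_h + \tau_l)\left[\left(\frac{1}{\tau_h} + \frac{1}{\tau_l}\right)^{2} + (2\pi f)^{2}\right]}.
\]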


Abstract:

The theory of the 3D multipole probability tomography method (3D GPT), used to image the source poles, dipoles, quadrupoles and octopoles of a geophysical vector or scalar field dataset, is developed. A geophysical dataset is assumed to be the response of an aggregation of poles, dipoles, quadrupoles and octopoles. These physical sources are used to reconstruct, without a priori assumptions, the most probable position and shape of the true buried geophysical sources, by determining the location of their centres and of the critical points of their boundaries, such as corners, wedges and vertices. This theory is then adapted to the geoelectrical, gravity and self-potential methods. A few synthetic examples using simple geometries and three field examples are discussed in order to demonstrate the notably enhanced resolution power of the new approach. First, the application to a field example related to a dipole-dipole geoelectrical survey carried out in the archaeological park of Pompeii is presented. The survey was aimed at recognizing remains of the ancient Roman urban network, including roads, squares and buildings, buried under the thick pyroclastic cover that fell during the 79 AD Vesuvius eruption. The revealed anomaly structures are ascribed to well-preserved remnants of some aligned walls of Roman edifices, buried and partially destroyed by the 79 AD Vesuvius pyroclastic fall. Then, a field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging as accurately as possible the differential mass density structure within the first few kilometres of depth inside the volcanic apparatus. An assemblage of vertical prismatic blocks appears to be the most probable gravity model of the Etna apparatus within the first 5 km of depth below sea level. Finally, an experimental SP dataset collected in the Mt. Somma-Vesuvius volcanic district (Naples, Italy) is processed in order to define the location and shape of the sources of two SP anomalies of opposite sign detected in the northwestern sector of the surveyed area. The modelled sources are interpreted as the polarization state induced by an intense hydrothermal convective flow mechanism within the volcanic apparatus, from the free surface down to a depth of about 3 km b.s.l.


Abstract:

The first part of this thesis focuses on the construction of a twelve-phase asynchronous machine for More Electric Aircraft (MEA) applications. The aerospace world has identified electrification as the way to improve the efficiency, reliability and maintainability of an aircraft. This idea leads to a new way of managing and distributing electrical services on board the aircraft, making it possible to remove or reduce the hydraulic, mechanical and pneumatic systems. The second part of this dissertation is dedicated to enhancing the control range of matrix converters (MCs) operating with a non-unity input power factor and, at the same time, to reducing the switching power losses. The analysis leads to the closed-form determination of a modulation strategy whose control range, in terms of output voltage and input power factor, is greater than that of the traditional strategies under the same operating conditions, and which also reduces the switching power losses.
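
For context, a well-known result from the matrix converter literature (not a finding of this thesis) is that traditional modulation strategies operating at unity input power factor are bounded by the voltage transfer ratio

\[
q = \frac{V_o}{V_i} \le \frac{\sqrt{3}}{2} \approx 0.866,
\]

and with traditional strategies the attainable output voltage is generally reduced further when a non-unity input power factor is requested, which is the constraint the proposed closed-form strategy aims to relax.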