13 results for Twitter Financial Market Pearson cross correlation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This thesis focuses on the limits that may prevent an entrepreneur from maximizing her value, and on the benefits of diversification in reducing her cost of capital. After reviewing the relevant literature on the differences between traditional corporate finance and entrepreneurial finance, we focus on the biases that arise when traditional finance techniques are applied to the entrepreneurial context. In particular, using the portfolio-theory framework, we determine the degree of under-diversification of entrepreneurs. Borrowing the methodology developed by Kerins et al. (2004), we test a model for the cost of capital according to the firm's industry and the entrepreneur's wealth commitment to the firm. This model takes three market inputs (standard deviation of market returns, expected return of the market, and risk-free rate) and two firm-specific inputs (standard deviation of the firm's returns and correlation between firm and market returns) as parameters, and returns an appropriate cost of capital as output. We determine the expected market return and the risk-free rate according to the extensive literature on the market risk premium, while the market return volatility is estimated with a GARCH specification for the market-index returns. Furthermore, we assume that the firm-specific inputs can be obtained from newly listed firms similar in risk to the firm being evaluated. After building a database including all the data needed for our analysis, we perform an empirical investigation to understand how much of the firm's total risk depends on market risk, and which explanatory variables can explain it. Our results show that the cost of capital declines as the entrepreneur's level of commitment decreases. Therefore, maximizing value for the entrepreneur depends on the fraction of her wealth invested in the firm and on the fraction she sells to outside investors. These results are interesting both for entrepreneurs and for policy makers: the former can benefit from an unbiased model for their valuation; the latter can obtain some guidelines to overcome the recent financial market crisis.
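As an illustration of the kind of calculation this abstract describes, the following is a minimal sketch, in the spirit of Kerins et al. (2004), of the cost of capital for a well-diversified investor versus a fully committed entrepreneur priced on total risk; all numerical inputs are placeholders, not the thesis's estimates.

```python
# Minimal sketch: cost of capital for a diversified investor (CAPM) versus a
# fully committed, under-diversified entrepreneur priced on total risk, in the
# spirit of Kerins et al. (2004). All numbers are illustrative placeholders.
rf = 0.03            # risk-free rate
rm = 0.09            # expected market return
sigma_m = 0.18       # standard deviation of market returns (e.g. a GARCH estimate)
sigma_f = 0.60       # standard deviation of firm returns (from comparable new listings)
rho = 0.25           # correlation between firm and market returns

beta = rho * sigma_f / sigma_m
cost_diversified = rf + beta * (rm - rf)               # CAPM, full diversification
cost_committed = rf + (sigma_f / sigma_m) * (rm - rf)  # total-risk pricing, full commitment

print(f"diversified investor: {cost_diversified:.2%}")
print(f"fully committed entrepreneur: {cost_committed:.2%}")
```

The gap between the two numbers illustrates the abstract's central point: the cost of capital rises with the fraction of wealth the entrepreneur keeps committed to the firm.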
Abstract:
In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, which is ideally suited to estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps, as well as their individual auto-spectra. The QML method is optimal (unbiased and with minimum variance) in pixel space and goes beyond the previous harmonic-space analyses present in the literature. We describe the implementation of the QML method in the {\it BolISW} code and demonstrate its accuracy on simulated maps through a Monte Carlo analysis. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (the declination correction) in NVSS, we can safely use most of the information contained in this survey. In contrast, we neglect the noise in temperature, since WMAP is already cosmic-variance dominated on large scales. Because of a discrepancy between the estimated galaxy auto-spectrum and the theoretical model, we use two different galaxy distributions: the first with a constant bias $b$ and the second with a redshift-dependent bias $b(z)$. Finally, we use the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy density in a flat $\Lambda$CDM model under different likelihood prescriptions. Using only the cross-correlation between the WMAP7 and NVSS maps at 1.8° resolution, we show that $\Omega_\Lambda$ accounts for about 70\% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2 $\sigma$ CL (confidence level).
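For reference, the generic pixel-space QML construction that this abstract refers to can be written in the standard Tegmark-style form below; this is the textbook version, not necessarily the exact notation used in the {\it BolISW} code.

```latex
% Standard pixel-space QML estimator: x is the data vector (here the joint
% CMB-galaxy map), C its total covariance, P_\ell = \partial C / \partial C_\ell.
y_\ell = \frac{1}{2}\,\mathbf{x}^{T}\mathbf{C}^{-1}\mathbf{P}_\ell\,\mathbf{C}^{-1}\mathbf{x} - b_\ell ,
\qquad
F_{\ell\ell'} = \frac{1}{2}\,\operatorname{Tr}\!\left[\mathbf{C}^{-1}\mathbf{P}_\ell\,\mathbf{C}^{-1}\mathbf{P}_{\ell'}\right],
\qquad
\hat{C}_\ell = \sum_{\ell'}\left(\mathbf{F}^{-1}\right)_{\ell\ell'} y_{\ell'}.
```

Here $b_\ell = \tfrac{1}{2}\operatorname{Tr}[\mathbf{C}^{-1}\mathbf{P}_\ell\,\mathbf{C}^{-1}\mathbf{N}]$ removes the noise bias ($\mathbf{N}$ being the noise covariance), and the inverse Fisher matrix $\mathbf{F}^{-1}$ provides the minimum-variance error bars mentioned above.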
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of the seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; in fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation of digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the considerable geometrical spread noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) is substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times are removed or at least reduced. The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of the cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest contribution of the cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another remarkable feature of our procedure is that it does not require long processing times, so the user can immediately check the results. During a field survey, this feature makes a quasi real-time check possible, allowing immediate optimization of the array geometry if suggested by the results at an early stage.
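For illustration, a minimal sketch (not the thesis code) of how a relative delay between two waveform recordings can be measured by cross-correlation, with parabolic interpolation of the correlation peak standing in for the sub-sample time resolution mentioned above:

```python
# Minimal sketch: measure the relative delay between two waveform segments via
# cross-correlation, refining the peak position by parabolic interpolation to
# obtain sub-sample time resolution.
import numpy as np

def cc_delay(x, y, dt):
    """Return the delay (seconds) of y relative to x and the peak correlation."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    cc = np.correlate(y, x, mode="full") / len(x)
    k = int(np.argmax(cc))                 # integer-lag correlation peak
    lag = k - (len(x) - 1)
    # Parabolic interpolation around the peak for sub-sample precision
    if 0 < k < len(cc) - 1:
        a, b, c = cc[k - 1], cc[k], cc[k + 1]
        lag += 0.5 * (a - c) / (a - 2 * b + c)
    return lag * dt, cc[k]
```

Applied to the same phase recorded for two events at one station (global case) or for one event at two array sensors (local case), the returned delay replaces the difference of manual picks.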
Abstract:
The southern Apennines of Italy have experienced several destructive earthquakes in both historical and recent times. The present-day seismicity, characterized by small-to-moderate magnitude earthquakes, was used as a probe to obtain a deeper knowledge of the fault structures on which the largest earthquakes of the past occurred. With the aim of inferring a three-dimensional seismic image, both the problem of data quality and the selection of a reliable and robust tomographic inversion strategy were addressed. Data quality was improved by developing optimized procedures for the measurement of P- and S-wave arrival times, through the use of polarization filtering and the application of a refined re-picking technique based on cross-correlation of waveforms. A linearized, damped, iterative tomographic inversion technique combined with a multiscale inversion strategy was adopted. The retrieved P-wave velocity model indicates a strong velocity variation along a direction orthogonal to the Apenninic chain. This variation defines two domains characterized by relatively low and high velocity values, respectively. A comparison of the inferred P-wave velocity model with a portion of a structural section available in the literature allowed the high-velocity body to be correlated with the Apulia carbonate platform, while the low-velocity bodies were associated with basinal deposits. The deduced Vp/Vs ratio is lower than 1.8 in the shallower part of the model, while at depths between 5 km and 12 km it increases up to 2.1 in correspondence with the area of higher seismicity. This confirms that areas characterized by higher Vp/Vs values are more prone to generating earthquakes, in response to the presence of fluids and higher pore pressures.
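For illustration only, a minimal sketch of a single damped, linearized least-squares inversion step of the kind described above, using LSQR; the sensitivity matrix, residual vector and damping value are hypothetical placeholders, not taken from the thesis:

```python
# Minimal sketch of one damped, linearized tomographic inversion step: solve
# G * dm ~= dres in the least-squares sense with Tikhonov damping via LSQR.
# G plays the role of the travel-time sensitivity matrix, dres the travel-time
# residuals, dm the slowness (velocity-model) update. Placeholder data only.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
G = sparse_random(500, 200, density=0.02, random_state=0)  # hypothetical sensitivity matrix
dres = rng.normal(size=500)                                # hypothetical residuals
damping = 0.5                                              # regularization weight

dm = lsqr(G, dres, damp=damping)[0]   # damped least-squares model update
m0 = np.zeros(200)                    # hypothetical starting model
m_new = m0 + dm                       # updated model; iterate after re-tracing rays
```

In a multiscale strategy, steps of this form are repeated on progressively finer model grids.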
Abstract:
Myocardial perfusion quantification by means of contrast-enhanced cardiac magnetic resonance images relies on time-consuming frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique is based on three steps: reference frame selection, myocardial segmentation and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique, again followed by a regularization motion. To take into account the changes in position, size and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement. Comparisons of perfusion indices computed using both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
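As a pointer to the core similarity measure underlying the registration step, here is a minimal single-scale sketch of normalized cross-correlation between an image and a template; the multiscale, level-set-coupled extension described in the abstract is not reproduced here.

```python
# Minimal sketch: normalized cross-correlation (NCC) of a template against
# every equally sized window of an image. The best alignment is the argmax of
# the returned map. Single-scale illustration only.
import numpy as np

def ncc_map(image, template):
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    out = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            ws = w.std()
            if ws > 0:
                # Pearson correlation between window and template, in [-1, 1]
                out[i, j] = np.mean((w - w.mean()) / ws * t)
    return out
```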
Abstract:
Asset allocation choices are a recurring problem for every investor, who is continuously engaged in combining different asset classes to arrive at an investment consistent with his or her preferences. The need to support asset managers in carrying out their tasks has fed, over time, a vast literature proposing numerous portfolio-construction strategies and models. This thesis attempts to provide a review of some innovative forecasting models and of some strategies within tactical asset allocation, and then to assess their practical implications. First, we verify whether any relationships exist between the dynamics of certain macroeconomic variables and financial markets. The aim is to identify an econometric model capable of guiding managers' strategies in the construction of their investment portfolios. The analysis considers the US market during a period characterized by rapid economic transformations and high equity-price volatility. Second, we examine the validity of momentum and contrarian trading strategies in futures markets, in particular those of the Eurozone, which are well suited to their implementation thanks to the absence of constraints on short selling and to low transaction costs. The investigation shows that both anomalies are stable over time: the abnormal returns persist even when traditional asset-pricing models are used, such as the CAPM, the Fama-French model and the Carhart model. Finally, using an EGARCH-M approach, we produce forecasts of the return volatility of the stocks belonging to the Dow Jones; these forecasts are then used as inputs to determine the views to be fed into the Black-Litterman model. The results show, for several values of the scalar tau, average excess returns of the new combined return vector that exceed the vector of market-equilibrium excess returns, albeit with higher levels of risk.
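As an illustration of the combination step mentioned above, a minimal sketch of the Black-Litterman posterior ("new combined") expected-return vector follows; the equilibrium returns, covariance, view, and scalar tau below are placeholder values, not data from the thesis.

```python
# Minimal sketch of the Black-Litterman combined expected-return vector:
# E[R] = [ (tau*Sigma)^-1 + P' Omega^-1 P ]^-1 [ (tau*Sigma)^-1 pi + P' Omega^-1 q ]
# All inputs are illustrative placeholders.
import numpy as np

pi = np.array([0.03, 0.05, 0.04])           # equilibrium excess returns (prior)
Sigma = np.array([[0.040, 0.010, 0.008],
                  [0.010, 0.060, 0.012],
                  [0.008, 0.012, 0.050]])   # covariance of excess returns
tau = 0.05                                  # the scalar tau discussed in the abstract
P = np.array([[1.0, -1.0, 0.0]])            # one relative view: asset 1 outperforms asset 2
q = np.array([0.02])                        # view magnitude (e.g. from volatility forecasts)
Omega = np.array([[0.0005]])                # uncertainty of the view

A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
combined = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ q)
print(combined)   # the "new combined vector" of expected excess returns
```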
Abstract:
Following a review of the material available in the literature, we asked whether the numerous advertising, promotional and marketing investments, in a word "intangible" investments, generate an increase in the value of the firm in a valuation context, or whether they give rise exclusively to increases in turnover. The most sought-after objective is to capitalize investments in intangible assets such as brand building, the use of patents, activities aimed at customer satisfaction and everything that can be defined as immaterial. Yet all of these coexist in the mare magnum of the same company. Until marketing performance can be incorporated into firm-valuation criteria, there can be no growth, because resources are being used without any criterion of return on investment.
Abstract:
The work offers a critical analysis of the Italian and European rules on investment advice. It first considers the general problems of the relationship between client and intermediary in the provision of advice, with particular reference to information asymmetries and conflicts of interest. In particular, it discusses the traditional regulatory paradigm based on transparency: the findings of behavioural finance suggest a more decisive regulatory intervention, aimed at giving the client-intermediary relationship a fiduciary character. After analysing, in light of the theoretical models illustrated earlier, the historical evolution of the regulation of investment advice in the Italian legal system, the work underlines the role played by the supervisory authority in systematizing the institution, while noting, however, the overall inadequacy of the rules in force for an adequate protection of the investor. It then examines the regime introduced by MiFID, with specific attention to the systematic implications of the extension of the notion of advice carried out by the supervisory authorities: the new configuration of the service has led to an intensification of the fiduciary duties imposed on intermediaries, testifying to a move beyond the transparency paradigm and towards a more interventionist approach on the supply side. The conclusions, however, point to an only partial development of this process, since the value for the client of advice that is not independent, and that is potentially exposed to conflicts of interest above all within multifunctional intermediaries, remains doubtful. Particular emphasis is placed on the need to introduce genuinely independent advice, notwithstanding the difficulties that such a regime encounters in the Italian legal system, also owing to the specific characteristics of the market.
Abstract:
Over the past ten years, the cross-correlation of long time series of ambient seismic noise (ASN) has been widely adopted to extract the surface-wave part of the Green's functions (GF). This stochastic procedure relies on the assumption that the ASN wavefield is diffuse and stationary. At frequencies <1Hz, the ASN is mainly composed of surface waves, whose origin is attributed to the sea-wave climate. Consequently, marked directional properties may be observed, which call for an accurate investigation of the location and temporal evolution of the ASN sources before attempting any GF retrieval. Within this general context, this thesis is aimed at a thorough investigation of the feasibility and robustness of noise-based methods for the imaging of complex geological structures at the local (∼10-50km) scale. The study focused on the analysis of an extended (11 months) seismological data set collected at the Larderello-Travale geothermal field (Italy), an area for which the underground geological structures are well constrained thanks to decades of geothermal exploration. Focusing on the secondary microseism band (SM; f>0.1Hz), I first investigate the spectral features and the kinematic properties of the noise wavefield using beamforming analysis, highlighting a marked variability with time and frequency. For the 0.1-0.3Hz frequency band and during spring and summer, the SM waves propagate with high apparent velocities and from well-defined directions, likely associated with ocean storms in the southern hemisphere. Conversely, at frequencies >0.3Hz the distribution of back-azimuths is more scattered, indicating that this frequency band is the most appropriate for the application of stochastic techniques. For this latter frequency interval, I tested two correlation-based methods, acting in the time (NCF) and frequency (modified-SPAC) domains, respectively yielding estimates of the group- and phase-velocity dispersions. The velocity data provided by the two methods are markedly discordant; comparison with independent geological and geophysical constraints suggests that the NCF results are more robust and reliable.
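A minimal sketch of the time-domain noise cross-correlation (NCF) step described above, stacking cross-correlations of ambient-noise segments recorded at two stations; random traces stand in for real data, and spectral whitening, temporal normalization and the SPAC branch are omitted.

```python
# Minimal sketch: stack cross-correlations of ambient-noise segments from two
# stations to approximate the NCF (surface-wave Green's-function estimate).
# Real workflows add whitening/normalization; the traces here are placeholders.
import numpy as np
from scipy.signal import correlate

fs = 10.0                          # sampling rate (Hz), placeholder
seg = int(3600 * fs)               # one-hour segments
n_segments = 24 * 30               # one month of hourly windows
rng = np.random.default_rng(1)

stack = np.zeros(2 * seg - 1)
for _ in range(n_segments):
    a = rng.normal(size=seg)       # placeholder for the station-A segment
    b = rng.normal(size=seg)       # placeholder for the station-B segment
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    stack += correlate(a, b, mode="full", method="fft") / seg

ncf = stack / n_segments                 # stacked noise cross-correlation function
lags = np.arange(-(seg - 1), seg) / fs   # lag axis in seconds
```

With real data, the emergent coherent arrivals in `ncf` are what the group-velocity dispersion measurements are made on.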
Abstract:
The dissertation contains five parts: an introduction, three major chapters, and a short conclusion. The first chapter starts from a survey and discussion of the literature on corporate law and financial development. The methods commonly used in these cross-sectional analyses are biased, as legal origins are no longer valid instruments; hence, model uncertainty becomes a salient problem. The Bayesian Model Averaging algorithm is applied to test the robustness of the empirical results in Djankov et al. (2008). The analysis finds that their constructed legal index is not robustly correlated with most of the stock market outcome variables. The second chapter looks into the effects of minority shareholder protection in the corporate governance regime on entrepreneurs' ex ante incentives to undertake an IPO. Most of the current literature focuses on the beneficial effect of minority shareholder protection on valuation, while overlooking its private costs in terms of the entrepreneur's control. As a result, the entrepreneur trades off the costs of monitoring against the benefits of cheaper sources of finance when minority shareholder protection improves. The theoretical predictions are empirically tested using panel data and the GMM-sys estimator. The third chapter investigates corporate law and corporate governance reform in China. Chinese corporate law regards shareholder control as the means to the end of pursuing the interests of stakeholders, which is inefficient. The chapter combines recent developments in the theory of the firm, namely the team production theory and the property rights theory, to address this problem. Enlightened shareholder value, which emphasizes the long-term value of the firm, should be adopted as the objective of listed firms. In addition, a move from the mandatory division of power between the shareholder meeting and the board to a default regime is proposed.
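As an indication of the kind of robustness check referred to in the first chapter, a minimal sketch of Bayesian Model Averaging over linear-regression specifications using BIC-based approximate posterior model weights; the data are simulated placeholders and this is a generic BMA scheme, not the dissertation's exact algorithm.

```python
# Minimal sketch: BIC-approximated Bayesian Model Averaging over all subsets of
# candidate regressors, returning posterior inclusion probabilities for each.
# Simulated data; a generic illustration only.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 4
X = rng.normal(size=(n, k))                   # candidate explanatory variables
y = 1.0 + 0.8 * X[:, 0] + rng.normal(size=n)  # only regressor 0 truly matters

def bic(y, Xsub):
    """BIC of an OLS regression of y on a constant plus Xsub."""
    Z = np.column_stack([np.ones(len(y)), Xsub]) if Xsub.shape[1] else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + Z.shape[1] * np.log(len(y))

models, bics = [], []
for r in range(k + 1):
    for subset in itertools.combinations(range(k), r):
        models.append(subset)
        bics.append(bic(y, X[:, list(subset)]))

# Approximate posterior model probabilities: w_m proportional to exp(-BIC_m / 2)
w = np.exp(-(np.array(bics) - min(bics)) / 2)
w /= w.sum()
pip = [sum(w[i] for i, m in enumerate(models) if j in m) for j in range(k)]
print(dict(enumerate(np.round(pip, 3))))      # posterior inclusion probabilities
```

A regressor whose inclusion probability stays high across model space is "robustly correlated" in the sense used above; one that does not is fragile to the choice of specification.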
Abstract:
Microfinance is an initiative which seeks to address financial inclusion, micro-entrepreneurship, and poverty reduction without overburdening governments. However, the microfinance sector is still heavily dependent on the goodwill of donors, and this over-reliance on donations threatens the long-term sustainability of microfinance. Much has been written about this reliance, but research to date has not empirically examined the effect of regulation as a mediator. This is a critical area of study because regulation directly affects microfinance institutions' (MFIs') innovation, and innovation is what shapes the future of microfinance. This thesis considers the role that regulation plays in affecting MFIs and their ability to innovate in products, services and long-term sustainability via access to capital. Interviews were undertaken with stakeholders in MFIs, NGOs, self-regulating bodies, and regulators in India, Pakistan, and Bangladesh. The thesis discusses the findings from these interviews in relation to regulatory measures regarding the financial self-sustainability of MFIs. The conclusions of this thesis have implications for policy and inform the microfinance literature.
Abstract:
Market manipulation is an illegal practice in which a person profits from practices that artificially raise or lower the prices of an instrument in the financial markets. Its prohibition in the EU is based on the 2003 Market Abuse Directive. The current market manipulation regime was broadly considered a success, apart from enforcement and supervisory inconsistencies in the Member States in its initial phase. A review of the market manipulation regime began at the end of 2007 and quickly became incorporated into the wider EU crisis-era reform programme. A number of weaknesses of the current regime have been identified, including regulatory gaps caused by the development of trading venues and financial products, regulatory gaps concerning cross-border and cross-market manipulation (particularly in commodity markets), legal uncertainty resulting from divergent national implementation, and inefficient supervision and enforcement. On 12 June 2014 a new regulatory package on market abuse, comprising the Market Abuse Regulation and the Directive on criminal sanctions for market abuse, was adopted, and several changes will be made to the EU market manipulation regime. A wider scope of the regime and a new prohibition of attempted market manipulation will ensure the prevention of market manipulation at large. Accepted market practices (AMPs) will be subject to strict scrutiny by ESMA in order to reduce divergences in implementation. To enhance the efficiency of supervision and enforcement, the powers of national competent authorities will be strengthened, ESMA is given more power to settle disagreements between national regulators, and the administrative and criminal sanctioning regimes are both further harmonized. In addition, the protection of fundamental rights is stressed by the new market manipulation regime, and some measures are provided to guarantee its realization. Finally, the EU market manipulation regime could serve as a significant reference for China, helping it to refine its immature regime.