968 results for text vector space model
Abstract:
Block factor methods offer an attractive approach to forecasting with many predictors. These methods distill the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically adjust the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
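The weight-updating mechanism can be made concrete with a short sketch. It follows the standard forgetting-factor recursion that DMA is built on; the per-model predictive likelihoods are placeholders that would in practice come from Kalman filters run on each candidate combination of factor blocks, and the forgetting factor is illustrative.

```python
# A minimal sketch of the dynamic model averaging (DMA) weight recursion.
import numpy as np

def dma_update(prev_probs, pred_lik, alpha=0.99):
    """One step of the DMA recursion.

    prev_probs : pi_{t-1|t-1}, probabilities over the K candidate models
    pred_lik   : p_k(y_t | y^{t-1}), predictive likelihood of each model k
    alpha      : forgetting factor; smaller values allow faster switching
    """
    # Prediction step: flatten the weights with the forgetting factor.
    pred_probs = prev_probs ** alpha
    pred_probs /= pred_probs.sum()
    # Update step: reweight by how well each model just forecast.
    post = pred_probs * pred_lik
    return post / post.sum()

# Example with 4 models (e.g. 4 combinations of factor blocks):
probs = np.full(4, 0.25)
probs = dma_update(probs, pred_lik=np.array([0.8, 0.1, 0.05, 0.05]))
dms_choice = probs.argmax()   # dynamic model selection picks the mode
```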
Abstract:
In an effort to meet its obligations under the Kyoto Protocol, in 2005 the European Union introduced a cap-and-trade scheme where mandated installations are allocated permits to emit CO2. Financial markets have developed that allow companies to trade these carbon permits. For the EU to achieve reductions in CO2 emissions at a minimum cost, it is necessary that companies make appropriate investments and policymakers design optimal policies. In an effort to clarify the workings of the carbon market, several recent papers have attempted to statistically model it. However, the European carbon market (EU ETS) has many institutional features that potentially impact on daily carbon prices (and associated financial futures). As a consequence, the carbon market has properties that are quite different from conventional financial assets traded in mature markets. In this paper, we use dynamic model averaging (DMA) in order to forecast in this newly-developing market. DMA is a recently-developed statistical method which has three advantages over conventional approaches. First, it allows the coefficients on the predictors in a forecasting model to change over time. Second, it allows for the entire forecasting model to change over time. Third, it surmounts statistical problems which arise from the large number of potential predictors that can explain carbon prices. Our empirical results indicate that there are both important policy and statistical benefits with our approach. Statistically, we present strong evidence that there is substantial turbulence and change in the EU ETS market, and that DMA can model these features and forecast accurately compared to conventional approaches. From a policy perspective, we discuss the relative and changing role of different price drivers in the EU ETS. Finally, we document the forecast performance of DMA and discuss how this relates to the efficiency and maturity of this market.
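Advantage (1) is typically implemented by letting the regression coefficients follow a random walk and filtering them with the Kalman recursions. A minimal sketch, with illustrative observation and state noise variances V and W (the paper's exact specification is not reproduced):

```python
# One Kalman step for a time-varying parameter (TVP) regression:
#   y_t = x_t' beta_t + e_t,   beta_t = beta_{t-1} + u_t.
import numpy as np

def tvp_filter_step(m, P, x, y, V=1.0, W=0.01):
    P = P + W * np.eye(len(m))      # predict: coefficients drift as a random walk
    f = x @ m                        # one-step-ahead forecast of y_t
    q = x @ P @ x + V                # forecast variance (feeds the DMA weights)
    k = P @ x / q                    # Kalman gain
    m = m + k * (y - f)              # update the coefficient estimate
    P = P - np.outer(k, x @ P)
    return m, P, f, q

# Example: two predictors, one new observation.
m, P = np.zeros(2), np.eye(2)
m, P, f, q = tvp_filter_step(m, P, x=np.array([1.0, 0.5]), y=1.2)
```

The one-step-ahead forecast density implied by f and q is exactly what the DMA recursion sketched earlier consumes as a predictive likelihood.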
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise because macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches which aim to overcome the resulting problems is surveyed. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail: the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the open problem considered is to determine whether all self-dual vector space access structures are of this kind. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
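The multiplicative property at the heart of these schemes can be illustrated with the classic multiplicative LSSS, Shamir's scheme. A toy sketch over a prime field with illustrative parameters follows; it does not handle malicious faults, which is where the Berlekamp–Welch-style decoding mentioned above enters.

```python
# Toy sketch of why Shamir's scheme is multiplicative: pointwise products
# of shares lie on a polynomial of degree 2t whose constant term is the
# product of the secrets, so 2t+1 parties can reconstruct it.
import random

P = 2**31 - 1          # a prime, so we work in the field GF(P)
T = 1                  # degree-t polynomials: any t+1 shares reconstruct

def share(secret, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(T)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

a_shares = share(6, 3)
b_shares = share(7, 3)
prod_shares = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(prod_shares) == 42   # all 2t+1 = 3 shares are needed
```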
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. SimpleMKL, an efficient algorithm that wraps a Support Vector Regression model to optimize the MKL weights, is used for the analysis. In this sense, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested on simulated linear and nonlinear time series (autoregressive, Hénon and Lorenz series).
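A minimal sketch of the kernel-combination idea (not SimpleMKL's reduced-gradient weight optimization): one RBF basis kernel per candidate lagged input, combined with weights d and fed to an SVR on a precomputed kernel. The toy series, lag set and weights are illustrative.

```python
# MKL-style input selection for time series: one basis kernel per lag,
# combined with weights; lags whose weight is driven to ~0 are discarded.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

y = np.sin(np.linspace(0, 20, 220))                       # toy series
X = np.column_stack([y[3 - k:-k] for k in (1, 2, 3)])     # lags 1..3
target = y[3:]

# The weights d_m would come from SimpleMKL's descent; fixed here.
kernels = [rbf_kernel(X[:, [m]]) for m in range(3)]
d = np.array([0.7, 0.3, 0.0])          # lag 3 effectively discarded
K = sum(w * Km for w, Km in zip(d, kernels))

model = SVR(kernel="precomputed").fit(K, target)
```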
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified within the algebraic structure of A2(P), along with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, quite elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating or combining likelihood and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
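For orientation, the finite-dimensional structure being generalized can be stated explicitly. The formulas below are the standard simplex case from compositional data analysis; in A2(P) the finite geometric mean and sums are replaced by integrals against the reference measure P.

```latex
% Centered log-ratio (clr) transform on the D-part simplex:
\[
  \operatorname{clr}(x) = \Bigl(\ln\tfrac{x_1}{g(x)},\,\dots,\,\ln\tfrac{x_D}{g(x)}\Bigr),
  \qquad
  g(x) = \Bigl(\textstyle\prod_{i=1}^{D} x_i\Bigr)^{1/D}.
\]
% The Aitchison inner product and distance are the Euclidean ones
% after the clr transform:
\[
  \langle x,\,y \rangle_A = \bigl\langle \operatorname{clr}(x),\,\operatorname{clr}(y) \bigr\rangle,
  \qquad
  d_A(x,y) = \bigl\lVert \operatorname{clr}(x) - \operatorname{clr}(y) \bigr\rVert.
\]
```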
Abstract:
We estimate the response of stock prices to exogenous monetary policy shocks using a vector-autoregressive model with time-varying parameters. Our evidence points to protracted episodes in which, after a short-run decline, stock prices increase persistently in response to an exogenous tightening of monetary policy. That response is clearly at odds with the "conventional" view on the effects of monetary policy on bubbles, as well as with the predictions of bubbleless models. We also argue that it is unlikely that such evidence can be accounted for by an endogenous response of the equity premium to monetary policy shocks.
Abstract:
The final year project came to us as an opportunity to get involved in a topic which had proved attractive during our economics degree: statistics and its application to the analysis of economic data, i.e. econometrics. Moreover, the combination of econometrics and computer science is a very hot topic nowadays, given the Information Technologies boom of the last decades and the consequent exponential increase in the amount of data collected and stored every day. Data analysts able to deal with Big Data and extract useful results from it are in high demand, and in our view the work they do, although sometimes controversial in terms of ethics, is a clear source of added value for both private corporations and the public sector. For these reasons, the essence of this project is the study of a statistical instrument, directly related to computer science, for the analysis of large datasets: Partial Correlation Networks. The structure of the project has been shaped by our objectives as it developed. First, the characteristics of the instrument are explained, from the basic ideas up to the features of the underlying model, with the goal of presenting the SPACE model as a tool for estimating interconnections between the elements of large data sets. Afterwards, an illustrative simulation is performed to show the power and efficiency of the model. Finally, the model is put into practice on a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series. In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective, useful instrument for extracting valuable results from Big Data. The findings throughout this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach of Peng et al. (2009) works well under the assumption of sparsity. Moreover, partial correlation networks are shown to be a valid tool for representing cross-sectional interconnections between the elements of large data sets. The scope of the project is nevertheless limited, and some sections would have benefited from deeper analysis: intertemporal connections between elements, the choice of the tuning parameter lambda, and a more detailed examination of the results in the real-data application are examples of aspects in which the project could be extended. To sum up, the statistical tool analyzed has proved very useful for finding the relationships that connect the elements of a large data set; partial correlation networks allow the owner of such a set to observe and analyze linkages that might otherwise have been overlooked.
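A minimal sketch of such a pipeline, using scikit-learn's GraphicalLasso as a stand-in for the SPACE estimator of Peng et al. (2009) (which has no scikit-learn implementation) and synthetic data; the regularization parameter alpha plays the role of the tuning parameter lambda mentioned above.

```python
# Estimate a sparse precision matrix, read off the partial correlation
# network: rho_ij = -theta_ij / sqrt(theta_ii * theta_jj).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))            # placeholder data matrix

theta = GraphicalLasso(alpha=0.1).fit(X).precision_
d = np.sqrt(np.diag(theta))
partial_corr = -theta / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Edges of the network: pairs with non-negligible partial correlation.
edges = np.argwhere(np.triu(np.abs(partial_corr) > 0.05, k=1))
```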
Abstract:
The aim of this thesis is to study the factors that affect the price of gold in the short and long run. Second, the thesis examines the different investment vehicles available for investing in gold. The data consist of monthly observations from December 1972 to August 2006 on US and world price indices, US and world inflation and inflation volatility, the gold beta, the gold lease rate, credit risk, and US and world exchange rate indices. Cointegration regression techniques were used to build a model for studying the main factors that drive the gold price. A literature review was used to establish how one can invest in gold. The empirical results are consistent with previous studies. Support was found for gold being a long-run hedge against inflation and for gold and US inflation moving together in the long run. However, short-run factors influence the gold price more than long-run factors. Gold is also an accessible asset for investors, as it is readily available in the markets and numerous instruments exist.
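A minimal sketch of the kind of cointegration check involved, using an Engle-Granger test from statsmodels on synthetic stand-ins for the gold price and a US price index (the thesis's exact regression specification is not reproduced here):

```python
# Engle-Granger cointegration test between two I(1) series.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
cpi = np.cumsum(rng.standard_normal(400))      # I(1) price index
gold = 2.0 * cpi + rng.standard_normal(400)    # cointegrated with cpi

t_stat, p_value, crit = coint(gold, cpi)
print(p_value)   # small p-value: reject "no cointegration"
```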
Abstract:
Astrocyte reactivity is a hallmark of neurodegenerative diseases (ND), but its effects on disease outcomes remain highly debated. Elucidation of the signaling cascades inducing reactivity in astrocytes during ND would help characterize the function of these cells and identify novel molecular targets to modulate disease progression. The Janus kinase/signal transducer and activator of transcription 3 (JAK/STAT3) pathway is associated with reactive astrocytes in models of acute injury, but it is unknown whether this pathway is directly responsible for astrocyte reactivity in progressive pathological conditions such as ND. In this study, we examined whether the JAK/STAT3 pathway promotes astrocyte reactivity in several animal models of ND. The JAK/STAT3 pathway was activated in reactive astrocytes in two transgenic mouse models of Alzheimer's disease and in a mouse and a nonhuman primate lentiviral vector-based model of Huntington's disease (HD). To determine whether this cascade was instrumental for astrocyte reactivity, we used a lentiviral vector that specifically targets astrocytes in vivo to overexpress the endogenous inhibitor of the JAK/STAT3 pathway [suppressor of cytokine signaling 3 (SOCS3)]. SOCS3 significantly inhibited this pathway in astrocytes, prevented astrocyte reactivity, and decreased microglial activation in models of both diseases. Inhibition of the JAK/STAT3 pathway within reactive astrocytes also increased the number of huntingtin aggregates, a neuropathological hallmark of HD, but did not influence neuronal death. Our data demonstrate that the JAK/STAT3 pathway is a common mediator of astrocyte reactivity that is highly conserved between disease states, species, and brain regions. This universal signaling cascade represents a potent target to study the role of reactive astrocytes in ND.
Abstract:
Minimizing the risks of an investment portfolio without sacrificing expected returns is one of the key interests of an investor. Typically, portfolio diversification is achieved using two main strategies: investing in different classes of assets thought to have little or negative correlation, or investing in similar classes of assets in multiple markets through international diversification. This study investigates the integration of the Russian financial markets over the period from January 1, 2003 to December 28, 2007 using daily data. The aim is to test the intra-country and cross-country integration of the Russian stock and bond markets across seven countries. Our methodology for testing the short-run dynamics is the vector autoregressive model (VAR), and for the long-run cointegration testing we use the Johansen cointegration test, an extension of VAR. The empirical results of this study show that the Russian stock and bond markets are not integrated in the long run at either the intra-country or the cross-country level, which means that the markets are relatively segmented. The short-run dynamics are also relatively weak. This implies the presence of potential gains from diversification.
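A minimal sketch of the two-step methodology, on synthetic stand-ins for two market index series; the lag order and deterministic terms are illustrative.

```python
# Short-run dynamics via a VAR, long-run cointegration via Johansen.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
levels = np.cumsum(rng.standard_normal((500, 2)), axis=0)   # two I(1) series

# Short-run dynamics: VAR on first differences, lag order chosen by AIC.
var_res = VAR(np.diff(levels, axis=0)).fit(maxlags=5, ic="aic")

# Long-run: Johansen trace test on the levels.
jres = coint_johansen(levels, det_order=0, k_ar_diff=1)
print(jres.lr1, jres.cvt)   # trace statistics vs. critical values
```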
Abstract:
Available empirical evidence regarding the degree of symmetry between European economies in the context of Monetary Unification is not conclusive. This paper offers new empirical evidence concerning this issue related to the manufacturing sector. Instead of using a static approach as most empirical studies do, we analyse the dynamic evolution of shock symmetry using a state-space model. The results show a clear reduction of asymmetries in terms of demand shocks between 1975 and 1996, with an increase in terms of supply shocks at the end of the period.
Abstract:
Climate change and the depletion of fossil fuels have considerably advanced research on renewable energy sources. In addition, the ever-growing demand for electrical energy increases interest in distributed generation and alternative energy sources. The most common energy sources for distributed generation are wind power, solar power and, as a newcomer, fuel cells. Connecting a fuel cell to the grid requires power electronics; in a simple fuel cell application, the fuel cell is typically connected in series with a galvanically isolating unidirectional DC/DC converter and an inverter. A battery can be used alongside the fuel cell to smooth the voltage it supplies, in which case a bidirectional DC/DC converter capable of transferring energy in both directions is needed between the battery and the fuel cell. This master's thesis presents a model of a bidirectional DC/DC converter based on the state-space averaging method, together with a current control designed from the model. The converter topology studied is a full-bridge boost converter, and the control method is average current control. The work resulted in a state-space model of the bidirectional full-bridge boost converter and a controller suitable for regulating its input inductor current. The controller performs well under normal conditions, but in special situations, such as a sudden change in the converter input voltage, a more powerful controller would be required to achieve a faster rise time without overshoot and oscillation.
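The state-space averaging idea can be sketched for a plain (non-isolated) boost stage; the thesis's full-bridge bidirectional topology is averaged the same way, just with more states. Component values and the duty cycle below are illustrative.

```python
# State-space averaged model of an ideal boost converter: the two
# switch-position state equations are averaged with weights d and (1-d).
import numpy as np
from scipy.integrate import solve_ivp

L, C, R, Vin, d = 1e-3, 470e-6, 10.0, 24.0, 0.5

def averaged_model(t, x):
    iL, vC = x
    diL = (Vin - (1 - d) * vC) / L       # inductor current dynamics
    dvC = ((1 - d) * iL - vC / R) / C    # output capacitor dynamics
    return [diL, dvC]

sol = solve_ivp(averaged_model, (0, 0.05), [0.0, 0.0], max_step=1e-5)
print(sol.y[1, -1])   # settles near Vin / (1 - d) = 48 V
```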
Abstract:
Time series analysis has gone through different developmental stages before arriving at the current modern approaches. These can be broadly categorized as the classical and the modern approach. In the classical approach, the basic target of the analysis is to describe the major behaviour of the series without necessarily dealing with its underlying structure. In contrast, the modern approach strives to summarize the behaviour of the series through its underlying structure, so that the series can be represented explicitly. In other words, this approach studies the series structurally. The components that make up the observations, such as the trend, seasonality, regression and disturbance terms, are modelled explicitly before everything is put together into a single state-space model which gives a natural interpretation of the series. The target of this diploma work is to apply the modern approach of time series analysis known as the state-space approach, more specifically the dynamic linear model, to perform trend analysis on ionosonde measurement data. The data are a time series of the peak height of the F2 layer, denoted hmF2, which is the height of highest electron density. In addition, the work investigates the connection between solar activity and the peak height of the F2 layer. Based on the results, the peak height of the F2 layer shows a decrease over the observation period as well as a nonlinear positive correlation with solar activity.
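A minimal sketch of such a trend analysis, fitting a local linear trend model (a standard dynamic linear model) with statsmodels to a synthetic stand-in for the hmF2 series:

```python
# Local linear trend state-space model: extract the smoothed long-term
# trend from a noisy, quasi-periodic series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(300)
hmf2 = 300 - 0.05 * t + 5 * np.sin(2 * np.pi * t / 27) + rng.standard_normal(300)

model = sm.tsa.UnobservedComponents(hmf2, level="local linear trend")
res = model.fit(disp=False)
trend = res.smoothed_state[0]     # smoothed level: the long-term trend
print(trend[-1] - trend[0])       # negative: declining peak height
```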