942 results for Conditional Covariance
Abstract:
Randomness in the source condition, in addition to heterogeneity in the system parameters, can also be a major source of uncertainty in the concentration field. Hence, a more general problem formulation is needed that considers randomness in both the source condition and the system parameters. When the source varies with time, the unsteady problem can be solved using the unit response function. When the system parameters are random, the response function itself becomes a random function that depends on the randomness in those parameters. In the present study, the source is modelled as a random discrete process with either a fixed interval or a random interval (a Poisson process), and an attempt is made to assess the relative effects of the various types of source uncertainty on the probabilistic behaviour of the concentration in a porous medium while the system parameters are also modelled as random fields. Analytical expressions for the mean and covariance of the concentration due to a random discrete source are derived in terms of the mean and covariance of the unit response function. The probabilistic behaviour of the random response function is obtained using a perturbation-based stochastic finite element method (SFEM), which performs well for mild heterogeneity. The proposed method is applied to both 1-D and 3-D solute transport problems. The results obtained with SFEM are compared with Monte Carlo simulation for the 1-D problems.
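The structure described above, in which concentration is a superposition of pulse responses with random strengths, arrival times and system parameters, can be illustrated with a brute-force Monte Carlo sketch. This is not the paper's perturbation-based SFEM; the exponential-decay unit response and all parameter values below are illustrative assumptions only.

```python
# Monte Carlo sketch: mean and covariance of concentration produced by a
# random discrete (Poisson) source acting through a random unit response.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 201)          # observation times
n_mc = 2000                              # Monte Carlo replications

def unit_response(t, decay):
    """Hypothetical unit response: instantaneous release decaying in time."""
    return np.where(t >= 0.0, np.exp(-decay * t), 0.0)

samples = np.zeros((n_mc, t.size))
for i in range(n_mc):
    decay = rng.lognormal(mean=0.0, sigma=0.3)        # random "system parameter"
    n_pulses = rng.poisson(5)                         # Poisson number of releases
    release_times = rng.uniform(0.0, 10.0, n_pulses)  # random release instants
    strengths = rng.normal(1.0, 0.2, n_pulses)        # random source strengths
    c = np.zeros_like(t)
    for tk, qk in zip(release_times, strengths):
        c += qk * unit_response(t - tk, decay)        # superpose pulse responses
    samples[i] = c

mean_c = samples.mean(axis=0)            # estimate of E[c(t)]
cov_c = np.cov(samples, rowvar=False)    # estimate of Cov[c(t1), c(t2)]
print(mean_c[100], cov_c[100, 100])
```

The analytical expressions in the study replace this sampling step, expressing the same mean and covariance directly in terms of the moments of the source process and of the unit response function.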
Abstract:
Data assimilation provides an initial atmospheric state, called the analysis, for Numerical Weather Prediction (NWP). This analysis consists of pressure, temperature, wind, and humidity on a three-dimensional NWP model grid. Data assimilation blends meteorological observations with the NWP model in a statistically optimal way. The objective of this thesis is to describe methodological development carried out in order to allow data assimilation of ground-based measurements of the Global Positioning System (GPS) into the High Resolution Limited Area Model (HIRLAM) NWP system. Geodetic processing produces observations of tropospheric delay. These observations can be processed either for vertical columns at each GPS receiver station or for the individual propagation paths of the microwave signals. These alternative processing methods result in Zenith Total Delay (ZTD) and Slant Delay (SD) observations, respectively. ZTD and SD observations are of use in the analysis of atmospheric humidity. A method is introduced for estimating the horizontal error covariance of ZTD observations. The method makes use of observation-minus-model-background (OmB) sequences of ZTD and conventional observations. It is demonstrated that the ZTD observation error covariance is relatively large at station separations shorter than 200 km, but non-zero covariances also appear at considerably larger station separations. The relatively low density of radiosonde observing stations limits the ability of the proposed estimation method to resolve the shortest length-scales of the error covariance. SD observations are shown to contain a statistically significant signal on the asymmetry of the atmospheric humidity field. However, the asymmetric component of SD is found to be nearly always smaller than the standard deviation of the SD observation error. SD observation modelling is described in detail, and other issues relating to SD data assimilation are also discussed. These include the determination of error statistics, the tuning of observation quality control, and means of taking local observation error correlations into account. The experiments made show that the data assimilation system is able to retrieve the asymmetric information content of hypothetical SD observations at a single receiver station. Moreover, the impact of real SD observations on the humidity analysis is comparable to that of other observing systems.
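The separation-dependent covariance estimation can be pictured with a short sketch: average the products of OmB departures for station pairs, binned by station separation. This is an illustrative stand-in, not the thesis code; the synthetic station coordinates, OmB series and bin edges are assumptions.

```python
# Binned OmB covariance versus station separation (illustrative sketch).
import numpy as np

def omb_covariance_by_separation(omb, xy, bin_edges):
    """omb: (n_times, n_stations) OmB departures; xy: (n_stations, 2) coordinates in km."""
    n_sta = xy.shape[0]
    anom = omb - omb.mean(axis=0, keepdims=True)      # remove per-station bias
    sums = np.zeros(len(bin_edges) - 1)
    counts = np.zeros(len(bin_edges) - 1)
    for i in range(n_sta):
        for j in range(i + 1, n_sta):
            d = np.linalg.norm(xy[i] - xy[j])
            k = np.searchsorted(bin_edges, d) - 1
            if 0 <= k < len(sums):
                # time-mean product of the two stations' departures
                sums[k] += np.nanmean(anom[:, i] * anom[:, j])
                counts[k] += 1
    return np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)

# Example with synthetic data
rng = np.random.default_rng(1)
xy = rng.uniform(0, 500, size=(30, 2))                # 30 stations within 500 km
omb = rng.normal(0, 5, size=(1000, 30))               # 1000 OmB samples (mm)
bins = np.arange(0, 550, 50)
print(omb_covariance_by_separation(omb, xy, bins))
```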
Abstract:
There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. The analysed dataset thus constitutes by far the most comprehensive set of long-term turbulent flux measurements reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single point of measurement. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas with high vegetation cover. Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10^9 m-2 s-1 and 20 µmol m-2 s-1, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the daytime vegetation uptake of CO2 exceeded the anthropogenic sources in the vegetation sector and we observed a downward median flux of 8 µmol m-2 s-1. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces accompanied by numerical modelling are required.
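The eddy covariance technique referred to above estimates a turbulent flux as the time-averaged covariance of the vertical wind fluctuation w' and the scalar fluctuation c'. The sketch below shows only this core step on synthetic 10 Hz data; real processing adds despiking, coordinate rotation and density corrections.

```python
# Minimal eddy covariance flux calculation (illustrative, synthetic data).
import numpy as np

def eddy_covariance_flux(w, c):
    """Flux = mean of (w - mean(w)) * (c - mean(c)) over the averaging period."""
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)

rng = np.random.default_rng(2)
n = 10 * 60 * 30                               # 30 min of 10 Hz data
w = rng.normal(0.0, 0.3, n)                    # vertical wind (m s-1)
co2 = 400.0 + 0.5 * w + rng.normal(0, 0.2, n)  # CO2 signal correlated with w
print("CO2 flux:", eddy_covariance_flux(w, co2))
```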
Abstract:
Single-symbol maximum likelihood (ML) decodable distributed orthogonal space-time block codes (DOSTBCs) have been introduced recently for cooperative networks, and an upper bound on the maximal rate of such codes, along with code constructions, has been presented. In this paper, we introduce a new class of distributed space-time block codes (DSTBCs) called semi-orthogonal precoded distributed single-symbol decodable space-time block codes (Semi-SSD-PDSTBCs), wherein the source performs precoding on the information symbols before transmitting them to all the relays. A set of necessary and sufficient conditions on the relay matrices for the existence of Semi-SSD-PDSTBCs is proved. It is shown that the DOSTBCs are a special case of Semi-SSD-PDSTBCs. A subset of Semi-SSD-PDSTBCs having a diagonal covariance matrix at the destination is studied and an upper bound on the maximal rate of such codes is derived. The bounds obtained are approximately twice those of the DOSTBCs. A systematic construction of Semi-SSD-PDSTBCs is presented for the number of relays K ≥ 4, and the constructed codes are shown to have higher rates than the DOSTBCs.
Abstract:
Man-induced climate change has raised the need to predict the future climate and its feedback to vegetation. These are studied with global climate models; to ensure the reliability of the predictions, it is important to have a biosphere description that is based upon the latest scientific knowledge. This work concentrates on the modelling of the CO2 exchange of the boreal coniferous forest, also studying the factors controlling its growing season and how these can be used in modelling. In addition, the modelling of CO2 gas exchange at several scales was studied. A canopy-level CO2 gas exchange model was developed based on the biochemical photosynthesis model. This model was first parameterized using CO2 exchange data obtained by eddy covariance (EC) measurements from a Scots pine forest at Sodankylä. The results were compared with a semi-empirical model that was also parameterized using EC measurements. Both models gave satisfactory results. The biochemical canopy-level model was further parameterized at three other coniferous forest sites located in Finland and Sweden. At all the sites, the two most important biochemical model parameters showed seasonal behaviour, i.e., their temperature responses changed according to the season. Modelling results were improved when these changeover dates were related to temperature indices. During summertime the values of the biochemical model parameters were similar at all four sites. Different control factors for CO2 gas exchange were studied at the four coniferous forests, including how well these factors can be used to predict the initiation and cessation of CO2 uptake. Temperature indices, atmospheric CO2 concentration, surface albedo and chlorophyll fluorescence (CF) were all found to be useful and to have predictive power. In addition, a detailed simulation study of leaf stomata was performed in order to separate the physical and biochemical processes. The simulation study brought to light the relative contribution and importance of the physical transport processes. The results of this work can be used to improve CO2 gas exchange models for boreal coniferous forests. The meteorological and biological variables that represent the seasonal cycle were studied, and a method for incorporating this cycle into a biochemical canopy-level model was introduced.
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
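The basic quantile residual construction can be sketched briefly: push each observation through the model-implied conditional CDF and then through the inverse standard normal CDF, so that a correctly specified model with consistently estimated parameters yields residuals that are approximately i.i.d. N(0, 1). The two-component Gaussian mixture below is a stand-in example model (chosen because Pearson residuals are unsuitable for mixtures), not a model from the thesis.

```python
# Quantile residuals via the probability integral transform (illustrative sketch).
import numpy as np
from scipy.stats import norm

def quantile_residuals(y, cdf):
    """cdf(y) is the model-implied (conditional) CDF evaluated at each observation."""
    u = cdf(y)              # probability integral transform
    return norm.ppf(u)      # map to the standard normal scale

# Example: mixture-of-normals model
w, mu, sd = np.array([0.7, 0.3]), np.array([0.0, 2.0]), np.array([1.0, 0.5])
mix_cdf = lambda y: sum(wk * norm.cdf(y, mk, sk) for wk, mk, sk in zip(w, mu, sd))

rng = np.random.default_rng(3)
comp = rng.choice(2, size=500, p=w)
y = rng.normal(mu[comp], sd[comp])
r = quantile_residuals(y, mix_cdf)
print(r.mean(), r.std())    # close to 0 and 1 when the model is correct
```

The misspecification tests in Chapters 2-4 are then built on smooth functions, autocorrelations and the empirical distribution function of these residuals.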
Abstract:
Hard Custom, Hard Dance: Social Organisation, (Un)Differentiation and Notions of Power in a Tabiteuean Community, Southern Kiribati is an ethnographic study of a village community. This work analyses social organisation on the island of Tabiteuea in the Micronesian state of Kiribati, examining the intertwining of hierarchical and egalitarian traits, meanwhile bringing a new perspective to scholarly discussions of social differentiation by introducing the concept of undifferentiation to describe non-hierarchical social forms and practices. Particular attention is paid to local ideas concerning symbolic power, abstractly understood as the potency for social reproduction, but also examined in one of its forms: authority, understood as the right to speak. The workings of social differentiation and undifferentiation in the village are specifically studied in two contexts connected by local notions of power: the meetinghouse institution (te maneaba) and traditional dancing (te mwaie). This dissertation is based on 11 months of anthropological fieldwork in 1999–2000 in Kiribati and Fiji, with an emphasis on participant observation and the collection of oral tradition (narratives and songs). The questions are approached through three distinct but interrelated topics: (i) A key narrative of the community, the story of an ancestor without descendants, is presented and discussed, along with other narratives. (ii) The Kiribati meetinghouse institution, te maneaba, is considered in terms of oral tradition as well as present-day practices and customs. (iii) Kiribati dancing (te mwaie) is examined through a discussion of competing dance groups, followed by an extended case study of four dance events. In the course of this work the community of close to four hundred inhabitants is depicted as constructed primarily of clans and households, but also of churches, work co-operatives and dance groups, and as a significant and valued social unit in itself and a part of the wider island district. In these partly cross-cutting and overlapping social matrices, people are alternately organised by the distinct values and logic of differentiation and undifferentiation. At different levels of social integration and in different modes of social and discursive practice, there are heightened moments of differentiation, followed by active undifferentiation. The central notions concerning power and authority to emerge are, firstly, that in order to be valued and utilised, power needs to be controlled. Secondly, power is not allowed to centralize in the hands of one person or group for any long period of time. Thirdly, out of the permanent reach of people, power/authority is always, on the one hand, left outside the factual community and, on the other, vested in the community, the social whole. Several forms of differentiation and undifferentiation emerge, but these appear to be systematically related. Social differentiation building on typically Austronesian complementary differences (such as male:female, elder:younger, autochthonous:allochthonous) is valued, even if eventually restricted, whereas differentiation based on non-complementary differences (such as monetary wealth or level of education) is generally resisted and/or is subsumed by the complementary distinctions. The concomitant forms of undifferentiation are likewise hierarchically organised. On the level of the society as a whole, undifferentiation means circumscribing and ultimately withholding social hierarchy. Potential hierarchy is based on a combination of valued complementary differences between social groups and individuals, but it is also limited by the undoing of these differences; for example, in the dissolution of seniority (elder-younger) and gender (male-female) into sameness. Like the suspension of hierarchy, undifferentiation as transformation requires the recognition of pre-existing difference and does not mean devaluing the difference. This form of undifferentiation is ultimately encompassed by the first one, as the processes of differentiation, whether transformed or not, are always halted. Finally, undifferentiation can mean the prevention of non-complementary differences between social groups or individuals. This form of undifferentiation, like the differentiation it works on, takes place on a lower level of societal ideology, as both the differences and their prevention are always encompassed by the complementary differences and their undoing. It is concluded that Southern Kiribati society can be seen as a combination of a severely limited and decentralised hierarchy (differentiation) and a tightly conditional and contextual (intra-category) equality (undifferentiation), and that it is distinctly characterised by an enduring tension between these contradictory social forms and cultural notions. With reference to the local notion of hardness used to characterise custom on this particular island as well as dance in general, it is argued in this work that in this Tabiteuean community some forms of differentiation are valued though strictly delimited or even undone, whereas other forms of differentiation are perceived as a threat to the community, necessitating the pre-emptive imposition of undifferentiation. Power, though sought after and displayed, particularly in dancing, must always remain controlled.
Abstract:
This is an ethnographic study of the lived worlds of the keepers of small shops in a residential neighborhood in Seoul, South Korea. It outlines, discusses, and analyses the categories and conceptualizations of South Korean capitalism at the level of households, neighborhoods, and Korean society. These cultural categories were investigated through the neighborhood shopkeepers' practices of work and reciprocal interaction as well as through the shopkeepers' articulations of their lived experience. In South Korea, the keepers of small businesses have continued to be a large occupational category despite societal and economic changes, making up approximately one fourth of the active workforce. In spite of that, these people, their livelihoods and their cultural and social worlds have rarely been the focus of social science inquiry. The ethnographic field research for this study was conducted during a 14-month period between November 1998 and December 1999 and in three subsequent short visits to Korea and to the research neighborhood. The fieldwork was conducted during the aftermath of the Asian currency crisis, colloquially termed at the time the IMF crisis, which highlighted the social and cultural circumstances of small businesskeepers in a specific way. The livelihoods of small-scale entrepreneurs became even more precarious than before; self-employment became an involuntary choice for many middle-class salaried employees who were laid off; and the cultural categories and concepts of society and economy, of South Korean capitalism, were articulated more sharply than before. This study begins with an overview of the contemporary setting, the Korean society under the socially and economically painful outcomes of the economic crisis, and continues with an overview of relevant literature. After introducing the research area and the informants, I discuss the Korean notion of the neighborhood, which incorporates both the notion of culturally valued Koreanness and deficiency in the sense of modernity and development. This study further analyses the ways in which the businesskeepers appropriate and reproduce the Korean ideas of men's and women's gender roles and spheres of work. As the appropriation of children's labor is conditional on intergenerational family trajectories, which aim not to reproduce the parents' occupational status but to gain entry to salaried occupations via educational credentials, the work of a married couple is the most common organization of work in small businesses, to which the Korean ideas of family and kin continuity are not applied. While the lack of generational businesskeeping succession suggests that the proprietors mainly subscribe to the notions of familial status that emanate from the practices of the white-collar middle class, the cases of certain women shopkeepers show that their proprietorship and the ensuing economic standing in the family prompts and invites inverted interpretations and uses of common cultural notions of gender. After discussing and analyzing the concept of money and the cultural categorization of leisure and work, topics that emerged as very significant in the lived world of the shopkeepers, this study charts and analyses the categories of identification which the shopkeepers employ for their cultural and social locations and identities. Particular attention is paid to the idea of ordinary people (seomin), which shopkeepers are commonly considered to be most representative of, and which also sums up the ambivalence of neighborhood shopkeepers as a social category: they are not committed to familial reproduction and continuity of the business but aspire to non-entrepreneurial careers for their children, while they occupy a significant position in the elaborations of culturally valued notions and ideologies defining Koreanness, such as warmheartedness and sociability.
Abstract:
Interaction between forests and the atmosphere occurs by radiative and turbulent transport. The fluxes of energy and mass between the surface and the atmosphere directly influence the properties of the lower atmosphere and, over longer time scales, the global climate. Boreal forest ecosystems are central to the global climate system and its responses to human activities, because they are significant sources and sinks of greenhouse gases and of aerosol particles. The aim of the present work was to improve our understanding of the interplay between the biologically active canopy, the microenvironment and the turbulent flow, and in particular to quantify the contribution of different canopy layers to whole-forest fluxes. For this purpose, long-term micrometeorological and ecological measurements made in a Scots pine (Pinus sylvestris) forest at the SMEAR II research station in Southern Finland were used. The properties of turbulent flow are strongly modified by the interaction between the canopy elements: momentum is efficiently absorbed in the upper layers of the canopy, mean wind speed and turbulence intensities decrease rapidly towards the forest floor, and the power spectra are modulated by the 'spectral short-cut'. In the relatively open forest, diabatic stability above the canopy explained much of the changes in velocity statistics within the canopy, except in strongly stable stratification. Large eddies, ranging from tens to hundreds of meters in size, were responsible for the major fraction of turbulent transport between the forest and the atmosphere. Because of this, the eddy-covariance (EC) method proved to be successful for measuring energy and mass exchange inside the forest canopy, with the exception of strongly stable conditions. Vertical variations of the within-canopy microclimate, light attenuation in particular, strongly affect the assimilation and transpiration rates. According to model simulations, the assimilation rate decreases with height more rapidly than stomatal conductance (gs) and transpiration and, consequently, the vertical source-sink distributions for carbon dioxide (CO2) and water vapor (H2O) diverge. Upscaling from the shoot scale to the canopy scale was found to be sensitive to the chosen stomatal control description. The upscaled canopy-level CO2 fluxes can vary by as much as 15% and H2O fluxes by 30% even if the gs models are calibrated against the same leaf-level dataset. A pine forest has distinct overstory and understory layers, which both contribute significantly to canopy-scale fluxes. The forest floor vegetation and soil accounted for between 18 and 25% of evapotranspiration and between 10 and 20% of sensible heat exchange. The forest floor was also an important deposition surface for aerosol particles; between 10 and 35% of the dry deposition of particles in the size range 10-30 nm occurred there. Because of the northern latitude, the seasonal cycle of climatic factors strongly influences the surface fluxes. Besides the seasonal constraints, the partitioning of available energy into sensible and latent heat depends, through stomatal control, on the physiological state of the vegetation. In spring, available energy is consumed mainly as sensible heat, and the latent heat flux peaked about two months later, in July-August. On the other hand, annual evapotranspiration remains rather stable over a range of environmental conditions, and thus any increase in accumulated radiation affects primarily the sensible heat exchange. Finally, autumn temperature had a strong effect on ecosystem respiration, but its influence on photosynthetic CO2 uptake was restricted by low radiation levels. Therefore, the projected autumn warming in the coming decades will presumably reduce the positive effects of earlier spring recovery in terms of the carbon uptake potential of boreal forests.
Abstract:
We explore the application of pseudo time marching schemes, involving either deterministic integration or stochastic filtering, to solve the inverse problem of parameter identification of large dimensional structural systems from partial and noisy measurements of strictly static response. Solutions of such non-linear inverse problems could provide useful local stiffness variations and do not have to confront modeling uncertainties in damping, an important, yet inadequately understood, aspect in dynamic system identification problems. The usual method of least-squares solution is through a regularized Gauss-Newton method (GNM), whose results are known to be sensitively dependent on the regularization parameter and data noise intensity. Finite-time, recursive integration of the pseudo-dynamical GNM (PD-GNM) update equation addresses the major numerical difficulty associated with the near-zero singular values of the linearized operator and gives results that are not sensitive to the time step of integration. We also propose a pseudo-dynamic stochastic filtering approach for the same problem using a parsimonious representation of states and specifically solve the linearized filtering equations through a pseudo-dynamic ensemble Kalman filter (PD-EnKF). For multiple sets of measurements involving various load cases, we expedite the PD-EnKF by proposing an inner iteration within every time step. Results using the pseudo-dynamic strategy obtained through the PD-EnKF and recursive integration are compared with those from the conventional GNM, and show that the PD-EnKF is the best performer, exhibiting little sensitivity to the process noise covariance and yielding reconstructions with fewer artifacts even when the ensemble size is small.
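The kind of update the pseudo-dynamic strategy iterates over pseudo-time can be pictured with a schematic ensemble Kalman filter step: the stiffness-like parameters are carried as the ensemble "state" and repeatedly conditioned on the measured static response. The linear stand-in forward model H, the dimensions and the noise levels below are assumptions for illustration, not the structural model or the exact PD-EnKF recursion of the paper.

```python
# Schematic ensemble Kalman filter update for static parameter identification.
import numpy as np

rng = np.random.default_rng(4)
n_par, n_obs, n_ens = 8, 5, 50
H = rng.normal(size=(n_obs, n_par))               # stand-in forward/observation operator
theta_true = rng.uniform(0.5, 1.5, n_par)
R = 0.01 * np.eye(n_obs)                          # measurement noise covariance
y = H @ theta_true + rng.multivariate_normal(np.zeros(n_obs), R)

ens = rng.uniform(0.2, 2.0, size=(n_par, n_ens))  # initial parameter ensemble
for step in range(20):                            # pseudo-time iterations
    pred = H @ ens                                # predicted measurements per member
    A = ens - ens.mean(axis=1, keepdims=True)
    Y = pred - pred.mean(axis=1, keepdims=True)
    P_xy = A @ Y.T / (n_ens - 1)                  # parameter-observation cross-covariance
    P_yy = Y @ Y.T / (n_ens - 1) + R
    K = P_xy @ np.linalg.inv(P_yy)                # Kalman gain
    perturbed_y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    ens = ens + K @ (perturbed_y - pred)          # analysis update of each member

print("ensemble-mean parameter estimate:", ens.mean(axis=1))
```

In the paper the forward map is the (nonlinear) static structural response and the pseudo-time recursion, process noise and inner iterations are designed to keep the ensemble informative; the sketch above only shows the covariance-based update that drives it.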
Abstract:
One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, one of the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with 9 different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. By allowing the kurtosis and skewness to be time-varying, the density forecasts are not further improved but are, on the contrary, made slightly worse. In Essay 3 a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts for both 1% and 5% VaR. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
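The GARCH(1,1)-with-leptokurtic-errors setup compared in Essay 1 can be sketched as a variance recursion plus a Student-t likelihood. This is a generic illustration with simulated returns, not the dissertation's estimation code; the starting values and optimizer choice are assumptions.

```python
# GARCH(1,1) with Student-t errors: variance recursion and ML estimation sketch.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def garch_filter(returns, omega, alpha, beta):
    """Recursively build the conditional variance series."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for i in range(1, len(returns)):
        sigma2[i] = omega + alpha * returns[i - 1] ** 2 + beta * sigma2[i - 1]
    return sigma2

def neg_loglik(params, returns):
    omega, alpha, beta, nu = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1 or nu <= 2:
        return np.inf                              # reject invalid parameter values
    sigma2 = garch_filter(returns, omega, alpha, beta)
    scale = np.sqrt(sigma2 * (nu - 2) / nu)        # so the error variance equals sigma2
    return -np.sum(student_t.logpdf(returns, df=nu, scale=scale))

rng = np.random.default_rng(5)
r = rng.standard_t(df=6, size=2000) * 0.01         # stand-in daily returns
res = minimize(neg_loglik, x0=[1e-5, 0.05, 0.9, 8.0], args=(r,), method="Nelder-Mead")
print(res.x)                                        # omega, alpha, beta, nu
```

The ex post realized-variance benchmark mentioned in the abstract is then built from intraday data, against which such forecasts are scored.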
Abstract:
The increased availability of high-frequency data sets has led to important new insights in the understanding of financial markets. The use of high-frequency data is interesting and persuasive, since it can reveal new information that cannot be seen at lower levels of data aggregation. This dissertation explores some of the many important issues connected with the use, analysis and application of high-frequency data. These include the effects of intraday seasonality, the behaviour of time-varying volatility, the information content of various market data, and the issue of inter-market linkages, utilizing high-frequency 5-minute observations from major European and U.S. stock indices, namely the DAX30 of Germany, the CAC40 of France, the SMI of Switzerland, the FTSE100 of the UK and the SP500 of the U.S. The first essay in the dissertation shows that there are remarkable similarities in the intraday behaviour of conditional volatility across European equity markets. Moreover, U.S. macroeconomic news announcements have a significant cross-border effect on both European equity returns and volatilities. The second essay reports substantial intraday return and volatility linkages across the European stock indices of the UK and Germany. This relationship appears virtually unchanged by the presence or absence of the U.S. stock market. However, the return correlation between the UK and German markets rises significantly following the U.S. stock market opening, which could largely be described as a contemporaneous effect. The third essay sheds light on market microstructure issues in which traders and market makers learn from watching market data, and it is this learning process that leads to price adjustments. This study concludes that trading volume plays an important role in explaining international return and volatility transmissions. The examination of asymmetry reveals that the impact of positive volume changes on foreign stock market volatility is larger than that of negative changes. The fourth and final essay documents a number of regularities in the pattern of intraday return volatility, trading volume and bid-ask spreads. This study also reports a contemporaneous and positive relationship between intraday return volatility, bid-ask spreads and unexpected trading volume. These results verify the role of trading volume and bid-ask quotes as proxies for information arrival in producing contemporaneous and subsequent intraday return volatility. Moreover, an asymmetric effect of trading volume on conditional volatility is also confirmed. Overall, this dissertation explores the role of information in explaining the intraday return and volatility dynamics in international stock markets. The process through which information is incorporated into stock prices is central to all information-based models. Intraday data facilitate the investigation of how information gets incorporated into security prices as a result of the trading behavior of informed and uninformed traders. Thus high-frequency data appear critical in enhancing our understanding of the intraday behavior of various stock market variables, as they have important implications for market participants, regulators and academic researchers.
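One of the intraday regularities mentioned above, the intraday seasonality of volatility, is often summarised by averaging absolute 5-minute returns over the same time-of-day bin across days. The sketch below uses a synthetic return matrix; with real data each row would be one trading day of 5-minute returns.

```python
# Intraday volatility seasonality profile from 5-minute returns (illustrative).
import numpy as np

rng = np.random.default_rng(6)
n_days, n_bins = 250, 78                         # 78 five-minute bars per trading day
# U-shaped intraday volatility pattern imposed on the simulated returns
pattern = 1.0 + 0.8 * np.cos(np.linspace(0, 2 * np.pi, n_bins)) ** 2
returns = rng.normal(0, 0.001, (n_days, n_bins)) * pattern

seasonal_profile = np.abs(returns).mean(axis=0)  # mean |return| per 5-minute bin
deseasonalized = returns / seasonal_profile      # one common deseasonalization step
print(seasonal_profile[:5])
```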
Abstract:
Modeling and forecasting of implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require an accurate volatility estimate. However, this has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options present two patterns, the volatility smirk (skew) and the volatility term structure, which, examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS. It extends the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put and call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variations in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS. Of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, which generalizes the Hibbert et al. (2008) model. The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions; it increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%), so OLS underestimates this relationship at the upper quantiles. Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are then backtested over 1992-2009 using unconditional, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all information required for the daily VaR forecasts for a portfolio of the DAX30 index; the implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; however, surprisingly, GBP does not affect them. Second, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
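The factor decompositions reported above (level, term-structure and a third factor explaining most of the IVS or swaption IV variance) are of the principal-component type. The sketch below shows the generic calculation on a simulated IV panel; with real data the rows would be dates and the columns IVS grid points, and the factor labels would come from inspecting the loadings. It is an illustration, not the thesis' estimation.

```python
# Principal-component decomposition of an implied volatility panel (illustrative).
import numpy as np

rng = np.random.default_rng(7)
n_dates, n_points = 500, 40
level = rng.normal(0.2, 0.05, (n_dates, 1))       # common level factor
slope = rng.normal(0.0, 0.02, (n_dates, 1))       # term-structure-like factor
grid = np.linspace(-1, 1, n_points)
iv = level + slope * grid + rng.normal(0, 0.005, (n_dates, n_points))

X = iv - iv.mean(axis=0)                          # de-mean each grid point
_, s, _ = np.linalg.svd(X, full_matrices=False)   # singular values give factor variances
explained = s ** 2 / np.sum(s ** 2)
print("variance explained by first 3 factors:", explained[:3].sum())
```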
Abstract:
Financial time series tend to behave in a manner that is not well described by a normal distribution. Asymmetries and nonlinearities are usually observed, and these characteristics need to be taken into account. Making forecasts and predictions of future returns and risk is therefore rather complicated. The existing models for predicting risk help to a certain degree, but the complexity of financial time series data makes this difficult. The essays in this dissertation support the introduction of nonlinearities and asymmetries for the purpose of better models and forecasts of both mean and variance. Linear and nonlinear models are consequently introduced. The advantage of nonlinear models is that they can take asymmetries into account. Asymmetric patterns usually mean that large negative returns appear more often than positive returns of the same magnitude. This goes hand in hand with the fact that negative returns are associated with higher risk than positive returns of the same magnitude. These models are important because of their ability to provide the best possible estimates and predictions of future returns and risk.
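One common way to encode the asymmetry described above is a threshold term in the conditional variance recursion, as in a GJR-GARCH(1,1)-type specification, named here as one familiar example rather than as this dissertation's own model: negative shocks raise next-period variance more than positive shocks of equal size. The parameter values and simulated returns below are illustrative.

```python
# GJR-GARCH(1,1)-style variance recursion with a leverage (asymmetry) term.
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """Conditional variance with an extra 'gamma' loading on negative shocks."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for i in range(1, len(returns)):
        neg = 1.0 if returns[i - 1] < 0 else 0.0
        sigma2[i] = (omega
                     + (alpha + gamma * neg) * returns[i - 1] ** 2
                     + beta * sigma2[i - 1])
    return sigma2

rng = np.random.default_rng(8)
r = rng.normal(0, 0.01, 1000)                     # stand-in daily returns
sig2 = gjr_garch_variance(r, omega=1e-6, alpha=0.03, gamma=0.08, beta=0.9)
print(sig2[-1])
```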