33 results for Mathematics (all)
Abstract:
The research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AECs), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of a weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of equal generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.
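For reference, the underlying AEC framework can be sketched as follows (the standard axioms as given by Shelah; the finitary classes of the thesis add finite character and the other listed properties on top of these):

```latex
% Standard AEC axioms (Shelah, 1980s), sketched.
A pair $(\mathcal{K}, \preccurlyeq_{\mathcal{K}})$, where $\mathcal{K}$ is a class of
structures in a fixed vocabulary, is an abstract elementary class if:
(1) $\mathcal{K}$ and $\preccurlyeq_{\mathcal{K}}$ are closed under isomorphism;
(2) $M \preccurlyeq_{\mathcal{K}} N$ implies that $M$ is a substructure of $N$;
(3) (coherence) if $M_0 \subseteq M_1$ and $M_0, M_1 \preccurlyeq_{\mathcal{K}} M_2$,
    then $M_0 \preccurlyeq_{\mathcal{K}} M_1$;
(4) (Tarski--Vaught chains) the union of a $\preccurlyeq_{\mathcal{K}}$-increasing
    chain lies in $\mathcal{K}$, extends each member of the chain, and is a
    $\preccurlyeq_{\mathcal{K}}$-submodel of any common extension of the chain;
(5) (L\"owenheim--Skolem) there is a cardinal $\mathrm{LS}(\mathcal{K})$ such that
    every $A \subseteq M \in \mathcal{K}$ is contained in some
    $M_0 \preccurlyeq_{\mathcal{K}} M$ with $|M_0| \leq |A| + \mathrm{LS}(\mathcal{K})$.
```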
Abstract:
Frictions are factors that hinder the trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and, consequently, that the frictionless models are valid approximations. This dissertation consists of three research papers related to the study of the validity of such approximations in two distinct modeling problems. Models of price dynamics based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless scenario. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market where finitely many agents trade a financial security, the price of which evolves according to price impacts generated by trades. It is shown that, if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers are related to no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics. Guasoni, Rásonyi, and Schachermayer have recently shown that, in such a setting, a financial security admits no arbitrage opportunities, and there exist no feasible superhedging strategies for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, CFS is established for certain stochastic integrals and a subclass of Brownian semistationary processes in the two papers. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
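For reference, the CFS property invoked here is usually stated as follows (a sketch of the standard definition, with notation assumed rather than taken from the dissertation):

```latex
% Conditional full support (CFS), sketched.
A continuous, adapted, positive process $(S_t)_{t \in [0,T]}$ has conditional
full support if, for every $t \in [0,T)$, almost surely
\[
  \operatorname{supp}\,\mathbb{P}\bigl((S_u)_{u \in [t,T]} \in \cdot \mid \mathcal{F}_t\bigr)
  = C^{+}_{S_t}([t,T]),
\]
where $C^{+}_{x}([t,T])$ is the set of positive continuous functions
$f \colon [t,T] \to \mathbb{R}$ with $f(t) = x$, and the support is taken in the
uniform topology.
```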
Abstract:
This thesis addresses modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis, asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships of the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model as well as an expression for the autocorrelation function are obtained by writing the MCARR model as a first order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the volatility forecasts derived is examined.
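To make the nesting concrete, here is the generic first-order MEM in its standard textbook form (a sketch; notation assumed, not quoted from the thesis):

```latex
% Generic multiplicative error model, MEM(1,1).
\[
  x_t = \mu_t \varepsilon_t, \qquad
  \mu_t = \omega + \alpha x_{t-1} + \beta \mu_{t-1}, \qquad
  \varepsilon_t \overset{\mathrm{iid}}{\sim} D^{+}, \quad \mathbb{E}[\varepsilon_t] = 1,
\]
```

where $x_t \ge 0$ and $D^{+}$ is a non-negative distribution with unit mean. Taking $x_t$ to be squared returns recovers the GARCH(1,1) conditional variance recursion, taking $x_t$ to be durations between trades gives the ACD model, and taking $x_t$ to be the daily price range gives the CARR model.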
Abstract:
Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces, we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the suprema of norms of local averages (i.e. their uniform bound in the operator norm) have to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the “RMF-property”. It was shown that other objects and parameters appearing in the definition, such as the domain of the functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces, we study the Rademacher maximal function on Euclidean spaces. The requirements on the type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p greater than or equal to 2 (when it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions. As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space, and that it is enough to consider very simple ones known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions, giving yet another proof of its independence from various parameters.
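For concreteness, the dyadic Euclidean definition can be sketched as follows (our paraphrase of the construction described above; details as in Hytönen, McIntosh and Portal):

```latex
% Rademacher maximal function, dyadic Euclidean sketch.
\[
  M_R f(x) := \mathcal{R}\bigl(\{\langle f\rangle_Q : Q \in \mathcal{D},\ x \in Q\}\bigr),
  \qquad \langle f\rangle_Q := \frac{1}{|Q|}\int_Q f,
\]
```

where $f$ is locally integrable with values in a Banach space $X$, $\mathcal{D}$ denotes the dyadic cubes, each average $\langle f\rangle_Q$ is viewed as the rank-one operator $\lambda \mapsto \lambda\langle f\rangle_Q$ from the scalar field to $X$, and $\mathcal{R}(\cdot)$ is the R-bound of this family. The RMF-property for $X$ then amounts to the inequality $\|M_R f\|_{L^p} \lesssim \|f\|_{L^p}$.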
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
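Since the defining property of quantile residuals is central here, a minimal illustrative sketch may help: for an observation $y_t$ with conditional model CDF $F$, the quantile residual is $r_t = \Phi^{-1}(F(y_t))$. The code below (with hypothetical parameter values, not taken from the thesis) checks this for a two-component Gaussian mixture, a case where Pearson residuals would be inadequate:

```python
# Illustrative sketch (not the thesis code): quantile residuals for a
# two-component Gaussian mixture with known (fitted) parameters.
# The quantile residual is r_t = Phi^{-1}(F(y_t)): the inverse standard
# normal CDF applied to the model CDF evaluated at the data.
import numpy as np
from scipy.stats import norm

# Hypothetical fitted mixture parameters (weights, means, std devs).
w = np.array([0.7, 0.3])
mu = np.array([0.0, 0.0])
sigma = np.array([1.0, 3.0])

def mixture_cdf(y):
    """CDF of the Gaussian mixture at each observation y."""
    return sum(w_i * norm.cdf(y, m_i, s_i) for w_i, m_i, s_i in zip(w, mu, sigma))

rng = np.random.default_rng(0)
# Simulate data from the mixture itself, so the model is correctly specified.
comp = rng.choice(2, size=1000, p=w)
y = rng.normal(mu[comp], sigma[comp])

# Under correct specification, quantile residuals are approximately iid N(0, 1).
r = norm.ppf(mixture_cdf(y))
print(f"mean {r.mean():+.3f}, std {r.std():.3f}")  # close to 0 and 1
```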
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous, so logistic-type assisting models (e.g. the logistic or probit model) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG, especially if the domain sample size is small or if the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
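For orientation, the design-based GREG estimator of a domain class frequency has the following standard form (a sketch in Särndal-style notation, assumed rather than quoted from the thesis):

```latex
% GREG estimator of the frequency of a class in domain d (standard form).
\[
  \hat{N}_d = \sum_{k \in U_d} \hat{y}_k
            + \sum_{k \in s_d} \frac{y_k - \hat{y}_k}{\pi_k},
\]
```

where $U_d$ is the domain population, $s_d$ the domain sample, $\pi_k$ the inclusion probabilities, and $\hat{y}_k$ the fitted values of the assisting model: $\hat{y}_k = \mathbf{x}_k'\hat{\boldsymbol\beta}$ for GREG-lin, and $\hat{y}_k = \exp(\mathbf{x}_k'\hat{\boldsymbol\beta})/(1 + \exp(\mathbf{x}_k'\hat{\boldsymbol\beta}))$ for the logistic L-GREG.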
Abstract:
In Finland, the suicide mortality trend has been decreasing during the last decade and a half, yet suicide was the fourth most common cause of death among both Finnish men and women aged 15–64 years in 2006. However, suicide does not occur equally among population sub-groups. Two notable social factors that place people at different risk of suicide are socioeconomic and employment status: those with low education, those employed in manual occupations, those with low income and the unemployed have been found to have an elevated suicide risk. The purpose of this study was to provide a systematic analysis of these social differences in suicide mortality in Finland. Besides studying socioeconomic trends and differences in suicide according to age and sex, different indicators of socioeconomic status were used simultaneously, taking account of their pathways and mutual associations while also paying attention to the confounding and mediatory effects of living arrangements and employment status. Register data obtained from Statistics Finland were used in this study. In some analyses suicides were divided into two groups according to contributory causes of death: the first group consisted of suicide deaths that had alcohol intoxication as one of the contributory causes, and the other comprised all other suicide deaths. Methods included Poisson and Cox regression models. Despite the decreasing trend in suicide mortality, social differences still exist. Low occupation-based social class proved to be an important determinant of suicide risk among both men and women, but the strong independent effect of education on alcohol-associated suicide indicates that the roots of these differences are probably established in early adulthood, when educational qualifications are obtained and health-behavioural patterns are set. High relative suicide mortality among the unemployed during times of economic boom suggests that selective processes may be responsible for some of the employment-status differences in suicide. However, long-term unemployment seems to have causal effects on suicide, which, especially among men, partly stem from low income. In conclusion, the results of this study suggest that education, occupation-based social class and employment status have causal effects on suicide risk, but to some extent selection into low education and unemployment is also involved in explaining excess suicide mortality among the socially deprived. It is also conceivable that alcohol use to some extent underlies social differences in suicide. In addition to those with low education, manual workers and the unemployed, young people, whose health-related behaviours are still being formed, would most probably benefit from suicide prevention programmes.
Abstract:
In this thesis I examine the U.S. foreign policy discussion that followed the war between Russia and Georgia in August 2008. In the politically charged setting that preceded the presidential elections, the subject of the debate was not only Washington's response to the crisis in the Caucasus but, more generally, the direction of U.S. foreign policy after the presidency of George W. Bush. As of November 2010, the reasons for and consequences of the Russia-Georgia war continue to be contested. My thesis demonstrates that there were already a number of different stories about the conflict immediately after the outbreak of hostilities. I want to argue that among these stories one can discern a “neoconservative narrative” that described the war as a confrontation between the East and the West and considered it as a test for Washington’s global leadership. I draw on the theory of securitization, particularly on a framework introduced by Holger Stritzel. Accordingly, I consider statements about the conflict as “threat texts” and analyze these based on the existing discursive context, the performative force of the threat texts and the positional power of the actors presenting them. My thesis suggests that a notion of narrativity can complement Stritzel’s securitization framework and take it further. Threat texts are established as narratives by attaching causal connections, meaning and actorship to the discourse. By focusing on this process I want to shed light on the relationship between the text and the context, capture the time dimension of a speech act articulation and help to explain how some interpretations of the conflict are privileged and others marginalized. I develop the theoretical discussion through an empirical analysis of the neoconservative narrative. Drawing on Stritzel’s framework, I argue that the internal logic of the narrative which was presented as self-evident can be analyzed in its historicity. Asking what was perceived to be at stake in the conflict, how the narrative was formed and what purposes it served also reveals the possibility for alternative explanations. My main source material consists of transcripts of think tank seminars organized in Washington, D.C. in August 2008. In addition, I resort to the foreign policy discussion in the mainstream media.
Abstract:
"We report on a search for the standard-model Higgs boson in pp collisions at s=1.96 TeV using an integrated luminosity of 2.0 fb(-1). We look for production of the Higgs boson decaying to a pair of bottom quarks in association with a vector boson V (W or Z) decaying to quarks, resulting in a four-jet final state. Two of the jets are required to have secondary vertices consistent with B-hadron decays. We set the first 95% confidence level upper limit on the VH production cross section with V(-> qq/qq('))H(-> bb) decay for Higgs boson masses of 100-150 GeV/c(2) using data from run II at the Fermilab Tevatron. For m(H)=120 GeV/c(2), we exclude cross sections larger than 38 times the standard-model prediction."
Abstract:
We present a measurement of the top quark mass and of the top-antitop pair production cross section using $p\bar{p}$ data collected with the CDF II detector at the Tevatron Collider at the Fermi National Accelerator Laboratory, corresponding to an integrated luminosity of 2.9 fb$^{-1}$. We select events with six or more jets satisfying a number of kinematical requirements imposed by means of a neural network algorithm. At least one of these jets must originate from a $b$ quark, as identified by the reconstruction of a secondary vertex inside the jet. The mass measurement is based on a likelihood fit incorporating reconstructed mass distributions representative of signal and background, where the absolute jet energy scale (JES) is measured simultaneously with the top quark mass. The measurement yields a value of $174.8 \pm 2.4\,(\text{stat+JES})\,^{+1.2}_{-1.0}\,(\text{syst})$ GeV/$c^2$, where the uncertainty from the absolute jet energy scale is evaluated together with the statistical uncertainty. The procedure also measures the amount of signal, from which we derive a cross section $\sigma_{t\bar{t}} = 7.2 \pm 0.5\,(\text{stat}) \pm 1.0\,(\text{syst}) \pm 0.4\,(\text{lum})$ pb for the measured values of the top quark mass and JES.
Abstract:
We present a measurement of the top quark mass in the all-hadronic channel ($t\bar{t} \to b\bar{b}\,q_{1}\bar{q}_{2}q_{3}\bar{q}_{4}$) using 943 pb$^{-1}$ of $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV collected with the CDF II detector at Fermilab. We apply the standard model production and decay matrix element (ME) to $t\bar{t}$ candidate events. We calculate per-event probability densities according to the ME calculation and construct template models of signal and background. The scale of the jet energy is calibrated using additional templates formed with the invariant mass of pairs of jets. These templates form an overall likelihood function that depends on the top quark mass and on the jet energy scale (JES). We estimate both by maximizing this function. Given 72 observed events, we measure a top quark mass of $171.1 \pm 3.7\,(\text{stat.+JES}) \pm 2.1\,(\text{syst.})$ GeV/$c^{2}$. The combined uncertainty on the top quark mass is 4.3 GeV/$c^{2}$.
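As a purely illustrative sketch of the kind of procedure described above (a simultaneous fit of the top quark mass and the JES to binned templates), the toy below builds hypothetical templates and maximizes a Poisson likelihood with a Gaussian JES constraint; none of the shapes or numbers are CDF's:

```python
# Toy two-parameter template likelihood fit, in the spirit of the method
# described in the abstract. All templates and numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

bins = np.linspace(100, 250, 31)          # reconstructed-mass bins (GeV/c^2)
centers = 0.5 * (bins[:-1] + bins[1:])

def expected(m_top, jes, n_sig=60.0, n_bkg=40.0):
    """Expected counts per bin: a Gaussian signal template whose peak
    shifts with the top mass and the JES, plus a flat background."""
    sig_shape = norm.pdf(centers, loc=m_top * jes, scale=15.0)
    sig = n_sig * sig_shape / sig_shape.sum()
    bkg = np.full_like(centers, n_bkg / len(centers))
    return sig + bkg

rng = np.random.default_rng(1)
data = rng.poisson(expected(m_top=171.0, jes=1.0))   # pseudo-data

def nll(params):
    """Poisson negative log-likelihood, plus a Gaussian constraint on the
    JES from the dijet-mass calibration (here 1.00 +- 0.02, hypothetical)."""
    m_top, jes = params
    mu = expected(m_top, jes)
    return np.sum(mu - data * np.log(mu)) + 0.5 * ((jes - 1.0) / 0.02) ** 2

fit = minimize(nll, x0=[175.0, 1.0], method="Nelder-Mead")
print(f"m_top = {fit.x[0]:.1f} GeV/c^2, JES = {fit.x[1]:.3f}")
```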
Abstract:
Self-similarity, a concept taken from mathematics, is gradually becoming a keyword in musicology. Although a polysemic term, self-similarity often refers to the multi-scalar feature repetition in a set of relationships, and it is commonly valued as an indication of musical coherence and consistency. This investigation provides a theory of musical meaning formation in the context of intersemiosis, that is, the translation of meaning from one cognitive domain to another cognitive domain (e.g. from mathematics to music, or to speech or graphic forms). From this perspective, the degree of coherence of a musical system relies on a synecdochic intersemiosis: a system of related signs within other comparable and correlated systems. This research analyzes the modalities of such correlations, exploring their general and particular traits, and their operational bounds. In this direction, the notion of analogy is used as a rich concept through its two definitions quoted in the Classical literature, proportion and paradigm, which are enormously valuable in establishing criteria of measurement, likeness and affinity. Using quantitative and qualitative methods, evidence is presented to justify a parallel study of different modalities of musical self-similarity. For this purpose, original arguments by Benoît B. Mandelbrot are revisited, alongside a systematic critique of the literature on the subject. Furthermore, connecting Charles S. Peirce's synechism with Mandelbrot's fractality is one of the main developments of the present study. This study provides elements for explaining Bolognesi's (1983) conjecture, which states that the most primitive, intuitive and basic musical device is self-reference, extending its functions and operations to self-similar surfaces. In this sense, this research suggests that, with various modalities of self-similarity, synecdochic intersemiosis acts as a system of systems in coordination with greater or lesser development of structural consistency, and with a greater or lesser contextual dependence.
Abstract:
The aim of the dissertation is to explore the idea of philosophy as a path to happiness in classical Arabic philosophy. The starting point is a comparison of two distinct currents between the 10th and early 11th centuries: Peripatetic philosophy, represented by al-Fārābī and Ibn Sīnā, and Ismaili philosophy, represented by al-Kirmānī and the Brethren of Purity. They initially offer two contrasting views of philosophy, in that the attitude of the Peripatetics is rationalistic and secular in spirit, whereas for the Ismailis philosophy represents the esoteric truth behind revelation. Still, they converge in their view that the ultimate purpose of philosophy lies in its ability to lead man towards happiness. Moreover, they share a common concept of happiness as a contemplative ideal of human perfection, which refers primarily to an otherworldly state of the soul's ascent to the spiritual world. For both, the way to happiness consists of two parts: theory and practice. The practical part manifests itself in the idea of the purification of the rational soul from its bodily attachments in order for it to direct its attention fully to the contemplative life. Hence, there appears an ideal of philosophical life with the goal of relative detachment from worldly life. The regulations of the religious law in this context appear as the primary means for the soul's purification, but for all but al-Kirmānī they are complemented by auxiliary philosophical practices. The ascent to happiness, however, takes place primarily through the acquisition of theoretical knowledge. The saving knowledge consists primarily of the conception of the hierarchy of physical and metaphysical reality, but all of philosophy forms a curriculum through which the soul gradually ascends towards a spiritual state of being, along an order that is inverse to the Neoplatonic emanationist hierarchy of creation. For Ismaili philosophy the ascent takes place from the exoteric religious sciences towards esoteric philosophical knowledge. For the Peripatetic philosophers, logic performs the function of an instrument enabling the ascent; mathematics is treated either as propaedeutic to philosophy or as a mediator between physical and metaphysical knowledge, whereas physics and metaphysics provide the core of knowledge necessary for the attainment of happiness.