21 results for Strong finite model property
in Helda - Digital Repository of the University of Helsinki
Abstract:
We solve the Dynamic Ehrenfeucht-Fraïssé game on linear orders for both players, yielding a normal form for quantifier-rank equivalence classes of linear orders in first-order logic, infinitary logic, and generalized-infinitary logics with linearly ordered clocks. We show that Scott sentences can be manipulated quickly and classified into local information, and that their consistency can be decided effectively in the length of the Scott sentence. We describe a finite set of linked automata moving continuously on a linear order. Running them on ordinals, we compute the ordinal truth predicate and compute truth in the constructible universe of set theory. Among the corollaries are a study of semi-models as efficient databases of both model-theoretic and formulaic information, and a new proof of the atomicity of the Boolean algebra of sentences consistent with the theory of linear order, i.e., that the finitely axiomatized theories of linear order are dense.
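For finite linear orders, quantifier-rank equivalence can be computed directly by evaluating the Ehrenfeucht-Fraïssé game recursively. The sketch below is the standard textbook recursion, not the thesis's normal-form algorithm: once Spoiler pebbles a point, the game decomposes into independent games on the two intervals it creates.

```python
from functools import lru_cache

# equivalent(m, n, k): do linear orders of sizes m and n satisfy the same
# first-order sentences of quantifier rank <= k?  Evaluated as the k-round
# Ehrenfeucht-Fraisse game: Spoiler pebbles a point in either order,
# splitting it into two intervals, and Duplicator must answer with a split
# of the other order whose halves are (k-1)-equivalent.
@lru_cache(maxsize=None)
def equivalent(m: int, n: int, k: int) -> bool:
    if k == 0:
        return True
    for a, b in ((m, n), (n, m)):           # Spoiler may play in either order,
        for i in range(a):                  # splitting size a into i and a-1-i
            if not any(equivalent(i, j, k - 1) and
                       equivalent(a - 1 - i, b - 1 - j, k - 1)
                       for j in range(b)):  # Duplicator answers with a split of b
                return False
    return True

# Classical fact recovered by the recursion: sizes m and n are rank-k
# equivalent iff m == n or both are at least 2**k - 1.
print(equivalent(3, 4, 2), equivalent(2, 3, 2))  # True False
```

The memoised recursion runs in polynomial time in the sizes for fixed rank, which is enough to experiment with small orders.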
Abstract:
In logic, a model is an abstraction for many kinds of mathematical objects. For example, graphs, groups and metric spaces are models. Finite model theory is the branch of logic that studies the expressive power of logics, i.e. formal languages, on models with a finite number of elements. The restriction to finite models makes the results applicable in theoretical computer science, from whose perspective logical formulas can be thought of as programs and finite models as their inputs. Locality refers to the inability of a logic to distinguish between models whose local features match each other. The dissertation examines several forms of locality and their preservation under combinations of logics. Using the tools developed, it is shown that between the variants known as Gaifman and Hanf locality there is a hierarchy of locality notions whose levels can be separated from one another on grids of increasing dimension. On the other hand, it is shown that the locality notions coincide when attention is restricted to finite trees. Order-invariant logics are languages equipped with a built-in order relation, which must, however, be used in such a way that the properties expressed by formulas do not depend on the chosen order. The definition can be motivated from the computing perspective: even if the order of the data in a program's input is irrelevant to the expected result, the input is always stored in the computer's memory in some order, which the program can exploit in its computation. The dissertation investigates which forms of locality order-invariant extensions of first-order predicate logic with unary quantifiers can satisfy. The results are applied by examining when a built-in order adds expressive power to a logic on finite trees.
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based, model-assisted estimators; all GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. When estimating class frequencies, however, the study variable is binary or polytomous, so logistic-type assisting models (e.g. the logistic or probit model) should be preferred over the linear one. Nevertheless, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, covering both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. In the case of a strong assisting model, however, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors rather than of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG, especially if the domain sample size is small or the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
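The design-based GREG idea with a logistic assisting model can be sketched in a few lines. The code below is a minimal numpy illustration under simple random sampling with a synthetic population; the population, design and model are illustrative assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic population (sizes and parameters are illustrative) ---
N = 5000
x = rng.normal(0.0, 1.0, N)                  # auxiliary variable, known for all units
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))  # true model for the binary study variable
y = rng.binomial(1, p)

# Simple random sample without replacement; inclusion probability pi = n/N.
n = 500
s = rng.choice(N, size=n, replace=False)
pi = n / N

def fit_logistic(xs, ys, iters=25):
    """Maximum-likelihood logistic fit (intercept + slope) by Newton-Raphson."""
    beta = np.zeros(2)
    X = np.column_stack([np.ones_like(xs), xs])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1.0 - mu)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (ys - mu))
    return beta

# --- Model-assisted (GREG-type) estimate of the frequency of y = 1 ---
# GREG form: sum of model predictions over the whole population, plus the
# pi-expanded sample sum of the prediction errors.
beta = fit_logistic(x[s], y[s])
yhat_all = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
t_lgreg = yhat_all.sum() + ((y[s] - yhat_all[s]) / pi).sum()

t_ht = y[s].sum() / pi         # Horvitz-Thompson benchmark, no model assistance
print(t_lgreg, t_ht, y.sum())  # both estimates should be near the true count
```

With a strong auxiliary relationship, the model-assisted estimate typically lands closer to the true frequency than the plain Horvitz-Thompson expansion, which is the effect the study quantifies.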
Abstract:
Research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AECs), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of equal generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.
Abstract:
This dissertation consists of an introductory chapter presenting the economic analysis of housing markets and of three studies analysing policy measures that affect the housing market. Chapter 2 examines the effect of the Finnish property tax system on housing construction. A reform in 2001 allowed municipalities to tax undeveloped residential land at a higher rate than developed land. According to a theoretical model of the landowner's development decision, the higher property tax on undeveloped land should speed up construction, but the amount of money invested in construction may also change. The general level of property taxes on residential land does not affect the landowner's behaviour, because the taxable value of the lot does not depend on the development decision; only the difference between the tax rates on undeveloped and developed land matters. The empirical results are consistent with the theory. According to the results, a one percentage point increase in the difference between the tax rates on undeveloped and developed land increases single-family housing starts by five per cent in the short run. Chapter 3 analyses the benefits and costs of rent control to tenants. Households living in rent-controlled dwellings benefit from rent control in the form of lower rents. They may, however, also be harmed by the difficulty of finding a dwelling matching their preferences under rent control, since there are many takers for each vacant dwelling. Rents in the privately financed rental housing stock in Finland were deregulated in stages between 1992 and 1995. The empirical results of the study suggest that the welfare losses caused by the large discrepancies between desired and actual housing consumption under rent control offset a significant share of the benefits of low rents to tenants. Chapter 4 examines the incentive effects of the Finnish housing allowance system. The amount of the allowance is limited by ceilings on the floor area and the per-square-metre rent of the dwelling.
The rent ceiling can be interpreted as a constraint on housing quality. The theoretical part of the study shows that the housing allowance system creates strong incentives to move to dwellings in which the floor-area and quality constraints bind. According to the empirical results, however, households entitled to the housing allowance do not appear to respond to these incentives. The housing choices of entitled households relative to the floor-area and quality constraints match those of other households, and the potential increase in the housing allowance made possible by moving does not raise the probability of moving. Moving costs and imperfect information about the allowance system may explain the weak response to the incentives created by the housing allowance. Another possible explanation is incomplete take-up of the allowance: according to the study, only 70-80 per cent of the households entitled to the housing allowance actually claim it. The probability of taking up the allowance depends on the level of education, the amount of the allowance, and income expectations.
Abstract:
The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the end equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambivalence in the concept of mechanism used in many model-based explanations, and this difference corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions.
Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.
Abstract:
Various proposals exist for building a functional and fault-tolerant large-scale quantum computer. Topological quantum computation is one of the more exotic proposals, making use of the properties of quasiparticles that manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault-tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique: the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered.
Studying their applicability for quantum computation could be a topic for further research.
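The non-uniqueness of fusion can be made concrete with a small worked example. The sketch below encodes the standard Ising fusion rules, a textbook fusion algebra used here purely as an illustration rather than the S_3 quantum-double spectrum derived in the thesis, and checks that the fusion product is associative.

```python
from itertools import product

# Fusion multiplicities N[a][b][c] = N_{ab}^c for the Ising anyon model
# (labels: vacuum '1', Ising anyon 's', fermion 'f').
labels = ['1', 's', 'f']
N = {a: {b: {c: 0 for c in labels} for b in labels} for a in labels}

def set_rule(a, b, outcomes):
    for c in outcomes:
        N[a][b][c] = N[b][a][c] = 1

set_rule('1', '1', ['1'])
set_rule('1', 's', ['s'])
set_rule('1', 'f', ['f'])
set_rule('f', 'f', ['1'])
set_rule('s', 'f', ['s'])
set_rule('s', 's', ['1', 'f'])   # non-unique fusion: s x s = 1 + f

def fuse(state, b):
    """Fuse a formal sum {label: multiplicity} with a single anyon b."""
    out = {c: 0 for c in labels}
    for a, m in state.items():
        for c in labels:
            out[c] += m * N[a][b][c]
    return out

# Associativity: (a x b) x c and a x (b x c) give the same multiplicities.
for a, b, c in product(labels, repeat=3):
    left = fuse(fuse({a: 1}, b), c)
    right = {d: sum(N[b][c][e] * N[a][e][d] for e in labels) for d in labels}
    assert left == right

# Fusing three Ising anyons leaves the channel 's' with multiplicity 2:
# this kind of degeneracy is where topologically protected qubits live.
print(fuse(fuse({'s': 1}, 's'), 's'))
```

The same table-driven scheme extends to any fusion algebra, including the S_3 quantum-double rules derived in the thesis, by swapping in the appropriate labels and multiplicities.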
Abstract:
Sea-surface wind observations from previous-generation scatterometers have been successfully assimilated into Numerical Weather Prediction (NWP) models, and impact studies conducted with these assimilation implementations have shown a distinct improvement in model analysis and forecast accuracy. The Advanced Scatterometer (ASCAT), flown on Metop-A, offers improved sea-surface wind accuracy and better data coverage than the previous-generation scatterometers. Five individual case studies are carried out. The effect of including ASCAT data in the High Resolution Limited Area Model (HIRLAM) assimilation system (4D-Var) is found to be neutral to positive for situations with a general flow direction from the Atlantic Ocean. For northerly flow regimes the effect is negative; this is later discussed as being caused by problems in modelling northerly flows and by the lack of a suitable verification method. Suggestions for, and an example of, an improved verification method are presented later on. A closer examination of the evolution of a polar low is also shown. It is found that the ASCAT assimilation scheme improves the forecast of the initial evolution of the polar low, but the model advects the strong low-pressure centre eastward too fast. Finally, the flaws of the implementation are found to be small, and implementing the ASCAT assimilation scheme in the operational HIRLAM suite is feasible, although validation over a longer time period is still required.
Abstract:
Frictions are factors that hinder the trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and that, consequently, the frictionless models are valid approximations. This dissertation consists of three research papers related to the study of the validity of such approximations in two distinct modeling problems. Models of price dynamics based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless scenario. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market in which finitely many agents trade a financial security, the price of which evolves according to the price impacts generated by trades. It is shown that if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers are related to no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics.
Guasoni, Rásonyi, and Schachermayer have recently shown that, in such a setting, any financial security admits no arbitrage opportunities and there exist no feasible superhedging strategies for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, CFS is established for certain stochastic integrals and a subclass of Brownian semistationary processes in the two papers. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
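The flavour of the diffusion-approximation result in the first paper can be conveyed by a toy example, which is illustrative only and far simpler than the paper's agent-based model: a pure-jump price with n trades per unit time, each moving the price by plus or minus 1/sqrt(n), has approximately Gaussian increments for large n (a Donsker-type scaling limit).

```python
import random
import statistics

# Pure-jump price: n trades per unit time, each with price impact
# +/- 1/sqrt(n).  For large n the terminal price at t = 1 is close to
# a standard normal, i.e. the jump process looks like Brownian motion.
random.seed(42)

def terminal_price(n_trades: int) -> float:
    step = 1.0 / n_trades ** 0.5
    price = 0.0
    for _ in range(n_trades):
        price += step if random.random() < 0.5 else -step
    return price

samples = [terminal_price(1000) for _ in range(2000)]
print(statistics.mean(samples), statistics.stdev(samples))  # near 0 and 1
```

In the paper itself the jumps come from trades by finitely many interacting agents, and the herding extension replaces the constant jump size with a state-dependent one, yielding a stochastic volatility limit.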
Abstract:
In my dissertation I examine the economics of information goods and copyright from two different perspectives. The first belongs to the field of endogenous growth theory. In the dissertation I generalise a "pool of knowledge" type endogenous growth model to a setting in which a patentable innovation has a minimum size, and in which a firm that has patented a new kind of product can lose its monopoly on the product through imitation. Within the model one can analyse the effects of imitation, and of the required "minimum size" of innovations, on welfare and economic growth. The growth-maximising amount of imitation in the model is always zero, but the welfare-maximising amount of imitation can be positive. The "minimum size" of a patentable innovation that maximises economic growth and welfare can take any value below the theoretical maximum. In the two latter main chapters of my dissertation I examine commercial piracy of information goods with a microeconomic model. The production costs of illegal copies of information goods are small, and almost non-existent when they are distributed, for example, over the Internet. Since pirate copies have many different producers, microeconomic theory would lead one to expect their price to fall to almost zero, and if this happened, commercial piracy would be impossible. In my model I explain the existence of commercial piracy by assuming that the threat of punishment for piracy depends on how many consumers the pirate offers illegal goods to, and that it therefore affects the market for pirate copies in the same way as a cost of advertising. Increasing the fixed costs of commercial pirates is always in the interest of the copyright holder in my model, but increasing the "cost of advertising" is not necessarily so: it may also lower the profits from the sale of legal copies.
This result differs from corresponding earlier results in that it holds even if the information goods in question exhibit no network effects. Earlier models of non-commercial piracy have often yielded the result that illegal copies of an information good can increase the profits from the sale of legal copies if the value of legal copies to their users depends on how many other consumers use a similar good, and if the availability of pirate copies sufficiently increases the value of legal copies. In the final main chapter of the dissertation I generalise my model to network industries and use the generalised model to study in which cases the corresponding result also holds for commercial piracy.
Abstract:
We report the first measurement of the cross section for Z boson pair production at a hadron collider. This result is based on a data sample corresponding to 1.9 fb^{-1} of integrated luminosity from ppbar collisions at sqrt{s} = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. In the llll channel, we observe three ZZ candidates with an expected background of 0.096^{+0.092}_{-0.063} events. In the llnunu channel, we use a leading-order calculation of the relative ZZ and WW event probabilities to discriminate between signal and background. In the combination of the llll and llnunu channels, we observe an excess of events with a probability of 5.1 x 10^{-6} to be due to the expected background. This corresponds to a significance of 4.4 standard deviations. The measured cross section is sigma(ppbar -> ZZ) = 1.4^{+0.7}_{-0.6} (stat.+syst.) pb, consistent with the standard model expectation.
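As a quick consistency check of the quoted numbers (not part of the analysis itself), the background-only probability can be converted to a one-sided Gaussian significance with the Python standard library:

```python
from statistics import NormalDist

# Convert a one-sided p-value to a Gaussian significance (z-score).
# 5.1e-6 is the background-only probability quoted in the abstract.
p_value = 5.1e-6
z = NormalDist().inv_cdf(1.0 - p_value)
print(f"{z:.1f} standard deviations")  # ~4.4, matching the quoted significance
```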
Abstract:
We study effective models of chiral fields and the Polyakov loop that are expected to describe the dynamics responsible for the phase structure of two-flavor QCD at finite temperature and density. We describe the chiral sector using either the linear sigma model or the Nambu-Jona-Lasinio model, study the phase diagram, and determine the location of the critical point as a function of the explicit chiral symmetry breaking (i.e. the bare quark mass $m_q$). We also discuss the possible emergence of the quarkyonic phase in this model.
Abstract:
The Internet has made possible the cost-effective dissemination of scientific journals in the form of electronic versions, usually in parallel with the printed versions. At the same time the electronic medium also makes possible totally new open access (OA) distribution models, funded by author charges, sponsorship, advertising, voluntary work, etc., where the end product is free in full text to the readers. Although more than 2,000 new OA journals have been founded in the last 15 years, the uptake of open access has been rather slow, with currently around 5% of all peer-reviewed articles published in OA journals. The slow growth can to a large extent be explained by the fact that open access has predominantly emerged via newly founded journals and startup publishers. Established journals and publishers have not had strong enough incentives to change their business models, and the commercial risks in doing so have been high. In this paper we outline and discuss two different scenarios for how scholarly publishers could change their operating model to open access. The first is based on an instantaneous change and the second on a gradual change. We propose a way to manage the gradual change by bundling traditional “big deal” licenses and author charges for opening access to individual articles.
Abstract:
Recently, the focus of real estate investment has expanded from the building-specific level to the aggregate portfolio level. The portfolio perspective requires investment analysis for real estate that is comparable with that of other asset classes, such as stocks and bonds. Thus, despite its distinctive features, such as heterogeneity, high unit value, illiquidity and the use of valuations to measure performance, real estate should not be considered in isolation. This means that techniques which are widely used for other asset classes can also be applied to real estate. An important part of investment strategies which support decisions on multi-asset portfolios is identifying the fundamentals of movements in property rents and returns, and predicting them on the basis of these fundamentals. The main objective of this thesis is to find the key drivers and the best methods for modelling and forecasting property rents and returns in markets which have experienced structural changes. The Finnish property market, which is a small European market with structural changes and limited property data, is used as a case study. The findings in the thesis show that it is possible to use modern econometric tools for modelling and forecasting property markets. The thesis consists of an introductory part and four essays. Essays 1 and 3 model Helsinki office rents and returns, and assess the suitability of alternative techniques for forecasting these series. Simple time series techniques are able to account for structural changes in the way markets operate, and thus provide the best forecasting tool. Theory-based econometric models, in particular error correction models, which are constrained by long-run information, are better at explaining past movements in rents and returns than at predicting their future movements. Essay 2 proceeds by examining the key drivers of rent movements for several property types in a number of Finnish property markets.
The essay shows that commercial rents in local markets can be modelled using national macroeconomic variables and a panel approach. Finally, Essay 4 investigates whether forecasting models can be improved by accounting for asymmetric responses of office returns to the business cycle. The essay finds that the forecast performance of time series models can be improved by introducing asymmetries, and the improvement is sufficient to justify the extra computational time and effort associated with the application of these techniques.
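The error correction model form mentioned in connection with Essays 1 and 3 can be sketched with the two-step Engle-Granger procedure. The series and coefficients below are synthetic and purely illustrative, not the thesis's rent data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cointegrated pair: a random-walk driver ("gdp") and a series
# ("rent") tied to it by a long-run relation rent ~ 2 * gdp plus noise.
T = 400
gdp = np.cumsum(rng.normal(0.0, 1.0, T))
rent = 2.0 * gdp + rng.normal(0.0, 0.5, T)

# Step 1 (Engle-Granger): estimate the long-run relation by OLS and
# take the residual as the disequilibrium (error correction) term.
beta = np.polyfit(gdp, rent, 1)[0]
ecm_term = rent - beta * gdp

# Step 2: regress the short-run change d(rent) on the lagged
# disequilibrium and the contemporaneous change d(gdp).
d_rent = np.diff(rent)
d_gdp = np.diff(gdp)
X = np.column_stack([ecm_term[:-1], d_gdp])
alpha, gamma = np.linalg.lstsq(X, d_rent, rcond=None)[0]
print(beta, alpha, gamma)  # beta near 2; alpha negative (error correction)
```

A negative alpha is the error-correction signature: each period the series closes part of the gap to its long-run path, which is exactly the long-run constraint the thesis finds useful for explanation but less so for forecasting.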