Abstract:
Climate change is the single biggest environmental problem in the world at the moment. Although its effects are still not fully understood and there is a considerable amount of uncertainty, many nations have decided to mitigate the change. On the societal level, a planner who tries to find an economically optimal solution to an environmental pollution problem seeks to reduce pollution from the sources where reductions are most cost-effective. This study aims to find out how effective the instruments of agricultural policy are in the case of climate change mitigation in Finland. The theoretical base of this study is neoclassical economic theory, built on the assumption of a rational economic agent who maximizes his own utility. This base has been widened in the direction clearly essential to the matter: the theory of environmental economics. Deeply relevant to this problem and central in environmental economics are the concepts of externalities and public goods; also relevant are the problems of global pollution and non-point-source pollution. Econometric modelling was the method applied in this study. The Finnish part of the AGMEMOD model, which covers the whole EU, was used to estimate the development of pollution. This is a seemingly recursive, partially dynamic partial-equilibrium model constructed to predict the development of Finnish agricultural production of the most important products. For the study, I personally updated the model and widened its scope in some relevant respects. I also devised a table that calculates greenhouse gas emissions according to the rules set by the IPCC. With the model I investigated five alternative scenarios in comparison to the baseline scenario of Agenda 2000 agricultural policy. The alternative scenarios were: 1) the CAP reform of 2003, 2) free trade in agricultural commodities, 3) technological change, 4) banning the cultivation of organic soils, and 5) the combination of the last three as the maximal achievable reduction. The maximal achievement in alternative scenario 5 was 1/3 of the level reached in the baseline scenario. The CAP reform caused only a minor reduction compared to the baseline, whereas the free trade scenario and the technological change scenario alone each caused a significant reduction. The biggest single reduction was achieved by banning the cultivation of organic land; however, this was also the most questionable scenario to realize, for reasons elaborated further in the paper. The maximal reduction achievable in the Finnish agricultural sector is about 11 % of the emission reduction needed to comply with the Kyoto protocol.
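To make the emission accounting concrete, here is a minimal sketch of the IPCC Tier 1 idea behind such a table: emissions are activity data multiplied by emission factors, summed over sources and converted to CO2 equivalents with global warming potentials. All source names, activity levels and factors below are illustrative placeholders, not values from the study.

```python
# Tier 1 accounting sketch: emissions = activity x emission factor, per source.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}        # 100-year GWPs (IPCC AR4)

# source name -> (gas, activity level, emission factor per unit of activity)
sources = {
    "enteric fermentation": ("CH4", 300_000, 0.09),   # head x t CH4/head/yr
    "fertiliser N2O":       ("N2O", 1_200_000, 0.01), # kg N x kg N2O/kg N
    "organic soils":        ("CO2", 200_000, 5.0),    # ha x t CO2/ha/yr
}

def co2_equivalents(srcs):
    """Sum activity x emission factor over sources, weighted by GWP."""
    return sum(GWP[g] * a * f for g, a, f in srcs.values())

baseline = co2_equivalents(sources)
# A scenario is just a modified activity table, e.g. banning organic soils:
no_organic = {k: v for k, v in sources.items() if k != "organic soils"}
print(f"baseline {baseline:.3g} vs scenario {co2_equivalents(no_organic):.3g} t CO2-eq")
```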
Abstract:
There exist various proposals for building a functional and fault-tolerant large-scale quantum computer. Topological quantum computation is a more exotic proposal, which makes use of the properties of quasiparticles that manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and to carry quantum numbers which are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique: the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered. Studying their applicability to quantum computation could be a topic of further research.
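To illustrate what a non-trivial fusion algebra means in practice, here is a toy sketch using the two-particle Fibonacci anyon model as a stand-in; the thesis's actual example is the quantum double of S_3, whose larger fusion table is not reproduced here.

```python
# A toy fusion algebra: the Fibonacci model (labels "1" and "t") as a stand-in
# for the quantum double of S_3 derived in the thesis. Fusing two t-anyons has
# two possible outcomes -- this non-uniqueness is where a qubit can be stored.
FUSION = {
    ("1", "1"): ["1"],
    ("1", "t"): ["t"],
    ("t", "1"): ["t"],
    ("t", "t"): ["1", "t"],   # non-unique total charge
}

def fuse_many(labels):
    """All possible total charges of a chain of anyons, with multiplicity."""
    outcomes = [labels[0]]
    for nxt in labels[1:]:
        outcomes = [c for o in outcomes for c in FUSION[(o, nxt)]]
    return outcomes

# Three t-anyons can fuse to total charge "t" in two distinct ways -> one qubit
print(fuse_many(["t", "t", "t"]).count("t"))  # prints 2
```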
Abstract:
This thesis consists of two parts. In the first part we performed a single-molecule force-extension measurement with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the identified DNA tethers had a persistence length between 30 and 60 nm; the correct value should lie within 40 to 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drifting of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced drifting of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
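For reference, the fit described in the first part can be sketched with the Marko-Siggia worm-like chain interpolation formula, F(x) = (kT/P)[1/(4(1-x/Lc)^2) - 1/4 + x/Lc]; the "measurement" below is synthetic, not the thesis data.

```python
# Worm-like chain fit sketch: extract persistence length P and contour
# length Lc from synthetic force-extension data.
import numpy as np
from scipy.optimize import curve_fit

kT = 4.11e-21  # thermal energy at ~298 K, joules

def wlc_force(x, P, Lc):
    """Marko-Siggia interpolation: force (N) at extension x (m)."""
    r = x / Lc
    return (kT / P) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

rng = np.random.default_rng(0)
x = np.linspace(0.5e-6, 3.1e-6, 60)                      # extensions, m
f = wlc_force(x, 50e-9, 3.38e-6) * (1 + 0.03 * rng.standard_normal(x.size))

(P_fit, Lc_fit), _ = curve_fit(wlc_force, x, f, p0=(40e-9, 3.0e-6))
print(f"P = {P_fit * 1e9:.1f} nm, Lc = {Lc_fit * 1e6:.2f} um")
```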
Abstract:
The purpose of this Master's thesis was to determine the maximum and minimum temperature extremes that are still physically possible in the Finnish climate. Two one-dimensional atmospheric models were used, 1D-H634 and 1D-RCA3. The former is based on the HIRLAM 6.3.4 model; in the latter, HIRLAM's surface processes have been replaced with the physics of the RCA3 model of the Swedish Rossby Centre. All three sounding stations in Finland (Jokioinen, Jyväskylä and Sodankylä) were included in the study. The work began by extracting from the climate database of the Finnish Meteorological Institute the dates on which the two-metre temperature had exceeded +30°C in summer or fallen below -35°C in winter. Next, the soundings corresponding to these periods were retrieved, and by examining them an attempt was made to determine which factors contributed to the occurrence of the extreme temperatures. The sounding data were then interpolated to the 40 vertical levels of the models. These data were fed into the models together with the date, time of day and coordinates, and the output was the diurnal two-metre temperature curves. Because one-dimensional models do not account for heat advection, the heat advection corresponding to the times in question was calculated from the ERA-40 reanalyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). In addition, the mean diurnal cycles of advection were calculated for summer (June-July-August) and winter (January-February). In the summer cases, the two-metre temperatures of the runs based on Finnish soundings could not exceed the Finnish temperature record of +35.9°C measured in Turku in 1914. However, comparing the model runs with observations showed that in the summer cases the model gave values up to 5°C warmer than were actually measured in those situations. Finally, a model run was made with a sounding taken at Tallinn airport in August 1992; its result (+36.4°C) exceeded the temperature record observed in Finland. In the winter cases, the 1D-H634 model was unable to reach the Finnish cold record (-51.5°C), measured in Kittilä in 1999, although in most runs the modelled temperatures were colder than the observations for those situations indicate. With the 1D-RCA3 model, temperatures as low as -53.8°C were reached, and its cold readings were in general much lower than those of the 1D-H634 model.
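The advection correction mentioned above can be sketched as follows: horizontal temperature advection is -v·∇T, estimated by finite differences from gridded fields. The grid, spacing and fields below are synthetic stand-ins, not ERA-40 data.

```python
# Horizontal temperature advection -v . grad(T) from a gridded analysis.
import numpy as np

dx = dy = 50_000.0                          # grid spacing, m (hypothetical)
rng = np.random.default_rng(0)
T = 270.0 + 5.0 * rng.random((40, 40))      # 2 m temperature field, K
u = 5.0 * np.ones((40, 40))                 # eastward wind, m/s
v = -2.0 * np.ones((40, 40))                # northward wind, m/s

dTdy, dTdx = np.gradient(T, dy, dx)         # gradients along axis 0, axis 1
advection = -(u * dTdx + v * dTdy)          # K/s; positive = warm advection

print(f"mean advection: {advection.mean() * 3600:+.3f} K/h")
```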
Abstract:
The aim of this study is to investigate the composition of the crust in Finland using seismic wide-angle velocity models and laboratory measurements of P- and S-wave velocities of different rock types. The velocities adopted from wide-angle velocity models were compared with laboratory velocities of different rock types corrected for the crustal PT conditions in the study area. The wide-angle velocity models indicate that the P-wave velocity does not increase only step-wise at the boundaries of major crustal layers; there is also a gradual increase of velocity within the layers. On the other hand, the laboratory measurements indicate that no single rock type can produce the gradual downward-increasing trends. Thus, there must be gradual vertical changes in rock composition. The downward increase of velocities indicates that the composition of the crust becomes gradually more mafic with increasing depth. Even though single rock types cannot simulate the wide-angle model velocities, a mixture of rock types can. A large number of rock type mixtures give the correct P-wave velocities; therefore, the inverse solution of rock types and their proportions from velocities is a non-unique problem if only P-wave velocities are available. The number of possible rock type mixtures can be limited using S-wave velocities, reflection seismic results and other geological and geophysical results from the study area. The crustal model FINMIX-2 presented in this study suggests that the crustal velocity profiles can be simulated with rock type mixtures in which the upper crust consists of felsic gneisses and granitic-granodioritic rocks with a minor contribution of quartzite, amphibolite and diabase. In the middle crust the amphibolite proportion increases. The lower crust consists of tonalitic gneiss, mafic garnet granulite, hornblendite, pyroxenite and minor mafic eclogite. This composition model is in agreement with deep crustal kimberlite-hosted xenolith data from eastern Finland and with the reflectivity of FIRE (Finnish Reflection Experiment). According to the FINMIX-2 model, the Moho is deeper and the crustal composition more mafic than an average global continental model would suggest. Composition models of southern Finland are quite similar to the FINMIX-2 model, although there are minor differences between the models, which indicate areal differences in composition. Models of northern Finland show that the crustal thickness is smaller than in southern Finland and that the composition of the upper crust is different. Density profiles calculated from the lithological models suggest that there is practically no density contrast at the Moho in areas of high-velocity lower crust. This implies that crustal thickness in the central Fennoscandian Shield may have been controlled by the densities of the lower crustal and upper mantle rocks.
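The mixing idea can be made concrete with a toy sketch: a layer's P-wave velocity is approximated as a proportion-weighted average of laboratory rock velocities (a simple Voigt-type rule; the study's PT corrections and actual velocity values are not reproduced, and the numbers below are illustrative).

```python
# Proportion-weighted P-wave velocity of a rock mixture (toy values).
rock_vp = {                       # km/s at assumed crustal PT, hypothetical
    "felsic gneiss": 6.0,
    "granite-granodiorite": 6.1,
    "amphibolite": 6.7,
    "mafic garnet granulite": 7.2,
}

def mixture_vp(proportions):
    """Volume-proportion-weighted average velocity of a rock mixture."""
    assert abs(sum(proportions.values()) - 1.0) < 1e-9
    return sum(rock_vp[r] * p for r, p in proportions.items())

upper = {"felsic gneiss": 0.5, "granite-granodiorite": 0.4, "amphibolite": 0.1}
middle = {"felsic gneiss": 0.4, "granite-granodiorite": 0.2, "amphibolite": 0.4}
print(f"upper: {mixture_vp(upper):.2f} km/s, middle: {mixture_vp(middle):.2f} km/s")
```

Different proportion sets can reproduce the same mixture velocity, which is exactly the non-uniqueness the study constrains with S-wave and reflection data.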
Abstract:
Breast cancer is the most common cancer in women in Western countries. In the early stages of development most breast cancers are hormone-dependent, and estrogens, especially estradiol, have a pivotal role in their development and progression. One approach to the treatment of hormone-dependent breast cancers is to block the formation of the active estrogens by inhibiting the action of the steroid-metabolising enzymes. 17beta-Hydroxysteroid dehydrogenase type 1 (17beta-HSD1) is a key enzyme in the biosynthesis of estradiol, the most potent female sex hormone. The 17beta-HSD1 enzyme catalyses the final step, converting estrone into the biologically active estradiol. Blocking 17beta-HSD1 activity with a specific enzyme inhibitor could provide a means to reduce circulating and tumour estradiol levels and thus promote tumour regression. In recent years 17beta-HSD1 has been recognised as an important drug target. Some inhibitors of 17beta-HSD1 have been reported; however, none are on the market, nor have clinical trials been announced. The majority of known 17beta-HSD1 inhibitors are based on steroidal structures, while relatively little has been reported on non-steroidal inhibitors. Compared with steroidal inhibitors, non-steroidal compounds could have the advantages of synthetic accessibility, drug-likeness, selectivity and non-estrogenicity. This study describes the synthesis of a large group of novel 17beta-HSD1 inhibitors based on a non-steroidal thieno[2,3-d]pyrimidin-4(3H)-one core. An efficient synthesis route was developed for the lead compound and subsequently employed in the synthesis of a thieno[2,3-d]pyrimidin-4(3H)-one based molecule library. The biological activities and binding of these inhibitors to 17beta-HSD1 and, finally, the quantitative structure-activity relationship (QSAR) model are also reported. In this study, several potent and selective 17beta-HSD1 inhibitors without estrogenic activity were identified. The establishment of this novel class of inhibitors is a notable advance in 17beta-HSD1 inhibitor development. Furthermore, the 3D-QSAR model constructed on the basis of this study offers a powerful tool for future 17beta-HSD1 inhibitor development. As part of the fundamental science underpinning this research, the chemical reactivity of fused (di)cycloalkeno thieno[2,3-d]pyrimidin-4(3H)-ones with electrophilic reagents, i.e. the Vilsmeier reagent and dimethylformamide dimethylacetal, was investigated. These findings resulted in a revision of the reaction mechanism of the Vilsmeier haloformylation and further contributed to understanding the chemical reactivity of this compound class. This study revealed that the reactivity depends on a stereoelectronic effect arising from different ring conformations.
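As an illustration of the QSAR step, here is a minimal sketch that fits a linear model from molecular descriptors to inhibitory activity (e.g. pIC50) and predicts a new analogue. The descriptors and activities are random placeholders, not data from the thesis; a true 3D-QSAR uses field values sampled on a grid around aligned molecules rather than scalar descriptors.

```python
# Minimal QSAR sketch: least-squares fit from descriptors to activity.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                      # 40 compounds x 5 descriptors
true_w = np.array([0.8, -0.5, 0.0, 0.3, 0.1])     # "true" descriptor weights
y = X @ true_w + 6.0 + 0.1 * rng.normal(size=40)  # synthetic pIC50 values

A = np.hstack([X, np.ones((40, 1))])              # add intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

new_compound = np.array([0.2, -1.0, 0.5, 0.0, 0.3, 1.0])  # descriptors + bias
print(f"predicted pIC50: {new_compound @ w:.2f}")
```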
Abstract:
Breast cancer is the most common cancer in women in Western countries. Approximately two-thirds of breast cancer tumours are hormone-dependent, requiring estrogens to grow. Estrogens are formed in the human body via a multistep route starting from cholesterol. The final steps in the biosynthesis involve the CYP450 aromatase enzyme, which converts the male androgen hormones (preferred substrate androstenedione, ASD) into estrogens (estrone, E1), and the 17beta-HSD1 enzyme, which converts the biologically less active E1 into the active hormone 17beta-hydroxyestradiol, E2. E2 binds to the nuclear estrogen receptors, causing a cascade of biochemical reactions leading to cell proliferation in normal tissue and to tumour growth in cancer tissue. Aromatase and 17beta-HSD1 are expressed in or near the breast tumour, locally providing the tissue with estrogens. One approach to treating hormone-dependent breast tumours is to block the local estrogen production by inhibiting these two enzymes. Aromatase inhibitors are already on the market for treating breast cancer, despite the lack of an experimentally solved structure. The structure of 17beta-HSD1, on the other hand, has been solved, but no commercial drugs have emerged from the drug discovery projects reported in the literature. Computer-assisted molecular modelling is an invaluable tool in modern drug design projects. Modelling techniques can be used to generate a model of the target protein and to design novel inhibitors for it even if the target protein structure is unknown. Molecular modelling has applications in predicting the activities of theoretical inhibitors and in finding possible active inhibitors from a compound database. Inhibitor binding can also be studied at the atomic level with molecular modelling. To clarify the interactions between the aromatase enzyme and its substrate and inhibitors, we generated a homology model based on a mammalian CYP450 enzyme, rabbit progesterone 21-hydroxylase CYP2C5. The model was carefully validated using molecular dynamics simulations (MDS) with and without the natural substrate ASD. The binding orientation of the inhibitors was based on the hypothesis that the inhibitors coordinate to the heme iron, and was studied using MDS. The inhibitors were dietary phytoestrogens, which have been shown to reduce the risk of breast cancer. To further validate the model, the interactions of a commercial breast cancer drug were studied with MDS and ligand–protein docking. In the case of 17beta-HSD1, a 3D QSAR model was generated on the basis of MDS of an enzyme complex with an active inhibitor and ligand–protein docking, employing a compound library synthesised in our laboratory. Furthermore, four pharmacophore hypotheses with and without a bound substrate or an inhibitor were developed and used to screen a commercial database of drug-like compounds. The homology model of aromatase showed stable behaviour in MDS and was capable of explaining most of the results from mutagenesis studies. We were able to identify the active site residues contributing to inhibitor binding and to explain differences in coordination geometry corresponding to the inhibitory activity. Interactions between the inhibitors and aromatase were in agreement with the mutagenesis studies reported for aromatase. Simulations of 17beta-HSD1 with inhibitors revealed an inhibitor binding mode with hydrogen bond interactions not previously reported, and a hydrophobic pocket capable of accommodating a bulky side chain.
Pharmacophore hypothesis generation, followed by virtual screening, was able to identify several compounds that can be used in lead compound generation. The visualisation of the interaction fields from the QSAR model and the pharmacophores provided us with novel ideas for inhibitor development in our drug discovery project.
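The pharmacophore screening step lends itself to a small illustration: a hypothesis can be treated as a set of chemical features with pairwise distance constraints, and a candidate conformer matches if one point per feature type satisfies every constraint. The feature types, coordinates and tolerances below are invented for illustration only.

```python
# Toy pharmacophore match: pairwise distance constraints between features.
from itertools import product
import math

# (feature A, feature B, ideal A-B distance in angstroms, tolerance)
HYPOTHESIS = [
    ("donor", "acceptor", 5.0, 1.0),
    ("donor", "aromatic", 4.5, 1.5),
    ("acceptor", "aromatic", 4.0, 1.0),
]
TYPES = ["donor", "acceptor", "aromatic"]

def matches(conformer):
    """conformer: dict mapping feature type -> list of 3D feature points."""
    for pts in product(*(conformer.get(t, []) for t in TYPES)):
        chosen = dict(zip(TYPES, pts))
        if all(abs(math.dist(chosen[a], chosen[b]) - d) <= tol
               for a, b, d, tol in HYPOTHESIS):
            return True
    return False

candidate = {"donor": [(0.0, 0.0, 0.0)],
             "acceptor": [(5.2, 0.0, 0.0)],
             "aromatic": [(3.0, 3.2, 0.0)]}
print(matches(candidate))  # True: all three distances fall within tolerance
```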
Abstract:
The importance of intermolecular interactions to chemistry, physics, and biology is difficult to overestimate. Without intermolecular forces, condensed phase matter could not form. The simplest way to categorize different types of intermolecular interactions is to describe them as van der Waals and hydrogen-bonded (H-bonded) interactions. In the H-bond, the intermolecular interaction appears between a positively charged hydrogen atom and electronegative fragments, and it originates from strong electrostatic interactions. H-bonding is important for the properties of condensed phase water and in many biological systems, including the structure of DNA and proteins. Vibrational spectroscopy is a useful tool for studying complexes and the solvation of molecules. The vibrational frequency shift has been used to characterize complex formation. In an H-bonded system A∙∙∙H-X (A and X are the acceptor and donor species, respectively), the vibrational frequency of the H-X stretching vibration usually decreases from its value in free H-X (red-shift). This frequency shift has been used as evidence for H-bond formation, and the magnitude of the shift has been used as an indicator of the H-bonding strength. In contrast to this normal behavior are the blue-shifting H-bonds, in which the H-X vibrational frequency increases upon complex formation. In the last decade, there has been active discussion regarding these blue-shifting H-bonds. Noble gases have been considered inert due to their limited reactivity with other elements. In the early 1930s, Pauling predicted the stable noble-gas compounds XeF6 and KrF6, but it was not until three decades later that Neil Bartlett synthesized the first noble-gas compound, XePtF6, in 1962. A renaissance of noble-gas chemistry began in 1995 with the discovery of noble-gas hydride molecules at the University of Helsinki. The first hydrides were HXeCl, HXeBr, HXeI, HKrCl, and HXeH. These molecules have the general formula HNgY, where H is a hydrogen atom, Ng is a noble-gas atom (Ar, Kr, or Xe), and Y is an electronegative fragment. At present, this class of molecules comprises 23 members, including both inorganic and organic compounds. The first and only argon-containing neutral chemical compound, HArF, was synthesized in 2000 and its properties have since been investigated in a number of studies. A helium-containing chemical compound, HHeF, was predicted computationally, but its lifetime has been predicted to be severely limited by hydrogen tunneling. Helium and neon are the only elements in the periodic table that do not form neutral, ground-state molecules. A noble-gas matrix is a useful medium in which to study unstable and reactive species, including ions. A solvated proton forms a centrosymmetric NgHNg+ (Ng = Ar, Kr, and Xe) structure in a noble-gas matrix, and this is probably the simplest example of a solvated proton. Interestingly, the hypothetical NeHNe+ cation is isoelectronic with the water-solvated proton H5O2+ (Zundel ion). In addition to the NgHNg+ cations, the isoelectronic YHY- (Y = halogen atom or pseudohalogen fragment) anions have been studied with the matrix-isolation technique. These species have been known to exist in alkali metal salts (YHY)-M+ (M = alkali metal, e.g. K or Na) for more than 80 years. Hydrated HF forms the FHF- structure in aqueous solutions, and these ions participate in several important chemical processes.
In this thesis, studies of the intermolecular interactions of HNgY molecules and centrosymmetric ions with various species are presented. The HNgY complexes show unusual spectral features, e.g. large blue-shifts of the H-Ng stretching vibration upon complexation. It is suggested that the blue-shift is a normal effect for these molecules and that it originates from the enhanced (HNg)+Y- ion-pair character upon complexation. It is also found that the HNgY molecules are energetically stabilized in the complexed form, and this effect is computationally demonstrated for the HHeF molecule. The NgHNg+ and YHY- ions also show blue-shifts in their asymmetric stretching vibration upon complexation with nitrogen. Additionally, the matrix-site structure and hindered rotation (libration) of the HNgY molecules were studied. Librational motion is a much-discussed solid-state phenomenon, and HNgY molecules embedded in noble-gas matrices are good model systems for studying this effect. The formation mechanisms of the HNgY molecules and the decay mechanism of the NgHNg+ cations are discussed. A new electron tunneling model for the decay of NgHNg+ absorptions in noble-gas matrices is proposed. Studies of the NgHNg+∙∙∙N2 complexes support this electron tunneling mechanism.
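The frequency shifts discussed above can be quantified, in the crudest approximation, with the harmonic diatomic formula ν̃ = (1/2πc)√(k/μ): a stiffer effective H-Ng force constant upon complexation gives a blue-shift, a softer one a red-shift. The force constants below are illustrative, not values from the thesis.

```python
# Harmonic stretching frequency nu = (1 / (2*pi*c)) * sqrt(k / mu), in cm^-1.
import math

C = 2.99792458e10          # speed of light, cm/s
AMU = 1.66053907e-27       # atomic mass unit, kg

def wavenumber(k, m1, m2):
    """Harmonic frequency (cm^-1) for force constant k (N/m), masses in amu."""
    mu = (m1 * m2) / (m1 + m2) * AMU
    return math.sqrt(k / mu) / (2.0 * math.pi * C)

m_H, m_Xe = 1.008, 131.293
free = wavenumber(120.0, m_H, m_Xe)       # hypothetical free H-Xe stretch
complexed = wavenumber(130.0, m_H, m_Xe)  # stiffer bond upon complexation
print(f"shift: {complexed - free:+.1f} cm^-1 (blue-shift)")
```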
Abstract:
In this work both documentary and natural proxy data have been used to improve the accuracy of palaeoclimatic knowledge in Finland since the 18th century. Early meteorological observations from Turku (1748-1800) were first analyzed as a potential source of climate variability. The reliability of the calculated mean temperatures was evaluated by comparing them with contemporary temperature records from Stockholm, St. Petersburg and Uppsala. The resulting monthly, seasonal and yearly mean temperatures from 1748 to 1800 were compared with the present-day mean values (1961-1990): the comparison suggests that the winters of the period 1749-1800 were 0.8 °C colder than today, while the summers were 0.4 °C warmer. Over the same period, springs were 0.9 °C and autumns 0.1 °C colder than today. Despite their uncertainties compared with modern meteorological data, early temperature measurements offer direct and daily information about the weather for all months of the year, in contrast with other proxies. Secondly, early meteorological observations from Tornio (1737-1749) and Ylitornio (1792-1838) were used to study the temporal behaviour of the climate-tree growth relationship during the past three centuries in northern Finland. The analyses showed that the correlations between ring widths and mid-summer (July) temperatures did not vary significantly as a function of time. Early (June) and late summer (August) mean temperatures were secondary to mid-summer temperatures in controlling the radial growth. According to the dataset used, there was no clear signature of temporally reduced sensitivity of Scots pine ring widths to mid-summer temperatures over the periods of early and modern meteorological observations. Thirdly, plant phenological data together with tree-rings from south-west Finland since 1750 were examined as a palaeoclimate indicator. The information from the fragmentary, partly overlapping, partly nonsystematically biased plant phenological records of 14 different phenomena was combined into one continuous time series of phenological indices. The indices were found to be reliable indicators of the February to June temperature variations. In contrast, there was no correlation between the phenological indices and the precipitation data. Moreover, the correlations between the studied tree-rings and spring temperatures varied as a function of time, and hence their use in palaeoclimate reconstruction is questionable. The use of present tree-ring datasets for palaeoclimate purposes may become possible after the application of more sophisticated calibration methods. Climate variability since the 18th century is perhaps best seen in the fourth paper's multiproxy spring temperature reconstruction for south-west Finland. With the help of transfer functions, an attempt has been made to utilize both documentary and natural proxies. The reconstruction was verified with statistics showing a high degree of validity between the reconstructed and observed temperatures. According to the proxies and modern meteorological observations from Turku, springs have become warmer and have featured a warming trend since around the 1850s. Over the period from 1750 to around 1850, springs featured larger multidecadal low-frequency variability, as well as a smaller range of annual temperature variations. The coldest springtimes occurred around the 1840s and 1850s and in the first decade of the 19th century. Particularly warm periods occurred in the 1760s, 1790s, 1820s, 1930s, 1970s and from 1987 onwards, although this latest period also included cold springs, such as those of 1994 and 1996. On the basis of the available material, long-term temperature changes have been related to changes in the atmospheric circulation, such as the North Atlantic Oscillation (February-June).
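The transfer-function approach can be sketched as follows: calibrate a regression of proxy indices on instrumental spring temperatures over one period, then verify on a withheld period, e.g. with the reduction-of-error (RE) statistic. All series below are synthetic placeholders.

```python
# Transfer-function calibration/verification sketch for a proxy reconstruction.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1750, 2001)
temp = 2.0 + 0.004 * (years - 1750) + rng.normal(0, 0.8, years.size)  # observed
proxy = 0.9 * temp + rng.normal(0, 0.5, years.size)                    # proxy index

cal = years >= 1900            # calibration period
ver = ~cal                     # verification period

a, b = np.polyfit(proxy[cal], temp[cal], 1)   # transfer function T = a*p + b
recon = a * proxy + b

resid = temp[ver] - recon[ver]
RE = 1.0 - np.sum(resid**2) / np.sum((temp[ver] - temp[cal].mean())**2)
r = np.corrcoef(recon[ver], temp[ver])[0, 1]
print(f"verification r = {r:.2f}, RE = {RE:.2f} (RE > 0 indicates skill)")
```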
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory; in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions in standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox provided herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design, illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates together with a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible in this case as well.
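The remark that maximum likelihood can be justified as a Gaussian approximation at the posterior mode is easy to demonstrate: with a flat prior the posterior mode is the MLE, and the Laplace approximation at the mode has variance equal to the inverse observed information. A minimal sketch for a normal-mean model with synthetic data:

```python
# MLE as posterior mode under a flat prior, plus Laplace (Gaussian) spread.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=1.0, size=50)

def neg_log_lik(mu):                  # sigma fixed at 1 for simplicity
    return 0.5 * np.sum((data - mu) ** 2)

mle = minimize_scalar(neg_log_lik).x

# observed information = curvature of the negative log-likelihood at the mode
eps = 1e-4
info = (neg_log_lik(mle + eps) - 2 * neg_log_lik(mle)
        + neg_log_lik(mle - eps)) / eps**2
se = 1.0 / np.sqrt(info)
print(f"posterior mode (MLE) = {mle:.3f}, Laplace sd = {se:.3f}")
# analytically: mode = data.mean(), sd = 1/sqrt(n)
```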
Abstract:
This thesis studies the use of the Bayesian approach to statistical inference in fisheries stock assessment. The work was conducted in collaboration with the Finnish Game and Fisheries Research Institute, using the problem of monitoring and predicting the juvenile salmon population in the River Tornionjoki as an example application. The River Tornionjoki is the largest salmon river flowing into the Baltic Sea. The thesis tackles the issues of model formulation and model checking, as well as computational problems related to Bayesian modelling in the context of fisheries stock assessment. Each article of the thesis provides a novel method, either for extracting information from data obtained via a particular type of sampling system or for integrating the information about the fish stock from multiple sources in terms of a population dynamics model. Mark-recapture and removal sampling schemes and a random catch sampling method are covered for the estimation of population size. In addition, a method for estimating the stock composition of a salmon catch based on DNA samples is presented. For most of the articles, Markov chain Monte Carlo (MCMC) simulation has been used as a tool to approximate the posterior distribution. Problems arising from the sampling method are also briefly discussed and potential solutions are proposed. Special emphasis in the discussion is given to the philosophical foundation of the Bayesian approach in the context of fisheries stock assessment. It is argued that the role of the subjective prior knowledge needed in practically all parts of a Bayesian model should be recognized and consequently fully utilised in the process of model formulation.
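As a minimal sketch of posterior approximation by MCMC in this setting, consider a single mark-recapture experiment (n1 fish marked, n2 resampled, m marked recaptures) with a hypergeometric likelihood and a flat prior on the population size N; a random-walk Metropolis sampler then approximates the posterior. The counts below are invented for illustration.

```python
# Random-walk Metropolis sampler for population size N in mark-recapture.
import math
import random

n1, n2, m = 200, 150, 25          # hypothetical field counts

def log_lik(N):
    """Log hypergeometric likelihood of m marked recaptures, up to a constant."""
    if N < n1 + n2 - m:
        return -math.inf
    # terms of log[ C(N-n1, n2-m) / C(N, n2) ] that depend on N
    return (math.lgamma(N - n1 + 1) - math.lgamma(n2 - m + 1)
            - math.lgamma(N - n1 - n2 + m + 1)
            - math.lgamma(N + 1) + math.lgamma(n2 + 1) + math.lgamma(N - n2 + 1))

random.seed(0)
N, chain = 1000, []
for _ in range(20_000):
    prop = N + random.randint(-50, 50)            # symmetric proposal
    if math.log(random.random()) < log_lik(prop) - log_lik(N):
        N = prop
    chain.append(N)

post = chain[5_000:]                              # discard burn-in
print(f"posterior mean N ~ {sum(post) / len(post):.0f}")
```

The posterior mean lands near the classical Lincoln-Petersen estimate n1*n2/m = 1200, while the chain also conveys the uncertainty around it.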
Abstract:
This thesis studies homogeneous classes of complete metric spaces. Over the past few decades model theory has been extended to cover a variety of nonelementary frameworks. Shelah introduced the abstract elementary classes (AEC) in the 1980s as a common framework for the study of nonelementary classes. Another direction of extension has been the development of model theory for metric structures. This thesis takes a step towards combining these two by introducing an AEC-like setting for studying metric structures. To find a balance between generality and the possibility of developing stability-theoretic tools, we work in a homogeneous context, thus extending the usual compact approach. The homogeneous context enables the application of stability-theoretic tools developed in discrete homogeneous model theory. Using these, we prove categoricity transfer theorems for homogeneous metric structures with respect to isometric isomorphisms. We also show how generalized isomorphisms can be added to the class, giving a model-theoretic approach to, e.g., Banach space isomorphisms or operator approximations. The novelty is the built-in treatment of these generalized isomorphisms, making, e.g., stability up to perturbation the natural stability notion. With respect to these generalized isomorphisms we develop a notion of independence. It behaves well already for structures which are omega-stable up to perturbation and coincides with the notion from classical homogeneous model theory over sufficiently saturated models. We also introduce a notion of isolation and prove dominance for it.
Abstract:
In logic, a model is an abstraction of many kinds of mathematical objects. Graphs, groups and metric spaces, for example, are models. Finite model theory is the branch of logic that examines the expressive power of logics, i.e. formal languages, on models whose domain has finitely many elements. Restricting attention to finite models makes the results applicable in theoretical computer science, from whose perspective formulas of a logic can be thought of as programs and finite models as their inputs. Locality means the inability of a logic to distinguish models whose local features correspond to each other. The thesis examines several forms of locality and their preservation under combinations of logics. Using the tools developed, it is shown that between the variants known as Gaifman and Hanf locality there lies a hierarchy of locality notions whose levels can be separated from one another in lattices of increasing dimension. On the other hand, it is shown that the locality notions do not differ when attention is restricted to finite trees. Order-invariant logics are languages that have access to a built-in order relation, but it must be used in such a way that what the formulas express does not depend on the chosen order. The definition can be motivated from the standpoint of computing: even if the order of the records in a program's input is irrelevant to the expected result, the input always resides in the computer's memory in some order, which the program can exploit in its computation. The thesis investigates which forms of locality order-invariant extensions of first-order logic by unary quantifiers can satisfy. The results are applied by examining when a built-in order adds to the expressive power of a logic on finite trees.
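For reference, the two locality notions named above are commonly stated as follows (a standard textbook formulation, not quoted from the thesis); here $N_r^{\mathfrak{A}}(\bar{a})$ denotes the substructure induced by the radius-$r$ ball around $\bar{a}$ in the Gaifman graph of $\mathfrak{A}$.

```latex
% Standard formulations of the two locality notions (details may differ
% from the thesis's exact definitions).
\begin{itemize}
  \item A query $q$ is \emph{Gaifman-local} with locality rank $r$ if for
    every finite structure $\mathfrak{A}$ and tuples $\bar a, \bar b$,
    \[
      N_r^{\mathfrak{A}}(\bar a) \cong N_r^{\mathfrak{A}}(\bar b)
      \;\Longrightarrow\;
      \bigl(\bar a \in q(\mathfrak{A}) \iff \bar b \in q(\mathfrak{A})\bigr).
    \]
  \item A Boolean query $q$ is \emph{Hanf-local} with locality rank $r$ if
    $q(\mathfrak{A}) = q(\mathfrak{B})$ whenever $\mathfrak{A}$ and
    $\mathfrak{B}$ realize the same multiset of isomorphism types of
    radius-$r$ neighbourhoods of single elements.
\end{itemize}
```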
Abstract:
Research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AEC), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of a weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of comparable generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers, all of which are joint work with Tapani Hyttinen.
Abstract:
We consider an obstacle scattering problem for linear Beltrami fields. A vector field is a linear Beltrami field if its curl is a constant times the field itself. We study obstacles of Neumann type, that is, obstacles on whose boundary the normal component of the total field vanishes. We prove the unique solvability of the corresponding exterior boundary value problem, in other words, of the direct obstacle scattering model. For the inverse obstacle scattering problem, we derive the formulas needed to apply the singular sources method. Numerical examples are computed for both the direct and the inverse scattering problem.
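For concreteness, the exterior boundary value problem described above can be written out as follows; the notation is assumed here rather than quoted from the thesis.

```latex
% D is the obstacle with unit outward normal \nu; u the total field,
% \lambda \neq 0 a constant.
\begin{aligned}
  \nabla \times u &= \lambda u
    && \text{in } \mathbb{R}^3 \setminus \overline{D}
    && \text{(linear Beltrami field)},\\
  \nu \cdot u &= 0
    && \text{on } \partial D
    && \text{(Neumann-type obstacle)},\\
  u &= u^{\mathrm{inc}} + u^{\mathrm{sc}},
    && && u^{\mathrm{sc}} \text{ satisfying a radiation condition at infinity}.
\end{aligned}
```

Note that any such field is automatically divergence-free, since $\nabla \cdot (\nabla \times u) = 0$ forces $\lambda \, \nabla \cdot u = 0$.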