6 results for Alloy, Model-Based Testing, Z, Test Case Generation

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the attainment of the final equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambiguity in the concept of mechanism used in many model-based explanations, and this ambiguity corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions. Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.

Relevance:

100.00%

Publisher:

Abstract:

There exist various suggestions for building a functional and fault-tolerant large-scale quantum computer. Topological quantum computation is a more exotic suggestion, which makes use of the properties of quasiparticles manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems, which are described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of topological nature. In particular, it is found that the addition of the quantum numbers is not unique, but that the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the presented general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered. Studying their applicability for quantum computation could be a topic of further research.
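The non-unique fusion described above can be made concrete with a small sketch. The rules below are those of the well-known Fibonacci anyon model (labels 1 and tau), not the quantum double of S_3 derived in the thesis; they serve only as a minimal illustration of a fusion algebra with more than one fusion channel.

```python
from itertools import product

# Fusion rules: a pair of anyon labels maps to the list of possible outcomes.
# These are the Fibonacci-model rules, used as a stand-in illustration.
FUSION = {
    ("1", "1"): ["1"],
    ("1", "tau"): ["tau"],
    ("tau", "1"): ["tau"],
    ("tau", "tau"): ["1", "tau"],  # non-unique: two fusion channels
}

def fuse(a, b):
    return FUSION[(a, b)]

# Commutativity check: a x b and b x a yield the same channels.
labels = ["1", "tau"]
for a, b in product(labels, repeat=2):
    assert sorted(fuse(a, b)) == sorted(fuse(b, a))

def vacuum_channels(n):
    """Count fusion trees of n tau anyons that end in the vacuum label."""
    states = {"1": 1}  # start from the vacuum
    for _ in range(n):
        new = {}
        for label, count in states.items():
            for out in fuse(label, "tau"):
                new[out] = new.get(out, 0) + count
        states = new
    return states.get("1", 0)

# The number of vacuum channels grows as the Fibonacci numbers: 0 1 1 2 3 5 8.
# This growing fusion space is what makes anyons usable for encoding qubits.
print([vacuum_channels(n) for n in range(1, 8)])
```

The growth of the fusion space with particle number, rather than any local degree of freedom, is what gives the topologically protected encoding discussed in the abstract.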

Relevance:

100.00%

Publisher:

Abstract:

Breast cancer is the most common cancer in women in Western countries. Approximately two-thirds of breast cancer tumours are hormone dependent, requiring estrogens to grow. Estrogens are formed in the human body via a multistep route starting from cholesterol. The final steps in the biosynthesis involve the CYP450 aromatase enzyme, converting the male hormones, androgens (preferred substrate androstenedione, ASD), into estrogens (estrone, E1), and the 17beta-HSD1 enzyme, converting the biologically less active E1 into the active hormone 17beta-hydroxyestradiol (E2). E2 binds to the nuclear estrogen receptors, causing a cascade of biochemical reactions leading to cell proliferation in normal tissue, and to tumour growth in cancer tissue. Aromatase and 17beta-HSD1 are expressed in or near the breast tumour, locally providing the tissue with estrogens. One approach to treating hormone-dependent breast tumours is to block the local estrogen production by inhibiting these two enzymes. Aromatase inhibitors are already on the market for treating breast cancer, despite the lack of an experimentally solved structure. The structure of 17beta-HSD1, on the other hand, has been solved, but no commercial drugs have emerged from the drug discovery projects reported in the literature. Computer-assisted molecular modelling is an invaluable tool in modern drug design projects. Modelling techniques can be used to generate a model of the target protein and to design novel inhibitors for it even if the target protein structure is unknown. Molecular modelling has applications in predicting the activities of theoretical inhibitors and in finding possible active inhibitors from a compound database. Inhibitor binding can also be studied at the atomic level with molecular modelling. To clarify the interactions between the aromatase enzyme and its substrate and inhibitors, we generated a homology model based on a mammalian CYP450 enzyme, rabbit progesterone 21-hydroxylase CYP2C5. The model was carefully validated using molecular dynamics simulations (MDS) with and without the natural substrate ASD. The binding orientation of the inhibitors was based on the hypothesis that the inhibitors coordinate to the heme iron, and was studied using MDS. The inhibitors were dietary phytoestrogens, which have been shown to reduce the risk of breast cancer. To further validate the model, the interactions of a commercial breast cancer drug were studied with MDS and ligand-protein docking. In the case of 17beta-HSD1, a 3D QSAR model was generated on the basis of MDS of an enzyme complex with an active inhibitor and ligand-protein docking, employing a compound library synthesised in our laboratory. Furthermore, four pharmacophore hypotheses with and without a bound substrate or an inhibitor were developed and used in screening a commercial database of drug-like compounds. The homology model of aromatase showed stable behaviour in MDS and was capable of explaining most of the results from mutagenesis studies. We were able to identify the active site residues contributing to inhibitor binding, and to explain differences in coordination geometry corresponding to the inhibitory activity. Interactions between the inhibitors and aromatase were in agreement with the mutagenesis studies reported for aromatase. Simulations of 17beta-HSD1 with inhibitors revealed an inhibitor binding mode with hydrogen bond interactions not previously reported, and a hydrophobic pocket capable of accommodating a bulky side chain. Pharmacophore hypothesis generation, followed by virtual screening, was able to identify several compounds that can be used in lead compound generation. The visualisation of the interaction fields from the QSAR model and the pharmacophores provided us with novel ideas for inhibitor development in our drug discovery project.
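One routine step in the kind of MDS-based model validation described above is measuring how far a simulated structure drifts from a reference conformation. The sketch below computes the RMSD after optimal superposition using the Kabsch algorithm; it is a generic illustration with invented coordinates, not the thesis's actual analysis pipeline.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two conformations (n x 3 coordinate arrays) after
    optimally superimposing P onto Q with the Kabsch algorithm."""
    P = P - P.mean(axis=0)               # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                          # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation, no reflection
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))

# Sanity check: a rotated copy of a structure should give an RMSD of ~0.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = np.pi / 3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(kabsch_rmsd(P @ Rz.T, P))
```

In trajectory analysis this quantity is typically computed frame by frame against the starting structure; a plateauing RMSD is one indicator of the stable behaviour the abstract reports for the aromatase model.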

Relevance:

100.00%

Publisher:

Abstract:

Several excited states of Ds and Bs mesons have been discovered in the last six years: BaBar, CLEO and Belle discovered the very narrow states D(s0)*(2317)+- and D(s1)(2460)+- in 2003, and the CDF and D0 Collaborations reported the observation of two narrow Bs resonances, B(s1)(5830)0 and B*(s2)(5840)0, in 2007. To keep up with experiment, meson excited states should be studied theoretically as well. The theory that describes the interaction between quarks and gluons is quantum chromodynamics (QCD). In this thesis the properties of the meson states are studied using the discretized version of the theory, lattice QCD. This allows us to perform QCD calculations from first principles, and to "measure" not just energies but also the radial distributions of the states on the lattice. This gives valuable theoretical information on the excited states, as we can extract the energy spectrum of a static-light meson up to D-wave states (states with orbital angular momentum L=2). We are thus able to predict where some of the excited meson states should lie. We also pay special attention to the order of the states, to detect possible inverted spin multiplets in the meson spectrum, as predicted by H. Schnitzer in 1978. This inversion is connected to the confining potential of the strong interaction. The lattice simulations can also help us understand the strong interaction better, as the lattice data can be treated as "experimental" data and used in testing potential models. In this thesis an attempt is made to explain the energies and radial distributions in terms of a potential model based on a one-body Dirac equation. The aim is to get more information about the nature of the confining potential, as well as to test how well the one-gluon exchange potential explains the short-range part of the interaction.
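As a flavor of the potential-model side of such a comparison, the sketch below solves a radial wave equation with a Cornell-type potential V(r) = -a/r + b*r (one-gluon-exchange Coulomb term plus linear confinement) by finite differences. It uses a non-relativistic Schrödinger equation rather than the one-body Dirac equation employed in the thesis, and all parameter values are illustrative, not fitted to lattice data.

```python
import numpy as np

def cornell_levels(a=0.4, b=0.2, mu=0.5, L=0, rmax=30.0, n=1500):
    """Lowest eigenvalues of -u''/(2 mu) + V_eff(r) u = E u with the
    Cornell potential V(r) = -a/r + b*r, natural units, reduced mass mu."""
    r = np.linspace(rmax / n, rmax, n)
    h = r[1] - r[0]
    # Effective potential includes the centrifugal barrier for L > 0.
    Veff = -a / r + b * r + L * (L + 1) / (2 * mu * r**2)
    # Tridiagonal finite-difference Hamiltonian with u(0) = u(rmax) = 0.
    main = 1.0 / (mu * h**2) + Veff
    off = -np.ones(n - 1) / (2 * mu * h**2)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:4]

# With the confining term switched off (b = 0) the ground state approaches
# the hydrogen-like value E0 = -mu * a**2 / 2 = -0.04 for a=0.4, mu=0.5.
print(cornell_levels(b=0.0)[0])
# The linear confinement term raises all levels and changes their ordering.
print(cornell_levels())
```

Comparing level spacings (and, with the eigenvectors, radial distributions) from such a solver against lattice "measurements" is the kind of test the abstract describes.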

Relevance:

100.00%

Publisher:

Abstract:

Nanomaterials with a hexagonally ordered atomic structure, e.g., graphene, carbon and boron nitride nanotubes, and white graphene (a monolayer of hexagonal boron nitride) possess many impressive properties. For example, the mechanical stiffness and strength of these materials are unprecedented. Also, the extraordinary electronic properties of graphene and carbon nanotubes suggest that these materials may serve as building blocks of next-generation electronics. However, the properties of pristine materials are not always what is needed in applications, and careful manipulation of their atomic structure, e.g., via particle irradiation, can be used to tailor the properties. On the other hand, inadvertently introduced defects can deteriorate the useful properties of these materials in radiation-hostile environments, such as outer space. In this thesis, defect production via energetic particle bombardment in the aforementioned materials is investigated. The effects of ion irradiation on multi-walled carbon and boron nitride nanotubes are studied experimentally by first conducting controlled irradiation treatments of the samples using an ion accelerator and subsequently characterizing the induced changes by transmission electron microscopy and Raman spectroscopy. The usefulness of the characterization methods is critically evaluated and a damage grading scale is proposed, based on transmission electron microscopy images. Theoretical predictions are made on defect production in graphene and white graphene under particle bombardment. A stochastic model based on first-principles molecular dynamics simulations is used together with electron irradiation experiments for understanding the formation of peculiar triangular defect structures in white graphene. An extensive set of classical molecular dynamics simulations is conducted, in order to study defect production under ion irradiation in graphene and white graphene.
In the experimental studies the response of carbon and boron nitride multi-walled nanotubes to irradiation with a wide range of ion types, energies and fluences is explored. The stabilities of these structures under ion irradiation are investigated, as well as the issue of how the mechanism of energy transfer affects the irradiation-induced damage. An irradiation fluence of 5.5x10^15 ions/cm^2 with 40 keV Ar+ ions is established to be sufficient to amorphize a multi-walled nanotube. In the case of 350 keV He+ ion irradiation, where most of the energy transfer happens through inelastic collisions between the ion and the target electrons, an irradiation fluence of 1.4x10^17 ions/cm^2 heavily damages carbon nanotubes, whereas a larger irradiation fluence of 1.2x10^18 ions/cm^2 leaves a boron nitride nanotube in much better condition, indicating that carbon nanotubes might be more susceptible to damage via electronic excitations than their boron nitride counterparts. An elevated temperature was found to considerably reduce the accumulated damage created by energetic ions in both carbon and boron nitride nanotubes, attributed to enhanced defect mobility and efficient recombination at high temperatures. Additionally, cobalt nanorods encapsulated inside multi-walled carbon nanotubes were observed to transform into spherical nanoparticles after ion irradiation at an elevated temperature, which can be explained by the inverse Ostwald ripening effect. The simulation studies on ion irradiation of the hexagonal monolayers yielded quantitative estimates of the types and abundances of defects produced within a large range of irradiation parameters. He, Ne, Ar, Kr, Xe, and Ga ions were considered in the simulations, with kinetic energies ranging from 35 eV to 10 MeV, and the role of the angle of incidence of the ions was studied in detail. A stochastic model was developed for utilizing the large amount of data produced by the molecular dynamics simulations.
It was discovered that a high degree of selectivity over the types and abundances of defects can be achieved by carefully selecting the irradiation parameters, which can be of great use when precise patterning of graphene or white graphene using focused ion beams is planned.
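The stochastic model mentioned above can be caricatured in a few lines: once MD simulations supply per-impact probabilities for each defect type (as a function of ion species, energy and incidence angle), the defect population after many impacts can be sampled by Monte Carlo. The defect types and probabilities below are invented placeholders, not values from the thesis.

```python
import random

# Hypothetical per-impact defect probabilities, standing in for values that
# would be extracted from molecular dynamics simulations.
DEFECT_PROBS = {
    "single_vacancy": 0.12,
    "double_vacancy": 0.05,
    "complex": 0.02,
}

def irradiate(n_ions, probs=DEFECT_PROBS, seed=0):
    """Sample the accumulated defect population after n_ions impacts,
    treating each defect type as an independent production channel."""
    rng = random.Random(seed)
    counts = {d: 0 for d in probs}
    for _ in range(n_ions):
        for defect, p in probs.items():
            if rng.random() < p:
                counts[defect] += 1
    return counts

print(irradiate(10000))
```

With energy- and angle-dependent probability tables, the same loop would let one search the irradiation-parameter space for settings that favour one defect type over another, which is the selectivity the paragraph above refers to.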

Relevance:

100.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have been increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregression for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from 3 large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset. The second step consists of formulating forecasting equations including the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work opens multiple research directions. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
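The two-step procedure described above is easy to sketch on synthetic data: step one extracts principal-component factors from a standardized panel, step two runs a one-step-ahead forecasting regression on the lagged factors, and the result is scored by mean squared forecast error relative to a univariate autoregressive benchmark. Everything below is simulated; no Finnish data or actual thesis specifications are used.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 200, 50, 2
F = rng.standard_normal((T, k))                     # latent factors
Lam = rng.standard_normal((k, N))                   # factor loadings
X = F @ Lam + 0.5 * rng.standard_normal((T, N))     # observed panel
# Target led by factor 1: y_{t+1} = F_{t,0} + noise.
y = np.r_[0.0, F[:-1, 0] + 0.3 * rng.standard_normal(T - 1)]

# Step 1: principal-component estimates of the factors.
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
fac = Z @ Vt[:k].T

def one_step_forecasts(regs, target, start):
    """Recursive OLS forecasts of target[t+1] from regs[t], t >= start."""
    preds = []
    for t in range(start, len(target) - 1):
        A = np.column_stack([np.ones(t), regs[:t]])
        beta, *_ = np.linalg.lstsq(A, target[1:t + 1], rcond=None)
        preds.append(np.concatenate([[1.0], regs[t]]) @ beta)
    return np.array(preds)

# Step 2: factor forecasts scored against a univariate AR(1) benchmark.
start = 100
pf = one_step_forecasts(fac, y, start)              # factor model
pa = one_step_forecasts(y.reshape(-1, 1), y, start) # AR(1) benchmark
actual = y[start + 1:]
rel_msfe = np.mean((actual - pf) ** 2) / np.mean((actual - pa) ** 2)
# Relative MSFE below 1 means the factor model beats the AR benchmark.
print(round(rel_msfe, 3))
```

In this simulated setup the target has no autocorrelation of its own, so the AR benchmark is weak and the factor model wins decisively; with real macroeconomic series the comparison is, as the abstract reports, dataset-dependent.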