304 results for MOA-malli
Abstract:
Breast cancer is the most common cancer in women in Western countries. In the early stages of development most breast cancers are hormone-dependent, and estrogens, especially estradiol, have a pivotal role in their development and progression. One approach to the treatment of hormone-dependent breast cancers is to block the formation of the active estrogens by inhibiting the action of the steroid-metabolising enzymes. 17beta-Hydroxysteroid dehydrogenase type 1 (17beta-HSD1) is a key enzyme in the biosynthesis of estradiol, the most potent female sex hormone. The 17beta-HSD1 enzyme catalyses the final step, converting estrone into the biologically active estradiol. Blocking 17beta-HSD1 activity with a specific enzyme inhibitor could provide a means to reduce circulating and tumour estradiol levels and thus promote tumour regression. In recent years, 17beta-HSD1 has been recognised as an important drug target. Some inhibitors of 17beta-HSD1 have been reported; however, there are no inhibitors on the market, nor have clinical trials been announced. The majority of known 17beta-HSD1 inhibitors are based on steroidal structures, while relatively little has been reported on non-steroidal inhibitors. Compared with steroidal 17beta-HSD1 inhibitors, non-steroidal compounds could have the advantages of synthetic accessibility, drug-likeness, selectivity and non-estrogenicity. This study describes the synthesis of a large group of novel 17beta-HSD1 inhibitors based on a non-steroidal thieno[2,3-d]pyrimidin-4(3H)-one core. An efficient synthesis route was developed for the lead compound and subsequently employed in the synthesis of a thieno[2,3-d]pyrimidin-4(3H)-one-based molecule library. The biological activities and binding of these inhibitors to 17beta-HSD1 and, finally, the quantitative structure-activity relationship (QSAR) model are also reported.
In this study, several potent and selective 17beta-HSD1 inhibitors without estrogenic activity were identified. The establishment of this novel class of inhibitors is a significant step forward in 17beta-HSD1 inhibitor development. Furthermore, the 3D-QSAR model constructed in this study offers a powerful tool for future 17beta-HSD1 inhibitor development. As part of the fundamental science underpinning this research, the chemical reactivity of fused (di)cycloalkeno thieno[2,3-d]pyrimidin-4(3H)-ones with electrophilic reagents, i.e. the Vilsmeier reagent and dimethylformamide dimethylacetal, was investigated. These findings resulted in a revision of the reaction mechanism of Vilsmeier haloformylation and further contributed to the understanding of the chemical reactivity of this compound class. This study revealed that the reactivity depends on a stereoelectronic effect arising from different ring conformations.
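A QSAR model is, at its simplest, a regression of measured activity on computed molecular descriptors. The thesis builds a 3D-QSAR model from interaction fields; as a much simpler illustration of the idea, a linear QSAR can be fitted by least squares. All compound data below are invented, and the toy activities follow an exact linear rule (pIC50 = logP + 0.02·PSA + 0.01·Vol), so the fit recovers that rule exactly.

```python
import numpy as np

# Hypothetical descriptor matrix: one row per inhibitor candidate;
# columns are invented descriptors (logP, polar surface area, volume).
X = np.array([
    [2.1, 45.0, 310.0],
    [1.8, 52.0, 298.0],
    [3.0, 38.0, 340.0],
    [2.5, 41.0, 325.0],
    [1.2, 60.0, 280.0],
])
# Toy activities generated from the exact rule pIC50 = logP + 0.02*PSA + 0.01*Vol.
y = np.array([6.10, 5.82, 7.16, 6.57, 5.20])

# Fit the linear QSAR model y ~ X.w + b by ordinary least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the activity of a new (hypothetical) compound.
new = np.array([2.3, 43.0, 318.0, 1.0])
pred = float(new @ coef)
print(round(pred, 2))  # -> 6.34
```

Real QSAR work would of course validate the model (cross-validation, external test set) rather than read off a single prediction.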
Abstract:
Breast cancer is the most common cancer in women in Western countries. Approximately two-thirds of breast cancer tumours are hormone-dependent, requiring estrogens to grow. Estrogens are formed in the human body via a multistep route starting from cholesterol. The final steps in the biosynthesis involve the CYP450 aromatase enzyme, which converts androgens (preferred substrate androstenedione, ASD) into estrogens (estrone, E1), and the 17beta-HSD1 enzyme, which converts the biologically less active E1 into the active hormone 17beta-estradiol (E2). E2 binds to the nuclear estrogen receptors, causing a cascade of biochemical reactions leading to cell proliferation in normal tissue and to tumour growth in cancer tissue. Aromatase and 17beta-HSD1 are expressed in or near the breast tumour, locally providing the tissue with estrogens. One approach to treating hormone-dependent breast tumours is to block the local estrogen production by inhibiting these two enzymes. Aromatase inhibitors are already on the market for the treatment of breast cancer, despite the lack of an experimentally solved structure of the enzyme. The structure of 17beta-HSD1, on the other hand, has been solved, but no commercial drugs have emerged from the drug discovery projects reported in the literature. Computer-assisted molecular modelling is an invaluable tool in modern drug design projects. Modelling techniques can be used to generate a model of the target protein and to design novel inhibitors for it even when the target protein structure is unknown. Molecular modelling has applications in predicting the activities of theoretical inhibitors and in finding possible active inhibitors from a compound database. Inhibitor binding can also be studied at the atomic level with molecular modelling. To clarify the interactions between the aromatase enzyme and its substrate and inhibitors, we generated a homology model based on a mammalian CYP450 enzyme, rabbit progesterone 21-hydroxylase CYP2C5.
The model was carefully validated using molecular dynamics simulations (MDS) with and without the natural substrate ASD. The binding orientations of the inhibitors, based on the hypothesis that they coordinate to the heme iron, were studied using MDS. The inhibitors were dietary phytoestrogens, which have been shown to reduce the risk of breast cancer. To further validate the model, the interactions of a commercial breast cancer drug were studied with MDS and ligand–protein docking. In the case of 17beta-HSD1, a 3D QSAR model was generated on the basis of MDS of an enzyme complex with an active inhibitor and ligand–protein docking, employing a compound library synthesised in our laboratory. Furthermore, four pharmacophore hypotheses with and without a bound substrate or an inhibitor were developed and used in screening a commercial database of drug-like compounds. The homology model of aromatase showed stable behaviour in MDS and was capable of explaining most of the results from mutagenesis studies. We were able to identify the active site residues contributing to inhibitor binding, and to explain differences in coordination geometry corresponding to the inhibitory activity. Interactions between the inhibitors and aromatase were in agreement with the mutagenesis studies reported for aromatase. Simulations of 17beta-HSD1 with inhibitors revealed a previously unreported inhibitor binding mode with hydrogen bond interactions, and a hydrophobic pocket capable of accommodating a bulky side chain. Pharmacophore hypothesis generation, followed by virtual screening, identified several compounds that can be used in lead compound generation. The visualisation of the interaction fields from the QSAR model and the pharmacophores provided us with novel ideas for inhibitor development in our drug discovery project.
Abstract:
The importance of intermolecular interactions to chemistry, physics, and biology is difficult to overestimate. Without intermolecular forces, condensed-phase matter could not form. The simplest way to categorize different types of intermolecular interactions is to describe them as van der Waals and hydrogen-bonded (H-bonded) interactions. In an H-bond, the intermolecular interaction appears between a positively charged hydrogen atom and an electronegative fragment, and it originates from strong electrostatic interactions. H-bonding is important when considering the properties of condensed-phase water and in many biological systems, including the structure of DNA and proteins. Vibrational spectroscopy is a useful tool for studying complexes and the solvation of molecules. The vibrational frequency shift has been used to characterize complex formation. In an H-bonded system A∙∙∙H-X (A and X are the acceptor and donor species, respectively), the vibrational frequency of the H-X stretching vibration usually decreases from its value in free H-X (red-shift). This frequency shift has been used as evidence of H-bond formation, and the magnitude of the shift has been used as an indicator of the H-bonding strength. In contrast to this normal behavior are the blue-shifting H-bonds, in which the H-X vibrational frequency increases upon complex formation. In the last decade, there has been active discussion regarding these blue-shifting H-bonds. Noble gases have been considered inert due to their limited reactivity with other elements. In the early 1930s, Pauling predicted the stable noble-gas compounds XeF6 and KrF6. It was not until three decades later, in 1962, that Neil Bartlett synthesized the first noble-gas compound, XePtF6. A renaissance of noble-gas chemistry began in 1995 with the discovery of noble-gas hydride molecules at the University of Helsinki. The first hydrides were HXeCl, HXeBr, HXeI, HKrCl, and HXeH.
These molecules have the general formula HNgY, where H is a hydrogen atom, Ng is a noble-gas atom (Ar, Kr, or Xe), and Y is an electronegative fragment. At present, this class of molecules comprises 23 members, including both inorganic and organic compounds. The first and only argon-containing neutral chemical compound, HArF, was synthesized in 2000, and its properties have since been investigated in a number of studies. A helium-containing chemical compound, HHeF, was predicted computationally, but its lifetime has been predicted to be severely limited by hydrogen tunneling. Helium and neon are the only elements in the periodic table that do not form neutral, ground-state molecules. A noble-gas matrix is a useful medium in which to study unstable and reactive species, including ions. A solvated proton forms a centrosymmetric NgHNg+ (Ng = Ar, Kr, and Xe) structure in a noble-gas matrix, and this is probably the simplest example of a solvated proton. Interestingly, the hypothetical NeHNe+ cation is isoelectronic with the water-solvated proton H5O2+ (Zundel ion). In addition to the NgHNg+ cations, the isoelectronic YHY- (Y = halogen atom or pseudohalogen fragment) anions have been studied with the matrix-isolation technique. These species have been known to exist in alkali metal salts (YHY)-M+ (M = alkali metal, e.g. K or Na) for more than 80 years. Hydrated HF forms the FHF- structure in aqueous solutions, and these ions participate in several important chemical processes. In this thesis, studies of the intermolecular interactions of HNgY molecules and centrosymmetric ions with various species are presented. The HNgY complexes show unusual spectral features, e.g. large blue-shifts of the H-Ng stretching vibration upon complexation. It is suggested that the blue-shift is a normal effect for these molecules, and that it originates from the enhanced (HNg)+Y- ion-pair character upon complexation.
It is also found that the HNgY molecules are energetically stabilized in the complexed form, and this effect is demonstrated computationally for the HHeF molecule. The NgHNg+ and YHY- ions also show blue-shifts in their asymmetric stretching vibration upon complexation with nitrogen. Additionally, the matrix-site structure and hindered rotation (libration) of the HNgY molecules were studied. Librational motion is a much-discussed solid-state phenomenon, and the HNgY molecules embedded in noble-gas matrices are good model systems for studying this effect. The formation mechanisms of the HNgY molecules and the decay mechanism of the NgHNg+ cations are discussed. A new electron tunneling model for the decay of NgHNg+ absorptions in noble-gas matrices is proposed. Studies of the NgHNg+∙∙∙N2 complexes support this electron tunneling mechanism.
Abstract:
In this thesis, both documentary and natural proxy data have been used to improve the accuracy of palaeoclimatic knowledge in Finland since the 18th century. Early meteorological observations from Turku (1748-1800) were analyzed first as a potential source of information on past climate variability. The reliability of the calculated mean temperatures was evaluated by comparing them with contemporary temperature records from Stockholm, St. Petersburg and Uppsala. The resulting monthly, seasonal and yearly mean temperatures from 1748 to 1800 were compared with the present-day mean values (1961-1990): the comparison suggests that the winters of the period 1749-1800 were 0.8 ºC colder than today, while the summers were 0.4 ºC warmer. Over the same period, springs were 0.9 ºC and autumns 0.1 ºC colder than today. Despite their uncertainties when compared with modern meteorological data, early temperature measurements offer direct and daily information about the weather for all months of the year, in contrast with other proxies. Secondly, early meteorological observations from Tornio (1737-1749) and Ylitornio (1792-1838) were used to study the temporal behaviour of the climate-tree growth relationship during the past three centuries in northern Finland. Analyses showed that the correlations between ring widths and mid-summer (July) temperatures did not vary significantly as a function of time. Early summer (June) and late summer (August) mean temperatures were secondary to mid-summer temperatures in controlling the radial growth. According to the dataset used, there was no clear signature of temporally reduced sensitivity of Scots pine ring widths to mid-summer temperatures over the periods of early and modern meteorological observations. Thirdly, plant phenological data together with tree-rings from south-west Finland since 1750 were examined as a palaeoclimate indicator.
The information from the fragmentary, partly overlapping, and partly nonsystematically biased plant phenological records of 14 different phenomena was combined into one continuous time series of phenological indices. The indices were found to be reliable indicators of the February to June temperature variations. In contrast, there was no correlation between the phenological indices and the precipitation data. Moreover, the correlations between the studied tree-rings and spring temperatures varied as a function of time and hence their use in palaeoclimate reconstruction is questionable. The use of present tree-ring datasets for palaeoclimate purposes may become possible after the application of more sophisticated calibration methods. Climate variability since the 18th century is perhaps best seen in the fourth paper, a multiproxy spring temperature reconstruction for south-west Finland. With the help of transfer functions, an attempt was made to utilize both documentary and natural proxies. The reconstruction was verified with statistics showing a high degree of agreement between the reconstructed and observed temperatures. According to the proxies and modern meteorological observations from Turku, springs have become warmer and have featured a warming trend since around the 1850s. Over the period from 1750 to around 1850, springs featured larger multidecadal low-frequency variability, as well as a smaller range of annual temperature variations. The coldest springtimes occurred around the 1840s and 1850s and in the first decade of the 19th century. Particularly warm periods occurred in the 1760s, 1790s, 1820s, 1930s, 1970s and from 1987 onwards, although cold springs also occurred in this period, such as those of 1994 and 1996. On the basis of the available material, long-term temperature changes have been related to changes in the atmospheric circulation, such as the North Atlantic Oscillation (February-June).
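A transfer function in this context is a statistical mapping calibrated between a proxy series and instrumental temperatures over their overlap period, then applied to the pre-instrumental part of the proxy. A minimal linear sketch with entirely synthetic numbers (not the thesis's data, proxies, or calibration details):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic standardised phenological index and instrumental spring
# temperatures over a hypothetical 50-year calibration period.
proxy_cal = rng.normal(0.0, 1.0, 50)
temp_cal = 3.0 + 1.5 * proxy_cal + rng.normal(0.0, 0.3, 50)  # true rule + noise

# Calibrate the linear transfer function: temp ~ a * proxy + b.
a, b = np.polyfit(proxy_cal, temp_cal, 1)

# Apply it to proxy values from the (hypothetical) pre-instrumental period.
proxy_old = np.array([-1.2, 0.4, 0.9, -0.3])
reconstruction = a * proxy_old + b
print(np.round(reconstruction, 2))
```

A real reconstruction would additionally verify the transfer function on withheld data and report verification statistics, as the thesis does.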
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference on the likelihood function alone, may be a fundamental question in theory; in practice, however, it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions in standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using a hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
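The monotonicity constraint in the non-parametric monotonic regression mentioned above can be illustrated with the classical pool-adjacent-violators algorithm (PAVA), which computes the least-squares non-decreasing fit to a sequence. The thesis formulates a Bayesian estimation procedure, so this is only a sketch of the monotonicity idea, not the thesis's estimator:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y
    (unit weights). Returns the fitted sequence."""
    # Each block holds [sum, count]; adjacent blocks are merged as long
    # as their means violate the non-decreasing constraint.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)  # each block contributes its mean
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # -> [1.0, 2.5, 2.5, 4.0]
```

The violating pair (3.0, 2.0) is pooled to its mean 2.5, the smallest change (in squared error) that restores monotonicity.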
Abstract:
In this thesis the use of the Bayesian approach to statistical inference in fisheries stock assessment is studied. The work was conducted in collaboration with the Finnish Game and Fisheries Research Institute, using the monitoring and prediction of the juvenile salmon population in the River Tornionjoki as an example application. The River Tornionjoki is the largest salmon river flowing into the Baltic Sea. This thesis tackles the issues of model formulation and model checking, as well as computational problems related to Bayesian modelling, in the context of fisheries stock assessment. Each article of the thesis provides a novel method either for extracting information from data obtained via a particular type of sampling system or for integrating the information about the fish stock from multiple sources in terms of a population dynamics model. Mark-recapture and removal sampling schemes and a random catch sampling method are covered for the estimation of the population size. In addition, a method for estimating the stock composition of a salmon catch based on DNA samples is presented. For most of the articles, Markov chain Monte Carlo (MCMC) simulation has been used as a tool to approximate the posterior distribution. Problems arising from the sampling methods are also briefly discussed and potential solutions are proposed. Special emphasis in the discussion is given to the philosophical foundation of the Bayesian approach in the context of fisheries stock assessment. It is argued that the role of the subjective prior knowledge needed in practically all parts of a Bayesian model should be recognized and consequently fully utilised in the process of model formulation.
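Of the sampling schemes mentioned, mark-recapture admits a simple closed-form point estimate of population size: the bias-corrected Lincoln-Petersen (Chapman) estimator. The thesis itself uses Bayesian models, so this frequentist sketch with invented numbers is only for orientation on what the data look like:

```python
def chapman_estimate(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size.

    marked:     fish marked and released in the first sample
    caught:     size of the second sample
    recaptured: marked fish observed in the second sample
    """
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical numbers for a smolt run:
# 500 fish marked, 400 caught later, 40 of them carrying marks.
print(round(chapman_estimate(500, 400, 40)))  # -> 4899
```

A Bayesian treatment would instead place a prior on the population size and capture probabilities and report a posterior distribution rather than a single number.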
Abstract:
This thesis studies homogeneous classes of complete metric spaces. Over the past few decades model theory has been extended to cover a variety of nonelementary frameworks. Shelah introduced the abstract elementary classes (AEC) in the 1980s as a common framework for the study of nonelementary classes. Another direction of extension has been the development of model theory for metric structures. This thesis takes a step towards combining these two by introducing an AEC-like setting for studying metric structures. To find a balance between generality and the possibility of developing stability-theoretic tools, we work in a homogeneous context, thus extending the usual compact approach. The homogeneous context enables the application of stability-theoretic tools developed in discrete homogeneous model theory. Using these we prove categoricity transfer theorems for homogeneous metric structures with respect to isometric isomorphisms. We also show how generalized isomorphisms can be added to the class, giving a model-theoretic approach to, e.g., Banach space isomorphisms or operator approximations. The novelty is the built-in treatment of these generalized isomorphisms, making, e.g., stability up to perturbation the natural stability notion. With respect to these generalized isomorphisms we develop a notion of independence. It behaves well already for structures which are omega-stable up to perturbation, and it coincides with the notion from classical homogeneous model theory over sufficiently saturated models. We also introduce a notion of isolation and prove dominance for it.
Abstract:
A model is an abstraction used in logic for many kinds of mathematical objects. For example, graphs, groups and metric spaces are models. Finite model theory is a branch of logic that studies the expressive power of logics, i.e. formal languages, on models whose domain contains finitely many elements. Restricting attention to finite models makes the results applicable in theoretical computer science, from whose perspective formulas of logic can be thought of as programs and finite models as their inputs. Locality means the inability of a logic to distinguish between models whose local features correspond to each other. The thesis examines several forms of locality and their preservation when logics are combined. Using the tools developed, it is shown that between the variants known as Gaifman locality and Hanf locality there is a hierarchy of locality notions, whose levels can be separated from one another in lattices of increasing dimension. On the other hand, it is shown that these locality notions do not differ when attention is restricted to finite trees. Order-invariant logics are languages equipped with a built-in order relation, which must, however, be used in such a way that what the formulas express does not depend on the chosen order. The definition can be motivated from the perspective of computing: even if the order of the data in a program's input is irrelevant to the expected result, the input always sits in the computer's memory in some order, which the program can exploit in its computation. The thesis investigates which forms of locality order-invariant extensions of first-order predicate logic with unary quantifiers can satisfy. The results are applied by examining when a built-in order increases the expressive power of a logic on finite trees.
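The locality notions discussed above can be made precise. As a standard reference point (a textbook formulation for unary queries, not necessarily the exact variants compared in the thesis), Gaifman locality says that the answer of a query at an element depends only on the isomorphism type of its radius-$r$ neighbourhood in the Gaifman graph:

```latex
% N_r^{\mathfrak{A}}(a): the substructure of \mathfrak{A} induced by the
% elements at Gaifman-graph distance at most r from a.
Q \text{ is Gaifman-local} \iff
\exists r \; \forall \mathfrak{A} \; \forall a, b \in \mathfrak{A} :
\bigl(N_r^{\mathfrak{A}}(a), a\bigr) \cong \bigl(N_r^{\mathfrak{A}}(b), b\bigr)
\;\Longrightarrow\;
\bigl(a \in Q(\mathfrak{A}) \Leftrightarrow b \in Q(\mathfrak{A})\bigr)
```

Hanf locality, by contrast, compares two structures by counting how many elements realize each neighbourhood type; the thesis places a hierarchy of notions between these two.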
Abstract:
The research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AEC), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of equal generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.
Abstract:
We consider an obstacle scattering problem for linear Beltrami fields. A vector field is a linear Beltrami field if its curl is a constant times the field itself. We study obstacles of Neumann type, that is, obstacles on whose boundary the normal component of the total field vanishes. We prove the unique solvability of the corresponding exterior boundary value problem, in other words, of the direct obstacle scattering model. For the inverse obstacle scattering problem, we derive the formulas needed to apply the singular sources method. Numerical examples are computed for both the direct and the inverse scattering problem.
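For reference, the defining equations can be written out; the notation here is generic rather than taken from the thesis. A linear Beltrami field satisfies a curl eigenvalue equation, and the Neumann obstacle condition constrains the total field on the obstacle boundary. The ABC flow is a classical explicit example of a Beltrami field (with constant 1), as direct differentiation confirms:

```latex
% Linear Beltrami field (\lambda a nonzero constant) and the Neumann
% condition on the obstacle boundary \partial D:
\nabla \times \mathbf{u} = \lambda\, \mathbf{u},
\qquad \mathbf{u} \cdot \mathbf{n} = 0 \ \text{on } \partial D.
% Classical explicit example with \lambda = 1 (the ABC flow):
\mathbf{u}(x,y,z) =
\begin{pmatrix}
A \sin z + C \cos y \\
B \sin x + A \cos z \\
C \sin y + B \cos x
\end{pmatrix},
\qquad \nabla \times \mathbf{u} = \mathbf{u}.
```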
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models, such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients.
The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are demonstrated in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
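The common structure that MEM-type models (GARCH, ACD, CARR) share is a positive variable written as a conditional mean times a positive, unit-mean error. A minimal simulation sketch of a first-order MEM with exponential errors (as in a basic ACD/CARR) and invented parameters:

```python
import random

def simulate_mem(n, omega=0.1, alpha=0.2, beta=0.7, seed=1):
    """Simulate x_t = mu_t * eps_t with
    mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}
    and unit-mean exponential errors eps_t."""
    random.seed(seed)
    mu = omega / (1.0 - alpha - beta)  # start at the unconditional mean
    x_prev = mu
    out = []
    for _ in range(n):
        mu = omega + alpha * x_prev + beta * mu  # conditional mean recursion
        x_prev = mu * random.expovariate(1.0)    # multiplicative error
        out.append(x_prev)
    return out

xs = simulate_mem(5)
print([round(x, 3) for x in xs])
```

With alpha + beta < 1 the process is mean-reverting around omega / (1 - alpha - beta); the thesis's mixture (MCARR) and inverse-gamma variants replace the error distribution, not this multiplicative skeleton.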
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
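For the two-category multinomial (Bernoulli) model the NML normalizing term has a closed combinatorial form, which makes a direct, if naive, O(n) computation easy. This sketch illustrates the quantity being computed; it is not one of the thesis's efficient algorithms:

```python
from math import comb, log2

def bernoulli_complexity(n):
    """Parametric complexity of the Bernoulli model class for sample size n:
    C_n = sum_k C(n, k) (k/n)^k ((n-k)/n)^(n-k), with 0^0 taken as 1
    (which Python's ** already satisfies)."""
    total = 0.0
    for k in range(n + 1):
        p = k / n
        total += comb(n, k) * p**k * (1 - p)**(n - k)
    return total

def nml_code_length(k, n):
    """NML (stochastic complexity) code length in bits for a binary
    sequence of length n containing k ones."""
    p = k / n
    max_lik = p**k * (1 - p)**(n - k)  # likelihood at the ML parameter
    return -log2(max_lik) + log2(bernoulli_complexity(n))

print(round(nml_code_length(3, 10), 3))
```

The log of the complexity term is the worst-case regret of the NML code; the thesis's contribution is computing such normalizers efficiently for much richer model classes.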
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas the genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point-mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point-mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point-mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability.
BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or merely due to chance. The similarity of two sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
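The best local alignment score from which the p-value is computed is the classical Smith-Waterman score. A minimal dynamic-programming sketch with a linear gap penalty and invented scoring parameters (real homology searches use substitution matrices and affine gaps):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score of strings a and b
    (linear gap penalty; score only, no traceback)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                       # restart: local alignment
                          H[i - 1][j - 1] + s,     # (mis)match
                          H[i - 1][j] + gap,       # gap in b
                          H[i][j - 1] + gap)       # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))
```

Significance estimation then asks how often two random sequences from the null model reach a score at least this high; the thesis bounds that probability without sampling.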
Abstract:
Testing in agile methods is poorly defined in the literature, and companies implement quality and testing practices in varying ways. The aim of this thesis was to find a model for organising testing in agile methods. The goal was approached by collecting experiences, alternatives and models from the literature. The findings were compared with the practical solutions and views of software companies, obtained through a survey conducted in two software companies using the Scrum process model. The literature review showed that a quality plan and a testing strategy can be used to identify the testing methods needed in each context. The methods are best examined and planned in terms of the time horizons of iterative processes (heartbeat, iteration, release and strategic). The main finding of the study was that the companies lacked a broader, more systematic view of developing testing and quality. The need for new quality and testing measures was not analysed systematically, the use of existing ones was not developed in the long term, and the companies had no overall picture of the interrelationships of the measures required. The study also showed that the teams were unable to take responsibility for quality, because too few quality-related activities are carried out within the iterations. There was also room for improvement in following the Scrum process model. After the problems had been identified, however, the companies showed both willingness and ability to improve their practices. ACM Computing Classification System (CCS 1998): D.2.5 Testing and Debugging, D.2.9 Management, K.6.1 Project and People Management, K.6.3 Software Management
Abstract:
In Finland, criminal investigations are, as a rule, led by the police, except for offences suspected to have been committed by police officers, in which the investigation is led by a prosecutor. A police-led criminal investigation is not, internationally, the most typical way of organising the division of powers between the police and the prosecution service. The prosecutor's task is to realise criminal liability while taking into account the legal protection of the parties. The prosecutor is thus ultimately responsible for ensuring that a criminal case has been properly investigated. In Finland the prosecutor bears this responsibility but does not have fully corresponding power to decide on the conduct and direction of the criminal investigation, because leading the investigation has not been assigned to the prosecutor by law. The thesis considers whether the prosecutor should act as head of investigation in Finland also in cases other than so-called police offence cases. To provide background, the thesis describes the responsibilities and tasks of both the head of investigation and the prosecutor in criminal investigations under the current legislation. The legislative history is described in order to outline how the current division of powers came about. An answer to the research question is sought by presenting three models of how the division of powers between the police and the prosecutor can be organised. The first model is the arrangement based on the Criminal Investigation Act currently in force, in which the prosecutor participates in the investigation through so-called investigative cooperation. The second model is the division of responsibility under Swedish law, in which the prosecutor, except in simple criminal cases, is responsible for leading the investigation. The third model is the arrangement proposed in Finland by the committee for the development of the judiciary, which, in terms of the scope of the prosecutor's powers, falls between the previous two. To resolve the research question, the thesis also presents the discussion in legal literature on the prosecutor's position in criminal investigations. In addition, interviews with five prosecutors, expressing opinions on prosecutors acting as heads of investigation, are used as material. 
Conclusions on the research question are drawn by weighing the arguments for and against the prosecutor acting as head of investigation.