Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite equals the sum of the fluxes that consume the same molecule. The steady state thus imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, in which a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. The labels are then transferred by biochemical reactions to other metabolites. The relative abundances of the different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them, and thus contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes using measured constraints on the relative abundances of the different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis.

There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of the different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of the different labelling patterns of metabolites. This avoids mathematically involved non-linear optimization methods that can get stuck in local optima. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information, and the optimization framework can more easily be applied regardless of the labelling measurement technology and with all network topologies.

In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of the external nutrients, and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites; the propagation is facilitated by a flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects the sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools for processing raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
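To make the balance constraints concrete, the following minimal sketch (Python with NumPy/SciPy; the four-reaction toy network is a hypothetical illustration, not one from the thesis) builds the stoichiometric matrix of a small network and inspects the null space of the steady-state system S v = 0:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical toy network:  A --v1--> B,  B --v2--> C,  B --v3--> C,
# C --v4--> D, where v2 and v3 are alternative pathways from B to C.
# Rows = internal metabolites (B, C); columns = fluxes (v1..v4).
S = np.array([
    [1, -1, -1,  0],   # balance of B: v1 = v2 + v3
    [0,  1,  1, -1],   # balance of C: v2 + v3 = v4
])

# Steady state: S v = 0. The null space spans every flux distribution
# consistent with the balance constraints alone.
N = null_space(S)
print(N.shape[1])   # 2 degrees of freedom: overall throughput and the
                    # v2/v3 split remain unresolved by the balances alone
```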
Abstract:
This thesis presents methods for locating and analyzing cis-regulatory DNA elements involved in the regulation of gene expression in multicellular organisms. The regulation of gene expression is carried out by the combined effort of several transcription factor proteins collectively binding the DNA at the cis-regulatory elements. Only sparse knowledge of the 'genetic code' of these elements exists today. An automatic tool for the discovery of putative cis-regulatory elements could aid their experimental analysis, which would result in a more detailed view of cis-regulatory element structure and function. We have developed a computational model for the evolutionary conservation of cis-regulatory elements. The elements are modeled as evolutionarily conserved clusters of sequence-specific transcription factor binding sites. We give an efficient dynamic programming algorithm that locates the putative cis-regulatory elements and scores them according to the conservation model. A notable proportion of the high-scoring DNA sequences show transcriptional enhancer activity in transgenic mouse embryos. The conservation model includes four parameters whose optimal values are estimated with simulated annealing. With good parameter values the model discriminates well between DNA sequences with evolutionarily conserved cis-regulatory elements and DNA sequences that have evolved neutrally. On further inquiry, the set of highest-scoring putative cis-regulatory elements was found to be sensitive to small variations in the parameter values. The statistical significance of the putative cis-regulatory elements is estimated with the Two Component Extreme Value Distribution. The p-values grade the conservation of the cis-regulatory elements above the neutral expectation. The parameter values for the distribution are estimated by simulating neutral DNA evolution. The conservation of the transcription factor binding sites can be used in the upstream analysis of regulatory interactions. This approach may provide mechanistic insight into transcription-level data from, e.g., microarray experiments. Here we give a method to predict shared transcriptional regulators for a set of co-expressed genes. The EEL (Enhancer Element Locator) software implements the method for locating putative cis-regulatory elements. The software supports both interactive use and distributed batch processing. We have used it to analyze the non-coding regions around all human genes with respect to the orthologous regions in various other species, including mouse. The data from these genome-wide analyses are stored in a relational database which is used in the publicly available web services for upstream analysis and visualization of the putative cis-regulatory elements in the human genome.
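The thesis's scoring algorithm is not reproduced here, but the following toy sketch (Python; the function name and scoring scheme are hypothetical) illustrates the general idea of chaining binding-site hits into a best-scoring cluster under a gap penalty via dynamic programming:

```python
# Toy chaining DP: given binding-site hits (position, score) on a sequence,
# find the best-scoring chain whose consecutive sites lie within max_gap.
def best_site_cluster(hits, max_gap=200, gap_penalty=0.01):
    hits = sorted(hits)                       # sort by position
    best = [score for _, score in hits]       # best chain ending at each hit
    for i, (pos_i, score_i) in enumerate(hits):
        for j in range(i):
            gap = pos_i - hits[j][0]
            if 0 < gap <= max_gap:
                best[i] = max(best[i], best[j] + score_i - gap_penalty * gap)
    return max(best) if best else 0.0

# Two nearby sites chain together (2.0 + 1.5 - 0.5 = 3.0); the distant
# third site cannot join the cluster.
print(best_site_cluster([(10, 2.0), (60, 1.5), (900, 3.0)]))
```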
Abstract:
The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are the computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which the suitable computations are learned from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process.

This thesis consists of an introduction and seven peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the introduction we briefly overview the primary challenges of visual processing, and recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations, and conclusions that were not featured in the original publications.

We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast are processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response-energy optimization. Then, we show that attempting to extract independent components of the nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable to priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves.

In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
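As a hedged illustration of learning receptive fields by maximization of independence, the following sketch (Python with scikit-learn; the random `images` array is a stand-in for real natural images) runs FastICA on flattened image patches. On genuine natural images the learned components tend to be localized, oriented, simple-cell-like filters; the specific nonlinear contrast-domain variant studied in the thesis is not reproduced here:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Stand-in data so the sketch runs end to end; replace with real grayscale
# natural images to obtain meaningful filters.
images = rng.standard_normal((10, 256, 256))

def sample_patches(images, n=5000, size=16):
    """Sample n random size x size patches and flatten them to row vectors."""
    patches = []
    for _ in range(n):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - size)
        x = rng.integers(img.shape[1] - size)
        patches.append(img[y:y + size, x:x + size].ravel())
    return np.array(patches)

X = sample_patches(images)
X -= X.mean(axis=0)   # center the patch data

# Maximize independence of linear components; on natural images the rows
# of ica.components_ resemble simple-cell-like receptive fields.
ica = FastICA(n_components=64, whiten="unit-variance", max_iter=500)
ica.fit(X)
filters = ica.components_.reshape(64, 16, 16)
```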
Abstract:
Free and Open Source Software (FOSS) has gained increasing interest in the computer software industry, but assessing its quality remains a challenge. FOSS development is frequently carried out by globally distributed development teams, and all stages of development are publicly visible. Several product- and process-level quality factors can be measured using the public data. This thesis presents a theoretical background for software quality and metrics and their application in a FOSS environment. The information available from FOSS projects in three information spaces is presented, and a quality model suitable for use in a FOSS context is constructed. The model includes both process and product quality metrics, and takes into account the tools and working methods commonly used in FOSS projects. A subset of the constructed quality model is applied to three FOSS projects, highlighting both theoretical and practical concerns in implementing automatic metric collection and analysis. The experiment shows that useful quality information can be extracted from the vast amount of data available. In particular, the projects vary in their growth rate, complexity, modularity, and team structure.
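As a minimal sketch of automatic metric collection from public FOSS data (Python; the metric choice and function name are illustrative, not part of the thesis's quality model), one can, for example, chart project activity by counting commits per month from a local clone:

```python
import subprocess
from collections import Counter
from datetime import datetime

# One of the simplest process-level metrics obtainable from public FOSS
# data: commit activity per month, computed from a clone's git history.
def commits_per_month(repo_path):
    timestamps = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%at"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    months = Counter(
        datetime.fromtimestamp(int(ts)).strftime("%Y-%m") for ts in timestamps
    )
    return dict(sorted(months.items()))

# Example (hypothetical path):
# print(commits_per_month("/path/to/foss/project"))
```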
Abstract:
A cross-cutting concern is a requirement on the implementation of a computer program that cannot be implemented in a program unit of its own in the programming language used; instead, its implementation is scattered across several program units. Aspect-oriented programming is a new programming paradigm in which a cross-cutting concern can be implemented in a program unit of its own, an aspect. An aspect encapsulates the implementation of the concern by means of an advice and a pointcut. The advice contains the code that implements the concern, and the pointcut selects the join points of the program to which that code is attached. Current aspect languages select join points mainly by their syntactic properties, such as name and location. Pointcuts bound to syntax are fragile, because changes made to the program can break syntax-dependent pointcuts even when the pointcuts themselves are not modified. This is known as the fragile pointcut problem. The problem is significant because fragile pointcuts hamper the evolvability and maintainability of a program. This thesis examines the fragile pointcut problem and the solutions proposed for it, and shows that no proper solution to the problem currently exists.
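Aspect languages such as AspectJ express pointcuts natively; the following Python sketch (class and function names are hypothetical) merely imitates a name-based pointcut to show how it silently breaks when a method is renamed, which is the essence of the fragile pointcut problem:

```python
import fnmatch

# A toy "aspect": the pointcut selects join points (here: methods) by a
# name pattern, and the advice runs before every selected method call.
def apply_aspect(cls, pointcut_pattern, advice):
    for name in list(vars(cls)):
        attr = getattr(cls, name)
        if callable(attr) and fnmatch.fnmatch(name, pointcut_pattern):
            def wrapped(self, *args, _orig=attr, **kwargs):
                advice()
                return _orig(self, *args, **kwargs)
            setattr(cls, name, wrapped)

class Account:
    def set_balance(self, value):
        self.balance = value

# Pointcut "set_*" matches set_balance, so the advice fires on every call.
apply_aspect(Account, "set_*", lambda: print("logging: state change"))
Account().set_balance(100)   # prints "logging: state change"

# If set_balance is later renamed to update_balance, the pattern silently
# stops matching: the pointcut breaks although the aspect itself is unchanged.
```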
Abstract:
Testing in agile methods is poorly defined in the literature, and companies implement quality and testing practices in varying ways. The goal of this thesis was to find a model for organizing testing in agile methods. The goal was approached by collecting experiences, alternatives, and models from the literature. The findings were compared with the practical solutions and views of software companies, obtained by conducting a survey in two software companies using the Scrum process model. The literature review showed that a quality plan and a testing strategy can be used to identify the testing methods needed in each context. The methods are best examined and planned in terms of the time horizons of iterative processes (heartbeat, iteration, release, and strategic). The main finding of the study was that the companies lacked a broader, systematic view of developing testing and quality. The need for new quality and testing measures was not analysed systematically, the use of existing ones was not developed over the long term, and the companies had no overall picture of the interrelationships of the measures needed. The study also showed that the teams were unable to take responsibility for quality, because too few quality-related measures are carried out within the iterations. There was also room for improvement in adherence to the Scrum process model. Nevertheless, the companies showed the will and ability to improve their practices once the problems had been identified. ACM Computing Classification System (CCS 1998): D.2.5 Testing and Debugging, D.2.9 Management, K.6.1 Project and People Management, K.6.3 Software Management
Abstract:
Place identification is the methodology of automatically detecting spatial regions or places that are meaningful to a user by analysing her location traces. Following this approach, several algorithms have been proposed in the literature. Most of the algorithms perform well on a particular data set with a suitable choice of parameter values. However, tuneable parameters make it difficult for an algorithm to generalise to data sets collected from different geographical locations, from different periods of time, or containing different activities. This thesis compares the generalisation performance of our proposed DPCluster algorithm and six state-of-the-art place identification algorithms on twelve location data sets collected using the Global Positioning System (GPS). The spatial and temporal variations present in the data help us to identify strengths and weaknesses of the place identification algorithms under study. We begin by discussing the notion of a place and its importance in location-aware computing. Next, we discuss the different phases of the place identification process found in the literature, followed by a thorough description of the seven algorithms. After that, we define evaluation metrics, compare the generalisation performance of the individual place identification algorithms, and report the results. The results indicate that the DPCluster algorithm is superior to all the other algorithms in terms of generalisation performance.
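The DPCluster algorithm itself is not reproduced here; as a hedged illustration of the simplest kind of place identification, the following Python sketch extracts "stay points" from a GPS trace, a point-based technique in the spirit of the algorithms compared in the thesis (the thresholds are illustrative, and tuning them is exactly the generalisation problem discussed above):

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

# trace: list of (timestamp_s, lat, lon). A "stay point" is the centroid of
# a run of fixes that stays within dist_m of its first fix for >= time_s.
def stay_points(trace, dist_m=100, time_s=600):
    places, i = [], 0
    while i < len(trace):
        j = i + 1
        while j < len(trace) and haversine_m(trace[i][1:], trace[j][1:]) <= dist_m:
            j += 1
        if trace[j - 1][0] - trace[i][0] >= time_s:
            pts = trace[i:j]
            places.append((sum(p[1] for p in pts) / len(pts),
                           sum(p[2] for p in pts) / len(pts)))
        i = j
    return places
```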
Abstract:
This thesis presents the description of an idea's context as a means of making the transmission of ideas more effective. Context information is described as metadata associated with documents and managed in metadata repositories independent of the documents themselves. The goal is a description of an idea's context that is sufficiently expressive, yet whose creation does not impose an unreasonable workload on the users of the system. The transmission of information is viewed as a process, on the basis of which the context of an idea is divided into a production context, a publication context, and a use context. Based on this division, the formation and content of the metadata are treated in detail, down to the level of the individual attributes of a metadata record. Among the uses of context information, the thesis examines the visualization of context based on information visualization techniques, the measurement of an idea's value by developing bibliometric methods, and the automatic selection of ideas by means of information filtering methods and digital assistants.
Abstract:
A real-time data warehouse is a centralized database system for soft real-time business intelligence applications. The basic requirement of these applications is the continuous availability of fresh data. This thesis discusses the design of a real-time data warehouse, the different phases of its continuous maintenance, and the methods suitable for these phases. The aim is to highlight the trade-offs that must inevitably be made between the query performance, latency, and continuous availability of the warehouse. As a conclusion, the smaller the latency targets, the greater the caution that is recommended. Real-time capability in business intelligence applications is a feature that users usually want more than they need. In some cases it is outright harmful. But if real-time data is essential, there are many concurrent users, and the required data must be combined from several source systems, there is no viable alternative to real-time data warehousing. Even then, it suffices to continuously maintain only a small part of the entire warehouse.
Abstract:
Traditional information retrieval methods do not always capture the semantic level of texts well enough. The purpose of semantic information retrieval, the topic of this thesis, is to gain better access to the meanings that words express. This is done by exploiting semantic metadata embedded in the text itself or in its presentation and storage structures. The thesis examines more closely semantic retrieval methods belonging to two groups: one group consists of systems that exploit the properties of XML text documents, the other of systems based on the possibilities of the Semantic Web. In addition, the thesis sketches an ideal semantic information retrieval system against which the examined systems are compared. The comparison shows that almost all features of the ideal retrieval system are realized in some form, although never all in a single system at once. The XML-based SphereSearch search engine proves to have the most versatile semantic search capabilities; for example, it allows concept searches and can assemble answer elements into units that cross document boundaries. On the other hand, all the examined systems follow the basic principle of semantic retrieval: to capture the sought semantic content, it is not enough merely to find local occurrences of the search terms in the target material. Most typically the principle is implemented by also taking into account, when assessing the relevance of an information unit (an XML element or an instance node of a Semantic Web ontology), the content of the units structurally connected to it and the quality of these connections.
Abstract:
With the proliferation of wireless and mobile devices equipped with multiple radio interfaces for connecting to the Internet, vertical handoffs involving different wireless access technologies enable users to get the best connectivity and service quality during the lifetime of a TCP connection. A vertical handoff may introduce an abrupt, significant change in the access link characteristics, and as a result the end-to-end path characteristics, such as the bandwidth and the round-trip time (RTT) of a TCP connection, may change considerably. TCP may take several RTTs to adapt to these changes in path characteristics, and during this interval there may be packet losses and/or inefficient utilization of the available bandwidth. In this thesis we study the behaviour and performance of TCP in the presence of a vertical handoff. We identify the different handoff scenarios that adversely affect TCP performance. We propose several enhancements to the TCP sender algorithm, specific to the different handoff scenarios, that adapt TCP better to a vertical handoff. Our algorithms are conservative in nature and make use of cross-layer information obtained from the lower layers regarding the characteristics of the access links involved in a handoff. We evaluate the proposed algorithms by extensive simulation of the various handoff scenarios involving access links with a wide range of bandwidths and delays. We show that the proposed algorithms are effective in improving TCP behaviour in the various handoff scenarios and do not adversely affect the performance of TCP in the absence of cross-layer information.
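The thesis's sender-side enhancements are not reproduced here; the following Python sketch illustrates the general idea of a conservative, cross-layer-informed adaptation: on a handoff notification, the congestion control state is re-seeded from the advertised bandwidth and RTT of the new access link (all names and parameter values are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class TcpState:
    cwnd: int = 10          # congestion window, in packets
    ssthresh: int = 64      # slow start threshold, in packets
    srtt: float = 0.2       # smoothed RTT estimate, seconds
    rttvar: float = 0.05    # RTT variance estimate, seconds
    rto: float = 1.0        # retransmission timeout, seconds

def on_vertical_handoff(tcp, new_bw_bps, new_rtt_s, mss=1460):
    """Conservatively re-seed congestion control from cross-layer hints
    (bandwidth and RTT of the new access link)."""
    bdp = max(1, int(new_bw_bps * new_rtt_s / (8 * mss)))  # pipe size, packets
    tcp.ssthresh = bdp                   # let slow start probe up to the pipe
    tcp.cwnd = min(tcp.cwnd, bdp)        # never burst beyond the new BDP
    tcp.srtt, tcp.rttvar = new_rtt_s, new_rtt_s / 2
    tcp.rto = tcp.srtt + 4 * tcp.rttvar  # RFC 6298-style timeout from the hint

tcp = TcpState()
on_vertical_handoff(tcp, new_bw_bps=2_000_000, new_rtt_s=0.3)  # e.g. WLAN -> 3G
print(tcp)
```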
Abstract:
In most non-mammalian vertebrates, such as fish and reptiles, teeth are replaced continuously. However, tooth replacement in most mammals, including human, takes place only once, and further renewal is apparently inhibited. It is not known how tooth replacement is genetically regulated, and little is known about the physiological mechanism and evolutionary reduction of tooth replacement in mammals. In this study I have attempted to address these questions.

In cleidocranial dysplasia, a rare human condition caused by a mutation in the Runt domain transcription factor Runx2, tooth replacement continues. Runx2 mutant mice were used to investigate the molecular mechanisms of Runx2 function. Microarray analysis of dissected embryonic day 14 Runx2 mutant and wild-type dental mesenchymes revealed many downstream targets of Runx2, which were validated using in situ hybridization and tissue culture methods. The Wnt signaling inhibitor Dkk1 was identified as a candidate target, and in tissue culture conditions it was shown that Dkk1 is induced by FGF4 and that this induction is Runx2 dependent. These experiments demonstrated a connection between Runx2, FGF, and Wnt signaling in tooth development and possibly also in tooth replacement.

The role of Wnt signaling in tooth replacement was further investigated using a transgenic mouse model in which the Wnt signaling mediator β-catenin is continuously stabilized in the dental epithelium. This stabilization led to activated Wnt signaling and to the formation of multiple enamel knots. In vitro and transplantation experiments were performed to examine the process of extra tooth formation. We showed that new teeth were continuously generated and that new teeth form from pre-existing teeth. A morphodynamic activator-inhibitor model was used to simulate enamel knot formation. By increasing the intrinsic production rate of the activator (β-catenin), the multiple enamel knot phenotype was reproduced in computer simulations. It was thus concluded that β-catenin acts as an upstream activator of enamel knots, closely linking Wnt signaling to the regulation of tooth renewal.

As mice do not normally replace teeth, we used other model animals to investigate the physiological and genetic mechanisms of tooth replacement. Sorex araneus, the common shrew, was earlier reported to have non-functional tooth replacement in all antemolar tooth positions. We showed by histological and gene expression studies that there is tooth replacement in only one position, the premolar 4, and that the deciduous tooth is diminished in size and disappears during embryogenesis without becoming functional. The growth rates of the deciduous and permanent premolar 4 were measured, and competence inference showed that the early initiation of the replacement tooth relative to the developmental stage of the deciduous tooth led to the inhibition of deciduous tooth morphogenesis. It was concluded that the evolutionary loss of deciduous teeth may involve the early activation of replacement teeth, which in turn suppress their predecessors.

Mustela putorius furo, the ferret, has a dentition that resembles that of the human, as ferrets have teeth belonging to all four tooth families, and all the antemolar teeth are replaced once. To investigate the replacement mechanism, histological serial sections from different embryonic stages were analyzed. Tooth replacement was observed to be a process involving the growth and detachment of the dental lamina from the lingual cervical loop of the deciduous tooth. Detachment of the deciduous tooth leads to a free successional dental lamina, which grows deeper into the mesenchyme and later buds the replacement tooth. A careful 3D analysis of serial histological sections showed that replacement teeth are initiated from the successional dental lamina and not from the epithelium of the deciduous tooth. The molecular regulation of tooth replacement was studied by examining the expression patterns of candidate regulatory genes: the BMP/Wnt inhibitor Sostdc1 was strongly expressed in the buccal aspect of the dental lamina and in the intersection between the detaching deciduous tooth and the successional dental lamina, suggesting a role for Sostdc1 in the detachment process. Shh was expressed in the enamel knot and in the inner enamel epithelium in both generations of teeth, supporting the view that the morphogenesis of both tooth generations is regulated by similar mechanisms.

In summary, histological and molecular studies on different model animals and transgenic mouse models were used to investigate tooth replacement. This thesis work has significantly contributed to the knowledge of the physiological mechanisms and molecular regulation of tooth replacement and its evolutionary suppression in mammals.
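The published morphodynamic model is not reproduced here; the following Python sketch implements a generic 1-D Gierer-Meinhardt-type activator-inhibitor system with illustrative parameters, so the reader can explore qualitatively how changing the intrinsic activator production rate (the β-catenin analogue in the simulations described above) affects the number of activation peaks:

```python
import numpy as np

# Generic 1-D activator-inhibitor (Gierer-Meinhardt-type) system.
# rho_a is the intrinsic production rate of the activator; all other
# parameters are illustrative choices, not those of the published model.
def simulate(rho_a, n=200, steps=40000, dt=0.005, da=0.5, dh=20.0):
    rng = np.random.default_rng(1)
    a = 1.0 + 0.01 * rng.standard_normal(n)    # activator field
    h = np.ones(n)                             # inhibitor field
    for _ in range(steps):
        lap_a = np.roll(a, 1) - 2 * a + np.roll(a, -1)   # periodic Laplacian
        lap_h = np.roll(h, 1) - 2 * h + np.roll(h, -1)
        a = a + dt * (rho_a + a * a / h - a + da * lap_a)
        h = h + dt * (a * a - h + dh * lap_h)
        np.clip(a, 1e-6, 1e6, out=a)           # numerical safety guards
        np.clip(h, 1e-6, 1e6, out=h)
    return a

def count_peaks(a):
    # Local maxima that also rise above the mean activator level.
    return int(np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean())))

# Compare the pattern for a low and a high intrinsic activator production rate.
for rho_a in (0.0, 0.5):
    print(rho_a, count_peaks(simulate(rho_a)))
```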