Abstract:
This thesis examines the possibilities of applying cooperative learning to the teaching of the advanced mathematics syllabus in Finnish upper secondary school. A cooperative teaching package has been designed for the upper secondary school advanced mathematics course Polynomifunktiot (Polynomial Functions). The first theoretical part of the thesis reviews the principles of cooperative learning and presents four cooperative learning methods: the structural approach, team learning in groups, team-assisted individualization, and the jigsaw method. The theoretical part also highlights the factors a teacher must take into account when applying the presented methods to his or her own teaching and when designing sets of tasks. Finally, earlier studies on applying cooperative learning to mathematics teaching are reviewed. The second theoretical part of the thesis presents the history of classical algebra together with definitions, theorems and their proofs, and examples related to polynomial functions. The teaching package consists of four sets of tasks: solving an inequality, a review of polynomial arithmetic, the second-degree polynomial function and equation, and the investigation of higher-degree polynomial functions. The package also contains a general example of the structure of a cooperative lesson. At the end of the package there is a report on a trial carried out in one upper secondary school and on its results. On the basis of earlier research and the trial conducted as part of this thesis, cooperative learning can be said to be well suited to mathematics teaching in upper secondary school.
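To make the task types above concrete, a task set on second-degree polynomials might, for instance, build on a worked inequality of the following kind (the particular inequality is an illustrative choice, not one quoted from the teaching package):

```latex
x^2 - x - 6 > 0
\;\Longleftrightarrow\; (x - 3)(x + 2) > 0
\;\Longleftrightarrow\; x < -2 \ \text{or} \ x > 3
```

since the parabola opens upwards and has zeros at $x = -2$ and $x = 3$, the polynomial is positive exactly outside the interval $[-2, 3]$.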
Abstract:
There are many ways to develop mathematics teaching at the university level. The goal is to improve students' learning experiences so that they learn better. Assessment has been found to have a significant effect on learning. Assessment is usually based on how well the student performs in examinations. These examinations, however, involve several problems: they often consist of only a few problems and therefore do not cover the whole syllabus. Moreover, the traditional examination situation is far removed from the environment in which the learned skills are meant to be used. This work studied the reform of the assessment practices of the course Diskreetin matematiikan perusteet (Fundamentals of Discrete Mathematics, DMP) at the Aalto University School of Science and Technology. The course was implemented partly online, following a blended learning model. The assessment emphasised continuous work on exercises, and most of these exercises were implemented as computer-aided online exercises. The STACK system, which enables automatic checking, was used. The work was divided into two parts: authoring the STACK exercises used in the assessment, and an empirical part studying the success of the course. The study focused, on the one hand, on how the assessment method worked and, on the other hand, on how the students experienced it. A total of 67 STACK exercises were implemented for the course, of which 46 were used during the course. In addition, the course had 26 traditional written exercises. The effectiveness of the assessment method was studied by comparing the results of the course with those of the 2008 and 2009 DMP courses. The comparison showed that in 2010 the students had solved clearly more exercises than in the previous years. The percentage of failing grades (grade 0) among all given grades also decreased. A course experience questionnaire was prepared to study the students' experiences. The questionnaire presented statements concerning the quality of the STACK exercises, the clarity of goals and requirements, the appropriateness of the assessment, the appropriateness of the workload, student engagement, the practical arrangements, and blended learning. The results were very positive. All in all, the pilot course went well: the assessment method worked and the students were satisfied. Based on the comparison and the questionnaire, the most important targets for development were the automatic feedback of the STACK exercises, the scoring of the traditional exercises, and the division of exercises between traditional exercises and STACK exercises.
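STACK questions themselves are authored in the Maxima language; the following Python/SymPy sketch is only meant to illustrate the underlying idea of a randomised, automatically assessed exercise (all names and the grading rule are illustrative; compare STACK's algebraic-equivalence answer test):

```python
# A Python/SymPy sketch of a randomised, automatically assessed exercise.
# STACK itself authors questions in Maxima; this only mirrors the idea.
import random
import sympy as sp

x = sp.Symbol("x")

def make_instance(seed):
    """Generate one randomised 'expand the product' exercise variant."""
    rng = random.Random(seed)
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    question = (x - a) * (x + b)
    return question, sp.expand(question)

def grade(student_input, model_answer):
    """Accept any answer algebraically equivalent to the model answer
    (cf. STACK's AlgEquiv answer test)."""
    return sp.simplify(sp.sympify(student_input) - model_answer) == 0

question, answer = make_instance(seed=42)
print("Expand:", question)
print(grade(str(answer), answer))   # True: an equivalent form is accepted
print(grade("x**2 + 1", answer))    # False: a wrong answer is rejected
```

The key property, shared with STACK, is that any algebraically equivalent form of the answer is accepted, which a fixed string comparison could not do.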
Abstract:
Descriptive complexity theory studies questions of computational complexity using the tools of logic. The setting considered is one where finite models serve as the inputs of computation. Within this framework, various complexity classes can be characterized by finding logics whose expressive power corresponds to the complexity class in question. Classic examples of such results are Fagin's characterization of nondeterministic polynomial time in terms of the logic Σ_1^1, and the characterization of deterministic polynomial time in terms of first-order inflationary fixed-point logic, due to Immerman, Livchak, and Vardi. This thesis examines Gurevich's question concerning a strong logical characterization of the class P of languages decidable in polynomial time. This question is one of the most challenging problems in finite model theory. In addition to covering the basic machinery needed to present the question, its connections to the P versus NP problem, central to computational complexity theory, are also discussed. More restricted versions of Gurevich's question can also be posed by considering the situation where the input of the computation may only be a model from a fixed class of models K. In this setting, characterizing the class P becomes easier, at least if the class K is sufficiently narrow. This thesis works through Grohe's result that if K is chosen to be the class of 3-connected planar graphs, then first-order inflationary fixed-point logic characterizes the languages computable in polynomial time.
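For orientation, the two classical capturing results mentioned above can be stated as follows (standard formulations; the notation may differ slightly from the thesis):

```latex
% Fagin (1974): existential second-order logic captures NP
% on the class of all finite structures.
\mathrm{NP} \;=\; \Sigma^1_1

% Immerman, Livchak, Vardi: inflationary fixed-point logic captures P,
% but only in the presence of a built-in linear order.
\mathrm{P} \;=\; \mathrm{FO(IFP)} \quad \text{on ordered finite structures}
```

The order assumption in the second result is exactly what makes Gurevich's question hard: it asks for a logic capturing P on all finite structures, ordered or not.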
Abstract:
Many problems in analysis have been solved using the theory of Hodge structures. P. Deligne was the first to treat these structures in a categorical way. Following him, we introduce the categories of mixed real and complex Hodge structures. The category of mixed Hodge structures over the field of real or complex numbers is a rigid abelian tensor category and, in fact, a neutral Tannakian category; it is therefore equivalent to the category of representations of an affine group scheme. Direct sums of pure Hodge structures of different weights over the real or complex numbers can be realized as representations of the torus group whose complex points form the Cartesian product of two punctured complex planes. Mixed Hodge structures turn out to consist of the data of a direct sum of pure Hodge structures of different weights together with a nilpotent automorphism. Mixed Hodge structures therefore correspond to the representations of a certain semidirect product of a nilpotent group and the torus group acting on it.
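For reference, the standard definitions behind this summary can be stated as follows (notation may differ from the thesis):

```latex
% A pure real Hodge structure of weight n: a finite-dimensional real
% vector space V whose complexification decomposes as
V_{\mathbb{C}} \;=\; \bigoplus_{p+q=n} V^{p,q},
\qquad \overline{V^{p,q}} \;=\; V^{q,p}.

% The torus in question is the Deligne torus, the Weil restriction
\mathbb{S} \;=\; \mathrm{Res}_{\mathbb{C}/\mathbb{R}}\,\mathbb{G}_m,
\qquad \mathbb{S}(\mathbb{C}) \;\cong\; \mathbb{C}^{\times} \times \mathbb{C}^{\times}.
```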
Abstract:
Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines, including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction, and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise, in terms of complexity and feasibility, between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it on real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software tool intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
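As a rough illustration of the connectivity requirement behind gapless models (this is a generic reachability sketch, not the thesis's actual formulation or algorithms):

```python
# A minimal sketch of the connectivity check behind "gapless" models:
# a reaction can operate only if all of its substrates are producible,
# and a network has gaps if some reaction can never operate starting
# from the nutrient (seed) metabolites. Data structures are illustrative.

def producible(reactions, seeds):
    """reactions: list of (substrates, products) pairs of sets.
    Returns the set of metabolites reachable from the seeds."""
    available = set(seeds)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            if substrates <= available and not products <= available:
                available |= products
                changed = True
    return available

reactions = [({"glc"}, {"g6p"}), ({"g6p"}, {"f6p"}), ({"x"}, {"y"})]
scope = producible(reactions, seeds={"glc"})
gaps = [r for r in reactions if not r[0] <= scope]
print(scope)  # {'glc', 'g6p', 'f6p'}
print(gaps)   # the ({'x'}, {'y'}) reaction is blocked: 'x' is never produced
```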
Abstract:
Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices all around the network. The data are monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression for log data summarisation and Queryable Log Compression for semantic compression of log data. Finally, I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.
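The following sketch gives a drastically simplified flavour of frequent-pattern-based log summarisation; Comprehensive Log Compression itself works with frequent sets that may cover only some fields of an entry, whereas this toy version counts entire entries (field names and the support threshold are illustrative):

```python
# A toy version of frequent-pattern-based log summarisation: recurring
# field-value combinations are reported once with a count rather than
# line by line. Real frequent-set mining also finds partial patterns.
from collections import Counter

log = [
    {"host": "bsc01", "alarm": "LINK_DOWN", "severity": "major"},
    {"host": "bsc01", "alarm": "LINK_DOWN", "severity": "major"},
    {"host": "bsc01", "alarm": "LINK_DOWN", "severity": "major"},
    {"host": "bsc02", "alarm": "HIGH_TEMP", "severity": "minor"},
]

min_support = 2
counts = Counter(frozenset(entry.items()) for entry in log)

for pattern, count in counts.items():
    if count >= min_support:
        print(dict(pattern), "x", count)     # one summary line per pattern
    else:
        print(dict(pattern), "(kept verbatim)")
```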
Abstract:
The rapid growth of wireless Internet access in recent years has led to a proliferation of wireless and mobile devices connecting to the Internet. This has made it possible for mobile devices equipped with multiple radio interfaces to connect to the Internet using any of several wireless access network technologies, such as GPRS, WLAN and WiMAX, in order to obtain the connectivity best suited for the application. These access networks are highly heterogeneous and vary widely in characteristics such as bandwidth, propagation delay and geographical coverage. The mechanism by which a mobile device switches between these access networks during an ongoing connection is referred to as vertical handoff, and it often results in an abrupt and significant change in the access link characteristics. The most common Internet applications, such as Web browsing and e-mail, use the Transmission Control Protocol (TCP) as their transport protocol, and the behaviour of TCP depends on end-to-end path characteristics such as bandwidth and round-trip time (RTT). As the wireless access link is most likely the bottleneck of a TCP end-to-end path, the abrupt changes in link characteristics due to a vertical handoff may affect TCP adversely, degrading the performance of the application. The focus of this thesis is to study the effect of a vertical handoff on TCP behaviour and to propose algorithms that improve the handoff behaviour of TCP using cross-layer information about the changes in the access link characteristics. We begin by identifying the various problems of TCP due to a vertical handoff on the basis of extensive simulation experiments. We use this study as a basis for developing cross-layer assisted TCP algorithms in handoff scenarios involving GPRS and WLAN access networks. We then extend the scope of the study by developing cross-layer assisted TCP algorithms in a broader context, applicable to a wide range of bandwidth and delay changes during a handoff. Finally, the algorithms developed here are shown to be easily extendable to the multiple-TCP-flow scenario. We evaluate the proposed algorithms by comparison with standard TCP (TCP SACK) and show that they are effective in improving TCP behaviour in vertical handoffs involving a wide range of access network bandwidths and delays. Our algorithms are easy to implement in real systems, as they involve modifications to the TCP sender algorithm only. The proposed algorithms are conservative in nature and do not adversely affect the performance of TCP in the absence of cross-layer information.
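As a schematic illustration of the cross-layer idea (not the actual algorithms proposed in the thesis), a TCP sender notified of a vertical handoff could discard congestion control and RTT state that reflects the old link, along the following lines:

```python
# A schematic sketch (not the thesis's algorithms) of using a cross-layer
# handoff notification: when the access link changes abruptly, stale
# congestion and RTT state is discarded instead of being slowly corrected
# by end-to-end feedback. All constants are illustrative.

class TcpSenderState:
    def __init__(self, mss=1460):
        self.mss = mss
        self.cwnd = 10 * mss          # congestion window (bytes)
        self.ssthresh = 64 * 1024     # slow-start threshold (bytes)
        self.srtt = None              # smoothed RTT estimate (s)
        self.rttvar = None

    def on_vertical_handoff(self, new_link_bw, new_link_rtt):
        """Cross-layer hint: the access link now has (roughly) the given
        bandwidth (bytes/s) and RTT (s)."""
        bdp = new_link_bw * new_link_rtt      # bandwidth-delay product
        self.ssthresh = max(2 * self.mss, int(bdp))
        self.cwnd = self.mss                  # re-probe from slow start
        self.srtt = self.rttvar = None        # restart RTT estimation

sender = TcpSenderState()
sender.on_vertical_handoff(new_link_bw=5_000_000 / 8, new_link_rtt=0.05)
print(sender.ssthresh, sender.cwnd)
```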
Abstract:
Segmentation is a data mining technique yielding simplified representations of sequences of ordered points. A sequence is divided into a number of homogeneous segments, and all points within a segment are described by a single value. The focus in this thesis is on piecewise-constant segments, where the most likely description for each segment and the most likely segmentation into a given number of segments can be computed efficiently. Representing sequences as segmentations is useful in, e.g., storage and indexing tasks in sequence databases, and segmentation can be used as a tool in learning about the structure of a given sequence. The discussion in this thesis begins with basic questions related to segmentation analysis, such as choosing the number of segments and evaluating the obtained segmentations. Standard model selection techniques are shown to perform well for the sequence segmentation task. Segmentation evaluation is proposed with respect to a known segmentation structure. Applying segmentation to certain features of a sequence is shown to yield segmentations that are significantly close to the known underlying structure. Two extensions to the basic segmentation framework are introduced: unimodal segmentation and basis segmentation. The former is concerned with segmentations where the segment descriptions first increase and then decrease, and the latter with the interplay between different dimensions and segments in the sequence. These problems are formally defined, and algorithms for solving them are provided and analyzed. Practical applications of segmentation techniques include time series and data stream analysis, text analysis, and biological sequence analysis. In this thesis, segmentation applications are demonstrated in the analysis of genomic sequences.
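For the piecewise-constant case with squared error, the optimal segmentation into k segments can be computed with a classic O(n²k) dynamic program; the sketch below is a textbook version of that algorithm, not code from the thesis:

```python
# Optimal piecewise-constant segmentation by dynamic programming: split a
# sequence into k segments minimising total squared error, describing
# each segment by its mean. Runs in O(n^2 k) time.

def segment(xs, k):
    n = len(xs)
    s = [0.0] * (n + 1)    # prefix sums for O(1) segment cost queries
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(xs):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):        # squared error of describing xs[i:j] by its mean
        m = (s[j] - s[i]) / (j - i)
        return s2[j] - s2[i] - m * (s[j] - s[i])

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = dp[seg - 1][i] + cost(i, j)
                if c < dp[seg][j]:
                    dp[seg][j], cut[seg][j] = c, i
    bounds, j = [], n      # recover segment boundaries by backtracking
    for seg in range(k, 0, -1):
        bounds.append(j)
        j = cut[seg][j]
    return dp[k][n], sorted(bounds)

err, bounds = segment([1, 1, 1, 5, 5, 5, 2, 2], k=3)
print(err, bounds)   # 0.0 [3, 6, 8]
```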
Abstract:
In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for the more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope: mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to reconcile these two developments. XML, as a highly redundant text-based format, is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in a form usable on real mobile devices. The experimentation is performed on actual devices and real networks using the messaging system implemented as a part of this work. The experimentation is extensive and, through the use of several different devices, also provides a glimpse of what the performance of these systems may look like in the future.
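A simple measurement illustrates the redundancy concern (this is a generic illustration, not the serialisation format developed in this work): even a general-purpose compressor removes a large share of the byte overhead of XML tags, which is what dedicated compact encodings exploit on wireless links where every transmitted byte costs energy.

```python
# Illustrative only: compare the size of a small XML message with its
# gzip-compressed form to show how much of the payload is tag overhead.
import gzip
import xml.etree.ElementTree as ET

msg = ET.Element("message")
for i in range(50):
    item = ET.SubElement(msg, "item", id=str(i))
    item.text = "payload"
raw = ET.tostring(msg)
print(len(raw), len(gzip.compress(raw)))   # raw size vs compressed size
```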
Abstract:
XML documents are becoming more and more common in various environments. In particular, enterprise-scale document management is commonly centred around XML, and desktop applications as well as online document collections are soon to follow. The growing number of XML documents increases the importance of appropriate indexing methods and search tools in keeping the information accessible. Therefore, we focus on content stored in XML format as we develop such indexing methods. Because XML is used for different kinds of content, ranging all the way from records of data fields to narrative full texts, methods for Information Retrieval face a new challenge in identifying which content is subject to data queries and which should be indexed for full-text search. In response to this challenge, we analyse the relation of character content and XML tags in XML documents in order to separate the full text from the data. As a result, by selecting the XML fragments to be indexed, we are able both to reduce the size of the index by 5-6% and to improve the retrieval precision. Besides being challenging, XML offers many unexplored opportunities that have received little attention in the literature. For example, authors often tag the content they want to emphasise by using a typeface that stands out. The tagged content constitutes phrases that are descriptive of the content and useful for full-text search. Such phrases are simple to detect in XML documents, but also possible to confuse with other inline-level text. Nonetheless, the search results seem to improve when the detected phrases are given additional weight in the index. Similar improvements are reported when related content, including titles, captions, and references, is associated with the indexed full text. Experimental results show that, at least for certain types of document collections, the proposed methods help us find the relevant answers. Even when we know nothing about the document structure but the XML syntax, we are able to take advantage of the XML structure when the content is indexed for full-text search.
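A minimal sketch of the kind of content/tag analysis described above (the heuristic and threshold are illustrative only, not the classifiers used in the thesis):

```python
# Toy separation of full-text from data fields in XML: elements whose
# character content is long and word-like are routed to the full-text
# index; short coded values are treated as data. Heuristic is illustrative.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<article><id>A-123</id><price>9.90</price>"
    "<body>Separating XML content into data and full text "
    "is needed before indexing.</body></article>"
)

def is_fulltext(elem, min_words=4):
    text = (elem.text or "").strip()
    return len(text.split()) >= min_words

for elem in doc:
    print(elem.tag, "->", "full-text" if is_fulltext(elem) else "data")
# id -> data, price -> data, body -> full-text
```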
Abstract:
Wireless network access is gaining increased heterogeneity in terms of the types of IP-capable access technologies. This access network heterogeneity is an outcome of the incremental and evolutionary approach to building new infrastructure. The recent success of multi-radio terminals drives both the building of new infrastructure and the implicit deployment of heterogeneous access networks. Typically there is no economical reason to replace the existing infrastructure when building a new one, and the gradual migration phase usually takes several years. IP-based mobility across different access networks may involve both horizontal and vertical handovers. Depending on the networking environment, the mobile terminal may be attached to the network through multiple access technologies and may consequently send and receive packets through multiple networks simultaneously. This dissertation addresses the introduction of the IP Mobility paradigm into existing mobile operator network infrastructure that was not originally designed for multi-access and IP Mobility. We propose a model for the future wireless networking and roaming architecture that does not require revolutionary technology changes and can be deployed without unnecessary complexity. The model proposes a clear separation of operator roles: (i) access operator, (ii) service operator, and (iii) inter-connection and roaming provider. The separation allows each type of operator to have its own development path and business models without artificial bindings to the others. We also propose minimum requirements for the new model. We present the state of the art of IP Mobility as well as the results of standardization efforts in IP-based wireless architectures. Finally, we present experimental results on IP-level mobility in various wireless operator deployments.
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem whose goal is to estimate haplotypes from a sample of genotypes as accurately as possible. The problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented for learning this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; it can therefore be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities of two sequences are due to the sequences being related or merely due to chance. The similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking from the null model two sequences whose best local alignment score is as good or better. Local alignment significance is routinely used, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
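For contrast with the analytical upper bounds developed in the thesis, the following sketch shows the naive Monte Carlo baseline for local alignment significance: estimate the p-value as the fraction of shuffled (null model) sequence pairs that reach at least the observed score (scoring parameters are illustrative):

```python
# Naive Monte Carlo baseline for local alignment significance. The
# thesis derives analytical upper bounds instead; sampling like this is
# unreliable for the very small p-values that matter in homology search.
import random

def smith_waterman(a, b, match=1, mismatch=-1, gap=-2):
    """Best local alignment score of strings a and b."""
    row = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            row[i][j] = max(0, row[i - 1][j - 1] + sub,
                            row[i - 1][j] + gap, row[i][j - 1] + gap)
            best = max(best, row[i][j])
    return best

def shuffled(s, rng):
    chars = list(s)
    rng.shuffle(chars)
    return "".join(chars)

def pvalue(a, b, trials=200, seed=0):
    rng = random.Random(seed)
    observed = smith_waterman(a, b)
    hits = sum(
        smith_waterman(shuffled(a, rng), shuffled(b, rng)) >= observed
        for _ in range(trials)
    )
    return (hits + 1) / (trials + 1)   # add-one: avoid reporting p = 0

print(pvalue("ACCGTTAGCCA", "ACCGTAGCA"))
```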
Abstract:
This thesis studies the human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurement of the expression of tens of thousands of genes simultaneously. In a single study, such data are traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, this kind of data has been largely unavailable, and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a new large ontology of human cell types, disease states, organism parts, and cell lines. The ontology was used in a new text mining and decision tree based method for the automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability, and the minimisation of the systematic measurement errors characteristic of each laboratory in this large cross-laboratory integrated dataset, was ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology. A preface and motivation for the construction and analysis of a global map of human gene expression is given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression at a global level.
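The exploratory analysis step can be outlined as follows (a generic PCA-by-SVD sketch on synthetic data, not the thesis's pipeline, which also involved quality filtering and ontology-guided clustering):

```python
# A minimal sketch of the exploratory step: project the samples (rows)
# of an expression matrix onto their first principal components via SVD.
# The matrix here is random; real input would be samples x genes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1000))          # 20 samples, 1000 genes
Xc = X - X.mean(axis=0)                  # centre each gene
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = U[:, :2] * S[:2]                   # sample coordinates on PC1, PC2
explained = S**2 / np.sum(S**2)          # variance explained per component
print(pcs.shape, explained[:2])
```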
Abstract:
Place identification refers to the process of analyzing sensor data in order to detect places, i.e., spatial areas that are linked with activities and associated with meanings. Place information can be used, e.g., to provide awareness cues in applications that support social interactions, to provide personalized and location-sensitive information to the user, and to support mobile user studies by providing cues about the situations the study participant has encountered. Regularities in human movement patterns make it possible to detect personally meaningful places by analyzing the location traces of a user. This thesis focuses on providing system-level support for place identification, as well as on algorithmic issues related to the place identification process. The move from location to place requires interactions between location sensing technologies (e.g., GPS or GSM positioning), algorithms that identify places from location data, and applications and services that utilize place information. These interactions can be facilitated using a mobile platform, i.e., an application or framework that runs on a mobile phone. For the purposes of this thesis, mobile platforms automate data capture and processing and provide means for disseminating data to applications and other system components. The first contribution of the thesis is BeTelGeuse, a freely available, open-source mobile platform that supports multiple runtime environments. The actual place identification process can be understood as a data analysis task where the goal is to analyze (location) measurements and to identify areas that are meaningful to the user. The second contribution of the thesis is the Dirichlet Process Clustering (DPCluster) algorithm, a novel place identification algorithm. The performance of the DPCluster algorithm is evaluated using twelve different datasets collected by different users, at different locations, and over different periods of time. As part of the evaluation, we compare the DPCluster algorithm against other state-of-the-art place identification algorithms. The results indicate that the DPCluster algorithm provides improved generalization performance under spatial and temporal variations in location measurements.
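As a point of reference for what place identification algorithms do, here is a deliberately simple stay-point baseline (this is not the DPCluster algorithm; the thresholds and the flat-earth distance approximation are illustrative):

```python
# A simple place-detection baseline: consecutive location fixes that
# stay within a small radius for long enough are merged into one
# candidate place. Not the DPCluster algorithm; thresholds illustrative.
import math

def stay_points(trace, max_dist_m=150, min_duration_s=600):
    """trace: list of (t_seconds, lat, lon). Returns candidate places."""
    places, i = [], 0
    while i < len(trace):
        j = i + 1
        while j < len(trace) and _dist(trace[i], trace[j]) <= max_dist_m:
            j += 1
        if trace[j - 1][0] - trace[i][0] >= min_duration_s:
            pts = trace[i:j]
            places.append((sum(p[1] for p in pts) / len(pts),
                           sum(p[2] for p in pts) / len(pts)))
        i = j
    return places

def _dist(p, q):   # equirectangular approximation, in metres
    dlat = math.radians(q[1] - p[1])
    dlon = math.radians(q[2] - p[2]) * math.cos(math.radians(p[1]))
    return 6371000 * math.hypot(dlat, dlon)

trace = [(0, 60.2048, 24.9610), (300, 60.2049, 24.9612),
         (900, 60.2047, 24.9611), (1200, 60.2500, 25.0000)]
print(stay_points(trace))   # one place near the first three fixes
```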
Abstract:
The TCP protocol is used by most Internet applications today, including the recent mobile wireless terminals that use TCP for their World-Wide Web, e-mail, and other traffic. Recent wireless network technologies, such as GPRS, are known to cause delay spikes in packet transfer, which leads to unnecessary TCP retransmission timeouts. This dissertation proposes a mechanism, Forward RTO-Recovery (F-RTO), for detecting unnecessary TCP retransmission timeouts, thus allowing TCP to take appropriate follow-up actions. We analyze a Linux F-RTO implementation in various network scenarios and investigate different alternatives to the basic algorithm. The second part of this dissertation focuses on quickly adapting TCP's transmission rate when the underlying link characteristics change suddenly. This can happen, for example, due to vertical hand-offs between GPRS and WLAN wireless technologies. We investigate the Quick-Start algorithm, which, in collaboration with the network routers, aims to quickly probe the available bandwidth on a network path and to allow TCP's congestion control algorithms to use that information. Through extensive simulations, we study the different router algorithms and parameters for Quick-Start and discuss the challenges Quick-Start faces in the current Internet. We also study the performance of Quick-Start when applied to vertical hand-offs between different wireless link technologies.
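The essence of F-RTO can be sketched as a small state machine (a simplification of the published algorithm; window management and SACK handling are omitted): after an RTO, the sender retransmits the first unacknowledged segment and then, instead of retransmitting further, sends previously unsent data and watches whether the next two ACKs advance the window.

```python
# A schematic, simplified state machine of the F-RTO idea: if the two
# ACKs following a retransmission timeout both advance the window, the
# original transmissions were delivered and the timeout was spurious.

SEND_NEW, RETRANSMIT = "send_new_data", "continue_rto_recovery"

class FRto:
    def __init__(self):
        self.acks_seen = 0
        self.verdict = None          # None / "spurious" / "genuine"

    def on_ack(self, advances_window):
        if self.verdict is not None:
            return None
        self.acks_seen += 1
        if not advances_window:      # duplicate ACK: fall back to RTO recovery
            self.verdict = "genuine"
            return RETRANSMIT
        if self.acks_seen == 1:      # first ACK after the timeout
            return SEND_NEW          # probe with previously unsent segments
        self.verdict = "spurious"    # second new ACK: timeout was spurious
        return SEND_NEW

f = FRto()
print(f.on_ack(advances_window=True))   # send_new_data
print(f.on_ack(advances_window=True))   # send_new_data
print(f.verdict)                        # spurious
```

If both ACKs advance the window, the segments sent before the timeout were delivered and only delayed, so the slow-start retransmission of the whole window can be skipped.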