45 results for programming language processing
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments, and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information in natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. We show that this event extraction system performs well, placing first in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, and has shown competitive performance in the binary-relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, we describe the application of the event extraction method to PubMed-scale text mining, showing that the developed approach not only performs well but is generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that led to the development of TEES. The entries in the three BioNLP Shared Tasks and in the DDIExtraction 2011 task are covered in four publications, and the sixth demonstrates the application of the system to PubMed-scale text mining.
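The nested event structure in the example above can be pictured as a small recursive data type. The following Python sketch is purely illustrative; the Entity and Event classes are ours, not TEES code:

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Entity:
        name: str

    @dataclass
    class Event:
        type: str                     # e.g. "CAUSE", "BIND"
        trigger: str                  # the annotated trigger word in the text
        args: List[Union["Event", Entity]] = field(default_factory=list)

    # "Protein A causes protein B to bind protein C" -> CAUSE(A, BIND(B, C))
    a, b, c = Entity("Protein A"), Entity("Protein B"), Entity("Protein C")
    bind = Event("BIND", trigger="bind", args=[b, c])
    cause = Event("CAUSE", trigger="causes", args=[a, bind])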
Abstract:
This thesis introduced Android as a hardware and application platform and described how the user interface of an Android game application can be kept consistent across different display devices by means of scaling factors and anchoring. The second part of the thesis covered simple techniques for improving the performance of game applications. Of these, a low-resolution drawing buffer and the culling of off-screen objects were selected for closer measurement. In the measurements, the selected techniques affected the performance of the demo application considerably. The thesis was restricted to Android programming in Java without external libraries, so that the results can easily be applied in as many different use cases as possible.
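As a rough illustration of the scaling-and-anchoring idea, the following Python sketch computes a uniform scale factor from an assumed reference resolution and places an element relative to a screen corner; the numbers and names are ours, not code from the thesis (which itself used Java):

    REF_W, REF_H = 1280, 720  # assumed design-time reference resolution

    def scale_factor(screen_w: int, screen_h: int) -> float:
        # Uniform scale that fits the reference design inside the screen.
        return min(screen_w / REF_W, screen_h / REF_H)

    def anchored_pos(screen_w, screen_h, offset_x, offset_y, anchor="bottom_right"):
        # Place an element relative to a screen corner instead of absolute pixels.
        s = scale_factor(screen_w, screen_h)
        if anchor == "bottom_right":
            return screen_w - offset_x * s, screen_h - offset_y * s
        return offset_x * s, offset_y * s  # default: top-left anchor

    print(anchored_pos(1920, 1080, 100, 50))  # (1770.0, 1005.0)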
Abstract:
This thesis reports investigations on applying the Service Oriented Architecture (SOA) approach in the engineering of multi-platform and multi-device user interfaces. This study has three goals: (1) analyze the present frameworks for developing multi-platform and multi-device applications, (2) extend the principles of SOA for implementing a multi-platform and multi-device architectural framework (SOA-MDUI), and (3) apply and validate the proposed framework in the context of a specific application. One of the problems addressed in this ongoing research is the large number of combinations for possible implementations of applications on different types of devices. Usually it is necessary to take into account the operating system (OS), the user interface (UI) including its appearance, the programming language (PL) and the architectural style (AS). Our proposed approach extended the principles of SOA using pattern-oriented design and model-driven engineering approaches. Synthesizing the present work done in these domains, this research built and tested an engineering framework linking the Model-Driven Architecture (MDA) and SOA approaches to the development of UIs. This study advances general understanding of engineering, deploying and managing multi-platform and multi-device user interfaces as a service.
Abstract:
This thesis introduces an extension of Chomsky’s context-free grammars equipped with operators for referring to left and right contexts of strings. The new model is called grammars with contexts. The semantics of these grammars are given in two equivalent ways: by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it allows only one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of the classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed to an equivalent grammar without any context operators. This allows one to represent the syntax of languages in a more succinct way by utilizing context specifications. Linear grammars with contexts turned out to be non-trivial already over a one-letter alphabet. This fact leads to some undecidability results for this family of grammars.
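To make the context operators concrete, here is a small illustrative rule (our own example, in notation of the kind used in the literature on grammars with contexts): a rule may conjoin an ordinary context-free body with a left-context specification,

    \[
      A \;\to\; B\,C \;\&\; {\lhd}D
    \]

meaning that a substring is derived from A via BC only if its left context, i.e. everything preceding it in the whole string, is derivable from D. For instance, D could describe prefixes containing a declaration of a variable, so that A matches a use of the variable only where it has already been declared.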
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a particular impact on researchers, as the data obtained from various experiments needs to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT came up with a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, employing R functions to produce the results. REX provides an interactive application in which biologists can directly enter values and run the required analysis with a single click. The program processes the given data in the background and prints results rapidly. Due to the growth of data and the load on the server, the interface developed problems concerning time consumption, a poor GUI, data storage, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python Django; the new version is implemented with Vaadin, a Java framework for developing web applications in which the programming is essentially Java, extended with new rich components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX's functionality, comprising IST bulk plotting and image segmentation, was selected and implemented using Vaadin. I programmed 662 lines of code, with Vaadin as the front-end handler, while the R language was used for back-end data retrieval, computation and plotting. The application is optimized to allow further functionality to be migrated with ease from the old REX. Future development focuses on including high-throughput screening functions along with gene expression database handling.
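The resulting division of labour, a web front end that hands parameters to R for computation, can be pictured with a minimal sketch. REX's front end is Java/Vaadin; Python, the script name and its flags below are stand-ins for illustration only:

    import subprocess

    # The UI layer collects user inputs, passes them to an R script, and
    # displays the file the script produces (all names here are hypothetical).
    result = subprocess.run(
        ["Rscript", "plot_ist.R", "--input", "values.csv", "--out", "plot.png"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # e.g. a status message; the UI would show plot.png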
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with building software with security is how to define secure software and how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measurement tools of this kind are commonly called metrics. This thesis is focused on the perspective of software engineers in the software design phase. Focus on the design phase means that code-level semantics and programming language specifics are not discussed in this work. Organizational policy, management issues and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server called Apache James, whose changelog, security issue history and source code were available. The research revealed that there is a consensus on the terminology of software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one’s own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics’ results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues, and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out the areas in which security metrics must improve if verification of security from the design phase is desired.
Abstract:
Teaching programming as part of general education has recently attracted interest in Finland and elsewhere in the world. For example, according to the national core curriculum for basic education defined by the Finnish National Board of Education and coming into effect in 2016, programming skills will be taught in Finnish comprehensive schools from the first grade onwards. Programming is not being added as a subject of its own; instead, it is to be taught in connection with other subjects, such as mathematics. This study discusses programming in general education, reviews the most common challenges in learning programming, and examines the suitability of different teaching methods, particularly for teaching young pupils. For the study, a web-based learning application aimed at pupils of about 9–12 years of age was implemented, making effective use of a graphical programming language and visual presentation. Using the learning application, a comparative study was carried out with fourth-grade classes of a primary school, in which teaching based on the graphical programming language was compared with another teaching method in which the pupils were introduced to the basics of programming through physical activity games. In the comparative study, the pupils of two fourth-grade classes performed similar programming exercises on basic programming concepts with both teaching methods. The aim of the study was to find out the current programming competence of primary school pupils, how programming lessons are received by them, whether different teaching methods matter for the implementation of the teaching, and whether differences appear in the learning outcomes of classes taught with the different methods. The pupils responded positively to both teaching methods and showed interest in studying programming. Content-wise, too much material had been reserved for the lessons, but for one of the most central topics, the concept of repetition, the class that practised with physical activity games showed considerably better command after the lesson than the class that practised with the graphical programming language. Competence related to the sequential nature of program code was already well mastered by the fourth-graders before the programming exercises. Based on the background research and interviews with the class teachers, it was observed that the schools' readiness to teach programming in accordance with the curriculum reform is still at a weak level.
Abstract:
Lappeenranta University of Technology studies the use of low-voltage direct current (LVDC) electricity distribution. In cooperation with Järvi-Suomen Energia Oy and Suur-Savon Sähkö Oy, the university has built an experimental low-voltage DC distribution network, which provides field conditions for low-voltage DC research with real customers and makes it possible to verify LVDC technology and other smart grid functions in the field. The DC link of the network is built between a 20 kV distribution network and four customers. The 20 kV medium voltage is rectified at a converter station to ±750 V DC and converted again to 400/230 V AC in the vicinity of the customers. The purpose of this bachelor's thesis is to create a database for the data and measurement results accumulating from the LVDC network. The database was considered necessary so that the measurement results of the low-voltage network can later be examined in a single, consistent format. One research question was how to organize and visualize all the measurement data accumulating from the network on the servers. The thesis also considers three user groups that could make use of the database, namely household customers, distribution network companies and the research laboratory, and discusses the benefit and significance of the database for these users. The second research question was therefore which of all the data stored in the database would be essential to capture from the point of view of these customers, and how they could retrieve information from the database. The research methods of the thesis are based on already existing measurement data. Both printed and electronic literature has been used for the work. As a result, a database was created with the MySQL Workbench tool, together with measurement data collection and processing programs written in the Python programming language. In addition, a separate MATLAB interface was created for data visualization, illustrating the measurement data of the three customer groups. The database and the visualization of its data give consumers the possibility to better understand their own electricity use, and provide distribution network companies and research laboratories with information on, among other things, power quality and network load.
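In outline, a Python collection program of the kind described above might write measurement rows into the MySQL database as follows; this is a sketch assuming the mysql-connector-python package, and the table name, columns and credentials are invented for the illustration:

    import mysql.connector

    # Hypothetical schema: measurements(ts, customer_id, quantity, value)
    conn = mysql.connector.connect(host="localhost", user="lvdc",
                                   password="secret", database="lvdc_db")
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO measurements (ts, customer_id, quantity, value) "
        "VALUES (%s, %s, %s, %s)",
        ("2015-03-01 12:00:00", 1, "dc_voltage", 752.4),
    )
    conn.commit()
    cur.close()
    conn.close()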
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information that would be relevant for their research. As a response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale, to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among the different tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction constitutes the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction have given rise to a number of event extraction systems, several of which have been applied at a large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine learning and are trained on the narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by the end users. This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes/gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e. predicting new edges in the network, as well as types and directions for these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
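As a sketch of how edge prediction can be set up as a routine supervised learning problem, consider the following; the feature set and classifier below are our stand-ins, not the thesis's actual features:

    from sklearn.ensemble import RandomForestClassifier

    # One row per candidate gene/gene-product pair; features might include,
    # say, shared neighbours and node degrees in the event network.
    X_train = [[12, 340, 5], [0, 15, 1], [7, 210, 3]]   # toy feature rows
    y_train = [1, 0, 1]                                  # 1 = interaction exists

    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print(clf.predict_proba([[9, 180, 4]]))  # confidence for a candidate edge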
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is a tool which utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a system under test (SUT) and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line tool as well as a graphical user interface. The goal of this thesis project is twofold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which will add new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process will be realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to further increase both of these factors.
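The probabilistic user models can be thought of as Markov chains over user actions; the following sketch (ours, not MBPeT code, with invented actions and probabilities) walks such a chain to produce a synthetic session that could be replayed against a SUT:

    import random

    model = {
        "browse": [("browse", 0.5), ("search", 0.3), ("buy", 0.2)],
        "search": [("browse", 0.6), ("buy", 0.4)],
        "buy":    [("browse", 1.0)],
    }

    def simulate(start="browse", steps=10):
        # Walk the chain, choosing each next action by its probability.
        state, trace = start, []
        for _ in range(steps):
            actions, weights = zip(*model[state])
            state = random.choices(actions, weights=weights)[0]
            trace.append(state)
        return trace

    print(simulate())  # one synthetic user session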
Abstract:
Data loggers are important devices used in measurement technology; their purpose is to collect measurement data over long periods of time. Data loggers can be used, for example, to monitor actuators that are part of an industrial process, or a household energy system. Industrial-grade data loggers typically cost hundreds or thousands of euros. The aim of this thesis was to find a cheap and easy-to-use alternative to industrial-grade devices that is nevertheless sufficiently powerful and functional. A data logger was designed and implemented on the Raspberry Pi platform and tested in conditions corresponding to a real industrial environment. Similar hardware and software solutions were sought in the literature and in internet articles, and these were used as the basis of the data logging system. A simple data logging program using the Modbus communication protocol was written in Python for the Raspberry Pi. Based on the tests, the implemented data logger works and is capable of measurements on a par with commercial data loggers, at least at low sampling rates. The implemented data logger is also considerably cheaper than commercial ones. From the point of view of ease of use, shortcomings were found in the data logger; these are discussed in the form of ideas for further development.
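In outline, a Modbus polling loop of the kind described above could look as follows; this is a hedged sketch assuming the pymodbus library and Modbus/TCP, with the device address, register number and sampling rate as placeholders rather than values from the thesis:

    import csv, time
    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("192.168.1.50")   # placeholder device address
    client.connect()
    with open("log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(10):                    # ten samples for the example
            rr = client.read_holding_registers(0, count=1)  # register 0
            writer.writerow([time.time(), rr.registers[0]])
            time.sleep(1.0)                    # 1 Hz sampling
    client.close()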
Abstract:
CORBA (Common Object Request Broker Architecture) is a widespread distributed computing architecture commonly used in industry. CORBA scales to needs of different sizes and can also be utilized in embedded wireless devices. In an embedded environment it is essential to build the interfaces to be lightweight, stable and easily extensible, without endangering compatibility with earlier interfaces. In wireless devices, resources such as the amount of memory and processing power are very limited, so the interface must be designed and implemented optimally. The services must also take into account the limitations of wireless communication, such as slow data transfer rates and the connectionless nature of the transfer. In this thesis, a CORBA interface to a GSM terminal was designed and implemented, and it has been found to meet the goals set for it. The interface provides all the most common features of a GSM terminal and is extensible for future products and network technologies. Extensibility is achieved, for example, by describing the terminal's features in a generic description language such as XML (Extensible Markup Language).
Abstract:
Given the structural and acoustical similarities between speech and music, and possible overlapping cerebral structures in speech and music processing, a possible relationship between musical aptitude and linguistic abilities, especially in terms of second language pronunciation skills, was investigated. Moreover, the laterality effect of the mother tongue was examined with both adults and children by means of dichotic listening scores. Finally, two event-related potential studies sought to reveal whether children with advanced second language pronunciation skills and higher general musical aptitude differed from children with less-advanced pronunciation skills and less musical aptitude in accuracy when preattentively processing mistuned triads and music/speech sound durations. The results showed a significant relationship between musical aptitude, English language pronunciation skills, chord discrimination ability, and sound-change-evoked brain activation in response to musical stimuli (durational differences and triad contrasts). Regular music practice may also have a modulatory effect on the brain’s linguistic organization and cause altered hemispheric functioning in those who have regularly practised music for years. Based on the present results, it is proposed that language skills, both in production and discrimination, are interconnected with perceptual musical skills.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independence of the nodes also implies that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized, in the context of design space exploration, by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
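As a toy illustration of the dataflow model described above (ours, not RVC-CAL), nodes communicate only through FIFO queues and fire when their input queues hold enough tokens:

    from collections import deque

    class Node:
        def __init__(self, fn, inputs, output):
            self.fn, self.inputs, self.output = fn, inputs, output

        def can_fire(self):
            return all(q for q in self.inputs)   # every input queue non-empty

        def fire(self):
            # Consume one token per input, compute, produce one output token.
            tokens = [q.popleft() for q in self.inputs]
            self.output.append(self.fn(*tokens))

    a, b, out = deque([1, 2, 3]), deque([10, 20, 30]), deque()
    adder = Node(lambda x, y: x + y, [a, b], out)
    while adder.can_fire():                      # a trivial static schedule
        adder.fire()
    print(list(out))                             # [11, 22, 33]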
Abstract:
Middle ear infections (acute otitis media, AOM) are among the most common infectious diseases in childhood, their incidence being greatest at the age of 6–12 months. Approximately 10–30% of children undergo repetitive periods of AOM, referred to as recurrent acute otitis media (RAOM). Middle ear fluid during an AOM episode causes, on average, 20–30 dB of hearing loss lasting from a few days to as much as a couple of months. It is well known that even a mild permanent hearing loss has an effect on language development but so far there is no consensus regarding the consequences of RAOM on childhood language acquisition. The results of studies on middle ear infections and language development have been partly discrepant and the exact effects of RAOM on the developing central auditory nervous system are as yet unknown. This thesis aims to examine central auditory processing and speech production among 2-year-old children with RAOM. Event-related potentials (ERPs) extracted from electroencephalography can be used to objectively investigate the functioning of the central auditory nervous system. For the first time this thesis has utilized auditory ERPs to study sound encoding and preattentive auditory discrimination of speech stimuli, and neural mechanisms of involuntary auditory attention in children with RAOM. Furthermore, the level of phonological development was studied by investigating the number and the quality of consonants produced by these children. Acquisition of consonant phonemes, which are harder to hear than vowels, is a good indicator of the ability to form accurate memory representations of ambient language and has not been studied previously in Finnish-speaking children with RAOM. The results showed that the cortical sound encoding was intact but the preattentive auditory discrimination of multiple speech sound features was atypical in those children with RAOM. Furthermore, their neural mechanisms of auditory attention differed from those of their peers, thus indicating that children with RAOM are atypically sensitive to novel but meaningless sounds. The children with RAOM also produced fewer consonants than their controls. Noticeably, they had a delay in the acquisition of word-medial consonants and the Finnish phoneme /s/, which is acoustically challenging to perceive compared to the other Finnish phonemes. The findings indicate the immaturity of central auditory processing in the children with RAOM, and this might also emerge in speech production. This thesis also showed that the effects of RAOM on central auditory processing are long-lasting because the children had healthy ears at the time of the study. An effective neural network for speech sound processing is a basic requisite of language acquisition, and RAOM in early childhood should be considered as a risk factor for language development.