885 results for fuzzy based evaluation method
Abstract:
Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken uses full parsing—syntactic analysis of the entire structure of sentences—and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance and introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time.
To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we have also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unifying diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships. The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6,000 entities, 2,500 relationships and 28,000 syntactic dependencies in 1,100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships.
Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.
Abstract:
In this paper we address the problem of extracting representative point samples from polygonal models. The goal of such a sampling algorithm is to find points that are evenly distributed. We propose star discrepancy as a measure of sampling quality and introduce new sampling methods based on global line distributions. We investigate several line generation algorithms, including an efficient hardware-based sampling method. Our method contributes to the area of point-based graphics by extracting points that are more evenly distributed than those produced by current sampling algorithms.
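For intuition, the star discrepancy of a point set in the unit square is the worst-case difference between the fraction of points falling in an axis-aligned box anchored at the origin and that box's area. Exact computation is expensive, but a crude lower-bound estimate, checking only boxes whose corners sit at sample coordinates, can be sketched as follows. This is illustrative code written for this summary, not the paper's algorithm:

```python
def star_discrepancy_lb(points):
    """Crude lower bound on the 2D star discrepancy D*(P) of points in [0,1)^2.

    Evaluates |fraction of points inside [0,x) x [0,y) - x*y| only at boxes
    whose corner coordinates come from the samples themselves (plus 1.0),
    which bounds the true supremum from below.
    """
    n = len(points)
    xs = sorted({p[0] for p in points} | {1.0})
    ys = sorted({p[1] for p in points} | {1.0})
    worst = 0.0
    for x in xs:
        for y in ys:
            inside = sum(1 for px, py in points if px < x and py < y)
            worst = max(worst, abs(inside / n - x * y))
    return worst
```

Evenly distributed (low-discrepancy) point sets score lower under this measure than clustered ones, which is exactly the property the sampling methods above aim for.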
Abstract:
Test case selection is important in testing, because not all test cases can be executed due to time and budget constraints. There are many methods for test case selection, the most prominent being model-based selection, combinatorial selection and risk-based selection. In all of the aforementioned methods, test cases are created on the basis of the program specification. The model-based method makes use of models of the program's behaviour, from which the most important ones are selected for testing. In combinatorial testing, test cases are formed as pairs of features, so that testing one pair reveals how two features behave together. Combinatorial testing is effective at finding errors caused by one or two factors. Risk-based testing aims to assess the risks of the program and to select test cases on that basis. In all of these methods prioritization plays an important role, so that testing achieves sufficient confidence without an increase in costs.
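The combinatorial (pairwise) selection discussed in the abstract above requires every pair of values of two different parameters to appear together in at least one test case. A minimal sketch of checking that requirement against a candidate suite, using hypothetical parameter names for illustration (not code from the thesis):

```python
from itertools import combinations, product

def uncovered_pairs(parameters, suite):
    """Return the parameter-value pairs not yet covered by the test suite.

    parameters: dict mapping parameter name -> list of values.
    suite: list of test cases, each a dict mapping parameter -> chosen value.
    Pairwise (2-way) testing requires every pair of values of two different
    parameters to appear together in at least one test case.
    """
    names = sorted(parameters)
    required = set()
    for p1, p2 in combinations(names, 2):
        for v1, v2 in product(parameters[p1], parameters[p2]):
            required.add(((p1, v1), (p2, v2)))
    covered = set()
    for case in suite:
        for p1, p2 in combinations(sorted(case), 2):
            covered.add(((p1, case[p1]), (p2, case[p2])))
    return required - covered

# Illustrative parameters, not from the thesis:
params = {"browser": ["firefox", "chrome"], "os": ["windows", "mac"]}
suite = [{"browser": "firefox", "os": "windows"},
         {"browser": "chrome", "os": "mac"}]
missing = uncovered_pairs(params, suite)
```

Here two of the four required pairs remain uncovered, showing why a two-case suite does not suffice even for two binary parameters.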
Abstract:
In addition to the important technical properties of paper, its sensory properties have also become significant parameters in characterizing paper. Sensory properties refer to the properties a person perceives when handling the product. Such properties include, for example, the roughness, slipperiness and stiffness of the paper, as well as the sound it makes when leafed through. Along with the content of a magazine, the sensory properties of paper shape the reader's impression of it. The aim of this work was to develop the operation of an existing sensory evaluation panel. The goal was to find a replacement for the evaluation method in use and to develop a new model for reporting results. Two subjective evaluation methods were used in the work: pairwise comparison and a ranking method. The results were compared with those of the previously used reference method. The samples were evaluated for roughness, slipperiness, tackiness, stiffness, ease of leafing through, loudness and sound quality. In addition, the pleasantness of the samples was evaluated using pairwise comparison, in connection with which the unanimity of the evaluators was also examined. The samples were unprinted but had passed through a printing press and were folded into magazine format. Printed samples from the same paper selection were used in the visual evaluations. When comparing the results of the evaluation methods, a few differences can be observed. In both pairwise comparison and the ranking method, the samples spread over almost the entire evaluation scale, whereas with the reference method they clustered in a very small range. In the ranking method the samples spread even more widely than in pairwise comparison. Pairwise comparison separated the samples from one another better than the reference method did; no corresponding difference in discrimination was observed between the ranking method and pairwise comparison. Based on the results, it can be said that pairwise comparison
Abstract:
The computer is a useful tool in the teaching of upper secondary school physics, and should not have a subordinate role in students' learning process. However, computers and computer-based tools are often not available when they could serve their purpose best in the ongoing teaching. Another problem is that commercially available tools are not usable in the way the teacher wants. The aim of this thesis was to try out a novel teaching scenario in a complicated subject in physics, electrodynamics. The didactic engineering of the thesis consisted of developing a computer-based simulation and training material, implementing the tool in physics teaching and investigating its effectiveness in the learning process. The design-based research method, didactic engineering (Artigue, 1994), which is based on the theory of didactical situations (Brousseau, 1997), was used as a frame of reference for the design of this type of teaching product. In designing the simulation tool a general spreadsheet program was used. The design was based on parallel, dynamic representations of the physics behind the function of an AC series circuit in both graphical and numerical form. The tool, which was furnished with possibilities to control the representations in an interactive way, was hypothesized to activate the students and promote the effectiveness of their learning. An effect variable was constructed in order to measure the students' and teachers' conceptions of learning effectiveness. The empirical study was twofold. Twelve physics students, who attended a course in electrodynamics in an upper secondary school, participated in a class experiment with the computer-based tool implemented in three modes of didactical situations: practice, concept introduction and assessment. The main goal of the didactical situations was to have students solve problems and study the function of AC series circuits, taking responsibility for their own learning process.
In the teacher study eighteen Swedish-speaking physics teachers evaluated the didactic potential of the computer-based tool and the accompanying paper-based material without using them in their physics teaching. Quantitative and qualitative data were collected using questionnaires, observations and interviews. The results of the studies showed that both the group of students and the teachers had generally positive conceptions of learning effectiveness. The students' conceptions were more positive in the practice situation than in the concept introduction situation, a setting that was more explorative. However, it turned out that the students' conceptions were also positive in the more complex assessment situation. This had not been hypothesized. A deeper analysis of data from observations and interviews showed that one of the students in each pair was more active than the other, taking more initiative and more responsibility for the student-student and student-computer interaction. These active students had strong, positive conceptions of learning effectiveness in each of the three didactical situations. The group of less active students had a weak but positive conception in the first two situations, but a negative conception in the assessment situation, thus corroborating the hypothesis ad hoc. The teacher study revealed that computers were seldom used in physics teaching and that computer programs were in short supply. The use of a computer was considered time-consuming. As long as physics teaching with computer-based tools has to take place in special computer rooms, the use of such tools will remain limited. The affordance is enhanced when the physical dimensions as well as the performance of the computer are optimised. As a consequence, the computer then becomes a real learning tool for each pair of students, smoothly integrated into the ongoing teaching in the same space where teaching normally takes place.
With more interactive support from the teacher, the computer-based parallel, dynamic representations can be effective in promoting students' learning with a focus on qualitative reasoning, an often neglected part of the learning process in upper secondary school physics.
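The physics behind the spreadsheet representations is the standard phasor analysis of a driven AC series circuit. A minimal sketch of those relations, written here for illustration (not the thesis's spreadsheet tool; function and parameter names are mine):

```python
import math

def rlc_series_response(R, L, C, f, U):
    """Steady-state response of a series RLC circuit driven at frequency f (Hz).

    R in ohms, L in henries, C in farads, U is the source voltage amplitude.
    Returns (current amplitude in A, phase angle in degrees between voltage
    and current), from the complex impedance Z = R + j(wL - 1/(wC)).
    """
    w = 2 * math.pi * f
    X = w * L - 1 / (w * C)        # net reactance: inductive minus capacitive
    Z = math.hypot(R, X)           # impedance magnitude |Z|
    I = U / Z                      # Ohm's law for the amplitude
    phase = math.degrees(math.atan2(X, R))
    return I, phase
```

At the resonance frequency f = 1 / (2π√(LC)) the reactances cancel, the phase goes to zero and the current peaks at U/R, which is the kind of qualitative behaviour the dynamic graphical representation is meant to make visible.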
Abstract:
Traditionally, a large share of the services purchased by companies has been billed by the hours worked, as time-based billing. Offering services as deliverables with a predefined content and price is also becoming more common. The goal of this study was to examine the differences between these pricing models from the perspective of total cost. Data were collected on the target company's service purchases: repair and modification work on process piping, and electric motor overhauls. In addition to invoice data, information such as the number of work hours was available for individual jobs. Using the service suppliers' price lists, a calculated set of comparison prices was created under the alternative pricing model. Other elements affecting total cost were estimated based on experience. According to the study, deliverable-based pricing appears to suit recurring, similar purchases such as electric motor overhauls. In the purchase of piping work, by contrast, because of the difficulty of defining the demanding work content in advance and because of additional work, time-based pricing was estimated to have a lower total cost. Which pricing model is better thus depends on the content of the service being purchased.
Abstract:
Centrifugal pumps are a notable end consumer of electrical energy. A typical application of a centrifugal pump is the filling or emptying of a reservoir tank, where the pump is often operated at a constant speed until the process is completed. Installing a frequency converter to control the motor replaces the traditional fixed-speed pumping system, allows the rotational speed profile to be optimized for the pumping task and enables the estimation of the rotational speed and shaft torque of an induction motor without any additional measurements from the motor shaft. Variable-speed operation provides the possibility to decrease the overall energy consumption of the pumping task. The static head of the pumping process may change during the pumping task. In such systems, the minimum rotational speed changes during reservoir filling or emptying, and the minimum energy consumption cannot be achieved with a fixed rotational speed. This thesis presents embedded algorithms to automatically identify, optimize and monitor pumping processes between supply and destination reservoirs, and evaluates the optimization method based on the changing static head.
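The dependence of the minimum speed on static head follows from the pump affinity laws, under which the head developed by the pump scales with the square of rotational speed. A rough sketch of that relation, applied to the shut-off head of the pump curve (an illustrative simplification of mine, not the thesis's embedded algorithm):

```python
import math

def min_pump_speed(n_nom, H_shutoff_nom, H_static):
    """Lowest rotational speed (same unit as n_nom, e.g. rpm) at which the
    pump can still develop the required static head.

    Uses the affinity law H ~ n^2 applied to the shut-off head of the pump
    curve at nominal speed; below this speed no flow is produced at all.
    """
    if H_static >= H_shutoff_nom:
        raise ValueError("static head exceeds pump capability at nominal speed")
    return n_nom * math.sqrt(H_static / H_shutoff_nom)
```

For example, with a nominal speed of 1450 rpm and a 20 m shut-off head, a tank level corresponding to 5 m of static head allows operation down to 725 rpm, but as the level rises toward 15 m the minimum speed climbs past 1255 rpm, which is why a single fixed speed cannot stay energy-optimal over the whole filling task.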
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. With the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
Abstract:
An ERP system investment analysis method using the Fuzzy Pay-Off approach for Real Option valuation is examined. The study shows how the investment can be adopted incrementally and analyzed as a compound Real Option model, a formulation that also supports follow-up during the investment. The IS system development model COCOMO is presented as an example for investment analysis. The thesis presents the use of Real Options as an alternative for the valuation of an investment and proposes continuous follow-up while the investment is under way; this analysis can be performed using Real Options. As a tool for the analysis, the Fuzzy Pay-Off method is presented as an alternative for investment valuation.
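In the Fuzzy Pay-Off method, the NPV of the investment is modelled as a fuzzy number (commonly triangular, built from pessimistic, base and optimistic scenarios), and the real option value weights the mean of the positive side by the share of the pay-off distribution that is positive. A minimal numeric sketch of that idea follows; note that it substitutes a plain centroid for the possibilistic mean used in the published method, so it is an approximation for illustration, not the thesis's model:

```python
def fuzzy_payoff_value(a, b, c, steps=100000):
    """Approximate real option value of a triangular fuzzy NPV (a < b < c).

    ROV = (area of the positive side of the pay-off distribution / total
    area) * mean of the positive side.  The mean here is the centroid of
    the positive part, a simplifying stand-in for the possibilistic mean.
    a, b, c: pessimistic, base-case and optimistic NPV scenarios.
    """
    dx = (c - a) / steps
    area = pos_area = pos_moment = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx                       # midpoint rule
        mu = (x - a) / (b - a) if x <= b else (c - x) / (c - b)
        area += mu * dx
        if x > 0:                                    # positive-NPV side only
            pos_area += mu * dx
            pos_moment += x * mu * dx
    if pos_area == 0.0:
        return 0.0                                   # no upside, option worthless
    return (pos_area / area) * (pos_moment / pos_area)
```

A fully positive pay-off collapses to the plain expected NPV, while a fully negative one yields an option value of zero, which matches the option-like asymmetry the method is meant to capture.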
Abstract:
The purpose of this thesis is to study, investigate and compare the usability of open source content management systems (CMS). The thesis examines and compares the usability aspects of several open source CMS. The research is divided into two complementary parts: a theoretical part and an analytical part. The theoretical part mainly describes open source web content management systems, usability and the evaluation methods. The analytical part compares and analyzes the results of the empirical research. The heuristic evaluation method was used to measure usability problems in the interfaces. The study is fairly limited in scope; six tasks were designed and carried out in each interface to discover defects. Usability problems were rated according to their level of severity. The time taken by each task, the severity level of each problem and the type of heuristic violated were recorded, analyzed and compared. The results of this study indicate that the compared systems provide usable interfaces, and WordPress is recognized as the most usable system.
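The recorded findings lend themselves to simple aggregation: counting how often each heuristic is violated and locating the most severe problem. A sketch of such a tally follows; the finding data, task names and the 0-4 severity scale (Nielsen's) are illustrative assumptions, not data from the study:

```python
from collections import Counter

# Hypothetical findings from a heuristic evaluation session:
# (task, heuristic violated, severity on Nielsen's 0-4 scale).
findings = [
    ("create post", "visibility of system status", 2),
    ("create post", "error prevention", 3),
    ("upload image", "consistency and standards", 1),
    ("upload image", "error prevention", 4),
]

# How often each heuristic was violated across all tasks.
by_heuristic = Counter(h for _, h, _ in findings)

# The single most severe problem found.
worst = max(findings, key=lambda f: f[2])
```

Summaries like these make the per-system comparison concrete: the system whose worst findings cluster at high severity, or which violates the same heuristic repeatedly, ranks lower on usability.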
Abstract:
Systemic iron overload (IO) is considered a principal determinant in the clinical outcome of different forms of IO and in allogeneic hematopoietic stem cell transplantation (alloSCT). However, indirect markers for iron do not provide exact quantification of iron burden, and the evidence of iron-induced adverse effects in hematological diseases has not been established. Hepatic iron concentration (HIC) has been found to represent systemic IO, and it can be quantified safely with magnetic resonance imaging (MRI), based on enhanced transverse relaxation. The iron measurement methods based on MRI are still evolving. The aims of this study were to implement and optimise the methodology of non-invasive iron measurement with MRI to assess the degree and the role of IO in the patients. An MRI-based HIC method (M-HIC) and a transverse relaxation rate (R2*) from M-HIC images were validated. Thereafter, a transverse relaxation rate (R2) from spin-echo imaging was calibrated for IO assessment. Two analysis methods, visual grading and rSI, for rapid IO grading from in-phase and out-of-phase images were introduced. Additionally, clinical iron indicators were evaluated. The degree of hepatic and cardiac iron in our study patients and IO as a prognostic factor in patients undergoing alloSCT were explored. In vivo and in vitro validations indicated that M-HIC and R2* are both accurate in the quantification of liver iron. R2 was a reliable method for HIC quantification and covered a wider HIC range than M-HIC and R2*. IO could be graded rapidly with the visual grading and rSI methods. Transfusion load was more accurate than plasma ferritin in predicting transfusional IO. In patients with hematological disorders, hepatic IO was frequent, in contrast to cardiac IO. Patients with myelodysplastic syndrome were found to be the most susceptible to IO.
Pre-transplant IO predicted severe infections during the early post-transplant period, in contrast to the reduced risk of graft-versus-host disease. Poor, iron-induced transplantation outcomes are most likely mediated by severe infections.
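R2* quantification rests on fitting a mono-exponential decay S(TE) = S0·exp(-R2*·TE) to multi-echo signal intensities. A minimal sketch of such a fit via ordinary least squares on log-signal versus echo time (an illustration of the principle, not the study's calibrated method; it ignores the noise floor that real fits must handle):

```python
import math

def fit_r2star(te_ms, signal):
    """Estimate R2* (1/s) from gradient-echo signal magnitudes.

    te_ms: echo times in milliseconds; signal: corresponding intensities.
    Assumes mono-exponential decay S(TE) = S0 * exp(-R2* * TE) and fits a
    straight line to log(S) against TE; the slope is -R2*.
    """
    t = [te / 1000.0 for te in te_ms]        # ms -> s
    y = [math.log(s) for s in signal]
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return -slope
```

Higher hepatic iron shortens T2* and thus raises the fitted R2*, which is the basis for translating the relaxation rate into an iron concentration through a calibration such as the one validated in the study.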
Abstract:
The objective of this thesis was to form an understanding of the common gaps in learning from projects, as well as possible approaches to bridging them. The research focused on the questions of how project teams create knowledge, which factors affect the capture and re-use of this knowledge and how organizations can best capture and utilize this project-based knowledge. The method used was qualitative metasummary, a literature-based research method that has previously been applied mainly in the domains of nursing and health care research. The focus was on firms conducting knowledge-intensive business in some form of matrix organization. The research produced a theoretical model of knowledge creation in projects as well as a typology of factors affecting the transfer of project-based knowledge. These include experience, culture and leadership, planning and controlling, relationships, project review and documentation. From these factors, suggestions could be derived as to how organizations should conduct projects in order not to lose what has been learned.
Abstract:
This work responds to the need to manage the quality of a high-pressure water mist nozzle with the tools of fluid mechanics. In addition to nozzle test data, the behaviour of the flow inside the nozzle is studied by means of CFD computation. The flow modelling is done with a Navier-Stokes based computational method. The theory part of the work covers flow engineering and its development in general, presents the basic theory and technical solutions used in nozzle computations, and reviews the basic theory of computational fluid dynamics (CFD). The research part presents the processed nozzle test results and models the nozzle flow with a method based on steady-state flow computation. The flow computations use the SIMPLE solver of the OpenFOAM software package together with the k-omega SST turbulence model. Flow modelling was carried out at all the pressures actually used in nozzle testing. In addition, possible cavitation sites in the nozzle were identified and a cavitation-preventing nozzle geometry was designed. Temperature and impurities were also found to affect cavitation, and the effect of temperature was modelled. A model was created with which the challenges of nozzle design can be addressed by numerical computation.
Abstract:
Teaching programming as part of general education has recently attracted interest in Finland and elsewhere in the world. For example, according to the national core curriculum for basic education defined by the Finnish National Board of Education and taken into use in 2016, programming skills will be taught in Finnish comprehensive schools starting from the first grade. Programming is not being added as a subject of its own; instead, it is to be taught in connection with other subjects, such as mathematics. This study discusses programming education in general education, reviews the most common challenges in learning programming, and examines the suitability of different teaching methods especially for teaching young pupils. For the study, a web-based learning application was implemented, aimed at pupils aged about 9-12 and making effective use of a graphical programming language and visualization. Using the learning application, a comparative study was carried out with the fourth-grade classes of a primary school, in which teaching with a graphical programming language was compared with another teaching method in which the pupils became familiar with the basics of programming through physical games. In the comparative study, the pupils of two fourth-grade classes performed similar programming exercises on the basic concepts of programming with both teaching methods. The aim of the study was to find out the current programming competence of primary school pupils, how programming instruction is received by them, whether different teaching methods matter for how the teaching is carried out, and whether differences appear in the learning outcomes of classes taught with different methods. The pupils responded positively to both teaching methods and showed interest in studying programming.
In terms of content, too much material had been reserved for the lessons, but for one of the most central topics, the concept of repetition, the class that practised with physical games showed considerably better command after the lesson than the class that practised with the graphical programming language. Competence related to the sequential nature of program code was already well in hand among the fourth-graders before the programming exercises. Based on background research on the topic and interviews with the class teachers, the schools' readiness to teach programming in accordance with the curriculum reform was found to still be at a weak level.
Abstract:
Completed under the co-supervision of Karen C. Waldron and Dominic Rochefort.