922 results for Dominikanerinnenkloster Töss (Töss, Switzerland).


Relevance:

20.00%

Publisher:

Abstract:

This doctoral dissertation introduces an algorithm for constructing the most probable Bayesian network from data in small domains. The algorithm is used to show that a popular goodness criterion for Bayesian networks has a severe sensitivity problem. The dissertation then proposes an information-theoretic criterion that avoids this problem.
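The abstract does not spell out the search procedure, but for very small domains the most probable network can in principle be found by exhaustively scoring every directed acyclic graph. The sketch below illustrates that idea with a simple decomposable score (BIC over binary data); the score and the variable names are illustrative assumptions, not the dissertation's actual method.

```python
import numpy as np
from itertools import combinations, product

def bic_family_score(data, child, parents):
    """BIC score of one child variable given its parent set (binary data)."""
    n = data.shape[0]
    parent_cols = data[:, parents] if parents else np.zeros((n, 0), dtype=int)
    score = 0.0
    for config in product([0, 1], repeat=len(parents)):
        mask = np.all(parent_cols == config, axis=1)
        m = mask.sum()
        if m == 0:
            continue
        k = data[mask, child].sum()
        for c in (k, m - k):            # log-likelihood of the conditional Bernoulli
            if c > 0:
                score += c * np.log(c / m)
    score -= 0.5 * np.log(n) * (2 ** len(parents))   # complexity penalty
    return score

def _cyclic(edge_set, n_vars):
    """Detect a directed cycle by repeatedly discarding sink nodes."""
    succ = {v: set() for v in range(n_vars)}
    for u, v in edge_set:
        succ[u].add(v)
    remaining, changed = set(range(n_vars)), True
    while changed:
        changed = False
        for v in list(remaining):
            if not (succ[v] & remaining):
                remaining.discard(v)
                changed = True
    return bool(remaining)

def best_network(data):
    """Exhaustively score every DAG over the columns of `data` (tiny domains only)."""
    n_vars = data.shape[1]
    edges = [(i, j) for i in range(n_vars) for j in range(n_vars) if i != j]
    best, best_score = None, -np.inf
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            if _cyclic(subset, n_vars):
                continue
            parents = {v: [u for (u, w) in subset if w == v] for v in range(n_vars)}
            s = sum(bic_family_score(data, v, parents[v]) for v in range(n_vars))
            if s > best_score:
                best, best_score = subset, s
    return best, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=(200, 3))
    x[:, 2] = x[:, 0] ^ (rng.random(200) < 0.1)   # column 2 depends on column 0
    print(best_network(x))
```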

Relevance:

20.00%

Publisher:

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. These problems belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis.

Haplotype inference is a computational problem in which the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. The problem is important because haplotypes are difficult to measure directly, whereas genotypes are easier to quantify, and haplotypes are the key players when studying, for example, the genetic causes of diseases. This thesis presents three methods for the haplotype inference problem: HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented for learning this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes, so it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) uses a context tree weighting algorithm to efficiently sum over all variable-length Markov chains when evaluating the posterior probability of a haplotype configuration, and algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis, with performance comparable to the best available haplotype inference software.

Local alignment significance is a computational problem in which one asks whether the local similarities between two sequences reflect a true relationship or arise merely by chance. The similarity of two sequences is measured by their best local alignment score, and from that score a p-value is computed: the probability of drawing two sequences from the null model whose best local alignment score is at least as good. Local alignment significance is used routinely, for example, in homology searches. This thesis sketches a general framework for computing a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
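The p-value discussed above is defined in terms of the best local alignment score. As a point of reference, a minimal Smith-Waterman scorer is sketched below in Python; the match, mismatch, and gap costs are arbitrary, and only the optimal score is computed, not the significance framework of the thesis.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score of sequences a and b (linear gap cost)."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0,                  # a local alignment may restart anywhere
                          prev[j - 1] + sub,  # match / substitution
                          prev[j] + gap,      # gap in b
                          curr[j - 1] + gap)  # gap in a
            best = max(best, curr[j])
        prev = curr
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))  # optimal local score for these costs
```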

Relevance:

20.00%

Publisher:

Abstract:

In this thesis we present and evaluate two pattern-matching-based methods for answer extraction in textual question answering systems. A textual question answering system is a system that seeks answers to natural language questions from unstructured text. Textual question answering is an important research problem: as the amount of natural language text in digital format keeps growing, the need for novel methods for pinpointing important knowledge in vast textual databases becomes ever more urgent.

We concentrate on methods for the automatic creation of answer extraction patterns, and a new type of extraction pattern is also developed. The pattern-matching approach is attractive because it is language- and application-independent. The answer extraction methods are developed within the framework of our own question answering system, and publicly available English datasets are used as training and evaluation data. The techniques developed are based on the well-known methods of sequence alignment and hierarchical clustering, with a similarity metric based on edit distance.

The main conclusion of the research is that answer extraction patterns consisting of the most important words of the question, together with the following information extracted from the answer context (plain words, part-of-speech tags, punctuation marks, and capitalization patterns), can be used in the answer extraction module of a question answering system. This type of pattern and the two new methods for generating answer extraction patterns give average results compared with other systems using the same dataset. However, most answer extraction methods in the question answering systems tested on the same dataset are hand-crafted and rely on a system-specific, fine-grained question classification. The new methods developed in this thesis require no manual creation of answer extraction patterns; as a source of knowledge, they require a dataset of sample questions and answers, together with a set of text documents that contain answers to most of the questions. The question classification used in the training data is a standard one and is already provided in the publicly available data.
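The abstract names edit distance as the similarity metric behind the alignment and clustering steps. A minimal Levenshtein distance in Python is sketched below as a reminder of what that metric computes; the thesis's actual token-level variant and weighting are not specified here.

```python
def edit_distance(a, b):
    """Levenshtein distance between sequences a and b (unit costs)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# Works on word tokens as well as on characters:
print(edit_distance("kitten", "sitting"))                        # 3
print(edit_distance(["what", "is", "X"], ["what", "was", "X"]))  # 1
```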

Relevance:

20.00%

Publisher:

Abstract:

The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite equals the sum of the fluxes that consume the same molecule; the steady state thus imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown.

More information about the fluxes can be obtained from isotopic labelling experiments, in which a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them, so these relative abundances carry information about the fluxes that cannot be recovered from the steady-state balance constraints alone. The field of research that estimates fluxes from measurements of the relative abundances of labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis.

There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are generated iteratively until they fit the measured abundances of the labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of the labelling patterns of metabolites; mathematically involved non-linear optimization methods, which can get stuck in local optima, are thereby avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information, and the optimization approach can easily be applied regardless of the labelling measurement technology and with any network topology.

In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of the external nutrients, and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free of restrictive assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites; the propagation is facilitated by a flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects the sets of metabolites whose relative abundances of labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we provide computational tools for processing raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
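The steady-state balance constraints mentioned above can be written as S v = 0, where S is the stoichiometric matrix of the internal metabolites and v is the flux vector. The sketch below uses a toy network (not one from the thesis) and SciPy to show that the null space of S has more than one dimension, i.e. the balance constraints alone leave the fluxes through alternative pathways undetermined.

```python
import numpy as np
from scipy.linalg import null_space

# Toy network: nutrient -> A, then A -> B by two alternative reactions, and B -> product.
# Rows = internal metabolites (A, B); columns = reactions:
#   v0: -> A,   v1: A -> B (path 1),   v2: A -> B (path 2),   v3: B ->
S = np.array([
    [1, -1, -1,  0],   # balance of A
    [0,  1,  1, -1],   # balance of B
])

basis = null_space(S)     # columns span all steady-state flux vectors
print(basis.shape[1])     # 2 degrees of freedom: the v1 vs v2 split stays unresolved
```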

Relevance:

20.00%

Publisher:

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data.

We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A; a classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e. they should also hold in future data. This is an important distinction from traditional association rules, which, in spite of their name and their similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. The principal objective is therefore to search for rules using statistical significance measures. Another important objective is to search only for non-redundant rules, which express the real causes of the dependence without any incidental extra factors. Such extra factors add no new information about the dependence; they can only blur it and make it less accurate in future data.

The problem is computationally very demanding, because the number of possible rules grows exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention that enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test, and can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, undiscovered dependencies.
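As an illustration of the significance measures listed above, the snippet below evaluates a single candidate rule X -> A with Fisher's exact test from SciPy on a toy 2x2 contingency table; the thesis's search algorithm and pruning bounds are not reproduced here, and the counts are invented for the example.

```python
from scipy.stats import fisher_exact

# Contingency table for a candidate rule X -> A in a toy binary data set:
#                 A present   A absent
#  X holds            40          10
#  X does not hold    25          75
table = [[40, 10],
         [25, 75]]

# One-sided test: does X increase the probability of A?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```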

Relevance:

20.00%

Publisher:

Abstract:

Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. This computationalization and informationalization of everyday activities increases not only our reach, efficiency, and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other hand, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems with regard to data protection is studied.

The tools and methods in this thesis consist of three distinct contributions. An algorithm is proposed for locationing in cellular networks that does not require location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings on the use of smartphones in social scientific field research is reported. A central contribution of this thesis is this set of pragmatic tools that allow other researchers to carry out experiments.

The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. Based on several long-term field studies, the usage of the system is analyzed in the light of how users make inferences about others from real-time contextual cues mediated by the system. The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy for ubiquitous computing, deriving a set of design guidelines for such systems.

The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be exploited not only to study the use of such systems in order to create better systems, but also to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but this self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to let them use the data themselves, rather than remain passive subjects of data gathering.

Relevance:

20.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and of the classifier decision surfaces. Speed depends on the hardware and on the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort.

We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard.

The overall contribution is twofold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the essential to be separated from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed, and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy-versus-effort trade-offs can be controlled after training. Regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
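The abstract models classification as a tree-like process in which cheap stages may delegate hard inputs to more expensive ones. The sketch below illustrates that general idea with a two-stage cascade in Python; the stage classifiers, the confidence rule, and the threshold are illustrative assumptions, not the thesis's framework.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Stage:
    predict: Callable[[Sequence[float]], int]       # cheap or expensive classifier
    confidence: Callable[[Sequence[float]], float]   # how sure the stage is
    cost: float                                      # computational effort per input

def cascade_classify(x, stages, threshold=0.8):
    """Run stages in order of increasing cost; stop as soon as one is confident enough."""
    spent, label = 0.0, None
    for stage in stages:
        spent += stage.cost
        label = stage.predict(x)
        if stage.confidence(x) >= threshold:
            break                                    # confident: skip the costlier stages
    return label, spent

# Toy stages: a cheap threshold on one feature, then a costlier weighted rule.
cheap = Stage(predict=lambda x: int(x[0] > 0.5),
              confidence=lambda x: abs(x[0] - 0.5) * 2,
              cost=1.0)
expensive = Stage(predict=lambda x: int(0.7 * x[0] + 0.3 * x[1] > 0.5),
                  confidence=lambda x: 1.0,
                  cost=10.0)

print(cascade_classify([0.9, 0.2], [cheap, expensive]))   # cheap stage suffices
print(cascade_classify([0.52, 0.8], [cheap, expensive]))  # delegated to the second stage
```

Raising or lowering the threshold moves the operating point along the accuracy-versus-effort trade-off without retraining the stages.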

Relevance:

20.00%

Publisher:

Abstract:

The paradigm of computational vision hypothesizes that any visual function, such as the recognition of your grandparent, can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes a statistical approach, in which one attempts to learn the suitable computations from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world together with the constraints and objectives specified for the learning process.

This thesis consists of an introduction and seven peer-reviewed publications. The purpose of the introduction is to present the area of study to a reader who is not familiar with computational vision research: we briefly overview the primary challenges of visual processing and recall some current views on visual processing in the early visual systems of animals. We then describe the methodology used in our research and discuss the presented results, including some additional remarks, speculations, and conclusions that were not featured in the original publications.

The publications of this thesis present the following results. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast are processed separately in natural systems because of their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide the first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing from nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. We then show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts; such processing might be applicable to priming, i.e. the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves.

In summary, this thesis explores the learning of visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms, and it also offers some predictions and ideas regarding biological visual processing.
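One of the results above is that simple-cell-like receptive fields emerge when independence is maximized over (nonlinearly transformed) natural image data. As a rough, conventional illustration of that line of work, the sketch below runs FastICA on image patches with scikit-learn; it uses synthetic smooth images rather than the natural-image corpus and contrast domain of the thesis, so it only shows the shape of the pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Stand-in for natural images: smoothed random fields (real experiments would use
# patches sampled from photographs of natural scenes).
images = rng.standard_normal((20, 64, 64))
for _ in range(3):   # crude smoothing to add spatial structure
    images = (images + np.roll(images, 1, axis=1) + np.roll(images, 1, axis=2)) / 3

# Sample 8x8 patches and flatten them into rows of a data matrix.
patches = []
for _ in range(5000):
    k = rng.integers(0, 20)
    i, j = rng.integers(0, 56, size=2)
    patches.append(images[k, i:i + 8, j:j + 8].ravel())
X = np.array(patches)
X -= X.mean(axis=0)   # remove the mean per dimension

# Estimate 16 maximally independent components; on real natural images the learned
# filters are Gabor-like, resembling simple-cell receptive fields.
ica = FastICA(n_components=16, whiten="unit-variance", random_state=0)
ica.fit(X)
print(ica.components_.shape)   # (16, 64): one 8x8 filter per row
```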

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application.

A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; this leads to the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, solving the dominating set problem is not sufficient; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate.

This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms; the contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, i.e. geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
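Task (iii) above reduces to dominating-set-style problems. As a baseline illustration (not one of the thesis's local algorithms or approximation schemes), the sketch below computes a dominating set with the standard greedy logarithmic-factor approximation in Python.

```python
def greedy_dominating_set(adj):
    """Greedy approximation: repeatedly pick the node that covers most uncovered nodes."""
    uncovered = set(adj)
    dominating = []
    while uncovered:
        # coverage of each candidate node = its closed neighbourhood among uncovered nodes
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dominating.append(best)
        uncovered -= {best} | adj[best]
    return dominating

# Toy sensor graph: edges connect sensors whose ranges overlap.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4}, 4: {3, 5}, 5: {4},
}
print(greedy_dominating_set(adj))   # [2, 4]: these two nodes cover every node
```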

Relevance:

20.00%

Publisher:

Abstract:

In cross-language information retrieval, the searcher may formulate the query in a different language from the one in which the documents are written. The query and the documents can be matched, for example, by translating the query into the language or languages of the documents. Query translation with dictionaries is the most commonly used method in cross-language information retrieval. This thesis studies query translation with a multilingual thesaurus in cross-language information retrieval and compares it with dictionary-based translation and with monolingual retrieval. To compare the methods, an English-language document collection is built and searched with the Indri search engine, using Dutch queries translated into English with the multilingual EuroWordNet thesaurus or with a Dutch-English dictionary, as well as monolingual English queries. Two versions of every query are formed: an unstructured and a structured one. Based on the results of the study, thesaurus-based cross-language retrieval performed at least as well as the most commonly used dictionary-based cross-language retrieval. Thesaurus-based cross-language retrieval could also make more use of the information about relations between words that the thesaurus contains.
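As a rough illustration of the setup described above (not the thesis's actual experiment), the sketch below translates a Dutch query into English with a toy dictionary and emits both an unstructured and a structured, Indri-style query in which the translation alternatives of one source word are grouped with a synonym operator. The dictionary entries and the query syntax details are illustrative assumptions.

```python
# Toy Dutch -> English dictionary; a thesaurus such as EuroWordNet would map source
# terms to target-language synonym sets in a similar way.
dictionary = {
    "ziekenhuis": ["hospital", "infirmary"],
    "personeel": ["staff", "personnel"],
}

def translate(query_terms):
    return [dictionary.get(t, [t]) for t in query_terms]

def unstructured(alternatives):
    """All translations thrown into one bag of words."""
    return "#combine( " + " ".join(w for alts in alternatives for w in alts) + " )"

def structured(alternatives):
    """Translations of one source word grouped with a synonym operator."""
    groups = ["#syn( " + " ".join(alts) + " )" if len(alts) > 1 else alts[0]
              for alts in alternatives]
    return "#combine( " + " ".join(groups) + " )"

alts = translate(["ziekenhuis", "personeel"])
print(unstructured(alts))  # #combine( hospital infirmary staff personnel )
print(structured(alts))    # #combine( #syn( hospital infirmary ) #syn( staff personnel ) )
```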

Relevance:

20.00%

Publisher:

Abstract:

With Java, an intermediate language suitable for special processing, bytecode, has been reintroduced into the compilation process of programming languages. Normally, when a Java program is executed, a dedicated virtual machine loads the bytecode representation of the program, which is then executed either by interpretation or by compiling it at run time into a language understood by the execution platform. This thesis studies optimization opportunities at the level of the intermediate language. Because of the dynamic nature of object-oriented languages, purely static optimization is difficult and therefore often unfruitful. In the course of this work, however, a closed-world assumption suitable for mobile programming was identified, within which a program can be safely improved at the bytecode level. As an example, an optimization that removes redundant interface classes is implemented. Because optimization algorithms are often complex and hard to follow, the thesis also investigates ways of representing them more simply. The precondition check of an algorithm that reorganizes the class hierarchy, originally implemented in Java, is successfully described with a first-order logic formula, so that the precondition check can be carried out with the logic engine implemented as part of this thesis. The propositions of the logic formulas are described for the logic engine in Java, but the propositions are combined in a logic language that uses and-connectives. In terms of performance, the logic engine is in some cases faster than the Java implementation.

Relevance:

20.00%

Publisher:

Abstract:

Usability testing is a productive and reliable method for evaluating the usability of software. Planning and running a test and analyzing its results is typically considered time-consuming, and applying usability methods in general is considered difficult. Because of this, usability testing is often given a lower priority than more concrete issues in software engineering projects.

Intranet Alma is a web service whose users are the students and personnel of the University of Helsinki. Alma was published in 2004 at the opening ceremony of the university; it has 45 000 users and replaces several earlier university network services. In this thesis, the usability of intranet Alma is evaluated with usability testing. The testing method applied has been lightened so that it is as easy as possible to take into use. In the test, six students each tried to solve nine test tasks with Alma, and the resulting concrete usability problems are described in the final test report. In the applied usability testing, goal orientation was given less importance, and the system was tested only with test users from the largest user group.

The usability test found general usability problems that occurred regardless of the task or the user. However, further evaluation is needed: in addition to the general usability problems, there are task-dependent problems whose solution requires a thorough gathering of the users' goals. The basic structure and central functionality of Alma, for example its navigation, contain serious and frequently recurring usability problems. It would be of interest to verify the designed user interface solutions to these problems before taking them into use. In the long run, the users' goals that the software is intended to support are worth gathering, and software development should be based on these goals.

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies how the requirements recorded in a requirements specification document fix the user interface solutions made in later phases of the software development process. It also studies whether functions needed in real usage situations end up missing from the user interface when the requirements specification is produced according to the traditional waterfall model. The study examines the outputs of two student projects carried out on the software engineering project course of the University of Helsinki. The most central user interface problems of the students' user interfaces are identified by simulation-testing the interfaces. In the testing, the three most central usage situations of the software were simulated; these were identified by conducting a contextual user interview with one teacher tutor. It was then investigated whether the causes of the problems could be traced to the use case descriptions or to the other requirements recorded in the groups' requirements specification documents. The central results of this work are that the use cases always bound a function together with its implementation in the user interface, but only a small fraction of them fixed user interface solutions harmfully. The causes of serious efficiency problems and of functionality missing from the system, however, were found precisely in the use cases of the requirements specification document. The other requirements fixed functions at such a high level that they did not lead to problematic user interface solutions. In addition, it was observed that both requirements specifications lacked functions that would have been needed to carry out a usage situation efficiently. It appears that the users' real usage situations were not discovered during the requirements specification phase, and as a consequence, essential functions were left out of the requirements.

Relevance:

20.00%

Publisher:

Abstract:

Specifying functionality and data content that directly support the end users of a piece of software is challenging, and changes to the software's specifications often cause significant extra costs in software projects. The coverage of the requirements specification can be improved by charting the users' work tasks and by designing a complete user interface solution already at the start of the specification phase, so that the adequacy of the specified data content and functionality in the context of the end users' work tasks can be tested with user interface prototypes. In addition to testing, the screen images of the user interface solution can be used as a negotiation and contract instrument between the customer and the supplier, against which the user interface of the finished software can be accepted at the end of the project. Despite user interface testing, the user interface solution is still exposed to several risks during the project. Because individual screen images do not describe the interaction logic of the user interface comprehensively, implementing the user interface on the basis of screen images alone would easily lead to uncontrolled changes in the user interface solution caused by misunderstandings. To avoid such misunderstandings, the interaction logic of the user interface should be documented separately with series of screen images that show how usage situations proceed. Producing such series is laborious, however, and even simple usage situations often result in long sequences of images. With the textual use sequence descriptions developed in this thesis, the interaction logic of simple usage situations can be documented more compactly than with series of screen images. In addition, the implementation work can be eased by complementing the documentation with the component-specific descriptions of interaction logic developed in this thesis. This thesis evaluates the risks to which the user interface solution designed in the specification phase is exposed during the project, and the minimization of those risks, in particular by means of user interface documentation. As a case study, we use the development project of a working-hours recording application for NCC Rakennus Oy, which is to be implemented as part of a product development project of WM-data Oy aimed at many lines of business. Subject classes (Computing Reviews 1998): D.2.1, H.5.2

Relevance:

20.00%

Publisher:

Abstract:

The vastly increased popularity of the Internet as an effective publication and distribution channel for digital works has created serious challenges for enforcing intellectual property rights. Works are widely disseminated on the Internet, with and without permission. This thesis examines the current problems of licence management and copy protection and outlines a new method and system that address these problems. The WARP system (Works, Authors, Royalties, and Payments) is based on global registration and transfer monitoring of digital works, and on the accounting and collection of Internet-levy-funded usage fees payable to the authors and right holders of the works. The detection and counting of downloads is implemented with origrams: short, original excerpts picked from the content of the digital work. The origrams are used to create digests, digital fingerprints that identify a piece of work transmitted over the Internet without the need to embed ID tags or any other easily removable metadata in the file.
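The abstract does not define how origrams are selected or hashed, so the Python sketch below only illustrates the general flavour of content-derived fingerprints: it hashes character n-grams sampled from a text and matches a transmitted copy against the stored digests. All details (n-gram length, sampling rule, hash, threshold) are assumptions for illustration, not the WARP design.

```python
import hashlib

def digest(text, n=24, step=17):
    """Fingerprint a work as a set of hashed character n-grams sampled from it."""
    grams = (text[i:i + n] for i in range(0, max(len(text) - n, 0) + 1, step))
    return {hashlib.sha256(g.encode("utf-8")).hexdigest()[:16] for g in grams}

def matches(transmitted, registered_digest, threshold=0.5):
    """Does a transmitted file share enough fingerprints with a registered work?"""
    d = digest(transmitted)
    return bool(d) and len(d & registered_digest) / len(d) >= threshold

original = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do " * 10
registry = {"work-001": digest(original)}

copy = original.replace("Lorem", "LOREM", 1)   # lightly modified copy
print(matches(copy, registry["work-001"]))      # likely True: most n-grams survive the edit
```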