888 results for Web modelling methods
Abstract:
The goal of this study was to investigate the value network and business models of wireless internet services. The study was qualitative in nature and used a constructive case study as its research strategy. The example service was the Treasure Hunters mobile phone game. The study consisted of a theoretical and an empirical part. The theoretical part conceptually linked innovation, business models and the value network to one another, and laid the foundation for developing business models. The empirical part first focused on creating business models based on the developed innovations, and finally aimed to define a value network for implementing the service. The research methods used were an innovation session, interviews and a questionnaire survey. Based on the results, several business concepts were formed, together with a description of a basic value network model for wireless games. The conclusion was that wireless services require a value network consisting of several actors in order to be realized.
Abstract:
Rare diseases are typically chronic medical conditions of genetic etiology characterized by low prevalence and high complexity. Patients living with rare diseases face numerous physical, psychosocial and economic challenges that place them in the realm of health disparities. Congenital hypogonadotropic hypogonadism (CHH) is a rare endocrine disorder characterized by absent puberty and infertility. Little is known about the psychosocial impact of CHH on patients or their adherence to available treatments. This project aimed to examine the relationship between illness perceptions, depressive symptoms and adherence to treatment in men with CHH using the nursing-sensitive Health Promotion Model (HPM). A community-based participatory research (CBPR) framework was employed as a model for empowering patients and overcoming health inequities. The study design used a sequential, explanatory mixed-methods approach. To reach dispersed CHH men, we used web-based recruitment and data collection (online survey). Subsequently, three patient focus groups were conducted to provide explanatory insights into the online survey (i.e. barriers to adherence, challenges of CHH, and coping/support). The online survey (n=101) revealed that CHH men struggle with adherence and often have long gaps in care (40% >1 year). They experience negative psychosocial consequences because of CHH and exhibit significantly increased rates of depression (p<0.001). Focus group participants (n=26) identified healthcare system, interpersonal, and personal factors as barriers to adherence. Further, CHH impacts quality of life and impedes psychosexual development in these men. The CHH men are active internet users who rely on the web for crowdsourcing solutions and peer-to-peer support. Moreover, they are receptive to web-based interventions to address unmet health needs. This thesis contributes to nursing knowledge in several ways. First, it demonstrates the utility of the HPM as a valuable theoretical construct for understanding medication adherence and for assessing rare disease patients. Second, these data identify a range of unmet health needs that are targets for patient-centered interventions. Third, leveraging technology (high-tech) effectively extended the reach of nursing care, while the CBPR approach and focus groups (high-touch) served as concurrent nursing interventions facilitating patient empowerment in overcoming health disparities. Last, these findings hold promise for developing e-health interventions to bridge identified shortfalls in care and for activating patients for enhanced self-care and wellness.
Abstract:
Today, fast access to information and good manageability of information are key business issues, which is why there is a drive to integrate existing information systems. Integration imposes many kinds of requirements, so the selection of a suitable integration method and technology must be considered carefully. An integration implementation should aim at so-called loose coupling, which makes it possible to achieve independence of time, place and platform. This minimizes the assumptions made between the parties to the integration, improving the manageability and fault tolerance of the integration. This master's thesis focuses on examining the properties, advantages and disadvantages of the integration methods and technologies currently used in industry. In addition, the thesis introduces Web services technology and implements an asynchronous data replication application using it. Web services is a still-evolving service-oriented technology that aims to overcome many of the problems that plagued earlier technologies. One of its main goals is to create a loose coupling between the integrated parties and to enable operation in a heterogeneous environment. However, the technology still suffers from a lack of standards, for example in security matters, and from the development of overlapping standards by different vendors. For the technology to become widespread, these problems must be solved.
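A minimal sketch of the loose-coupling idea the abstract describes, assuming a message-queue style of integration: the source and target systems never call each other directly, so neither side assumes the other is available at the same time or runs on the same platform. All names here are illustrative, not from the thesis.

```python
import queue
import threading

message_bus = queue.Queue()  # stands in for an asynchronous transport (e.g. SOAP over MOM)

def source_system(records):
    """Publish records without knowing who consumes them."""
    for record in records:
        message_bus.put(record)   # fire-and-forget: asynchronous send
    message_bus.put(None)         # sentinel: no more data

def target_system():
    """Consume records whenever they arrive, at its own pace."""
    while True:
        record = message_bus.get()
        if record is None:
            break
        print("replicated:", record)

consumer = threading.Thread(target=target_system)
consumer.start()
source_system([{"id": 1, "name": "pump"}, {"id": 2, "name": "valve"}])
consumer.join()
```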
Abstract:
Performance and load testing of applications is a very important part of today's production process, and web applications, too, are being tested more and more. The need for performance and load testing is clear: correctly executed tests, and the corrective actions that follow them, guarantee that the environment under test works both now and in the future. However, testing large numbers of users manually is very difficult, and testing a fragmented environment, such as a service-based web application environment, is a challenge. The topic of this thesis is to evaluate tools and methods for testing heavy industrial web applications. The goal is to find testing methods that can reliably simulate large numbers of users, and to evaluate the effect of different connections and protocols on the performance of a web application.
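A minimal sketch of the kind of load test the abstract discusses: N simulated users issue HTTP requests concurrently and per-request latencies are collected. Dedicated tools add ramp-up schedules, think times and protocol mixes; the URL and user count below are placeholders.

```python
import threading
import time
import urllib.request

URL = "http://localhost:8080/"   # placeholder target
N_USERS = 20                     # simulated concurrent users
latencies = []
lock = threading.Lock()

def virtual_user():
    start = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
    except OSError:
        return                   # count only successful requests
    elapsed = time.perf_counter() - start
    with lock:
        latencies.append(elapsed)

threads = [threading.Thread(target=virtual_user) for _ in range(N_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
if latencies:
    print(f"{len(latencies)} ok, mean latency {sum(latencies)/len(latencies):.3f}s")
```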
Abstract:
Industrial applications increasingly require real-time data processing, and reliability is one of the most important properties of a system capable of it. To achieve reliability, both the hardware and the software must be tested. The main focus of this thesis is hardware testing and hardware testability, because a reliable hardware platform is the foundation for future real-time systems. The thesis presents the design of a processor board suitable for digital signal processing, intended for predictive condition monitoring of electrical machines. The latest DFT (Design for Testability) methods are introduced and applied in the design of the processor board together with older methods, and experiences and observations on the applicability of the methods are reported at the end of the thesis. The goal of the work is to develop a component for a web-based monitoring system developed at the Department of Electrical Engineering at Lappeenranta University of Technology.
Abstract:
Despite the high degree of automation in the turning industry, a few key problems prevent the complete automation of turning. One of these problems is tool wear. This work focuses on implementing an automatic system for measuring wear, in particular flank wear, by means of machine vision. The wear measurement system removes the need for manual measurement and minimizes the time spent measuring tool wear. In addition to measurement, the modelling and prediction of wear are studied. The automatic measurement system was placed inside a lathe and was successfully integrated with external systems. The experiments showed that the measurement system is able to measure tool wear in its real operating environment, and that it can withstand the disturbances that are common for machine vision systems. The modelling of tool wear was studied with several different methods, including neural networks and support vector regression. The experiments showed that the studied models were able to predict the degree of tool wear from the time the tool had been in use. The best results were given by neural networks with Bayesian regularization.
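A minimal sketch of the wear-prediction setup described above: regression models are fitted to predict flank wear from accumulated cutting time. scikit-learn's MLPRegressor (with a plain L2 penalty) and SVR stand in for the thesis's neural networks with Bayesian regularization and support vector regression; the synthetic data below is illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 80).reshape(-1, 1)                   # cutting time (min)
wear = 0.02 * t.ravel() ** 0.7 + rng.normal(0, 0.01, 80)    # flank wear (mm)

nn = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=5000,
                  random_state=0).fit(t, wear)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(t, wear)

t_new = np.array([[45.0]])      # predict wear after 45 minutes of cutting
print("NN prediction :", nn.predict(t_new)[0])
print("SVR prediction:", svr.predict(t_new)[0])
```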
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open the way to an essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical in environmental sciences in mind, and the development work was pursued across several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
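A minimal sketch of the adaptation idea underlying methods like DRAM: the Gaussian proposal covariance is re-estimated from the chain's own history as sampling proceeds. The delayed-rejection stage of DRAM is omitted for brevity, and the banana-shaped test target and tuning constants are illustrative, not the authors' implementation.

```python
import numpy as np

def log_target(x):                          # an arbitrary 2-D test density
    return -0.5 * (x[0] ** 2 + (x[1] - x[0] ** 2) ** 2)

rng = np.random.default_rng(1)
n_iter, d = 20000, 2
s_d = 2.4 ** 2 / d                          # scaling from Haario et al. (2001)
chain = np.zeros((n_iter, d))
cov = np.eye(d) * 0.1                       # initial proposal covariance
x, logp = np.zeros(d), log_target(np.zeros(d))

for i in range(1, n_iter):
    proposal = rng.multivariate_normal(x, cov)
    logp_prop = log_target(proposal)
    if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
        x, logp = proposal, logp_prop
    chain[i] = x
    if i >= 1000 and i % 100 == 0:          # adapt: learn covariance from history
        cov = s_d * np.cov(chain[:i].T) + 1e-8 * np.eye(d)

print("posterior mean estimate:", chain[5000:].mean(axis=0))
```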
Abstract:
Mountain regions worldwide are particularly sensitive to ongoing climate change. In the Swiss Alps specifically, temperature has increased twice as fast as in the rest of the Northern hemisphere. Water temperature closely follows the annual air temperature cycle, severely impacting streams and freshwater ecosystems. In the last 20 years, brown trout (Salmo trutta L.) catch has declined by approximately 40-50% in many rivers in Switzerland, and increasing water temperature has been suggested as one of the most likely causes of this decline. Temperature has a direct effect on trout population dynamics through developmental and disease control, but can also impact dynamics indirectly via food-web interactions such as resource availability. We developed a spatially explicit modelling framework that allows spatial and temporal projections of trout biomass, using the Aare river catchment as a model system, in order to assess the spatial and seasonal patterns of trout biomass variation. Given that biomass varies seasonally depending on trout life history stage, we developed seasonal biomass variation models for three periods of the year (autumn-winter, spring and summer). Because stream water temperature is a critical parameter for brown trout development, we first calibrated a model to predict water temperature as a function of air temperature, so that climate change scenarios could then be applied. We then built a model of trout biomass variation by linking water temperature to trout biomass measurements collected by electro-fishing at 21 stations from 2009 to 2011. The different modelling components of our framework had good overall predictive ability, and we could show a seasonal effect of water temperature on trout biomass variation. Our statistical framework uses a minimal set of input variables, which makes it easily transferable to other study areas or fish species, but it could be improved by including effects of the biotic environment and the evolution of demographic parameters over time. Nevertheless, the framework remains informative for highlighting spatially where potential changes in water temperature could affect trout biomass.
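A minimal sketch of the first modelling component described above: calibrating stream water temperature as a function of air temperature so that air-temperature scenarios can be propagated to water temperature. A plain linear least-squares fit is used purely for illustration; the paper's actual model form is not reproduced here, and the data are synthetic.

```python
import numpy as np

air = np.array([2.0, 5.0, 9.0, 14.0, 18.0, 22.0])      # air temperature (deg C)
water = np.array([3.1, 4.9, 7.6, 10.8, 13.2, 15.1])    # water temperature (deg C)

slope, intercept = np.polyfit(air, water, 1)           # least-squares calibration
print(f"water ~= {slope:.2f} * air + {intercept:.2f}")

scenario_air = air + 2.0        # e.g. a +2 deg C climate scenario
print("projected water temps:", slope * scenario_air + intercept)
```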
Abstract:
1. Species distribution models (SDMs) have become a standard tool in ecology and applied conservation biology. Modelling rare and threatened species is particularly important for conservation purposes. However, modelling rare species is difficult because the combination of few occurrences and many predictor variables easily leads to model overfitting. A new strategy using ensembles of small models was recently developed in an attempt to overcome this limitation of rare species modelling, but it has so far been tested successfully on only a single species. Here, we test the approach more comprehensively on a large number of species, including a transferability assessment. 2. For each species, numerous small (here bivariate) models were calibrated, evaluated and averaged into an ensemble weighted by AUC scores. These 'ensembles of small models' (ESMs) were compared to standard SDMs using three commonly used modelling techniques (GLM, GBM, Maxent) and their ensemble prediction. We tested 107 rare and under-sampled plant species of conservation concern in Switzerland. 3. We show that ESMs performed significantly better than standard SDMs, and the rarer the species, the more pronounced the effect. ESMs were also superior to standard SDMs and their ensemble when evaluated independently in a transferability assessment. 4. By averaging simple small models into an ensemble, ESMs avoid overfitting without losing explanatory power through reducing the number of predictor variables. They further improve the reliability of species distribution models, especially for rare species, and thus help to overcome the limitations of modelling rare species.
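A minimal sketch of the ESM strategy as the abstract describes it: every bivariate predictor pair gets its own simple model, each model is scored by AUC, and predictions are averaged with AUC weights. Logistic regression stands in for the GLM/GBM/Maxent techniques of the paper, evaluation is done on the training data for brevity, and the occurrence data are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))                     # 5 environmental predictors
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, 60) > 0).astype(int)

weights, preds = [], []
for i, j in combinations(range(X.shape[1]), 2):  # all bivariate models
    model = LogisticRegression().fit(X[:, [i, j]], y)
    p = model.predict_proba(X[:, [i, j]])[:, 1]
    auc = roc_auc_score(y, p)
    if auc > 0.5:                                # discard uninformative models
        weights.append(auc)
        preds.append(p)

esm_prediction = np.average(preds, axis=0, weights=weights)
print("ensemble AUC:", roc_auc_score(y, esm_prediction))
```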
Abstract:
Background: Information about the composition of regulatory regions is of great value for designing experiments to functionally characterize gene expression. The multiplicity of available applications to predict transcription factor binding sites in a particular locus contrasts with the substantial computational expertise demanded to manipulate them, which may constitute a barrier for the experimental community. Results: CBS (Conserved regulatory Binding Sites, http://compfly.bio.ub.es/CBS) is a public platform of evolutionarily conserved binding sites and enhancers predicted in multiple Drosophila genomes, furnished with published chromatin signatures associated with transcriptionally active regions and other experimental sources of information. Rapid access to this novel body of knowledge through a user-friendly web interface enables non-expert users to identify the binding sequences available for any particular gene, transcription factor, or genome region. Conclusions: The CBS platform is a powerful resource that provides tools for mining individual sequences and groups of co-expressed genes with epigenomics information to conduct regulatory screenings in Drosophila.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so since the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome, and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
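A minimal sketch of one step the abstract mentions, extracting input fields from an HTML search form so a crawler can represent the interface as a set of named parameters. Python's stdlib HTMLParser is used; a real system such as the I-Crawler discussed here must also handle JavaScript-rich forms, which this sketch does not.

```python
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collect (name, type) pairs for the fields of a search form."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            a = dict(attrs)
            if a.get("name"):                    # unnamed fields carry no data
                self.fields.append((a["name"], a.get("type", tag)))

page = """<form action="/search"><input name="title" type="text">
<select name="year"><option>2009</option></select>
<input type="submit" value="Go"></form>"""

extractor = FormFieldExtractor()
extractor.feed(page)
print(extractor.fields)   # [('title', 'text'), ('year', 'select')]
```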
Abstract:
This paper presents the current state and development of a prototype web-GIS (Geographic Information System) decision support platform intended for application in natural hazard and risk management, mainly for floods and landslides. The platform uses open-source geospatial software and technologies, particularly the Boundless (formerly OpenGeo) framework and its client-side software development kit (SDK). Its main purpose is to assist experts and stakeholders in the decision-making process for evaluating and selecting different risk management strategies through an interactive participation approach, integrating a web-GIS interface with a decision support tool based on a compromise programming approach. The access rights and functionality of the platform vary depending on the roles and responsibilities of the stakeholders in managing the risk. The application of the prototype platform is demonstrated on an example case study site, the municipality of Malborghetto Valbruna in north-eastern Italy, where flash floods and landslides are frequent and major events occurred in 2003. Preliminary feedback collected from stakeholders in the region is discussed to understand their perspectives on the proposed prototype platform.
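A minimal sketch of the compromise programming idea the decision support tool is based on: each risk management alternative is scored by its weighted L_p distance from the ideal point, and the closest alternative is preferred. The criteria, weights and scores below are placeholders, not values from the paper.

```python
import numpy as np

scores = np.array([[0.8, 0.3, 0.6],    # alternative A: (cost, safety, impact)
                   [0.5, 0.9, 0.4],    # alternative B
                   [0.7, 0.6, 0.7]])   # alternative C (criteria scaled to [0, 1])
weights = np.array([0.5, 0.3, 0.2])    # stakeholder-dependent criterion weights
ideal = scores.max(axis=0)             # best achievable value per criterion
p = 2                                  # distance metric parameter

# weighted L_p distance of each alternative from the ideal point
distance = (weights * (ideal - scores) ** p).sum(axis=1) ** (1 / p)
print("compromise distances:", distance)
print("preferred alternative:", ["A", "B", "C"][int(distance.argmin())])
```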
Abstract:
BACKGROUND: Available methods for simulating nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for a generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model; it allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
Abstract:
Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of the paper, such as basis weight, caliper and porosity. Most of these quantities vary considerably, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine direction (MD) and cross direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman filtering based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The aim of this thesis is to study alternative wavelet-based representations as candidates to replace the Fourier basis for a higher-resolution spatial reconstruction of these quality variables. The accuracy of wavelet-compressed 2D web fields will be compared with the corresponding truncated Fourier series based fields.
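A minimal sketch contrasting the two representations the thesis compares: a 2D field is compressed by keeping only the largest coefficients of (a) its Fourier transform and (b) its wavelet transform, and the reconstruction errors are compared. It requires PyWavelets; the field, wavelet choice and retained fraction are illustrative, not the thesis's setup.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
field = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)   # smooth synthetic 2D field

def keep_largest(coeffs, frac=0.05):
    """Zero out all but the largest `frac` of coefficients by magnitude."""
    flat = np.abs(coeffs).ravel()
    thresh = np.sort(flat)[int((1 - frac) * flat.size)]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0)

# (a) truncated 2D Fourier representation
f_rec = np.fft.ifft2(keep_largest(np.fft.fft2(field))).real

# (b) wavelet representation with a comparable fraction of retained coefficients
arr, slices = pywt.coeffs_to_array(pywt.wavedec2(field, "db4", level=3))
w_rec = pywt.waverec2(pywt.array_to_coeffs(keep_largest(arr), slices,
                                           output_format="wavedec2"), "db4")

for name, rec in (("Fourier", f_rec), ("wavelet", w_rec[:64, :64])):
    err = np.linalg.norm(field - rec) / np.linalg.norm(field)
    print(f"{name:8s} relative error: {err:.4f}")
```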
Abstract:
Ecological studies of food webs rarely include parasites, partly due to the complexity and dimensionality of host-parasite interaction networks. Multiple co-occurring parasites can show different feeding strategies and thus lead to complex and cryptic trophic relationships, which are often difficult to disentangle by traditional methods. We analyzed stable isotope ratios of C (13C/12C, δ13C) and N (15N/14N, δ15N) in host and ectoparasite tissues to investigate trophic structure in four co-occurring ectoparasites (three lice species and one flea species) on two closely related and spatially segregated seabird hosts (Calonectris shearwaters). δ13C isotopic signatures confirmed feathers as the main food resource for the three lice species and blood for the flea species. All ectoparasite species showed a significant enrichment in δ15N relative to the host tissue consumed (discrimination factors ranged from 2 to 5 depending on the species). Isotopic differences were consistent across multiple host-ectoparasite locations, despite some geographic variability in baseline isotopic levels. Our findings illustrate the influence of both ectoparasite and host trophic ecology on the isotopic structuring of the Calonectris ectoparasite community. This study highlights the potential of stable isotope analyses for disentangling the nature and complexity of trophic relationships in symbiotic systems.