34 results for Data Mining and its Application

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 100.00%

Abstract:

The objectives of this work were to synthesize an EDTA-β-CD adsorbent and to investigate its adsorption potential and its application in the preconcentration of rare earth elements (REEs) from the aqueous phase. The adsorption capacity of EDTA-β-CD was investigated in batch studies in both one- and multi-component systems, and the effects of pH, contact time, and initial concentration were evaluated; the analytical detection and characterization methods are also presented. The EDTA-β-CD adsorbent was synthesized successfully with high EDTA coverage. The maximum REE uptake was 0.310 mmol g⁻¹ for La(III), 0.337 mmol g⁻¹ for Ce(III), and 0.353 mmol g⁻¹ for Eu(III). The adsorption kinetics of REEs onto EDTA-β-CD fitted the pseudo-second-order model well, and the adsorption rate was affected by intra-particle diffusion. The experimental data of the one-component studies fitted the Langmuir isotherm model, indicating a homogeneous adsorbent surface, while the extended Sips model was applicable to the isotherm studies in the three-component system. Electrostatic interaction, chelation, and complexation were all involved in the adsorption mechanism. The preconcentration of REE ions and the regeneration of EDTA-β-CD were successful. Overall, EDTA-β-CD is an effective adsorbent for the adsorption and preconcentration of REEs.
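For reference, the kinetic and isotherm models named above have the following standard textbook forms (the equations are not given in the abstract itself); here $q_t$ and $q_e$ are the uptake at time $t$ and at equilibrium, $k_2$ is the pseudo-second-order rate constant, $C_e$ is the equilibrium concentration, and $q_{\max}$ and $K_L$ are the Langmuir capacity and affinity constants:

```latex
% Pseudo-second-order kinetics (linearized) and the Langmuir isotherm
\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e},
\qquad
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
```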

Relevance: 100.00%

Abstract:

Advances in technology have provided new ways of using entertainment and game technology to foster human interaction. Games and playing have always been an important part of people's everyday lives. Traditionally, human-computer interaction (HCI) research was seen as a psychological cognitive science focused on human factors, with the engineering sciences supplying the computer science part. Although cognitive science has made significant progress over the past decade, the influence of people's emotions on design is increasingly important, especially when the primary goal is to challenge and entertain users (Norman 2002). Game developers have explored the key issues in game design and identified user experience as the driving force behind the success of games. User-centered design integrates knowledge of users' activity practices, needs, and preferences into the design process. Geocaching is a location-based treasure hunt game created by a community of players. Players use GPS (Global Positioning System) technology to find "treasures" and to create their own geocaches; the game evolves as players invent new caches and use more imagination in creating them. This doctoral dissertation explores the user experience of geocaching and its applications in tourism and education. According to the Geocaching.com webpage, geocaching is played in about 180 countries and there are more than 10 million registered geocachers worldwide (Geocaching.com, 25.11.2014). This dissertation develops and presents an interaction model called the GameFlow Experience model that can be used to support the design of treasure hunt applications in tourism and education contexts. The GameFlow model presents and clarifies various experiences: it situates these experiences in a real-life context, offers desirable design targets to be utilized in service design, and provides a perspective for evaluating the success of adventure game concepts. User-centered game design has adapted human factors research from mainstream computing science. For many years, the user-centered design approach has been one of the most important research fields in software development. Research has focused on user-centered design in software development, such as office programs, but the same ideas and theories are now also being applied to game design (Charles et al. 2005). For several years, we have seen a growing interest in user experience design. Digital games are experience providers, and game developers need tools to better understand the user experience related to the products and services they create. This thesis presents what user experience is in geocaching and treasure hunt games and how it can be used to develop new treasure hunt concepts. Engineers, designers, and researchers should have a clear understanding of what user experience is, what its parts are, and, most importantly, how user satisfaction can be influenced. In addition, we need to understand how users interact with electronic products and people, and how the different elements combine to shape their experiences. This doctoral dissertation represents pioneering work on the user experience of geocaching and treasure hunt games in the context of tourism and education. The research also provides a model for game developers who are planning treasure hunt concepts.

Relevance: 100.00%

Abstract:

In this doctoral thesis, a tomographic STED microscopy technique for 3D super-resolution imaging was developed and utilized to observe bone remodeling processes. To improve upon existing methods, we used a tomographic approach based on a commercially available stimulated emission depletion (STED) microscope. A region of interest (ROI) was observed at two oblique angles: one in the standard inverted configuration from below (bottom view) and another from the side (side view) via a micro-mirror positioned close to the ROI. The two views were reconstructed into a final tomogram. The technique, named tomographic STED microscopy, achieved an axial resolution of approximately 70 nm on microtubule structures in a fixed biological specimen. High-resolution imaging of osteoclasts (OCs) actively resorbing bone was achieved by creating an optically transparent coating on a microscope coverglass that imitates a fractured bone surface. 2D super-resolution STED microscopy on the bone layer showed a lateral resolution of approximately 60 nm on a resorption-associated organelle, allowing these structures to be imaged with super-resolution microscopy for the first time. The tomographic STED microscopy technique was then applied to study the resorption mechanisms of OCs cultured on the bone coating. It revealed an actin cytoskeleton with specific comet-tail structures, some facing upwards and others downwards. In our opinion, this indicates that the actin cytoskeleton is involved in vesicular exocytosis and endocytosis during bone resorption. The application of tomographic STED microscopy to bone biology demonstrated that 3D super-resolution techniques can provide new insights into biological 3D nanostructures beyond the diffraction limit when the optical constraints of super-resolution imaging are carefully taken into account.

Relevance: 100.00%

Abstract:

Data mining, a widely discussed topic, has been studied in various fields. Its potential for refining the decision-making process, revealing hidden patterns, and creating valuable knowledge has won the attention of scholars and practitioners. However, few studies have attempted to combine data mining with libraries, where data generation occurs all the time; this thesis aims to fill that gap. At the same time, the opportunities created by data mining are explored to enhance one of the most important elements of libraries: the reference service. To demonstrate the feasibility and applicability of data mining, the literature is reviewed to establish a critical understanding of data mining in libraries and to survey the current status of library reference services. The literature review indicates that free online data resources, other than data generated on social media, are rarely considered in current library data mining efforts, which motivates the present study to utilize free online resources. Furthermore, a natural match between data mining and libraries is established. This match is explained by emphasizing the data richness of libraries and by considering data mining as a kind of knowledge, an easy choice for libraries, and a sensible method for overcoming reference service challenges. The natural match, especially the prospect that data mining could help library reference services, lays the main theoretical foundation for the empirical work in this study. Turku Main Library was selected as the case to answer the research question of whether data mining is feasible and applicable for reference service improvement. Daily visits to Turku Main Library from 2009 to 2015 serve as the resource for data mining, and the corresponding weather conditions are collected from Weather Underground, a free online source. Before analysis, the collected dataset is cleansed and preprocessed to ensure the quality of the data mining. Multiple regression analysis is employed to mine the final dataset: hourly visits are the dependent variable, and weather conditions, the Discomfort Index, and the day of the week are the independent variables. Four models, one per season, are established to predict visitor numbers in each season. Patterns are identified in the different seasons, and implications are drawn from the discovered patterns. In addition, library-climate points are generated by a clustering method, which simplifies the process of using weather data to forecast library visits. The data mining results are then interpreted from the perspective of improving the reference service. Finally, the results of the case study were presented to librarians to collect professional opinions on the possibility of employing data mining to improve reference services. The opinions collected were positive, which implies that it is feasible to utilize data mining as a tool to enhance the library reference service.
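As a rough illustration of the kind of seasonal regression described above, here is a minimal sketch; the CSV file and its column names (season, discomfort_index, weekday, hourly_visits, and so on) are hypothetical, not the thesis's actual variables or tooling:

```python
# Minimal sketch: one multiple-regression model per season, predicting hourly
# visits (dependent variable) from weather predictors (independent variables).
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("library_visits_weather.csv")      # hypothetical data layout
df = pd.get_dummies(df, columns=["weekday"])        # encode day of week as dummies

models = {}
for season, group in df.groupby("season"):
    X = group.drop(columns=["season", "hourly_visits"])  # weather predictors
    y = group["hourly_visits"]                           # dependent variable
    models[season] = LinearRegression().fit(X, y)        # one model per season
    print(season, round(models[season].score(X, y), 3))  # in-sample R^2
```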

Relevance: 100.00%

Abstract:

The incredibly rapid growth of air travel to huge volumes, driven mainly by the jet airliners that appeared in the skies in the 1950s, created the need for systematic research into aviation safety and for collecting data about air traffic. Structured data can be analysed easily with database queries and by running the results through graphic tools. However, analysing narratives, which often give more accurate information about a case, requires text mining tools. Computer analysis of textual data only became possible once data mining tools were developed, and their use, at least in aviation, is still at a moderate level. This research aims to discover lethal trends in flight safety reports. The narratives of 1,200 Finnish-language flight safety reports from 1994–1996 were processed with three text mining tools: one totally language-independent, another with a specific configuration for Finnish, and a third originally created for English but tested on Finnish because encouraging results had been achieved with Spanish. The global accident rate is stabilising and the situation can now be regarded as satisfactory, but because of the growth in air traffic, the absolute number of fatal accidents per year might increase if flight safety is not improved. Data collection and reporting systems have reached a mature level; the focal point in improving flight safety is now analysis. Air traffic has generally been forecast to grow 5–6 per cent annually over the next two decades, and during this period global air travel will probably double even under relatively conservative expectations of economic growth. This development confronts airline management with growing pressure from increasing competition, a significant rise in fuel prices, and the need to reduce the incident rate as traffic volumes grow. All of this emphasises the urgent need for new tools and methods. All three systems provided encouraging results, while also revealing challenges still to be overcome. Flight safety can be improved through the development and utilisation of sophisticated analysis tools and methods, such as data mining, and by using their results to support executive decision-making.
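The abstract does not name the three tools; as a generic illustration of the kind of narrative mining involved, here is a toy sketch using invented example narratives and a modern open-source library, not the tools evaluated in the thesis:

```python
# Toy illustration: cluster free-text safety-report narratives with TF-IDF
# features and k-means. The narratives below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

narratives = [
    "runway incursion during taxi in low visibility",
    "altitude deviation after autopilot disconnect",
    "bird strike on final approach, go-around performed",
    "taxiway confusion in low visibility conditions",
]

X = TfidfVectorizer(stop_words="english").fit_transform(narratives)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for text, label in zip(narratives, labels):
    print(label, text)  # reports grouped by shared vocabulary
```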

Relevance: 100.00%

Abstract:

The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm that is most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the optimized variables including the centers, widths, and weights of the basis functions, and with the control parameters both kept fixed and adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based approaches to training radial basis function networks were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting and that the best setting is problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions with all parameters fixed. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
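As a sketch of the baseline algorithm under study, here is classic DE/rand/1/bin with fixed control parameters F and CR; the test function and settings are illustrative, and the thesis's fuzzy-adaptive variant would adjust F and CR during the run instead:

```python
# Classic differential evolution (DE/rand/1/bin) with fixed F and CR.
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(len(lo)) < CR            # binomial crossover mask
            cross[rng.integers(len(lo))] = True         # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            trial_fit = f(trial)
            if trial_fit < fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, trial_fit
    best = fit.argmin()
    return pop[best], fit[best]

# Example: minimize the 5-dimensional sphere function.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 5)
print(x_best, f_best)
```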

Relevance: 100.00%

Abstract:

This master's thesis introduces the fuzzy tolerance/equivalence relation and its application in cluster analysis. The work presents the construction of fuzzy equivalence relations using increasing generators, and investigates the role of increasing generators in the creation of intersection, union, and complement operators. The objective is to develop different varieties of fuzzy tolerance/equivalence relations using different varieties of increasing generators. Finally, we perform a comparative study of these varieties of fuzzy tolerance/equivalence relations in their application to a clustering method.
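For context, the standard definitions behind these terms (not stated in the abstract itself): a fuzzy relation $R$ on a set $X$ is a tolerance relation if it is reflexive and symmetric, and a fuzzy equivalence (similarity) relation if it is additionally max-min transitive:

```latex
R(x, x) = 1, \qquad
R(x, y) = R(y, x), \qquad
R(x, z) \ge \max_{y \in X} \min\bigl( R(x, y),\, R(y, z) \bigr)
\quad \text{for all } x, y, z \in X
```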

Relevance: 100.00%

Abstract:

Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, and their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions; another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account positional information and the mutual similarities of words, and show that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions, and design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
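For reference, the textbook form of the regularized least-squares algorithm mentioned above (the general formulation, not the thesis's specific variants): given training data $(x_i, y_i)_{i=1}^{n}$, a kernel matrix $K$ with $K_{ij} = k(x_i, x_j)$, and a regularization parameter $\lambda > 0$,

```latex
\min_{f \in \mathcal{H}} \sum_{i=1}^{n} \bigl( y_i - f(x_i) \bigr)^2 + \lambda \|f\|_{\mathcal{H}}^2,
\qquad
f(x) = \sum_{i=1}^{n} a_i\, k(x_i, x)
\quad \text{with} \quad
a = (K + \lambda I)^{-1} y
```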

Relevance: 100.00%

Abstract:

The choice of industrial development options and the corresponding allocation of research funds are becoming more and more difficult because of increasing R&D costs and the pressure for shorter development periods. Forecasting research progress is based on analysing publication activity in the field of interest and the dynamics of its change. Moreover, the allocation of funds is hindered by the exponential growth in the number of publications and patents: thematic clusters become more and more difficult to identify, and their evolution hard to follow. Existing approaches to structuring a research field and identifying its development are very limited; they do not identify thematic clusters with adequate precision, and the identified trends are often ambiguous. There is therefore a clear need to develop methods and tools that can identify developing fields of research. The main objective of this thesis is to develop tools and methods that help identify promising research topics in the field of separation processes. Two structuring methods and three approaches for identifying development trends are proposed. The proposed methods have been applied to the analysis of research on distillation and filtration. The results show that the developed methods are universal and can be used to study various fields of research. The identified thematic clusters and the forecast trends in their development were confirmed in almost all tested cases, which demonstrates the universality of the proposed methods. The results allow fast-growing scientific fields to be identified, as well as topics characterized by stagnant or diminishing research activity.

Relevance: 100.00%

Abstract:

This master's thesis examines the implementation of real-time activity-based costing in the information system of a Finnish SME that manufactures laser chips. In addition, the effects of activity-based costing on operations and on activity-based management are examined. On the basis of literature sources, the literature part of the thesis covers activity-based costing theories, calculation methods, and the technologies used in the technical implementation. In the implementation part, a web-based activity-based costing system was designed and built to support the case company's cost accounting and financial administration. The tool was integrated into the company's ERP and manufacturing execution systems. Compared with the traditional data collection systems of activity-based costing models, in the case company the inputs to the activity-based costing system arrive in real time as part of a larger information system integration. The thesis seeks to establish a relationship between the requirements of activity-based costing and database systems. The company can exploit the activity-based costing system, for example, in product pricing and cost accounting by viewing product-related costs from different perspectives. Conclusions can be drawn from accurate cost information, and the data produced by the system can be used to determine whether developing a particular project, customer relationship, or product is economically viable.
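As a minimal illustration of the activity-based costing calculation such a system automates (a generic textbook formula, not one taken from the thesis):

```latex
\text{driver rate} = \frac{\text{activity cost pool}}{\text{driver volume}},
\qquad
\text{product activity cost} = \text{driver rate} \times \text{driver consumption}
```

For example, with invented numbers, a machining cost pool of EUR 50,000 spread over 2,500 machine hours gives a rate of 20 EUR/hour, so a product consuming 120 hours is assigned EUR 2,400 of machining cost.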

Relevance: 100.00%

Abstract:

Accelerating competition between companies has confronted them with difficult challenges. Products should reach the market faster, and new products should be better than old ones and, above all, better than competitors' corresponding products. In addition, the design, manufacturing, and other costs of products should not be high. To meet these challenges, companies often try to make use of product data, its management, and its exchange. Andritz, like other companies, has to take these matters into account to succeed in the competition. This work was done for Andritz, one of the world's leading manufacturers of equipment for paper and pulp production and providers of related maintenance services. Andritz is introducing an ERP system in all of its locations. The company wants to exploit the system as effectively as possible, so product data covering the whole life cycle is also wanted in it. Some of the product data is created by Andritz's partners and subcontractors, so data exchange between partners should also be handled in such a way that the data goes directly into the ERP system. The objective of this work is therefore to find a solution for handling the data exchange between Andritz and its partners. This master's thesis presents the purpose and importance of product data, its management, and its exchange. Different solution alternatives for implementing a data exchange system are presented, some of them based on general and industry-specific standards; two commercial products are also presented. The following standards are examined: PaperIXI, papiNet, X-OSCO, the PSK standards, and RosettaNet. In addition, the data exchange solutions of the ERP vendor, SAP, are examined. The best of these alternatives are then examined in more detail, and finally the different solutions are compared with each other in order to find the best option for Andritz's needs.

Relevance: 100.00%

Abstract:

The aim of this thesis was to determine how a customer-facing Internet-based service changes customer service in the forest industry, and which factors affect the adoption of such a service. The study shows that, in the customers' view, the changes in customer service have been mainly positive. It was found that customers are satisfied with the quality of the new application, but that not all customers consider its advantages significant compared with the old ways of working. On average, the case company's own personnel do not find the application very useful for their own work. To increase the use of the application and its potential benefits for the case company, improvements to the application must be accompanied by investment in more effective communication and further training for customers. In addition, ways must be found to make customers perceive using the application as more advantageous than sticking to the old ways of working. Achieving this requires that the case company's customer service personnel and operational management commit to actively marketing the application to customers. The research material was collected through interviews and a customer questionnaire, using mainly qualitative methods. Interviews were conducted in two units of the case company, in three sales offices, and in a total of four units of two customer companies. The questionnaire was sent to 215 customer users, 30 of whom returned it, giving a response rate of 14 per cent.

Relevance: 100.00%

Abstract:

Home blood pressure measurement: epidemiology and clinical use. Elevated blood pressure, globally the most significant risk factor predisposing to premature death, cannot be identified or treated without accurate and practical blood pressure measurement methods. Home blood pressure measurement has become very popular among patients. Physicians, however, have not yet fully accepted it, as sufficient evidence of its performance and benefits has been lacking. The purpose of this study was to show that blood pressure measured at home (home blood pressure) is more accurate than conventional blood pressure measured at the office (office blood pressure), and that it is also effective in clinical use. We studied the use of home blood pressure in the diagnosis and treatment of hypertension, and examined its association with hypertensive end-organ damage. The first study population, a representative sample of the Finnish adult population, consisted of 2,120 subjects aged 45–74 years. The subjects measured their home blood pressure for one week and participated in a health examination that included, in addition to a clinical examination and an interview, an electrocardiogram and office blood pressure measurement. In 758 subjects, the intima-media thickness of the carotid artery wall (a measure of atherosclerosis) was also measured, and in 237 subjects, the arterial pulse wave velocity (a measure of arterial stiffness). In the second study population, consisting of 98 hypertensive patients, treatment was guided, depending on randomization, either by ambulatory (24-hour) blood pressure or by home blood pressure. Office blood pressure was significantly higher than home blood pressure (the mean difference in systolic/diastolic pressure was 8/3 mmHg), and agreement between the two methods in the diagnosis of hypertension was at best moderate (75%). Of the 593 subjects with elevated office blood pressure, 38% had normal blood pressure at home, i.e., so-called white-coat hypertension; hypertension can thus be overdiagnosed in every third patient in a screening situation. White-coat hypertension was associated with mildly elevated blood pressure, low body mass index, and non-smoking, but not with psychiatric morbidity. However, white-coat hypertension does not appear to be an entirely harmless phenomenon and may predict future hypertension, as the cardiovascular risk factor profile of those with it fell between the profiles of normotensive and truly hypertensive subjects. Home blood pressure was more strongly associated than office blood pressure with hypertensive end-organ damage (intima-media thickness, pulse wave velocity, and left ventricular enlargement on the electrocardiogram). Home blood pressure was effective in guiding hypertension treatment: drug treatment guided by home blood pressure led to blood pressure control as good as treatment guided by ambulatory blood pressure, which has been regarded as the "gold standard" of blood pressure measurement. On the basis of this and previous studies, home blood pressure measurement is a clear improvement over conventional office measurement. It is a practical, accurate, and widely available method that may even become the first-line option in diagnosing and treating hypertension. A change in blood pressure measurement practice is needed, since on the basis of evidence-based medicine it appears that office blood pressure measurement should be used only for screening purposes.

Relevance: 100.00%

Abstract:

Open data refers to publishing data on the web in machine-readable formats for public access. Using open data, innovative applications can be developed to facilitate people's lives. In this thesis, based on the open data cases discussed in the literature review, Open Data Lappeenranta is suggested, which publishes open data related to the opening hours of shops and stores in Lappeenranta City. To demonstrate the feasibility of Open Data Lappeenranta, this thesis presents the implementation of an open data system that publishes specific data about shops and stores (including their opening hours) on the web in a standard format (JSON). The published open data is used to develop web and mobile applications that demonstrate the benefits of open data in practice. The open data system also provides manual and automatic interfaces that make it possible for shops and stores to maintain their own data in the system. Finally, the thesis proposes a completed version of Open Data Lappeenranta that publishes open data from other fields and businesses in Lappeenranta, beyond store data alone.
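As a hypothetical sketch of the kind of JSON record such a system might publish (the actual schema of Open Data Lappeenranta is not specified in the abstract, and the field names below are illustrative only):

```python
# Build and print a machine-readable opening-hours record as JSON.
import json

store = {
    "name": "Example Store",           # hypothetical store entry
    "city": "Lappeenranta",
    "opening_hours": {
        "mon-fri": "09:00-18:00",
        "sat": "10:00-16:00",
        "sun": "closed",
    },
}

print(json.dumps(store, indent=2))     # output for public access on the web
```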