19 results for pervasive computing
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The main computer science collection is located in the main library (Linna), where the printed general and reference collection consists of about 4,000 monograph titles (including the volumes of printed monograph series). Four subject areas of the computer science collection were surveyed. Of these, the clearest focus area proved to be programming, programming languages & computer programs, which covered about 33% of the titles (1,314). The shares of the other groups were smaller: information systems, data management & information security about 18% (727 titles); artificial intelligence, knowledge engineering & pattern recognition about 16% (629 titles). The weeded reference collection of about 100 titles contains a large number of dictionaries and various reference works, such as the nearly complete (44/45) Encyclopedia of computer science and technology and the three-volume Handbook of information security, which is also available in electronic form. There were 6 printed journal titles (IEEE Pervasive Computing, MikroPC, Social Science Computer Review (also available electronically), Tekniikan näköalat, Tietokone and Tietoyhteys). The collection contained 466 e-book titles in the Ebrary: Information technology database, 24 titles in the NetLibrary database, 3 titles in the Taylor & Francis eBooks online database and 2 titles as electronic reference works (Encyclopedia of gender and information technology and Encyclopedia of information science and technology), as well as the 4,964-volume Lecture notes in computer science monograph series. There were about 300 electronic journal titles in the collection. There were 4 full-text databases (ACM - Association for Computing Machinery, EBSCOhost Academic Search Premier, Elsevier ScienceDirect and SpringerLink) and 2 reference databases (Computer + Info Systems (CSA) and Web of Science (ISI)).
Abstract:
With the ever-growing number of connected sensors (IoT), making sense of sensed data becomes even more important. Pervasive computing is a key enabler for sustainable solutions; prominent examples are smart energy systems and decision support systems. A key feature of pervasive systems is situation awareness, which allows a system to thoroughly understand its environment. It is based on external interpretation of data and thus relies on expert knowledge. Due to the distinct nature of situations in different domains and applications, the development of situation-aware applications remains a complex process. This thesis is concerned with a general framework for situation awareness which simplifies the development of such applications. It is based on the Situation Theory Ontology, which provides a foundation for situation modelling and allows knowledge reuse. Concepts of Situation Theory are mapped to Context Space Theory, which is used for situation reasoning. Situation Spaces in the Context Space are generated automatically from the defined knowledge. For the acquisition of sensor data, the IoT standards O-MI/O-DF are integrated into the framework. These allow a peer-to-peer data exchange between data publishers and the proposed framework, and thus a platform-independent subscription to sensed data. The framework is then applied to a use case for reducing food waste. The use case validates the applicability of the framework and furthermore serves as a showcase for a pervasive system contributing to sustainability goals. Leading institutions, e.g. the United Nations, stress the need for a more resource-efficient society and acknowledge the capabilities of ICT systems. The use case scenario is based on a smart neighbourhood in which the system uses situation awareness to recommend the most efficient use of food items and thereby reduce food waste at the consumption stage.
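As a toy illustration of the context-space style of situation reasoning mentioned above, the sketch below treats situations as axis-aligned regions over context attributes and checks which regions the current context state falls into. The class name, attributes and thresholds are illustrative assumptions, not definitions taken from the thesis or from the O-MI/O-DF standards.

```python
# Minimal sketch of Context Space Theory style situation reasoning,
# assuming situations can be approximated as axis-aligned regions
# (hyperboxes) over context attributes; names and values are illustrative.
from dataclasses import dataclass

@dataclass
class SituationSpace:
    name: str
    bounds: dict  # attribute -> (low, high) acceptable range

    def contains(self, state: dict) -> bool:
        """True if every required attribute falls inside its range."""
        return all(lo <= state.get(attr, float("nan")) <= hi
                   for attr, (lo, hi) in self.bounds.items())

# Example situations for a food-waste scenario (hypothetical values).
situations = [
    SituationSpace("item_about_to_expire", {"days_to_expiry": (0, 2)}),
    SituationSpace("fridge_too_warm", {"fridge_temp_c": (8, 20)}),
]

def infer(state: dict) -> list:
    """Return the names of all situations the context state satisfies."""
    return [s.name for s in situations if s.contains(state)]

if __name__ == "__main__":
    print(infer({"days_to_expiry": 1, "fridge_temp_c": 5}))  # ['item_about_to_expire']
```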
Abstract:
Performance measurements of computer system components and software provide information that can be used to improve performance and to support hardware procurement decisions. This thesis examines performance measurement and measurement programs, i.e. so-called benchmark software. Different types of freely available benchmark programs suitable for analysing the performance of a Linux computing cluster were sought and evaluated. The benchmarks were grouped and assessed by testing their features on a Linux cluster. The thesis also discusses the challenges of carrying out measurements and of parallel computing. Benchmarks were found for many purposes, and they proved to vary in quality and scope. They have also been bundled into software suites in order to give a broader picture of hardware performance than a single program can provide. It is essential to understand the rate at which data can be transferred to the processor from main memory, from disk systems and from other compute nodes. A typical benchmark program contains a computationally intensive mathematical algorithm of the kind used in scientific software. Depending on the benchmark, understanding and exploiting the results can be challenging.
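As a rough illustration of what such a micro-benchmark measures, the sketch below times an in-memory array copy and reports a bandwidth figure. It is only meant to convey the idea of measuring a data transfer rate; real cluster benchmarks of the kind surveyed in the thesis are compiled, carefully tuned codes.

```python
# Toy memory-bandwidth measurement in the spirit of a STREAM "copy" kernel;
# this only illustrates timing a data transfer and reporting a rate.
import time
import numpy as np

def copy_bandwidth_gib_s(n_floats: int = 20_000_000, repeats: int = 5) -> float:
    src = np.random.rand(n_floats)
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.copyto(dst, src)                 # move n_floats * 8 bytes
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 2 * src.nbytes            # read src + write dst
    return bytes_moved / best / 2**30

if __name__ == "__main__":
    print(f"copy bandwidth ~ {copy_bandwidth_gib_s():.1f} GiB/s")
```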
Abstract:
In metallurgical plants, high-quality metal production is always required. Nowadays, soft computing applications are increasingly used instead of mechanical techniques for the automation of the manufacturing process and for quality control. This thesis presents an overview of soft computing methods. As an example of a soft computing application, an effective fuzzy expert system model was developed for the automatic quality control of the steel degassing process. The purpose of this work is to describe the fuzzy relations as quality hypersurfaces while varying the number of linguistic variables and fuzzy sets.
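The sketch below shows, under invented linguistic variables and membership ranges, how a single fuzzy rule of such an expert system could be evaluated with triangular membership functions and a minimum operator for AND; it is an illustration, not the model developed in the thesis.

```python
# Hedged sketch of a tiny fuzzy rule evaluation for process quality;
# the variables, membership ranges and the rule itself are invented
# for illustration, not taken from the thesis.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def quality_degree(vacuum_mbar: float, temp_c: float) -> float:
    # Rule: IF vacuum is "deep" AND temperature is "on target" THEN quality is "good".
    deep_vacuum = tri(vacuum_mbar, 0.0, 1.0, 5.0)
    on_target_temp = tri(temp_c, 1580.0, 1620.0, 1660.0)
    return min(deep_vacuum, on_target_temp)   # AND taken as the minimum

if __name__ == "__main__":
    print(f"degree of 'good quality': {quality_degree(1.5, 1615.0):.2f}")
```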
Abstract:
This master's thesis studies, on the basis of the literature, how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the natural evolutionary process. An artificial evolution process evaluates the fitness of each individual, where the individuals are candidate solutions. The next population of candidate solutions is formed from the good properties of the current population by applying mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were sought in the literature, then classified and presented. The necessary basics of evolutionary algorithms were also presented. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling in parallel computing, allocating modules to subsystems, N-version programming, test data generation and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
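The following minimal genetic algorithm, in the search-based test data generation spirit mentioned above, evolves integer inputs until they satisfy a target branch of a toy function under test. The function, fitness measure and operators are illustrative assumptions, not the applications surveyed in the thesis.

```python
# Minimal genetic algorithm sketch: evolve inputs (a, b) that cover a
# target branch, with a branch-distance style fitness, mutation and crossover.
import random

def under_test(a: int, b: int) -> bool:
    return a * 2 == b and b > 100          # target branch to cover

def fitness(ind) -> float:
    a, b = ind
    dist = abs(a * 2 - b) + max(0, 101 - b)  # smaller distance is better
    return -dist

def mutate(ind):
    a, b = ind
    delta = random.randint(-10, 10)
    return (a + delta, b) if random.random() < 0.5 else (a, b + delta)

def crossover(p1, p2):
    return (p1[0], p2[1])

def evolve(pop_size=30, generations=200):
    pop = [(random.randint(-500, 500), random.randint(-500, 500))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if under_test(*pop[0]):
            return pop[0]
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return pop[0]

if __name__ == "__main__":
    print("best individual:", evolve())
```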
Abstract:
The study examined how scenario analysis can be used to study new technology. It was found that the suitability of scenario analysis depends most on the level of technological change and the nature of the available information. The scenario method is well suited to the study of new technologies, especially in the case of radical innovations. The reason is the great uncertainty, complexity and shift in the prevailing paradigm associated with them, which make many other futures research methods unusable in such a situation. In the empirical part of the thesis, the future of grid computing technology was studied using scenario analysis. Grids were seen as a potentially disruptive technology which, as a radical innovation, may shift computing from today's product-based purchasing of computing capacity towards a service-based model. This would have a major impact on the whole current ICT industry, in particular thanks to the use of on-demand computing. The study examined developments up to 2010. Based on theory and existing knowledge, and relying on strong expert knowledge, four possible environmental scenarios were formed for grid computing. The scenarios showed that the commercial success of the technology still faces many challenges. In particular, trust and the creation of added value emerged as the most important factors driving the future of grids.
Abstract:
This thesis addresses the problem of computing the minimal and maximal diameter of the Cayley graph of Coxeter groups. We first present the relevant parts of polytope theory and the related Coxeter theory. After this, a method for constructing the orthogonal projections of a polytope from R^d onto R^2 and R^3, d ≥ 3, is presented. This method is the Equality Set Projection (ESP) algorithm, which requires a constant number of linear programming problems per facet of the projection in the absence of degeneracy. The ESP algorithm also allows us to compute the projected geometric diameters of high-dimensional polytopes. A representative set of projected polytopes is presented to illustrate the methods adopted in this thesis.
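As a much simpler stand-in for the ESP algorithm (which works facet by facet using linear programs), the sketch below only conveys what a projected geometric diameter is: project an explicit vertex list orthogonally onto two coordinates and take the largest pairwise distance. It is a brute-force illustration, not the method of the thesis.

```python
# Projected geometric diameter of a polytope given by an explicit vertex
# list: orthogonal projection onto chosen coordinates, then the maximum
# pairwise distance of the projected points.
import itertools
import numpy as np

def projected_diameter(vertices: np.ndarray, dims=(0, 1)) -> float:
    proj = vertices[:, dims]                      # orthogonal projection
    return max(np.linalg.norm(p - q)
               for p, q in itertools.combinations(proj, 2))

if __name__ == "__main__":
    # Vertices of the 4-dimensional unit hypercube as a small test case.
    cube4 = np.array(list(itertools.product([0.0, 1.0], repeat=4)))
    print(projected_diameter(cube4))              # sqrt(2) for the 2-D shadow
```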
Abstract:
Laser scanning is becoming an increasingly popular method for measuring 3D objects in industrial design. Laser scanners produce a cloud of 3D points. For CAD software to be able to use such data, however, this point cloud needs to be converted into a vector format. A popular way to do this is to triangulate the assumed surface of the point cloud using alpha shapes. Alpha shapes start from the convex hull of the point cloud and gradually refine it towards the true surface of the object. It is often nontrivial to decide when to stop this refinement. One criterion is to stop when the homology of the object stops changing; this is known as the persistent homology of the object. The goal of this thesis is to develop a way to compute the homology of a given point cloud as it is processed with alpha shapes, and to infer from it when the persistent homology has been reached. In practice, the computation of such a characteristic of the target might be applied to power line tower span analysis.
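To convey the "stop when the homology settles" idea for the simplest homology group H0 only, the sketch below adds edges between points in order of increasing length (a Rips-style filtration standing in here for the alpha filtration) and records when the number of connected components stops changing. Real alpha-shape and persistent-homology pipelines do considerably more than this.

```python
# H0 persistence sketch: union-find over edges sorted by length, tracking
# how the number of connected components evolves along the filtration.
import itertools
import math

def h0_component_history(points):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(len(points)), 2))
    components = len(points)
    history = []                      # (edge length, components after merge)
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            history.append((length, components))
    return history                    # last entry: scale where H0 stabilises

if __name__ == "__main__":
    pts = [(0, 0), (0, 1), (1, 0), (10, 10)]
    print(h0_component_history(pts))
```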
Abstract:
The purpose of this thesis is to investigate projects funded in the European 7th Framework Information and Communication Technology work programme. The research has been limited to the issue "Pervasive and trusted network and service infrastructure", and the aim is to find out which topics research will concentrate on in the future. The thesis will provide important information for the Department of Information Technology at Lappeenranta University of Technology. First, the thesis investigates the requirements for the projects that were funded in the Pervasive and trusted network and service infrastructure programme in 2007. Second, the projects funded under the programme are listed in tables and the most important keywords are gathered. Finally, based on the keyword appearances, a vision of the most important future topics is defined. According to the keyword analysis, wireless networks will play an important role in the future, and core networks will be implemented with fiber technology to ensure fast data transfer. Software development favors Service Oriented Architecture (SOA) and open source solutions. Interoperability and ensuring privacy are key issues for the future. 3D in all its forms and content delivery are important topics as well. When all the projects were compared, the most important issue was found to be SOA, which leads the way to cloud computing.
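A minimal sketch of the keyword-appearance analysis could look as follows; the keyword list and project summaries are placeholders rather than the data gathered in the thesis.

```python
# Count how often each keyword of interest occurs across project summaries.
from collections import Counter

KEYWORDS = ["soa", "wireless", "fiber", "open source", "privacy", "3d"]

def keyword_counts(project_summaries):
    counts = Counter()
    for text in project_summaries:
        lowered = text.lower()
        counts.update(k for k in KEYWORDS if k in lowered)
    return counts.most_common()

if __name__ == "__main__":
    demo = ["SOA based service platform with privacy controls",
            "Wireless access and fiber core networks",
            "Open source SOA tooling"]
    print(keyword_counts(demo))
```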
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions are still open. For example, it is not yet clear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to let memristors perform computation in a natural way rather than attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. The thesis considers memristive computing at various levels of abstraction. The first part analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from a memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be implemented naturally by a memristive crossbar structure.
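For readers unfamiliar with memristor dynamics, the sketch below integrates the widely cited HP-style linear dopant-drift model (Strukov et al., 2008) under a sinusoidal drive using simple Euler steps; the parameter values are illustrative and no window function is applied, so it is a simplification rather than the device models analysed in the thesis.

```python
# Linear dopant-drift memristor model under a sinusoidal voltage:
# memristance M(x) = R_ON*x + R_OFF*(1 - x), dx/dt = MU_V * R_ON / D^2 * i.
import math

R_ON, R_OFF = 100.0, 16e3        # ohm
D = 10e-9                        # device thickness (m)
MU_V = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

def simulate(v_amp=1.0, freq=1.0, t_end=2.0, dt=1e-5, x0=0.1):
    x, t = x0, 0.0               # x = w/D, normalised state in [0, 1]
    trace = []
    while t < t_end:
        v = v_amp * math.sin(2 * math.pi * freq * t)
        m = R_ON * x + R_OFF * (1 - x)          # memristance
        i = v / m
        x += MU_V * R_ON / D**2 * i * dt        # linear drift, Euler step
        x = min(max(x, 0.0), 1.0)
        trace.append((t, v, i, m))
        t += dt
    return trace

if __name__ == "__main__":
    final_t, _, _, final_m = simulate()[-1]
    print(f"memristance after {final_t:.2f} s: {final_m:.0f} ohm")
```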
Abstract:
As manufacturing technologies develop, ever more transistors can be fitted onto IC chips. More complex circuits make it possible to perform more computations per unit of time. As the activity of the circuits increases, so does their energy consumption, which in turn increases the heat produced by the circuit. Excessive heat limits the operation of the circuits. Techniques are therefore needed to reduce the energy consumption of circuits. A new research topic is small devices that monitor, for example, the human body, buildings or bridges. Such devices must have low energy consumption so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate the circuits at a lower supply voltage than the one the circuit manufacturer originally designed them for. This slows down and impairs the operation of the circuit. However, if lower computing performance and reduced reliability can be accepted in the operation of the device, savings in energy consumption can be achieved. This thesis examines the Near-Threshold Computing technique from different perspectives: first on the basis of earlier studies found in the literature, and then by studying the application of the technique through two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken to give a comprehensive picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology, and real products of the circuit manufacturing process are modelled by running numerous Monte Carlo simulations. This technology, inexpensive in manufacturing cost and combined with the Near-Threshold Computing technique, makes it possible to manufacture low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces the energy consumption of circuits significantly. On the other hand, circuit speed decreases, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and enlarging the transistors in memory cells are shown to be effective countermeasures against the drawbacks of the Near-Threshold Computing technique. For the design of low-energy ICs, the results provide grounds for deciding whether to use the normal supply voltage or to lower it, in which case the slowdown and less reliable behaviour of the circuit must be taken into account.
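A first-order, back-of-the-envelope comparison of nominal and near-threshold operation can be sketched as below, using the C*V^2 dynamic energy relation and an alpha-power-law delay estimate; the numbers are illustrative assumptions, not the thesis's 130 nm simulation results.

```python
# Dynamic energy per operation scales as C*V^2 while gate delay grows
# roughly as V / (V - Vth)^alpha when the supply voltage approaches Vth.
def dynamic_energy(c_farad: float, vdd: float) -> float:
    return c_farad * vdd ** 2            # joules per switching event

def relative_delay(vdd: float, vth: float = 0.35, alpha: float = 1.3) -> float:
    return vdd / (vdd - vth) ** alpha

if __name__ == "__main__":
    C = 1e-12                             # 1 pF of switched capacitance (assumed)
    for vdd in (1.2, 0.5):                # nominal vs near-threshold supply
        e = dynamic_energy(C, vdd)
        d = relative_delay(vdd)
        print(f"Vdd={vdd:.1f} V  energy={e*1e12:.2f} pJ  relative delay={d:.1f}")
```

Under these assumed values the near-threshold point uses roughly a sixth of the dynamic energy per operation while being several times slower, which mirrors the trade-off described in the abstract.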
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provides higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture with turn-model based deadlock-free routing algorithms is proposed in this thesis. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance as well as energy efficiency.
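A hedged sketch of the agent idea follows: an on-chip monitoring agent samples the utilisation of its region and selects a voltage/frequency operating point accordingly. The operating points and thresholds are invented for illustration and are not the policies evaluated in the thesis.

```python
# Toy DVFS agent: raise the operating point under load, lower it when idle.
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingPoint:
    vdd: float     # volts
    freq: float    # GHz

OP_POINTS = [OperatingPoint(0.7, 0.4),
             OperatingPoint(0.9, 0.8),
             OperatingPoint(1.1, 1.2)]

class DvfsAgent:
    def __init__(self, up=0.75, down=0.35):
        self.level = 1
        self.up, self.down = up, down

    def step(self, utilisation: float) -> OperatingPoint:
        """Pick the next operating point from the observed utilisation."""
        if utilisation > self.up and self.level < len(OP_POINTS) - 1:
            self.level += 1
        elif utilisation < self.down and self.level > 0:
            self.level -= 1
        return OP_POINTS[self.level]

if __name__ == "__main__":
    agent = DvfsAgent()
    for u in (0.2, 0.1, 0.8, 0.9, 0.4):
        print(u, agent.step(u))
```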
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems, and the impact of cloud computing technology on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting the healthcare IT infrastructure as well as improving healthcare services. Therefore, this thesis explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation, so transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus IaaS clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change over time, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which keeps videos in the video repository for as long as it is cost-efficient to store them. The thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction transcoding. The evaluation of the proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grained parallel processing approach where the video is segmented at the group-of-pictures level.
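The sketch below illustrates proactive provisioning in a simplified form: predict the next interval's stream count from recent history, then add or remove virtual machines so that the predicted load fits the available capacity with some headroom. The predictor, capacities and thresholds are assumptions for illustration, not the algorithms proposed in the thesis.

```python
# Simplified proactive VM allocation / de-allocation for a transcoding cluster.
import math

STREAMS_PER_VM = 4          # transcoding jobs a single VM can sustain (assumed)
TARGET_UTILISATION = 0.8    # keep some headroom to absorb bursts

def predict_next(history, window=3):
    """Naive predictor: average of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_vms(history, current_vms):
    predicted = predict_next(history)
    capacity_per_vm = STREAMS_PER_VM * TARGET_UTILISATION
    needed = max(1, math.ceil(predicted / capacity_per_vm))
    if needed > current_vms:
        return needed, f"allocate {needed - current_vms} VM(s)"
    if needed < current_vms:
        return needed, f"de-allocate {current_vms - needed} VM(s)"
    return current_vms, "no change"

if __name__ == "__main__":
    load_history = [10, 14, 22]           # concurrent streams per interval
    print(plan_vms(load_history, current_vms=3))
```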