7 results for computer network
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This master's thesis defines a simulation model of a backup system, i.e. a backup model. The operation of the backup system is optimized with the help of this backup model. The goal of the optimization is to improve the efficiency of the backup system. The improvement is sought by making maximal use of the backup system's existing resources. The backup model is optimized with an evolutionary algorithm. The optimization has several mutually conflicting objectives. The multi-objective optimization problem is transformed into a single-objective optimization problem by forming an objective function with the weighted-sum method. In parallel with this method, Pareto optimization is also used. The search for points on the Pareto-optimal front is guided towards the optimum point found by the weighted-sum method. The implementation of the evolutionary algorithm exploits problem-specific knowledge of backup systems. The result of the thesis is a simulation and optimization tool for backup systems. The simulation tool is used to assess the performance of the current backup system, and the optimization is used to make its operation more efficient. The tool can also be used in the design of new backup systems and in the expansion of existing ones.
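The weighted-sum method mentioned above collapses the conflicting objectives into a single scalar function that an evolutionary algorithm can minimize. The following is a minimal sketch of the idea only, not the thesis's actual tool; the two objective functions, the weights and all parameters are hypothetical stand-ins:

```python
import random

# Hypothetical, simplified stand-ins for conflicting backup-system objectives:
# minimize total backup duration and minimize peak resource usage.
def backup_duration(x):        # x: list of schedule parameters in [0, 1]
    return sum(x) / len(x)

def peak_resource_usage(x):
    return max(x)

def weighted_sum(x, weights=(0.7, 0.3)):
    """Collapse the conflicting objectives into one scalar to minimize."""
    return weights[0] * backup_duration(x) + weights[1] * peak_resource_usage(x)

def evolve(dim=5, pop_size=20, generations=100):
    """Bare-bones evolutionary loop: keep the best half, mutate it."""
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=weighted_sum)
        survivors = pop[: pop_size // 2]
        children = [
            [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in p]
            for p in survivors
        ]
        pop = survivors + children
    return min(pop, key=weighted_sum)

best = evolve()
print(weighted_sum(best))
```

Varying the weights and re-running yields different trade-off points, which is one way the weighted-sum method can be used alongside a Pareto-front search.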
Abstract:
A postgraduate seminar series titled Cyber Warfare was held at the Department of Military Technology of the National Defence University in the fall of 2012. This book is a collection of some of the talks presented in the seminar. The papers address computer network defence in military cognitive networks, computer network exploitation, non-state actors in cyberspace operations, offensive cyber-capabilities against critical infrastructure and adapting the current national defence doctrine to the cyber domain. This set of papers aims to give some insight into current issues in cyber warfare. The seminar has always produced a publication of its papers, but this has been an internal publication of the Finnish Defence Forces and has not hindered publication of the papers at international conferences. Publication of these papers at peer-reviewed conferences has indeed always been a goal of the seminar, since it teaches the writing of conference-level papers. We nevertheless hope that an internal publication in the department series is useful to the Finnish Defence Forces by offering easy access to these papers.
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, use indicates strong interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that let us prove its correctness along with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which the different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, so that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. Finally, we consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we show that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is, for the most part, tool-based. This demonstrates the maturity and increased reliability of formal methods, and thus advocates for their more widespread use in the future.
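The abstract does not spell out the adapted piece selection algorithm itself; purely as an illustration of the general idea (prioritizing pieces near the playback position instead of pure rarest-first, so the stream can play while still downloading), a hypothetical sketch:

```python
def select_piece(have, available, playback_pos, window=16):
    """Pick the next piece to request for on-demand streaming.

    Inside a window ahead of the playback position, choose the earliest
    missing piece so playback is not interrupted; outside that window,
    fall back to rarest-first as in standard BitTorrent.
    have: set of piece indices already downloaded.
    available: dict mapping piece index -> number of peers offering it.
    """
    # Urgent region: pieces needed soon, requested in playback order.
    for idx in range(playback_pos, playback_pos + window):
        if idx not in have and idx in available:
            return idx
    # Elsewhere: rarest-first among the remaining missing pieces.
    missing = [i for i in available if i not in have]
    return min(missing, key=lambda i: available[i]) if missing else None
```

Standard BitTorrent uses rarest-first alone, which optimizes swarm health but ignores playback order; a windowed hybrid like this trades some of that robustness for timely delivery.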
Abstract:
In this work, a transparent backup system using an Internet Small Computer Systems Interface (iSCSI) network disk was developed. The contents of the network disk were protected with a client-side encryption layer (dm-crypt). This arrangement ensured that backups stored on the network disk remained confidential even if the party providing the disk server was untrusted or outright hostile. An easy-to-use prototype application was developed for practical use of the system. The risks and vulnerabilities of the system were reviewed and analyzed, and a rough cryptanalysis of the system was carried out on the basis of its technical properties. Performance measurements were made for both encrypted and unencrypted iSCSI traffic. These showed that the effect of encryption on performance was negligible even at network speeds of 100 megabits per second. In addition, other applications of the technology and future research areas were considered.
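dm-crypt operates at the block-device layer and is configured with system tools rather than application code; purely as an illustration of the principle involved (the client encrypts before writing, so the storage provider only ever sees ciphertext), a hypothetical sketch using the Python cryptography package:

```python
from cryptography.fernet import Fernet

# Illustration of the principle only: dm-crypt itself encrypts whole block
# devices transparently, not individual payloads as done here.
key = Fernet.generate_key()      # stays on the client, never sent to the server
f = Fernet(key)

backup = b"contents of a backup archive"
ciphertext = f.encrypt(backup)   # only this is written to the untrusted disk

# The storage provider sees only ciphertext; the client can still restore:
assert f.decrypt(ciphertext) == backup
```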
Abstract:
The networking and digitalization of audio equipment has created a need for control protocols. These protocols offer new services to customers and ensure that the equipment operates correctly. The control protocols used in computer networks are not directly applicable, since embedded systems have resource and cost limitations. In this master's thesis, the design and implementation of new loudspeaker control network protocols are presented. The protocol stack was required to be reliable, to have short response times, to configure the network automatically and to support the dynamic addition and removal of loudspeakers. The implemented protocol stack was also required to be as efficient and lightweight as possible, because the network nodes are fairly simple and lack processing power. The protocol stack was thoroughly tested, validated and verified. The protocols were formally described using LOTOS (Language of Temporal Ordering Specification) and verified using reachability analysis. A prototype of the loudspeaker network was built and used for testing the operation and performance of the control protocols. The implemented control protocol stack met the design specifications and proved to be highly reliable and efficient.
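Reachability analysis, as used above, exhaustively explores every state a protocol model can enter, so that deadlocks and unreachable states can be detected. A toy sketch of the technique, with hypothetical states standing in for the thesis's actual LOTOS model:

```python
from collections import deque

# Hypothetical toy model of a loudspeaker node joining the network:
# the states and events are invented stand-ins, not the thesis's model.
TRANSITIONS = {
    "unconfigured": {"announce": "waiting"},
    "waiting": {"addr_assigned": "configured", "timeout": "unconfigured"},
    "configured": {"heartbeat": "configured", "remove": "unconfigured"},
}

def reachable(initial="unconfigured"):
    """Breadth-first exploration of all reachable protocol states."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for event, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable())  # -> {'unconfigured', 'waiting', 'configured'}
```

Real verifiers explore the product of many such node models plus the channels between them, which is where the state space, and the value of tool support, grows quickly.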
Abstract:
Communication, the flow of ideas and information between individuals in a social context, is at the heart of the educational experience. Constructivism and constructivist theories form the foundation for the collaborative learning processes of creating and sharing meaning in online educational contexts. The Learning and Collaboration in Technology-enhanced Contexts (LeCoTec) course comprised 66 participants drawn from four European universities (Oulu, Turku, Ghent and Ramon Llull). These participants were split into 15 groups with the express aim of learning about computer-supported collaborative learning (CSCL). The Community of Inquiry model (social, cognitive and teaching presences) provided the content and tools for learning about and researching the collaborative interactions in this environment. The sampled comments from the collaborative phase were collected and analyzed at chain level and group level, with the aim of identifying the message types that sustained high learning outcomes. Furthermore, Social Network Analysis was used to examine the density of whole-group interactions, as well as the popular and active members within the highly collaborative groups. It was observed that long chains occur in groups with high-quality outcomes. These chains were also characterized by Social, Interactivity, Administrative and Content comment types. In addition, high outcomes were realized in the highly interactive cases and in high-density groups. In groups with low interactivity, commenting centered on one or two central group members. In conclusion, future online environments should support higher-order learning and develop greater metacognition and self-regulation. Moreover, such an environment, with a wide variety of problem-solving tools, would enhance interactivity.
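Group density in Social Network Analysis is the fraction of possible links between members that actually occur, and centrality measures surface the popular and active members mentioned above. A small sketch of how such measures can be computed with the networkx library; the reply data is hypothetical:

```python
import networkx as nx

# Hypothetical reply links within one discussion group: (commenter, addressee).
replies = [("anna", "ben"), ("ben", "anna"), ("carla", "anna"),
           ("anna", "carla"), ("ben", "carla")]

G = nx.DiGraph()
G.add_edges_from(replies)

# Density: fraction of the possible directed links actually present.
print(nx.density(G))
# Degree centrality highlights the most connected (popular/active) members.
print(nx.degree_centrality(G))
```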
Abstract:
Many-core systems are emerging from the need for more computational power and greater power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network may become congested and the cores may work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach.

Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores become extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is highly susceptible to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from their nominal values, which necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems cores have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose, software-based auto-calibration approach for thermal sensors is also proposed, to calibrate the sensors across a range of voltage levels.
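The abstract does not detail the balancing mechanism; the following work-queue sketch illustrates the general idea of dynamic load balancing, with threads standing in for cores and a hypothetical simulate_fault standing in for the actual fault simulator:

```python
import queue
import random
import threading
import time

# Hypothetical stand-in for simulating one fault; runtimes vary per fault,
# which is why static partitioning load-imbalances and dynamic balancing helps.
def simulate_fault(fault_id):
    time.sleep(random.uniform(0.001, 0.01))
    return fault_id

def worker(tasks, results):
    # Each "core" pulls the next fault as soon as it is free, so faster
    # workers automatically take on more work than slower ones.
    while True:
        try:
            fault = tasks.get_nowait()
        except queue.Empty:
            return
        results.append(simulate_fault(fault))

tasks = queue.Queue()
for fault in range(200):
    tasks.put(fault)

results = []
threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"simulated {len(results)} faults")
```

Per-voltage-level sensor calibration can likewise be pictured as applying a correction fitted for the core's current voltage; the gains and offsets below are invented for illustration:

```python
# Hypothetical per-voltage linear calibration: raw sensor readings are
# corrected with a gain/offset assumed to be fitted offline against a
# trusted reference at each supported voltage level.
CAL = {  # voltage level (V) -> (gain, offset)
    0.8: (1.04, -2.1),
    1.1: (0.97, 1.5),
}

def calibrated_temp(raw, voltage):
    gain, offset = CAL[voltage]
    return gain * raw + offset

print(calibrated_temp(51.0, 0.8))
```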