49 results for distributed simulation pads anonymity tor simulator anonymous cloud computing
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required for bringing a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power. Distributed simulation can be used to meet these demands, but it has its own problems. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning and distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. Using this computing power for distributed simulation requires the simulation to adapt to a changing load situation: all or part of the simulation work must be removed from a workstation when its owner wishes to use it again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied, different approaches to load balancing are discussed, and load balancing is shown to perform better than running without it.
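As an illustration of the kind of load balancing the abstract describes, the sketch below moves simulation partitions away from workstations whose owners have resumed using them and onto idle ones. The class and function names are hypothetical and not taken from Diworse.

```python
# Hypothetical sketch: migrate simulation partitions away from workstations
# whose owners have resumed work, so external load does not slow the run.
from dataclasses import dataclass, field

@dataclass
class Workstation:
    name: str
    external_load: float = 0.0              # share of CPU taken by the owner
    partitions: list = field(default_factory=list)

def rebalance(workstations, load_threshold=0.5):
    """Move simulation partitions off overloaded workstations onto idle ones."""
    idle = [w for w in workstations if w.external_load < load_threshold]
    for ws in workstations:
        while ws.external_load >= load_threshold and ws.partitions and idle:
            target = min(idle, key=lambda w: len(w.partitions))
            target.partitions.append(ws.partitions.pop())   # migrate one partition

if __name__ == "__main__":
    cluster = [Workstation("ws1", 0.9, ["lp1", "lp2"]), Workstation("ws2", 0.1)]
    rebalance(cluster)
    print([(w.name, w.partitions) for w in cluster])
```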
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating the simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm and the simulation environment. In this thesis a conservative, parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. The thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation groups messages together by simulating several messages before releasing the newly created messages. If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. The performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of simulation results is required. Distributed simulation is also analyzed in order to find out the effect of the different elements of the implemented simulation environment. This analysis is performed using critical path analysis, which allows a lower bound on the simulation time to be determined. In this thesis critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
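The following minimal sketch illustrates the conservative null-message idea the abstract describes, including null message cancellation, where a newer null message simply replaces the older channel clock instead of being queued and processed separately. The names and the simplified time advance are assumptions for illustration, not Diworse's implementation.

```python
# Illustrative sketch of Chandy-Misra style null-message synchronization.
import heapq

class LogicalProcess:
    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead
        self.clock = 0.0
        self.channel_clock = {}   # input channel -> latest timestamp seen
        self.events = []          # pending real messages as (timestamp, payload)

    def receive(self, channel, timestamp, payload=None):
        # Null message cancellation: keep only the newest lower bound per channel.
        self.channel_clock[channel] = max(self.channel_clock.get(channel, 0.0), timestamp)
        if payload is not None:
            heapq.heappush(self.events, (timestamp, payload))

    def safe_time(self):
        # Events up to the minimum input channel clock can be processed safely.
        return min(self.channel_clock.values()) if self.channel_clock else 0.0

    def step(self, send_null):
        horizon = self.safe_time()
        while self.events and self.events[0][0] <= horizon:
            self.clock, payload = heapq.heappop(self.events)
            # ... simulate the event here, possibly sending real messages ...
        self.clock = max(self.clock, horizon)
        # A null message advertises a lower bound on future outputs (clock + lookahead).
        send_null(self.clock + self.lookahead)

# Tiny usage example: two processes exchange null messages and time advances.
a, b = LogicalProcess("A", lookahead=1.0), LogicalProcess("B", lookahead=1.0)
a.receive("from_B", 0.0)
b.receive("from_A", 0.0)
for _ in range(3):
    a.step(lambda t: b.receive("from_A", t))
    b.step(lambda t: a.receive("from_B", t))
print(a.safe_time(), b.safe_time())
```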
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation; therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change over time, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which keeps videos in the video repository as long as it is cost-efficient to store them. The thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction transcoding. The proposed strategies are evaluated using a message passing interface (MPI) based video transcoder, which uses a coarse-grained parallel processing approach where the video is segmented at the group of pictures level.
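The computation and storage trade-off can be illustrated with a simple rule of thumb: keep a transcoded video cached while the expected cost of re-transcoding it exceeds the cost of storing it. The sketch below uses made-up prices and parameter names; it is not the thesis's actual cost model.

```python
# Illustrative computation-vs-storage trade-off: keep a transcoded video while
# expected re-transcoding cost exceeds its storage cost. Figures are made up.
def keep_in_storage(expected_requests_per_month: float,
                    transcode_cost_per_request: float,
                    size_gb: float,
                    storage_cost_per_gb_month: float) -> bool:
    recompute_cost = expected_requests_per_month * transcode_cost_per_request
    storage_cost = size_gb * storage_cost_per_gb_month
    return recompute_cost >= storage_cost

# A popular video is worth storing; a rarely watched one is cheaper to redo on demand.
print(keep_in_storage(50, 0.02, 2.0, 0.02))   # True
print(keep_in_storage(0.1, 0.02, 2.0, 0.02))  # False
```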
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems and the impact of cloud computing technology on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting the healthcare IT infrastructure as well as improving healthcare services. Therefore, this thesis explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
Smartphones have become part and parcel of our lives, where mobility provides the freedom of not being bound by time and space. In addition, the number of smartphones produced each year is skyrocketing. However, this has also created discrepancies, or fragmentation, among devices and operating systems, which in turn has made it exceedingly hard for developers to deliver hundreds of similarly featured applications in various versions for market consumption. This thesis is an attempt to investigate whether cloud based mobile development platforms can mitigate and eventually eliminate fragmentation challenges. During this research, we selected and analyzed the most popular cloud based development platforms and tested their integrated cloud features. The research showed that cloud based mobile development platforms may be able to reduce mobile fragmentation and enable the use of a single codebase to deliver a mobile application for different platforms.
Abstract:
The manufacturing industry has always faced the challenge of improving production efficiency, product quality and innovation ability, and has struggled to adopt cost-effective manufacturing systems. In recent years cloud computing has emerged as one of the major enablers for the manufacturing industry. By combining cloud computing and other advanced manufacturing technologies, such as the Internet of Things, service-oriented architecture (SOA), networked manufacturing (NM) and manufacturing grid (MGrid), with existing manufacturing models and enterprise information technologies, a new paradigm called cloud manufacturing has been proposed in the recent literature. This study presents the concepts and ideas of cloud computing and cloud manufacturing. The concept, architecture, core enabling technologies and typical characteristics of cloud manufacturing are discussed, as well as the difference and relationship between cloud computing and cloud manufacturing. The research is based on mixed qualitative and quantitative methods and a case study. The case is a prototype cloud manufacturing solution, a software platform developed in cooperation between ATR Soft Oy and the SW Company China office. The study tries to understand the practical impacts and challenges that derive from cloud manufacturing. Its main conclusion is that cloud manufacturing is an approach for achieving the transformation from traditional production-oriented manufacturing to next generation service-oriented manufacturing. Many manufacturing enterprises already use a form of cloud computing in their existing network infrastructure to increase the flexibility of their supply chains and reduce resource consumption, and the study finds that the shift from cloud computing to cloud manufacturing is feasible. At the same time, the study points out that the related theory, methodology and applications of cloud manufacturing systems are far from mature, and that it is still an open field in which many new technologies need to be studied.
Abstract:
The Cloud Computing paradigm is continually evolving, and with it, the size and complexity of its infrastructure. Assessing the performance of a Cloud environment is an essential but strenuous task. Modeling and simulation tools have proved their usefulness and power in dealing with this issue. This master's thesis contributes to the development of the widely used cloud simulator CloudSim and proposes CloudSimDisk, a module for modeling and simulation of energy-aware storage in CloudSim. As a starting point, a review of Cloud simulators was conducted and hard disk drive technology was studied in detail. CloudSim was identified as the most popular and sophisticated discrete event Cloud simulator, and the CloudSimDisk module was therefore developed as an extension of CloudSim v3.0.3. The source code has been published for the research community. The simulation results proved to be in accordance with the analytic models, and the scalability of the module has been demonstrated for further development.
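A generic, energy-aware hard disk model of the kind CloudSimDisk simulates can be sketched as active energy during seek, rotation and transfer plus idle energy for the rest of the observation window. The parameter values and names below are illustrative assumptions, not CloudSimDisk's API or its validated drive characteristics.

```python
# Generic energy-aware hard disk model sketch; parameters are illustrative.
from dataclasses import dataclass

@dataclass
class HddModel:
    avg_seek_s: float = 0.0089        # average seek time (assumed)
    avg_rotation_s: float = 0.00417   # half-rotation latency, ~7200 RPM (assumed)
    transfer_mb_per_s: float = 156.0  # sustained transfer rate (assumed)
    active_power_w: float = 9.3       # power while seeking/transferring (assumed)
    idle_power_w: float = 5.4         # power while idle (assumed)

    def transaction_time(self, size_mb: float) -> float:
        return self.avg_seek_s + self.avg_rotation_s + size_mb / self.transfer_mb_per_s

    def energy(self, size_mb: float, observation_s: float) -> float:
        active = self.transaction_time(size_mb)
        idle = max(observation_s - active, 0.0)
        return self.active_power_w * active + self.idle_power_w * idle   # joules

disk = HddModel()
print(f"{disk.energy(size_mb=500.0, observation_s=10.0):.2f} J")
```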
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, these few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values, which necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across different voltage levels.
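The dynamic load balancing idea can be sketched as a shared pool of fault-simulation chunks that cores pull from as they become free, so faster cores naturally take on more work than slower or congested ones. The example below emulates this with Python multiprocessing as a hypothetical illustration; it is not the actual Single-Chip Cloud Computer implementation.

```python
# Sketch of dynamic load balancing for fault simulation: workers pull small
# chunks of faults from a shared queue instead of receiving a fixed static split.
from multiprocessing import Pool

def simulate_fault(fault_id: int) -> bool:
    # Placeholder for injecting one fault and running the test set against it.
    return fault_id % 3 == 0   # pretend every third fault is detected

if __name__ == "__main__":
    n_faults = 10_000
    with Pool(processes=8) as pool:
        # chunksize keeps scheduling overhead low while still balancing load
        detected = sum(pool.imap_unordered(simulate_fault, range(n_faults), chunksize=64))
    print(f"fault coverage: {detected / n_faults:.1%}")
```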
Abstract:
In this master's thesis, a user's guide was written for the advanced process simulation software APROS 5. The guide is part of an APROS 5 user training package produced for VTT Energy, which will later be published on CD-ROM. The APROS 5 process simulation software can be used to model thermohydraulic processes, automation circuits and electrical systems. The program also includes a neutronics model for modelling the behaviour of a nuclear reactor. Earlier versions of APROS, running in a UNIX environment, have been used to carry out several analyses related to the safety research of nuclear power plants, as well as training simulators for both nuclear and conventional power plants. APROS 5 runs in the Windows NT environment and is substantially different to use than the earlier versions, which created the need for a new user's guide. The user's guide presents the most important functions of APROS 5, the principles of modelling, and the thermohydraulic and neutronics solution models. In addition, the guide presents an example in which a simplified primary circuit of a VVER-440 type nuclear power plant is modelled. More detailed information about the software is available in the APROS 5 documentation.
Abstract:
The goal of this work was to acquire and build a real-time simulator implemented with commercial software and hardware. The work focused in particular on visualizing an existing simulation model of the Patu 655 timber loader with a 3D animation program acquired for the real-time simulator. In addition, the work examined the possibilities of real-time simulation in the product development of machine systems. Commercial hardware and software produced by dSPACE for real-time simulation were used as the real-time simulator. The simulation model of the timber loader was compiled for execution on the simulator, after which the movements of the model were visualized using the RealMotion 3D animation program. The animated graphics were imported from both AutoCAD and ADAMS. As a result of the work, a real-time simulator was acquired and found to be a well-functioning whole. The automated compilation of the timber loader and other simulation models for the simulator succeeded well, and the visualization of the simulation models worked smoothly with the 3D animation program used. The real-time simulator was found to be able to speed up the product development process of machine systems.
Abstract:
The main objective of this master's thesis is to provide a comprehensive view of cloud computing and SaaS, and to analyze how well CADM, a unit of Capgemini Finland Ltd., would fit the cloud-based SaaS business. Another objective is to investigate how well public clouds would fit CADM as a delivery model if it provided SaaS applications to its customers. The thesis is executed by investigating the characteristics of cloud computing and SaaS, especially from the application provider's point of view. This is done by exploring what kinds of research and analysis have been published on these two phenomena during the past few years. CADM's current business model and operations are then analyzed from the SaaS and public cloud perspectives. This analysis is conducted using SWOT analysis, a widely used analytical tool for assessing a company's strategic position and identifying ways to improve its operations. The analysis and observations reveal that CADM should pursue SaaS business, as it could provide remarkable advantages and strengthen its position in its current markets. However, a pure SaaS model would not be the optimal solution for CADM, because the unit does not have its own product that could be transformed to the SaaS model and it lacks Infrastructure Management capability. A public cloud would also not be the most suitable delivery model for providing SaaS services. The main observation of this thesis is that CADM should adopt the SaaS model via the Capgemini Immediate offering.
Abstract:
The purpose of this master's thesis is to optimize the calculation of customers' electricity bills by means of distributed computing. As smart, remotely read energy meters arrive in every household, energy companies are obliged to calculate customers' electricity bills on the basis of hourly metering data. The growing amount of data also increases the number of computation tasks required. The thesis evaluates alternatives for implementing distributed computing and takes a closer look at the possibilities of cloud computing. In addition, simulations were run to compare parallel and sequential computation. A measurement tree algorithm was developed to support the correct calculation of electricity bills.
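The parallel versus sequential comparison rests on the observation that each customer's bill, the sum of hourly consumption times the hourly price, can be computed independently of the others. The sketch below is a hypothetical illustration with made-up data; it is not the thesis's measurement tree algorithm.

```python
# Illustrative parallel billing from hourly metering data: customers are
# independent, so their bills can be computed in separate worker processes.
from concurrent.futures import ProcessPoolExecutor

def bill_for_customer(args):
    hourly_kwh, hourly_price = args
    return sum(kwh * price for kwh, price in zip(hourly_kwh, hourly_price))

if __name__ == "__main__":
    prices = [0.12] * 24                          # flat example tariff, EUR/kWh
    customers = [([1.0] * 24, prices), ([0.5] * 24, prices)]
    with ProcessPoolExecutor() as pool:
        bills = list(pool.map(bill_for_customer, customers))
    print(bills)   # approximately [2.88, 1.44]
```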
Abstract:
Millions of enterprises move their applications to a cloud every year. According to Forrester Research, "the global cloud computing market will grow from $40.7 billion in 2011 to $241 billion in 2020". Due to increased interest and demand, a broad range of providers and solutions have appeared on the market. It is vital to be able to predict possible problems correctly and to classify and mitigate the risks associated with the migration process. The study shows the main criteria that should be taken into consideration when deciding to move enterprise applications to the cloud and choosing an appropriate vendor. The main goal of the research is to identify the main problems during a migration to the cloud and to propose a solution for their prevention and for mitigating the consequences in case of occurrence. The research provides an overview of existing cloud solutions and deployment models for enterprise applications. It identifies the decision drivers of an application migration to the cloud and the potential risks and benefits associated with it. Finally, best practices for a successful enterprise-to-cloud migration are formulated based on the analysis of case studies.
Abstract:
This study aims to open up some typical models of organizational buying behavior. As cloud computing and cloud services seem to be today's hype, the study seeks to further facilitate the understanding of organizational buying behavior regarding cloud services by interviewing a decision maker in this field on the purchaser's side and, for comparison, a cloud service provider's representative on the vendor's side.
Abstract:
Cloud computing is a practically relevant paradigm in computing today. Testing is one of the distinct areas where cloud computing can be applied. This study addressed the applicability of cloud computing to testing within organizational and strategic contexts, focusing on issues related to the adoption, use and effects of cloud-based testing. The study applied empirical research methods. The data was collected through interviews with practitioners from 30 organizations and was analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing. The second phase observed cloud-based testing in real-life practice. The third phase analysed quality in the context of cloud application development. The fourth phase studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable to testing and application development, as well as to other areas, e.g., game development. The research identified the benefits, challenges, requirements and effects of cloud-based testing, and formulated a roadmap and strategy for adopting cloud-based testing. The study also explored quality issues in cloud application development. As a special case, the research included a study on the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, evaluating practical cloud-based testing work and assessing the appropriateness of cloud-based testing for specific testing needs.