26 results for LHC, CMS, Grid Computing, Cloud Computing, Top Physics
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The study examined how scenario analysis can be used in researching new technology. It was found that the applicability of scenario analysis is most affected by the level of technological change and the nature of the available information. The scenario method is well suited to the study of new technologies, particularly in the case of radical innovations. The reason is the great uncertainty, complexity and shift in the prevailing paradigm associated with them, which render many other futures research methods unusable in such situations. The empirical part of the work studied the future of grid computing technology using scenario analysis. Grid computing was seen as a potentially disruptive technology which, as a radical innovation, may shift computing from the current product-based purchasing of computing capacity towards a service-based model. This would have a major impact on the entire current ICT industry, particularly owing to the exploitation of on-demand computing. The study examined developments up to the year 2010. Based on theory and existing knowledge, and drawing on strong expert knowledge, four possible environmental scenarios for grid computing were constructed. The scenarios showed that the commercial success of the technology still faces many challenges. In particular, trust and the creation of added value emerged as the most important factors steering the future of grid computing.
Abstract:
Cloud computing is a practically relevant paradigm in computing today. Testing is one of the distinct areas where cloud computing can be applied. This study addressed the applicability of cloud computing for testing within organizational and strategic contexts. The study focused on issues related to the adoption, use and effects of cloud-based testing. The study applied empirical research methods. The data was collected through interviews with practitioners from 30 organizations and was analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing. The second phase observed cloud-based testing in real-life practice. The third phase analysed quality in the context of cloud application development. The fourth phase studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable for testing and application development, as well as for other areas, e.g., game development. The research identified the benefits, challenges, requirements and effects of cloud-based testing, and formulated a roadmap and strategy for adopting cloud-based testing. The study also explored quality issues in cloud application development. As a special case, the research included a study on the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, evaluating practical cloud-based testing work and assessing the appropriateness of cloud-based testing for specific testing needs.
Abstract:
This study examines how firms interpret new, potentially disruptive technologies in their own strategic context. The work presents a cross-case analysis of four potentially disruptive technologies or technical operating models: Bluetooth, WLAN, Grid computing and the Mobile Peer-to-peer paradigm. The technologies were investigated from the perspective of three mobile operators, a device manufacturer and a software company in the ICT industry. The theoretical background for the study consists of the resource-based view of the firm with a dynamic perspective, theories on the nature of technology and innovations, and the concept of the business model. The literature review builds up a propositional framework for estimating the amount of radical change in the companies' business models with two intermediate variables: the disruptiveness potential of a new technology and the strategic importance of a new technology to a firm. The data was gathered in group discussion sessions in each company. The results of each case analysis were brought together to evaluate how firms interpret the potential disruptiveness in terms of changes in product characteristics and added value, technology and market uncertainty, changes in product-market positions, possible competence disruption and changes in value network positions. The results indicate that perceived disruptiveness in terms of product characteristics does not necessarily translate into strategic importance. In addition, the firms did not see the new technologies as a threat in terms of potential competence disruption.
Abstract:
This Bachelor's thesis examines Software as a Service (SaaS) as a concept and as a phenomenon. The work also looks into the current state and future prospects of the SaaS model. As a more specific subject of examination, the thesis focuses on SaaS-based customer relationship management and a detailed analysis of the current state and development of SaaS-based CRM systems. The topic is covered with the help of industry publications and literature. The thesis compares the development of the most significant SaaS-based CRM system vendors in the United States and Europe, as well as the current state, strengths and weaknesses of their software and the future prospects of SaaS offerings. The integration of a SaaS application with a company's other information systems is also covered as one sub-area. The thesis concludes that SaaS is still a new but increasingly common concept in the software industry and a viable alternative for acquiring company software such as a CRM system. At present there is still room for competition in the SaaS market, so the model of delivering software to companies as a service is expected to become more common in the future.
Abstract:
App Engine is short for the English terms "application" and "engine". It is a commercial service implemented by Google, Inc. that follows the principles of the cloud computing model and enables the customer's own application development. A service of one's own devising can be programmed into the system and made available over the Internet, either privately or publicly. It is thus a distributed server system that offers an application platform which adapts dynamically to load and in which the customer does not rent virtual machines. The storage capacity offered by the system is also available flexibly. The Bachelor's thesis itself delves in more detail into implementing an application on the service, its restrictions, and its applicability. The beginning reviews the cloud concept, of which many computer users have an unclear understanding. Different kinds of systems can be built in very many ways; we limit our treatment to feasible, general solutions.
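To make the platform model above concrete, here is a minimal sketch of a request handler in the classic App Engine Python runtime, using webapp2, its standard framework at the time; the handler name and route are illustrative only.

```python
# A minimal sketch of a classic App Engine (Python) request handler.
# The handler name and route are illustrative, not from the thesis.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # App Engine spawns and retires instances of this app
        # automatically under load; the code itself contains no
        # server or virtual machine management.
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello from App Engine')

# The WSGI application object that the platform serves.
app = webapp2.WSGIApplication([('/', MainPage)])
```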
Abstract:
The purpose of this Master's thesis is to optimize the computation of customers' electricity bills by means of distributed computing. As smart, remotely read energy meters arrive in every household, energy companies will be obligated to calculate customers' electricity bills based on hourly metering data. The growing amount of data also increases the number of required computation tasks. The thesis evaluates alternatives for implementing distributed computation and takes a closer look at the possibilities of cloud computing. In addition, simulations were run to assess the differences between parallel and sequential computation. A measurement tree algorithm was developed to support the correct calculation of electricity bills.
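As an illustration of the parallel-versus-sequential comparison mentioned above, here is a hedged sketch of per-customer hourly billing parallelized across worker processes; the data layout and function names are hypothetical, and this is not the measurement tree algorithm developed in the thesis.

```python
# A hedged sketch: each customer's bill is the sum of hourly
# consumption times the hourly tariff. Data shapes and names are
# hypothetical placeholders.
from multiprocessing import Pool

def bill(customer):
    # customer = (customer_id, [(kwh, price_per_kwh), ...] per hour)
    cid, hours = customer
    return cid, sum(kwh * price for kwh, price in hours)

def sequential(customers):
    return [bill(c) for c in customers]

def parallel(customers, workers=8):
    # Billing is embarrassingly parallel across customers, so a
    # simple process pool (or a batch of cloud workers) applies.
    with Pool(workers) as pool:
        return pool.map(bill, customers)
```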
Abstract:
This thesis examines ERP systems delivered as a cloud service. A literature review was used as the research method. The goal of the thesis is to give the reader an understanding of what the concept of ERP as a cloud service means and how it differs from a traditional ERP system, and to analyse the benefits and drawbacks of both solutions. ERP as a cloud service means a purchased service that is administered and maintained by an external company. Use of the system is paid for with a fixed monthly fee and a fee based on the number of users. When a company uses a cloud-based ERP service, it does not need to invest in hardware. A cloud ERP can be implemented in a private cloud reserved for the company or in a public cloud. An ERP implemented as a cloud service makes short-term cost estimation easier because of its payment structure. The small initial investment encourages the adoption of cloud ERP, especially in small companies. Large companies, however, favour traditional ERP solutions because of their better customizability. The majority of ERP systems are traditional solutions. Companies consider information security risks to be the greatest challenge of an ERP implemented as a cloud service. A cloud ERP makes a company more dependent on the service provider.
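As a toy illustration of the payment structure described above (a fixed monthly fee plus a per-user fee versus a large up-front investment), the following sketch compares cumulative costs over a few time horizons; all prices are hypothetical placeholders, not figures from the thesis.

```python
# A toy cost model for the payment structures described above.
# All figures are hypothetical placeholders.
def cloud_erp_cost(months, users, base_fee=500.0, per_user_fee=40.0):
    # Cloud ERP: fixed monthly fee plus a fee per user, no hardware.
    return months * (base_fee + users * per_user_fee)

def traditional_erp_cost(months, upfront=120000.0, maintenance=1500.0):
    # Traditional ERP: dominated by the initial investment.
    return upfront + months * maintenance

if __name__ == "__main__":
    for months in (12, 36, 60):
        print(months, cloud_erp_cost(months, users=25),
              traditional_erp_cost(months))
```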
Abstract:
The manufacturing industry has always faced the challenge of improving production efficiency, product quality and innovation capability, and has struggled to adopt cost-effective manufacturing systems. In recent years, cloud computing has emerged as one of the major enablers for the manufacturing industry. By combining cloud computing and other advanced manufacturing technologies such as the Internet of Things, service-oriented architecture (SOA), networked manufacturing (NM) and manufacturing grid (MGrid) with existing manufacturing models and enterprise information technologies, a new paradigm called cloud manufacturing has been proposed in the recent literature. This study presents the concepts and ideas of cloud computing and cloud manufacturing. The concept, architecture, core enabling technologies and typical characteristics of cloud manufacturing are discussed, as well as the difference and relationship between cloud computing and cloud manufacturing. The research is based on mixed qualitative and quantitative methods and a case study. The case is a prototype cloud manufacturing solution, a software platform developed in cooperation between ATR Soft Oy and the SW Company China office. The study tries to understand the practical impacts and challenges that derive from cloud manufacturing. The main conclusion of this study is that cloud manufacturing is an approach to achieve the transformation from traditional production-oriented manufacturing to next-generation service-oriented manufacturing. Many manufacturing enterprises are already using a form of cloud computing in their existing network infrastructure to increase the flexibility of their supply chains and reduce resource consumption, and the study finds that the shift from cloud computing to cloud manufacturing is feasible. At the same time, the study points out that the related theory, methodology and applications of cloud manufacturing systems are far from mature; it is still an open field where many new technologies need to be studied.
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required of the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. A further reduction was achieved by employing zero suppression and data compression, so that a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is used extensively, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques were needed there too, to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and now we are waiting to see it in action when the LHC starts running in the autumn of 2008.
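As an illustration of the zero suppression mentioned above, the following sketch transmits only the non-zero channels of a sparse readout frame as (channel, value) pairs; it is a schematic of the idea, not the actual link-board logic.

```python
# A hedged illustration of zero suppression: only hit channels are
# sent as (channel, value) pairs instead of the full channel map,
# which greatly reduces the data volume for sparse frames.
def zero_suppress(frame):
    """frame: list of per-channel values, mostly zeros."""
    return [(ch, v) for ch, v in enumerate(frame) if v != 0]

def restore(hits, n_channels):
    frame = [0] * n_channels
    for ch, v in hits:
        frame[ch] = v
    return frame

frame = [0, 0, 3, 0, 0, 0, 1] + [0] * 93   # sparse 100-channel frame
hits = zero_suppress(frame)                 # [(2, 3), (6, 1)]
assert restore(hits, len(frame)) == frame
```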
Abstract:
The RPC Detector Control System (RCS) is the main subject of this PhD work. The project, involving Lappeenranta University of Technology, Warsaw University and INFN Naples, aims to integrate the different subsystems for the RPC detector and its trigger chain in order to develop a common framework to control and monitor the different parts. In this project, I have been strongly involved during the last three years in the hardware and software development, construction and commissioning, as the main responsible person and coordinator. The CMS Resistive Plate Chambers (RPC) system consists of 912 double-gap chambers at its start-up in the middle of 2008. Continuous control and monitoring of the detector, the trigger and all the ancillary subsystems (high voltages, low voltages, environment, gas and cooling) is required to achieve the operational stability and reliability of such a large and complex detector and trigger system. The role of the RPC Detector Control System is to monitor the detector conditions and performance, to control and monitor all subsystems related to the RPC and their electronics, and to store all the information in a dedicated database, called the Condition DB. The RPC DCS therefore has to assure the safe and correct operation of the sub-detectors during the entire CMS lifetime (more than 10 years), detect abnormal and harmful situations and take automatic protective actions to minimize consequential damage. The analysis of the requirements and project challenges, the architecture design and its development, as well as the calibration and commissioning phases, represent the main tasks of the work developed for this PhD thesis. Different technologies, middleware and solutions have been studied and adopted in the design and development of the different components, and a big challenge consisted in integrating these different parts with each other and into the general CMS control system and data acquisition framework. The RCS installation and commissioning phase, as well as its performance and the first results obtained during the last three years of CMS cosmic runs, will be presented.
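As a schematic of the monitor-store-protect cycle described above, the following sketch polls subsystem values, archives them, and triggers a protective action when a reading leaves its safe range; the names, thresholds and storage stand-in are hypothetical, and the real DCS is built on industrial SCADA tooling rather than a script like this.

```python
# A hypothetical schematic of one DCS cycle: poll each subsystem,
# archive readings (standing in for the Condition DB), and react
# automatically to out-of-range values. Names and limits are made up.
import time

SAFE_RANGES = {"high_voltage": (8500.0, 9800.0),  # volts, illustrative
               "gas_flow": (0.8, 1.2)}            # arbitrary units

def read_sensor(name):
    ...  # placeholder for the actual hardware access layer

def archive(db, name, value, timestamp):
    db.append((timestamp, name, value))           # stand-in for Condition DB

def protect(name):
    print(f"ALARM: {name} out of range, taking protective action")

def dcs_cycle(db):
    now = time.time()
    for name, (low, high) in SAFE_RANGES.items():
        value = read_sensor(name)
        archive(db, name, value, now)
        if value is not None and not (low <= value <= high):
            protect(name)
```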
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems and the impact of cloud computing technology on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting the healthcare IT infrastructure and improving healthcare services. The thesis therefore explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation. Therefore, transcoding of a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus the IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see if the current computing resources are adequate for the video requests. Therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repetition of transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is also not cost-efficient, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface (MPI) based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at the group of pictures (GOP) level.
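As an illustration of the computation and storage trade-off described above, the following sketch keeps a transcoded video in the repository only while the expected cost of re-transcoding it on demand exceeds the cost of storing it; the prices and popularity estimates are hypothetical placeholders, not the thesis's actual strategy parameters.

```python
# A hedged sketch of the storage-versus-recomputation decision.
# Prices and the popularity estimate are hypothetical placeholders.
def keep_in_storage(size_gb, transcode_cost, expected_requests_per_month,
                    storage_price_gb_month=0.10):
    storage_cost = size_gb * storage_price_gb_month
    recompute_cost = expected_requests_per_month * transcode_cost
    # Keep the cached copy while recomputing on demand would cost more.
    return recompute_cost > storage_cost

# A popular video is retained; a rarely watched one is evicted and
# re-transcoded only if it is requested again.
print(keep_in_storage(2.0, transcode_cost=0.50, expected_requests_per_month=30))   # True
print(keep_in_storage(2.0, transcode_cost=0.50, expected_requests_per_month=0.1))  # False
```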
Abstract:
Smartphones have become part and parcel of our lives, as mobility provides the freedom of not being bound by time and space. In addition, the number of smartphones produced each year is skyrocketing. However, this has also created discrepancies, or fragmentation, among devices and operating systems, which in turn has made it exceedingly hard for developers to deliver hundreds of similarly featured applications in various versions for market consumption. This thesis is an attempt to investigate whether cloud-based mobile development platforms can mitigate and eventually eliminate fragmentation challenges. During this research, we selected and analyzed the most popular cloud-based development platforms and tested their integrated cloud features. The research showed that cloud-based mobile development platforms may be able to reduce mobile fragmentation and enable a single codebase to deliver a mobile application for different platforms.
Abstract:
Laser scanning is becoming an increasingly popular method for measuring 3D objects in industrial design. Laser scanners produce a cloud of 3D points. For CAD software to be able to use such data, however, this point cloud needs to be turned into a vector format. A popular way to do this is to triangulate the assumed surface of the point cloud using alpha shapes. Alpha shapes start from the convex hull of the point cloud and gradually refine it towards the true surface of the object. It is often nontrivial to decide when to stop this refinement. One criterion is to stop when the homology of the object stops changing; this is known as the persistent homology of the object. The goal of this thesis is to develop a way to compute the homology of a given point cloud as it is processed with alpha shapes, and to infer from it when the persistent homology has been reached. In practice, the computation of such a characteristic of the target might be applied, for example, to power line tower span analysis.
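As a sketch of the computation the thesis aims at, the following uses the GUDHI library (an assumed tool choice; the abstract names no software) to build the alpha filtration of a small point cloud, compute its persistent homology, and read off the Betti numbers once the refinement is complete.

```python
# A minimal sketch, assuming GUDHI as the implementation library:
# build the alpha complex of a point cloud, compute persistence, and
# look for long-lived intervals, which indicate features that survive
# the refinement, i.e. the persistent homology.
import gudhi

points = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]

alpha = gudhi.AlphaComplex(points=points)
st = alpha.create_simplex_tree()   # filtration values are squared alpha radii

# Persistence pairs: (dimension, (birth, death)); pairs with long
# lifetimes correspond to topological features that persist as the
# alpha shape is refined towards the true surface.
for dim, (birth, death) in st.persistence():
    print(dim, birth, death)

# Betti numbers of the full complex, i.e. the homology once the
# refinement stops changing it.
print(st.betti_numbers())
```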