111 results for distributed computing projects

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

90.00%

Publisher:

Abstract:

Creating a Pilot Project to support the pre-development stage of software product elaboration can be used as an approach for improving the way an information technology project is run as a whole. The subject is not new, but no model has yet been presented that describes this important early project stage in depth. This Master's Thesis presents research results and findings on the pre-development study from a Software Engineering point of view. The aspects of feasibility study and pilot prototype development are analyzed, and as a result a Pilot Project technique is formulated and its scheme presented. The experimental part focuses on one particular area of applying the Pilot Project scheme: internationally distributed software projects. Their specific characteristics, obstacles, advantages and disadvantages are considered using the cross-border region of Russia and Finland as an example, and a real case of applying the Pilot Project technique is given.

Relevance:

80.00%

Publisher:

Abstract:

Web services form a central part of the Semantic Web. They provide modern and efficient tooling for distributed computing and lay the foundation for service-oriented architectures. Networked, automated business requires continuous activity from all parties. In addition, a system supporting it must be flexible and must support versatile functionality. These goals can be achieved by composing web services. The composition process consists of a set of tasks such as service modelling, service composition, service execution and verification. In this thesis a simple business process was implemented. For the implementation, alternative standards and implementation techniques were examined. Aspects related to optimizing the execution were also taken into account.
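
Since the abstract lists the composition tasks without fixing an implementation, a minimal sketch may help to make the idea concrete. The two service functions below are invented placeholders, not the thesis's actual process or any particular standard such as WS-BPEL.

```python
# Minimal sketch of sequential web-service composition (hypothetical services).
# Real compositions would be described in an orchestration standard and run by
# an engine; here plain functions stand in for remote service calls.

def check_inventory(order):
    # Placeholder for an inventory web service call.
    return {"order": order, "in_stock": True}

def charge_customer(order):
    # Placeholder for a payment web service call.
    return {"order": order, "charged": True}

def order_process(order):
    """Compose the two services into one business process."""
    result = check_inventory(order)
    if not result["in_stock"]:
        return {"order": order, "status": "rejected"}
    charge_customer(order)
    return {"order": order, "status": "accepted"}

print(order_process("order-42"))
```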

Relevance:

80.00%

Publisher:

Abstract:

Applications for mobile devices are in wide use today. A mobile application typically offers its user a fixed, predefined set of functionality and cannot adapt to its changing usage environment. If an application were aware of its context and its changes, it could offer the user features appropriate to the situation. Context-aware distributed applications, however, require a considerably more complex architecture than traditional applications in order to work. This thesis presents a software architecture intended for distributed, context-aware applications. The work is based on an architecture for mobile applications developed in the CAPNET research project at the University of Oulu. The purpose of this work is to offer solutions to the shortcomings that surfaced during the development and testing of the CAPNET architecture. For example, the definition of the architecture's components should be made more precise, and the components should be divided into horizontal layers according to their properties and platform dependence. The thesis reviews existing technologies that support the development of distributed and context-aware systems and analyzes their suitability for the CAPNET architecture. It presents the CAPNET architecture and proposes a new architecture with a layered division of the components. In the proposal, the components of the architecture and the structure of the system are specified and modelled with UML. The result is an architecture specification that divides the components of the current architecture into layers, with clearly and precisely defined component interfaces. The work also gives the project team a good starting point for designing and implementing the new architecture.
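
As a rough illustration of the context-awareness the abstract describes, here is an observer-style sketch in which components subscribe to context changes. The class and method names are illustrative, not taken from the CAPNET architecture.

```python
# Schematic observer-style context manager, loosely in the spirit of a
# context-aware architecture; names are invented, not CAPNET components.

class ContextManager:
    def __init__(self):
        self._subscribers = {}  # context key -> list of callbacks
        self._context = {}

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)

    def update(self, key, value):
        # Store the new context value and notify interested components.
        self._context[key] = value
        for callback in self._subscribers.get(key, []):
            callback(value)

manager = ContextManager()
manager.subscribe("location", lambda loc: print(f"UI adapts to: {loc}"))
manager.update("location", "meeting room")
```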

Relevance:

80.00%

Publisher:

Abstract:

CORBA (Common Object Request Broker Architecture) is a widespread distributed computing architecture commonly used in industry. CORBA scales to needs of different sizes and can also be exploited in embedded wireless devices. In an embedded environment it is essential to build the interfaces to be lightweight, stable and easily extensible without compromising compatibility with earlier interfaces. In wireless devices resources such as the amount of memory and processing power are very limited, so the interface must be designed and implemented optimally. The services must also take into account the limitations of wireless communication, such as slow data transfer rates and the connectionless nature of the transfer. In this thesis a CORBA interface to a GSM terminal was designed and implemented, and it was found to meet the goals set for it. The interface offers all the most common features of a GSM terminal and is extensible for future products and network technologies. Extensibility is achieved, for example, by describing the terminal's features in a generic description language such as XML (Extensible Markup Language).
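
To illustrate the extensibility mechanism mentioned last, here is a minimal sketch of reading an XML capability description with Python's standard library. The element and attribute names are invented; the thesis only states that features are described in a generic language such as XML.

```python
# Sketch: parse a hypothetical XML description of GSM terminal capabilities.
# The element and attribute names are invented for illustration.
import xml.etree.ElementTree as ET

capability_xml = """
<terminal model="example-gsm">
  <feature name="sms" supported="true"/>
  <feature name="data-call" supported="false"/>
</terminal>
"""

root = ET.fromstring(capability_xml)
features = {
    f.get("name"): f.get("supported") == "true"
    for f in root.findall("feature")
}
print(features)  # {'sms': True, 'data-call': False}
```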

Relevance:

80.00%

Publisher:

Abstract:

The aim of this Master's Thesis is to optimize the computation of customers' electricity bills by means of distributed computing. As smart, remotely read energy meters arrive in every household, energy companies are obligated to calculate customers' electricity bills based on hourly metering data. The growing amount of data also increases the number of required computation tasks. The thesis evaluates alternatives for implementing distributed computation and takes a closer look at the possibilities of cloud computing. In addition, simulations were run to assess the differences between parallel and sequential computation. A measurement-tree algorithm was developed to support the correct calculation of the bills.
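
A minimal sketch of the parallel billing idea, computing bills from hourly readings across worker processes. The tariff and data are invented, and the thesis's measurement-tree algorithm for validating the readings is not reproduced here.

```python
# Sketch: compute customers' bills from hourly meter readings in parallel.
from multiprocessing import Pool

PRICE_PER_KWH = 0.12  # illustrative flat tariff, EUR/kWh

def bill(customer_readings):
    customer, hourly_kwh = customer_readings
    return customer, round(sum(hourly_kwh) * PRICE_PER_KWH, 2)

if __name__ == "__main__":
    # Invented data: 1000 customers, 24 hourly readings of 1.0 kWh each.
    readings = {f"customer-{i}": [1.0] * 24 for i in range(1000)}
    with Pool() as pool:
        bills = dict(pool.map(bill, readings.items()))
    print(bills["customer-0"])  # 2.88
```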

Relevance:

80.00%

Publisher:

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
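
A toy sketch of the dynamic reconfiguration idea: remap tasks away from cores reported faulty or overheated. All names are illustrative; the thesis develops this through formal refinement and VHDL generation, not Python.

```python
# Schematic sketch of agent-style dynamic reconfiguration on a many-core
# platform: reassign tasks on unhealthy cores to the least-loaded healthy core.

def remap(task_to_core, healthy_cores):
    """Move tasks off unhealthy cores, balancing load across healthy ones."""
    load = {c: 0 for c in healthy_cores}
    for task, core in task_to_core.items():
        if core in load:
            load[core] += 1
    for task, core in list(task_to_core.items()):
        if core not in healthy_cores:
            target = min(load, key=load.get)
            task_to_core[task] = target
            load[target] += 1
    return task_to_core

mapping = {"t1": 0, "t2": 1, "t3": 1, "t4": 2}
# Core 1 is reported faulty; its tasks migrate to cores 0 and 2.
print(remap(mapping, healthy_cores={0, 2}))
```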

Relevance:

40.00%

Publisher:

Abstract:

Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation. Therefore, transcoding of a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus the IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see if the current computing resources are adequate for the video requests. Therefore, the work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitters in a video. Furthermore, in a cloud computing environment such as Amazon EC2, the computing resources are more expensive as compared with the storage resources. Therefore, to avoid repetition of transcoding operations, a transcoded video needs to be stored for a certain time. To store all videos for the same amount of time is also not cost-efficient because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at the group-of-pictures level.
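
The computation-storage trade-off can be stated as a simple rule: keep a transcoded video while expected re-transcoding cost exceeds storage cost. A minimal sketch with invented prices follows; it is not the thesis's actual strategy, only the underlying comparison.

```python
# Sketch of the computation-storage trade-off: keep a transcoded video in the
# repository only while storing it is cheaper than re-transcoding on demand.
# Prices and popularity estimates are invented for illustration.

def keep_in_storage(expected_requests_per_month, transcode_cost,
                    storage_cost_per_month):
    """True if storing for the next month beats re-transcoding on demand."""
    retranscode_cost = expected_requests_per_month * transcode_cost
    return storage_cost_per_month < retranscode_cost

# Popular video: 50 expected requests, 0.20 per transcode, 0.10/month storage.
print(keep_in_storage(50, 0.20, 0.10))   # True: keep it stored
# Unpopular video: 0.1 expected requests per month.
print(keep_in_storage(0.1, 0.20, 0.10))  # False: drop it, re-transcode if asked
```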

Relevance:

30.00%

Publisher:

Abstract:

Networked, international product development is an important part of success in today's changing business world. To make operations more efficient, project activities must also be adapted to the international operating environment, and to remain competitive, they must be improved continuously. One means of doing so is project learning, which can be promoted in many different ways. This thesis concentrates on the learning opportunities offered by developing project knowledge management. According to the literature, sharing project knowledge and exploiting it in subsequent projects is one of the prerequisites of project learning, and this has been taken as the central viewpoint of this study. In addition, to narrow the research area, the work examines project learning specifically between international product development projects. The aim of the work is to present the central challenges of project learning and to find a concrete solution to meet these challenges. Product development activities and an internationally distributed project organization also face particular challenges, such as the dispersion of knowledge, turnover of project personnel, confidentiality of information and geographical issues (e.g., time zones and site location). These special challenges were taken into account in the search for a solution. The challenges were ultimately addressed with an information-technology-based solution designed specifically around the needs and challenges of the example organization. The thesis examines the effect of the designed solution on project learning and how it answers the observed challenges. The results show that project learning took place, although the learning was difficult to observe directly among the members of the research organization. Project learning can nevertheless be said to occur if project knowledge is easily available to the whole project team and is well organized; these conditions, among others, were fulfilled. Project learning is generally seen as a challenging development area in the example organization. A large part of the knowledge is so-called tacit knowledge, which is difficult or impossible to put into written form, so knowledge transfer remains largely dependent on personal interaction. Nevertheless, project learning can be developed through various operating models and methods. Development, however, requires resources, persistence and time. Many changes may also require a change in the organizational culture and influencing the members of the organization. Motivation, positive perceptions and clear strategic goals create a stable foundation for developing project learning.

Relevance:

30.00%

Publisher:

Abstract:

The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
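
The core MPIT idea, a dedicated communication thread overlapping with computation, can be modelled with standard threads and a queue. The sketch below is a standalone schematic, not the thesis's actual MPI code; a queue stands in for the message-passing layer.

```python
# Schematic model of overlapping communication and computation: a dedicated
# communication thread services outgoing messages while the main code computes.
import threading, queue

outbox = queue.Queue()

def communication_thread():
    while True:
        msg = outbox.get()
        if msg is None:           # shutdown sentinel
            break
        print(f"sent: {msg}")     # stand-in for an MPI send

comm = threading.Thread(target=communication_thread)
comm.start()

partial = 0
for i in range(5):
    partial += i * i                   # computation proceeds...
    outbox.put(("partial", partial))   # ...while results ship asynchronously

outbox.put(None)
comm.join()
```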

Relevance:

30.00%

Publisher:

Abstract:

Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation, however, has its own problems. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, communication protocols, partitioning of the problem, distribution of the problem, capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. The use of this computing power for distributed simulation requires the simulation to adapt to a changing load situation: all or part of the simulation work must be removed from a workstation when the owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied and shown to perform better than no load balancing, and different approaches to load balancing are discussed.
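
A toy sketch of the idle-workstation policy described above: when an owner's interactive load returns, hand that workstation's simulation partitions to idle peers. Thresholds and load readings are invented; this is not Diworse's actual mechanism.

```python
# Sketch: migrate simulation partitions off workstations whose owners are
# active again, onto the least-loaded idle workstation.

OWNER_BUSY_THRESHOLD = 0.25  # CPU share that signals the owner is back

def rebalance(partitions, loads):
    """Move partitions off busy workstations onto the least-loaded idle one."""
    idle = [w for w, load in loads.items() if load < OWNER_BUSY_THRESHOLD]
    for part, host in list(partitions.items()):
        if loads[host] >= OWNER_BUSY_THRESHOLD and idle:
            target = min(idle, key=loads.get)
            partitions[part] = target
    return partitions

partitions = {"p1": "ws1", "p2": "ws2"}
loads = {"ws1": 0.80, "ws2": 0.05, "ws3": 0.10}
print(rebalance(partitions, loads))  # p1 moves off the busy ws1
```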

Relevance:

30.00%

Publisher:

Abstract:

The size and complexity of software development projects are growing very fast, while according to previous research the proportion of successful projects remains quite low. Although almost every project team knows the main areas of responsibility that would help to finish a project on time and on budget, this knowledge is rarely used in practice. It is therefore important to evaluate the success of existing software development projects and to suggest a method for evaluating the chances of success that can be used in software development projects. The main aim of this study is to evaluate the success of projects in the selected geographical region (Russia-Ukraine-Belarus). The second aim is to compare existing models of success prediction and to determine their strengths and weaknesses. The research was done as an empirical study. A survey with structured forms and theme-based interviews were used as the data collection methods. The information gathering was done in two stages. In the first stage, the project manager or someone with similar responsibilities answered the questions over the Internet. In the second stage, the participant was interviewed, and his or her answers were discussed and refined; this made it possible to get accurate information about each project and to avoid errors. It was found that there are many problems in software development projects. These problems are widely known and have been discussed in the literature many times. The research showed that most of the projects have problems with schedule, requirements, architecture, quality, and budget. A comparison of two models of success prediction showed that The Standish Group overestimates problems in a project, while McConnell's model can help to identify problems in time and avoid trouble later. A framework for evaluating the chances of success in distributed projects is suggested. The framework is similar to The Standish Group's model but is customized for distributed projects.
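
As a rough illustration of checklist-style success estimation, the sketch below scores a project against weighted questions and flags it. The questions, weights and threshold are invented; they are not The Standish Group's or McConnell's actual instruments.

```python
# Sketch of checklist-style success estimation in the spirit of the models
# compared above. All items and weights are invented for illustration.

CHECKLIST = {
    "requirements are written and agreed": 3,
    "schedule has explicit buffer": 2,
    "architecture reviewed before coding": 2,
    "budget tracked against plan": 1,
}

def success_score(answers):
    """Fraction of weighted checklist items answered 'yes'."""
    total = sum(CHECKLIST.values())
    earned = sum(w for item, w in CHECKLIST.items() if answers.get(item))
    return earned / total

answers = {"requirements are written and agreed": True,
           "budget tracked against plan": True}
score = success_score(answers)
print(f"score {score:.0%} -> {'at risk' if score < 0.6 else 'on track'}")
```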

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

Digital business ecosystems (DBE) are becoming an increasingly popular concept for modelling and building distributed systems in heterogeneous, decentralized and open environments. Information and communication technology (ICT) enabled business solutions have created an opportunity for automated business relations and transactions. The deployment of ICT in business-to-business (B2B) integration seeks to improve competitiveness by establishing real-time information and offering better information visibility to business ecosystem actors. The product, component and raw material flows in supply chains are traditionally studied in logistics research. In this study, we expand the research to cover the processes parallel to the service and information flows as information logistics integration. In this thesis, we show how better integration and automation of information flows enhance the speed of processes and thus provide cost savings and other benefits for organizations. Investments in DBE are intended to add value through business automation and are key decisions in building up information logistics integration. Business solutions that build on automation are important sources of value in networks that promote and support business relations and transactions. Value is created through improved productivity and effectiveness when new, more efficient collaboration methods are discovered and integrated into DBE. Organizations, business networks and collaborations, even with competitors, form DBE in which information logistics integration has a significant role as a value driver. However, traditional economic and computing theories do not treat digital business ecosystems as a separate form of organization, and they do not provide conceptual frameworks that can be used to explore digital business ecosystems as value drivers; combined internal management and external coordination mechanisms for information logistics integration are not currently part of a company's strategic process. In this thesis, we have developed and tested a framework for exploring digital business ecosystems and a coordination model for digital business ecosystem integration; moreover, we have analysed the value of information logistics integration. The research is based on a case study and on mixed methods, in which we use the Delphi method and Internet-based tools for idea generation and development. We conducted many interviews with key experts, which we recorded, transcribed and coded to find success factors. Quantitative analyses were based on a Monte Carlo simulation, which sought cost savings, and on Real Option Valuation, which sought an optimal investment program at the ecosystem level. This study provides valuable knowledge regarding information logistics integration by utilizing a suitable business process information model for collaboration. The information model is based on business process scenarios and on detailed transactions for the mapping and automation of product, service and information flows. The research results illustrate the current gap in understanding information logistics integration in a digital business ecosystem. Based on the success factors, we were able to illustrate how specific coordination mechanisms related to network management and orchestration could be designed. We also pointed out the potential of information logistics integration in value creation. With the help of global standardization experts, we produced the design of the core information model for B2B integration. We built the quantitative analysis using a Monte Carlo based simulation model and a Real Option Value model. This research covers relevant new research disciplines, such as information logistics integration and digital business ecosystems, where the current literature needs to be improved. The research was carried out with high-level experts and managers responsible for global business network B2B integration. However, it was dominated by one industry domain, and therefore a more comprehensive exploration should be undertaken to cover a larger population of business sectors. Based on this research, a new quantitative survey could provide new possibilities to examine information logistics integration in digital business ecosystems. The value activities indicate that further studies should continue, especially with regard to collaboration issues in integration, focusing on a user-centric approach. We should better understand how real-time information supports customer value creation by embedding the information into the lifetime value of products and services. The aim of this research was to build competitive advantage through B2B integration to support a real-time economy. For practitioners, this research created several tools and concepts to improve value activities, information logistics integration design, and management and orchestration models. Based on the results, the companies were able to better understand the formulation of the digital business ecosystem and the importance of joint efforts in collaboration. However, the challenge of incorporating this new knowledge into strategic processes in a multi-stakeholder environment remains. This challenge has been noted, and new projects have been established in pursuit of a real-time economy.
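
To illustrate the kind of Monte Carlo cost-savings estimate mentioned above, here is a minimal sketch that samples uncertain per-transaction savings and volumes. All distributions and parameters are invented; this is not the thesis's actual simulation model.

```python
# Sketch of a Monte Carlo cost-savings estimate: sample uncertain inputs,
# report the median and a downside percentile of annual savings.
import random

def simulate_annual_savings(runs=10_000):
    results = []
    for _ in range(runs):
        saving_per_tx = random.gauss(0.50, 0.15)        # EUR saved per transaction
        transactions = random.randint(80_000, 120_000)  # yearly volume
        results.append(saving_per_tx * transactions)
    results.sort()
    return results[len(results) // 2], results[int(runs * 0.05)]

median, p5 = simulate_annual_savings()
print(f"median savings ~ {median:,.0f} EUR, 5th percentile ~ {p5:,.0f} EUR")
```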

Relevance:

30.00%

Publisher:

Abstract:

With the new age of the Internet of Things (IoT), everyday objects such as mobile smart devices are starting to be equipped with cheap sensors and low-energy wireless communication capability. Mobile smart devices (phones, tablets) have become ubiquitous, with everyone having access to at least one device. There is an opportunity to build innovative applications and services by exploiting these devices' untapped rechargeable energy, sensing and processing capabilities. In this thesis, we propose, develop, implement and evaluate LoadIoT, a peer-to-peer load-balancing scheme that can distribute tasks among a plethora of mobile smart devices in the IoT world. We develop and demonstrate an Android-based proof-of-concept load-balancing application, and we present a model of the system which is used to validate the efficiency of the load-balancing approach under varying application scenarios. Load-balancing concepts can be applied to IoT scenarios linked to smart devices, reducing the traffic sent to the cloud and the energy consumption of the devices. The data acquired from the experimental outcomes enable us to determine the feasibility and cost-effectiveness of load-balanced P2P smartphone-based applications.
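
A toy sketch of peer-to-peer task placement in the spirit of the approach above: assign each task to the peer with the most headroom, scoring devices by battery level and current load. The scoring formula and device data are invented; this is not LoadIoT's actual policy.

```python
# Sketch: place each task on the device with the highest battery-minus-load
# score, then account for the added load so later tasks spread out.

def pick_peer(devices):
    """Return the device name with the highest battery-minus-load score."""
    return max(devices, key=lambda d: devices[d]["battery"] - devices[d]["load"])

devices = {
    "phone-a":  {"battery": 0.90, "load": 0.20},
    "phone-b":  {"battery": 0.40, "load": 0.10},
    "tablet-c": {"battery": 0.75, "load": 0.60},
}
for task in ["sense-temp", "compress-img"]:
    peer = pick_peer(devices)
    devices[peer]["load"] += 0.45   # invented cost of the newly placed task
    print(task, "->", peer)         # second task lands on a different peer
```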