884 results for Hadoop distributed file system (HDFS)
Abstract:
This Master's thesis presents general principles of software testing and verification, and discusses the verification of smartphone software in more detail. The thesis also introduces the Symbian operating system used in smartphones. In the practical part of the work, a server running on the Symbian operating system was designed and implemented to monitor and record the use of system resources. Verification is an important and costly task in the development cycle of smartphone software. Costs can be reduced by automating part of the verification process. The implemented server automates the monitoring of system resources by recording data about them to a file while tests are run. When the tests are run again, the new results are compared against the reference recording. If the results are not within the error margins set by the user, the user is notified. Defining the error margins and the reference recording may prove difficult. However, if they are defined appropriately, the server provides testers with useful information about deviations in system resource consumption.
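The comparison step the abstract describes (new measurements checked against a reference recording within user-set error margins) can be sketched briefly. This is a minimal sketch, assuming resource samples are plain floats keyed by resource name; the function and field names are hypothetical, not taken from the implemented Symbian server.

```python
# Minimal sketch of baseline comparison; all names are hypothetical.
def check_against_baseline(baseline: dict[str, float],
                           current: dict[str, float],
                           tolerance: float = 0.10) -> list[str]:
    """Return warnings for resources whose usage deviates from the
    reference recording by more than the given relative margin."""
    warnings = []
    for resource, reference in baseline.items():
        observed = current.get(resource)
        if observed is None:
            warnings.append(f"{resource}: missing from new test run")
            continue
        if reference == 0:
            deviation = abs(observed)
        else:
            deviation = abs(observed - reference) / abs(reference)
        if deviation > tolerance:
            warnings.append(
                f"{resource}: {observed:.1f} deviates {deviation:.0%} "
                f"from reference {reference:.1f}")
    return warnings

# Example: flag heap use that drifted beyond the 10 % margin.
print(check_against_baseline({"heap_kb": 1200.0}, {"heap_kb": 1500.0}))
```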
Abstract:
The nature of client-server architecture implies that some modules are delivered to customers. These publicly distributed commercial software components are at risk because users (and thus potential malefactors) have physical access to some components of the distributed system. The problem becomes even worse if interpreted programming languages are used to create the client-side modules. Java, which was designed to be compiled into platform-independent byte-code, is no exception and carries this additional risk. Along with advantages such as verifying the code before execution (to ensure that the program does not perform illegal operations), Java has some disadvantages: at the byte-code stage, a Java program still contains comments, line numbers and some other information that can be used for reverse engineering. This Master's thesis focuses on the protection of Java-based client-server applications. I present a mixture of methods to protect software from tortious acts, then put the theoretical assumptions into practice and examine their efficiency on examples of Java code. One of the criteria for evaluating the system is that the product is used in a specialized area: interactive television.
Abstract:
We propose a new approach and related indicators for globally distributed software support and development, based on a three-year process improvement project in a globally distributed engineering company. The company develops, delivers and supports a complex software system with tailored hardware components and unique end-customer installations. By applying domain knowledge from operations management on lead time reduction and its multiple benefits to process performance, the workflows of the globally distributed software development and multitier support processes were measured and monitored throughout the company. The results show that global end-to-end process visibility and centrally managed reporting at all levels of the organization catalyzed a change process toward significantly better performance. With the new performance indicators based on lead times and their variation, together with fixed control procedures, the case company was able to report faster bug-fixing cycle times, improved response times and generally better customer satisfaction in its global operations. In all, lead times to implement new features and to respond to customer issues and requests were reduced by 50%.
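As a rough illustration of indicators built on lead times and their variation, the sketch below computes a mean and control limits in the style of statistical process control. The three-sigma limits and the sample data are illustrative assumptions on my part, not necessarily the case company's actual procedure.

```python
# Hedged sketch: lead-time indicators with conventional three-sigma
# control limits; the data and limit convention are assumptions.
from statistics import mean, stdev

def lead_time_indicators(lead_times_days: list[float]) -> dict[str, float]:
    """Summarize lead times (e.g. issue opened -> fix delivered) and
    derive control limits for flagging unusually slow cases."""
    avg = mean(lead_times_days)
    sd = stdev(lead_times_days)
    return {
        "mean": avg,
        "stdev": sd,
        "upper_control_limit": avg + 3 * sd,
        "lower_control_limit": max(0.0, avg - 3 * sd),
    }

print(lead_time_indicators([4.0, 6.5, 5.0, 12.0, 3.5, 7.0]))
```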
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully-connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem contradictory with those found in the literature of secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
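The safety/liveness split described above can be rendered schematically as follows. This is a paraphrase of the informal statement in the abstract, not the thesis's formal specification; the notation (H for the set of honest processes) is my own.

```latex
% Schematic rendering of the fair exchange properties; H denotes the
% set of honest processes, item_i the item process i expects, and
% input_i its input. A paraphrase, not the thesis's exact specification.
\begin{align*}
\textbf{Safety:}\quad
  &\bigl(\forall i \in H:\ i \text{ obtains } \mathit{item}_i\bigr)
   \ \lor\
   \bigl(\forall i \in H:\ \text{no } j \neq i
   \text{ learns anything about } \mathit{input}_i\bigr)\\
\textbf{Liveness:}\quad
  &\forall i \in H:\ i \text{ eventually decides (obtains or aborts)}
\end{align*}
```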
Abstract:
This thesis examines the history and evolution of information system process innovation (ISPI) processes (adoption, adaptation, and unlearning) within information system development (ISD) work in an internal information system (IS) department and in two IS software house organisations in Finland over a 43-year period. The study offers insights into the influential actors and their dependencies in deciding over ISPIs. The research uses a qualitative approach, and the methodology involves describing the ISPI processes, how the actors searched for ISPIs, and how the relationships between the actors changed over time. The existing theories were evaluated using conceptual models of the ISPI processes based on the innovation literature in the IS area. The main focus of the study was to observe changes in the main ISPI processes over time. The main contribution of the thesis is a new theory, where the term theory should be understood as 1) a new conceptual framework of the ISPI processes, and 2) new ISPI concepts and categories and the relationships between the ISPI concepts inside the ISPI processes. The thesis gives a comprehensive and systematic account of the history and evolution of the ISPI processes; reveals the factors that affected ISPI adoption; studies ISPI knowledge acquisition, information transfer, and adaptation mechanisms; and reveals the mechanisms affecting ISPI unlearning, the changes in the ISPI processes, and the diverse actors involved in the processes. The results show that both the internal IS department and the two IS software houses sought opportunities to improve their technical skills and career paths, and this created an innovative culture. When new technology generations come to the market, the platform systems need to be renewed, and therefore the organisations invest in ISPIs in cycles. The extent of internal learning and experiments was higher than that of external knowledge acquisition. Until the outsourcing event (1984), decision-making was centralised and the internal IS department was very influential over ISPIs. After outsourcing, decision-making became distributed between the two IS software houses, the IS client, and its internal IT department. The IS client wanted to ensure that the information systems would serve the business of the company and thus wanted to co-operate closely with the software organisations.
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power; distributed simulation can be used to meet these demands. Distributed simulation has its problems. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning of the problem, the distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. Using this computing power for distributed simulation requires the simulation to adapt to a changing load situation: all or part of the simulation work must be removed from a workstation when the owner wishes to use it again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied, it is shown to perform better than no load balancing, and different approaches to load balancing are discussed.
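The migration decision described above (withdraw simulation work when a workstation's owner returns or external load leaves too little capacity) can be sketched as below. The class, field names and threshold are illustrative assumptions, not Diworse's actual interface.

```python
# Illustrative sketch of an owner-aware migration decision for a
# distributed simulation worker; names and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Workstation:
    name: str
    owner_active: bool   # owner is using the machine interactively
    load_average: float  # e.g. one-minute UNIX load average

def should_migrate_away(ws: Workstation, load_threshold: float = 1.5) -> bool:
    """Withdraw simulation work if the owner has returned or the
    external load leaves too little capacity for the simulation."""
    return ws.owner_active or ws.load_average > load_threshold

# Example: the owner came back, so this partition should be moved.
print(should_migrate_away(Workstation("ws42", owner_active=True, load_average=0.3)))
```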
Abstract:
The control of the right application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully or semi-autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, so specialized domain agents are independent of negotiation processes and autonomous system agents perform the monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the exchange of information between agents.
Abstract:
Tightening international competition forces manufacturers of automation systems to adopt new methods for improving the performance and flexibility of their systems. Agent technology has been proposed for use with existing automation systems to meet the new challenges posed to automation. Agents are autonomous, social actors that carry out the tasks assigned to them. They offer a unified framework for implementing advanced functionality. With agent technology, an automation system can be made to operate flexibly and fault-tolerantly. This thesis describes the ideas and concepts of agent technology. In addition, it examines the technology's suitability for developing complex control systems and identifies potential uses for it in a board mill. The thesis also discusses the ideas that have led to the use of agent technology in automation systems, and describes the structure and test results of an agent-assisted example application. As a result of the study, several potential uses for agent technology in the board mill were identified. The example application shows that it is well suited to implementing advanced functionality in automation systems.
Abstract:
As the number of wireless terminals such as smartphones and communicators grows, so does interest in network services and applications that add value for mobile users. The purpose of this work was to study how wireless devices running the Symbian operating system can make use of network files. The work evaluated the usability of different file-sharing protocols in wireless networks, specified the requirements for software providing remote file access on the Symbian platform, and produced a preliminary design for implementing the software. Transparent remote file access requires implementing a file-sharing protocol beneath the file access mechanism common to all applications. Remote file access can be based on different file-sharing protocols, such as NFS or CIFS running over IP. The limitations that wirelessness imposes on devices and data transfer may reduce the usability of the application and must be taken into account when implementing the software. The Symbian platform is based on a client-server architecture in which client applications use file services through a shared file server. A remote file connection can be implemented by attaching a new library module to the file server. The module implementing the protocol must convert the protocol messages into a form suitable for the file server while handling other concurrent file operations. The designed module architecture allows different protocol alternatives to be used for implementing the remote file connection.
Abstract:
A Wiener system is a linear time-invariant filter, followed by an invertible nonlinear distortion. Assuming that the input signal is an independent and identically distributed (iid) sequence, we propose an algorithm for estimating the input signal only by observing the output of the Wiener system. The algorithm is based on minimizing the mutual information of the output samples, by means of a steepest descent gradient approach.
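The setting can be written compactly as follows. The notation is a conventional rendering of the abstract's description (an inverse system driven toward statistically independent outputs), not taken verbatim from the paper.

```latex
% Wiener system: LTI filter h followed by an invertible nonlinearity f,
% driven by an iid input s(t); only y(t) is observed. The estimator
% g_theta inverts the system; its parameters are tuned by steepest
% descent to minimize the mutual information between output samples.
\begin{align*}
y(t) &= f\bigl((h * s)(t)\bigr), \qquad s(t)\ \text{iid}\\
\hat{s}(t) &= g_{\theta}\bigl(y\bigr)(t), \qquad
\theta^{\star} = \arg\min_{\theta}\,
  I\bigl(\hat{s}(t);\ \hat{s}(t-1), \hat{s}(t-2), \dots\bigr)
\end{align*}
```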
Abstract:
The activated sludge process - the main biological technology usually applied in wastewater treatment plants (WWTP) - directly depends on living organisms (microorganisms), and therefore on the unforeseen changes they produce. Good plant operation is possible if the supervisory control system is able to react to changes and deviations in the system and can take the necessary actions to restore the system's performance. These decisions are often based both on physical, chemical and microbiological principles (suitable to be modelled by conventional control algorithms) and on some knowledge (suitable to be modelled by knowledge-based systems). But one of the key problems in the design of knowledge-based control systems is the development of an architecture able to manage the different elements of the process efficiently (integrated architecture), to learn from previous cases (specific experimental knowledge), and to acquire the domain knowledge (general expert knowledge). These problems increase when the process belongs to an ill-structured domain and is composed of several complex operational units. Therefore, an integrated and distributed AI architecture seems to be a good choice. This paper proposes an integrated and distributed supervisory multi-level architecture for the supervision of WWTP that overcomes some of the main troubles of classical control techniques as well as those of knowledge-based systems applied to real-world systems.
Abstract:
Workflow management systems aim at the controlled execution of complex application processes in distributed and heterogeneous environments. These systems will shape the structure of information systems in business and non-business environments. E-business and system integration are fertile soil for workflow (WF) and groupware tools. This thesis studies WF and groupware tools in order to gather in-house knowledge of WF to better utilize WF solutions in the future, and focuses on SAP Business Workflow in order to find a global solution for Application Link Enabling (ALE) support for system integration. Piloting this solution at Nokia collects experience of the SAP R/3 WF tool for future development projects. The literary part of this study guides the reader into the world of business process automation, providing a general description of the history, use and potential of WF and groupware software. The empirical part begins with the background of the case study, describing the IT environment that initiated the case: the introduction of SAP R/3 at Nokia, the communication technique in use, and the WF tool. The case study is focused on one solution built with SAP Business Workflow. This study provides a concept for monitoring communication between ERP systems and for increasing the quality of system integration. The case study describes a way to create a support model for ALE/EDI interfaces. The support model includes the monitoring organization and the workflow processes for solving the most common IDoc-related errors.
Abstract:
The increasing power demand and emerging applications drive the design of electrical power converters toward modularization. Despite the wide use of modularized power stage structures, the control schemes that are used are often traditional, in other words, centralized. The flexibility and re-usability of these controllers are typically poor. With a dedicated distributed control scheme, the flexibility and re-usability of the system parts, the building blocks, can be increased. Only a few distributed control schemes have been introduced for this purpose, but their breakthrough has not yet taken place. A demand for the further development of flexible control schemes for building-block-based applications clearly exists. The control topology, communication, synchronization, and functionality allocation aspects of building-block-based converters are studied in this doctoral thesis. A distributed control scheme that can be easily adapted to building-block-based power converter designs is developed. The example applications are a parallel and a series connection of building blocks. The building block used in the implementations of both applications is a commercial off-the-shelf two-level three-phase frequency converter with a custom-designed controller card. The major challenge with the parallel connection of power stages is the synchronization of the building blocks. The effect of synchronization accuracy on the system performance is studied. The functionality allocation and control scheme design are challenging in the series-connected multilevel converters, mainly because of the large number of modules. Various multilevel modulation schemes are analyzed with respect to the implementation, and this information is used to develop a flexible control scheme for modular multilevel inverters.
Abstract:
Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This enables distributing software across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis focuses on the D-Bus system, which has recently gained a lot of users but is still scantily researched. D-Bus supports asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. Later, we present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity but maintains the other qualities at a good level. Finally, we present currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation. The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
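The state-caching proxy idea can be illustrated independently of any concrete D-Bus binding: the proxy subscribes to change notifications and answers inspection calls from a local cache instead of issuing blocking remote calls. Everything below (class and method names, the signal mechanism) is a generic, hypothetical sketch, not the D-Bus API or the thesis's implementation.

```python
# Generic sketch of a state-caching proxy: property reads are served
# from a local cache kept fresh by asynchronous change notifications,
# so callers never block on a round trip. All names are hypothetical.
from typing import Any, Callable

class StateCachingProxy:
    def __init__(self,
                 subscribe: Callable[[Callable[[str, Any], None]], None],
                 initial_state: dict[str, Any]):
        self._cache = dict(initial_state)
        # Register for change notifications (e.g. a signal match rule);
        # each notification updates the cache asynchronously.
        subscribe(self._on_property_changed)

    def _on_property_changed(self, name: str, value: Any) -> None:
        self._cache[name] = value

    def get(self, name: str) -> Any:
        # Local, non-blocking read: no remote call, hence none of the
        # hazards of a simulated synchronous invocation.
        return self._cache[name]

# Example wiring with a trivial in-process notifier:
callbacks = []
proxy = StateCachingProxy(callbacks.append, {"Volume": 40})
callbacks[0]("Volume", 55)   # simulated change notification
print(proxy.get("Volume"))   # -> 55, served from the cache
```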
Abstract:
In the present work, the bifurcational behaviour of the solutions of the Rayleigh equation and of the corresponding spatially distributed system is analysed. The conditions for oscillatory and monotonic loss of stability are obtained. In the case of oscillatory loss of stability, the linear spectral problem is analysed. For the nonlinear problem, recurrent formulas for the general term of the asymptotic approximation of the self-oscillations are found, and the stability of the periodic mode is analysed. The Lyapunov-Schmidt method is used for the asymptotic approximation. The correlation between periodic solutions of the ODE and the PDE is investigated, and the influence of diffusion on the frequency of the self-oscillations is analysed. Several numerical experiments are performed to support the theoretical findings.
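For reference, the classical Rayleigh equation and a diffusively coupled distributed counterpart can be written as below. The ODE form is standard; the exact form of the spatial term studied in the paper is an assumption on my part.

```latex
% Rayleigh equation and a diffusive spatially distributed analogue
% (the latter's exact form is assumed, not quoted from the paper).
\begin{align*}
\ddot{x} - \mu\bigl(1 - \dot{x}^{2}\bigr)\dot{x} + x &= 0\\
u_{tt} - \mu\bigl(1 - u_{t}^{2}\bigr)u_{t} + u &= \varepsilon\, u_{xx}
\end{align*}
```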