912 results for Distributed virtualization


Relevance:

20.00%

Publisher:

Abstract:

Sharing information with those who need it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that well-organized schemes for its retrieval and discovery are imperative. This thesis investigates the problems associated with such schemes and proposes a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron.

The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights. This is manifested in the Election Counting and Reporting Software (ECRS) System, a distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports.

Most distributed systems of the nature of ECRS possess a "fragile architecture" that makes them liable to collapse when minor faults occur. This is resolved with the proposed penta-tier architecture, which places five different technologies at different tiers. The results of the experiments conducted, and their analysis, show that such an architecture helps to keep the different software components insulated from internal and external faults. The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary.

A further empirical study examined which of two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML.

The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is a system that starts with the infotron(s) supplied as clue(s) and distils the information required to satisfy the need of the information discoverer from the documents available at its disposal (its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with multiple infotron dictionaries maintained in the system.

To demonstrate the working of IDS, and to provide information discovery without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, providing an information discovery service. IDLIS shows IDS in action and proves that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and scope for further research in the field are covered.
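
A purely hypothetical sketch of the clue-driven lookup the abstract describes is given below; the infotron is modelled as nothing more than a keyed information element, and the dictionary maps infotron keys to documents in the information space, since the thesis's actual infotron structure is not detailed in this abstract.

```python
# Hypothetical illustration only: infotrons reduced to string keys, and an
# infotron dictionary reduced to an index from keys to document identifiers.
from collections import defaultdict

class InfotronDictionary:
    def __init__(self):
        self._index: dict[str, set[str]] = defaultdict(set)

    def register(self, infotron_key: str, document_id: str) -> None:
        self._index[infotron_key].add(document_id)

    def discover(self, clues: list[str]) -> set[str]:
        """Return the documents relevant to every supplied infotron clue."""
        candidate_sets = [self._index[c] for c in clues if c in self._index]
        return set.intersection(*candidate_sets) if candidate_sets else set()

if __name__ == "__main__":
    d = InfotronDictionary()
    d.register("constituency", "report-042")
    d.register("vote-count", "report-042")
    d.register("vote-count", "summary-007")
    print(d.discover(["constituency", "vote-count"]))   # {'report-042'}
```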

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates the problems associated with information retrieval and discovery schemes in networked environments and proposes a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights. This is manifested in the Election Counting and Reporting Software (ECRS) System, a distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports.

Relevance:

20.00%

Publisher:

Abstract:

In Wireless Sensor Networks (WSN), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, which in turn can result in the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, taking the time-varying characteristics of the wireless channel into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes whose weights exceed a minimum quality threshold; nodes attempting to access the wireless medium with a low weight may transmit only when their weight becomes high. This can leave many poor-quality nodes deprived of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme to minimize energy consumption by turning off the radio interface of all unnecessary nodes that are not included in the routing path. Simulation results show that the proposed protocol achieves higher throughput and fairness besides reducing delay.
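
As a rough illustration of the scheduling rule described above, the sketch below blends channel state and link quality into a single weight and admits nodes above a threshold, with a backlog check standing in for the load-prediction mechanism; the weighting, threshold, and backlog figures are assumptions, not the paper's actual formulas.

```python
# Hedged sketch of threshold-based, channel-aware scheduling (illustrative only).
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    channel_state: float   # 0.0 (bad) .. 1.0 (good), e.g. from SNR estimates
    link_quality: float    # 0.0 .. 1.0, e.g. packet reception ratio
    queue_len: int = 0     # backlog used by the load-prediction stand-in

def combined_weight(n: Node, alpha: float = 0.5) -> float:
    """Blend channel state and link quality into one weight (assumed linear mix)."""
    return alpha * n.channel_state + (1.0 - alpha) * n.link_quality

def schedule(nodes: list[Node], threshold: float = 0.6, max_backlog: int = 20) -> list[int]:
    """Allow transmission for nodes above the quality threshold; low-weight
    nodes with an overflowing backlog are also admitted, standing in for the
    load-prediction fairness mechanism."""
    allowed = []
    for n in nodes:
        if combined_weight(n) >= threshold or n.queue_len >= max_backlog:
            allowed.append(n.node_id)
    return allowed

if __name__ == "__main__":
    nodes = [Node(1, 0.9, 0.8), Node(2, 0.3, 0.4, queue_len=25), Node(3, 0.2, 0.3)]
    print(schedule(nodes))   # node 1 by weight, node 2 by backlog, node 3 deferred
```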

Relevance:

20.00%

Publisher:

Abstract:

Diagnosis of Hridroga (cardiac disorders) in Ayurveda requires combining many different types of data, including personal details, patient symptoms, patient histories, general examination results, Ashtavidha pareeksha results, etc. Computer-assisted decision support systems must be able to combine these data types into a seamless system. Intelligent agents, an approach used chiefly in business applications so far, are applied here to medical diagnosis. This paper presents a multi-agent system named "Distributed Ayurvedic Diagnosis and Therapy System for Hridroga using Agents" (DADTSHUA) and describes the architecture of the DADTSHUA model. The system uses mobile agents and an ontology for passing data through the network, so transport delay can be minimized. It will be very helpful for novice physicians in resolving ambiguity in diagnosis and therapy. The system is implemented using the Java Agent DEvelopment framework (JADE), a Java-compliant mobile agent platform from TILab.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we evolve a generic software architecture for a domain-specific distributed embedded system. The system under consideration belongs to the Command, Control and Communication systems domain. Systems in this domain have very long operational lifetimes, and their quality attributes are as important as their functional requirements. The main guiding principle followed in this paper for evolving the software architecture has been functional independence of the modules. The quality attributes considered most important for the system are maintainability and modifiability. Architectural styles best suited to the functionally independent modules are proposed with a focus on these quality attributes. The software architecture for the system is envisioned as a collection of the architectural styles of the functionally independent modules identified.

Relevance:

20.00%

Publisher:

Abstract:

The European Commission's 2006 Green Paper "A European Strategy for Sustainable, Competitive and Secure Energy" underlines that Europe has entered a new energy era. The overriding goals of European energy policy must be sustainability, competitiveness and security of supply, and a coherent and consistent set of policies and measures is required to achieve them. Europe's electricity markets and interconnected networks form the core of our energy system and must evolve to meet the new requirements. For many decades, Europe's electricity networks have secured the vital links between electricity producers and consumers with great success. The basic structure of these networks was developed to serve the needs of large, predominantly coal-based generation technologies located far from the centres of consumption. The energy problems Europe now faces are changing the electricity generation landscape in two respects: the need for clean power-plant technologies, combined with substantially improved efficiency on the consumer side, will enable customers to interact much more with the networks; at the same time, the future pan-European electricity networks must provide all consumers with a highly reliable, affordable supply of energy, exploiting both large centralised power plants and smaller local energy sources throughout Europe. In this context, it should be noted that the information presented in this work is based on current issues that strongly influence the ongoing technical and economic-policy discussions. Over the past years, the author has discussed many of the conclusions and recommendations presented here with representatives of the power-plant industry, network and utility operators, research bodies and regulatory authorities. The following paragraphs summarise the main results.
This work defines the new concept, based on more consumer-oriented networks, and examines the needs as well as the benefits of, and obstacles to, the transition towards a possible new model for Europe: smart electricity grids based on the strong integration of renewable sources and small local generation plants. The new model is presented as a fundamental change that will significantly affect network design and control. It calls for a European electricity network with the following characteristics:
– Flexible: it meets customers' needs by being able to respond to changes and new demands.
– Accessible: it grants connection access to all network users, in particular renewable energy sources and local high-efficiency generation with zero or low carbon-dioxide emissions.
– Reliable: it improves and guarantees the security and quality of supply in line with the demands of the digital age, with the ability to respond to hazards and uncertainties.
– Economic: it delivers best value through innovation and efficient energy management and provides a "level playing field" for competition and regulation.
It incorporates the latest technologies to ensure success while retaining the flexibility to adapt to further developments, and therefore calls for an ambitious programme of research, development and demonstration that charts a course towards an electricity supply network that meets the needs of Europe's future:
– Network technologies that improve power transmission and reduce energy losses will increase the efficiency of supply, while new power electronics will improve the quality of supply. A toolbox of proven technical solutions will be created that can be deployed quickly and economically, so that existing networks can accept power injections from all energy resources.
– Advances in simulation tools will greatly support the transfer of innovative technologies into practical application, to the benefit of customers and suppliers alike. They will ensure the successful matching of new and old designs of network components, in order to guarantee the operation of automation and control arrangements.
– Harmonisation of the regulatory and commercial frameworks in Europe will facilitate cross-border trading of both energy and grid services, and must accommodate a wide variety of operating situations. Common technical standards and protocols must be introduced to ensure open access and enable the deployment of equipment from any manufacturer.
– Developments in communications, metering and trading systems will open up new opportunities at all levels to improve technical and commercial efficiency early on the basis of market signals. They will enable companies to use innovative service arrangements to improve their efficiency and expand their offerings to customers.
Finally, it must be emphasised that a successful transition to a future sustainable energy system requires the involvement of all relevant stakeholders.

Relevance:

20.00%

Publisher:

Abstract:

With this document, we provide a compilation of in-depth discussions of some of the most current security issues in distributed systems. The six contributions were collected and presented at the 1st Kassel Student Workshop on Security in Distributed Systems (KaSWoSDS’08). We are pleased to present a collection of papers that not only shed light on the theoretical aspects of their topics but are also accompanied by elaborate practical examples. In Chapter 1, Stephan Opfer discusses viruses, one of the oldest threats to system security; for years there has been an arms race between virus producers and anti-virus software providers, with no end in sight. In Chapter 2, Stefan Triller demonstrates how malicious code can be injected into a target process using a buffer overflow. Websites usually store their data and user information in databases, and, like buffer overflows, the possibility of SQL injection attacks targeting such databases is left open by unwary programmers; Stephan Scheuermann gives us a deeper insight into the mechanisms behind such attacks in Chapter 3. Cross-site scripting (XSS) is a method of inserting malicious code into websites viewed by other users, for example in order to spy on the data of internet users; Michael Blumenstein explains this issue in Chapter 4. Spoofing subsumes all methods that directly involve taking on a false identity; in Chapter 5, Till Amma shows us different ways in which this can be done and how it can be prevented. Last but not least, cryptographic methods are used to encode confidential data in such a way that, even if it falls into the wrong hands, it cannot be decoded. Over the centuries, many different ciphers have been developed, applied, and finally broken; Ilhan Glogic sketches this history in Chapter 6.

Relevance:

20.00%

Publisher:

Abstract:

In this report, we discuss the application of global optimization and Evolutionary Computation to distributed systems. To this end, we selected and classified many publications, giving an insight into the wide variety of optimization problems that arise in distributed systems. Some interesting approaches from different areas are discussed in greater detail with the use of illustrative examples.

Relevance:

20.00%

Publisher:

Abstract:

Genetic Programming can be used effectively to create emergent behavior for a group of autonomous agents. In the process we call Offline Emergence Engineering, the behavior is first bred in a Genetic Programming environment and then deployed to the agents in the real environment. In this article we briefly describe our approach, introduce an extended behavioral rule syntax, and discuss the impact of the expressiveness of the behavioral description on the success of the generation process, comparing two scenarios: the election problem and the distributed critical section problem. We evaluate the results, formulating criteria for the applicability of our approach.
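
A minimal sketch of what a condition-action behavioral rule of this kind might look like once deployed to an agent is shown below; the rule fields and the election-style example are illustrative assumptions, not the article's actual rule syntax.

```python
# Illustrative condition -> action rule representation for an agent's local control loop.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over the agent's local state
    action: Callable[[dict], None]      # mutates local state (or queues messages)

def step(state: dict, rules: list[Rule]) -> None:
    """One control step: fire every rule whose condition holds."""
    for rule in rules:
        if rule.condition(state):
            rule.action(state)

# Example rule set for a leader-election-like scenario: adopt the largest id seen.
rules = [
    Rule(condition=lambda s: s["incoming_id"] > s["leader_id"],
         action=lambda s: s.update(leader_id=s["incoming_id"])),
]

if __name__ == "__main__":
    state = {"leader_id": 3, "incoming_id": 7}
    step(state, rules)
    print(state["leader_id"])   # 7
```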

Relevance:

20.00%

Publisher:

Abstract:

Distributed systems are among the most vital components of the economy; the most prominent example is probably the internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous interconnection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility.

Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution; they use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior.

In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties; in our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
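
The evolutionary synthesis loop described above can be sketched roughly as follows; the program representation, the simulator stub, and the objective function are placeholders standing in for the thesis's representations and randomized network simulations.

```python
# Highly simplified sketch of evolutionary synthesis: candidate distributed programs
# are scored by how closely randomized simulations of them match a goal behavior.
import random

INSTRUCTIONS = ["send", "recv", "compare", "adopt-max"]

def random_program() -> list[str]:
    """Placeholder candidate: a small list of symbolic instructions."""
    return [random.choice(INSTRUCTIONS) for _ in range(6)]

def simulate(program: list[str], seed: int) -> float:
    """Stand-in for one randomized network simulation; returns how closely the
    emergent behavior matches the goal specification (1.0 = perfect)."""
    rng = random.Random(seed)
    return rng.random() * (program.count("adopt-max") + 1) / 7.0

def fitness(program: list[str], trials: int = 5) -> float:
    return sum(simulate(program, s) for s in range(trials)) / trials

def evolve(pop_size: int = 20, generations: int = 30) -> list[str]:
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)        # selection
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                      # crossover
            if random.random() < 0.2:                      # mutation
                child[random.randrange(len(child))] = random.choice(INSTRUCTIONS)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```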

Relevance:

20.00%

Publisher:

Abstract:

Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues and, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and impreciseness. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. The second case study is concerned with an approach for user activity recognition which serves as baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
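
Uncertain sensor information fused with the Dempster-Shafer Theory of Evidence relies on Dempster's rule of combination; the sketch below shows the standard textbook formulation of that rule on a tiny, made-up activity-recognition frame and is not taken from the dissertation itself.

```python
# Dempster's rule of combination over a small frame of discernment
# (standard formulation; the example masses are invented).
from itertools import product

def combine(m1: dict[frozenset, float], m2: dict[frozenset, float]) -> dict[frozenset, float]:
    """Combine two basic belief assignments (focal set -> mass)."""
    combined: dict[frozenset, float] = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc          # mass that would fall on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

if __name__ == "__main__":
    # Frame {walking, sitting}; two sensors, each with partial ignorance.
    W, S = frozenset({"walking"}), frozenset({"sitting"})
    m_accel = {W: 0.6, W | S: 0.4}       # accelerometer: leans towards walking
    m_gyro  = {S: 0.3, W | S: 0.7}       # gyroscope: mostly ignorant
    print(combine(m_accel, m_gyro))
```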

Relevance:

20.00%

Publisher:

Abstract:

A foundational model of concurrency is developed in this thesis. We examine issues in the design of parallel systems and show why the actor model is suitable for exploiting large-scale parallelism. Concurrency in actors is constrained only by the availability of hardware resources and by the logical dependence inherent in the computation. Unlike dataflow and functional programming, however, actors are dynamically reconfigurable and can model shared resources with changing local state. Concurrency is spawned in actors using asynchronous message-passing, pipelining, and the dynamic creation of actors. This thesis deals with some central issues in distributed computing. Specifically, problems of divergence and deadlock are addressed. For example, actors permit dynamic deadlock detection and removal. The problem of divergence is contained because independent transactions can execute concurrently and potentially infinite processes are nevertheless available for interaction.
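
A toy illustration of the actor style sketched in this abstract, namely asynchronous message passing, local state, and the dynamic creation of actors, might look as follows in a thread-based setting; it conveys the general idea only, not the thesis's formal actor semantics.

```python
# Toy actor-style concurrency: each actor owns a mailbox and local state,
# and interacts with others only by asynchronous sends.
import queue
import threading
import time

class Actor:
    def __init__(self):
        self._mailbox: queue.Queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg) -> None:            # asynchronous: returns immediately
        self._mailbox.put(msg)

    def _run(self) -> None:
        while True:
            self.receive(self._mailbox.get())

    def receive(self, msg) -> None:         # overridden by concrete actors
        raise NotImplementedError

class Counter(Actor):
    """Shared resource with changing local state, accessed only via messages."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def receive(self, msg) -> None:
        if msg == "inc":
            self.count += 1
        elif isinstance(msg, tuple) and msg[0] == "get":
            msg[1].send(("count", self.count))   # reply to another actor

class Printer(Actor):
    def receive(self, msg) -> None:
        print("printer got:", msg)

if __name__ == "__main__":
    counter, printer = Counter(), Printer()      # actors created dynamically
    for _ in range(3):
        counter.send("inc")
    counter.send(("get", printer))
    time.sleep(0.2)                              # let the daemon threads drain mailboxes
```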

Relevance:

20.00%

Publisher:

Abstract:

A distributed method for mobile robot navigation, spatial learning, and path planning is presented. It is implemented on Toto, a sonar-based physical robot, and consists of three competence layers: 1) low-level navigation: a collection of reflex-like rules resulting in emergent boundary-tracing; 2) landmark detection: dynamically extracts landmarks from the robot's motion; 3) map learning: constructs a distributed map of landmarks. The parallel implementation allows for localization in constant time. Spreading of activation computes both topological and physical shortest paths in linear time. The main issues addressed are distributed, procedural, and qualitative representation and computation, emergent behaviors, dynamic landmarks, and minimized communication.
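
Spreading activation over a landmark graph can be pictured with the small sketch below, in which activation radiating from the goal labels each landmark with its hop distance so that shortest topological paths fall out in linear time; the graph and activation rule are illustrative assumptions, not Toto's actual implementation.

```python
# Spreading activation from the goal landmark over an (assumed) landmark adjacency map.
from collections import deque

def spread_activation(neighbors: dict[str, list[str]], goal: str) -> dict[str, int]:
    """Each landmark records its hop distance from the goal as activation spreads
    outward; following decreasing distance from any node yields a shortest
    topological path."""
    hops = {goal: 0}
    frontier = deque([goal])
    while frontier:
        node = frontier.popleft()
        for nxt in neighbors.get(node, []):
            if nxt not in hops:               # activate each landmark only once
                hops[nxt] = hops[node] + 1
                frontier.append(nxt)
    return hops

if __name__ == "__main__":
    landmark_graph = {
        "corridor": ["left-wall", "right-wall"],
        "left-wall": ["corridor", "corner"],
        "right-wall": ["corridor", "corner"],
        "corner": ["left-wall", "right-wall"],
    }
    print(spread_activation(landmark_graph, goal="corner"))
```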

Relevance:

20.00%

Publisher:

Abstract:

Linear graph reduction is a simple computational model in which the cost of naming things is explicitly represented. The key idea is the notion of "linearity": a name is linear if it is used only once, so with linear naming you cannot create more than one outstanding reference to an entity. As a result, linear naming is cheap to support and easy to reason about. Programs can be translated into the linear graph reduction model such that linear names in the program are implemented directly as linear names in the model. Nonlinear names are supported by constructing them out of linear names. The translation thus exposes those places where the program uses names in expensive, nonlinear ways. Two applications demonstrate the utility of using linear graph reduction. First, in the area of distributed computing, linear naming makes it easy to support cheap cross-network references and highly portable data structures. Linear naming also facilitates demand-driven migration of tasks and data around the network without requiring explicit guidance from the programmer. Second, linear graph reduction reveals a new characterization of the phenomenon of state. Systems in which state appears are those which depend on certain global system properties. State is not a localizable phenomenon, which suggests that our usual object-oriented metaphor for state is flawed.
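
The use-once discipline behind linear naming can be illustrated with the small sketch below: a reference that may be consumed exactly once, with a nonlinear (shared) name built on top of linear ones; this only illustrates the notion of linearity, not the thesis's graph-reduction machinery.

```python
# Illustrative "use a name only once" reference; second use is an error.
class LinearRef:
    def __init__(self, value):
        self._value = value
        self._used = False

    def take(self):
        """Consume the reference; using a linear name twice is an error."""
        if self._used:
            raise RuntimeError("linear name used more than once")
        self._used = True
        value, self._value = self._value, None   # drop the stored value
        return value

def duplicate(ref: LinearRef) -> tuple[LinearRef, LinearRef]:
    """A nonlinear (shared) name has to be built out of linear ones, here by
    consuming the original and minting two fresh linear names."""
    v = ref.take()
    return LinearRef(v), LinearRef(v)

if __name__ == "__main__":
    r = LinearRef({"task": "migrate-me"})
    a, b = duplicate(r)
    print(a.take(), b.take())
    # r.take()  # would raise: the original linear name was already consumed
```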

Relevance:

20.00%

Publisher:

Abstract:

Research on autonomous intelligent systems has focused on how robots can robustly carry out missions in uncertain and harsh environments with very little or no human intervention. Robotic execution languages such as RAPs, ESL, and TDL improve robustness by managing functionally redundant procedures for achieving goals. The model-based programming approach extends this by guaranteeing correctness of execution through pre-planning of non-deterministic timed threads of activities. Executing model-based programs effectively on distributed autonomous platforms requires distributing this pre-planning process. This thesis presents a distributed planner for model-based programs whose planning and execution is distributed among agents with widely varying levels of processor power and memory resources. We make two key contributions. First, we reformulate a model-based program, which describes cooperative activities, into a hierarchical dynamic simple temporal network. This enables efficient distributed coordination of the robots and supports deployment on heterogeneous robots. Second, we introduce a distributed temporal planner, called DTP, which solves hierarchical dynamic simple temporal networks with the assistance of the distributed Bellman-Ford shortest-path algorithm. The implementation of DTP has been demonstrated successfully on a wide range of randomly generated examples and on a pursuer-evader challenge problem in simulation.
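
At the heart of simple temporal network reasoning is a Bellman-Ford computation over a distance graph of the constraints, which is consistent exactly when that graph has no negative cycle; the sketch below shows a centralized version of this check on a made-up network, whereas DTP distributes the computation across agents.

```python
# Centralized Bellman-Ford consistency check for a simple temporal network (STN):
# each constraint time(v) - time(u) <= w becomes an edge (u, v, w) in a distance
# graph, and the STN is consistent iff that graph has no negative cycle.
def bellman_ford_consistent(num_events: int, edges: list[tuple[int, int, float]]) -> bool:
    dist = [0.0] * num_events            # all-zero start detects any negative cycle
    for _ in range(num_events - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more relaxation pass: any further improvement implies a negative cycle
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

if __name__ == "__main__":
    # Events 0..2: activity 0->1 takes 5..10 time units, 1->2 takes 2..4,
    # and the whole mission 0->2 must take between 7 and 12.
    stn = [(0, 1, 10), (1, 0, -5), (1, 2, 4), (2, 1, -2), (0, 2, 12), (2, 0, -7)]
    print(bellman_ford_consistent(3, stn))   # True: the constraints are satisfiable
```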