956 results for fault-tolerant quantum computation


Relevance:

100.00%

Abstract:

Parallel I/O is a research area of growing importance in High Performance Computing. For years it has been the bottleneck of parallel computers, and today, owing to the large increase in computing power, the I/O problem has grown worse; the High Performance Computing community therefore considers that the I/O systems of parallel computers must be improved in order to meet the demands of the scientific applications that use HPC. The configuration of parallel Input/Output (I/O) has a strong influence on performance and availability, so it is important to "analyze parallel I/O configurations to identify the key factors that influence the performance and availability of the I/O of scientific applications running on a cluster." To analyze I/O configurations, we propose a methodology that identifies the I/O factors and evaluates their influence on different I/O configurations, organized in three phases: Characterization, Configuration, and Evaluation. The methodology analyzes the parallel computer at the level of the scientific application, the I/O libraries, and the I/O architecture, always from the I/O point of view. The experiments performed on different I/O configurations and the results obtained show the complexity of analyzing the I/O factors and their different degrees of influence on the performance of the I/O system. Finally, future work is outlined: the design of a model to support the process of configuring the parallel I/O system for scientific applications. In addition, to identify and evaluate the I/O factors associated with availability at the data level, we intend to use the fault-tolerant architecture RADIC.
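
The abstract names I/O libraries as one of the levels the methodology analyzes. As a hedged illustration of the kind of configuration factor involved, the sketch below sets MPI-IO hints with mpi4py and times a collective write; the hint names follow ROMIO conventions and the values are arbitrary examples, not settings taken from the thesis:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Example I/O factors: collective buffering and file striping. Whether
# these hints take effect depends on the MPI implementation and the
# underlying parallel file system.
info = MPI.Info.Create()
info.Set("cb_nodes", "4")             # collective-buffering aggregators
info.Set("striping_factor", "8")      # stripe across 8 storage targets
info.Set("striping_unit", "1048576")  # 1 MiB stripe size

fh = MPI.File.Open(comm, "checkpoint.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY, info)

# Each rank writes its own 1 MiB block collectively, so the effect of
# the hints on aggregate bandwidth can be timed and compared.
buf = bytearray(1048576)
t0 = MPI.Wtime()
fh.Write_at_all(comm.Get_rank() * len(buf), buf)
t1 = MPI.Wtime()
fh.Close()

if comm.Get_rank() == 0:
    print(f"write time: {t1 - t0:.3f} s")
```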

Relevance:

100.00%

Abstract:

Fault tolerance is a research line that has become increasingly important with the growth in computing capacity of today's supercomputers: more processing power means more components, which in turn means more failures. Most current fault-tolerance strategies are centralized and do not scale to large numbers of processes, since synchronization among all of them is required to carry out the fault-tolerance tasks. Moreover, maintaining the performance of parallel programs is crucial, both in the presence and in the absence of failures. With this in mind, this work focuses on a decentralized fault-tolerant architecture (RADIC, Redundant Array of Distributed and Independent Controllers) that seeks to maintain the initial performance and to guarantee the lowest possible overhead for reconfiguring the system in case of failure. The architecture has been implemented in the message-passing library Open MPI, currently one of the most widely used in the scientific community for running parallel programs on a message-passing platform. Initial tests show that the system introduces minimal overhead to carry out the fault-tolerance tasks. MPI is by default a fail-stop standard, and in the implementations that add some level of tolerance, the most common strategies are coordinated. In RADIC, when a failure occurs, the process is recovered on another node and rolled back to a previously stored state, using uncoordinated checkpoints and replaying messages from the event log. During recovery, communications with the affected process must be delayed and redirected to the process's new location. Restarting processes on nodes that already host processes overloads the execution and degrades performance; this work therefore proposes the use of spare nodes to recover failed processes, avoiding overload on nodes that already have work. We present a design that manages recovery on spare nodes automatically and in a decentralized manner in an Open MPI environment, together with an analysis of the performance impact of this design. Initial results show significant degradation when several failures occur during execution and no spares are used, whereas using spares restores the initial configuration and maintains performance.
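
As a rough, single-machine illustration of the recovery scheme the abstract describes (uncoordinated checkpoints plus message-log replay), here is a toy sketch; the real RADIC implementation lives inside Open MPI and is fully decentralized, none of which is modeled here:

```python
import copy

class LoggedProcess:
    """Toy model of rollback-recovery: an uncoordinated checkpoint
    plus a receiver-side message log."""

    def __init__(self):
        self.state = 0
        self.checkpoint = None
        self.message_log = []   # messages received since the last checkpoint

    def take_checkpoint(self):
        # Independent of any other process: no coordination needed.
        self.checkpoint = copy.deepcopy(self.state)
        self.message_log.clear()

    def receive(self, msg):
        self.message_log.append(msg)  # log before processing
        self.state += msg

    def recover(self):
        # On failure: restart (e.g. on a spare node) from the last
        # checkpoint, then replay the logged messages in order.
        self.state = copy.deepcopy(self.checkpoint)
        for msg in self.message_log:
            self.state += msg

p = LoggedProcess()
p.receive(5)
p.take_checkpoint()
p.receive(3)
p.receive(4)
before = p.state
p.recover()                 # simulated failure and restart
assert p.state == before    # replay reproduces the pre-failure state
```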

Relevance:

100.00%

Abstract:

The final goal of this project is to build a bug-tracking system ("Sistema Traçador d'Errors"), but perhaps more important is the goal of learning new technologies, which are often at the user's disposal even though the user is unaware of them.

Relevance:

100.00%

Abstract:

High availability is an essential part of modern, integrated enterprise systems. As companies become international, information must be available around the clock, which places ever stricter requirements on the availability of the individual parts of the system. Growing information-system integration, in turn, makes the nodes of the system critical to the business. This thesis examines the properties of distributed systems and the challenges they pose. The technologies presented include middleware, clusters, and load balancing. The Java 2 Enterprise Edition (J2EE) technology used as the foundation of enterprise applications is covered in its essential parts. The BEA WebLogic Server software is used as the application server platform, and its features are reviewed from the point of view of distribution. In the practical part of the work, a high-availability application server environment is implemented for two different existing enterprise applications, taking into account the constraints imposed by the applications.
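
Clustering and load balancing, two of the technologies listed above, combine as in the following vendor-neutral sketch of a round-robin dispatcher with failover; it illustrates the general idea only and does not reproduce any BEA WebLogic Server mechanism:

```python
import itertools

class RoundRobinBalancer:
    """Dispatch requests across replicated servers; a request fails
    only if every replica is down, which is the availability gain
    of clustering."""

    def __init__(self, servers):
        self.servers = servers
        self._cycle = itertools.cycle(range(len(servers)))

    def dispatch(self, request):
        for _ in range(len(self.servers)):
            server = self.servers[next(self._cycle)]
            if server.alive:
                return server.handle(request)
        raise RuntimeError("no replica available")

class Server:
    def __init__(self, name):
        self.name, self.alive = name, True
    def handle(self, request):
        return f"{self.name} handled {request!r}"

cluster = RoundRobinBalancer([Server("node1"), Server("node2")])
print(cluster.dispatch("GET /"))      # served by node1
cluster.servers[1].alive = False      # simulated node failure
print(cluster.dispatch("GET /"))      # node2 is skipped; node1 serves
```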

Relevance:

100.00%

Abstract:

Tightening international competition forces the manufacturers of automation systems to adopt new methods that improve the performance and flexibility of the systems. Agent technology has been proposed for use with existing automation systems to meet the new challenges posed to automation. Agents are autonomous, social actors that carry out tasks assigned to them in advance, and they offer a uniform framework for implementing advanced functions. With agent technology, an automation system can be made to operate flexibly and in a fault-tolerant way. This thesis explains the ideas and concepts of agent technology. In addition, its suitability for developing complex control systems is examined, and applications for it are sought in a plate mill. The work also discusses the ideas that have led to the use of agent technology in automation systems, and describes the structure and test results of an agent-assisted example application. As a result of the study, several targets for the use of agent technology in a plate mill were found. The example application shows that it is well suited to implementing advanced functions in automation systems.

Relevance:

100.00%

Abstract:

Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault-tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the keys to succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality-assurance practices. However, customers now ask for these high-quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work: testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those that have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate that a piece of software is functioning correctly; many other aspects of the software, such as performance, security, scalability, and usability, must also be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges of non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality has been implemented, because non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than its output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or missing tool support. The second contribution of this thesis is therefore proper tool support for the proposed approach, integrated with leading industry tools: we offer independent tools, tools integrated with other industry-leading tools, and complete tool chains where necessary. Finally, many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. To demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
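
The simple form of performance testing described above, running many test sequences at once and observing responsiveness rather than output, can be sketched as follows; the system under test is a stand-in function, and in a real setting the sequences would be generated from the behavioral models:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def system_under_test(step):
    """Stand-in for one step of a generated test sequence."""
    time.sleep(0.01)          # pretend the system takes ~10 ms to respond
    return step

def run_sequence(sequence):
    """Execute one test sequence, recording per-step response times."""
    latencies = []
    for step in sequence:
        t0 = time.perf_counter()
        system_under_test(step)
        latencies.append(time.perf_counter() - t0)
    return latencies

sequences = [list(range(20)) for _ in range(10)]  # 10 concurrent "users"

with ThreadPoolExecutor(max_workers=len(sequences)) as pool:
    results = list(pool.map(run_sequence, sequences))

# Responsiveness and stability, not outputs, are what we observe.
all_latencies = [t for seq in results for t in seq]
print(f"mean {statistics.mean(all_latencies) * 1000:.1f} ms")
print(f"p95  {statistics.quantiles(all_latencies, n=20)[-1] * 1000:.1f} ms")
```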

Relevance:

100.00%

Abstract:

This work was done for Lappeenranta University of Technology, which has designed and built a hybrid bus. The traction motor of the hybrid bus uses a double winding, which makes it possible to drive the bus in a fault situation where one of the windings is out of order. The goal of the work is to determine what kind of double winding works best in the permanent-magnet traction motor of this hybrid bus. The work reviews traction motors and the properties required of them, as well as fault-tolerant electric motors. Based on the fault-tolerant traction motors found in the study, four double-winding alternatives were chosen and studied with finite-element analysis, simulating their operation in the nominal and fault situations. The simulation results showed that the double winding in which every slot held half of one winding and half of the other (double winding 1) did not work properly in either the nominal or the fault situation; the biggest problem was the large short-circuit current arising in the fault situation. The double winding in which two poles belonged to the same winding (double winding 2) worked flawlessly in the nominal situation, but in the fault situation the magnetic flux density became non-periodic, which is harmful to the running of the motor and dangerous for the rotor. The double windings in which a quarter of the machine shared the same winding (double winding 3) and half of the machine shared the same winding (double winding 4) were studied in the fault situation only with respect to the flux density. For the half-and-half winding the flux density turned out to be non-periodic, as expected, whereas the flux density of the quarter-wound machine was regularly periodic. At the nominal point, double windings 3 and 4 exhibited a large torque ripple compared with double windings 1 and 2. In conclusion, double winding 3 looks promising, provided that the large torque ripple at the nominal point can be brought under control by skewing the stator slots.

Relevance:

100.00%

Abstract:

The hyper-star interconnection network was proposed in 2002 to overcome the drawbacks of the hypercube and its variations concerning the network cost, which is defined as the product of the degree and the diameter. Properties of the graph such as connectivity, symmetry, and embedding have been studied by other researchers, and routing and broadcasting algorithms have been designed. This thesis studies the hyper-star graph from both the topological and the algorithmic point of view. For the topological properties, we try to establish relationships between hyper-star graphs and other known graphs, and we give a formal equation for the surface area of the graph; another topological property we are interested in is the Hamiltonicity of this graph. For the algorithms, we design an all-port broadcasting algorithm and a single-port neighbourhood broadcasting algorithm for the regular form of the hyper-star graphs; both algorithms are time-optimal. Furthermore, we prove that the folded hyper-star, a variation of the hyper-star, is maximally fault-tolerant.
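
For reference, the network cost mentioned above is degree × diameter. A minimal check of the hypercube side of the comparison (the hyper-star figures are omitted here because they depend on the parameters of the specific graph studied in the thesis):

```python
def hypercube_cost(n):
    """Network cost of the n-dimensional hypercube Q_n:
    degree * diameter = n * n."""
    degree, diameter = n, n
    return degree * diameter

# Cost grows quadratically in n even though node count grows as 2**n,
# which is the drawback the hyper-star was designed to reduce.
for n in range(4, 11, 2):
    print(f"Q_{n}: {2 ** n:5d} nodes, cost {hypercube_cost(n)}")
```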

Relevance:

100.00%

Abstract:

Sharing information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigation focuses on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights; this is manifested in the Election Counting and Reporting Software (ECRS) system. ECRS is an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS have a "fragile architecture" that makes them liable to collapse when minor faults occur. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at the different tiers of the architecture. The results of the experiments conducted, and their analysis, show that such an architecture helps to keep the different components of the software intact and impermeable to internal and external faults. The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons; further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. A second empirical study was carried out to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and distills the information required to satisfy the need of the information discoverer from the documents available to it (its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly; many subsystems interact with the multiple infotron dictionaries maintained in the system. To demonstrate the working of IDS, and to enable information discovery without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS shows IDS in action and proves that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.

Relevance:

100.00%

Abstract:

The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio-frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with large sensing, computing, and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed system for data computation, communication, automated information gathering, and distributed sensing, and its main attraction is that such a network can be deployed in remote areas. Since the sensor nodes are battery powered, all of them should collaborate to form a fault-tolerant network that makes efficient use of precious network resources such as the wireless channel, memory, and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.

Relevance:

100.00%

Abstract:

This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry-select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partial parallel processing of all digits, which yields high-speed addition in the decimal domain. When the number of digits exceeds 25, the hybrid decimal adder can operate 5 times faster than a conventional decimal adder using classical logic gates, and the speed-up factor of the hybrid adder rises above 10 for the reversible logic implementation. Such high-speed decimal adders find applications in real-time processors and internet-based applications. The implementations use only reversible, conservative Fredkin gates, which makes them suitable for VLSI circuits.
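
The Fredkin gate used in these implementations is a controlled swap, which is both reversible (its own inverse) and conservative (it preserves the number of 1s). A minimal sketch of the gate and a check of both properties, not of the adder designs themselves:

```python
from itertools import product

def fredkin(c, a, b):
    """Controlled swap: outputs (c, a, b) when c == 0, (c, b, a) when c == 1."""
    return (c, b, a) if c else (c, a, b)

for c, a, b in product((0, 1), repeat=3):
    out = fredkin(*fredkin(c, a, b))
    assert out == (c, a, b)                     # reversible: self-inverse
    assert sum(fredkin(c, a, b)) == c + a + b   # conservative: 1-count preserved
```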

Relevance:

100.00%

Abstract:

The Transit network provides a high-speed, low-latency, fault-tolerant interconnect for high-performance multiprocessor computers. The basic connection scheme for Transit uses bidelta-style multistage networks to support up to 256 processors. Scaling to larger machines by simply extending the bidelta network topology would result in a uniform degradation of network latency between all processors. By employing a fat-tree network structure in larger systems, the network provides locality and universality properties that can help minimize the impact of scaling on network latency. This report details the topology and construction issues associated with integrating Transit routing technology into fat-tree interconnect topologies.
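
The locality property claimed for the fat-tree can be illustrated with a toy hop count over a complete binary tree of processors: nearby leaves meet low in the tree, so their traffic never reaches the upper levels. This is purely illustrative and not the Transit fat-tree construction detailed in the report:

```python
def fat_tree_hops(src, dst):
    """Hops between two leaves of a complete binary (fat-)tree:
    up to the lowest common ancestor and back down."""
    if src == dst:
        return 0
    level = (src ^ dst).bit_length()  # height of the lowest common ancestor
    return 2 * level

# Neighbouring leaves meet low in the tree; distant ones climb higher.
print(fat_tree_hops(0, 1))    # 2  (common ancestor at level 1)
print(fat_tree_hops(0, 255))  # 16 (common ancestor at level 8)
```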

Relevance:

100.00%

Abstract:

This report addresses the problem of achieving cooperation within small- to medium-sized teams of heterogeneous mobile robots. I describe a software architecture I have developed, called ALLIANCE, that facilitates robust, fault-tolerant, reliable, and adaptive cooperative control. In addition, an extended version of ALLIANCE, called L-ALLIANCE, is described, which incorporates a dynamic parameter-update mechanism that allows teams of mobile robots to improve the efficiency of their mission performance through learning. A number of experimental results from implementing these architectures on both physical and simulated mobile robot teams are described. In addition, this report presents the results of studies of a number of issues in mobile robot cooperation, including fault-tolerant cooperative control, adaptive action selection, distributed control, robot awareness of team member actions, improving efficiency through learning, inter-robot communication, action recognition, and local versus global control.
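
ALLIANCE achieves adaptive action selection through motivational behaviors. The sketch below is a heavily simplified rendering of that idea, with invented rates and threshold; it is not the published ALLIANCE update rule:

```python
class MotivationalBehavior:
    """Toy motivation accumulator in the spirit of ALLIANCE: motivation
    for a task grows at an impatience rate and resets while a teammate
    is seen performing the task (rates and threshold are illustrative)."""

    def __init__(self, impatience_rate=1.0, threshold=10.0):
        self.impatience_rate = impatience_rate
        self.threshold = threshold
        self.motivation = 0.0

    def update(self, teammate_active):
        if teammate_active:
            self.motivation = 0.0          # defer to the working teammate
        else:
            self.motivation += self.impatience_rate
        return self.motivation >= self.threshold  # True: take over the task

b = MotivationalBehavior()
for step in range(15):
    # A teammate works on the task for 5 steps, then fails silently;
    # growing impatience eventually triggers a takeover.
    if b.update(teammate_active=step < 5):
        print(f"taking over the task at step {step}")
        break
```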

Relevance:

100.00%

Abstract:

The overall objective of this work is to find and present a tool for obtaining a representation of the signals coming from dynamic systems that is suited to the needs of Expert Supervision systems for processes. This overall objective can be subdivided into several parts, which are treated in the different chapters of the work and can be summarized as follows. First, the needs of Supervision systems must be understood: the large amount of data coming from the processes makes it necessary to process these data to obtain other, more elaborate data with a higher level of representation. The use of qualitative reasoning, characteristic of human beings, entails the need to represent the signals symbolically, translating numerical data into symbols. In the Supervision of dynamic systems, time is a fundamental variable, and since the events significant for Supervision are asynchronous, the most suitable and useful representations of the signals are asynchronous as well. Finally, the use of experimental knowledge in process Supervision means that the most natural representations are the most useful. These needs make the representation of signals by means of episodes the most promising tool for achieving the stated goals. A formalism is therefore presented that can describe and subsume the existing formalizations and approaches to this kind of representation and, at the same time, increase its expressiveness through signal characteristics that the existing approaches do not take into account. The next step is to exploit the new formalism to obtain a new representation with a greater degree of significance, which is achieved by explicitly representing discontinuities and stationary or stability periods, both highly significant in process Supervision. A problem always present in signal processing is the noise affecting the signals; a method is therefore presented that filters the noise in such a way that the resulting representations are affected as little as possible. Finally, the online application of the described tools is presented. Representing signals online entails handling the uncertainty inherent in partial knowledge of the signal (an episode cannot be fully determined and characterized until it has ended). Obtaining results with given degrees of certainty is perfectly consistent with their later use by Expert Systems or other AI tools. All the contributions of the work are accompanied by examples and/or applications that show their usefulness and their limitations.
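
As a concrete illustration of the episode idea, though not of the formalism proposed in the thesis, the sketch below segments a sampled signal into qualitative episodes (increasing, steady, decreasing) using a noise threshold on the first difference:

```python
def episodes(signal, eps=0.05):
    """Split a sampled signal into (label, start, end) episodes based on
    the sign of the first difference; |delta| <= eps counts as steady."""
    def label(delta):
        if delta > eps:
            return "increasing"
        if delta < -eps:
            return "decreasing"
        return "steady"

    result, start = [], 0
    current = label(signal[1] - signal[0])
    for i in range(1, len(signal) - 1):
        nxt = label(signal[i + 1] - signal[i])
        if nxt != current:
            result.append((current, start, i))
            current, start = nxt, i
    result.append((current, start, len(signal) - 1))
    return result

# A ramp up, a plateau, then a drop; a discontinuity would likewise
# close the current episode.
sig = [0.0, 0.5, 1.0, 1.5, 1.5, 1.5, 1.0, 0.5]
print(episodes(sig))
# [('increasing', 0, 3), ('steady', 3, 5), ('decreasing', 5, 7)]
```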

Relevance:

100.00%

Abstract:

The paper presents how workflow-oriented, single-user Grid portals can be extended to meet the requirements of users with collaborative needs. Through collaborative Grid portals, different research and engineering teams are able to share knowledge and resources. At the same time, the workflow concept ensures that the shared knowledge and computational capacity is aggregated to achieve the high-level goals of the group. The paper discusses the different issues collaborative support raises for Grid portal environments during the different phases of workflow-oriented development work. While in the design period the most important task of the portal is to provide consistent and fault-tolerant data management, during workflow execution it must act upon the security framework its back-end Grids are built on.