825 results for FAULT TOLERANCE
Abstract:
The use of digital communication systems is increasing very rapidly. This is due to the lower implementation cost compared to analogue transmission and, at the same time, the ease with which several types of data source (data, digitised speech, video, etc.) can be mixed. The emergence of packet broadcast techniques as an efficient form of multiplexing, especially with the use of contention random multiple access protocols, has led to the widespread application of these distributed access protocols in local area networks (LANs) and to their further extension to radio and mobile radio communication applications. In this research, a modified version of the distributed access contention protocol, based on the packet broadcast switching technique, is proposed. Carrier sense multiple access with collision avoidance (CSMA/CA) is found to be the most appropriate protocol, able to satisfy equally the operational requirements of local area networks and of radio and mobile radio applications. The suggested version of the protocol is designed so that all desirable features of its predecessors are maintained, while their shortcomings are eliminated and additional features are added to strengthen its ability to work with radio and mobile radio channels. The operational performance of the protocol has been evaluated for the non-persistent and slotted non-persistent variants through mathematical and simulation modelling. The results obtained from the two modelling procedures validate each other's accuracy, and the protocol compares favourably with its predecessor, CSMA/CD (carrier sense multiple access with collision detection). A further extension of the protocol to multichannel systems has also been suggested, and two multichannel systems based on the CSMA/CA protocol for medium access are proposed: the dynamic multichannel system, which is based on two types of channel selection, random choice (RC) and idle choice (IC), and the sequential multichannel system. The latter is proposed in order to suppress the hidden-terminal effect, which is a major problem in the use of contention random multiple access protocols over radio and mobile radio channels. Their operational performance has been evaluated using mathematical modelling for the dynamic system and simulation modelling for the sequential system. Both systems are found to improve system operation and fault tolerance when compared to single-channel operation.
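The abstract does not reproduce the thesis's mathematical model. As a point of reference only, the classical Kleinrock-Tobagi throughput expressions for non-persistent and slotted non-persistent CSMA (the protocol family CSMA/CA extends) can be evaluated as in the sketch below; the CSMA/CA model developed in the thesis will differ, and the numerical values are illustrative.

```python
import math

def s_nonpersistent(G, a):
    """Classical throughput of unslotted non-persistent CSMA
    (Kleinrock-Tobagi): G = offered load, a = normalised propagation delay."""
    return G * math.exp(-a * G) / (G * (1 + 2 * a) + math.exp(-a * G))

def s_slotted_nonpersistent(G, a):
    """Classical throughput of slotted non-persistent CSMA."""
    return a * G * math.exp(-a * G) / (1 - math.exp(-a * G) + a)

if __name__ == "__main__":
    a = 0.01  # propagation delay small relative to packet transmission time
    for G in (0.1, 0.5, 1.0, 5.0, 10.0):
        print(f"G={G:5.1f}  unslotted={s_nonpersistent(G, a):.3f}  "
              f"slotted={s_slotted_nonpersistent(G, a):.3f}")
```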
Abstract:
Using current software engineering technology, the robustness required for safety-critical software cannot be assured. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). Verification of correctness in the system design specification and performance analysis of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysis of the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communication; this may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software and to identify structures which may be prone to deadlock, so that they may be eliminated from the design before the program is ever run. The design tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second part is a Petri-net simulator that takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs; two examples are given to show how the tool works in the early design phase for fault prevention before the program is ever run.
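The thesis's Occam-to-Petri-net translator is not described in the abstract. The sketch below illustrates only the second half of the pipeline in miniature: a breadth-first reachability search over a toy Petri net (hypothetical places p1-p3 and transitions t1-t2), reporting markings in which no transition is enabled as potential deadlocks.

```python
from collections import deque

def freeze(marking):
    """Hashable form of a marking (token count per place)."""
    return tuple(sorted(marking.items()))

def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] = m.get(p, 0) - n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

def reachability(initial, transitions):
    """Breadth-first construction of the reachability set; markings in
    which no transition is enabled are reported as potential deadlocks."""
    seen = {freeze(initial)}
    frontier = deque([initial])
    deadlocks = []
    while frontier:
        m = frontier.popleft()
        fired = False
        for consume, produce in transitions.values():
            if enabled(m, consume):
                fired = True
                nxt = fire(m, consume, produce)
                if freeze(nxt) not in seen:
                    seen.add(freeze(nxt))
                    frontier.append(nxt)
        if not fired:
            deadlocks.append(m)
    return seen, deadlocks

if __name__ == "__main__":
    # Hypothetical two-transition net that simply runs out of enabled
    # transitions, so its final marking is reported as a deadlock.
    transitions = {
        "t1": ({"p1": 1}, {"p2": 1}),
        "t2": ({"p2": 1}, {"p3": 1}),
    }
    states, dead = reachability({"p1": 1, "p2": 0, "p3": 0}, transitions)
    print(len(states), "reachable markings; potential deadlocks:", dead)
```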
Abstract:
Wireless sensor networks have been identified as one of the key technologies for the 21st century. They consist of tiny devices with limited processing and power capabilities, called motes, that can be deployed in large numbers to provide useful sensing capabilities. Although they are flexible and easy to deploy, a number of considerations regarding their fault tolerance, energy conservation and re-programmability need to be addressed before we can draw any substantial conclusions about the effectiveness of this technology. In order to overcome their limitations, we propose a middleware solution. The proposed scheme is based on two main methods. The first is the creation of a flexible communication protocol based on technologies such as mobile code/agents and Linda-like tuple spaces; in this way, every node of the wireless sensor network produces and processes data based on what is best both for itself and for the group to which it belongs. The second incorporates this protocol in a middleware that aims to bridge the gap between the application layer and low-level constructs such as the physical layer of the wireless sensor network. A fault-tolerant platform for deploying and monitoring applications in real time offers a number of possibilities to the end user, giving them the freedom to experiment with various parameters so that the deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through a number of trials that test its merits under real-time conditions and identify its effectiveness against other similar approaches. Finally, the parameters which determine the characteristics of the proposed scheme are also examined.
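The middleware's coordination layer is described only at a high level. As an illustration of the Linda-style tuple-space primitives it builds on, here is a minimal single-process sketch; the class name `TupleSpace` and the mote reading in the example are illustrative, and the real system distributes tuples across motes via mobile agents rather than sharing one in-memory list.

```python
import threading

class TupleSpace:
    """Minimal Linda-like tuple space: out() publishes a tuple, rd() reads a
    matching tuple, in_() reads and removes one.  None is a wildcard field."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    @staticmethod
    def _matches(template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def _take(self, template, remove, timeout):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._matches(template, tup):
                        if remove:
                            self._tuples.remove(tup)
                        return tup
                if not self._cond.wait(timeout):
                    return None          # timed out with no matching tuple

    def rd(self, template, timeout=None):
        return self._take(template, remove=False, timeout=timeout)

    def in_(self, template, timeout=None):
        return self._take(template, remove=True, timeout=timeout)


ts = TupleSpace()
ts.out(("temperature", "mote-3", 21.5))
print(ts.rd(("temperature", "mote-3", None)))   # ('temperature', 'mote-3', 21.5)
```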
Abstract:
Wireless sensor networks have been identified as one of the key technologies for the 21st century. In order to overcome their limitations, such as fault tolerance and conservation of energy, we propose a middleware solution, In-Motes. In-Motes is a fault-tolerant platform for deploying and monitoring applications in real time that offers a number of possibilities to the end user, giving them the freedom to experiment with various parameters so that the deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through In-Motes EYE, an agent-based real-time In-Motes application developed for sensing acceleration variations in an environment, with the aim of testing its merits under real-time conditions. The application was tested for a period of four months in a prototype area resembling a road.
Abstract:
The software architecture and development considerations for an open metadata extraction and processing framework are outlined. Special attention is paid to the aspects of reliability and fault tolerance. Grid infrastructure is shown to be a useful backend for general-purpose tasks.
Abstract:
High-power, high-voltage-gain dc-dc converters are key to high-voltage direct current (HVDC) power transmission for offshore wind power. This paper presents an isolated ultra-high step-up dc-dc converter in a matrix transformer configuration. A flyback-forward converter is adopted as the power cell, and a secondary-side matrix connection is introduced to increase the power level and to improve fault tolerance. Because of the modular structure of the converter, the stress on the switching devices is decreased, and so is the transformer size. The proposed topology can be operated in column-interleaved, row-interleaved, and hybrid working modes in order to deal with the varying energy from the wind farm. Furthermore, fault-tolerant operation is also realized in several fault scenarios. A 400-W dc-dc converter with four cells is developed and experimentally tested to validate the proposed technique, which can be applied to high-power, high-voltage dc power transmission.
Abstract:
In this paper, a new open-winding control strategy is proposed for a brushless doubly fed reluctance generator (BDFRG) used in stand-alone wind turbine or ship generator applications. The BDFRG is characterized by two windings on the stator: a power winding and a control winding. The control winding is fed by dual two-level three-phase converters, and a vector control scheme based on space vector pulsewidth modulation is designed. Compared with traditional three-level inverter systems, the dc-link voltage and the voltage rating of the power devices in the proposed system are reduced by 50%, while the reliability, redundancy, and fault tolerance of the system are greatly improved by the increased number of switching modes. Its performance is evaluated by simulation in MATLAB/Simulink and by an experimental study on a 42-kW prototype machine.
Abstract:
Electric vehicles (EVs) and hybrid EVs are the way forward for green transportation and for establishing a low-carbon economy. This paper presents a split converter-fed four-phase switched reluctance motor (SRM) drive that realizes flexible integrated charging functions (dc and ac sources). The machine features a central-tapped winding node, eight stator slots, and six rotor poles (8/6). In the driving mode, the developed topology has the same characteristics as the traditional asymmetric bridge topology but better fault tolerance. The proposed system supports battery energy balance and on-board dc and ac charging. When connected to an ac power grid, the proposed topology has the merit of a multilevel converter, and charging current control is achieved by an improved hysteresis controller. The energy flow between the two batteries is balanced by hysteresis control based on their state-of-charge conditions. Simulation results in MATLAB/Simulink and experiments on a 150-W prototype SRM validate the effectiveness of the proposed technologies, which may provide a solution to EV charging issues associated with significant infrastructure requirements.
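The improved hysteresis controller itself is not detailed in the abstract. For orientation, the sketch below shows plain (unimproved) hysteresis current control driving a toy first-order load: the switch turns on when the current falls below the lower band and off above the upper band. All numerical values are illustrative and do not model the SRM of the paper.

```python
def hysteresis_step(i_ref, i_meas, band, switch_on):
    """Basic hysteresis (bang-bang) current control: switch on below the
    lower band, off above the upper band, otherwise keep the last state."""
    if i_meas < i_ref - band:
        return True
    if i_meas > i_ref + band:
        return False
    return switch_on

# Toy first-order load  di/dt = (v - R*i) / L  sampled at a 100 kHz control rate.
R, L, V_DC, DT = 1.0, 5e-3, 48.0, 1e-5      # ohms, henries, volts, seconds
i, on = 0.0, False
for _ in range(20000):                       # 0.2 s of simulated time
    on = hysteresis_step(i_ref=5.0, i_meas=i, band=0.2, switch_on=on)
    v = V_DC if on else 0.0
    i += DT * (v - R * i) / L
print(f"current chatters around the 5 A reference: {i:.2f} A")
```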
Abstract:
This dissertation describes research carried out on the development of an MPS (Multipurpose Portable System), which consists of an instrument and many accessories. The instrument is portable, hand-held, and rechargeable-battery operated, and it measures the temperature, absorbance, and concentration of samples using optical principles. The system also performs auxiliary functions such as incubation and mixing, and it can be used in environmental, industrial, and medical applications. The research emphasis is on system modularity, easy configuration, accuracy of measurements, power management schemes, reliability, low cost, computer interface, and networking. The instrument can send data to a computer for analysis and presentation, or to a printer. This dissertation presents a full working system, which involved the integration of hardware, firmware for the micro-controller in assembly language, software in C, and other application modules. The instrument contains the Optics, Transimpedance Amplifiers, Voltage-to-Frequency Converters, LCD display, Lamp Driver, Battery Charger, Battery Manager, Timer, Interface Port, and Micro-controller. The accessories are a Printer, a Data Acquisition Adapter (to transfer the measurements to a computer via the Printer Port and expand the Analog/Digital conversion capability), a Car Plug Adapter, and an AC Transformer. The system has been fully evaluated for fault tolerance, and the fault-tolerance schemes are also presented.
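The abstract states that absorbance and concentration are measured optically but gives no equations. A minimal sketch of the standard Beer-Lambert relations such a measurement typically relies on is shown below; the detector counts, molar absorptivity and path length are illustrative, and the instrument's actual calibration is not described in the source.

```python
import math

def absorbance(i_sample, i_reference):
    """Absorbance from transmitted intensities: A = -log10(I / I0)."""
    return -math.log10(i_sample / i_reference)

def concentration(a, molar_absorptivity, path_length_cm):
    """Beer-Lambert law A = eps * c * l, solved for the concentration c."""
    return a / (molar_absorptivity * path_length_cm)

# Illustrative detector counts and cuvette parameters only.
A = absorbance(i_sample=412.0, i_reference=980.0)
c = concentration(A, molar_absorptivity=1.2e4, path_length_cm=1.0)
print(f"A = {A:.3f}, c = {c:.2e} mol/L")
```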
Abstract:
Next-generation networks are characterized by ever-increasing complexity, intelligence, heterogeneous technologies and rising user expectations. Telecommunication networks in particular have become truly global, consisting of a variety of national and regional networks, both wired and wireless. Consequently, the management of telecommunication networks is becoming increasingly complex. In addition, network security and reliability requirements impose additional overheads which increase the size of the data records, which in turn causes acute network traffic congestion. No single network management methodology can control the various requirements of today's networks while providing a good level of Quality of Service (QoS) and network security. Therefore, an integrated approach is needed in which a combination of methodologies can provide solutions and answers to network events (which cause severe congestion and compromise quality of service and security). The proposed solution takes a systematic approach to designing a network management system based upon recent advances in mobile agent technologies. It provides a new traffic management system for telecommunication networks that is capable of (1) reducing the network traffic load (and thus traffic congestion), (2) overcoming existing network latency, (3) adapting dynamically to the traffic load of the system, (4) operating in heterogeneous environments with improved security, and (5) exhibiting robust and fault-tolerant behavior. This solution addresses several key challenges in the development of network management for telecommunication networks using mobile agents. We have designed several types of agents whose interactions allow complex management actions to be performed and integrated. Our solution is decentralized to eliminate excessive bandwidth usage and at the same time extends the capabilities of the Simple Network Management Protocol (SNMP), while remaining fully compatible with existing standards.
Abstract:
In recent years, the internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability, yet for any end-to-end internet connection it is very important to maintain throughput and reliability at a certain level, because they directly affect the connection's normal operation. A challenging research task, therefore, is to improve a network connection's performance by optimizing its throughput and reliability. This dissertation proposes an efficient and reliable transport layer protocol, concurrent TCP (cTCP), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end connection fault tolerance. The proposed cTCP protocol can aggregate the bandwidth of multiple paths by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (round-trip time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily includes an RTT-based load distribution and path management scheme, used to optimize connections' throughput and reliability, together with an RTT-based congestion control and retransmission policy. According to the experimental results, this RTT-based CDT mechanism achieves good CDT performance under different network conditions. Finally, a CWND-based CDT mechanism, which uses a path's CWND (congestion window) to optimize CDT performance, was introduced. This mechanism primarily includes: a CWND-based load allocation scheme, which assigns data to paths based on their CWND to achieve aggregate bandwidth; CWND-based path management, used to optimize connections' fault tolerance; and a congestion control and retransmission management policy, which handles each path separately, similarly to regular TCP. According to the corresponding experimental results, this mechanism achieves near-optimal CDT performance under different network conditions.
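The abstract does not give the exact allocation rules. One plausible reading of RTT-based and CWND-based load distribution, splitting data across paths in inverse proportion to RTT or in proportion to the estimated rate cwnd/RTT, is sketched below; the dissertation's actual schemes may differ.

```python
def rtt_based_shares(rtts):
    """Split traffic across paths in inverse proportion to RTT:
    faster paths receive a larger share of the data."""
    inv = [1.0 / r for r in rtts]
    total = sum(inv)
    return [w / total for w in inv]

def cwnd_based_shares(cwnds, rtts):
    """Split traffic in proportion to each path's estimated rate cwnd / RTT."""
    rates = [c / r for c, r in zip(cwnds, rtts)]
    total = sum(rates)
    return [r / total for r in rates]

# Two paths with RTTs of 20 ms and 80 ms.
print(rtt_based_shares([0.020, 0.080]))             # [0.8, 0.2]
print(cwnd_based_shares([30, 10], [0.020, 0.080]))  # ~[0.92, 0.08]
```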
Abstract:
The continuous evolution of integrated circuit technology has allowed thousands of transistors to be integrated on a single chip. This is due to the miniaturization process, which reduces the dimensions of wires and transistors. One drawback of this process is that the circuit becomes more fragile and prone to breaks, making it more susceptible to permanent faults both during the manufacturing process and during its lifetime. Coarse-Grained Reconfigurable Architectures (CGRAs) have been used as an alternative to traditional architectures in an attempt to tolerate such faults, owing to their intrinsic hardware redundancy and high performance. This work proposes a fault-tolerance mechanism for a CGRA that increases the architecture's fault tolerance even under high fault rates. The proposed mechanism was added to the scheduler, the component responsible for mapping instructions onto the architecture. Instruction mapping occurs at runtime, translating binary code without the need for recompilation. Furthermore, to allow faster implementation, instruction mapping is performed using a greedy modulo scheduling algorithm, a software pipelining technique for loop acceleration. The results show that, even with the proposed mechanism, the time for mapping instructions remains on the order of microseconds, which allows the instruction mapping process to stay at runtime. In addition, a study of the scheduler's mapping behaviour under increasing fault rates was carried out. The results demonstrate that, even at fault rates over 50% in the functional units and interconnection components, the scheduler was able to map instructions onto the architecture in most of the tested applications.
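As a heavily simplified illustration of excluding faulty resources during placement, the sketch below greedily assigns each instruction to the least-loaded non-faulty functional unit. The thesis's greedy modulo scheduler additionally handles loop-carried dependences, the initiation interval and the interconnect, none of which are modelled here; all names are hypothetical.

```python
def map_instructions(instructions, units, faulty):
    """Greedily place each instruction on the least-loaded functional unit
    that is not marked faulty; returns None if no healthy unit remains."""
    healthy = [u for u in units if u not in faulty]
    if not healthy:
        return None
    load = {u: 0 for u in healthy}
    mapping = {}
    for idx, instr in enumerate(instructions):
        unit = min(healthy, key=lambda u: load[u])
        mapping[(idx, instr)] = unit
        load[unit] += 1
    return mapping

units = [f"fu{i}" for i in range(8)]
faulty = {"fu2", "fu5", "fu6", "fu7"}        # more than half the units faulty
print(map_instructions(["add", "mul", "ld", "st", "add"], units, faulty))
```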
Abstract:
High dependability, availability and fault tolerance are open problems in Service-Oriented Architecture (SOA). The possibility of building software applications by integrating services from heterogeneous domains in a reliable way makes it worthwhile to face the challenges inherent to this paradigm. In order to ensure quality in service compositions, some research efforts propose the adoption of verification techniques to identify and correct errors. In this context, exception handling is a powerful mechanism for increasing SOA quality. Several research works are concerned with mechanisms for exception propagation in web services, implemented in many languages and frameworks; however, to the best of our knowledge, no existing work evaluates these mechanisms in SOA with regard to the .NET framework. The main contribution of this paper is to evaluate and propose exception propagation mechanisms in SOA for applications developed within the .NET framework. To this end, this work: (i) extends a previous study, showing the need for a solution to exception propagation in SOA for applications developed in .NET, and (ii) presents a solution, based on a model obtained from the results found in (i), to be applied to real cases through fault injection and AOP techniques.
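The paper targets .NET and AOP, neither of which is detailed in the abstract. Purely to illustrate the propagation pattern, here is a language-agnostic sketch in Python in which a decorator plays the role of an AOP 'around' advice, converting internal exceptions into a structured fault that composed services can catch and re-propagate; all names (`ServiceFault`, `PaymentService`, `charge`) are hypothetical.

```python
import functools
import traceback

class ServiceFault(Exception):
    """Structured fault carried across a service boundary."""
    def __init__(self, service, operation, message, detail=None):
        super().__init__(f"{service}.{operation}: {message}")
        self.service, self.operation, self.detail = service, operation, detail

def propagate_faults(service_name):
    """Decorator standing in for an AOP 'around' advice: converts any
    exception raised inside a service operation into a ServiceFault."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except ServiceFault:
                raise                     # already structured, pass it upward
            except Exception as exc:
                raise ServiceFault(service_name, fn.__name__, str(exc),
                                   detail=traceback.format_exc()) from exc
        return inner
    return wrap

@propagate_faults("PaymentService")
def charge(card, amount):
    raise ValueError("card expired")      # simulated internal failure

try:
    charge("1234", 10.0)
except ServiceFault as fault:
    print(fault)                          # PaymentService.charge: card expired
```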
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, which makes many researchers doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on confining the output error induced by reliability issues. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method to the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
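The paper's estimator is driven by application statistics and implemented with custom hardware; the abstract gives no details. The sketch below only illustrates the confinement idea in software: a value falling outside the statistically expected range is treated as faulty and replaced with the best available estimate (here simply the previous accepted sample). The sample values and range are illustrative.

```python
def confine(value, history, low, high):
    """Replace an out-of-range (presumed faulty) memory read with the best
    available estimate instead of correcting the underlying bits.  Here the
    estimate is simply the previous accepted sample."""
    if low <= value <= high:
        history.append(value)
        return value
    return history[-1] if history else (low + high) / 2.0

# e.g. 8-bit luma samples from a video decoder, expected in [0, 255]
history = []
raw = [17, 22, 19, 70000, 25]        # 70000: value corrupted by a memory fault
clean = [confine(v, history, 0, 255) for v in raw]
print(clean)                         # [17, 22, 19, 19, 25]
```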
Abstract:
Energy-efficient policies are being applied to network protocols, devices and classical network management systems. Researchers have already studied each of these fields in depth, including, for instance, long monitoring processes over a number of individual ICT devices from which power models are constructed. With the development of smart meters and of management protocols such as SNMP and NETCONF, there is now an open field for coupling the power models, translated into expected behavior, with real-time energy measurements. The goal is to compare the power data from both processes in order to detect possible deviations from the expected results. The underlying assumption is that a fault in the usage of a particular device will not only increase its own energy usage but may also cause additional consumption in the other devices that are part of the network. A platform is developed to monitor and analyze the retrieved power data of a simulated enterprise ICT infrastructure. Moreover, smart algorithms are developed which are aware of the different states each device passes through during its typical use phase and which detect and isolate possible anomalies. The produced results are obtained and validated using Cisco switches and routers, Dell Precision workstations and a Raritan PDU as part of the monitored infrastructure.
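The deviation-detection algorithms are not specified in the abstract. A minimal sketch of the underlying idea, comparing a measurement against a per-device, per-state power model and flagging readings that deviate beyond a relative tolerance, is shown below; the device types, states, wattages and tolerance are all illustrative stand-ins for the models the platform derives from monitoring.

```python
# Expected power draw (watts) per device state; illustrative numbers only.
EXPECTED_W = {
    ("switch", "idle"): 55.0,
    ("switch", "forwarding"): 80.0,
    ("workstation", "idle"): 45.0,
    ("workstation", "load"): 160.0,
}

def check_reading(device_type, state, measured_w, tolerance=0.15):
    """Flag a measurement that deviates from the model by more than
    `tolerance` (relative) as a possible fault or anomaly."""
    expected = EXPECTED_W[(device_type, state)]
    deviation = (measured_w - expected) / expected
    return abs(deviation) > tolerance, deviation

anomalous, dev = check_reading("switch", "idle", measured_w=71.0)
print(anomalous, f"{dev:+.1%}")    # True +29.1%
```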