726 results for Fault tolerant computing
Abstract:
We noninvasively detected the characteristics and location of a regional fault in an area of poor bedrock exposure complicated by karst weathering features in the subsurface. Because this regional fault is associated with sinkhole formation, its location is important for hazard avoidance. The bedrock lithologies on either side of the fault trace are similar; hence, we chose an approach that capitalized on the complementary strengths of very low frequency (VLF) electromagnetic, resistivity, and gravity methods. VLF proved most useful as a first-order reconnaissance tool, allowing us to define a narrow target area for further geophysical exploration. Fault-related epikarst was delineated using resistivity. Ultimately, a high-resolution gravity survey and subsequent inverse modeling constrained by the resistivity results helped to refine the location and approximate orientation of the fault. The combined results indicated that the fault trace should be relocated 53 m south of its currently published position and were consistent with a north-dipping thrust fault. Additionally, a gravity low south of the fault trace agreed with the location of conductive material identified in the resistivity and VLF surveys. We interpreted these anomalies as enhanced epikarst in the fault footwall. We found that a staged approach, beginning with a reconnaissance VLF survey and progressing to high-resolution gravity and electrical resistivity surveys, can characterize a fault and fault-related karst in an area of poor bedrock surface exposure.
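As a first-order illustration of why the low-density epikarst described above produces a gravity low (a standard Bouguer slab relation, not a formula taken from this abstract), a laterally extensive layer with density contrast \Delta\rho and thickness h contributes an anomaly of approximately

    \Delta g = 2\pi G \,\Delta\rho\, h

where G is the gravitational constant. Porous, weathered epikarst has \Delta\rho < 0, yielding a negative anomaly of the kind the survey mapped south of the fault trace.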
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, in which end-users expect services to be accessible anytime and anywhere. Service availability is also tied to the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology for measuring the energy consumption of network interfaces is proposed. Employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network interface state management and application-level network design on energy consumption. Additionally, the outcomes can be used in software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
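The abstract does not detail the measurement methodology itself; as a minimal sketch of the kind of empirical measurement it describes, the Python fragment below integrates sampled supply power over time to estimate the energy drawn during a network transfer. The acquisition function read_power_sample(), the 1 kHz sampling rate, and the dummy 0.8 W reading are assumptions for illustration, not details from the paper.

    import time

    def read_power_sample():
        """Stand-in for a real acquisition call (e.g., a shunt-resistor ADC
        or USB power monitor). Returns instantaneous power in watts; a
        fixed dummy value is used here so the sketch runs without hardware."""
        return 0.8  # hypothetical: ~0.8 W draw during a transfer

    def measure_energy(duration_s, sample_rate_hz=1000):
        """Estimate energy in joules over duration_s by trapezoidal
        integration of sampled power."""
        dt = 1.0 / sample_rate_hz
        samples = []
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            samples.append(read_power_sample())
            time.sleep(dt)
        # E ~= sum of (p[i] + p[i+1]) / 2 * dt over consecutive samples
        return sum((a + b) / 2.0 * dt for a, b in zip(samples, samples[1:]))

In practice one would also measure an idle baseline with the interface down and subtract it, so that only the interface's own consumption is attributed to the transfer; whether the paper uses such a differential approach is not stated in the abstract.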
Abstract:
In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not shipped to end-users. Creating these tests is time-consuming, costly, and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuits under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speedup in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection. First, a new method for inserting faults into a circuit netlist: given any netlist, our tool inserts multiplexers at the correct internal nodes to aid fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit but also its ability to process data in parallel, and this research exploits that to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, a new method for selecting the most efficient test patterns: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research processes data faster and uses a simpler method for minimizing inputs.
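To make the mux-based fault-insertion idea concrete, here is a minimal software sketch; the thesis implements this on FPGA hardware, and the toy netlist, gate set, and greedy pattern reducer below are illustrative assumptions rather than the author's tool. Forcing a net to a constant emulates the multiplexer an instrumented netlist would select during fault emulation, and the greedy loop mirrors the goal of covering the most faults with the fewest input patterns.

    from itertools import product

    # Toy combinational netlist: net -> (gate, input nets); output is 'n2'.
    NETLIST = {
        'n1': ('AND', ('a', 'b')),
        'n2': ('OR',  ('n1', 'c')),
    }
    GATES = {'AND': lambda x, y: x & y, 'OR': lambda x, y: x | y}
    INPUTS = ('a', 'b', 'c')

    def simulate(pattern, fault=None):
        """Evaluate the netlist for one input pattern. fault=(net, value)
        forces that net to a stuck-at value, as an inserted mux would."""
        values = dict(pattern)
        if fault and fault[0] in values:
            values[fault[0]] = fault[1]
        for net, (gate, ins) in NETLIST.items():
            out = GATES[gate](*(values[i] for i in ins))
            values[net] = fault[1] if fault and fault[0] == net else out
        return values['n2']

    # Enumerate stuck-at-0/1 faults on every net, and all input patterns.
    FAULTS = [(net, sv) for net in (*INPUTS, *NETLIST) for sv in (0, 1)]
    PATTERNS = [dict(zip(INPUTS, bits)) for bits in product((0, 1), repeat=3)]

    # A pattern detects a fault if the faulty output differs from the good one.
    detects = {i: {f for f in FAULTS if simulate(p, f) != simulate(p)}
               for i, p in enumerate(PATTERNS)}

    # Greedy set cover: repeatedly take the pattern covering the most
    # still-undetected faults (a simple stand-in for heuristic reduction).
    uncovered = set().union(*detects.values())
    chosen = []
    while uncovered:
        best = max(detects, key=lambda i: len(detects[i] & uncovered))
        chosen.append(PATTERNS[best])
        uncovered -= detects[best]
    print(f"{len(chosen)} patterns cover all detectable faults: {chosen}")

The FPGA approach described in the abstract replaces the per-fault software loop with many instrumented copies of the circuit running in parallel, presumably each configured to exercise a different fault, which is where the speedup over simulation comes from.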
Abstract:
This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform using a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system constructed with them. The performance analysis includes measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions from the results of the performance analysis and offers suggestions for future work.
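The abstract describes the two frameworks only at a high level; the sketch below imagines what a client-side slice of the hardware core manager might look like. Every name here (CoreManager, register_board, configure, dispatch) is invented for illustration and should not be read as the thesis's actual API.

    class CoreManager:
        """Hypothetical client for a hardware core manager: tracks FPGA
        boards, records which cores are configured on them, and routes
        work to a board that has the required core loaded."""

        def __init__(self):
            self.boards = {}  # board_id -> set of loaded core names

        def register_board(self, board_id):
            self.boards[board_id] = set()

        def configure(self, board_id, core_name, bitstream_path):
            # A real manager would program the FPGA with bitstream_path
            # here (e.g., over JTAG or PCIe); this sketch only records
            # that the core is now available on the board.
            self.boards[board_id].add(core_name)

        def dispatch(self, core_name, payload):
            """Hand payload to the first board with core_name loaded,
            mimicking how the software framework might offload work."""
            for board_id, cores in self.boards.items():
                if core_name in cores:
                    return board_id, payload  # placeholder for a transfer
            raise LookupError(f"no board has core '{core_name}' loaded")

A design along these lines keeps the application-facing software framework ignorant of board topology: applications request a core by name, and the manager decides which physical FPGA services the request.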