973 results for distributed transaction processing


Relevance:

20.00%

Publisher:

Abstract:

Distributed space-time block codes (DSTBCs) from complex orthogonal designs (CODs), both square and nonsquare, coordinate interleaved orthogonal designs (CIODs), and Clifford unitary weight designs (CUWDs) are known to lose their single-symbol ML decodable (SSD) property when used in two-hop wireless relay networks based on the amplify-and-forward protocol. For such networks, this paper constructs three new classes of high-rate, training-symbol embedded (TSE) SSD DSTBCs: TSE-CODs, TSE-CIODs, and TSE-CUWDs. The proposed codes embed the training symbols within the structure of the code, which is shown to be the key to obtaining the SSD property together with channel estimation capability. TSE-CODs are shown to offer full diversity for arbitrary complex constellations, and the constellations for which TSE-CIODs and TSE-CUWDs offer full diversity are characterized. DSTBCs from nonsquare TSE-CODs are shown to provide better rates (in symbols per channel use) than the known SSD DSTBCs for relay networks. Importantly from a practical point of view, the proposed DSTBCs contain no zeros in their codewords; as a result, the relay antennas do not undergo a sequence of switch-on/off transitions within every codeword, thus avoiding the antenna switching problem.
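As a concrete illustration of the square complex orthogonal designs mentioned above (an editorial aside, not taken from the paper), the simplest square COD is the rate-1 Alamouti code for two antennas; its column orthogonality is what makes symbol-by-symbol decoding possible in the point-to-point setting:

$$
X(x_1, x_2) \;=\; \begin{pmatrix} x_1 & x_2 \\ -x_2^{*} & x_1^{*} \end{pmatrix},
\qquad
X^{H} X \;=\; \left( |x_1|^2 + |x_2|^2 \right) I_2 .
$$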

Relevance:

20.00%

Publisher:

Abstract:

Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].
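For orientation (an illustrative aside, not part of the abstract), the storage and repair parameters at the two extreme points of the storage-repair-bandwidth trade-off can be written down directly. The sketch below evaluates the standard MBR and MSR operating points for a hypothetical [n, k, d] = [5, 3, 4] code, with the per-helper download normalized to one symbol.

```python
# Illustrative calculation of regenerating-code parameters (beta = 1 symbol
# downloaded from each of the d helper nodes during a repair). These are the
# standard MBR/MSR operating points, not code from the paper.

def mbr_params(n, k, d):
    """Minimum Bandwidth Regenerating point."""
    alpha = d                        # symbols stored per node
    B = k * d - k * (k - 1) // 2     # total source symbols accommodated
    repair = d                       # symbols downloaded to repair one node
    return alpha, B, repair

def msr_params(n, k, d):
    """Minimum Storage Regenerating point (the constructions need d >= 2k - 2)."""
    alpha = d - k + 1
    B = k * (d - k + 1)
    repair = d
    return alpha, B, repair

n, k, d = 5, 3, 4                    # hypothetical example parameters
print("MBR:", mbr_params(n, k, d))   # (4, 9, 4): repair downloads exactly alpha
print("MSR:", msr_params(n, k, d))   # (2, 6, 4): repair downloads 4 symbols,
                                     # still less than re-encoding all B = 6
```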

Relevance:

20.00%

Publisher:

Abstract:

Workstation clusters equipped with high-performance interconnects featuring programmable network processors offer interesting opportunities to enhance the performance of parallel applications run on them. In this paper, we propose schemes in which certain application-level processing in parallel database query execution is performed on the network processor. We evaluate the performance of TPC-H queries executing on a high-end cluster where all tuple processing is done on the host processor, using a timed Petri net model, and find that tuple-processing costs on the host processor dominate the execution time. These results are validated using a small cluster. We therefore propose four schemes in which certain tuple-processing activity is offloaded to the network processor. The first two schemes offload the tuple-splitting activity, i.e., the computation that identifies the node on which each tuple is to be processed, resulting in an execution-time speedup of 1.09 relative to the base scheme, but with the I/O bus becoming the bottleneck resource. In the third scheme, in addition to offloading the tuple-processing activity, the disk and network interfaces are combined to avoid the I/O bus bottleneck, which results in speedups of up to 1.16, but with high host-processor utilization. Our fourth scheme, in which the network processor also performs a part of the join operation along with the host processor, gives a speedup of 1.47 together with balanced system resource utilization. Further, we observe that the proposed schemes perform equally well in a scaled architecture, i.e., when the number of processors is increased from 2 to 64.
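As a rough illustration of the tuple-splitting step that the first two schemes offload (a sketch under an assumed hash-partitioning policy, not the authors' implementation), the per-tuple work is essentially a hash of the partitioning attribute that selects the destination node:

```python
# Toy sketch of tuple splitting for parallel query execution: each tuple is
# routed to a node based on a hash of its partitioning attribute. Offloading
# this loop to a programmable network processor relieves the host CPU of the
# per-tuple work that dominated execution time in the base scheme.
from typing import Iterable, Tuple

def split_tuples(tuples: Iterable[Tuple], attr_index: int, num_nodes: int):
    """Yield (destination_node, tuple) pairs."""
    for t in tuples:
        dest = hash(t[attr_index]) % num_nodes   # node selection
        yield dest, t

# Example: route a few lineitem-like tuples on their key across 4 nodes.
sample = [(1, "itemA", 9.5), (2, "itemB", 3.0), (42, "itemC", 7.25)]
for node, tup in split_tuples(sample, attr_index=0, num_nodes=4):
    print(f"tuple {tup} -> node {node}")
```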

Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of providing mean-delay and average-throughput guarantees in random-access fading wireless channels using the CSMA/CA algorithm. This problem becomes much more challenging when the scheduling is distributed, as is the case in a typical wireless local area network. We model the CSMA network using a novel queueing-network-based approach. The optimal throughput per device and the throughput-optimal policy in an M-device network are obtained. We provide a simple contention-control algorithm that adapts the attempt probability based on the network load, and we obtain bounds on the packet transmission delay. The information we make use of is the number of devices in the network and the (delayed) queue length at each device. The proposed algorithms stay within the requirements of the IEEE 802.11 standard.
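The abstract does not spell out the contention-control rule, but the flavour of a load-adaptive attempt probability can be sketched as follows. This is a hypothetical illustration: the 1/M-style scaling and the names used here are assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of load-adaptive contention control: the attempt
# probability is scaled down as the number of devices with backlogged
# (delayed) queues grows, in the spirit of classic 1/M random-access tuning.

def attempt_probability(delayed_queue_lengths, target_attempts_per_slot=1.0):
    """Return the per-device transmission attempt probability."""
    backlogged = sum(1 for q in delayed_queue_lengths if q > 0)
    active = max(1, backlogged)              # avoid division by zero
    return min(1.0, target_attempts_per_slot / active)

# Example: a 10-device network in which 4 devices currently report backlog.
queues = [0, 3, 0, 1, 0, 0, 2, 0, 5, 0]
print(attempt_probability(queues))           # 0.25
```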

Relevance:

20.00%

Publisher:

Abstract:

The properties of widely used Ni-Ti-based shape memory alloys (SMAs) are highly sensitive to the underlying microstructure. Hence, controlling the evolution of the microstructure during high-temperature deformation becomes important. In this article, the "processing maps" approach is utilized to identify the combination of temperature and strain rate for thermomechanical processing of a Ni42Ti50Cu8 SMA. Uniaxial compression experiments were conducted in the temperature range of 800-1050 degrees C and the strain rate range of 10^-3 to 10^2 s^-1. Two-dimensional power dissipation efficiency and instability maps were generated, and the various deformation mechanisms operating in different temperature and strain rate regimes were identified with the aid of the maps and complementary microstructural analysis of the deformed specimens. The results show that the safe window for industrial processing of this alloy is in the range of 800-850 degrees C at a strain rate of 0.1 s^-1, which leads to grain refinement and strain-free grains. Regions of instability, which result in a strained microstructure that can in turn degrade the performance of the SMA, were also identified.
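For context (standard dynamic-materials-model relations stated here as background, not quoted from the abstract), the power dissipation efficiency and the flow instability criterion commonly used to build such processing maps are

$$
\eta = \frac{2m}{m+1},
\qquad
\xi(\dot{\varepsilon}) = \frac{\partial \ln\!\left(\frac{m}{m+1}\right)}{\partial \ln \dot{\varepsilon}} + m < 0 ,
$$

where $m$ is the strain-rate sensitivity of the flow stress and regimes with $\xi < 0$ are flagged as unstable.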

Relevance:

20.00%

Publisher:

Abstract:

The term Structural Health Monitoring (SHM) has gained wide acceptance in the recent past as a means to monitor a structure and provide an early warning of an unsafe condition using real-time data. Utilization of structurally integrated, distributed sensors to monitor the health of a structure, through accurate interpretation of sensor signals and real-time data processing, can greatly reduce the inspection burden. The rapid improvement of Fiber Bragg Grating (FBG) sensor technology for strain, vibration and acoustic emission measurements in recent times makes it a feasible alternative to the traditional strain gauge transducers and conventional piezoelectric sensors used for Non-Destructive Evaluation (NDE) and Structural Health Monitoring. Optical fiber-based sensors offer advantages over conventional strain gauges, PVDF film and PZT devices in terms of size, ease of embedment, immunity from electromagnetic interference (EMI) and the potential for multiplexing a number of sensors. The objective of this paper is to demonstrate the feasibility of the Fiber Bragg Grating sensor and compare its utility with that of conventional strain gauges and PVDF film sensors. For this purpose, experiments are being carried out in the laboratory on a composite wing of a mini air vehicle (MAV). In this paper, the results obtained from these preliminary experiments are discussed.

Relevance:

20.00%

Publisher:

Abstract:

The paper explores the biomass-based power generation potential of Africa. Access to electricity in sub-Saharan Africa (SSA) is about 26% and falls to less than 1% in rural areas. On the basis of the agricultural and forest produce of this region, the residues generated after processing are estimated for all the countries. The paper also addresses the use of gasification technology, an efficient thermo-chemical process for distributed power generation, either to replace fossil fuel in an existing diesel-engine-based power generation system or to generate electricity using a gas engine. This approach enables the implementation of electrification programs in the rural sector and gives access to grid-quality power. The study estimates the power generation potential at about 5,000 MW from using 30% of the residues generated during agro-processing and about 10,000 MW from using 10% of the forest residues from the wood-processing industry. A power generation potential of 15,000 MW could generate 100 terawatt-hours (TWh), about 15% of the current generation in SSA. The paper also summarizes some of the experience in using biomass gasification technology for power generation in Africa and India, and highlights the techno-economics of, and key barriers to, the promotion of biomass energy in sub-Saharan Africa. (C) 2011 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
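As a quick sanity check on the headline figures (an illustrative calculation, with the capacity factor an assumed value rather than one quoted in the abstract), 15,000 MW corresponds to roughly 100 TWh per year only at fairly high utilization:

```python
# Convert installed capacity (MW) to annual energy (TWh) for an assumed
# capacity factor; a factor of ~0.76 reproduces the 100 TWh/year estimate.
HOURS_PER_YEAR = 8760

def annual_twh(capacity_mw, capacity_factor):
    return capacity_mw * capacity_factor * HOURS_PER_YEAR / 1e6  # MWh -> TWh

print(annual_twh(15_000, 0.76))   # ~99.9 TWh/year
print(annual_twh(15_000, 1.00))   # ~131.4 TWh/year theoretical upper bound
```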

Relevance:

20.00%

Publisher:

Abstract:

Due to the importance of collective communications in scientific parallel applications, many strategies have been devised for optimizing collective communications in different kinds of parallel environments. There has been increasing interest in developing efficient broadcast algorithms for computational grids. In this paper, we present application-oriented adaptive techniques that take into account resource characteristics as well as the application's usage of broadcasts when deriving efficient broadcast trees. In particular, we consider two broadcast parameters used in the application, namely the broadcast message sizes and the time interval between broadcasts. The results indicate that our adaptive strategies can provide a 20% average improvement in performance over the popular MPICH-G2 MPI_Bcast implementation under loaded network conditions.
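The specific tree-building heuristics are not given in the abstract; the sketch below only illustrates the general idea of choosing a broadcast tree shape from the two application-level parameters mentioned (message size and inter-broadcast interval). The thresholds and tree names are assumptions, not MPICH-G2 internals.

```python
# Hypothetical illustration of application-aware broadcast-tree selection.
# Small, frequent broadcasts favour low-latency trees; large broadcasts favour
# bandwidth-oriented, pipelined trees. The thresholds below are made up.

def choose_broadcast_tree(message_size_bytes, interval_between_bcasts_s):
    if message_size_bytes <= 16 * 1024:
        # Latency-bound regime: minimise the number of communication rounds.
        return "binomial tree"
    if interval_between_bcasts_s < 0.1:
        # Frequent large broadcasts: reuse the tree and overlap segments.
        return "pipelined binary tree"
    return "segmented tree rebuilt per broadcast"

print(choose_broadcast_tree(4 * 1024, 1.0))          # binomial tree
print(choose_broadcast_tree(8 * 1024 * 1024, 0.05))  # pipelined binary tree
```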

Relevance:

20.00%

Publisher:

Abstract:

Low-complexity decoders called Partial Interference Cancellation (PIC) and PIC with Successive Interference Cancellation (PIC-SIC), which include the Zero-Forcing (ZF) and ZF-SIC receivers as special cases, were given by Guo and Xia, along with sufficient conditions for a Space-Time Block Code (STBC) to achieve full diversity with PIC/PIC-SIC decoding for point-to-point MIMO channels. In Part I of this two-part series of papers, we give new conditions for an STBC to achieve full diversity with PIC and PIC-SIC decoders, which are equivalent to Guo and Xia's conditions but are much easier to check. We then show that PIC and PIC-SIC decoders are capable of achieving the full cooperative diversity available in wireless relay networks, and we give sufficient conditions for a Distributed Space-Time Block Code (DSTBC) to achieve full diversity with PIC and PIC-SIC decoders. In Part II, we construct new low-complexity full-diversity PIC/PIC-SIC decodable STBCs and DSTBCs that achieve higher rates than the known full-diversity, low-complexity ML-decodable STBCs and DSTBCs.
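Since the abstract presents ZF and ZF-SIC as the single-symbol-per-group special cases of PIC and PIC-SIC, a minimal numpy sketch of those two special-case receivers may help fix ideas. It is a generic linear-detection illustration for y = Hx + n, not the decoder construction of the paper.

```python
# Zero-Forcing (ZF) and ZF with Successive Interference Cancellation (ZF-SIC)
# detection for y = H x + n, as the one-symbol-per-group special case of
# PIC / PIC-SIC. Symbols are drawn from a QPSK alphabet for illustration.
import numpy as np

ALPHABET = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def slice_to_alphabet(z):
    """Map a soft estimate to the nearest constellation point."""
    return ALPHABET[np.argmin(np.abs(ALPHABET - z))]

def zf_detect(H, y):
    """ZF detection: equalise with the pseudoinverse, then slice each symbol."""
    z = np.linalg.pinv(H) @ y
    return np.array([slice_to_alphabet(zi) for zi in z])

def zf_sic_detect(H, y):
    """ZF-SIC: detect the most reliable stream, cancel it, and repeat."""
    H, y = H.astype(complex), y.astype(complex)
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(len(remaining), dtype=complex)
    while remaining:
        G = np.linalg.pinv(H)
        i = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))  # least noise boost
        x_hat[remaining[i]] = slice_to_alphabet(G[i] @ y)
        y = y - H[:, i] * x_hat[remaining[i]]  # cancel the detected symbol
        H = np.delete(H, i, axis=1)            # and remove its column
        del remaining[i]
    return x_hat

# Example: 4x4 Rayleigh-like channel; in the noiseless case recovery is exact.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = ALPHABET[rng.integers(0, 4, size=4)]
print(np.allclose(zf_sic_detect(H, H @ x), x))
```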

Relevance:

20.00%

Publisher:

Abstract:

In this second part of a two-part series of papers, we construct a new class of Space-Time Block Codes (STBCs) for the point-to-point MIMO channel, and of Distributed STBCs (DSTBCs) for the amplify-and-forward relay channel, that achieve full diversity with Partial Interference Cancellation (PIC) and PIC with Successive Interference Cancellation (PIC-SIC) decoders. The proposed class of STBCs includes most of the known full-diversity, low-complexity PIC/PIC-SIC decodable STBCs as special cases. We also show that a number of known full-diversity PIC/PIC-SIC decodable STBCs constructed for the point-to-point MIMO channel can be used as full-diversity PIC/PIC-SIC decodable DSTBCs in relay networks. For the same decoding complexity, the proposed STBCs and DSTBCs achieve higher rates than the known low-decoding-complexity codes. Simulation results show that the new codes have better bit error rate performance than the low-ML-decoding-complexity codes available in the literature.

Relevance:

20.00%

Publisher:

Abstract:

How the brain maintains perceptual continuity across eye movements that yield discontinuous snapshots of the world is still poorly understood. In this study, we adapted a framework from the dual-task paradigm, well suited to revealing bottlenecks in mental processing, to study how information is processed across sequential saccades. The pattern of reaction times allowed us to distinguish among three models of trans-saccadic processing: no trans-saccadic processing; trans-saccadic visual processing; and trans-saccadic visual processing plus saccade planning. Using a cued double-step saccade task, we show that even though saccade execution is a processing bottleneck that limits access to incoming visual information, the partial visual and motor processing that occurs prior to saccade execution is used to guide the next eye movement. These results provide insights into how the oculomotor system is designed to process information across the multiple fixations that occur during natural scanning.

Relevance:

20.00%

Publisher:

Abstract:

Software transactional memory (STM) has been proposed as a promising programming paradigm for shared-memory multi-threaded programs, as an alternative to conventional lock-based synchronization primitives. Typical STM implementations employ a conflict-detection scheme that works with a uniform access granularity, tracking shared data accesses either at the word/cache-line level or at the object level. It is well known that a single fixed access-tracking granularity cannot meet the conflicting goals of reducing false conflicts without adversely impacting concurrency. A fine-grained granularity, while improving concurrency, can hurt performance due to lock aliasing, lock-validation overheads, and additional cache pressure. On the other hand, a coarse-grained granularity can hurt performance due to reduced concurrency. Thus, in general, a fixed or uniform-granularity access tracking (UGAT) scheme is application-unaware and rarely matches the access patterns of an individual application or of parts of an application, leading to sub-optimal performance. To mitigate the disadvantages of the UGAT scheme, we propose a Variable Granularity Access Tracking (VGAT) scheme in this paper. We propose a compiler-based approach in which the compiler uses inter-procedural whole-program static analysis to select the access-tracking granularity for different shared data structures of the application, based on the application's data access patterns. We describe our prototype VGAT scheme, using TL2 as our STM implementation. Our experimental results reveal that the VGAT-STM scheme can improve the performance of the STAMP benchmarks by amounts ranging from 1.87% up to 21.2%.
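A conceptual sketch of the variable-granularity idea follows. It is purely illustrative: the granularity table, shift values, and hashing scheme here are assumptions made for exposition, not the TL2-based implementation described in the paper. Each shared data structure is assigned its own tracking granularity, and conflict detection maps an address to a lock-table slot at that granularity.

```python
# Illustration of variable-granularity access tracking: addresses belonging to
# different shared data structures are mapped to conflict-detection slots using
# per-structure granularities (word vs. cache line vs. coarse region).

LOCK_TABLE_SIZE = 1 << 20            # slots in a global versioned-lock table

# Hypothetical per-structure granularity, as a compiler might assign it from
# whole-program analysis of access patterns: shift = log2(bytes per slot).
GRANULARITY_SHIFT = {
    "hash_table_buckets": 3,         # word-level (8-byte) tracking
    "tree_nodes": 6,                 # cache-line-level (64-byte) tracking
    "read_mostly_array": 12,         # coarse, page-sized regions
}

def lock_slot(structure, address):
    """Map a memory address to a conflict-detection slot for its structure."""
    shift = GRANULARITY_SHIFT[structure]
    return (address >> shift) % LOCK_TABLE_SIZE

# Two adjacent words in a coarse-grained structure share a slot (fewer locks
# to acquire and validate); in a fine-grained one they do not (fewer false
# conflicts between independent transactions).
print(lock_slot("read_mostly_array", 0x10_0000),
      lock_slot("read_mostly_array", 0x10_0008))
print(lock_slot("hash_table_buckets", 0x10_0000),
      lock_slot("hash_table_buckets", 0x10_0008))
```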