968 results for Distributed Dislocation Dipole Technique
Abstract:
A new experimental technique is proposed to determine refractive indices of liquids and isotropic solids at different wavelengths. A Pellin-Broca hollow prism filled with a liquid sample produces the spectrum of the liquid prism on the photographic plate of the camera. A plane reflector, mounted at a small angle to the normal of the exit face of the prism, also forms a direct image of the collimator slit in the plane of the same photographic plate. All the information needed to determine the refractive indices at different wavelengths is extracted directly from the spectrogram without using any goniometric system. Experiments are conducted with liquid prisms of isopropyl alcohol, water, and benzene. The results are compared with those obtained by a Pulfrich refractometer (critical angle method). The proposed technique gives the refractive indices for visible and invisible spectral lines to an accuracy of 2 × 10⁻⁵. (C) 1997 Society of Photo-Optical Instrumentation Engineers.
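The abstract does not state its working formula, but as a generic illustration of how a refractive index follows from angular measurements on a prism, the classical minimum-deviation relation n = sin((A + D)/2) / sin(A/2) can be sketched as below. This is only the textbook prism relation under assumed angles; the paper's spectrogram-based geometry differs.

```python
import math

def prism_refractive_index(apex_deg: float, min_deviation_deg: float) -> float:
    """Classical minimum-deviation relation n = sin((A + D)/2) / sin(A/2).

    Shown only to illustrate how a refractive index follows from angular
    measurements; the paper extracts the needed quantities from the
    spectrogram geometry rather than from a goniometer.
    """
    A = math.radians(apex_deg)
    D = math.radians(min_deviation_deg)
    return math.sin((A + D) / 2.0) / math.sin(A / 2.0)

# Example: a 60-degree prism with a 30-degree minimum deviation angle
# (illustrative numbers, not data from the paper).
print(round(prism_refractive_index(60.0, 30.0), 5))  # ~1.41421
```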
Abstract:
Ultra-low-load dynamic microhardness testing facilitates hardness measurements in a very small volume of material and is thus well suited for characterization of the interfaces in MMCs. This paper details studies on the age-hardening behavior of the interfaces in Al-Cu-5SiC(p) composites characterized using this technique. The hardness results have been further substantiated by TEM observations. In the solution-treated condition, hardness is maximum at the particle/matrix interface and decreases with increasing distance from the interface. This can be attributed to the dislocation density being maximum at the interface and decreasing with increasing distance from it. In composites subjected to high-temperature aging, hardening at the interface is found to be faster than in the bulk matrix, and the aging kinetics become progressively slower with increasing distance from the interface. This is attributed to the dislocation density gradient at the interface, which leads to enhanced nucleation and growth of precipitates at the interface compared to the bulk matrix. TEM observations reveal that the precipitate size decreases with increasing distance from the interface, confirming the retardation in aging kinetics with increasing distance from the interface.
Abstract:
Thin films of barium strontium titanate (BST), including the BaTiO3 and SrTiO3 end members, were deposited using the metallo-organic decomposition (MOD) technique. Processing parameters such as nonstoichiometry, annealing temperature and time, film thickness, and doping concentration were correlated with the structural and electrical properties of the films. A random polycrystalline structure was observed for all MOD films under the processing conditions in this study. The microstructures of the films showed a multi-grain structure through the film thickness. A dielectric constant of 563 was observed for (Ba0.7Sr0.3)TiO3 films rapid thermal annealed at 750 °C for 60 s. The dielectric constant increased with annealing temperature and film thickness, and could reach the bulk value for thicknesses as small as ~0.3 μm. Nonstoichiometry and doping in the films resulted in a lowering of the dielectric constant. For near-stoichiometric films, a small dielectric dispersion obeying the Curie-von Schweidler type dielectric response was observed. This behavior may be attributed to the presence of a high density of disordered grain boundaries. All MOD-processed films showed trap-distributed space-charge-limited conduction (SCLC) behavior with a slope of ~7.5-10, regardless of chemistry and processing parameters, due to the presence of grain boundaries through the film thickness. The grain boundaries masked the effect of donor doping, so that all films showed distributed-trap SCLC behavior without discrete traps. Donor doping could significantly improve the time-dependent dielectric breakdown behavior of BST thin films, most likely due to the lower oxygen vacancy concentration resulting from donor doping. From the results of charge storage density, leakage current, and time-dependent dielectric breakdown behavior, BST thin films are found to be promising candidates for 64 and 256 Mb ULSI DRAM applications. (C) 1997 Elsevier Science S.A.
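The quoted SCLC slope of ~7.5-10 refers to the exponent m of a power law J ~ V^m, estimated as the slope of log J versus log V. A minimal sketch of that estimation is given below, using synthetic current-voltage data (m = 8 with a little noise) purely to illustrate the procedure; these are not the paper's measurements.

```python
import numpy as np

# Minimal sketch: estimating the SCLC slope m in J ~ V^m by a linear fit
# of log10(J) against log10(V). The data are synthetic and illustrative.
rng = np.random.default_rng(0)
V = np.logspace(-1, 1, 50)                                   # voltage sweep (V)
J = 1e-9 * V**8 * (1 + 0.05 * rng.standard_normal(V.size))   # synthetic current density

slope, intercept = np.polyfit(np.log10(V), np.log10(J), 1)
print(f"fitted SCLC slope m = {slope:.2f}")                   # close to 8
```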
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space view-point is introduced in which the noise-free sensor readings associated to intruder and clutter appear as surfaces f(s) and f(g) and the problem reduces to one of determining in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average case CC of general functions is compared against the performance of a greedy algorithm. Extensions to the multi-party case is straightforward and is briefly discussed. The average case CC of the relevant greaterthan (CT) function is characterized within two bits. Under the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm. 2010 Elsevier B.V. All rights reserved.
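A minimal sketch of the second (quantize-and-fuse) approach is given below: each node compares its noisy reading to a local threshold and broadcasts one bit, and a fusion center applies a counting rule. The threshold, the k-out-of-n fusion rule, and the noise model are illustrative assumptions, not the paper's optimized design.

```python
import numpy as np

# Sketch of single-bit quantization at each sensor followed by fusion:
# a node sends 1 if its reading exceeds a local threshold, and the
# fusion center declares "intruder" if at least k_min bits are set.
def local_bit(reading: float, threshold: float) -> int:
    return int(reading > threshold)

def fuse(bits, k_min: int = 2) -> bool:
    return sum(bits) >= k_min          # simple k-out-of-n counting rule (assumed)

rng = np.random.default_rng(1)
clutter_level, intruder_boost, sigma = 0.2, 1.0, 0.3
n_sensors, threshold = 8, 0.7

# Intruder present: only nearby sensors (here the first 3) see the boost,
# reflecting the "sensing is local" argument in the abstract.
readings = clutter_level + sigma * rng.standard_normal(n_sensors)
readings[:3] += intruder_boost
bits = [local_bit(r, threshold) for r in readings]
print("bits:", bits, "-> intruder detected:", fuse(bits))
```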
Abstract:
The statistical performance of the ESPRIT, root-MUSIC, and minimum-norm methods for direction estimation under finite-data perturbations, using the modified spatially smoothed covariance matrix, is analyzed. Expressions for the mean-squared error in the direction estimates are derived based on a common framework. The analysis shows that the use of the modified smoothed covariance matrix improves the performance of the methods when the sources are fully correlated. Moreover, unlike with the conventionally smoothed covariance matrix, the performance remains good even when the number of subarrays is large. However, the performance for uncorrelated sources deteriorates due to an artificial correlation introduced by the modified smoothing. The theoretical expressions are validated using extensive simulations. (C) 1999 Elsevier Science B.V. All rights reserved.
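A minimal numpy sketch of how a modified (forward-backward) spatially smoothed covariance matrix is formed from subarray covariances of a uniform linear array is given below. The array size, subarray size, and the coherent two-source data are illustrative assumptions, not the paper's analysis setup.

```python
import numpy as np

# Modified (forward-backward) spatial smoothing for a uniform linear array.
def modified_smoothed_covariance(X: np.ndarray, subarray_size: int) -> np.ndarray:
    """X: snapshot matrix of shape (M sensors, N snapshots)."""
    M, N = X.shape
    L = M - subarray_size + 1                      # number of overlapping subarrays
    J = np.fliplr(np.eye(subarray_size))           # exchange (reversal) matrix
    R = np.zeros((subarray_size, subarray_size), dtype=complex)
    for l in range(L):
        Xl = X[l:l + subarray_size, :]
        Rl = Xl @ Xl.conj().T / N                  # forward subarray covariance
        R += Rl + J @ Rl.conj() @ J                # add backward (conjugated) term
    return R / (2 * L)

# Toy example: two fully correlated (coherent) plane waves in noise.
M, N = 8, 200
rng = np.random.default_rng(2)
angles = np.deg2rad([10, 25])
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)      # one waveform, two paths
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
X = A @ np.vstack([s, 0.9 * s]) + 0.1 * (rng.standard_normal((M, N))
                                         + 1j * rng.standard_normal((M, N)))
R_mss = modified_smoothed_covariance(X, subarray_size=5)
print(R_mss.shape)                                 # (5, 5)
```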
Abstract:
A natural velocity field method for shape optimization of reinforced concrete (RC) flexural members has been demonstrated. The possibility of shape optimization by modifying the shape of an initially rectangular section, in addition to varying the breadth and depth along the length, has been explored. The necessary shape changes have been computed using the sequential quadratic programming (SQP) technique. A genetic algorithm (Goldberg and Samtani 1986) has been used to optimize the diameter and number of main reinforcement bars. A limit-state design approach has been adopted for the nonprismatic RC sections. Relevant issues such as the formulation of the optimization problem, finite-element modeling, and the solution procedure have been described. Three design examples (a simply supported beam, a cantilever beam, and a two-span continuous beam, all under uniformly distributed loads) have been optimized. The results show significant savings (40-56%) in material and cost and also yield aesthetically pleasing structures. This procedure can lead to considerable cost savings, particularly for mass-produced precast members and heavy cast-in-place members such as bridge girders.
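A minimal genetic-algorithm sketch of the bar-selection step (choosing diameter and number of main reinforcement bars) is given below. The candidate diameters, the required steel area, and the area-plus-penalty fitness are hypothetical stand-ins; the paper couples this selection with SQP-based shape optimization under a limit-state design formulation.

```python
import random

# GA sketch: choose (bar diameter, bar count) to meet an assumed steel
# area requirement with minimum steel. All numbers are illustrative.
random.seed(0)
DIAMETERS_MM = [12, 16, 20, 25, 32]     # candidate bar diameters (assumed)
AREA_REQUIRED_MM2 = 1800.0              # assumed required steel area

def area(d_mm, n_bars):
    return n_bars * 3.14159 * (d_mm / 2.0) ** 2

def fitness(ind):                        # lower is better
    d, n = ind
    a = area(d, n)
    penalty = 1e3 * max(0.0, AREA_REQUIRED_MM2 - a)   # penalize under-reinforcement
    return a + penalty

def random_ind():
    return (random.choice(DIAMETERS_MM), random.randint(2, 12))

def crossover(p1, p2):
    return (p1[0], p2[1])                # diameter from one parent, count from the other

def mutate(ind):
    d, n = ind
    if random.random() < 0.5:
        d = random.choice(DIAMETERS_MM)
    else:
        n = max(2, min(12, n + random.choice([-1, 1])))
    return (d, n)

pop = [random_ind() for _ in range(20)]
for _ in range(50):                      # generations
    pop.sort(key=fitness)
    parents = pop[:10]                   # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = min(pop, key=fitness)
print("best (diameter mm, bar count):", best, "steel area:", round(area(*best), 1))
```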
Abstract:
This paper investigates the acoustic emission activity during indentation toughness tests on an alumina-based wear-resistant ceramic and a 25 wt% silicon carbide whisker (SiCw) reinforced alumina composite. It is shown that the emitted acoustic emission signals characterize the crack growth during the loading and unloading cycles of an indentation test. The acoustic emission results indicate that, in the case of the composite, the amount of crack growth during unloading is higher than during loading, while the reverse is true for the wear-resistant ceramic. The acoustic emission activity observed in the wear-resistant ceramic is less than that in the composite. An attempt has been made to correlate the acoustic emission signals with crack growth during the indentation test.
Abstract:
Radially homogeneous bulk alloys of GaxIn1-xSb in the range 0.7 < x < 0.8 have been grown by the vertical Bridgman technique. The factors affecting the interface shape during growth were optimised to achieve zero convexity. From a series of experiments, a critical ratio of the temperature gradient (G) of the furnace at the melting point of the melt composition to the ampoule lowering speed (v) was deduced for attaining planarity of the melt-solid interface. Ga0.77In0.23Sb mixed crystals directionally solidified with a planar melt-solid interface exhibited superior quality compared with those grown with nonplanar interfaces. Solutions to certain problems encountered during the synthesis and growth of the compound are discussed. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
The steady-state throughput performance of distributed applications deployed in switched networks in the presence of end-system bottlenecks is studied in this paper. The effect of various limitations at an end-system is modelled as an equivalent transmission capacity limitation. A class of distributed applications is characterised by a static traffic distribution matrix that determines the communication between the various components of the application. It is found that uniqueness of the steady-state throughputs depends only on the traffic distribution matrix, and that some applications (e.g., broadcast applications) can yield non-unique values for the steady-state component throughputs. For a given switch capacity, with traffic distributions that yield fair, unique throughputs, the trade-off between the end-system capacity and the number of application components is brought out. With a proposed distributed rate control, it is illustrated that a unique solution can be obtained for certain traffic distributions for which it is otherwise impossible. Also, by proper selection of the rate control parameters, various throughput performance objectives can be realised.
Abstract:
In this paper, we propose a new fault-tolerant distributed deadlock detection algorithm which can handle the loss of any resource release message. It is based on a token-based distributed mutual exclusion algorithm. Using simulation studies, we have evaluated and compared the performance of the proposed algorithm with that of two other algorithms belonging to two different classes. The proposed algorithm is found to be more efficient than the other two algorithms in terms of the average number of messages per wait and the average deadlock duration in all situations, and has comparable or better performance in terms of the other parameters.
Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is interested only in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those that can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. For the case of a general Abelian group, we present an achievable rate region that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source if one employs a homomorphic encoder. Finally, we present certain non-homomorphic encoders which are also suitable in the context of function computation over non-Abelian group sources, and provide rate regions achieved by these encoders.
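A minimal sketch of the homomorphic-encoder idea over the simplest Abelian case is given below: the same linear map over Z_p is applied at each source, so the sum of the encodings equals the encoding of the mod-p sum, which is what a function-computing decoder needs. The prime p, the encoding matrix, and the source blocks are illustrative; the paper treats general Z_{p^r} and non-Abelian alphabets.

```python
import numpy as np

# Homomorphic (linear) encoding over Z_p: encode(x + y) = encode(x) + encode(y) (mod p).
p = 5                       # group Z_5 (assumed for illustration)
n, k = 8, 3                 # source block length, encoded length
rng = np.random.default_rng(3)
H = rng.integers(0, p, size=(k, n))        # shared linear encoding matrix

def encode(x):
    return (H @ x) % p

x1 = rng.integers(0, p, size=n)            # source 1 block
x2 = rng.integers(0, p, size=n)            # source 2 block

lhs = (encode(x1) + encode(x2)) % p        # combine the two encodings
rhs = encode((x1 + x2) % p)                # encoding of the mod-p sum
print(np.array_equal(lhs, rhs))            # True: encodings suffice to compute the sum
```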
Abstract:
Recently, a special class of complex designs called Training-Embedded Complex Orthogonal Designs (TE-CODs) has been introduced to construct single-symbol Maximum Likelihood decodable (SSD) distributed space-time block codes (DSTBCs) for two-hop wireless relay networks using the amplify-and-forward protocol. However, to implement DSTBCs from square TE-CODs, the overhead due to the transmission of training symbols becomes prohibitively large as the number of relays increases. In this paper, we propose TE-Coordinate Interleaved Orthogonal Designs (TE-CIODs) to construct SSD DSTBCs. Exploiting the block-diagonal structure of TE-CIODs, we show that the overhead due to the transmission of training symbols to implement DSTBCs from TE-CIODs is smaller than that for TE-CODs. We also show that DSTBCs from TE-CIODs offer a higher rate than those from TE-CODs for the same number of relays, while maintaining the SSD and full-diversity properties.
Abstract:
In this paper, we propose a new token-based distributed algorithm for total order atomic broadcast. We show that the proposed algorithm requires fewer messages than the algorithm in which broadcast servers use unicasting to send messages to other broadcast servers. The traditional method of broadcasting requires 3(N - 1) messages to broadcast an application message, where N is the number of broadcast servers present in the system. In the proposed algorithm, the maximum number of token messages required to broadcast an application message is 2N. For a heavily loaded system, the average number of token messages required to broadcast an application message reduces to 2, which is a substantial improvement over the traditional broadcasting approach.
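A quick arithmetic check of the message counts quoted in the abstract is given below, for a few assumed system sizes N (the values of N are illustrative).

```python
# Message counts per application broadcast, as quoted above.
for N in (5, 10, 50):
    traditional = 3 * (N - 1)   # unicast-based broadcast: 3(N - 1) messages
    token_worst = 2 * N         # proposed algorithm, worst case
    token_loaded = 2            # proposed algorithm, heavily loaded average
    print(f"N={N:3d}: traditional={traditional:4d}, "
          f"token worst-case={token_worst:4d}, token heavy-load avg={token_loaded}")
```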
Abstract:
We develop an optimal, distributed, low-feedback timer-based selection scheme to enable next-generation rate-adaptive wireless systems to exploit multi-user diversity. In our scheme, each user sets a timer depending on its signal-to-noise ratio (SNR) and transmits a small packet to identify itself when its timer expires. When the SNR-to-timer mapping is monotone non-increasing, the timers of users with better SNRs expire earlier. Thus, the base station (BS) simply selects the first user whose timer expiry it can detect, and transmits data to it at the highest rate it can reliably support. However, timers that expire too close to one another cannot be detected by the BS due to collisions. We characterize in detail the structure of the SNR-to-timer mapping that optimally handles these collisions to maximize the average data rate. We prove that the optimal timer values take only a discrete set of values, and that the rate adaptation policy strongly influences the optimal scheme's structure. The optimal average rate is very close to that of ideal selection, in which the BS always selects the highest-rate user, and is much higher than that of the popular, but ad hoc, timer schemes considered in the literature.
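A minimal simulation sketch of the timer-based selection rule is given below: each user maps its SNR to a timer through a monotone non-increasing map, and the BS picks the earliest expiry that does not collide with another. The exponential mapping, the vulnerability window, and the SNR statistics are illustrative assumptions, not the optimal discrete mapping derived in the paper.

```python
import numpy as np

# Timer-based selection: better SNR -> earlier expiry; expiries closer
# together than `window` are treated as collisions at the BS.
rng = np.random.default_rng(4)
n_users = 10
T_max = 1.0                 # maximum timer value (assumed)
window = 0.05               # vulnerability window for collisions (assumed)

snr = rng.exponential(scale=1.0, size=n_users)   # Rayleigh-fading SNRs (assumed)
timer = T_max * np.exp(-snr)                     # monotone non-increasing mapping (assumed)

order = np.argsort(timer)                        # expiry order seen at the BS
selected = None
for i, idx in enumerate(order):
    prev_ok = i == 0 or timer[idx] - timer[order[i - 1]] > window
    next_ok = i == len(order) - 1 or timer[order[i + 1]] - timer[idx] > window
    if prev_ok and next_ok:                      # first resolvable (collision-free) expiry
        selected = idx
        break
print("selected user:", selected,
      "SNR:", None if selected is None else round(float(snr[selected]), 2))
```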
Abstract:
A distributed storage setting is considered in which a file of size B is to be stored across n storage nodes. A data collector should be able to reconstruct the entire data by downloading the symbols stored in any k nodes. When a node fails, it is replaced by a new node that downloads data from some of the existing nodes. The amount of data downloaded is termed the repair bandwidth. One way to implement such a system is to store one fragment of an (n, k) MDS code in each node, in which case the repair bandwidth is B. Since repair of a failed node consumes network bandwidth, codes reducing the repair bandwidth are of great interest. Most of the recent work in this area focuses on reducing the repair bandwidth of the set of k nodes that store the data in uncoded form, while the reduction in the repair bandwidth of the remaining nodes is only marginal. In this paper, we present an explicit code which reduces the repair bandwidth of all the nodes to approximately B/2. To the best of our knowledge, this is the first explicit code which reduces the repair bandwidth of all the nodes for all feasible values of the system parameters.
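A quick numeric comparison of the per-failure repair download, under illustrative parameters, is given below: plain (n, k) MDS repair downloads the whole file (B), whereas the code described above brings this down to roughly B/2 for every node.

```python
# Repair download per failed node (illustrative parameters).
n, k = 10, 5
B = 1_000_000                       # file size (symbols), assumed
mds_repair = B                      # reconstruct-and-re-encode MDS repair
proposed_repair = B // 2            # approximate repair bandwidth of the explicit code
print(f"(n, k) = ({n}, {k}), B = {B}: MDS repair = {mds_repair}, proposed ~ {proposed_repair}")
```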