Abstract:
Fracture toughness and fracture mechanisms in Al2O3/Al composites are described. The unique flexibility offered by pressureless infiltration of molten Al alloys into porous alumina preforms was utilized to investigate the effect of microstructural scale and matrix properties on the fracture toughness and the shape of the crack resistance curves (R-curves). The results indicate that the observed increment in toughness is due to crack bridging by intact matrix ligaments behind the crack tip. The deformation behavior of the matrix, which is shown to be dependent on the microstructural constraints, is the key parameter that influences both the steady-state toughness and the shape of the R-curves. Previously proposed models based on crack bridging by intact ductile particles in a ceramic matrix have been modified by the inclusion of an experimentally determined plastic constraint factor (P) that determines the deformation of the ductile phase and are shown to be adequate in predicting the toughness increment in the composites. Micromechanical models to predict the crack tip profile and the bridge lengths (L) correlate well with the observed behavior and indicate that the composites can be classified as (i) short-range toughened and (ii) long-range toughened on the basis of their microstructural characteristics.
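As a rough illustration of the kind of estimate such bridging models produce, the sketch below evaluates the classical ductile-ligament form with the bridging stress scaled by a plastic constraint factor P, as in the abstract; the functional form and all numerical values are illustrative assumptions, not the paper's fitted model.

```python
# Toughening increment from ductile-ligament crack bridging (sketch).
# Classical form: dG_ss = V_f * sigma_y * a0 * chi, with chi a
# dimensionless work-of-rupture parameter; here the bridging stress is
# scaled by an empirical plastic constraint factor P, as in the abstract.
# All numbers below are illustrative assumptions, not the paper's data.

def bridging_toughness_increment(v_f, sigma_y, a0, chi, P):
    """Steady-state toughness increment dG_ss (J/m^2).

    v_f     -- area fraction of intact metal ligaments on the crack plane
    sigma_y -- uniaxial yield stress of the Al alloy matrix (Pa)
    a0      -- characteristic ligament radius (m)
    chi     -- dimensionless work-of-rupture parameter
    P       -- experimentally determined plastic constraint factor
    """
    return P * v_f * sigma_y * a0 * chi

# Example: 25% Al, 100 MPa yield, 5 um ligaments, chi = 1.5, P = 3
dG = bridging_toughness_increment(0.25, 100e6, 5e-6, 1.5, 3.0)
print(f"Delta G_ss ~ {dG:.1f} J/m^2")  # ~562.5 J/m^2
```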
Abstract:
In this paper, we outline an approach to the task of designing network codes in a non-multicast setting. Our approach makes use of the concept of interference alignment. As an example, we consider the distributed storage problem, in which the data is stored across n nodes of the network, a data collector can recover the data by connecting to any k of the n nodes, and, upon failure of a node, a new node can replicate the data stored in the failed node while minimizing the repair bandwidth.
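As background to the any-k-of-n recovery property (this is not the paper's interference-alignment construction), here is a minimal sketch using Reed-Solomon-style polynomial encoding over a small prime field: a collector contacts any k of the n nodes and interpolates the data.

```python
# (n, k) recovery sketch: any k of n storage nodes suffice to rebuild the
# data, via polynomial interpolation over GF(p). This illustrates only the
# "connect to any k" property from the abstract, not the paper's
# interference-alignment code or its repair-bandwidth optimization.

P = 257  # small prime field, fits byte-valued symbols

def interpolate(points, x):
    """Lagrange-interpolate the value at x from k (xi, yi) pairs mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Systematic encode: node i stores p(i), where p matches data at 0..k-1."""
    base = list(enumerate(data))              # (x, y) pairs at x = 0..k-1
    return [interpolate(base, i) for i in range(n)]

data = [10, 20, 30]                            # k = 3 message symbols
shares = encode(data, n=5)                     # stored on n = 5 nodes
picked = [(4, shares[4]), (1, shares[1]), (3, shares[3])]  # any 3 nodes
print([interpolate(picked, x) for x in range(3)])          # -> [10, 20, 30]
```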
Abstract:
Characterizing the functional connectivity between neurons is key for understanding brain function. We recorded spikes and local field potentials (LFPs) from multielectrode arrays implanted in monkey visual cortex to test the hypotheses that spikes generated outward-traveling LFP waves and that the strength of functional connectivity depended on stimulus contrast, as described recently. These hypotheses were proposed based on the observation that the latency of the peak negativity of the spike-triggered LFP average (STA) increased with distance between the spike and LFP electrodes, and the magnitude of the STA negativity and the distance over which it was observed decreased with increasing stimulus contrast. Detailed analysis of the shape of the STA, however, revealed contributions from two distinct sources: a transient negativity in the LFP locked to the spike (~0 ms) that attenuated rapidly with distance, and a low-frequency rhythm with peak negativity ~25 ms after the spike that attenuated slowly with distance. The overall negative peak of the LFP, which combined both these components, shifted from ~0 to ~25 ms going from electrodes near the spike to electrodes far from the spike, giving the impression of a traveling wave, although the shift was fully explained by changing contributions from the two fixed components. The low-frequency rhythm was attenuated during stimulus presentations, decreasing the overall magnitude of the STA. These results highlight the importance of accounting for network activity when using STAs to determine functional connectivity.
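For readers unfamiliar with the analysis, the following is a minimal sketch of how a spike-triggered LFP average is computed; the sampling rate, window, and toy data are assumptions for illustration, and the paper's two-component decomposition would then be fit to an STA like this one.

```python
import numpy as np

# Spike-triggered average (STA) of the LFP: average LFP snippets around
# each spike time. Sampling rate and window here are illustrative.
FS = 1000                      # Hz, assumed LFP sampling rate
WIN = (-50, 100)               # ms around each spike

def spike_triggered_average(lfp, spike_times_ms):
    pre, post = int(-WIN[0] * FS / 1000), int(WIN[1] * FS / 1000)
    snippets = []
    for t in spike_times_ms:
        i = int(t * FS / 1000)
        if i - pre >= 0 and i + post <= len(lfp):
            snippets.append(lfp[i - pre:i + post])
    return np.mean(snippets, axis=0)

# Toy data: noise LFP plus a deflection locked to each spike (~0 ms peak)
rng = np.random.default_rng(0)
lfp = rng.normal(0, 1, 10_000)
spikes = rng.integers(200, 9_800, size=300)   # spike times in ms (FS = 1 kHz)
for t in spikes:
    lfp[t:t + 5] -= 2.0                       # transient negativity at spike
sta = spike_triggered_average(lfp, spikes)
print("peak negativity at", np.argmin(sta) - 50, "ms after the spike")
```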
Abstract:
An analog minimum-variance unbiased estimator (MVUE) over an asymmetric wireless sensor network is studied. Minimisation of variance is cast as a constrained non-convex optimisation problem. An explicit algorithm that solves the problem is provided. The solution is obtained by decomposing the original problem into a finite number of convex optimisation problems with explicit solutions, which are then combined by exploiting further structure in the objective function.
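The constrained problem studied here generalises the textbook inverse-variance-weighted fusion rule; as a baseline, a minimal sketch of that classical MVUE combination (not the paper's algorithm) follows.

```python
import numpy as np

# Classical MVUE fusion of unbiased sensor readings x_i = theta + n_i with
# independent noise variances s_i^2: weight each reading by 1/s_i^2 and
# normalise so the weights sum to 1 (unbiasedness). This is the textbook
# baseline; the paper's constrained, non-convex variant over an asymmetric
# network is not reproduced here.

def mvue(readings, variances):
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                     # sum(w) = 1 keeps the estimate unbiased
    est = np.dot(w, readings)
    var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
    return est, var                  # achieved variance <= min(variances)

est, var = mvue([5.1, 4.8, 5.4], [0.1, 0.2, 0.5])
print(f"estimate = {est:.3f}, variance = {var:.3f}")
```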
Abstract:
The poor performance of TCP over multi-hop wireless networks is well known. In this paper we explore to what extent network coding can help to improve the throughput of TCP-controlled bulk transfers over a chain-topology multi-hop wireless network. The nodes use a CSMA/CA mechanism, such as IEEE 802.11's DCF, to perform distributed packet scheduling. The reverse-flowing TCP ACKs are sought to be XORed with forward-flowing TCP data packets. We find that, without any modification to the MAC protocol, the gain from network coding is negligible. The inherent coordination problem of carrier-sensing-based random access in multi-hop wireless networks dominates the performance. We provide a theoretical analysis that yields a throughput bound with network coding. We then propose a distributed modification of the IEEE 802.11 DCF, based on tuning the back-off mechanism using a feedback approach. Simulation studies show that the proposed mechanism, when combined with network coding, improves the performance of a TCP session by more than 100%.
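A minimal sketch of the XOR coding step at an intermediate node follows: one coded transmission carries a forward data packet and a reverse ACK, and each endpoint decodes by XORing with the packet it already holds. The MAC back-off modification itself is not modelled here.

```python
# COPE-style XOR coding at a relay (sketch): one transmission carries both
# a forward TCP data packet and a reverse TCP ACK; each receiver recovers
# its packet by XORing with the one it already sent. The 802.11 DCF
# back-off tuning proposed in the paper is not modelled here.

def xor_packets(a: bytes, b: bytes) -> bytes:
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\0"), b.ljust(n, b"\0")   # pad to equal length
    return bytes(x ^ y for x, y in zip(a, b))

data = b"TCP DATA segment ->"       # forward-flowing data packet
ack = b"<- TCP ACK"                 # reverse-flowing ACK

coded = xor_packets(data, ack)      # relay sends one coded packet

# The data sender holds `data`, so it recovers the ACK; the receiver
# holds `ack`, so it recovers the data:
assert xor_packets(coded, data).rstrip(b"\0") == ack
assert xor_packets(coded, ack).rstrip(b"\0") == data
print("both endpoints decoded from a single coded transmission")
```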
Abstract:
Building flexible constraint-length Viterbi decoders requires realizing de Bruijn networks of various sizes on the physically provided interconnection network. This paper considers the case when the physical network is itself a de Bruijn network and presents a scalable technique for realizing any n-node de Bruijn network on an N-node de Bruijn network, where n < N. The technique ensures that the length of the longest path realized on the network is minimized and that each physical connection carries only one data item, both of which are desirable in order to reduce the hardware complexity of the network and to obtain the best possible performance.
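For concreteness, here is a minimal sketch of the binary de Bruijn topology assumed in this setting (node i connects to 2i mod N and 2i+1 mod N, the state-transition structure of a Viterbi trellis); the paper's minimal-dilation embedding of the smaller network onto the larger one is not reproduced.

```python
# Binary de Bruijn network: node i of an N-node network (N = 2**m) has
# shuffle-exchange successors (2*i) % N and (2*i + 1) % N. The paper maps
# the edges of a smaller n-node de Bruijn graph onto paths of such a
# physical N-node graph; only the graph itself is sketched here.

def de_bruijn_successors(i, N):
    return [(2 * i) % N, (2 * i + 1) % N]

def de_bruijn_edges(N):
    return [(i, j) for i in range(N) for j in de_bruijn_successors(i, N)]

# 8-node (m = 3) de Bruijn graph: the state graph of a constraint
# length 4 Viterbi decoder (2**(K-1) = 8 states)
for i in range(8):
    print(i, "->", de_bruijn_successors(i, 8))
```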
Abstract:
Digest caches have been proposed as an effective method to speed up packet classification in network processors. In this paper, we show that the presence of a large number of small flows and a few large flows in the Internet has an adverse impact on the performance of these digest caches. In the Internet, a few large flows transfer a majority of the packets, whereas the contribution of several small flows to the total number of packets transferred is small. In such a scenario, the LRU cache replacement policy, which gives maximum priority to the most recently accessed digest, tends to evict digests belonging to the few large flows. We propose a new cache management algorithm called Saturating Priority (SP) which aims at improving the performance of digest caches in network processors by exploiting the disparity between the number of flows and the number of packets transferred. Our experimental results demonstrate that SP performs better than the widely used LRU cache replacement policy in size-constrained caches. Further, we characterize the misses experienced by flow identifiers in digest caches.
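The abstract does not spell out the SP update rule, so the sketch below is one assumed interpretation: each digest's priority saturates after repeated hits and the lowest-priority entry is evicted, so digests of large flows survive bursts of small flows.

```python
# Sketch of a saturating-priority replacement policy for a digest cache:
# each digest's priority rises on every hit up to a cap, and the entry with
# the lowest priority is evicted. Large flows accumulate priority and so
# resist eviction by one-off small flows. The exact SP rules in the paper
# may differ; this is an assumed interpretation of the abstract.

class SaturatingPriorityCache:
    def __init__(self, capacity, max_priority=3):
        self.capacity, self.cap = capacity, max_priority
        self.prio = {}                       # digest -> saturating counter

    def lookup(self, digest):
        if digest in self.prio:              # hit: bump priority, saturating
            self.prio[digest] = min(self.prio[digest] + 1, self.cap)
            return True
        if len(self.prio) >= self.capacity:  # miss: evict lowest priority
            victim = min(self.prio, key=self.prio.get)
            del self.prio[victim]
        self.prio[digest] = 0                # insert new digest
        return False

cache = SaturatingPriorityCache(capacity=2)
for d in ["big", "big", "big", "small1", "small2", "big"]:
    print(d, "hit" if cache.lookup(d) else "miss")
# "big" stays cached despite the burst of small-flow digests
```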
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlation between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation by providing valuable insight into identifying time-sensitive interactions as well as permit studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications, beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
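In the spirit of the model (but much simpler than NETGEM's priors and weight dynamics), the toy sketch below treats one edge's activity as a two-state Markov chain and infers its state sequence from the co-expression of the incident genes with a Viterbi pass, which is linear in the number of time points; the transition and emission scores are assumptions for illustration.

```python
import numpy as np

# Toy sketch in the spirit of NETGEM: treat each edge's interaction state
# as a two-state Markov chain (inactive/active) and infer its most likely
# state sequence from the co-expression of the two incident genes via the
# Viterbi algorithm, which is linear in the number of time points. The
# actual NETGEM priors and weight model are richer than this.

LOG_TRANS = np.log([[0.7, 0.3],     # inactive -> inactive/active
                    [0.3, 0.7]])    # active   -> inactive/active

def edge_activity(expr_u, expr_v):
    """Viterbi decode active/inactive per time point for one edge."""
    co = np.asarray(expr_u) * np.asarray(expr_v)   # co-expression signal
    # assumed emission: active state favours co-expression near 1
    log_emit = np.stack([-np.abs(co), -np.abs(co - 1.0)])   # (2, T)
    T = co.shape[0]
    score = log_emit[:, 0].copy()
    back = np.zeros((2, T), dtype=int)
    for t in range(1, T):
        step = score[:, None] + LOG_TRANS            # (from, to)
        back[:, t] = np.argmax(step, axis=0)
        score = np.max(step, axis=0) + log_emit[:, t]
    states = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        states.append(int(back[states[-1], t]))
    return states[::-1]                              # 0 = inactive, 1 = active

u = [0.1, 0.2, 1.0, 1.1, 0.9, 0.1]
v = [0.0, 0.1, 1.0, 0.9, 1.2, 0.2]
print(edge_activity(u, v))   # e.g. [0, 0, 1, 1, 1, 0]
```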
Abstract:
An attempt has been made to experimentally investigate the fracture process zone (FPZ) using the Acoustic Emission (AE) method in High Strength Concrete (HSC) beams subjected to monotonically increasing load. Stress waves are released during the fracture process in materials, which cause acoustic emissions. The AE energy released during the fracture of notched HSC beam specimens in Three Point Bend (TPB) tests is measured and used to investigate the FPZ in notched HSC beams having a 28-day compressive strength of 78.0 MPa. The specimens are tested in a Material Testing System (MTS) of 1200 kN capacity under Crack Mouth Opening Displacement (CMOD) control at a rate of 0.0004 mm/sec in accordance with RILEM recommendations. A brief review of the AE technique applied to concrete fracture is presented. The fracture process zone developed and the AE energy released during the fracture process in the high strength concrete beam specimens are presented and discussed. It was observed that AE events with higher energy are located around the notch tip. It may be possible to relate AE energy to the fracture energy of concrete.
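As an illustration of the kind of post-processing described (with synthetic numbers, not the measured data), the sketch below bins located AE events by distance from the notch tip and sums their energies, showing how a concentration of high-energy events near the tip would appear.

```python
import numpy as np

# Sketch of the kind of post-processing described: bin located AE events by
# distance from the notch tip and sum their energies, to check whether the
# high-energy events cluster around the tip (the fracture process zone).
# Event locations and energies below are synthetic, not the measured data.

rng = np.random.default_rng(1)
dist_mm = np.abs(rng.normal(0, 15, 400))       # event distance from notch tip
energy = np.exp(-dist_mm / 10) * rng.uniform(0.5, 1.5, 400)  # synthetic units

edges = np.arange(0, 60, 10)                   # 10 mm wide distance bins
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (dist_mm >= lo) & (dist_mm < hi)
    print(f"{lo:2.0f}-{hi:2.0f} mm: {energy[mask].sum():6.1f} energy units "
          f"({mask.sum()} events)")
# Cumulative AE energy per bin falls off with distance from the notch tip.
```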