Abstract:
Hardened concrete is a three-phase composite consisting of cement paste, aggregate, and the interface between cement paste and aggregate. The interface plays a key role in the overall performance of concrete. Interface properties such as deformation, strength, fracture energy, and stress intensity, and their influence on the stiffness and ductility of concrete, have been investigated. The effects of cement composition, aggregate surface characteristics, and type of loading have been studied. The load-deflection response is linear, showing that linear elastic fracture mechanics (LEFM) is applicable to characterize the interface. Crack deformation increases with large, rough aggregate surfaces. The strength of the interface increases with the richness of the concrete mix. The interface fracture energy increases as the roughness of the aggregate surface increases. The interface energy under mode II loading increases with the orientation of the aggregate surface relative to the direction of loading. The chemical reaction between a smooth aggregate surface and the cement paste appears to improve the interface energy. The ductility of concrete decreases as the surface area of strong interface increases. The fracture toughness (stress intensity factor) of the interface appears to be very low compared with that of hardened cement paste, mortar, and concrete.
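As a point of reference for the LEFM framing above, the textbook relations below fix the notation; this is a generic sketch, not the specimen-specific calibration used in the study.

```latex
% Generic LEFM relations (not the study's calibration): stress intensity for a
% crack of length a under remote stress \sigma, and the fracture energy
% obtained from the critical stress intensity K_{Ic}.
\[
  K_I = Y\,\sigma\sqrt{\pi a}, \qquad
  G_i = \frac{K_{Ic}^2}{E'}, \qquad
  E' = \begin{cases} E & \text{plane stress} \\ E/(1-\nu^2) & \text{plane strain} \end{cases}
\]
```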
Abstract:
A neural network has been used to predict flow intermittency from velocity signals in the transition zone of a boundary layer. Unlike many available intermittency detection methods, which require a proper threshold choice to distinguish between the turbulent and non-turbulent parts of a signal, a trained neural network involves no threshold decision. The intermittency prediction based on the neural network has been found to be very satisfactory.
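To make the threshold-free idea concrete, here is a hypothetical minimal sketch: an off-the-shelf MLP labels velocity-signal windows as turbulent or laminar. The window size, synthetic data, and architecture are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch: an MLP classifies velocity-signal windows as turbulent
# (1) or laminar (0) with no explicit detection threshold. Window size,
# features, and architecture are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_window(turbulent, n=64):
    """Synthetic velocity window: turbulence modeled as broadband noise."""
    t = np.linspace(0, 1, n)
    return np.sin(2 * np.pi * 3 * t) + rng.normal(0, 1.0 if turbulent else 0.05, n)

X = np.array([make_window(k % 2 == 1) for k in range(400)])
y = np.array([k % 2 for k in range(400)])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Intermittency factor of a long signal = fraction of windows flagged turbulent.
test = np.array([make_window(rng.random() < 0.3) for _ in range(100)])
print("predicted intermittency:", clf.predict(test).mean())
```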
Abstract:
Representatives of several Internet access providers have expressed their wish to see a substantial change in the pricing policies of the Internet. In particular, they would like to see content providers pay for use of the network, given the large amount of resources they consume. This would be in clear violation of the "network neutrality" principle that has characterized the development of the wireline Internet. Our first goal in this paper is to propose and study possible ways of implementing such payments and of regulating their amount. We introduce a model that includes the internauts' behavior, the utilities of the ISP and of the content providers, and the monetary flow among the internauts, the ISP, and the content provider, in particular the content provider's revenues from advertisements. We consider various game models and study the resulting equilibria; all are combinations of a noncooperative game (in which the service and content providers determine how much they will charge the internauts) with a cooperative one, in which the content provider and the service provider bargain over payments to one another. We include in our model a possible asymmetric bargaining power, represented by a parameter that varies between zero and one. We then extend our model to the case of several content providers. We also provide a brief study of the equilibria that arise when one of the content providers enters into an exclusive contract with the ISP.
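The paper's exact utility functions are not reproduced here; a minimal numeric sketch of the asymmetric (generalized) Nash bargaining step it describes, with invented utilities and disagreement points, is:

```python
# Minimal sketch of asymmetric Nash bargaining between an ISP and a content
# provider (CP) over a side payment p. The utility functions and numbers are
# illustrative assumptions; only the bargaining structure follows the abstract.
import numpy as np

def isp_utility(p):       # ISP revenue grows with the CP payment p
    return 2.0 + p

def cp_utility(p):        # CP keeps ad revenue minus the payment
    return 5.0 - p

d_isp, d_cp = 2.0, 1.0    # disagreement (no-contract) utilities

def nash_product(p, alpha):
    """Generalized Nash product; alpha in [0, 1] is the ISP's bargaining power."""
    u1, u2 = isp_utility(p) - d_isp, cp_utility(p) - d_cp
    if u1 <= 0 or u2 <= 0:
        return -np.inf
    return alpha * np.log(u1) + (1 - alpha) * np.log(u2)

for alpha in (0.2, 0.5, 0.8):
    grid = np.linspace(0.0, 4.0, 4001)
    best = grid[np.argmax([nash_product(p, alpha) for p in grid])]
    print(f"alpha={alpha}: bargained payment p* = {best:.2f}")
```

With these linear utilities the optimizer lands at p* = 4·alpha, so the payment scales directly with the ISP's bargaining power, which is the role the asymmetry parameter plays in the model.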
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those other than the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single-source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source, based on the network code, to correct network errors corresponding to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval that depends on the convolutional code selected. Bounds on this interval and on the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network-error-correcting codes (CNECCs) designed for unit-delay networks using simulations on an example network under a probabilistic error model.
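The paper's code construction depends on the specific network; purely as a generic illustration of the building block placed at the source, a rate-1/2 binary convolutional encoder looks like this:

```python
# Generic rate-1/2 binary convolutional encoder (generators 7, 5 in octal).
# Illustrative only: the actual CNECC is chosen to match the network code and
# the target error patterns, with a free distance meeting the derived bounds.
def conv_encode(bits, g1=0b111, g2=0b101, m=2):
    state = 0
    out = []
    for b in bits + [0] * m:            # append m zeros to flush the encoder
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g1).count("1") % 2)   # parity w.r.t. generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity w.r.t. generator 2
    return out

msg = [1, 0, 1, 1]
print(conv_encode(msg))   # 2 output bits per input bit, plus tail bits
```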
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those other than the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received at their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, with the result that some or all sinks require a large amount of memory for decoding. In this work, we address this problem by utilizing memory elements at the internal nodes of the network as well, which reduces the number of memory elements used at the sinks. We give an algorithm that employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for networks with unit delay, and illustrate the performance gain obtained by using memory at the intermediate nodes through simulations on an example network under a probabilistic network error model.
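A toy sketch of the mechanism, over GF(2) with invented streams: a unit-delay internal node holds its earlier-arriving input in one register so both symbols combined on the outgoing edge belong to the same generation.

```python
# Toy GF(2) sketch: one memory register buffers the faster input so the output
# mixes symbols of a single generation, instead of mixing generations as a
# memory-free node with delay would. Streams and timing are illustrative.
class DelayNode:
    def __init__(self):
        self.reg = 0                          # one memory element

    def step(self, fast_in, slow_in):
        out = self.reg ^ slow_in              # both are generation t-1 symbols
        self.reg = fast_in                    # buffer generation t for next step
        return out

a = [1, 0, 1, 1]      # message stream on the fast incoming edge
b = [0, 1, 1, 0]      # message stream on the slow edge (one extra unit delay)
node = DelayNode()
for t in range(len(a) + 1):
    fast = a[t] if t < len(a) else 0
    slow = b[t - 1] if t > 0 else 0
    print(t, node.step(fast, slow))   # emits a_{t-1} XOR b_{t-1}: one generation
```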
Abstract:
The integration of different wireless networks, such as GSM and WiFi, into a two-tier hybrid wireless network is popular and economical. Efficient bandwidth management, call admission control strategies, and mobility management are important issues in supporting multiple types of services with different bandwidth requirements in hybrid networks. In particular, bandwidth is a critical commodity because of the type of transactions supported by these hybrid networks, which may have varying bandwidth and time requirements. In this paper, we consider such a problem in a hybrid wireless network installed in a superstore environment and design a bandwidth management algorithm based on the priority-level classification of the incoming transactions. Our scheme uses a downlink transaction scheduling algorithm, which decides how to schedule the outgoing transactions based on their priority level while making efficient use of the available bandwidth. The transaction scheduling algorithm is used to maximize the number of transaction executions. The proposed scheme is simulated in a superstore environment with multiple rooms. The performance results show that the proposed scheme can considerably improve bandwidth utilization by reducing transaction blocking and accommodating more essential transactions at peak business hours.
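The abstract does not spell out the scheduling rule; a minimal priority-queue sketch of a downlink scheduler under a per-slot bandwidth budget, with invented priority classes and sizes, is:

```python
# Minimal sketch of priority-based downlink scheduling under a fixed bandwidth
# budget per slot. Priority classes, sizes, and the budget are invented; the
# paper's admission and mobility handling are richer than this.
import heapq

def schedule(transactions, budget):
    """transactions: (priority, bandwidth, id); lower priority = more essential."""
    heap = list(transactions)
    heapq.heapify(heap)
    admitted, blocked = [], []
    while heap:
        prio, bw, tid = heapq.heappop(heap)
        if bw <= budget:
            admitted.append(tid)
            budget -= bw
        else:
            blocked.append(tid)               # would be retried next slot
    return admitted, blocked

txns = [(1, 4, "payment"), (2, 3, "inventory"), (3, 6, "ad-push"), (1, 2, "alarm")]
print(schedule(txns, budget=8))   # essential transactions are admitted first
```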
Abstract:
Syntactic foams, made by mechanically mixing a polymeric binder and hollow spherical particles, are used as core materials in sandwich-structured materials. The low density of such materials makes them suitable for weight-sensitive applications. The present study correlates various post-compression microscopic observations in syntactic foams with the localized events that lead the material to fracture. Depending upon local stress conditions, the fracture features of syntactic foam are identified for various modes of fracture, such as compressive, shear, and tensile. Microscopic observations were also made on sandwich structures containing syntactic foam as the core material and on reinforced syntactic foam containing glass fibers. These observations provide conclusive evidence for the fracture features generated under different failure modes. All the microscopic observations were made using a scanning electron microscope in secondary electron mode. (C) 2002 Kluwer Academic Publishers.
Abstract:
An understanding of application I/O access patterns is useful in several situations. First, gaining insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that closely mimic realistic application behavior. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis, or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in their traces, and to mark out regions in a long trace that encapsulate sets of primitive operations representing higher-level application actions. It is robust enough to work around discrepancies between training and target traces, such as in length and in interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting. We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications, such as compilations, as well as industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
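Profile-HMM details aside, the core scoring step, computing how likely a trace of primitive operations is under each application's model, is the standard HMM forward recursion; a self-contained sketch with made-up matrices:

```python
# Minimal HMM forward-likelihood sketch: score an NFS-operation trace against
# per-application models and pick the best. The matrices and 3-symbol alphabet
# (0=read, 1=write, 2=getattr) are invented; real profile HMMs add
# match/insert/delete states and many more parameters.
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm in probability space (fine for short traces)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum())

# Two toy application profiles over hidden states {scan, burst}.
compile_model = (np.array([0.5, 0.5]),
                 np.array([[0.9, 0.1], [0.2, 0.8]]),
                 np.array([[0.7, 0.1, 0.2], [0.1, 0.8, 0.1]]))
dbbench_model = (np.array([0.5, 0.5]),
                 np.array([[0.5, 0.5], [0.5, 0.5]]),
                 np.array([[0.2, 0.6, 0.2], [0.3, 0.3, 0.4]]))

trace = [0, 0, 2, 0, 0, 2, 0]     # read-heavy trace with periodic getattrs
scores = {name: log_likelihood(trace, *m)
          for name, m in [("compile", compile_model), ("dbbench", dbbench_model)]}
print(max(scores, key=scores.get), scores)
```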
Abstract:
A decapeptide, Boc-L-Ala-(DeltaPhe)(4)-L-Ala-(DeltaPhe)(3)-Gly-OMe (Peptide I), was synthesized to study the preferred screw sense of consecutive alpha,beta-dehydrophenylalanine (DeltaPhe) residues. Crystallographic and CD studies suggest that, despite the presence of two L-Ala residues in the sequence, the decapeptide does not have a preferred screw sense. The peptide crystallizes with two conformers per asymmetric unit, one a slightly distorted right-handed 3(10)-helix (X) and the other a left-handed 3(10)-helix (Y), with X and Y antiparallel to each other. An unanticipated and interesting observation is that, in the solid state, the two shape-complementary molecules self-assemble and interact through an extensive network of C-H...O hydrogen bonds and pi-pi interactions, directed laterally to the helix axis with remarkable regularity. Here, we present an atomic-resolution picture of the weak-interaction-mediated mutual recognition of two secondary structural elements and its possible implications for understanding the specific folding of the hydrophobic core of globular proteins and for exploitation in future work on de novo design.
Abstract:
An experimental investigation of the fracture properties of high-strength concrete (HSC) is reported. Three-point bend beam specimens of size 100 x 100 x 500 mm were used as per the RILEM-FMC 50 recommendations. The influence of the maximum size of coarse aggregate on the fracture energy, fracture toughness, and characteristic length of concrete has been studied. The compressive strength of the concrete ranged between 40 and 75 MPa. Relatively brittle fracture behavior was observed with increasing compressive strength. The load-CMOD relationship is linear in the ascending portion and gradually drops off after the peak value in the descending portion. The length of the tail end of the softening curve increases as the size of coarse aggregate increases. The fracture energy increases as the maximum size of coarse aggregate and the compressive strength of concrete increase. The characteristic length of concrete increases with the maximum size of coarse aggregate and decreases as the compressive strength increases. (C) 2002 Elsevier Science Ltd. All rights reserved.
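For reference, the work-of-fracture quantities whose trends the abstract reports are usually computed from the RILEM definitions below (standard formulas, not values from this study):

```latex
% Standard RILEM work-of-fracture definitions (not this study's data):
% W_0 is the area under the load-deflection curve, m the beam mass between
% supports, \delta_0 the final deflection, and A_lig the ligament area.
\[
  G_F = \frac{W_0 + m g\,\delta_0}{A_{\mathrm{lig}}}, \qquad
  l_{\mathrm{ch}} = \frac{E\, G_F}{f_t^{\,2}}
\]
% l_ch falls as tensile strength f_t rises, consistent with the reported
% decrease of characteristic length with compressive strength.
```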
Abstract:
Neural network models of associative memory exhibit a large number of spurious attractors of the network dynamics that are not correlated with any memory state. These spurious attractors, analogous to "glassy" local minima of the energy or free energy of a system of particles, degrade the performance of the network by trapping trajectories that start from states not close to one of the memory states. Different methods for reducing the adverse effects of spurious attractors are examined, with emphasis on the role of synaptic asymmetry. (C) 2002 Elsevier Science B.V. All rights reserved.
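A compact numeric sketch of the phenomenon (illustrative, not the paper's models): a Hebbian Hopfield network run from a random state can settle into a "mixture" state with low overlap with every stored memory.

```python
# Compact Hopfield sketch: Hebbian weights store random patterns; asynchronous
# updates from a random start can converge to spurious mixture states whose
# overlap with every stored memory is well below 1. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N          # symmetric Hebbian weight matrix
np.fill_diagonal(W, 0)

s = rng.choice([-1, 1], size=N)          # random initial state
for _ in range(50):                      # asynchronous update sweeps
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1

overlaps = patterns @ s / N              # m_mu = overlap with each memory
print(np.round(overlaps, 2))             # all |m_mu| < 1 indicates a spurious state
```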
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory so that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk data handling time spent during IPC can be reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate its performance using a range of workloads, both simulated and real. The results show an impressive gain of 12% to 21% in throughput for static file serving and a 1.6- to 4-times gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
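NABC's system calls are not public APIs, so they cannot be shown here; as a loose analogy for the same class of copy avoidance in the send path, the standard sendfile(2) primitive (exposed in Python as os.sendfile) moves file bytes to a socket without copying them through user space:

```python
# Loose analogy only, not NABC's API: sendfile(2) streams file bytes to a
# socket inside the kernel, skipping the user-space copy a read()/send() loop
# would incur -- the same kind of saving NABC targets for static file serving.
import os
import socket

def serve_file_zero_copy(conn: socket.socket, path: str) -> int:
    """Send `path` over `conn` without copying data through user space."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
    return sent
```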
Abstract:
This paper presents the capability of neural networks as a computational tool for solving constrained optimization problems arising in routing algorithms for present-day communication networks. The application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the average communication delay, is addressed. The effectiveness of the neural network is shown by the results of simulating a neural design to solve the shortest-path problem. The simulated neural network model is utilized within an optimum routing algorithm known as the flow deviation algorithm. It is also shown that the model enables the routing algorithm to be implemented in real time and to adapt to changes in link costs and network topology. (C) 2002 Elsevier Science Ltd. All rights reserved.
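Independent of the neural implementation, one iteration of the classical flow deviation method (shortest paths under marginal-delay link lengths, then a partial flow shift) can be sketched as follows; the topology, M/M/1 delay model, and step size are invented for illustration, and the paper replaces the shortest-path step with a neural network.

```python
# Classical flow-deviation iteration (illustrative). Link delay follows the
# M/M/1 form f/(C - f); the link "length" is its derivative C/(C - f)^2.
import heapq

cap = {("s", "a"): 10, ("a", "t"): 10, ("s", "b"): 10, ("b", "t"): 10}
flow = {e: 0.0 for e in cap}

def marginal_delay(e):
    c, f = cap[e], flow[e]
    return c / (c - f) ** 2                  # d/df of f/(c - f)

def shortest_path(src, dst):
    """Dijkstra under the current marginal-delay link lengths."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        for (a, b) in cap:
            if a == u and d + marginal_delay((a, b)) < dist.get(b, float("inf")):
                dist[b] = d + marginal_delay((a, b))
                prev[b] = (a, b)
                heapq.heappush(pq, (dist[b], b))
    path, v = [], dst
    while v != src:
        path.append(prev[v]); v = prev[v][0]
    return path[::-1]

demand, step = 8.0, 0.25
for e in [("s", "a"), ("a", "t")]:           # initial routing: all on one path
    flow[e] = demand
for _ in range(40):                          # deviate flow toward shortest path
    sp = set(shortest_path("s", "t"))
    for e in cap:
        target = demand if e in sp else 0.0
        flow[e] += step * (target - flow[e])  # f <- (1-step) f + step f_SP
print({e: round(f, 2) for e, f in flow.items()})   # flow splits across paths
```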
Abstract:
This paper presents a prototype fuzzy system for the alleviation of network overloads in the day-to-day operation of power systems. The control used for overload alleviation is real-power generation rescheduling. Generation Shift Sensitivity Factors (GSSF) are computed accurately using a more realistic operational load flow model. The overloading of lines and the sensitivity of the controlling variables are translated into fuzzy set notation to formulate the relation between line overloading and the controlling ability of generation rescheduling. A fuzzy rule-based system is formed to select the controllers, their direction of movement, and their step size. The overall sensitivity of line loading to each generator is also considered in selecting the controllers. Results obtained for network overload alleviation on two modified Indian power networks, of 24 buses and 82 buses, with line-outage contingencies are presented for illustration.
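The rule base itself is not given in the abstract; a toy sketch of the fuzzification step (triangular memberships for overload severity and controller sensitivity, combined by a min rule into a rescheduling step size) might read:

```python
# Toy fuzzification sketch (the paper's actual memberships and rule base are
# not reproduced here): line overload severity and generator sensitivity are
# fuzzified with triangular memberships, and a Mamdani min-rule sets the
# rescheduling step size for each candidate generator.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reschedule_step(overload_pct, sensitivity, max_step_mw=20.0):
    severe = tri(overload_pct, 5, 25, 60)        # "overload is severe"
    effective = tri(sensitivity, 0.1, 0.5, 1.0)  # "generator is effective"
    firing = min(severe, effective)              # min-rule firing strength
    return firing * max_step_mw                  # defuzzified step (MW)

# Example: line 12% overloaded, two candidate generators (GSSF-like values).
for name, s in [("G1", 0.6), ("G2", 0.15)]:
    print(name, round(reschedule_step(12.0, s), 1), "MW")
```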