972 results for "Contractive constraint"
Abstract:
We report a combined experimental and computational study of a low-constraint aluminum single crystal fracture geometry and investigate the near-tip stress and strain fields. To this end, a single edge notched tensile (SENT) specimen is considered. A notch with a radius of 50 µm is taken to lie in the (010) plane, with its front aligned along the [101] direction. Experiments are conducted by subjecting the specimen to tensile loading using a special fixture inside a scanning electron microscope chamber. Both SEM micrographs and electron back-scattered diffraction (EBSD) maps are obtained from the near-tip region. The experiments are complemented by 3D and 2D plane strain finite element simulations within a continuum crystal plasticity framework, assuming an isotropic hardening response characterized by the Peirce–Asaro–Needleman model. The simulations show a distinct slip band forming at about 55 deg with respect to the notch line, corresponding to slip on the (11̄1)[011] system, which agrees well with the experimental data. Furthermore, two kink bands occur at about 45 deg and 90 deg with respect to the notch line, within which large rotations in the crystal orientation take place. These predictions are in good agreement with the EBSD observations. Finally, the near-tip angular variations of the 3D stress and plastic strain fields in the low-constraint SENT fracture geometry are examined in detail.
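The dominance of the (11̄1)[011] system can be rationalized with a quick Schmid-factor side calculation. The sketch below is illustrative, not from the paper, and assumes a tensile axis along [010] (normal to the (010) notch plane): it enumerates the 12 FCC {111}<110> slip systems and confirms that (11̄1)[011] attains the maximal Schmid factor 1/√6 ≈ 0.408.

```python
import math

def schmid_factor(load, normal, slip):
    """|cos(phi) * cos(lambda)| for a slip system under a uniaxial load axis."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    mag = lambda a: math.sqrt(dot(a, a))
    return abs(dot(load, normal) * dot(load, slip)) / (
        mag(load) ** 2 * mag(normal) * mag(slip))

def fcc_slip_systems():
    """All 12 {111}<110> systems: <110> directions lying in each {111} plane."""
    normals = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
    dirs = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]
    return [(n, d) for n in normals for d in dirs
            if sum(a * b for a, b in zip(n, d)) == 0]
```

Under the assumed [010] axis, `schmid_factor((0, 1, 0), (1, -1, 1), (0, 1, 1))` evaluates to 1/√6, which is the maximum over all 12 systems.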
Abstract:
We consider single-source, single-sink (ss-ss) multi-hop relay networks with slow-fading Rayleigh links. This two-part paper aims at giving explicit protocols and codes to achieve the optimal diversity-multiplexing tradeoff (DMT) of two classes of multi-hop networks: K-parallel-path (KPP) networks and Layered networks. While single-antenna KPP networks were the focus of the first part, we consider layered and multi-antenna networks in this second part. We prove that a linear DMT between the maximum diversity d_max and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks under the half-duplex constraint. This is shown to be equal to the optimal DMT if the number of relaying layers is less than 4. For the multiple-antenna case, we provide an achievable DMT, which is significantly better than known lower bounds for half-duplex networks. Along the way, we compute the DMT of parallel MIMO channels in terms of the DMT of the component channel. For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable using an amplify-and-forward (AF) protocol. Explicit short-block-length codes are provided for all the proposed protocols. Two key implications of the results in the two-part paper are that the half-duplex constraint does not necessarily entail a rate loss by a factor of two, as previously believed, and that simple AF protocols are often sufficient to attain the best possible DMT.
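The parallel-channel result above is expressed in terms of the component channel's DMT. As standard background (the classical Zheng–Tse curve, not the paper's network DMT), the DMT of an m × n Rayleigh MIMO link is the piecewise-linear curve through the corner points (k, (m−k)(n−k)); a minimal evaluator:

```python
def mimo_dmt(r, m, n):
    """Diversity order at multiplexing gain r for an m x n Rayleigh MIMO link:
    piecewise-linear interpolation of the points (k, (m-k)*(n-k))."""
    if not 0 <= r <= min(m, n):
        return 0.0
    k = int(r)                      # lower corner point
    if k == min(m, n):
        return 0.0
    d_k = (m - k) * (n - k)
    d_k1 = (m - k - 1) * (n - k - 1)
    return d_k + (r - k) * (d_k1 - d_k)
```

For a 2 × 2 link this gives maximum diversity 4 at r = 0 and diversity 1 at r = 1, interpolating linearly in between.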
Abstract:
We consider single-source, single-sink multi-hop relay networks with slow-fading Rayleigh links and single-antenna relay nodes operating under the half-duplex constraint. While two-hop relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this two-part paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and Layered networks. In the first part, we initially consider KPP networks, which can be viewed as the union of K node-disjoint parallel paths, each of length > 1. The results are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the optimal DMT of KPP(D) networks with K >= 4 and KPP(I) networks with K >= 3. Along the way, we derive lower bounds for the DMT of triangular channel matrices, which are useful in the DMT computation of various protocols. As a special case, the DMT of the two-hop relay network without a direct link is obtained. Two key implications of the results in the two-part paper are that the half-duplex constraint does not necessarily entail a rate loss by a factor of two, as previously believed, and that simple AF protocols are often sufficient to attain the best possible DMT.
Abstract:
We deal with a single conservation law with discontinuous convex-concave type fluxes, which arise when considering sign-changing flux coefficients. The main difficulty is that a weak solution may not exist, as the Rankine-Hugoniot condition at the interface may not be satisfied for certain choices of the initial data. We develop the concept of generalized entropy solutions for such equations by replacing the Rankine-Hugoniot condition with a generalized Rankine-Hugoniot condition. The uniqueness of solutions is shown by proving that the generalized entropy solutions form a contractive semigroup in L^1. Existence follows by showing that a Godunov-type finite difference scheme converges to the generalized entropy solution. The scheme is based on solutions of the associated Riemann problem and is neither consistent nor conservative. The analysis developed here enables us to treat completely the cases of fluxes having at most one extremum in the domain of definition. Numerical results reporting the performance of the scheme are presented. (C) 2006 Elsevier B.V. All rights reserved.
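The Riemann-problem-based flux construction underlying such schemes can be sketched for the simplest case, the convex flux f(u) = u²/2 (Burgers' equation); this is the standard Godunov scheme, not the paper's modified scheme at a flux-discontinuity interface, and grid sizes here are arbitrary:

```python
def godunov_flux(ul, ur):
    """Exact Riemann-solver flux for f(u) = u^2 / 2."""
    f = lambda u: 0.5 * u * u
    if ul <= ur:                          # rarefaction: minimise f on [ul, ur]
        return 0.0 if ul <= 0.0 <= ur else min(f(ul), f(ur))
    return max(f(ul), f(ur))              # shock: maximise f on [ur, ul]

def godunov_step(u, dx, dt):
    """One forward-Euler Godunov update with outflow boundaries."""
    n = len(u)
    unew = u[:]
    for i in range(n):
        fl = godunov_flux(u[i - 1] if i > 0 else u[0], u[i])
        fr = godunov_flux(u[i], u[i + 1] if i < n - 1 else u[-1])
        unew[i] = u[i] - dt / dx * (fr - fl)
    return unew
```

With Riemann initial data (u = 1 on the left, 0 on the right) the scheme propagates the shock at the Rankine-Hugoniot speed 1/2 while preserving monotonicity under the CFL condition.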
Abstract:
We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single-cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path loss exponent.
Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
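The P̄_t^(1/η) scaling of the optimal hop length can be illustrated with a toy objective (an assumed stand-in, not the paper's exact formulation): take transport capacity as hop length times a log rate in the received SNR, brute-force the maximising d for two power levels, and check that the ratio of maximisers tracks (P2/P1)^(1/η).

```python
import math

def transport_capacity(d, P, eta=3.0):
    """Toy bit-metres-per-second objective: hop length times link rate."""
    return d * math.log2(1.0 + P * d ** (-eta))

def d_opt(P, eta=3.0, grid=10000):
    """Brute-force maximiser of the toy objective over d in (0, 10]."""
    ds = [10.0 * k / grid for k in range(1, grid + 1)]
    return max(ds, key=lambda d: transport_capacity(d, P, eta))
```

Substituting x = P d^(-η) shows the maximising x is independent of P, so d_opt is proportional to P^(1/η); with η = 3, scaling P by 8 should double d_opt.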
Abstract:
Space-time codes from complex orthogonal designs (CODs) with no zero entries offer a low peak-to-average power ratio (PAPR) and avoid the problem of turning off antennas. But CODs for 2^a antennas with a + 1 complex variables and no zero entries are not known in the literature for a >= 4. In this paper, a method of obtaining no-zero-entry (NZE) codes, called complex partial-orthogonal designs (CPODs), for 2^(a+1) antennas whenever a certain type of NZE code exists for 2^a antennas is presented. This is achieved with a slight increase in the ML decoding complexity for regular QAM constellations and no increase for other complex constellations. Since NZE CODs have been constructed recently for 8 antennas, our method leads to NZE CPODs for 16 antennas. Moreover, starting from certain NZE CPODs for n antennas, a construction procedure is given to obtain NZE CPODs for 2n antennas. The class of CPODs does not offer full diversity for all complex constellations. For the NZE CPODs presented in the paper, conditions on the signal sets which guarantee full diversity are identified. Simulation results show that the bit error performance of our codes under an average power constraint is the same as that of the CODs and superior to CODs under a peak power constraint.
Abstract:
We study the problem of decentralized sequential change detection with conditionally independent observations. The sensors form a star topology with a central node, called the fusion center, as the hub. The sensors transmit a simple function of their observations in an analog fashion over a wireless Gaussian multiple access channel and operate under either a power constraint or an energy constraint. Simulations demonstrate that the proposed techniques have lower detection delays when compared with existing schemes. Moreover, we demonstrate that the energy-constrained formulation enables better use of the total available energy than a power-constrained formulation.
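As background for the detection step (the centralised single-sensor analogue only; the paper's contribution is the decentralised transmission and fusion over the Gaussian multiple access channel), the classical CUSUM recursion for a Gaussian mean shift can be sketched as:

```python
def cusum(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=5.0):
    """Return the first index at which the CUSUM statistic crosses threshold,
    or None if it never does. Pre-change mean mu0, post-change mean mu1."""
    llr = lambda x: ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
    w = 0.0
    for k, x in enumerate(samples):
        w = max(0.0, w + llr(x))      # reflected random-walk recursion
        if w >= threshold:
            return k
    return None
```

On a noiseless stream that jumps from 0 to 1 at sample 50, the statistic climbs by 0.5 per post-change sample and raises an alarm 10 samples after the change.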
Abstract:
We investigate the dynamics of peeling of an adhesive tape subjected to a constant pull speed. Due to the constraint among the pull force, peel angle and peel force, the equations of motion derived earlier fall into the category of differential-algebraic equations (DAE), requiring an appropriate algorithm for their numerical solution. By including the kinetic energy arising from the stretched part of the tape in the Lagrangian, we derive equations of motion that support stick-slip jumps as a natural consequence of the inherent dynamics itself, thus circumventing the need for any special algorithm. In the low-mass limit, these equations reproduce solutions obtained using a differential-algebraic algorithm introduced for the earlier singular equations. We find that mass has a strong influence on the dynamics of the model, rendering periodic solutions chaotic and vice versa. Apart from the rich dynamics, the model reproduces several qualitative features of the different waveforms of the peel force function, as well as the decreasing nature of the force drop magnitudes.
Abstract:
In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates the predicted error image, to which the wavelet/sub-band coding algorithm can be applied to obtain efficient compression. In the quantization phase, we use a modified SPIHT algorithm to achieve efficiency in memory requirements. The memory constraint plays a vital role in wireless and bandwidth-limited applications. A single reusable list is used instead of three continuously growing linked lists, as in the case of SPIHT. The method is error resilient. The performance is measured in terms of PSNR and memory requirements. The algorithm shows good compression performance and significant savings in memory. (C) 2006 Elsevier B.V. All rights reserved.
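The residual-image idea can be sketched with a plain four-neighbour mean predictor (a simplification for illustration; the paper's predictor is driven by a hierarchical tree structure): each interior pixel is replaced by its prediction error, and borders pass through unpredicted.

```python
def prediction_error(img):
    """Residual image: interior pixels minus the mean of their four axial
    neighbours (integer mean); border pixels are left as-is."""
    h, w = len(img), len(img[0])
    err = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            pred = (img[y - 1][x] + img[y + 1][x]
                    + img[y][x - 1] + img[y][x + 1]) // 4
            err[y][x] = img[y][x] - pred
    return err
```

A flat or linearly varying image predicts exactly, so its interior residuals are zero, which is what makes the error image cheap to code.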
Abstract:
Pack ice is an aggregate of ice floes drifting on the sea surface. The forces controlling the motion and deformation of pack ice are air and water drag forces, sea surface tilt, the Coriolis force and the internal force due to the interaction between ice floes. In this thesis, the mechanical behavior of compacted pack ice is investigated using theoretical and numerical methods, focusing on three basic material properties: compressive strength, yield curve and flow rule. A high-resolution three-category sea ice model is applied to investigate the sea ice dynamics in two small basins, the whole Gulf of Riga and the inner Pärnu Bay, focusing on the calibration of the compressive strength for thin ice. These two basins are on the scales of 100 km and 20 km, respectively, with typical ice thicknesses of 10-30 cm. The model is found capable of capturing the main characteristics of the ice dynamics. The compressive strength is calibrated to be about 30 kPa, consistent with the values from most large-scale sea ice dynamics studies. In addition, the numerical study in Pärnu Bay suggests that the shear strength drops significantly when the ice-floe size markedly decreases. A characteristic inversion method is developed to probe the yield curve of compacted pack ice. The basis of this method is the relationship between the intersection angle of linear kinematic features (LKFs) in sea ice and the slope of the yield curve. A summary of the observed LKFs shows that they can be basically divided into three groups: intersecting leads, uniaxial opening leads and uniaxial pressure ridges. Based on the available observed angles, the yield curve is determined to be a curved diamond. Comparisons of this yield curve with those from other methods show that it possesses almost all the advantages identified by the other methods. A new constitutive law is proposed, where the yield curve is a diamond and the flow rule is a combination of the normal and co-axial flow rules.
The non-normal co-axial flow rule is necessary for the Coulombic yield constraint. This constitutive law not only captures the main features of forming LKFs but also has the advantage of avoiding overestimation of divergence during shear deformation. Moreover, this study provides a method for observing the flow rule for pack ice during deformation.
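The angle-to-slope relationship behind the inversion method can be illustrated in its simplest Coulombic form (a Mohr–Coulomb sketch, assumed for illustration; the thesis's curved-diamond yield curve is more general): conjugate failure lines intersect at 2θ with θ = π/4 − φ/2 for an internal friction angle φ, so observed LKF intersection angles constrain the slope of the yield curve.

```python
import math

def lkf_intersection_angle(phi_deg):
    """Intersection angle (degrees) between conjugate Mohr-Coulomb failure
    lines for internal friction angle phi_deg: 2 * (pi/4 - phi/2)."""
    theta = math.pi / 4.0 - math.radians(phi_deg) / 2.0
    return math.degrees(2.0 * theta)
```

A frictionless material gives perpendicular (90 deg) intersections, and increasing friction closes the angle, which is the direction in which observed lead angles are inverted for the yield-curve slope.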
Abstract:
In this paper, new results and insights are derived for the performance of multiple-input, single-output systems with beamforming at the transmitter, when the channel state information is quantized and sent to the transmitter over a noisy feedback channel. It is assumed that there exists a per-antenna power constraint at the transmitter; hence, the equal gain transmission (EGT) beamforming vector is quantized and sent from the receiver to the transmitter. The loss in received signal-to-noise ratio (SNR) relative to perfect beamforming is analytically characterized, and it is shown that at high rates the overall distortion can be expressed as the sum of the quantization-induced distortion and the channel error-induced distortion, and that the asymptotic performance depends on the error-rate behavior of the noisy feedback channel as the number of codepoints gets large. The optimum density of codepoints (also known as the point density) that minimizes the overall distortion subject to a boundedness constraint is shown to be the same as the point density for a noiseless feedback channel, i.e., the uniform density. The binary symmetric channel with random index assignment is a special case of the analysis, and it is shown that as the number of quantized bits gets large the distortion approaches that obtained with random beamforming. The accuracy of the theoretical expressions obtained is verified through Monte Carlo simulations.
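Since EGT puts equal power on every antenna, only the per-antenna phases need feedback. A toy sketch of that quantisation (an assumed uniform B-bit-per-phase scheme for illustration; the paper analyses the optimal point density and the noisy-feedback distortion):

```python
import cmath
import math
import random

def egt_snr(h, B=None):
    """Received SNR (up to a constant) for equal gain transmission over
    channel coefficients h; with B set, each phase is rounded to the nearest
    multiple of 2*pi / 2**B before combining."""
    total = 0j
    for hk in h:
        phi = -cmath.phase(hk)            # ideal co-phasing weight
        if B is not None:
            step = 2.0 * math.pi / (1 << B)
            phi = round(phi / step) * step
        total += hk * cmath.exp(1j * phi)
    return abs(total) ** 2 / len(h)
```

Unquantised EGT aligns all terms on the real axis, so any finite B can only lose SNR, and the loss shrinks as B grows.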
Abstract:
In modern wireline and wireless communication systems, the Viterbi decoder is one of the most compute-intensive and essential elements. Each standard requires a different configuration of the Viterbi decoder. Hence, there is a need to design a flexible, reconfigurable Viterbi decoder to support different configurations on a single platform. In this paper, we present a reconfigurable Viterbi decoder which can be reconfigured for standards such as WCDMA, CDMA2000, IEEE 802.11, DAB, DVB, and GSM. Different parameters, such as code rate, constraint length, generator polynomials and truncation length, can be configured to map any of the above-mentioned standards. Our design provides higher throughput and scalable power consumption in the various configurations of the reconfigurable Viterbi decoder. The power and throughput can also be optimized for different standards.
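The kernel such a design parameterises (code rate, constraint length, generator polynomials) can be sketched in software. This is a generic hard-decision Viterbi sketch with assumed defaults (rate 1/2, K = 3, generators 7/5 octal), not the paper's hardware architecture:

```python
def conv_encode(bits, polys=(0b111, 0b101), K=3):
    """Encode with a rate-1/len(polys) convolutional code, zero-terminated."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):            # flush the shift register
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.extend(bin(state & p).count("1") % 2 for p in polys)
    return out

def viterbi_decode(received, polys=(0b111, 0b101), K=3):
    """Hard-decision Viterbi decoding of a zero-terminated stream."""
    n = len(polys)
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), n):
        r = received[t:t + n]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = ((s << 1) | b) & ((1 << K) - 1)
                nxt = full & (n_states - 1)
                expected = [bin(full & p).count("1") % 2 for p in polys]
                m = metric[s] + sum(e != x for e, x in zip(expected, r))
                if m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:-(K - 1)]             # drop the flush bits
```

Changing `polys` and `K` re-targets the same kernel to a different standard's code, which is the flexibility the reconfigurable design exposes in hardware.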
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples: our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞.
In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as providing bounds for the performance with practical distributed schedulers.
The partition of unity finite element method for elastic wave propagation in Reissner-Mindlin plates
Abstract:
This paper reports a numerical method for modelling elastic wave propagation in plates. The method is based on the partition of unity approach, in which the approximate spectral properties of the infinite-dimensional system are embedded within the space of a conventional finite element method through a consistent technique of waveform enrichment. The technique is general, such that it can be applied to the Lagrangian family of finite elements with specific waveform enrichment schemes, depending on the dominant modes of wave propagation in the physical system. A four-noded element for the Reissner-Mindlin plate is derived in this paper, which is free of shear locking. Such a locking-free property is achieved by removing the transverse displacement degrees of freedom from the element nodal variables and by recovering the same through a line integral and a weak constraint in the frequency domain. As a result, the frequency-dependent stiffness matrix and the mass matrix are obtained, which accurately capture the higher-frequency response even with coarse meshes. The steps involved in the numerical implementation of such an element are discussed in detail. Numerical studies on the performance of the proposed element are reported by considering a number of cases, which show very good accuracy and low computational cost. Copyright (C) 2006 John Wiley & Sons, Ltd.
Abstract:
In wireless ad hoc networks, nodes communicate with far-off destinations using intermediate nodes as relays. Since wireless nodes are energy constrained, it may not be in the best interest of a node to always accept relay requests. On the other hand, if all nodes decide not to expend energy in relaying, then network throughput will drop dramatically. Both these extreme scenarios (complete cooperation and complete noncooperation) are inimical to the interests of a user. In this paper, we address the issue of user cooperation in ad hoc networks. We assume that nodes are rational, i.e., their actions are strictly determined by self-interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we are able to determine the optimal share of service that each node should receive. We define this to be the rational Pareto optimal operating point. We then propose a distributed and scalable acceptance algorithm called Generous TIT-FOR-TAT (GTFT). The acceptance algorithm is used by the nodes to decide whether to accept or reject a relay request. We show that GTFT results in a Nash equilibrium and prove that the system converges to the rational Pareto optimal operating point.
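The flavour of the acceptance decision can be caricatured as a balance test (an assumed simplification for illustration; the paper's GTFT normalises by traffic shares and lifetime constraints rather than using a raw energy balance): a node accepts a relay request as long as the relaying it has done for others does not exceed what others have done for it by more than a generosity margin ε.

```python
class GTFTNode:
    """Toy Generous-TIT-FOR-TAT-style relay acceptance rule (illustrative)."""

    def __init__(self, epsilon=0.1):
        self.given = 0.0      # relaying energy spent for others
        self.received = 0.0   # relaying energy others spent for this node
        self.epsilon = epsilon

    def accept_relay(self, cost=1.0):
        """Accept iff not ahead of the fair share by more than epsilon."""
        if self.given - self.received <= self.epsilon:
            self.given += cost
            return True
        return False
```

The generosity term is what keeps the system out of the all-defect equilibrium: a fresh node accepts, a node that has given without receiving eventually declines, and cooperation resumes once peers reciprocate.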