959 results for componentwise ultimate bounds
Abstract:
A rigorous lower-bound solution, obtained using finite-element limit analysis, is presented for the ultimate bearing capacity of two interfering strip footings placed on a sandy medium. Both smooth and rough footing-soil interfaces are considered in the analysis. The failure load for an interfering footing is always greater than that for a single isolated footing. The effect of the interference on the failure load (i) is greater for rough footings than for smooth footings, (ii) increases with an increase in phi, and (iii) becomes almost negligible beyond S/B > 3. Compared with various theoretical and experimental results reported in the literature, the present analysis generally provides the lowest magnitude of the collapse load. Copyright (c) 2011 John Wiley & Sons, Ltd.
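The interference effect is usually reported as an efficiency factor xi_gamma = q_u(interfering)/q_u(isolated) plotted against the spacing ratio S/B. The sketch below shows how such a chart would be applied; the xi_gamma values and the isolated-footing capacity are hypothetical placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical efficiency-factor chart xi_gamma(S/B) for a rough footing,
# shaped like the trend in the abstract (enhancement at close spacing,
# tending to 1 beyond S/B = 3); actual values must be read from the paper.
s_over_b = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
xi_gamma = np.array([1.6, 1.4, 1.2, 1.05, 1.0])

def interfering_capacity(q_u_isolated_kpa, spacing_ratio):
    """Ultimate bearing pressure of one of two interfering strip footings,
    obtained by scaling the isolated-footing value with the interpolated
    efficiency factor xi_gamma = q_u(interfering) / q_u(isolated)."""
    xi = np.interp(spacing_ratio, s_over_b, xi_gamma)
    return xi * q_u_isolated_kpa

print(interfering_capacity(450.0, 1.5))  # kPa, with an illustrative isolated value
```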
Abstract:
In this study, the tensile properties of consolidated magnesium chips obtained from solid-state recycling (SSR) have been examined and correlated with the microstructure. Chips machined from an as-cast billet of pure magnesium were consolidated through the SSR technique, comprising compaction at ambient conditions followed by hot extrusion at four different temperatures, viz. 250, 300, 350 and 400 degrees C. The extruded rods were characterized for microstructure and room-temperature tensile properties. Both the ultimate tensile strength and the 0.2% proof stress of the consolidated materials are 15-35% higher than those of the reference material (as cast and extruded). Further, these materials obey the Hall-Petch relation with respect to the dependence of strength on grain size. Strain-hardening behavior, measured in terms of hardening exponent, hardening capacity and hardening rate, was found to be distinctly different in the chip-consolidated material compared to the reference material. Strength asymmetry, measured as the ratio of compressive proof stress to tensile proof stress, was higher in the chip-consolidated material. (C) 2012 Elsevier B.V. All rights reserved.
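The Hall-Petch relation referred to above is sigma_y = sigma_0 + k * d^(-1/2). A minimal fitting sketch follows; the (grain size, proof stress) pairs are hypothetical stand-ins for the four extrusion temperatures, not the paper's data.

```python
import numpy as np

# Hall-Petch relation: sigma_y = sigma_0 + k * d**(-0.5)
# Hypothetical (grain size, proof stress) pairs; the paper's measured values
# would replace these.
d_um  = np.array([5.0, 8.0, 12.0, 20.0])        # grain size, micrometres
sigma = np.array([165.0, 150.0, 140.0, 128.0])  # 0.2% proof stress, MPa

x = 1.0 / np.sqrt(d_um)               # d^(-1/2)
k, sigma_0 = np.polyfit(x, sigma, 1)  # slope k and intercept sigma_0
print(f"sigma_0 = {sigma_0:.1f} MPa, k = {k:.1f} MPa*um^0.5")
```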
Abstract:
Cryosorption pumps are the only solution for pumping helium and hydrogen in fusion reactors: they offer the highest pumping speed and are the only pumps suited to the harsh environment of a tokamak. Towards the development of such cryosorption pumps, the choice of the right activated-carbon panels is essential. In order to characterize the performance of panels with indigenously developed activated carbon, a cryocooler-based cryosorption pump with scaled-down panels has been tested. The results are compared with the commercial cryopanel used in a CTI cryosorption pump (model: Cryotorr 7). The cryopanel is mounted on the cold head of the second stage of a GM cryocooler, which cools the cryopanel down to 11 K, with the first stage reaching about 50 K. With no heat load, the cryopump reaches an ultimate vacuum of 2.1E-7 mbar. The pumping speeds for different gases such as nitrogen, argon, hydrogen and helium are measured on both the indigenous and the commercial cryopanel. These studies serve as a benchmark towards the development of better cryopanels, to be cooled by liquid helium, for use in a tokamak.
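Pumping speed in such tests is commonly obtained with the standard throughput method, S = Q / (p - p_ult). A minimal sketch follows, using illustrative numbers rather than measurements from the paper.

```python
def pumping_speed(q_mbar_l_s, p_mbar, p_ult_mbar):
    """Effective pumping speed (l/s) from the throughput method:
    S = Q / (p - p_ult), with the gas load Q in mbar*l/s and pressures in mbar."""
    return q_mbar_l_s / (p_mbar - p_ult_mbar)

# Illustrative numbers: a gas load of 1e-4 mbar*l/s held 5e-7 mbar above the
# 2.1E-7 mbar ultimate vacuum quoted in the abstract.
print(pumping_speed(1e-4, 7.1e-7, 2.1e-7))  # ~200 l/s
```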
Abstract:
A decode-and-forward based Trellis Coded Modulation (TCM) scheme for the half-duplex relay channel in a Rayleigh fading environment is presented. The proposed scheme can achieve any spectral efficiency greater than or equal to one bit per channel use (bpcu). A near-ML decoder for the proposed TCM scheme is presented, and it is shown that its high Signal-to-Noise Ratio (SNR) performance approaches that of the optimal ML decoder. Based on the derived Pairwise Error Probability (PEP) bounds, design criteria to maximize the diversity and coding gains are obtained. Simulation results show a large gain in SNR for the proposed TCM scheme over uncoded communication as well as over direct transmission without the relay.
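For Rayleigh fading, PEP-based design criteria typically start from a Chernoff-type bound of the form PEP <= prod_k 1/(1 + |x_k - x'_k|^2 SNR/4), whose exponent gives the diversity gain and whose product term gives the coding gain. The sketch below evaluates this generic bound for hypothetical codeword differences; it is not the relay-specific PEP derived in the paper.

```python
import numpy as np

def pep_chernoff_rayleigh(diff_sq, snr_linear):
    """Generic Chernoff-type PEP bound over i.i.d. Rayleigh fading:
    PEP <= prod_k 1 / (1 + |x_k - x'_k|^2 * SNR / 4).
    The number of nonzero squared differences sets the diversity order;
    their product sets the coding gain."""
    diff_sq = np.asarray(diff_sq, dtype=float)
    return float(np.prod(1.0 / (1.0 + diff_sq * snr_linear / 4.0)))

# Hypothetical codeword pairs: differing in one symbol (diversity 1)
# versus two symbols (diversity 2).
for diff in ([2.0, 0.0], [2.0, 2.0]):
    for snr_db in (10.0, 20.0, 30.0):
        snr = 10.0 ** (snr_db / 10.0)
        print(diff, snr_db, pep_chernoff_rayleigh(diff, snr))
```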
Abstract:
Amplify-and-forward (AF) relay based cooperation has been investigated in the literature given its simplicity and practicality. Two models for AF, namely, fixed gain and fixed power relaying, have been extensively studied. In fixed gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay (SR) channel gain. In fixed power relaying, the relay's instantaneous transmit power is fixed, but its gain varies. We propose a general AF cooperation model in which an average transmit power constrained relay jointly adapts its gain and transmit power as a function of the channel gains. We derive the optimal AF gain policy that minimizes the fading-averaged symbol error probability (SEP) of MPSK and present insightful and tractable lower and upper bounds for it. We then analyze the SEP of the optimal policy. Our results show that the optimal scheme is up to 39.7% and 47.5% more energy-efficient than fixed power relaying and fixed gain relaying, respectively. Further, the weaker the direct source-destination link, the greater the energy-efficiency gains.
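Fading-averaged SEP computations of this kind combine the exact conditional M-PSK SEP integral with an end-to-end SNR model. The sketch below averages that integral over a simple Rayleigh-faded direct-plus-relayed SNR (the classical g1*g2/(g1+g2+1) two-hop term with MRC); this stand-in SNR model is an assumption and is not the optimal gain policy derived in the paper.

```python
import numpy as np
from scipy.integrate import quad

def sep_mpsk(gamma, M=4):
    """Exact conditional SEP of M-PSK at instantaneous SNR gamma:
    (1/pi) * integral_0^{(M-1)pi/M} exp(-gamma * sin^2(pi/M) / sin^2(theta)) dtheta."""
    g = np.sin(np.pi / M) ** 2
    val, _ = quad(lambda th: np.exp(-gamma * g / np.sin(th) ** 2),
                  1e-9, (M - 1) * np.pi / M)
    return val / np.pi

rng = np.random.default_rng(0)
n, mean_snr = 5000, 10.0 ** (15.0 / 10.0)   # 15 dB average SNR per link
# Stand-in end-to-end SNR: Rayleigh-faded direct link plus the classical
# two-hop AF term g1*g2/(g1+g2+1), combined by MRC (not the paper's policy).
g0, g1, g2 = (rng.exponential(mean_snr, n) for _ in range(3))
gamma_end = g0 + g1 * g2 / (g1 + g2 + 1.0)
print("fading-averaged SEP:", np.mean([sep_mpsk(g) for g in gamma_end]))
```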
Abstract:
A pairwise independent network (PIN) model consists of pairwise secret keys (SKs) distributed among m terminals. The goal is to generate, through public communication among the terminals, a group SK that is information-theoretically secure from an eavesdropper. In this paper, we study the Harary graph PIN model, which has useful fault-tolerant properties. We derive the exact SK capacity for a regular Harary graph PIN model. Lower and upper bounds on the fault-tolerant SK capacity of the Harary graph PIN model are also derived.
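For even degree d, the Harary graph H_{d,m} is the circulant graph in which terminal i shares a pairwise key with terminals i +/- 1, ..., i +/- d/2 (mod m); it is d-regular and d-edge-connected, which is the fault-tolerance property the model exploits. A minimal construction sketch using networkx follows (it builds the graph only and does not compute the SK capacity).

```python
import networkx as nx

def harary_pin_graph(d, m):
    """Harary graph H_{d,m} for even d: a circulant graph on m terminals where
    terminal i shares a pairwise SK with terminals i +/- 1, ..., i +/- d/2 (mod m)."""
    assert d % 2 == 0 and 0 < d < m
    return nx.circulant_graph(m, list(range(1, d // 2 + 1)))

G = harary_pin_graph(4, 8)                 # 8 terminals, each holding 4 pairwise keys
print(set(dict(G.degree()).values()))      # {4}: the graph is 4-regular
print(nx.edge_connectivity(G))             # 4: tolerates up to 3 link failures
```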
Abstract:
In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse signal setup, and hence they are also applicable to other sparse-signal applications such as compressive sensing.
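A quick back-of-the-envelope comparison of the two order terms (constants suppressed, so only the scaling is meaningful) illustrates the claimed reduction; the population sizes below are arbitrary examples.

```python
import math

def tests_healthy_direct(n, k, l):
    """Order term from the abstract for directly identifying L healthy items:
    O(K*(L - 1)/(N - K)); constants are suppressed."""
    return k * (l - 1) / (n - k)

def tests_classical(n, k):
    """Order term for first recovering all K defective items: O(K*log(N/K))."""
    return k * math.log(n / k)

n, k, l = 10_000, 50, 100
print(tests_healthy_direct(n, k, l))  # ~0.5 (up to constants)
print(tests_classical(n, k))          # ~265 (up to constants)
```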
Abstract:
Fast curve-fitting procedures are proposed for vertical and radial consolidation for use in rapid loading methods. In vertical consolidation, the next load increment can be applied at 50-60% consolidation (or even earlier if the compression index is known). In radial consolidation, the next load increment can be applied at just 10-15% consolidation. The effects of secondary consolidation on the coefficient of consolidation and the ultimate settlement are minimized in both cases. For vertical consolidation, a quick procedure is proposed that determines how far the coefficient of consolidation calculated in this way is from its true value. In radial consolidation no such procedure is required, because at 10-15% consolidation the effects of secondary consolidation are already small in most inorganic soils. The proposed rapid loading methods can be used when the settlement or the duration of a load increment is not known. The characteristic features of vertical, radial, three-dimensional, and secondary consolidation are given in terms of the rate of settlement. A relationship is proposed between the coefficient of vertical consolidation, the load increment ratio, and the compression index. (C) 2013 American Society of Civil Engineers.
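For reference, the coefficient of vertical consolidation is conventionally obtained from a fitted characteristic time via c_v = T_v * H_dr^2 / t, e.g. T_v = 0.197 at 50% consolidation in the Casagrande log-time construction. A minimal sketch with an illustrative oedometer example follows; it is the standard textbook relation, not the accelerated procedure of the paper.

```python
def coeff_of_consolidation(h_dr_m, t50_s, t_v50=0.197):
    """Coefficient of consolidation from the 50%-consolidation time:
    c_v = T_v * H_dr^2 / t_50, with T_v(50%) = 0.197 (Casagrande log-time fit)."""
    return t_v50 * h_dr_m ** 2 / t50_s

# Illustrative oedometer specimen: 20 mm thick, drained top and bottom
# (H_dr = 10 mm), with t_50 = 6 minutes.
print(f"c_v = {coeff_of_consolidation(0.010, 6 * 60):.2e} m^2/s")
```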
Abstract:
The uncertainty in material properties and traffic characterization in the design of flexible pavements has led to significant efforts in recent years to incorporate reliability methods and probabilistic design procedures for the design, rehabilitation, and maintenance of pavements. In the mechanistic-empirical (ME) design of pavements, despite the fact that there are multiple failure modes, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually treated as independent failure events. This study carries out a reliability analysis of a flexible pavement section for these failure criteria using the first-order reliability method (FORM), the second-order reliability method (SORM), and crude Monte Carlo simulation. Through a sensitivity analysis, the most critical parameter affecting the design reliability for both the fatigue and rutting failure criteria was identified as the surface layer thickness. However, reliability analysis in pavement design is most useful if it can be efficiently and accurately applied to the components of pavement design and to the combination of these components in an overall system analysis. The study shows that, for the pavement section considered, there is a high degree of dependence between the two failure modes, and it demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of the component failures. Thus, the need to consider system reliability in pavement analysis is highlighted, and the study indicates that the improvement of pavement performance should be tackled by reducing this undesirable event of simultaneous failure and not merely by considering the more critical failure mode. Furthermore, this probability of simultaneous occurrence of failures is seen to increase considerably with small increments in the mean traffic loads, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds on the probability of failure, which provide a better estimate of the probability of failure, as validated against the results obtained from Monte Carlo simulation (MCS).
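The interplay between component and simultaneous failure probabilities can be illustrated with a crude Monte Carlo sketch over two correlated limit states standing in for fatigue and rutting; the reliability indices and correlation below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Illustrative reliability indices and correlation for the two limit states.
beta_fatigue, beta_rutting, rho = 2.5, 2.8, 0.9
u = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
fail_fatigue = u[:, 0] > beta_fatigue          # fatigue limit state exceeded
fail_rutting = u[:, 1] > beta_rutting          # rutting limit state exceeded

p1 = fail_fatigue.mean()                       # component probabilities
p2 = fail_rutting.mean()
p12 = (fail_fatigue & fail_rutting).mean()     # simultaneous failure
p_sys = (fail_fatigue | fail_rutting).mean()   # series-system failure
print(p1, p2, p12, p_sys)  # with high rho, p12 approaches min(p1, p2)
```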
Abstract:
We study consistency properties of surrogate loss functions for general multiclass classification problems, defined by a general loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting. We then introduce the notion of the classification calibration dimension of a multiclass loss matrix, which measures the smallest 'size' of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, as one application, we provide a different route from the recent result of Duchi et al. (2010) for analyzing the difficulty of designing 'low-dimensional' convex surrogates that are consistent with respect to pairwise subset ranking losses. We anticipate that the classification calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
Abstract:
Maximum likelihood (ML) algorithms for the joint estimation of synchronisation impairments and channel in a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system are investigated in this work. A system model that takes into account the effects of carrier frequency offset, sampling frequency offset, symbol timing error and channel impulse response is formulated. Cramer-Rao lower bounds for the estimation of the continuous parameters are derived, which show the coupling effect among the different impairments and the significance of joint estimation. The authors propose an ML algorithm that estimates the synchronisation impairments and the channel together using a grid search. To reduce the complexity of the joint grid search in the ML algorithm, a modified ML (MML) algorithm with multiple one-dimensional searches is also proposed. Further, a stage-wise ML (SML) algorithm built from existing algorithms, each of which estimates a smaller number of parameters, is also proposed. The performance of the estimation algorithms is studied through numerical simulations, and it is found that the proposed ML and MML algorithms exhibit better performance than the SML algorithm.
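The complexity reduction of the MML approach comes from replacing a joint grid search with repeated one-dimensional searches over each parameter in turn. The generic sketch below illustrates the idea on a toy two-parameter surface; the objective and the grids are placeholders, not the MIMO-OFDM signal model.

```python
import numpy as np

def coordinate_grid_search(objective, grids, n_sweeps=3):
    """Coordinate-wise maximisation: hold all but one parameter fixed and
    maximise over that parameter's grid, sweeping the parameters repeatedly.
    Cost per sweep is sum(len(g)) evaluations instead of prod(len(g))."""
    est = [g[len(g) // 2] for g in grids]            # start at grid centres
    for _ in range(n_sweeps):
        for i, grid in enumerate(grids):
            trial = list(est)
            scores = []
            for v in grid:
                trial[i] = v
                scores.append(objective(trial))
            est[i] = grid[int(np.argmax(scores))]
    return est

# Toy surface with a single peak near (0.1, -0.05), standing in for the
# likelihood as a function of, say, normalised CFO and SFO.
obj = lambda p: -((p[0] - 0.1) ** 2 + (p[1] + 0.05) ** 2) + 0.5 * p[0] * p[1]
grids = [np.linspace(-0.5, 0.5, 101), np.linspace(-0.2, 0.2, 81)]
print(coordinate_grid_search(obj, grids))  # 546 evaluations vs 8181 for a joint grid
```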
Abstract:
The ultimate bearing capacity of strip foundations in the presence of inclined groundwater flow, considering both upward and downward flow directions, has been determined using lower-bound finite-element limit analysis. A numerical solution has been generated for both smooth and rough footings placed on frictional soils. A correction factor (f_gamma), which needs to be multiplied with the N_gamma term, has been computed to account for groundwater seepage. The variation of f_gamma has been obtained as a function of the hydraulic gradient (i) for various inclinations of groundwater flow. For a given magnitude of i, there exists a certain critical inclination of the flow for which the value of f_gamma is minimized. With an upward flow, for all flow inclinations, the magnitude of f_gamma always decreases with an increase in the value of i. An example has also been provided to illustrate the application of the obtained results when designing foundations in the presence of groundwater seepage.
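In design, the correction factor enters the usual surface-footing expression as q_u = 0.5 * gamma' * B * N_gamma * f_gamma. The sketch below applies it with the Vesic closed form for N_gamma as a stand-in for the paper's lower-bound values, and approximates f_gamma for purely vertical upward flow by the loss of effective unit weight, 1 - i/i_cr; for inclined flow, f_gamma would be read from the paper's charts.

```python
import math

def n_gamma_vesic(phi_deg):
    """Vesic closed-form N_gamma (a stand-in for the paper's lower-bound values)."""
    phi = math.radians(phi_deg)
    n_q = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    return 2.0 * (n_q + 1.0) * math.tan(phi)

def q_ultimate_with_seepage(gamma_sub, B, phi_deg, f_gamma):
    """Surface strip footing on sand with the seepage correction on the
    N_gamma term: q_u = 0.5 * gamma' * B * N_gamma * f_gamma (kPa)."""
    return 0.5 * gamma_sub * B * n_gamma_vesic(phi_deg) * f_gamma

# Vertical upward flow approximation: f_gamma ~ 1 - i/i_cr, with i_cr = gamma'/gamma_w.
gamma_sub, gamma_w, i = 10.0, 9.81, 0.3   # kN/m^3, kN/m^3, hydraulic gradient
f_gamma = 1.0 - i / (gamma_sub / gamma_w)
print(q_ultimate_with_seepage(gamma_sub, 1.0, 35.0, f_gamma))
```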
Abstract:
Energy harvesting sensor (EHS) nodes provide an attractive and green solution to the problem of the limited lifetime of wireless sensor networks (WSNs). Unlike a conventional node that uses a non-rechargeable battery and dies once it runs out of energy, an EHS node can harvest energy from the environment and replenish its rechargeable battery. We consider hybrid WSNs that comprise both EHS and conventional nodes; these arise when legacy WSNs are upgraded or due to EHS deployment cost issues. We compare conventional and hybrid WSNs on the basis of a new and insightful performance metric called the k-outage duration, which captures the inability of the nodes to transmit data either due to lack of sufficient battery energy or due to wireless fading. The metric overcomes the problem of defining lifetime in networks with EHS nodes, which never die but are occasionally unable to transmit due to lack of sufficient battery energy. It also accounts for the effect of wireless channel fading on the ability of the WSN to transmit data. We develop two novel, tight, and computationally simple bounds for evaluating the k-outage duration. Our results show that increasing the number of EHS nodes has a markedly different effect on the k-outage duration than increasing the number of conventional nodes.
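The two outage causes named above (an empty battery or a faded channel) can be illustrated for a single EHS node with a toy slot-level simulation; the harvesting, battery and fading parameters are arbitrary assumptions, and the paper's k-outage duration metric aggregates such per-node outages across the network.

```python
import numpy as np

rng = np.random.default_rng(2)
T, E_tx, B_max = 100_000, 1.0, 10.0    # slots, energy per transmission, battery cap
snr_min, mean_snr = 10 ** 0.3, 10.0    # ~3 dB outage threshold, 10 dB mean SNR

battery, outages = 0.0, 0
for _ in range(T):
    battery = min(B_max, battery + rng.exponential(0.8))  # harvested energy
    snr = rng.exponential(mean_snr)                       # Rayleigh-fading SNR
    if battery < E_tx or snr < snr_min:
        outages += 1                                      # node cannot transmit
    else:
        battery -= E_tx
print("per-node outage fraction:", outages / T)
```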
Abstract:
In the present study, high-strength bulk ultrafine-grained titanium alloy Ti-6Al-4V bars were successfully processed using multi-pass warm rolling. Ti-6Al-4V bars of 12 mm diameter and several metres in length were processed by multi-pass warm rolling at 650 degrees C, 700 degrees C and 750 degrees C. The highest mechanical properties achieved for Ti-6Al-4V in the as-rolled condition were a yield strength of 1191 MPa and an ultimate tensile strength of 1299 MPa, with an elongation of 10%, when the rolling temperature was 650 degrees C. The concurrent evolution of microstructure and texture has been studied using optical microscopy, electron backscatter diffraction and X-ray diffraction. The significant improvement in mechanical properties has been attributed to the ultrafine-grained microstructure as well as to the morphology of the alpha and beta phases in the warm-rolled specimens. The warm rolling of Ti-6Al-4V leads to the formation of a <10-10> alpha // RD fibre texture. This study shows that multi-pass warm rolling has the potential to eliminate costly and time-consuming heat-treatment steps for small-diameter bar products, as the solution-treated and aged (STA) properties are achievable in the as-rolled condition itself. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
We study the diversity order versus rate of an additive white Gaussian noise (AWGN) channel over the whole capacity region. We show that, for discrete as well as continuous input, Gallager's upper bounds on the error probability have exponential diversity in the low- and high-rate regions but only subexponential diversity in the mid-rate region. For the best available lower bounds and for practical codes, one observes exponential diversity throughout the capacity region. However, we also show that the performance of practical codes is close to Gallager's upper bounds, and the mid-rate subexponential diversity has a bearing on the performance of practical codes. Finally, we show that the upper bounds with Gaussian input provide a good approximation throughout the capacity region even for finite constellations.
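For the Gaussian-input case, Gallager's random-coding bound takes the textbook form P_e <= exp(-N * E_r(R)) with E_r(R) = max_{0<=rho<=1} [E_0(rho) - rho*R] and E_0(rho) = (rho/2) ln(1 + SNR/(1+rho)). The sketch below evaluates this exponent at a few rate fractions; it is the standard Gaussian-input form, not the discrete-constellation bounds analysed in the paper.

```python
import numpy as np

def gallager_exponent_gaussian(rate_nats, snr_linear):
    """Random-coding error exponent for the AWGN channel with an i.i.d.
    Gaussian input ensemble:
        E_r(R) = max_{0 <= rho <= 1} [ E_0(rho) - rho * R ],
        E_0(rho) = (rho / 2) * ln(1 + SNR / (1 + rho)),
    so that the block error probability is bounded by exp(-N * E_r(R))."""
    rho = np.linspace(0.0, 1.0, 1001)
    e0 = 0.5 * rho * np.log(1.0 + snr_linear / (1.0 + rho))
    return float(np.max(e0 - rho * rate_nats))

snr = 10.0 ** (10.0 / 10.0)             # 10 dB
capacity = 0.5 * np.log(1.0 + snr)      # nats per channel use
for frac in (0.25, 0.5, 0.75, 0.9):
    print(f"R = {frac:.2f}C: E_r = {gallager_exponent_gaussian(frac * capacity, snr):.4f}")
```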