978 results for Number-average
Abstract:
The paradox of strength and ductility is now well established and denotes the difficulty of simultaneously achieving both high strength and high ductility. This paradox was critically examined using a cast Al-7% Si alloy processed by high-pressure torsion (HPT) for up to 10 turns at a temperature of either 298 or 445 K. This processing reduces the grain size to a minimum of ~0.4 μm and also decreases the average size of the Si particles. The results show that samples processed to high numbers of HPT turns exhibit both high strength and high ductility when tested at relatively low strain rates, and the strain rate sensitivity under these conditions is ~0.14, which suggests that flow occurs by some limited grain boundary sliding and crystallographic slip. The results are also displayed on the traditional diagram for strength and ductility and they demonstrate the potential for achieving high strength and high ductility by increasing the number of turns in HPT.
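For context on the ~0.14 value quoted above, the strain rate sensitivity is the usual logarithmic derivative of the flow stress with respect to strain rate (standard definition, not notation taken from the paper):

m = \left.\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right|_{\varepsilon, T}

Values of m close to 0.5 are typically associated with flow dominated by grain boundary sliding, so m ≈ 0.14 is consistent with sliding making only a limited contribution alongside crystallographic slip.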
Abstract:
Cobalt ferrite nanoparticles with average sizes of 14, 9 and 6 nm were synthesised by the chemical co-precipitation technique. Average particle sizes were varied by changing the chitosan surfactant to precursor molar ratio in the reaction mixture. Transmission electron microscopy images revealed a faceted and irregular morphology for the as-synthesised nanoparticles. Magnetic measurements revealed a ferromagnetic nature for the 14 and 9 nm particles and a superparamagnetic nature for the 6 nm particles. An increase in saturation magnetisation with increasing particle size was noted. Relaxivity measurements were carried out using nuclear magnetic resonance to determine the T_2 relaxation time as a function of particle size. The relaxivity coefficient increased with decreasing particle size and decreasing saturation magnetisation. The observed trend in the relaxivity value with particle size was attributed to the faceted nature of the as-synthesised nanoparticles. The faceted morphology creates a high magnetic field gradient in the regions adjacent to the facet edges, which increases the relaxivity. This edge effect on the relaxivity becomes stronger as the particle size decreases because the total number of edges in the particle dispersion increases.
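For reference, the relaxivity coefficient discussed above follows the standard MRI contrast-agent convention (generic notation, not reproduced from the paper): the transverse relaxation rate grows linearly with the particle concentration C,

\frac{1}{T_2} = \frac{1}{T_{2,0}} + r_2\,C

so a larger r_2 means a stronger shortening of T_2 per unit concentration of dispersed nanoparticles.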
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis, plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which would require excessive analysis time to analyze fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, by dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring full analysis of these constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second extension is based on using manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it on a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
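A minimal sketch of the backward, necessary-condition idea described above (the toy statement language, variable names and the "drop at difficult constructs" policy below are illustrative only, not the paper's actual formula language):

# Backward propagation of a necessary condition for a null dereference.
# The "formula" is kept deliberately simple: a set of variables that would all
# have to be null at that program point for the dereference to be unsafe.

def propagate_backward(stmts, deref_var):
    cond = {deref_var}                        # at the dereference: deref_var == null
    for kind, lhs, rhs in reversed(stmts):
        if kind == "new" and lhs in cond:     # lhs = new ...: lhs cannot be null here
            return None                       # condition unsatisfiable -> dereference is safe
        if kind == "copy" and lhs in cond:    # lhs = rhs: rhs must have been null
            cond = (cond - {lhs}) | {rhs}
        if kind == "difficult":               # e.g. a virtual call with many targets:
            cond = cond - set(rhs)            # drop every tracked fact it might affect
    return cond                               # necessary condition at program entry

# Provably safe: p = new; ... ; p.f
print(propagate_backward([("new", "p", None)], "p"))          # -> None (unsatisfiable, safe)
# Not provable: p = q; difficult call that may write p; ... ; p.f
print(propagate_backward([("copy", "p", "q"),
                          ("difficult", None, ["p"])], "p"))   # -> set(): trivially true, not provably safe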
Abstract:
Significant changes are reported in extreme rainfall characteristics over India in recent studies, though there are disagreements on the spatial uniformity and causes of trends. Based on recent theoretical advancements in Extreme Value Theory (EVT), we analyze changes in extreme rainfall characteristics over India using a high-resolution daily gridded (1° latitude × 1° longitude) dataset. Intensity, duration and frequency of excess rain over a high threshold in the summer monsoon season are modeled by non-stationary distributions whose parameters vary with physical covariates such as the El Niño-Southern Oscillation index (ENSO index), which is an indicator of large-scale natural variability, global average temperature, which is an indicator of human-induced global warming, and local mean temperatures, which possibly indicate more localized changes. Each non-stationary model considers one physical covariate, and the best chosen statistical model at each rainfall grid gives the most significant physical driver for each extreme rainfall characteristic at that grid. Intensity, duration and frequency of extreme rainfall exhibit non-stationarity due to different drivers, and no spatially uniform pattern is observed in their changes across the country. At most locations, the duration of extreme rainfall spells is found to be stationary, while non-stationary associations between intensity and frequency and local changes in temperature are detected at a large number of locations. This study presents the first application of non-stationary statistical modeling of intensity, duration and frequency of extreme rainfall over India. The developed models are further used for rainfall frequency analysis to show changes in the 100-year extreme rainfall event. Our findings indicate the varying nature of each extreme rainfall characteristic and their drivers and emphasize the necessity of a comprehensive framework to assess resulting risks of precipitation-induced flooding. (C) 2014 Elsevier B.V. All rights reserved.
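A small sketch of the kind of non-stationary peaks-over-threshold model described above, in which the scale parameter of a generalized Pareto distribution for threshold excesses varies with one physical covariate through a log link (the covariate, link and synthetic data are illustrative choices, not the paper's exact specification):

# Non-stationary GPD for threshold excesses: scale depends on a covariate z.
import numpy as np
from scipy.stats import genpareto
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z = rng.normal(size=500)                          # covariate (e.g. an ENSO-like index)
excess = genpareto.rvs(c=0.1, scale=np.exp(1.0 + 0.3 * z), random_state=rng)

def nll(theta):
    shape, a, b = theta
    scale = np.exp(a + b * z)                     # log-link keeps the scale positive
    return -np.sum(genpareto.logpdf(excess, c=shape, scale=scale))

fit = minimize(nll, x0=[0.1, 0.0, 0.0], method="Nelder-Mead")
print("shape, a, b =", fit.x)                     # b away from 0 -> non-stationary scale

A stationary fit (b fixed to 0) can be compared against this non-stationary fit, e.g. via AIC, to decide grid cell by grid cell whether the covariate is a significant driver.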
Abstract:
We derive analytical expressions for the probability distribution function (PDF) for electron transport in a simple model of a quantum junction in the presence of thermal fluctuations. Our approach is based on large deviation theory combined with the generating function method. For a large number of electrons transferred, the PDF is found to decay exponentially in the tails, with different rates due to the applied bias. This asymmetry in the PDF is related to the fluctuation theorem. Statistics of fluctuations are analyzed in terms of the Fano factor. Thermal fluctuations play a quantitative role in determining the statistics of electron transfer; they tend to suppress the average current while enhancing the fluctuations in particle transfer. This gives rise to both bunching and antibunching phenomena as determined by the Fano factor. The thermal fluctuations and shot noise compete with each other and determine the net (effective) statistics of particle transfer. An exact analytical expression is obtained for the delay time distribution. The optimal values of the delay time between successive electron transfers can be lowered below the corresponding shot noise values by tuning the thermal effects. (C) 2015 AIP Publishing LLC.
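For orientation, the tail asymmetry and the bunching/antibunching language above can be read against the standard full-counting-statistics relations (generic notation, not copied from the paper): the steady-state fluctuation theorem for charge transfer reads

\frac{P_t(n)}{P_t(-n)} = \exp\!\left(\frac{n\,eV}{k_B T}\right)

where n is the net number of electrons transferred in time t and V is the applied bias, while the Fano factor is F = (\langle n^2\rangle - \langle n\rangle^2)/\langle n\rangle, with F > 1 indicating bunching (super-Poissonian statistics) and F < 1 antibunching.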
Quick, Decentralized, Energy-Efficient One-Shot Max Function Computation Using Timer-Based Selection
Abstract:
In several wireless sensor networks, it is of interest to determine the maximum of the sensor readings and identify the sensor responsible for it. We propose a novel, decentralized, scalable, energy-efficient, timer-based, one-shot max function computation (TMC) algorithm. In it, the sensor nodes do not transmit their readings in a centrally pre-defined sequence. Instead, the nodes are grouped into clusters, and computation occurs over two contention stages. First, the nodes in each cluster contend with each other using the timer scheme to transmit their reading to their cluster-heads. Thereafter, the cluster-heads use the timer scheme to transmit the highest sensor reading in their cluster to the fusion node. One new challenge is that the use of the timer scheme leads to collisions, which can make the algorithm fail. We optimize the algorithm to minimize the average time required to determine the maximum subject to a constraint on the probability that it fails to find the maximum. TMC significantly lowers average function computation time, average number of transmissions, and average energy consumption compared to approaches proposed in the literature.
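A toy sketch of the timer-based contention that both stages above rely on: each node maps its reading to a timer through a strictly decreasing function, so the node with the largest reading transmits first, and two timers that expire too close together collide (the mapping, vulnerability window and readings below are illustrative, not the optimized design of the paper):

# One contention stage of timer-based selection.
import numpy as np

def timer_stage(readings, t_max=10.0, window=0.1):
    readings = np.asarray(readings, dtype=float)
    u = (readings - readings.min()) / (np.ptp(readings) + 1e-12)  # normalize to [0, 1]
    timers = t_max * (1.0 - u)                    # larger reading -> earlier timer expiry
    order = np.argsort(timers)
    first, second = order[0], order[1]
    if timers[second] - timers[first] < window:   # runner-up too close: collision, stage fails
        return None
    return first                                  # index of the node holding the maximum

rng = np.random.default_rng(1)
readings = rng.normal(25.0, 3.0, size=20)         # e.g. temperature readings in one cluster
winner = timer_stage(readings)
print(winner, readings.max(), None if winner is None else readings[winner])

In the full algorithm this stage runs once inside every cluster and once more among the cluster-heads, and the timer mapping and clustering are optimized to meet the target probability of failing to find the maximum.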
Abstract:
Small covers were introduced by Davis and Januszkiewicz in 1991. We introduce the notion of equilibrium triangulations for small covers. We study equilibrium and vertex minimal 4-equivariant triangulations of 2-dimensional small covers. We discuss vertex minimal equilibrium triangulations of RP^3 # RP^3, S^1 × RP^2 and a nontrivial S^1-bundle over RP^2. We construct some nice equilibrium triangulations of the real projective space RP^n with 2^n + n + 1 vertices. The main tool is the theory of small covers.
Abstract:
Contemporary cellular standards, such as Long Term Evolution (LTE) and LTE-Advanced, employ orthogonal frequency-division multiplexing (OFDM) and use frequency-domain scheduling and rate adaptation. In conjunction with feedback reduction schemes, high downlink spectral efficiencies are achieved while limiting the uplink feedback overhead. One such important scheme that has been adopted by these standards is best-m feedback, in which every user feeds back its m largest subchannel (SC) power gains and their corresponding indices. We analyze the single cell average throughput of an OFDM system with uniformly correlated SC gains that employs best-m feedback and discrete rate adaptation. Our model incorporates three schedulers that cover a wide range of the throughput versus fairness tradeoff and feedback delay. We show that, for small m, correlation significantly reduces average throughput with best-m feedback. This result is pertinent as even in typical dispersive channels, correlation is high. We observe that the schedulers exhibit varied sensitivities to correlation and feedback delay. The analysis also leads to insightful expressions for the average throughput in the asymptotic regime of a large number of users.
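A compact simulation sketch of best-m feedback with a greedy (maximum-gain) scheduler and discrete rate adaptation over uniformly correlated subchannel gains (the numbers of users and subchannels, rate table and correlation coefficient are illustrative placeholders, and only one of the three schedulers analyzed above is sketched):

# Best-m feedback over uniformly correlated subchannels, greedy scheduling.
import numpy as np

rng = np.random.default_rng(2)
K, N, m, rho = 10, 32, 4, 0.6                     # users, subchannels, feedback size, correlation
thr = np.array([1.0, 3.0, 7.0, 15.0])             # SNR thresholds of the discrete rates
rates = np.array([1.0, 2.0, 3.0, 4.0])            # bits/symbol for each rate

common = (rng.normal(size=(K, 1)) + 1j * rng.normal(size=(K, 1))) / np.sqrt(2)
indep  = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
gains  = np.abs(np.sqrt(rho) * common + np.sqrt(1 - rho) * indep) ** 2   # uniform correlation rho

fed_back = np.full((K, N), -np.inf)
for k in range(K):                                # each user reports only its m best SCs
    best = np.argsort(gains[k])[-m:]
    fed_back[k, best] = gains[k, best]

throughput = 0.0
for n in range(N):                                # greedy scheduler: best reported user per SC
    k_star = np.argmax(fed_back[:, n])
    g = fed_back[k_star, n]
    if np.isfinite(g):                            # skip SCs nobody reported
        idx = np.searchsorted(thr, g, side="right") - 1
        if idx >= 0:
            throughput += rates[idx]
print("average throughput per SC:", throughput / N)

Sweeping rho and m in such a simulation is a quick way to explore the correlation effect discussed above.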
Abstract:
In this paper, the design of a new solar-operated adsorption cooling system with two identical small adsorber beds and one large adsorber bed, capable of producing cold continuously, is proposed. In this system, cold energy is stored in the form of refrigerant in a separate refrigerant storage tank at ambient temperature. Silica gel-water is used as the working pair and the system is driven by solar energy. The operating principle is described in detail and a transient thermodynamic analysis is presented. The effects of adsorbent mass and of the adsorption/desorption time of the smaller beds on the COP and specific cooling effect (SCE) are discussed. The recommended adsorbent mass and number of operating cycles of the smaller beds needed to attain continuous cooling, with an average COP and SCE of 0.63 and 337.5 kJ/kg, respectively, are also given for generation, condenser and evaporator temperatures of 368 K, 303 K and 283 K, respectively. (C) 2015 Elsevier Ltd. All rights reserved.
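For clarity on the two performance figures quoted above, the usual adsorption-chiller definitions are (generic symbols, not necessarily those of the paper):

\mathrm{COP} = \frac{Q_{evap}}{Q_{gen}}, \qquad \mathrm{SCE} = \frac{Q_{evap}}{m_{ads}}

i.e., the cooling delivered at the evaporator per unit driving heat supplied during generation (desorption), and the cooling energy per kilogram of adsorbent per cycle; on this reading, the reported SCE of 337.5 kJ/kg is the cooling obtained per kilogram of silica gel.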
Abstract:
The behaviour of the turbulent Prandtl/Schmidt number is explored through model-free simulation results. It has been observed that compressibility affects the Reynolds scalar flux vectors. Reduced peak values are also observed for the compressible (high convective Mach number) mixing layer as compared with its incompressible counterpart, indicating a reduction in the mixing of enthalpy and species. The Pr_t and Sc_t variations also indicate a reduction in mixing. It is observed that, unlike the incompressible case, it is difficult to assign a constant value to these numbers due to their continuous variation in space. Modelling of Pr_t and Sc_t would be necessary to cater for this continuous spatial variation. However, the turbulent Lewis number is evaluated to be near unity for the compressible case, making it necessary to model only one of Pr_t and Sc_t.
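For reference, the quantities discussed above are the standard eddy-diffusivity ratios (conventional definitions, not taken from the paper):

Pr_t = \frac{\nu_t}{\alpha_t}, \qquad Sc_t = \frac{\nu_t}{D_t}, \qquad Le_t = \frac{Sc_t}{Pr_t} = \frac{\alpha_t}{D_t}

so a turbulent Lewis number near unity means the turbulent diffusivities of heat and species are nearly equal, and specifying either Pr_t or Sc_t fixes the other.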
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial due to large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of the single-stage cost function, while adhering to the constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable for adaptive labor staffing in real-life service systems.
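A bare-bones sketch of the first-order primal-dual SPSA update described above (the cost, constraint, step sizes and the plain rounding used here in place of the paper's generalized smooth projection operator are all illustrative placeholders):

# One-constraint primal-dual SPSA: simultaneous perturbation gradient estimate
# for primal descent on the Lagrangian, dual ascent on the constraint violation.
import numpy as np

rng = np.random.default_rng(3)

def cost(theta):        # placeholder single-stage cost (e.g. under-utilization penalty)
    return np.sum((theta - 7.3) ** 2)

def constraint(theta):  # placeholder SLA-type constraint, required to be <= 0
    return 5.0 - np.sum(theta)

def spsa_step(theta, lam, delta=1.0, a=0.05, b=0.05):
    d = rng.choice([-1.0, 1.0], size=theta.shape)              # Rademacher perturbation
    L = lambda th: cost(th) + lam * constraint(th)             # Lagrangian
    grad = (L(theta + delta * d) - L(theta - delta * d)) / (2 * delta * d)
    theta = theta - a * grad                                   # primal descent
    lam = max(0.0, lam + b * constraint(theta))                # dual ascent
    return theta, lam

theta, lam = np.array([5.0, 5.0, 5.0]), 0.0
for _ in range(200):
    theta, lam = spsa_step(theta, lam)
print("staffing levels:", np.round(theta).astype(int), "multiplier:", lam)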
Abstract:
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ~40,000 and ~53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ~19,000-25,000 tree species. Continental Africa is relatively depauperate with a minimum of ~4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
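The estimation step sketched above rests on Fisher's log-series relation between the number of species S, the number of stems N and Fisher's alpha (the standard form of the relation; the fitted values themselves are in the paper):

S = \alpha \ln\!\left(1 + \frac{N}{\alpha}\right)

With alpha fitted from the inventory plots and N set to the approximate pantropical (or regional) stem total, S gives the corresponding minimum species-richness estimate.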
Abstract:
Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO, by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet which is more power efficient.
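The rate accounting behind the last remark follows the standard GSM bookkeeping (not a formula quoted from the paper): with an M-ary alphabet, each user conveys

\eta_{GSM} = \left\lfloor \log_2 \binom{n_t}{n_{rf}} \right\rfloor + n_{rf}\,\log_2 M \ \text{bits per channel use}

For example, with n_t = 4, n_rf = 2 and 4-QAM this gives ⌊log2 6⌋ + 2·2 = 6 bits per channel use, two of which ride on the choice of active antennas; those extra index bits are what allow GSM to match a given spectral efficiency with a lower-order, more power-efficient QAM alphabet.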
Abstract:
The fluctuations exhibited by the cross sections generated in a compound-nucleus reaction or, more generally, in a quantum-chaotic scattering process, when varying the excitation energy or another external parameter, are characterized by the width Γ_corr of the cross-section correlation function. Brink and Stephen [Phys. Lett. 5, 77 (1963)] proposed a method for its determination by simply counting the number of maxima featured by the cross sections as a function of the parameter under consideration. They stated that the product of the average number of maxima per unit energy range and Γ_corr is constant in the Ericson region of strongly overlapping resonances. We use the analogy between the scattering formalism for compound-nucleus reactions and for microwave resonators to test this method experimentally with unprecedented accuracy using large data sets, and we propose an analytical description for the regions of isolated and overlapping resonances.
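As background for the width being estimated (the standard Ericson-regime form, not a result specific to this paper): in the region of strongly overlapping resonances the normalized cross-section autocorrelation function is Lorentzian,

C(\varepsilon) = \frac{\langle \sigma(E)\,\sigma(E+\varepsilon)\rangle}{\langle \sigma(E)\rangle^{2}} - 1 \;\propto\; \frac{1}{1 + (\varepsilon/\Gamma_{corr})^{2}}

so Γ_corr can be read off either from this correlation function or, following Brink and Stephen, from the average density of maxima, which in this regime is proportional to 1/Γ_corr.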
Abstract:
Cooperative relaying combined with selection exploits spatial diversity to significantly improve the performance of interference-constrained secondary users in an underlay cognitive radio (CR) network. However, unlike conventional relaying, the state of the links between the relay and the primary receiver affects the choice of the relay. Further, while the optimal amplify-and-forward (AF) relay selection rule for underlay CR is well understood for the peak interference constraint, this is not so for the less conservative average interference constraint. For the latter, we present three novel AF relay selection (RS) rules, namely, symbol error probability (SEP)-optimal, inverse-of-affine (IOA), and linear rules. We analyze the SEPs of the IOA and linear rules and also develop a novel, accurate approximation technique for analyzing the performance of AF relays. Extensive numerical results show that all the three rules outperform several RS rules proposed in the literature and generalize the conventional AF RS rule.
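As a point of reference for the selection rules above (standard dual-hop AF notation, not the paper's specific rules): with relay k, the end-to-end SNR of an amplify-and-forward link is

\gamma_k = \frac{\gamma_{1,k}\,\gamma_{2,k}}{\gamma_{1,k} + \gamma_{2,k} + 1}

and the conventional AF rule simply selects k* = argmax_k γ_k. In the underlay setting with an average interference constraint, the secondary transmit powers, and hence γ_{1,k} and γ_{2,k}, also depend on the relay-to-primary link gains, which is why rules tailored to that constraint (such as the SEP-optimal, IOA and linear rules above) can improve on the conventional selection.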