15 results for Best-case scenario
at Indian Institute of Science - Bangalore - India
Abstract:
There is a huge knowledge gap in our understanding of many terrestrial carbon cycle processes. In this paper, we investigate the bounds on terrestrial carbon uptake over India that arise solely due to CO2-fertilization. For this purpose, we use a terrestrial carbon cycle model and consider two extreme scenarios: in one simulation, unlimited CO2-fertilization is allowed for the terrestrial vegetation at a CO2 concentration level of 735 ppm; in the other, CO2-fertilization is capped at year 1975 levels. Our simulations show that, under equilibrium conditions, modeled carbon stocks in natural potential vegetation increase by 17 Gt-C with unlimited fertilization for CO2 levels and climate change corresponding to the end of the 21st century, but they decline by 5.5 Gt-C if fertilization is capped at 1975 levels of CO2 concentration. The carbon stock changes are dominated by forests. The area covered by natural potential forests increases by about 36% in the unlimited-fertilization case but decreases by 15% in the fertilization-capped case. Thus, the assumption regarding CO2-fertilization has the potential to alter the sign of terrestrial carbon uptake over India. Our model simulations also imply that the maximum potential terrestrial sequestration over India, under equilibrium conditions and the best-case scenario of unlimited CO2-fertilization, is only 18% of the 21st century SRES A2 scenario emissions from India. The limited uptake potential of the natural potential vegetation suggests that reduction of CO2 emissions and afforestation programs should be top priorities.
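A rough consistency check of the 18% figure, using only the numbers quoted above and assuming the 17 Gt-C equilibrium stock gain is the sequestration total to which the percentage refers (the implied emissions total is an inference, not a value stated in the abstract):

$\text{implied 21st century SRES A2 emissions from India} \approx \frac{17\ \text{Gt-C}}{0.18} \approx 94\ \text{Gt-C}$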
Abstract:
Spectral efficiency is a key characteristic of cellular communications systems, as it quantifies how well the scarce spectrum resource is utilized. It is influenced by the scheduling algorithm as well as by the signal and interference statistics, which, in turn, depend on the propagation characteristics. In this paper, we derive analytical expressions for the short-term and long-term channel-averaged spectral efficiencies of the round robin, greedy Max-SINR, and proportional fair schedulers, which are popular and cover a wide range of system performance and fairness trade-offs. A unified spectral efficiency analysis is developed to highlight the differences among these schedulers. The analysis differs from previous work in the literature in the following aspects: (i) it does not assume that the co-channel interferers are identically distributed, which they typically are not in realistic cellular layouts, (ii) it avoids the loose spectral efficiency bounds used in the literature, which only considered the worst-case and best-case locations of identical co-channel interferers, (iii) it explicitly includes the effect of multi-tier interferers in the cellular layout and uses a more accurate model for handling the total co-channel interference, and (iv) it captures the impact of using small modulation constellation sizes, which are typical of cellular standards. The analytical results are verified using extensive Monte Carlo simulations.
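A minimal Monte Carlo sketch of how the three schedulers trade off throughput and fairness, assuming simple Rayleigh-faded per-user SNRs and a rate cap standing in for a finite constellation; the user count, mean SNRs, and averaging constants below are illustrative, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)
n_users, n_slots = 4, 10000
mean_snr = np.array([10.0, 5.0, 2.0, 1.0])   # non-identically distributed users (illustrative)
rate_cap = np.log2(1 + 63.0)                 # cap mimicking a finite constellation size

def rate(snr):
    # Shannon rate, clipped to the largest rate the constellation could support.
    return np.minimum(np.log2(1.0 + snr), rate_cap)

totals = {"round robin": 0.0, "Max-SINR": 0.0, "proportional fair": 0.0}
pf_avg = np.full(n_users, 1e-3)              # PF running average throughputs
for t in range(n_slots):
    snr = mean_snr * rng.exponential(1.0, n_users)   # Rayleigh fading (exponential power)
    r = rate(snr)
    totals["round robin"] += r[t % n_users]          # serve users cyclically
    totals["Max-SINR"] += r.max()                    # serve the instantaneously best user
    k = int(np.argmax(r / pf_avg))                   # PF metric: rate over average throughput
    totals["proportional fair"] += r[k]
    served = np.zeros(n_users)
    served[k] = r[k]
    pf_avg = 0.999 * pf_avg + 0.001 * served         # exponentially weighted average update
for name, total in totals.items():
    print(name, total / n_slots, "bits/s/Hz")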
Abstract:
STOAT has been extensively used for the dynamic simulation of an activated sludge based wastewater treatment plant at the Titagarh Sewage Treatment Plant, near Kolkata, India. Some alternative schemes were suggested. Different schemes were compared for the removal of Total Suspended Solids (TSS), b-COD, ammonia, nitrates, etc. A combination of the IAWQ#1 module with the Takacs module gave the best results for the existing scenarios of the Titagarh Sewage Treatment Plant. The modified Bardenpho process was found most effective for reducing the mean b-COD level to as low as 31.4 mg/l, while the mean TSS level was as high as 100.98 mg/l, as compared to the mean levels of TSS (92.62 mg/l) and b-COD (92.0 mg/l) in the existing plant. Scheme 2 gave a better scenario for the mean TSS level, bringing it down to a mean value of 0.4 mg/l, but a higher mean value for the b-COD level at 54.89 mg/l. The final scheme could reduce the mean TSS level to 2.9 mg/l and the mean b-COD level to as low as 38.8 mg/l. It looks to be a technically viable scheme with respect to the overall effluent quality for the plant. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The distributed, low-feedback, timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ seconds of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases, in which the probability distribution of the number of nodes is either known a priori or is unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
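A minimal sketch of the selection mechanism described above, assuming a simple inverse-linear metric-to-timer map (a placeholder for the optimal mappings derived in the paper) and uniformly distributed metrics; all parameter values are illustrative:

import numpy as np

rng = np.random.default_rng(1)
delta = 0.05      # vulnerability window in seconds (illustrative)
t_max = 1.0       # maximum timer value (illustrative)

def timer(metric):
    # Monotonically decreasing metric-to-timer map: the highest metric gets the
    # smallest timer. This inverse-linear map is only a placeholder.
    return t_max * (1.0 - metric)

def selection_succeeds(metrics):
    timers = np.sort(timer(metrics))
    # The best node's timer expires first; selection fails if any other node's
    # timer expires within delta of it.
    return timers[1] - timers[0] > delta

n_trials = 100_000
successes = 0
for _ in range(n_trials):
    n_nodes = rng.integers(2, 8)              # number of contending nodes, unknown a priori
    metrics = rng.uniform(0.0, 1.0, n_nodes)  # local metrics, uniform on [0, 1]
    successes += selection_succeeds(metrics)
print("empirical success probability:", successes / n_trials)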
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
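A toy sketch of the second approach above: each sensor broadcasts one bit from a threshold (two-level) quantization of its reading, and a fusion center combines the bits. The threshold, the counting fusion rule, and the signal model are all illustrative assumptions, not the rules derived in the paper:

import numpy as np

rng = np.random.default_rng(2)
n_sensors = 10
threshold = 1.0          # per-sensor quantization threshold (illustrative)
k_out_of_n = 3           # fusion rule: declare intruder if >= k sensors vote 1 (illustrative)

def sensor_bits(readings):
    # Two-level (one-bit) quantization of each noisy sensor reading.
    return (readings > threshold).astype(int)

def fusion(bits):
    # Simple counting (k-out-of-n) rule at the fusion center.
    return int(bits.sum() >= k_out_of_n)

# Toy experiment: clutter readings are pure noise, while an intruder adds a
# local bump at the few sensors near its (unknown) location.
noise = rng.normal(0.0, 0.5, n_sensors)
clutter_decision = fusion(sensor_bits(noise))
intruder_signal = noise.copy()
intruder_signal[4:7] += 2.0          # only sensors near the intruder respond
intruder_decision = fusion(sensor_bits(intruder_signal))
print("clutter ->", clutter_decision, " intruder ->", intruder_decision)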
Abstract:
The function of a protein in a cell often involves coordinated interactions with one or several regulatory partners. It is thus imperative to characterize a protein both in isolation and in the context of its complex with an interacting partner. High-resolution structural information determined by X-ray crystallography and Nuclear Magnetic Resonance offers the best route to characterize protein complexes. These techniques, however, require highly purified and homogeneous protein samples at high concentration. This requirement often presents a major hurdle for structural studies. Here we present a strategy based on co-expression and co-purification to obtain recombinant multi-protein complexes in the quantity and concentration range that can enable hitherto intractable structural projects. The feasibility of this strategy was examined using the sigma factor/anti-sigma factor protein complexes from Mycobacterium tuberculosis. The approach was successful across a wide range of sigma factors and their cognate interacting partners. It thus appears likely that the analysis of these complexes, based on variations in expression constructs and procedures for the purification and characterization of these recombinant protein samples, would be widely applicable to other multi-protein systems. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those which can be represented by the binary operation in the group. We first consider the case when the source alphabet is the cyclic Abelian group $\mathbb{Z}_{p^r}$. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. In the case of a general Abelian group, an achievable rate region is presented that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source if one employs a homomorphic encoder. Finally, we present certain non-homomorphic encoders which are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
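A toy illustration of why homomorphic encoders pair naturally with computing the group operation over $\mathbb{Z}_{p^r}$: the encoded values combine exactly as the source symbols do. The scalar map below is an illustrative homomorphism, not the coding scheme analyzed in the paper:

# For sources over Z_{p^r}, a homomorphic encoder phi satisfies
# phi(x) + phi(y) = phi(x + y) (all arithmetic mod p^r), so the group operation
# of the sources can be recovered from the encoded values alone.
p, r = 3, 2
m = p ** r                      # alphabet Z_9

def phi(x, a=4):
    # Illustrative homomorphism of Z_9: multiplication by a fixed constant
    # (gcd(4, 9) = 1, so this is in fact an automorphism).
    return (a * x) % m

for x in range(m):
    for y in range(m):
        assert (phi(x) + phi(y)) % m == phi((x + y) % m)
print("phi is a homomorphism of Z_%d: encoded sums decode to the source sum" % m)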
Abstract:
A scheme to apply the rate-1 real orthogonal designs (RODs) in relay networks with single real-symbol decodability of the symbols at the destination, for any arbitrary number of relays, is proposed. In the case where the relays do not have any information about the channel gains from the source to themselves, the best known distributed space time block codes (DSTBCs) for k relays with single real-symbol decodability offer an overall rate of complex symbols per channel use. The scheme proposed in this paper offers an overall rate of 2/(2+k) complex symbols per channel use, which is independent of the number of relays. Furthermore, in the scenario where the relays have partial channel information in the form of channel phase knowledge, the best known DSTBCs with single real-symbol decodability offer an overall rate of 1/3 complex symbols per channel use. In this paper, making use of RODs, a scheme which achieves the same overall rate of 1/3 complex symbols per channel use, but with a decoding delay that is 50 percent of that of the best known DSTBCs, is presented. Simulation results of the symbol error rate performance for 10 relays, which show the superiority of the proposed scheme over the best known DSTBC for 10 relays with single real-symbol decodability, are provided.
Abstract:
In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput in each AP is determined by the number of other users as well as by the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic amongst all the available APs, based on the throughput they obtain and the price charged. Thus, they are involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme, which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) could charge prices greater than that of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in terms of throughput as well as profit to the ISP.
Abstract:
The compatibility of the fast-tachocline scenario with a flux-transport dynamo model is explored. We employ a flux-transport dynamo model coupled with simple feedback formulae relating the thickness of the tachocline to the amplitude of the magnetic field or to the Maxwell stress. The dynamo model is found to be robust against the nonlinearity introduced by this simplified fast-tachocline mechanism. Solar-like butterfly diagrams are found to persist and, even without any parameter fitting, the overall thickness of the tachocline is well within the range admitted by helioseismic constraints. In the most realistic case of a time- and latitude-dependent tachocline thickness linked to the value of the Maxwell stress, both the thickness and its latitudinal dependence are in excellent agreement with seismic results. In nonparametric models, cycle-related temporal variations in tachocline thickness are somewhat larger than admitted by helioseismic constraints; we find, however, that introducing a further parameter into our feedback formula readily allows further fine-tuning of the thickness variations.
Abstract:
The design of modulation schemes for the physical-layer network-coded two-way relaying scenario is considered, with the protocol employing two phases: a multiple access (MA) phase and a broadcast (BC) phase. It was observed by Koike-Akino et al. that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of multiple access interference which occurs at the relay during the MA phase. In other words, the set of all possible channel realizations (the complex plane) is quantized into a finite number of regions, with a specific network coding map giving the best performance in a particular region. We obtain such a quantization analytically for the case when M-PSK (for M any power of 2) is the signal set used during the MA phase. We show that the complex plane can be classified into two regions: a region in which any network coding map which satisfies the so-called exclusive law gives the same best performance, and a region in which the choice of the network coding map affects the performance, which is further quantized based on the choice of the network coding map that optimizes the performance. The quantization thus obtained analytically leads to the same quantization as the one obtained using computer search for the 4-PSK signal set by Koike-Akino et al., for the specific value of M = 4.
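A small sketch of the exclusive law mentioned above, checked exhaustively for the bit-wise XOR map on integer labels of 4-PSK points (the map and the labeling are illustrative; the paper determines which maps perform best in which channel regions):

from itertools import product

M = 4   # 4-PSK; symbols labeled 0..3 (illustrative labeling)

def xor_map(x, y):
    # Candidate network coding map applied at the relay: bit-wise XOR of labels.
    return x ^ y

def satisfies_exclusive_law(f, M):
    # Exclusive law: f(x, y) != f(x', y) for x != x', and f(x, y) != f(x, y') for y != y'.
    for x, xp, y in product(range(M), repeat=3):
        if x != xp and f(x, y) == f(xp, y):
            return False
    for x, y, yp in product(range(M), repeat=3):
        if y != yp and f(x, y) == f(x, yp):
            return False
    return True

print("XOR map satisfies the exclusive law:", satisfies_exclusive_law(xor_map, M))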
Abstract:
We consider the issue of the top quark Yukawa coupling measurement in a model-independent and general case, with the inclusion of CP violation in the coupling. Arguably the best process to study this coupling is the associated production of the Higgs boson along with a $t\bar{t}$ pair in a machine like the International Linear Collider (ILC). While detailed analyses of the sensitivity of the measurement, assuming a Standard Model (SM)-like coupling, are available in the context of the ILC and conclude that the coupling could be pinned down to about a 10% level with modest luminosity, our investigations show that the scenario could be different in the case of a more general coupling. The modified Lorentz structure, resulting in a changed functional dependence of the cross section on the coupling, along with the difference in the cross section itself, leads to considerable deviation in the sensitivity. Our studies of the ILC with center-of-mass energies of 500 GeV, 800 GeV, and 1000 GeV show that moderate CP mixing in the Higgs sector could change the sensitivity to about 20%, while it could be worsened to 75% in cases which could accommodate more dramatic changes in the coupling. Detailed considerations of the decay distributions point to the need for a relook at the analysis strategy followed for the case of the SM, such as for a model-independent analysis of the top quark Yukawa coupling measurement. This study strongly suggests that a joint analysis of the CP properties and the Yukawa coupling measurement would be the way forward at the ILC, and that caution must be exercised in the measurement of the Yukawa couplings and the conclusions drawn from it.
Abstract:
Layered transition metal dichalcogenides (TMDs), such as MoS2, are candidate materials for next-generation 2-D electronic and optoelectronic devices. The ability to grow uniform, crystalline, atomic layers over large areas is the key to developing such technology. We report a chemical vapor deposition (CVD) technique which yields n-layered MoS2 on a variety of substrates. A generic approach suitable to all TMDs, involving thermodynamic modeling to identify the appropriate CVD process window and quantitative control of the vapor-phase supersaturation, is demonstrated. All reactant sources in our method are outside the growth chamber, a significant improvement over vapor-based methods for atomic layers reported to date. The as-deposited layers are p-type, due to Mo deficiency, with field-effect and Hall hole mobilities of up to 2.4 cm^2 V^-1 s^-1 and 44 cm^2 V^-1 s^-1, respectively. These are among the best reported yet for CVD MoS2.
Abstract:
Practical orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE), exploit multi-user diversity using very limited feedback. The best-m feedback scheme is one such limited feedback scheme, in which users report only the gains of their m best subchannels (SCs) and their indices. While the scheme has been extensively studied and adopted in standards such as LTE, an analysis of its throughput for the practically important case in which the SCs are correlated has received less attention. We derive new closed-form expressions for the throughput when the SC gains of a user are uniformly correlated. We analyze the performance of the greedy but unfair frequency-domain scheduler and the fair round-robin scheduler for the general case in which the users see statistically non-identical SCs. An asymptotic analysis is then developed to gain further insights. The analysis and extensive numerical results bring out how correlation reduces throughput.
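A minimal simulation sketch of the best-m feedback mechanism described above, using equicorrelated Rayleigh subchannel gains and a greedy per-subchannel scheduler; the user count, correlation coefficient, and feedback size below are illustrative assumptions, not parameters from the paper:

import numpy as np

rng = np.random.default_rng(3)
n_users, n_sc, m, rho = 4, 12, 3, 0.5   # users, subchannels, feedback size, correlation (illustrative)
n_slots = 5000

def equicorrelated_gains():
    # Complex Gaussian SC coefficients with pairwise correlation rho across a
    # user's SCs (toy equicorrelation model: shared term plus independent term).
    common = (rng.normal(size=(n_users, 1)) + 1j * rng.normal(size=(n_users, 1))) / np.sqrt(2)
    indep = (rng.normal(size=(n_users, n_sc)) + 1j * rng.normal(size=(n_users, n_sc))) / np.sqrt(2)
    h = np.sqrt(rho) * common + np.sqrt(1 - rho) * indep
    return np.abs(h) ** 2

throughput = 0.0
for _ in range(n_slots):
    g = equicorrelated_gains()
    # Best-m feedback: each user reports only its m strongest SCs and their gains.
    reported = np.zeros_like(g)
    for u in range(n_users):
        top = np.argsort(g[u])[-m:]
        reported[u, top] = g[u, top]
    # Greedy scheduler: on each SC, serve the user with the largest reported gain (if any).
    for sc in range(n_sc):
        best = int(np.argmax(reported[:, sc]))
        if reported[best, sc] > 0:
            throughput += np.log2(1 + reported[best, sc])
print("average sum throughput per slot (bits/s/Hz):", throughput / n_slots)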
Abstract:
Rechargeable batteries have been the torchbearer electrochemical energy storage devices, powering everything from small-scale electronic gadgets to large-scale grid storage. Complementing the lithium-ion technology, sodium-ion batteries have emerged as viable economic alternatives in applications unrestricted by volume/weight. What is the best performance limit for new-age Na-ion batteries? This mission has unravelled suites of oxides and polyanionic positive insertion (cathode) compounds in the quest to realize high energy density. Economically and ecologically, iron-based cathodes are ideal for mass-scale dissemination of sodium batteries. This Perspective captures the progress of Fe-containing earth-abundant sodium battery cathodes with the two best examples: (i) an oxide system delivering the highest capacity (~200 mA h/g) and (ii) a polyanionic system showing the highest redox potential (3.8 V). Both deliver very high energy density with commercial promise for large-scale applications. Here, the structural and electrochemical properties of these two cathodes are compared and contrasted to describe two alternate strategies to achieve the same goal, i.e., improved energy density in Fe-based sodium battery cathodes.