48 results for KYOTO protocol
at Indian Institute of Science - Bangalore - India
Abstract:
The Clean Development Mechanism (CDM), defined under Article 12 of the Kyoto Protocol, allows Afforestation and Reforestation (A/R) projects as mitigation activities to offset CO2 in the atmosphere whilst simultaneously seeking to ensure sustainable development for the host country. The Kyoto Protocol was ratified by the Government of India in August 2002, and one of India's objectives in acceding to the Protocol was to fulfil the prerequisites for implementation of projects under the CDM in accordance with national sustainable priorities. The objective of this paper is to assess the effectiveness of using large-scale forestry projects under the CDM in achieving its twin goals, using Karnataka State as a case study. The Generalized Comprehensive Mitigation Assessment Process (GCOMAP) model is used to observe the effect of varying carbon prices on the land available for A/R projects. The model is coupled with outputs from the Lund-Potsdam-Jena (LPJ) Dynamic Global Vegetation Model to incorporate the impacts of temperature rise due to climate change under the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) A2, A1B and B1. With rising temperatures and CO2, vegetation productivity is increased under the A2 and A1B scenarios and reduced under B1. Results indicate that higher carbon price paths produce higher gains in carbon credits and accelerate the rate at which available land reaches maximum capacity, thus acting as either an incentive or a disincentive for landowners to commit their lands to forestry mitigation projects. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Climate change is one of the most important global environmental challenges, with implications for food production, water supply, health, energy, etc. Addressing climate change requires a good scientific understanding as well as coordinated action at the national and global levels. This paper addresses these challenges. Historically, the responsibility for the increase in greenhouse gas emissions lies largely with the industrialized world, though the developing countries are likely to be the source of an increasing proportion of future emissions. The projected climate change under various scenarios is likely to have implications for food production, water supply, coastal settlements, forest ecosystems, health, energy security, etc. The adaptive capacity of communities likely to be impacted by climate change is low in developing countries. The efforts made under the UNFCCC and the Kyoto Protocol provisions are clearly inadequate to address the climate change challenge. The most effective way to address climate change is to adopt a sustainable development pathway by shifting to environmentally sustainable technologies and promoting energy efficiency, renewable energy, forest conservation, reforestation, water conservation, etc. The issue of highest importance to developing countries is reducing the vulnerability of their natural and socio-economic systems to the projected climate change. India and other developing countries will face the challenge of promoting mitigation and adaptation strategies, bearing the cost of such an effort, and addressing its implications for economic development.
Abstract:
Semi-rigid molecular tweezers 1, 3 and 4 bind picric acid more than tenfold more strongly in tetrachloromethane than in chloroform.
Abstract:
Protocols for secure archival storage are becoming increasingly important as the use of digital storage for sensitive documents gains wider practice. Wong et al. [8] combined verifiable secret sharing with proactive secret sharing without reconstruction and proposed a verifiable secret redistribution protocol for long-term storage. However, their protocol requires that each of the receivers be honest during redistribution. We proposed [3] an extension to their protocol wherein we relaxed the requirement that all recipients be honest to the condition that only a simple majority among the recipients need be honest during the (re)distribution processes. Further, both of these protocols make use of Feldman's approach for achieving integrity during the (re)distribution processes. In this paper, we present a revised version of our earlier protocol and its adaptation to incorporate Pedersen's approach instead of Feldman's, thereby achieving information-theoretic secrecy while retaining integrity guarantees.
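The switch from Feldman-style to Pedersen-style commitments mentioned above is what buys information-theoretic secrecy: each committed polynomial coefficient is paired with a random blinding coefficient, yet every recipient can still check its share against the public commitments. The sketch below illustrates this idea with toy parameters; the primes, generators, and protocol details are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch of Pedersen-style verifiable secret sharing with toy parameters.
import random

q = 1019                     # prime order of the exponent field (toy size)
p = 2039                     # p = 2q + 1, so Z_p* has a subgroup of prime order q
g, h = 4, 9                  # elements of the order-q subgroup; in a real system
                             # log_g(h) must be unknown to the dealer

def sample_poly(constant, degree):
    """Random polynomial over Z_q with the given constant term."""
    return [constant] + [random.randrange(q) for _ in range(degree)]

def evaluate(poly, x):
    return sum(c * pow(x, j, q) for j, c in enumerate(poly)) % q

def deal(secret, threshold, n):
    f = sample_poly(secret, threshold - 1)                # share polynomial
    r = sample_poly(random.randrange(q), threshold - 1)   # blinding polynomial
    commitments = [pow(g, a, p) * pow(h, b, p) % p for a, b in zip(f, r)]
    shares = {i: (evaluate(f, i), evaluate(r, i)) for i in range(1, n + 1)}
    return commitments, shares

def verify(i, share, commitments):
    s_i, r_i = share
    lhs = pow(g, s_i, p) * pow(h, r_i, p) % p
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return lhs == rhs

commitments, shares = deal(secret=123, threshold=3, n=5)
assert all(verify(i, sh, commitments) for i, sh in shares.items())
```

Because each commitment g^a_j * h^b_j is uniformly distributed for unknown b_j, the public values reveal nothing about the secret, which is the secrecy property contrasted with Feldman's scheme above.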
Abstract:
We describe here a rapid, energy-efficient, green and economically scalable room-temperature protocol for the synthesis of silver nanoparticles. Tannic acid, a polyphenolic compound derived from plant extracts, is used as the reducing agent. Silver nanoparticles with mean sizes ranging from 3.3 to 22.1 nm were synthesized at room temperature by the addition of silver nitrate to a tannic acid solution maintained at an alkaline pH. The mean size was tuned by varying the molar ratio of tannic acid to silver nitrate. We also present proof-of-concept results demonstrating the protocol's suitability for room-temperature continuous flow processing.
Abstract:
In this paper we have proposed and implemented a joint Medium Access Control (MAC)-cum-routing scheme for environmental data-gathering sensor networks. The design principle is to maximize node battery lifetime, traded against a network that is capable of tolerating (i) a known percentage of combined packet losses due to packet collisions, network synchronization mismatch and channel impairments, and (ii) a significant end-to-end delay of the order of a few seconds. We have achieved this with a loosely synchronized network of sensor nodes that implement a Slotted-Aloha MAC state machine together with route information. The scheme has given encouraging results in terms of energy savings compared to other popular implementations: the overall packet loss is about 12%, and the battery lifetime increase compared to B-MAC varies from a minimum of 30% to about 90% depending on the duty cycle.
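Since the reported lifetime gains hinge on the radio duty cycle, a back-of-the-envelope estimate of how duty cycle dominates average current draw may help. All current-draw and capacity figures in the sketch below are illustrative assumptions, not values from the paper.

```python
# Rough estimate of sensor-node battery lifetime as a function of duty cycle.
def lifetime_hours(duty_cycle, active_mA=20.0, sleep_mA=0.02, battery_mAh=2400.0):
    """Average current = duty_cycle * active + (1 - duty_cycle) * sleep."""
    avg_mA = duty_cycle * active_mA + (1.0 - duty_cycle) * sleep_mA
    return battery_mAh / avg_mA

for dc in (0.01, 0.02, 0.05, 0.10):
    print(f"duty cycle {dc:4.0%}: ~{lifetime_hours(dc):7.0f} h")
```

Even this crude model shows why halving the duty cycle nearly doubles lifetime once the active current dominates, which is the regime in which MAC-level savings such as those reported above matter most.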
Abstract:
A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol suitable for transmission of information from N users to a single destination in a wireless fading channel is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user takes its turn broadcasting its data to all other users and the destination in an orthogonal fashion in time. In the cooperation phase, each user transmits a linear function of what it received from all other users as well as its own data. In contrast to the orthogonal extension of cooperative relay protocols to cooperative multiple access channels, wherein at any point of time only one user is considered a source and all the other users behave as relays and do not transmit their own data, the NCMA protocol relaxes the orthogonality built into those protocols and hence allows for a more spectrally efficient usage of resources. Code design criteria for achieving full diversity of N in the NCMA protocol are derived using pairwise error probability (PEP) analysis, and it is shown that this can be achieved with a minimum total time duration of 2N - 1 channel uses. Explicit construction of full-diversity codes is then provided for an arbitrary number of users. Since the Maximum Likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for our setup and a set of necessary and sufficient conditions is also obtained.
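A schematic view of the two phases described above may help: each user first gets its own broadcast slot, then each user transmits a linear function of everything it has heard plus its own symbol. The combining matrix A below is an arbitrary example, not the paper's code construction; note that this naive schedule uses 2N channel uses, whereas the paper shows full diversity is achievable within 2N - 1.

```python
# Toy time-line of the NCMA broadcast and cooperation phases for N = 3 users.
import numpy as np

N = 3
x = np.array([1 + 1j, -1 + 0.5j, 0.5 - 1j])   # users' information symbols

# Broadcast phase: slot k carries only user k's symbol (orthogonal in time).
broadcast = np.diag(x)                         # rows = users, columns = slots

# Cooperation phase: user k sends a linear combination of all symbols it knows.
A = np.array([[1, 1, 1],
              [1, -1, 1],
              [1, 1, -1]], dtype=complex)      # example combining matrix
cooperation = A @ x                            # one transmission per user

print("broadcast-phase slots:\n", broadcast)
print("cooperation-phase transmissions:", cooperation)
```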
Abstract:
In this two-part series of papers, a generalized non-orthogonal amplify-and-forward (GNAF) protocol which generalizes several known cooperative diversity protocols is proposed. Transmission in the GNAF protocol comprises two phases: the broadcast phase and the cooperation phase. In the broadcast phase, the source broadcasts its information to the relays as well as the destination. In the cooperation phase, the source and the relays together transmit a space-time code in a distributed fashion. The GNAF protocol relaxes the constraints imposed by the protocol of Jing and Hassibi on the code structure. In Part I of this paper, a code design criterion is obtained and it is shown that the GNAF protocol is both delay efficient and coding gain efficient. Moreover, the GNAF protocol enables the use of sphere decoders at the destination with a non-exponential Maximum Likelihood (ML) decoding complexity. In Part II, several low-decoding-complexity code constructions are studied and a lower bound on the Diversity-Multiplexing Gain tradeoff of the GNAF protocol is obtained.
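As a concrete, if generic, illustration of a space-time code transmitted in a distributed fashion during the cooperation phase, the sketch below uses the textbook Alamouti structure across two cooperating terminals (say, the source and one relay). It is not the code construction or decoding framework of the paper, merely the standard example of the distributed space-time idea.

```python
# Distributed Alamouti example: two terminals jointly emit a 2x2 codeword.
import numpy as np

s1, s2 = 1 + 1j, -1 + 1j                  # two information symbols
codeword = np.array([[s1, -np.conj(s2)],  # row = transmitting terminal
                     [s2,  np.conj(s1)]]) # column = channel use
h = np.array([0.8 + 0.3j, -0.2 + 0.9j])   # flat-fading gains to the destination

received = h @ codeword                   # noiseless received signal over 2 slots

# Alamouti combining decouples s1 and s2 at the destination:
gain = np.linalg.norm(h) ** 2
s1_hat = (np.conj(h[0]) * received[0] + h[1] * np.conj(received[1])) / gain
s2_hat = (np.conj(h[1]) * received[0] - h[0] * np.conj(received[1])) / gain
print(np.allclose([s1_hat, s2_hat], [s1, s2]))   # True in the noiseless case
```

The linear structure of such codes is what makes sphere decoding, rather than exhaustive ML search, applicable at the destination.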
Abstract:
In many applications of wireless ad hoc networks, wireless nodes are owned by rational and intelligent users. In this paper, we call nodes selfish if they are owned by independent users and their only objective is to maximize their individual goals. In such situations, it may not be possible to use the existing protocols for wireless ad hoc networks, as these protocols assume that nodes follow the prescribed protocol without deviation. Stimulating cooperation among these nodes is an interesting and challenging problem. Providing incentives and pricing the transactions are well-known approaches to stimulate cooperation. In this paper, we present a game-theoretic framework for a truthful broadcast protocol and a strategy-proof pricing mechanism called the Immediate Predecessor Node Pricing Mechanism (IPNPM). Strategy-proof here means that truthful revelation of cost is a weakly dominant strategy (in game-theoretic terms) for each node. In order to steer our mechanism-design approach towards practical implementation, we compute the payments to nodes using a distributed algorithm. We also propose a new protocol for broadcast in wireless ad hoc networks with selfish nodes based on IPNPM. The features of the proposed broadcast protocol are reliability and a significantly reduced number of packet forwards compared to the number of network nodes, which in turn leads to less system-wide power consumption to broadcast a single packet. Our simulation results show the efficacy of the proposed broadcast protocol.
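The notion of strategy-proofness used above (truthful cost revelation as a weakly dominant strategy) is the same one behind the classic Vickrey second-price auction sketched below. This is a generic textbook illustration of the property, not the paper's IPNPM payment rule.

```python
# Vickrey-style reverse auction: the cheapest forwarder wins but is paid the
# second-lowest claimed cost, so its payment never depends on its own bid.
def vickrey_reverse_auction(bids):
    """bids: dict mapping node id -> claimed forwarding cost.

    Returns (winner, payment). Because the payment is independent of the
    winner's own bid, misreporting the true cost can never raise a node's
    utility, making truthful bidding weakly dominant.
    """
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ordered[0]
    payment = ordered[1][1]          # second-lowest claimed cost
    return winner, payment

print(vickrey_reverse_auction({"a": 3.0, "b": 5.5, "c": 4.2}))  # ('a', 4.2)
```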
Abstract:
Existing protocols for archival systems make use of verifiability of shares in conjunction with a proactive secret sharing scheme to achieve high availability and long-term confidentiality, besides data integrity. In this paper, we extend an existing protocol (Wong et al. [9]) to take care of more realistic situations. For example, it is assumed in the protocol of Wong et al. that the recipients of the secret shares are all trustworthy; we relax this by requiring only that a majority is trustworthy.
Abstract:
We consider the slotted ALOHA protocol on a channel with a capture effect. There are M
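The abstract is truncated at this point. For illustration only, a short Monte Carlo sketch of slotted ALOHA with a simple power-capture rule is given below; the number of nodes M, the transmission probability, the fading model and the capture threshold are all assumed values, not the paper's model.

```python
# Slotted ALOHA with capture: a slot succeeds if exactly one packet is sent,
# or if the strongest packet's received power exceeds the total interference
# by a capture threshold z. Packet powers are drawn as unit-mean exponentials
# (Rayleigh fading).
import random

def throughput(M=20, p=0.05, z=4.0, slots=100_000):
    successes = 0
    for _ in range(slots):
        powers = [random.expovariate(1.0) for _ in range(M) if random.random() < p]
        if not powers:
            continue                      # idle slot
        strongest = max(powers)
        interference = sum(powers) - strongest
        if interference == 0 or strongest > z * interference:
            successes += 1
    return successes / slots

print(f"throughput ~ {throughput():.3f} packets/slot")
```

With capture, throughput exceeds the classical no-capture slotted ALOHA ceiling of 1/e, which is the usual motivation for studying this channel model.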
Abstract:
An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases, while the first phase has been considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of a DCS and for the specification of the protocol requirements. These two formal specification techniques form the basis of the development of a simple and straightforward procedure for the design of the protocol. The applicability of the above design procedure has been illustrated by considering the example of a computing system encountered on board a spacecraft. A Petri-net-based approach has been adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
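For readers unfamiliar with the modelling formalism named above, the toy sketch below shows the basic Petri-net mechanics: tokens sit in places, and a transition fires only when all of its input places are marked. The two-transition request/grant net is an invented example, not the spacecraft protocol model from the paper.

```python
# Minimal Petri-net interpreter: a marking, a transition table, and firing rules.
marking = {"ready": 1, "request_sent": 0, "grant_received": 0}

transitions = {
    "send_request": ({"ready": 1}, {"request_sent": 1}),
    "receive_grant": ({"request_sent": 1}, {"grant_received": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    if not enabled(name):
        raise RuntimeError(f"transition {name!r} is not enabled")
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n                   # consume input tokens
    for p, n in outputs.items():
        marking[p] += n                   # produce output tokens

fire("send_request")
fire("receive_grant")
print(marking)   # {'ready': 0, 'request_sent': 0, 'grant_received': 1}
```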
Abstract:
A novel universal approach to understanding self-deflagration in solids has been attempted by using a basic thermodynamic equation of partial differentiation, in which the burning rate depends on the initial temperature and pressure of the system. Self-deflagrating solids are rare and are reported only in a few compounds such as ammonium perchlorate (AP), polystyrene peroxide and tetrazole. This approach has led us to understand the unique characteristics of AP, viz. the existence of a low-pressure deflagration limit (LPL, 20 atm), hitherto not understood sufficiently. The analysis infers that the overall surface activation energy comprises two components governed by the condensed-phase and gas-phase processes. The most attractive feature of the model is the identification of a new subcritical regime I' below the LPL where AP does not burn. The model is aptly supported by thermochemical computations and temperature-profile analyses of the combustion train. The thermodynamic model is further corroborated by the kinetic analysis of the high-pressure (1-30 atm) DTA thermograms, which affords distinct empirical decomposition rate laws in regimes I' and I (20-60 atm). Using the Fourier-Kirchhoff one-dimensional heat transfer differential equation, the phase transition thickness and the melt-layer thickness have been computed, and these conform to the experimental data.
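The Fourier-Kirchhoff equation invoked above is not written out in the abstract; for reference, a standard one-dimensional form for heat conduction in a condensed phase regressing at a steady linear burning rate r (a textbook form, not necessarily the authors' exact formulation) is

\[ \frac{\partial T}{\partial t} \;=\; \alpha\,\frac{\partial^{2} T}{\partial x^{2}} \;+\; r\,\frac{\partial T}{\partial x}, \qquad \alpha = \frac{k}{\rho c_{p}}, \]

where T is the temperature and x is the distance into the solid measured from the regressing surface; dropping the regression term recovers the ordinary unsteady heat conduction equation.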
Abstract:
We consider the problem of optimally scheduling a processor executing a multilayer protocol in an intelligent Network Interface Controller (NIC). In particular, we assume a typical LAN environment with a class 4 transport service, a connectionless network service, and a class 1 link-level protocol. We develop a queuing model for the problem. In the most general case this becomes a cyclic queuing network in which some queues have dedicated servers and the others have a common schedulable server. We use sample-path arguments and Markov decision theory to determine optimal service schedules. The optimal throughputs are compared with those obtained with simple policies. The optimal policy yields up to 25% improvement in some cases; in some other cases, it does only slightly better than much simpler policies.
Abstract:
In this paper we present a cache coherence protocol for multistage interconnection network (MIN)-based multiprocessors with two distinct private caches: private-blocks caches (PCache) containing blocks private to a process, and shared-blocks caches (SCache) containing data accessible by all processes. The architecture is extended by a coherence control bus connecting all shared-blocks cache controllers. Timing problems due to variable transit delays through the MIN are dealt with by introducing transient states in the proposed cache coherence protocol. The impact of the coherence protocol on system performance is evaluated through a three-phase performance study. Assuming homogeneity of all nodes, a single-node queuing model (phase 3) is developed to analyze system performance. This model is solved for processor and coherence bus utilizations using the mean value analysis (MVA) technique, with shared-block steady-state probabilities (phase 1) and communication delays (phase 2) as input parameters. The performance of our system is compared to that of a system with an equivalent-sized unified cache and to that of a multiprocessor implementing a directory-based coherence protocol. System performance measures are verified through simulation.
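A minimal sketch of why transient states are needed follows: while a coherence request travels through the MIN and its acknowledgement returns after a variable delay, the controller parks the cache line in a transient state instead of assuming the transaction has completed. The states, events and handler behaviour below are a generic illustration, not the PCache/SCache protocol of the paper.

```python
# Generic coherence controller fragment with a transient state for in-flight
# write requests.
from enum import Enum, auto

class State(Enum):
    INVALID = auto()
    SHARED = auto()
    MODIFIED = auto()
    TRANSIENT_M = auto()   # exclusive access requested, awaiting acks over the MIN

class CacheLine:
    def __init__(self):
        self.state = State.INVALID

    def cpu_write(self, send_request):
        if self.state is State.MODIFIED:
            return                           # already exclusive, write hits locally
        send_request("request-exclusive")    # sent over the coherence bus / MIN
        self.state = State.TRANSIENT_M       # hold here until the reply arrives

    def ack_received(self):
        if self.state is State.TRANSIENT_M:
            self.state = State.MODIFIED      # remote copies invalidated, proceed

    def remote_invalidate(self):
        if self.state in (State.SHARED, State.MODIFIED):
            self.state = State.INVALID
        # a line in TRANSIENT_M would defer or NACK the invalidation in a real protocol

line = CacheLine()
line.cpu_write(send_request=print)   # prints "request-exclusive"; line goes transient
line.ack_received()
print(line.state)                    # State.MODIFIED
```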