996 results for adaptation plan
Abstract:
Climate change is projected to shift forest types, causing irreversible damage to forests by rendering several species extinct and potentially affecting the livelihoods of local communities and the economy. Approximately 47% and 42% of tropical dry deciduous grids are projected to undergo shifts under the A2 and B2 SRES scenarios, respectively, as opposed to less than 16% of grids comprising tropical wet evergreen forests. Similarly, the tropical thorny scrub forest is projected to undergo shifts in the majority of forested grids under both the A2 scenario (more than 80% of grids) and the B2 scenario (50% of grids). Thus, forest managers and policymakers need to adapt to the ecological as well as the socio-economic impacts of climate change. This requires the formulation of effective forest management policies and practices, incorporating climate concerns into long-term forest policy and management plans. India has formulated a large number of innovative and progressive forest policies, but a mechanism to ensure their effective implementation is needed. Additional policies and practices may be needed to address the impacts of climate change. This paper discusses an approach to, and the steps involved in, the development of an adaptation framework, as well as the policies, strategies and practices needed to mainstream adaptation to projected climate change. Further, the existing barriers that may affect proactive adaptation planning, given the scale, accuracy and uncertainty associated with assessing climate change impacts, are presented.
Abstract:
Due to large-scale afforestation programs and forest conservation legislation, India's total forest area seems to have stabilized or even increased. In spite of such efforts, forest fragmentation and degradation continue, with forests subject to increased pressure from anthropogenic factors. Such fragmentation and degradation are causing forest cover to change from very dense to moderately dense and open forest: 253 km² of very dense forest were converted to moderately dense forest, open forest, scrub and non-forest during 2005-2007. Similarly, 4,120 km² of moderately dense forest degraded to open forest, scrub and non-forest, resulting in a net loss of 936 km² of moderately dense forest. Additionally, 4,335 km² of open forest degraded to scrub and non-forest. Coupled with anthropogenic pressure, climate change is likely to be an added stress on forests. Forest-sector programs and policies are major factors determining the status of forests and, potentially, their resilience to the projected impacts of climate change. An attempt is made to review forest policies and programs and their implications for the status of forests and for the vulnerability of forests to projected climate change. The study concludes that forest conservation and development policies and programs need to be oriented to incorporate climate change impacts, vulnerability and adaptation.
Abstract:
We examine the potential for adaptation to climate change in Indian forests and derive the macroeconomic implications of forest impacts and adaptation in India. The study is conducted by integrating results from the dynamic global vegetation model IBIS and the computable general equilibrium model GRACE-IN, which estimates macroeconomic implications for six zones of India. Comparing a reference scenario without climate change with a climate impact scenario based on the IPCC A2 scenario, we find major variations in the pattern of change across zones. Biomass stock increases in all zones but the Central zone. The increase in biomass growth is smaller, and growth declines in one more zone, the South zone, despite a higher stock. In the four zones with increases in biomass growth, harvest increases by only about one-third of the change in biomass growth. This is due to two market effects of increased biomass growth. One is that, other things being equal, an increase in biomass growth encourages more harvest. The other is that more harvest raises the supply of timber, which lowers market prices. As a result, the rent on forested land also decreases. The lower prices and rent discourage further harvest, even though they may induce higher demand, which increases the pressure on harvest. In a less perfect world than the model describes, these two effects may contribute to an increased risk of deforestation under higher biomass growth. Furthermore, higher harvest demands more labor and capital input in the forestry sector. Given a fixed total supply of labor and capital, this increases the cost of production in all other sectors, although only very slightly. Forestry-dependent communities with declining biomass growth may, however, experience local unemployment as a result.
Abstract:
Symmetry-adapted linear combinations of valence-bond (VB) diagrams are constructed for arbitrary point groups and total spin S using diagrammatic VB methods. VB diagrams are related uniquely to invariant subspaces whose size reflects the number of group elements; their nonorthogonality leads to sparser matrices and is fully incorporated into a binary integer representation. Symmetry-adapted linear combinations of VB diagrams are constructed for the 1764 singlets of a half-filled cube of eight sites, the 2.8 million π-electron singlets of anthracene, and for illustrative S = 0 systems.
Abstract:
The use of delayed coefficient adaptation in the least mean square (LMS) algorithm has enabled the design of pipelined architectures for real-time transversal adaptive filtering. However, the convergence speed of this delayed LMS (DLMS) algorithm is degraded compared with that of the standard LMS algorithm, and worsens as the adaptation delay increases. Existing pipelined DLMS architectures have large adaptation delays and hence degraded convergence speed. In this paper, we first present a pipelined DLMS architecture with minimal adaptation delay for any given sampling rate. The architecture is synthesized by applying a number of function-preserving transformations to the signal flow graph representation of the DLMS algorithm. With the use of carry-save arithmetic, the pipelined architecture can support high sampling rates, limited only by the delay of a full adder and a 2-to-1 multiplexer. In the second part of the paper, we extend the synthesis methodology of the first part to synthesize pipelined DLMS architectures whose power dissipation meets a specified budget. This low-power architecture exploits the parallelism in the DLMS algorithm to meet the required computational throughput, and exhibits a novel tradeoff between algorithmic performance (convergence speed) and power dissipation.
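The DLMS recursion differs from standard LMS only in that the coefficient update uses an error computed several samples earlier. A minimal sketch (function and variable names are illustrative, not from the paper, and this models only the algorithm, not the pipelined hardware):

```python
import numpy as np

def lms(x, d, taps, mu, delay=0):
    """Adaptive transversal filter: standard LMS when delay=0,
    delayed LMS (DLMS) when delay=D, where the coefficient update
    applies an error/input pair from D samples earlier."""
    w = np.zeros(taps)
    errs = []
    pending = []  # (error, input-vector) pairs awaiting the delayed update
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u                  # a-priori estimation error
        errs.append(e)
        pending.append((e, u))
        if len(pending) > delay:          # reduces to standard LMS for delay=0
            e_old, u_old = pending.pop(0)
            w += mu * e_old * u_old
    return w, np.array(errs)
```

Running both variants on the same system-identification task shows the convergence degradation the abstract describes: the larger the delay, the slower the error decays (and the smaller the step size must be for stability).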
Abstract:
The heterotrophic bacterium Paenibacillus polymyxa was adapted to pyrite, chalcopyrite, galena and sphalerite minerals by repeatedly subculturing it in the presence of the mineral until its growth characteristics became similar to growth in the mineral's absence. The unadapted and adapted bacterial surfaces were chemically characterized by zeta-potential, contact-angle, adherence-to-hydrocarbon and FT-IR spectroscopic studies. The surface free energies of the bacteria were calculated following the equation-of-state and surface-tension-component approaches. The aim of the present paper is to understand the changes in the surface chemical properties of the bacteria during adaptation to sulfide minerals and the projected consequences for bioflotation and bioflocculation processes. The mineral-adapted cells became more hydrophilic than the unadapted cells. There are no significant changes in the surface charge of the bacteria before and after adaptation, and all the bacteria exhibit an iso-electric point below pH 2.5. The contact angles are observed to be more reliable for hydrophobicity assessment than adherence to hydrocarbons. The Lifshitz–van der Waals/acid–base approach to calculating surface free energy is found to be relevant for mineral–bacteria interactions. The diffuse-reflectance FT-IR absorbance bands are the same for all the bacteria, indicating similar surface chemical composition. However, the band intensities for unadapted and adapted cells vary significantly, owing to different amounts of bacterial secretions under the different growth conditions.
Abstract:
Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria for when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
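One of the inference techniques mentioned above, nearest-neighbor classification over a coarse grid sample, can be sketched as follows. The `toy_optimizer` stand-in and all parameters are hypothetical; a real implementation would call the optimizer's API instead:

```python
import numpy as np

def toy_optimizer(sx, sy):
    """Stand-in for an optimizer's 'give me the optimal plan' call:
    maps two selectivities to a plan id (purely illustrative)."""
    if sx + sy < 0.5:
        return 0
    return 1 if sx > sy else 2

def approximate_diagram(res=100, step=10):
    """Sample the selectivity space on a coarse grid, then label the
    remaining points with a 1-nearest-neighbor classifier, so only
    (res/step)**2 optimizer calls are made instead of res**2."""
    pts, labels = [], []
    for i in range(0, res, step):
        for j in range(0, res, step):
            pts.append((i, j))
            labels.append(toy_optimizer(i / res, j / res))
    pts, labels = np.array(pts), np.array(labels)
    diagram = np.empty((res, res), dtype=int)
    for i in range(res):
        for j in range(res):
            k = np.argmin(((pts - (i, j)) ** 2).sum(axis=1))
            diagram[i, j] = labels[k]
    return diagram
```

With `step=10` this uses 1% of the optimizer calls of the brute-force approach; errors concentrate in thin bands around plan boundaries, which is where the paper's more refined techniques (sub-optimal plan costing, rank-ordered plan lists) come into play.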
Abstract:
A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing usability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPC-H-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels, incurring only marginal increases in the estimated query processing costs.
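The greedy reduction idea can be illustrated with a toy cost model. Here a plan is "swallowed" by another if the swap raises the estimated cost at every one of its points by at most a factor (1 + λ); all plan ids, costs, and the threshold are hypothetical, and the real algorithm's data structures differ:

```python
def reduce_diagram(cost, plan_of_point, lam=0.1):
    """Greedy plan-diagram reduction sketch. cost[p][q] is plan p's
    estimated cost at query point q; plan_of_point maps each point
    to its optimizer-chosen plan. Repeatedly remove any plan whose
    points can all be reassigned to a surviving plan within the
    (1 + lam) cost-increase threshold."""
    plans = set(plan_of_point.values())
    changed = True
    while changed:
        changed = False
        for victim in sorted(plans):
            pts = [q for q, p in plan_of_point.items() if p == victim]
            for other in sorted(plans - {victim}):
                if all(cost[other][q] <= (1 + lam) * cost[victim][q]
                       for q in pts):
                    for q in pts:          # reassign victim's region
                        plan_of_point[q] = other
                    plans.discard(victim)
                    changed = True
                    break
            if changed:
                break
    return plan_of_point
```

On a diagram with near-duplicate plans, this collapses the picture to a few survivors while bounding the per-point cost inflation, which is the "anorexic reduction with marginal cost increase" tradeoff described above.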
Abstract:
Energy Harvesting (EH) nodes, which harvest energy from the environment in order to communicate over a wireless link, promise perpetual operation of a wireless network with battery-powered nodes. In this paper, we address the throughput optimization problem for a rate-adaptive EH node that chooses its rate from a set of discrete rates and adjusts its power depending on its channel gain and battery state. First, we show that the optimal throughput of an EH node is upper bounded by the throughput achievable by a node that is subject only to an average power constraint. We then propose a simple transmission scheme for an EH node that achieves an average throughput close to the upper bound. The scheme's parameters can be made to account for energy overheads such as battery non-idealities and the energy required for sensing and processing. The effect of these overheads on the average throughput is also analytically characterized.
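The flavor of the result can be reproduced in a toy simulation: an EH node that transmits at a constant power slightly below its mean harvesting rate (the backoff absorbing battery outages) already achieves a throughput close to the average-power-only bound. This is only a sketch under simplifying assumptions (Rayleigh fading, i.i.d. exponential harvests, no battery non-idealities), not the paper's scheme:

```python
import numpy as np

def simulate_eh(T=100_000, mean_harvest=1.0, backoff=0.97, seed=1):
    """Toy EH-node simulation: harvest energy each slot, transmit at a
    fixed power (backoff * mean harvest) whenever the battery allows,
    and score Shannon rate log2(1 + p*g) over Rayleigh fading."""
    rng = np.random.default_rng(seed)
    g = rng.exponential(1.0, T)            # channel power gains
    h = rng.exponential(mean_harvest, T)   # harvested energy per slot
    p_tx = backoff * mean_harvest
    battery, total_rate = 0.0, 0.0
    for t in range(T):
        battery += h[t]
        if battery >= p_tx:                # transmit only if energy suffices
            battery -= p_tx
            total_rate += np.log2(1 + p_tx * g[t])
    avg = total_rate / T
    # bound: a node constrained only in average power, spending the
    # full mean harvest rate in every slot
    bound = np.mean(np.log2(1 + mean_harvest * g))
    return avg, bound
```

Because the battery drains slower than it fills on average, outages become rare and the achieved throughput sits within a few percent of the bound, mirroring the "close to the upper bound" claim.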
Abstract:
A grid adaptation strategy has been developed for unstructured-data-based codes that employs a combination of hexahedral and prismatic elements and is generalizable to tetrahedral and pyramidal elements.
Abstract:
The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter, for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and ensuring adherence to the power constraint. We derive novel insightful bounds and approximations that characterize the power control parameter and the throughput in closed-form. The results are comprehensive as they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, uncoded and coded modulation, and single and multi-node systems with selection. The results are appealing as they are provably tight in the asymptotic large average power regime, and are designed and verified to be accurate even for smaller average powers.
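The role of the power control parameter can be sketched as a Lagrange multiplier λ on the average-power constraint: for each channel gain, the node picks the discrete rate maximizing rate minus λ times the power required to sustain it at the target BER. The power model `(2**r - 1)/h` (scaled by a BER gap) and all names here are illustrative assumptions, not the paper's formulas:

```python
import numpy as np

def rate_and_power(h, rates, lam, ber_gap=1.0):
    """Pick the rate maximizing r - lam * p(r, h), where
    p(r, h) = (2**r - 1) * ber_gap / h is a common model of the
    power needed to sustain rate r at the target BER on gain h.
    Staying silent (rate 0, power 0) is always an option."""
    best = (0.0, 0.0)  # (rate, power)
    for r in rates:
        p = (2 ** r - 1) * ber_gap / h
        if r - lam * p > best[0] - lam * best[1]:
            best = (r, p)
    return best

def avg_power(lam, rates, n=50_000, seed=3):
    """Monte-Carlo average transmit power over Rayleigh fading;
    lam would be tuned (e.g. by bisection) to hit the power budget."""
    g = np.random.default_rng(seed).exponential(1.0, n)
    return float(np.mean([rate_and_power(h, rates, lam)[1] for h in g]))
```

Average power decreases monotonically in λ, so the constraint can be met by bisection; the closed-form bounds in the abstract replace exactly this numerical search for the parameter.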
Abstract:
In species-rich assemblages, differential utilization of vertical space can be driven by resource availability. For animals that communicate acoustically over long distances under habitat-induced constraints, access to an effective transmission channel is a valuable resource. The acoustic adaptation hypothesis suggests that habitat acoustics imposes a selective pressure that drives the evolution of both signal structure and choice of calling sites by signalers. This predicts that species-specific signals transmit best in native habitats. In this study, we have tested the hypothesis that vertical stratification of calling heights of acoustically communicating species is driven by acoustic adaptation. This was tested in an assemblage of 12 coexisting species of crickets and katydids in a tropical wet evergreen forest. We carried out transmission experiments using natural calls at different heights from the forest floor to the canopy. We measured signal degradation using 3 different measures: total attenuation, signal-to-noise ratio (SNR), and envelope distortion. Different sets of species supported the hypothesis depending on which attribute of signal degradation was examined. The hypothesis was upheld by 5 species for attenuation and by 3 species each for SNR and envelope distortion. Only 1 species of 12 provided support for the hypothesis by all 3 measures of signal degradation. The results thus provided no overall support for acoustic adaptation as a driver of vertical stratification of coexisting cricket and katydid species.
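The three degradation measures can be computed from recorded transmit/receive pairs roughly as below. These formulas (power-ratio attenuation, power SNR, and envelope distortion as one minus the envelope correlation, with a crude `abs`-based envelope rather than, say, a Hilbert transform) are plausible sketches, not the study's exact protocol:

```python
import numpy as np

def degradation_measures(sent, received, noise):
    """Return (total attenuation in dB, SNR in dB, envelope
    distortion) for a transmitted call, its received version, and a
    noise-only recording, using simple power-ratio definitions."""
    power = lambda s: np.mean(np.asarray(s, float) ** 2)
    attenuation_db = 10 * np.log10(power(sent) / power(received))
    snr_db = 10 * np.log10(power(received) / power(noise))
    envelope = lambda s: np.abs(np.asarray(s, float))  # crude envelope
    corr = np.corrcoef(envelope(sent), envelope(received))[0, 1]
    distortion = 1 - corr  # 0 means the envelope shape is preserved
    return attenuation_db, snr_db, distortion
```

Because the three measures capture different aspects of degradation (overall energy loss, masking by background noise, and temporal-pattern smearing), different species can pass or fail the acoustic adaptation prediction under each one, as the abstract reports.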
Abstract:
A scheme for stabilizing stochastic approximation iterates by adaptively scaling the step sizes is proposed and analyzed. This scheme leads to the same limiting differential equation as the original scheme and therefore has the same limiting behavior, while avoiding the difficulties associated with projection schemes. The proof technique requires only that the limiting o.d.e. descend a certain Lyapunov function outside an arbitrarily large bounded set.
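A one-dimensional illustration of the idea (not the paper's exact scheme): instead of projecting a runaway iterate back into a set, halve the step-size scale whenever the iterate leaves a large ball, so the same limiting o.d.e. is followed with damped steps. All parameter choices are illustrative:

```python
import numpy as np

def stabilized_sa(h, x0, n_iter=5000, a0=0.5, radius=10.0, seed=0):
    """Stochastic approximation x <- x + a_n (h(x) + noise) with
    adaptively scaled steps a_n = scale * a0 / n: the scale is halved
    whenever |x| exceeds the given radius, in place of projection."""
    rng = np.random.default_rng(seed)
    x, scale = float(x0), 1.0
    for n in range(1, n_iter + 1):
        a_n = scale * a0 / n
        x = x + a_n * (h(x) + rng.normal(0, 0.1))
        if abs(x) > radius:      # iterate escaping: damp future steps
            scale *= 0.5
    return x
```

For a mean field like h(x) = -x, whose o.d.e. descends the Lyapunov function x²/2 outside any bounded set, the iterate converges to the root at 0 without any projection step.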