77 results for least-cost diet

at Indian Institute of Science - Bangalore - India


Relevance: 80.00%

Abstract:

There is a need to understand the carbon (C) sequestration potential of the forestry option and its financial implications for each country. In India the C emissions from deforestation are estimated to be nearly offset by C sequestration in forests under succession and tree plantations. India has nearly succeeded in stabilizing the area under forests and has adequate forest conservation strategies. Biomass demands for softwood, hardwood and firewood are estimated to double or treble by the year 2020. A set of forestry options was developed to meet the projected biomass needs and, keeping in mind the features of the available land categories, three scenarios were formulated: potential, demand-driven and programme-driven. Adoption of the demand-driven scenario, targeted at meeting the projected biomass needs, is estimated to sequester 78 Mt of C annually after accounting for all emissions resulting from clearfelling and end use of biomass. The demand-driven scenario is estimated to offset 50% of national C emissions at the 1990 level. The cost per t of C sequestered for the forestry options is lower than that of the energy options considered. The annual investment required for implementing the demand-driven scenario is estimated to be US$ 2.1 billion for six years and is shown to be feasible. Among the forestry options, the ranking based on investment cost per t of C sequestered, from least cost to highest cost, is: natural regeneration, agro-forestry and enhanced natural regeneration (< US$ 2.5/t C), followed by timber, community and softwood forestry (US$ 3.3 to 7.3 per t of C).

Relevance: 80.00%

Abstract:

The present work shows the feasibility of decentralized energy options for the Tumkur district in India. Decentralized energy planning (DEP) involves scaling down energy planning to subnational or regional scales. The important aspect of energy planning at the decentralized level is to prepare an area-based DEP to meet energy needs and to develop alternative energy sources at least cost to the economy and the environment. The geographical coverage and scale reflect the level at which the analysis takes place, which is an important factor in determining the structure of models. In the present work, DEP modeling under different scenarios has been carried out for the Tumkur district of India for the year 2020. The DEP model is suitably scaled for obtaining the optimal mix of energy resources and technologies using a computer-based goal programming technique. The rural areas of the Tumkur district have different energy needs. Results show that electricity needs can be met by biomass gasifier technology, using biomass feedstock produced by allocating only 12% of the wasteland in the district at a biomass productivity of 8 t/ha/yr. Surplus electricity can be produced by adopting the option of biomass power generation from energy plantations; the surplus electricity generated can be supplied to the grid. The sustainable development scenario is the least-cost scenario, besides promoting self-reliance, local employment and environmental benefits. (C) 2010 American Institute of Chemical Engineers Environ Prog, 30: 248-258, 2011
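
As a loose illustration of the kind of least-cost optimization involved (the paper itself uses goal programming), the sketch below sets up a toy linear program in Python; all technology names, costs and resource figures are hypothetical, not taken from the study.

```python
# Minimal sketch of a least-cost energy allocation, loosely in the spirit of the
# DEP optimization described above. All numbers are hypothetical, not from the paper.
from scipy.optimize import linprog

# Decision variables: x = [biomass_gasifier_MWh, grid_import_MWh]
cost = [3.0, 5.0]            # assumed cost per MWh for each option

demand_MWh = 1000.0          # assumed annual electricity demand of the region
# Row 1: total generation must meet demand  ->  -(x0 + x1) <= -demand
# Row 2: biomass generation limited by available feedstock from wasteland
A_ub = [[-1.0, -1.0],
        [1.0, 0.0]]
wasteland_ha, yield_t_per_ha, MWh_per_t = 500.0, 8.0, 1.0   # assumed figures
b_ub = [-demand_MWh, wasteland_ha * yield_t_per_ha * MWh_per_t]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)        # least-cost mix and total cost
```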

Relevance: 80.00%

Abstract:

In this article we study the problem of joint congestion control, routing and MAC-layer scheduling in a multi-hop wireless mesh network, where the nodes in the network are subject to maximum energy expenditure rates. We model link contention in the wireless network using the contention graph, and we model the energy expenditure rate constraints of nodes using the energy expenditure rate matrix. We formulate the problem as an aggregate utility maximization problem and apply duality theory in order to decompose it into two sub-problems, namely a network-layer routing and congestion control problem and a MAC-layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of the links on it but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the prices of the links. We study the effects of the energy expenditure rate constraints of the nodes on the optimal throughput of the network.
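
A minimal sketch of the path-cost structure described above: the source's least-cost path is computed over link prices plus node prices. The network topology, price values and function name below are illustrative, not from the paper.

```python
# Illustrative sketch (not the paper's algorithm) of how a source might compute the
# least-cost path when the path cost includes both link prices and node prices.
import heapq

def least_cost_path(links, node_price, src, dst):
    """links: dict {u: [(v, link_price), ...]}; returns (cost, path)."""
    # Cost of entering node v over link (u, v) = link price + price of node v.
    dist = {src: node_price.get(src, 0.0)}
    prev, heap = {}, [(dist[src], src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            path = [u]
            while u in prev:
                u = prev[u]; path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, link_price in links.get(u, []):
            nd = d + link_price + node_price.get(v, 0.0)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

# Hypothetical 4-node mesh: node prices reflect energy-expenditure-rate constraints.
links = {"s": [("a", 1.0), ("b", 2.0)], "a": [("d", 1.0)], "b": [("d", 0.5)]}
node_price = {"a": 3.0, "b": 0.5, "d": 0.0}
print(least_cost_path(links, node_price, "s", "d"))   # picks s -> b -> d
```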

Relevance: 80.00%

Abstract:

Tradeoffs are examined between mitigating black carbon (BC) and carbon dioxide (CO2) for limiting peak global mean warming, using the following set of methods. A two-box climate model is used to simulate temperatures of the atmosphere and ocean for different rates of mitigation. Mitigation rates for BC and CO2 are characterized by respective timescales for e-folding reduction in the emissions intensity of gross global product. Respective emissions models force the box model, and a simple economics model is used in which the cost of mitigation varies inversely with emission intensity. A constant mitigation timescale corresponds to mitigation at a constant annual rate; for example, an e-folding timescale of 40 years corresponds to a 2.5% reduction each year. The discounted present cost depends only on the respective mitigation timescale and the respective mitigation cost at present levels of emission intensity. Least-cost mitigation is posed as choosing the respective e-folding timescales to minimize total mitigation cost under a temperature constraint (e.g. within 2 degrees C above preindustrial). Peak warming is more sensitive to the mitigation timescale for CO2 than for BC. Therefore rapid mitigation of CO2 emission intensity is essential to limiting peak warming, but simultaneous mitigation of BC can reduce total mitigation expenditure. (c) 2015 Elsevier B.V. All rights reserved.
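
A quick check of the e-folding arithmetic quoted above, assuming emission intensity decays exponentially as exp(-t/tau):

```python
# With an e-folding timescale tau, emission intensity decays as exp(-t/tau),
# so the per-year fractional reduction is 1 - exp(-1/tau).
import math

tau = 40.0                                  # e-folding timescale in years
annual_reduction = 1.0 - math.exp(-1.0 / tau)
print(f"{annual_reduction:.3%}")            # ~2.47%, i.e. roughly 2.5% per year
```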

Relevance: 30.00%

Abstract:

Background & objectives: There is a need to develop an affordable and reliable tool for hearing screening of neonates in resource-constrained, medically underserved areas of developing nations. This study evaluates a strategy of health-worker-based screening of neonates using a low-cost mechanical calibrated noisemaker, followed by parental monitoring of age-appropriate auditory milestones, for detecting severe-profound hearing impairment in infants by 6 months of age. Methods: A trained health worker under the supervision of a qualified audiologist screened 425 neonates, of whom 20 had confirmed severe-profound hearing impairment. Mechanical calibrated noisemakers of 50, 60, 70 and 80 dB (A) were used to elicit the behavioural responses. The parents of screened neonates were instructed to monitor the normal language and auditory milestones up to 6 months of age. This strategy was validated against a reference standard consisting of a battery of tests, namely auditory brain stem response (ABR), otoacoustic emissions (OAE) and behavioural assessment at 2 years of age. Bayesian prevalence-weighted measures of screening were calculated. Results: The sensitivity and specificity were high, with the fewest false-positive referrals for the 70 and 80 dB (A) noisemakers. All the noisemakers had 100 per cent negative predictive value. The 70 and 80 dB (A) noisemakers had high positive likelihood ratios of 19 and 34, respectively. The differences between pre- and post-test positive probabilities were 43 and 58 for the 70 and 80 dB (A) noisemakers, respectively. Interpretation & conclusions: In a controlled setting, health workers with primary education can be trained to use a mechanical calibrated noisemaker made of locally available material to reliably screen for severe-profound hearing loss in neonates. The monitoring of auditory responses could be done by informed parents. Multi-centre field trials of this strategy need to be carried out to examine the feasibility of community health care workers using it in resource-constrained settings of developing nations to implement an effective national neonatal hearing screening programme.
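
For readers unfamiliar with prevalence-weighted screening measures, the sketch below computes them from a hypothetical 2x2 table; the counts are illustrative only, not the study's data.

```python
# Illustrative calculation (hypothetical counts, not the study's results) of the
# prevalence-weighted screening measures referred to above.
def screening_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1.0 - spec)                    # positive likelihood ratio
    prevalence = (tp + fn) / (tp + fp + fn + tn)    # pre-test probability of disease
    ppv = (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return sens, spec, lr_pos, ppv, npv, ppv - prevalence   # last: pre/post-test difference

# 425 screened, 20 truly impaired (as in the abstract); fp and tn split is assumed.
print(screening_measures(tp=20, fp=10, fn=0, tn=395))
```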

Relevance: 30.00%

Abstract:

With ever-increasing demand for electric energy, additional generation and associated transmission facilities have to be planned and executed. In order to augment existing transmission facilities, proper planning and selective decisions are to be made while keeping in mind the interests of the several parties who are directly or indirectly involved. The common trend is to plan optimal generation expansion over the planning period in order to meet the projected demand with minimum-cost capacity addition and a pre-specified reliability margin. Generation expansion at certain locations needs new transmission networks, which involves serious problems such as obtaining right of way, environmental clearance, etc. In this study, an approach to the siting of additional generation facilities in a given system with minimum or no expansion of the transmission facility is attempted, using network connectivity and the concept of electrical distance for the projected load demand. The proposed approach is suitable for large interconnected systems with multiple utilities. A sample illustration on a real-life system is presented in order to show how this approach improves the overall performance of the operation of the system with respect to specified performance parameters.
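
As a hedged illustration of one common notion of electrical distance between buses, computed from the bus impedance matrix (the paper's exact formulation may differ), consider the small hypothetical network below.

```python
# Sketch of electrical distance between buses i and j defined from the bus impedance
# matrix Zbus as D_ij = |Z_ii + Z_jj - 2*Z_ij|. The 3-bus network is hypothetical.
import numpy as np

# Line series admittances (per unit) of a small 3-bus network, bus 0 as reference.
y01, y02, y12 = 1 / 0.1j, 1 / 0.2j, 1 / 0.25j

# Ybus for the non-reference buses 1 and 2 (reference bus row/column removed).
Y = np.array([[y01 + y12, -y12],
              [-y12,      y02 + y12]])
Z = np.linalg.inv(Y)                     # reduced bus impedance matrix

def electrical_distance(Z, i, j):
    return abs(Z[i, i] + Z[j, j] - 2 * Z[i, j])

print(electrical_distance(Z, 0, 1))      # distance between buses 1 and 2
```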

Relevance: 20.00%

Abstract:

A modified least mean fourth (LMF) adaptive algorithm applicable to non-stationary signals is presented. The performance of the proposed algorithm is studied by simulation for non-stationarities in bandwidth, centre frequency and gain of a stochastic signal. These non-stationarities are in the form of linear, sinusoidal and jump variations of the parameters. The proposed LMF adaptation is found to have better parameter tracking capability than the LMS adaptation for the same speed of convergence.
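
For reference, a minimal sketch of the basic (unmodified) LMF weight update on a simple system-identification task; the paper's modified algorithm and its non-stationary test signals are not reproduced here, and the filter coefficients below are hypothetical.

```python
# Basic LMF update (w <- w + mu * e^3 * x): the gradient of the fourth-power error
# criterion, shown on a toy FIR system-identification problem.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3, 0.1])          # unknown FIR system (hypothetical)
n_taps, mu, N = len(true_w), 0.01, 20000

w = np.zeros(n_taps)
x = rng.standard_normal(N)
d = np.convolve(x, true_w)[:N] + 0.01 * rng.standard_normal(N)   # desired signal

for n in range(n_taps, N):
    xn = x[n - n_taps + 1:n + 1][::-1]       # current input vector, most recent first
    e = d[n] - w @ xn                        # a-priori error
    w = w + mu * (e ** 3) * xn               # LMF update uses the cubed error

print(w)                                     # converges toward true_w = [0.5, -0.3, 0.1]
```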

Relevance: 20.00%

Abstract:

Demagnetization to a zero remanent value or to a predetermined value is of interest to magnet manufacturers and material users. Conventional methods of demagnetization using a varying alternating demagnetizing field, under a damped oscillatory or conveyor system, result in either a high cost of demagnetization or large power dissipation. A simple technique using thyristors is presented for demagnetizing the material. Power consumption occurs mainly in the first two half-cycles of the applied voltage; hence power dissipation is greatly reduced. An optimum-value calculation of the thyristor triggering angle for demagnetizing high-coercivity materials is also presented.

Relevance: 20.00%

Abstract:

This paper considers the applicability of the least mean fourth (LMF) power gradient adaptation criterion, with 'advantage', for signals associated with Gaussian noise when the associated noise power estimate is not known. The proposed method, as an adaptive spectral estimator, is found to provide superior performance to the least mean square (LMS) adaptation for the same (or even lower) speed of convergence for signals having a sufficiently high signal-to-Gaussian-noise ratio. The results include a comparison of the performance of the LMS tapped delay line, LMF tapped delay line, LMS lattice and LMF lattice algorithms, with Burg's block data method as reference. Signals such as noisy sinusoids and stochastic signals such as EEG are considered in this study.

Relevance: 20.00%

Abstract:

One of the critical issues in large-scale commercial exploitation of MEMS technology is system integration. In MEMS, a system design approach requires the integration of varied and disparate subsystems with one-of-a-kind interfaces. The physical scales as well as the signal magnitudes of the various subsystems vary widely. Known and proven integration techniques often lead to considerable loss of the advantages that tiny MEMS sensors have to offer. Therefore, it becomes imperative to think of the entire system at the outset, at least in terms of the concept design. Such a design entails various aspects of the system, ranging from selection of material and transduction mechanism to structural configuration, interface electronics and packaging. One way of handling this problem is the system-in-package approach, which uses optimized technology for each function through concurrent hybrid engineering. The main strength of this design approach is the short time to prototype development. In the present work, we pursue this approach for a MEMS load cell to complete the process of system integration for high-capacity load sensing. The system includes a micromachined sensing gauge, interface electronics and a packaging module, representing a system-in-package ready for end characterization. The various subsystems are presented in a modular stacked form using hybrid technologies. The micromachined sensing subsystem works on the principle of piezo-resistive sensing and is fabricated using CMOS-compatible processes. The structural configuration of the sensing layer is designed to reduce the offset, temperature drift and residual stress effects of the piezo-resistive sensor. ANSYS simulations are carried out to study the effect of substrate coupling on the sensor structure and its sensitivity. The load cell system has built-in electronics for signal conditioning, processing and communication, taking into consideration the issues associated with the resolution of the minimum detectable signal. The packaged system represents a compact and low-cost solution for high-capacity load sensing in the category of compressive-type load sensors.

Relevance: 20.00%

Abstract:

We examined whether C-terminal residues of soluble recombinant FtsZ of Mycobacterium tuberculosis (MtFtsZ) have any role in MtFtsZ polymerization in vitro. MtFtsZ-delta C1, which lacks the extreme C-terminal Arg residue (underlined in the C-terminal stretch of 13 residues, DDDDVDVPPFMRR) but retains the penultimate Arg residue (DDDDVDVPPFMR), polymerizes like full-length MtFtsZ in vitro. However, MtFtsZ-delta C2, which lacks both Arg residues at the C-terminus (DDDDVDVPPFM), neither polymerizes at pH 6.5 nor forms even single- or double-stranded filaments at pH 7.7 in the presence of 10 mM CaCl2. Replacement of the penultimate Arg residue in the C-terminal Arg deletion mutant DDDDVDVPPFMR with Lys, His, Ala or Asp (DDDDVDVPPFMK/H/A/D) did not enable polymerization either. Although MtFtsZ-delta C2 showed secondary and tertiary structural changes, which might have affected polymerization, the GTPase activity of MtFtsZ-delta C2 was comparable to that of MtFtsZ. These data suggest that MtFtsZ requires an Arg residue as the extreme C-terminal residue for polymerization in vitro. The polypeptide segment containing the C-terminal 67 residues, whose coordinates were absent from the MtFtsZ crystal structure, was modeled on tubulin and MtFtsZ dimers. Possibilities for the influence of the C-terminal Arg residues on the stability of the dimer, and thereby on MtFtsZ polymerization, are discussed.

Relevance: 20.00%

Abstract:

At the beginning of 2008, I visited a watershed located in Karkinatam village in the state of Karnataka, South India, where crops are intensively irrigated using groundwater. The water table had been depleted from a depth of 5 m to 50 m in a large part of the area. Presently, 42% of the total of 158 water wells in the watershed are dry. Speaking with the farmers, I was amazed to learn that they were drilling down to 500 m to tap water. This case is, of course, not isolated.

Relevance: 20.00%

Abstract:

A business cluster is a co-located group of micro, small and medium-scale enterprises. Such firms can benefit significantly from their co-location through shared infrastructure and shared services. Cost sharing becomes an important issue in such sharing arrangements, especially when the firms exhibit strategic behavior. Many cost sharing methods and mechanisms based on game-theoretic foundations have been proposed in the literature. These mechanisms satisfy a variety of efficiency and fairness properties such as allocative efficiency, budget balance, individual rationality, consumer sovereignty, strategyproofness, and group strategyproofness. In this paper, we motivate the problem of cost sharing in a business cluster with strategic firms and illustrate different cost sharing mechanisms through the example of a cluster of firms sharing a logistics service. Next we look into the problem of a business cluster sharing ICT (information and communication technologies) infrastructure and explore the use of cost sharing mechanisms for it.
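
As one concrete example of such a mechanism, the sketch below computes Shapley-value cost shares for firms sharing a common facility; the cost function and demand figures are hypothetical, and the paper covers a broader family of mechanisms.

```python
# Shapley-value cost sharing: each firm pays its average marginal cost over all
# orders in which the firms could join the shared service.
from itertools import permutations
from math import factorial

def shapley_shares(firms, cost):
    """cost: function mapping a frozenset of firms to the cost of serving them."""
    shares = {f: 0.0 for f in firms}
    for order in permutations(firms):
        served = frozenset()
        for f in order:
            # Marginal cost of adding firm f to the firms already served in this order.
            shares[f] += cost(served | {f}) - cost(served)
            served = served | {f}
    n_orders = factorial(len(firms))
    return {f: s / n_orders for f, s in shares.items()}

# Hypothetical shared logistics service: fixed cost 100 plus 10 per unit of demand.
demand = {"firm_a": 5, "firm_b": 20, "firm_c": 35}
cost = lambda coalition: 0 if not coalition else 100 + 10 * sum(demand[f] for f in coalition)
print(shapley_shares(list(demand), cost))
```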

Relevance: 20.00%

Abstract:

An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph, denoted a'(G), is the minimum number k such that there is an acyclic edge coloring using k colors. It was conjectured by Alon, Sudakov and Zaks (and earlier by Fiamcik) that a'(G) <= Delta + 2, where Delta = Delta(G) denotes the maximum degree of the graph. Alon et al. also raised the question whether the complete graphs of even order are the only regular graphs which require Delta + 2 colors to be acyclically edge colored. In this article, using a simple counting argument, we observe not only that this is not true, but that in fact all d-regular graphs with 2n vertices and d > n require at least d+2 colors. We also show that a'(K_{n,n}) >= n+2 when n is odd, using a more non-trivial argument. (Here K_{n,n} denotes the complete bipartite graph with n vertices on each side.) This lower bound for K_{n,n} can be shown to be tight for some families of complete bipartite graphs and for small values of n. We also infer that for every d, n such that d >= 5, n >= 2d+3 and dn even, there exist d-regular graphs which require at least d+2 colors to be acyclically edge colored. (C) 2009 Wiley Periodicals, Inc. J Graph Theory 63: 226-230, 2010.
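
To make the definition concrete, the sketch below checks whether a given edge coloring is a proper, acyclic edge coloring by verifying that every pair of color classes induces a forest; it illustrates the definition only, not the paper's counting argument.

```python
# Checker for acyclic edge colorings of small graphs: the coloring must be proper
# and no two color classes together may contain a cycle (no bichromatic cycle).
from itertools import combinations
from collections import defaultdict

def is_acyclic_edge_coloring(edges):
    """edges: list of ((u, v), color) pairs."""
    # Properness: edges sharing a vertex must get distinct colors.
    at_vertex = defaultdict(set)
    for (u, v), c in edges:
        if c in at_vertex[u] or c in at_vertex[v]:
            return False
        at_vertex[u].add(c); at_vertex[v].add(c)

    colors = {c for _, c in edges}
    for c1, c2 in combinations(colors, 2):
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]; x = parent[x]
            return x
        for (u, v), c in edges:
            if c in (c1, c2):
                ru, rv = find(u), find(v)
                if ru == rv:          # adding this edge closes a bichromatic cycle
                    return False
                parent[ru] = rv
    return True

# 4-cycle colored with 2 colors: proper, but the two colors form a bichromatic cycle.
c4 = [((0, 1), "red"), ((1, 2), "blue"), ((2, 3), "red"), ((3, 0), "blue")]
print(is_acyclic_edge_coloring(c4))   # False
```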

Relevance: 20.00%

Abstract:

Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. To obtain the corrected value (N_c), the N values have been corrected for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an N_c value, is to be approximated, from which the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
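
As an illustrative sketch of the kind of spatial regression involved, the code below fits N_c = f(X, Y, Z) with kernel ridge regression, which is closely related to the LSSVM formulation; the data are synthetic, not the Bangalore SPT dataset, and the hyperparameters are arbitrary.

```python
# Kernel ridge regression as a stand-in for the LSSVM-style spatial model N_c(X, Y, Z).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(42)
coords = rng.uniform(0.0, 1.0, size=(300, 3))                # (X, Y, Z) of boreholes
n_c = (20 + 15 * coords[:, 2] + 5 * np.sin(6 * coords[:, 0])
       + rng.normal(0.0, 1.0, 300))                          # synthetic corrected N values

model = KernelRidge(kernel="rbf", alpha=0.1, gamma=5.0)
model.fit(coords, n_c)

# Predict N_c at an arbitrary half-space point (hypothetical coordinates).
print(model.predict(np.array([[0.3, 0.7, 0.5]])))
```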