18 results for devising
at Indian Institute of Science - Bangalore - India
Abstract:
This paper is a condensed version of the final report of a detailed field study of rural energy consumption patterns in six villages located west of Bangalore in the dry belt of Karnataka State in India. The study was carried out in two phases: first, a pilot study of four villages and, second, the detailed study of six villages, whose populations varied from around 350 to about 950. The pilot survey ended in late 1976, and most of the data for the main project was collected in 1977. Processing of the collected data was completed in 1980. The aim was to carry out a census survey rather than a sample study. Hence, considerable effort was expended in producing a suitable questionnaire, in ensuring that all respondents were contacted, and in devising methods that would accurately reflect actual energy use in the various energy-utilising activities. In the end, 560 households out of 578 (97%) were surveyed. The energy sources ranked as follows by average percentage contribution to the annual total energy requirement: firewood, 81.6%; human energy, 7.7%; animal energy, 2.7%; kerosene, 2.1%; electricity, 0.6%; and all other sources (rice husks, agro-wastes, coal and diesel fuel), 5.3%. In other words, commercial fuels made only a small contribution to overall energy use. It should be noted that dung cakes are not burned in this region. The average energy use pattern by sector, again on a percentage basis, was as follows: domestic, 88.3%; industry, 4.7%; agriculture, 4.3%; lighting, 2.2%; and transport, 0.5%. The total annual per capita energy consumption was 12.6 ± 1.2 GJ, giving an average annual household consumption of around 78.6 GJ.
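The quoted shares and totals are internally consistent, as a few lines of arithmetic confirm (the percentages and GJ figures are copied from the abstract; the implied household size is an inference from those figures, not a number the study reports):

```python
# Cross-check of the figures quoted in the abstract above.
source_shares = {
    "firewood": 81.6, "human energy": 7.7, "animal energy": 2.7,
    "kerosene": 2.1, "electricity": 0.6, "all other sources": 5.3,
}
sector_shares = {
    "domestic": 88.3, "industry": 4.7, "agriculture": 4.3,
    "lighting": 2.2, "transport": 0.5,
}
assert round(sum(source_shares.values()), 1) == 100.0
assert round(sum(sector_shares.values()), 1) == 100.0

# Household total divided by per-capita total gives the implied
# average household size (an inference, not a reported figure).
per_capita_gj, household_gj = 12.6, 78.6
print(round(household_gj / per_capita_gj, 2))  # about 6.24 persons
```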
Abstract:
We propose a novel technique for robust voiced/unvoiced segment detection in noisy speech, based on local polynomial regression. The local polynomial model is well suited to voiced segments of speech, whereas unvoiced segments are noise-like and do not exhibit any smooth structure. This property of smoothness is used for devising a new metric, called the variance ratio metric, which, after thresholding, indicates the voiced/unvoiced boundaries with 75% accuracy at 0 dB global signal-to-noise ratio (SNR). A novelty of our algorithm is that it processes the signal continuously, sample by sample rather than frame by frame. Simulation results on the TIMIT speech database (downsampled to 8 kHz) for various SNRs are presented to illustrate the performance of the new algorithm. The results indicate that the algorithm is robust even at high noise levels.
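The idea behind a variance-ratio-style metric can be sketched in a few lines: fit a low-order polynomial in a sliding window and measure how much of the local variance the fit explains. This is a minimal illustration, not the paper's algorithm; the window length, polynomial order, and normalization are assumptions.

```python
import numpy as np

def variance_ratio(x, win=60, order=4):
    """Fraction of local signal variance explained by a low-order
    polynomial fit in a sliding window. Smooth (voiced-like) regions
    score near 1; noise-like (unvoiced-like) regions score low.
    Window length and order are illustrative, not the paper's settings."""
    t = np.arange(win)
    out = np.empty(len(x) - win)
    for i in range(len(out)):
        seg = x[i:i + win]
        fit = np.polyval(np.polyfit(t, seg, order), t)
        out[i] = 1.0 - np.var(seg - fit) / (np.var(seg) + 1e-12)
    return out

# A smooth sinusoid (voiced-like) scores high; white noise scores low.
n = np.arange(600)
voiced = np.sin(2 * np.pi * n / 200)
unvoiced = np.random.default_rng(0).standard_normal(600)
print(variance_ratio(voiced).mean() > variance_ratio(unvoiced).mean())  # True
```

Thresholding such a score sample by sample is what makes a continuous (rather than frame-by-frame) segmentation possible.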
Abstract:
This article analyzes the effect, on the design of composite structures, of devising a new failure envelope by combining the most commonly used failure criteria for composite laminates. The failure criteria considered are the maximum stress and Tsai-Wu criteria. In addition to these popular phenomenological failure criteria, a micromechanics-based criterion called the failure mechanism-based failure criterion is also considered. The failure envelopes obtained from these criteria are superimposed on one another, and a new failure envelope is constructed from the lowest absolute values of the strengths predicted by these criteria. The new failure envelope so obtained is therefore termed the most conservative failure envelope. A minimum-weight design of composite laminates is performed using genetic algorithms. In addition, the effect of stacking sequence on the minimum weight of the laminate is also studied. Results are compared for the different failure envelopes, and the conservative design is evaluated against designs obtained using a single failure criterion. The design approach is recommended for structures in which composites are the key load-carrying members, such as helicopter rotor blades.
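The envelope-combination step described above reduces to a pointwise minimum over the individual criteria. The sketch below shows only that combination step; the three strength curves are made-up stand-ins, not real envelopes for any laminate.

```python
import numpy as np

# Illustrative construction of the "most conservative failure envelope":
# at each loading direction, keep the lowest absolute strength predicted
# by the individual criteria. The curves are hypothetical placeholders.
theta = np.linspace(0.0, 2.0 * np.pi, 181)        # loading direction, rad
max_stress = 100.0 + 20.0 * np.cos(2.0 * theta)   # hypothetical criterion 1
tsai_wu = 95.0 + 30.0 * np.sin(2.0 * theta)       # hypothetical criterion 2
fmb = 105.0 + 10.0 * np.cos(4.0 * theta)          # hypothetical criterion 3

conservative = np.min(np.abs([max_stress, tsai_wu, fmb]), axis=0)

# By construction, the combined envelope never exceeds any one criterion.
assert np.all(conservative <= np.abs(max_stress))
assert np.all(conservative <= np.abs(tsai_wu))
assert np.all(conservative <= np.abs(fmb))
```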
Abstract:
Static characteristics of an analog-to-digital converter (ADC) can be directly determined with the histogram-based quasi-static approach by measuring the ADC output when it is excited by an ideal ramp/triangular signal of sufficiently low frequency. This approach requires only a fraction of the time of the conventional dc voltage test, is straightforward, is easy to implement, and, in principle, is an accepted method per the revised IEEE 1057. Its only drawback is that ramp signal sources are not ideal: the nonlinearity present in the ramp signal is superimposed on the measured ADC characteristics, rendering them unusable as such. In recent years, some solutions have been proposed to alleviate this problem by devising means of eliminating the contribution of signal source nonlinearity. Alternatively, a straightforward step is to remove the ramp signal nonlinearity before the signal is applied to the ADC. Driven by this logic, this paper describes a simple method for using a nonlinear ramp signal while causing little influence on the measured ADC static characteristics. This is possible because even a nonideal ramp contains regions, or segments, that are nearly linear. The task, essentially, is to identify these near-linear regions in a given source and employ them to test the ADC, with a suitable amplitude to match the ADC full-scale voltage range. Implementation of this method reveals that a significant reduction in the influence of source nonlinearity can be achieved. Simulation and experimental results on 8- and 10-bit ADCs are presented to demonstrate its applicability.
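One simple way to locate a near-linear region, in the spirit of the abstract, is to slide a window along the ramp record and pick the window whose straight-line fit has the smallest residual. This is a minimal sketch under made-up nonlinearity, not the paper's selection procedure.

```python
import numpy as np

def most_linear_segment(ramp, win):
    """Return the start index of the length-`win` window of `ramp` whose
    straight-line fit has the smallest RMS residual -- one simple way to
    locate a near-linear region of a nonideal ramp."""
    t = np.arange(win)
    best_i, best_err = 0, np.inf
    for i in range(len(ramp) - win + 1):
        seg = ramp[i:i + win]
        slope, intercept = np.polyfit(t, seg, 1)
        err = np.sqrt(np.mean((seg - (slope * t + intercept)) ** 2))
        if err < best_err:
            best_i, best_err = i, err
    return best_i

# A ramp with a mild cubic bow (made-up nonlinearity): curvature is
# smallest in the middle, so the most linear window lies near the centre.
x = np.linspace(-1, 1, 1000)
ramp = x + 0.05 * x ** 3
print(most_linear_segment(ramp, 200))  # near the centre, around index 400
```

The chosen segment would then be amplitude-scaled to cover the ADC full-scale range before the histogram test.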
Abstract:
The importance of long-range prediction of rainfall patterns for devising and planning agricultural strategies cannot be overemphasized. However, the prediction of rainfall patterns remains a difficult problem, and the desired level of accuracy has not been reached. Conventional methods for rainfall prediction use either dynamical or statistical modelling. In this article we report the results of a new modelling technique using artificial neural networks. Artificial neural networks are especially useful where the dynamical processes underlying a given phenomenon, and their interrelations, are not known with sufficient accuracy. Since conventional neural networks were found to be unsuitable for simulating and predicting rainfall patterns, a generalized neural network structure was explored and found to provide consistent prediction (hindcast) of all-India annual mean rainfall with good accuracy. The performance and consistency of this network are evaluated and compared with those of other (conventional) neural networks. It is shown that the generalized network makes consistently good predictions of annual mean rainfall. Immediate applications and the potential of such a prediction system are discussed.
Abstract:
In this investigation, the influence of microstructure on the high-temperature creep behaviour of the Ti-24Al-11Nb alloy has been studied. Different microstructures are produced by devising suitable heat treatments from the beta phase field. Creep tests are conducted in the temperature range of 923-1113 K, over a wide stress range at each temperature, employing the impression creep technique. The creep behaviour is found to be sensitive to the crystallographic texture as well as to the details of the microstructure. The best creep resistance is shown when the microstructure contains smaller alpha(2) plates and a lower beta volume fraction. This can be understood in terms of the dislocation barriers offered by alpha(2)/beta boundaries and the ease of plastic flow in the beta phase at high temperatures.
Abstract:
Airlines have successfully practiced revenue management over the past four decades and enhanced their revenues. Most traditional models assume that customers buying a high-fare class ticket will not purchase a low-fare class ticket even if it is available. This is not a very realistic assumption, and it has led to revenue leakage from customers exhibiting buy-down behaviour. This paper aims at devising a suitable incentive mechanism that would induce the customer to reveal his nature, which helps in reducing revenue leakage. We show that the proposed incentive mechanism is profitable to both the buyer and the seller and hence ensures the buyer's participation in the mechanism. Journal of the Operational Research Society (2011) 62, 1566-1573. doi:10.1057/jors.2010.57 Published online 11 August 2010
Abstract:
We review the current status of various aspects of biopolymer translocation through nanopores and the challenges and opportunities it offers. Much of the interest generated by nanopores arises from their potential application to third-generation cheap and fast genome sequencing. Although the ultimate goal of single-nucleotide identification has not yet been reached, great advances have been made from both a fundamental and an applied point of view, particularly in controlling the translocation time, in fabricating various kinds of synthetic pores or genetically engineering protein nanopores with tailored properties, and in devising methods (used separately or in combination) aimed at discriminating nucleotides based either on ionic or transverse electron currents, on optical readout signatures, or on the capabilities of the cellular machinery. Recently, exciting new applications have emerged for the detection of specific proteins and toxins (stochastic biosensors), and for the study of protein folding pathways and binding constants of protein-protein and protein-DNA complexes. The combined use of nanopores and advanced micromanipulation techniques involving optical/magnetic tweezers with high spatial resolution offers unique opportunities for improving the basic understanding of the physical behavior of biomolecules in confined geometries, with implications for the control of crucial biological processes such as protein import and protein denaturation. We highlight the key works in these areas along with future prospects. Finally, we review theoretical and simulation studies aimed at improving fundamental understanding of the complex microscopic mechanisms involved in the translocation process. Such understanding is a prerequisite for the fruitful application of nanopore technology in high-throughput devices for molecular biomedical diagnostics.
Abstract:
Supramolecular chemistry is an emerging tool for devising materials that can perform specified functions. The self-assembly of facially amphiphilic bile acid molecules has been extensively utilized for the development of functional soft materials. Supramolecular hydrogels derived from the bile acid backbone act as useful templates for the intercalation of multiple components. On this basis, gel-nanoparticle hybrid materials, photoluminescent coating materials, a new enzyme assay technique, and other advances were achieved in the author's laboratory. The present account highlights some of these achievements.
Abstract:
We consider the problem of devising incentive strategies for viral marketing of a product. In particular, we assume that the seller can influence penetration of the product by offering two incentive programs: a) direct incentives to potential buyers (influence) and b) referral rewards for customers who influence potential buyers to make the purchase (exploit connections). The problem is to determine the optimal timing of these programs over a finite time horizon. In contrast to the algorithmic perspective popular in the literature, we take a mean-field approach and formulate the problem as a continuous-time deterministic optimal control problem. We show that the optimal strategy for the seller has a simple structure and can take both forms, namely influence-and-exploit and exploit-and-influence. We also show that in some cases it may be optimal for the seller to deploy incentive programs mostly for low-degree nodes. We support our theoretical results through numerical studies and provide practical insights by analyzing various scenarios.
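A toy mean-field simulation conveys the flavor of the timing problem. Everything here is an assumption for illustration (the dynamics, the rates a and b, and the parameter values are not the paper's model): adoption grows at a constant rate while direct incentives run and at a rate proportional to current adopters while referral rewards run, with one program active at a time.

```python
# Toy mean-field sketch (illustrative only; model and parameters are
# assumptions, not the paper's). x is the adopter fraction; it grows at
# rate a under direct incentives or b*x under referral rewards.
def simulate(schedule, a=0.4, b=2.0, T=10.0, dt=0.01):
    x = 0.01                      # initial adopter fraction
    for k in range(int(T / dt)):
        t = k * dt
        if schedule(t, T) == "influence":
            rate = a              # direct incentives: constant push
        else:
            rate = b * x          # referrals: push scales with adopters
        x += rate * (1 - x) * dt  # forward-Euler step of x' = rate*(1-x)
    return x

influence_then_exploit = lambda t, T: "influence" if t < T / 2 else "exploit"
exploit_then_influence = lambda t, T: "exploit" if t < T / 2 else "influence"
# Which bang-bang schedule does better depends on the parameters, which
# is why the optimal strategy can take either form.
print(simulate(influence_then_exploit), simulate(exploit_then_influence))
```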
Abstract:
In this paper, we analyse three commonly discussed 'flaws' of linearized elasticity theory and attempt to resolve them. The first 'flaw' concerns cylindrically orthotropic material models. Since the work of Lekhnitskii (1968), a growing body of work, continuing to this day, has shown that infinite stresses arise with the use of a cylindrically orthotropic material model even in linearized elasticity. Besides infinite stresses, interpenetration of matter is also shown to occur. These infinite stresses and interpenetration occur when the ratio of the circumferential Young's modulus to the radial Young's modulus is less than one. If the ratio is greater than one, the stresses at the center of a spinning disk are found to be zero (recall that for an isotropic material model, the stresses are maximum at the center). Thus, the stresses go abruptly from a maximum value to zero as the ratio is increased to a value even slightly above one! One explanation offered for this extremely anomalous behaviour is the failure of linearized elasticity to satisfy material frame-indifference. However, if this were the true cause, the anomalous behaviour should also occur with an isotropic material model, where no such anomalies are observed. We show that the real cause of the problem lies elsewhere and show how these anomalies can be resolved. We also discuss how the formulation of linearized elastodynamics for small deformations superposed on a rigid motion can be given in a succinct manner. Finally, we show how the long-standing problem of devising three compatibility relations instead of six can be resolved.
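For context, the compatibility relations referred to here are the classical Saint-Venant conditions on the linearized strain (standard background, not the paper's resolution of the three-versus-six question):

```latex
% Saint-Venant compatibility: a symmetric field
% \varepsilon_{ij} = (u_{i,j} + u_{j,i})/2 derives from a displacement u
% iff the incompatibility vanishes, i.e.
\varepsilon_{ij,kl} + \varepsilon_{kl,ij}
  - \varepsilon_{ik,jl} - \varepsilon_{jl,ik} = 0,
% which, by the symmetries of the left-hand side, yields six
% independent scalar equations, e.g.
\varepsilon_{11,22} + \varepsilon_{22,11} = 2\,\varepsilon_{12,12}.
```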
Abstract:
A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in the spatial characteristics of a system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models for interpreting their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology, together with the computer codes, will stimulate the application and testing of spatial early warning signals on real spatial data.
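Two of the most widely discussed spatial early-warning indicators, rising spatial variance and rising spatial autocorrelation, can be computed from a single 2-D snapshot. The sketch below is a generic illustration of those indicators, not the toolbox from the paper; the smoothed field stands in for a system developing spatial correlation as it nears a transition.

```python
import numpy as np

def spatial_indicators(grid):
    """Spatial variance and a lag-1 spatial autocorrelation (a
    Moran's-I-style statistic over right/down neighbours) for a 2-D
    snapshot. Generic indicators, not the paper's specific toolbox."""
    z = grid - grid.mean()
    var = z.var()
    num = (z[:, :-1] * z[:, 1:]).mean() + (z[:-1, :] * z[1:, :]).mean()
    return var, num / (2 * var)

rng = np.random.default_rng(1)
noise = rng.standard_normal((64, 64))
# Smoothing induces spatial correlation, mimicking the loss of spatial
# independence expected as a system approaches a transition.
smooth = (noise + np.roll(noise, 1, axis=0) + np.roll(noise, 1, axis=1)) / 3
_, c_noise = spatial_indicators(noise)
_, c_smooth = spatial_indicators(smooth)
print(c_smooth > c_noise)  # True: the correlated field scores higher
```

Comparing such statistics against a null model (e.g. a spatially shuffled field) is what turns them into a usable warning signal.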
Abstract:
Classification of the pharmacologic activity of a chemical compound is an essential step in any drug discovery process. We develop two new atom-centered fragment descriptors (vertex indices): one based solely on topological considerations, without discriminating atom or bond types, and another based on topological and electronic features. We also assess their usefulness by devising a method to rank and classify molecules with regard to their antibacterial activity. The classification performance of our method is superior to that of two previous studies on large heterogeneous data sets for hit-finding and hit-to-lead studies, even though we use far fewer parameters. We find that for hit-finding studies topological features alone (a simple graph) provide significant discriminating power, while for the hit-to-lead process a small but consistent improvement can be made by additionally including electronic features (a colored graph). Our approach is simple, interpretable, and suitable for the design of molecules, as we do not use any physicochemical properties. The singular use of the vertex index as a descriptor, a novel range-based feature extraction, and rigorous statistical validation are the key elements of this study.
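To see what an atom-centred topological descriptor looks like in practice, here is a minimal sketch on a hydrogen-suppressed molecular graph. The combining formula is a hypothetical stand-in, not the paper's index; it only illustrates that each vertex gets a value from its own degree and its neighbours' degrees.

```python
# Minimal illustration of an atom-centred topological vertex index.
# The formula below (degree plus mean neighbour degree) is a made-up
# stand-in for the descriptors developed in the paper.
def vertex_indices(adj):
    """adj: adjacency list {atom: [neighbours]} of a simple graph."""
    deg = {v: len(ns) for v, ns in adj.items()}
    return {v: deg[v] + sum(deg[n] for n in ns) / (deg[v] or 1)
            for v, ns in adj.items()}

# n-butane as a hydrogen-suppressed graph: C0-C1-C2-C3.
butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(vertex_indices(butane))  # {0: 3.0, 1: 3.5, 2: 3.5, 3: 3.0}
```

Even this crude index separates terminal from interior carbons; range-based features over such per-atom values are then what feed the ranking and classification step.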
Abstract:
Since its introduction, the selective-identity (sID) model for identity-based cryptosystems and its relationship to various other notions of security have been extensively studied. As a result, it is the general consensus that the sID model is much weaker than the full-identity (ID) model. In this paper, we study the sID model for the particular case of identity-based signatures (IBS). The main focus is on the problem of constructing an ID-secure IBS given an sID-secure IBS without using random oracles (the so-called standard model) and with reasonable security degradation. We accomplish this by devising a generic construction that uses as black boxes: i) a chameleon hash function and ii) a weakly secure public-key signature. We argue that the resulting IBS is ID-secure but with a tightness gap of O(q(s)), where q(s) is the upper bound on the number of signature queries that the adversary is allowed to make. To the best of our knowledge, this is the first attempt at such a generic construction.