14 results for Visitor expenditure
at Indian Institute of Science - Bangalore - India
Abstract:
In this article we study the problem of joint congestion control, routing and MAC-layer scheduling in multi-hop wireless mesh networks where the nodes are subject to maximum energy expenditure rates. We model link contention in the wireless network using a contention graph, and we model the energy expenditure rate constraints of the nodes using an energy expenditure rate matrix. We formulate the problem as an aggregate utility maximization problem and apply duality theory to decompose it into two sub-problems: a network-layer routing and congestion control problem, and a MAC-layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of a path includes not only the prices of its links but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the link prices. We study the effects of the nodes' energy expenditure rate constraints on the optimal throughput of the network.
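The price-based decomposition described above can be illustrated with a minimal sketch, assuming a single source sending over one fixed path, a logarithmic utility, and a plain subgradient update on the link prices; the node energy prices and the MAC-layer scheduling sub-problem are omitted, and all function names and parameter values are hypothetical, not taken from the paper.

```python
def source_rate(path_price, x_max=10.0):
    """Rate maximizing log(x) - path_price * x, i.e. x = 1/price,
    capped at x_max (logarithmic utility assumed for illustration)."""
    return min(x_max, 1.0 / max(path_price, 1e-9))

def subgradient_prices(links, capacity, steps=200, step_size=0.05):
    """Dual subgradient iteration: each link price rises when the link
    is overloaded and falls (down to zero) otherwise, while the source
    reacts to the total price of its path."""
    prices = {l: 1.0 for l in links}
    for _ in range(steps):
        x = source_rate(sum(prices.values()))
        for l in links:
            prices[l] = max(0.0, prices[l] + step_size * (x - capacity[l]))
    return prices, source_rate(sum(prices.values()))
```

With links of capacity 1.0 and 2.0 on the path, the source rate converges to the bottleneck capacity while the price of the non-bottleneck link decays to zero, mirroring the role of link prices in the decomposition.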
Abstract:
Before installation, a voltage source converter is usually subjected to a heat-run test to verify its thermal design and performance under load. For a heat-run test, the converter needs to be operated at rated voltage and rated current for a substantial length of time. Hence, such tests consume a huge amount of energy in the case of high-power converters. Also, the capacities of the source and loads available in the research and development (R&D) centre or the production facility may be inadequate to conduct such tests. This paper proposes a method to conduct heat-run tests on high-power, pulse width modulated (PWM) converters with low energy consumption. The experimental set-up consists of the converter under test and another converter (of similar or higher rating), both connected in parallel on the ac side and open on the dc side. Vector control (synchronous reference frame control) is employed to control the converters such that one draws a certain amount of reactive power and the other supplies it; only the system losses are drawn from the mains. The performance of the controller is validated through simulation and experiments. Experimental results, pertaining to heat-run tests on a high-power PWM converter, are presented at power levels of 25 kVA to 150 kVA.
Abstract:
Background: The Mycobacterium leprae genome has less than 50% coding capacity and 1,133 pseudogenes. Preliminary evidence suggests that some pseudogenes are expressed. Therefore, defining the transcriptional and translational potentials of the pseudogenes in this genome should increase our understanding of their impact on M. leprae physiology. Results: Gene expression analysis identified transcripts from 49% of all M. leprae genes, including 57% of all ORFs and 43% of all pseudogenes in the genome. Transcribed pseudogenes were randomly distributed throughout the chromosome. Factors resulting in pseudogene transcription included: 1) co-orientation of transcribed pseudogenes with transcribed ORFs within or exclusive of operon-like structures; 2) the paucity of intrinsic stem-loop transcriptional terminators between transcribed ORFs and downstream pseudogenes; and 3) predicted pseudogene promoters. Mechanisms for translational "silencing" of pseudogene transcripts included the lack of both translational start codons and strong Shine-Dalgarno (SD) sequences. Transcribed pseudogenes also contained multiple "in-frame" stop codons and high Ka/Ks ratios compared to those of homologs in M. tuberculosis and ORFs in M. leprae. A pseudogene transcript containing an active promoter, a strong SD site and a start codon, but also two in-frame stop codons, yielded a protein product when expressed in E. coli. Conclusion: Approximately half of M. leprae's transcriptome consists of inactive gene products consuming energy and resources without potential benefit to M. leprae. Presently it is unclear what additional detrimental effect(s) this large number of inactive mRNAs has on the functional capability of this organism. Translation of these pseudogenes may play an important role in the overall energy consumption and resultant pathophysiological characteristics of M. leprae. However, this study also demonstrated that multiple translational "silencing" mechanisms are present, reducing the additional energy and resource expenditure required for protein production from the vast majority of these transcripts.
Abstract:
Rammed earth walls are low-carbon-emission and energy-efficient alternatives to load-bearing walls. Large numbers of rammed earth buildings have been constructed in the recent past across the globe. This paper focuses on embodied energy in cement stabilised rammed earth (CSRE) walls. The influence of soil grading, density and cement content on compaction energy input has been monitored. A comparison has been made between the energy content of the cement and the energy spent in transporting materials, on the one hand, and the actual energy input during rammed earth compaction in field conditions and in the laboratory, on the other. The major conclusions of the investigations are: (a) compaction energy increases with the clay fraction of the soil mix and is sensitive to the density of the CSRE wall; (b) compaction energy varies between 0.033 MJ/m³ and 0.36 MJ/m³ for the range of densities and cement contents attempted; (c) energy expenditure in the compaction process is negligible when compared to the energy content of the cement; and (d) total embodied energy in CSRE walls increases linearly with cement content and is in the range of 0.4-0.5 GJ/m³ for cement contents in the range of 6-8%. (C) 2009 Elsevier B.V. All rights reserved.
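The linear dependence of embodied energy on cement content reported in conclusion (d) can be sketched as follows. The cement energy content, wall density and transport term below are illustrative assumptions, not values from the study; only the negligible compaction term is taken from the range quoted above.

```python
def csre_embodied_energy(cement_fraction,
                         cement_energy_mj_per_kg=4.2,    # assumed energy content of cement
                         wall_density_kg_per_m3=1900.0,  # assumed CSRE wall density
                         transport_mj_per_m3=50.0,       # assumed transport energy
                         compaction_mj_per_m3=0.2):      # within the 0.033-0.36 MJ/m^3 range above
    """Embodied energy (MJ per m^3 of wall): linear in cement content,
    with compaction energy negligible next to the cement term."""
    cement_mass = cement_fraction * wall_density_kg_per_m3  # kg of cement per m^3
    return (cement_mass * cement_energy_mj_per_kg
            + transport_mj_per_m3 + compaction_mj_per_m3)
```

Doubling the cement fraction roughly doubles the cement term, which dominates the total, while dropping the compaction term changes the result by well under 1%.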
Abstract:
Background: The gene encoding for uncoupling protein-1 (UCP1) is considered to be a candidate gene for type 2 diabetes because of its role in thermogenesis and energy expenditure. The objective of the study was to examine whether genetic variations in the UCP1 gene are associated with type 2 diabetes and its related traits in Asian Indians. Methods: The study subjects, 810 type 2 diabetic subjects and 990 normal glucose tolerant (NGT) subjects, were chosen from the Chennai Urban Rural Epidemiological Study (CURES), an ongoing population-based study in southern India. The polymorphisms were genotyped using the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. Linkage disequilibrium (LD) was estimated from the estimates of haplotypic frequencies. Results: The three polymorphisms, namely -3826A -> G, an A -> C transition in the 5'-untranslated region (UTR) and Met229Leu, were not associated with type 2 diabetes. However, the frequency of the A-C-Met (-3826A -> G-5'UTR A -> C-Met229Leu) haplotype was significantly higher among the type 2 diabetic subjects (2.67%) compared with the NGT subjects (1.45%, P < 0.01). The odds ratio for type 2 diabetes for the individuals carrying the haplotype A-C-Met was 1.82 (95% confidence interval, 1.29-2.78, P = 0.009). Conclusions: The haplotype, A-C-Met, in the UCP1 gene is significantly associated with the increased genetic risk for developing type 2 diabetes in Asian Indians.
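The odds ratio and confidence interval reported above are standard outputs of a 2x2 carrier table. The sketch below computes an odds ratio with a Wald interval on the log-odds scale; the counts in the usage example are hypothetical, since the abstract reports only frequencies, not the underlying table.

```python
import math

def odds_ratio_with_ci(case_carriers, case_noncarriers,
                       control_carriers, control_noncarriers, z=1.96):
    """Odds ratio for carrying a haplotype, with a Wald 95% confidence
    interval computed on the log-odds scale from a 2x2 table."""
    odds_ratio = (case_carriers * control_noncarriers) / \
                 (case_noncarriers * control_carriers)
    log_or = math.log(odds_ratio)
    se = math.sqrt(1.0 / case_carriers + 1.0 / case_noncarriers
                   + 1.0 / control_carriers + 1.0 / control_noncarriers)
    return odds_ratio, (math.exp(log_or - z * se), math.exp(log_or + z * se))
```

For example, with 40 of 810 cases and 25 of 990 controls carrying a haplotype (invented counts), the odds ratio is about 2.0 and the lower confidence limit stays above 1, i.e. a nominally significant association.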
Abstract:
We consider the problem of quickest detection of an intrusion using a sensor network, keeping only a minimal number of sensors active. By using a minimal number of sensor devices, we ensure that the energy expenditure for sensing, computation and communication is minimized (and the lifetime of the network is maximized). We model the intrusion detection (or change detection) problem as a Markov decision process (MDP). Based on the theory of MDPs, we develop the following closed loop sleep/wake scheduling algorithms: (1) optimal control of M_{k+1}, the number of sensors in the wake state in time slot k+1; (2) optimal control of q_{k+1}, the probability of a sensor being in the wake state in time slot k+1; and an open loop sleep/wake scheduling algorithm which (3) computes q, the optimal probability of a sensor being in the wake state (which does not vary with time), based on the sensor observations obtained until time slot k. Our results show that optimal closed loop control of M_{k+1} significantly decreases the cost compared to keeping any number of sensors active all the time. Also, among the three algorithms described, we observe that the total cost is minimum for the optimal control of M_{k+1} and maximum for the optimal open loop control of q.
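The open loop variant (algorithm 3) can be sketched as a grid search for the time-invariant wake probability q that balances sensing cost against detection delay. This is a Monte-Carlo toy model, not the paper's MDP formulation: the intrusion is assumed present from slot 0, and the cost figures and detection probability are invented.

```python
import random

def expected_cost(q, n_sensors=10, sense_cost=1.0, delay_penalty=5.0,
                  detect_prob=0.2, horizon=50, trials=1000, seed=0):
    """Monte-Carlo estimate of the cost of the open loop policy: every
    slot, each sensor wakes independently with probability q. Sensing
    cost accrues per awake sensor per slot, and a delay penalty accrues
    for every slot the intrusion goes undetected."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cost = 0.0
        for _slot in range(horizon):
            awake = sum(rng.random() < q for _ in range(n_sensors))
            cost += sense_cost * awake
            if any(rng.random() < detect_prob for _ in range(awake)):
                break  # intrusion detected; this run ends
            cost += delay_penalty
        total += cost
    return total / trials

# Grid search for the optimal (time-invariant) wake probability q.
best_q = min((i / 20 for i in range(1, 21)), key=expected_cost)
```

With these illustrative numbers the optimum lies strictly between "almost always asleep" and "always awake", reflecting the sensing-energy versus detection-delay tradeoff described above.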
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
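The computation-time scaling laws quoted above can be compared numerically. This sketch only encodes the orders of growth (constant multipliers, which the abstract says were estimated by simulation, are ignored), and the algorithm names are the three from the abstract.

```python
import math

def computation_time_order(n, algorithm):
    """Leading-order computation time, per the scaling laws quoted
    above, with all constant factors dropped."""
    if algorithm == "tree":      # Theta(sqrt(n / log n))
        return math.sqrt(n / math.log(n))
    if algorithm == "multihop":  # Theta(n)
        return float(n)
    if algorithm == "ripple":    # Theta(sqrt(n * log n))
        return math.sqrt(n * math.log(n))
    raise ValueError(algorithm)
```

For large n the ordering is tree < Ripple < multihop, and the gap between the Ripple and tree algorithms is exactly a factor of log n, since √(n log n) / √(n/log n) = log n.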
Abstract:
An estimate of the groundwater budget at the catchment scale is extremely important for the sustainable management of available water resources. Water resources are generally subjected to over-exploitation for agricultural and domestic purposes in agrarian economies like India. The double water-table fluctuation method is a reliable method for calculating the water budget in semi-arid crystalline rock areas. Extensive measurements of water levels from a dense network before and after the monsoon rainfall were made in a 53 km² watershed in southern India and various components of the water balance were then calculated. Later, water level data underwent geostatistical analyses to determine the priority and/or redundancy of each measurement point using a cross-validation method. An optimal network evolved from these analyses. The network was then used in re-calculation of the water-balance components. It was established that such an optimized network provides far fewer measurement points without considerably changing the conclusions regarding the groundwater budget. This exercise is helpful in reducing the time and expenditure involved in exhaustive piezometric surveys and also in determining the water budget for large watersheds (watersheds greater than 50 km²).
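The cross-validation idea behind the network optimization can be sketched with a simple stand-in: leave each measurement point out, predict its water level from the rest, and rank points by prediction error. Inverse-distance weighting is used here purely for illustration (the study used geostatistical analyses), and the data in the example are synthetic.

```python
import math

def idw_estimate(target, others, power=2.0):
    """Inverse-distance-weighted estimate of the water level at `target`
    from the remaining measurement points (x, y, level)."""
    num = den = 0.0
    for x, y, level in others:
        d = math.hypot(target[0] - x, target[1] - y)
        w = 1.0 / d ** power
        num += w * level
        den += w
    return num / den

def rank_by_cross_validation(points):
    """Leave one point out at a time and predict its level from the rest:
    small errors flag redundant points, large errors flag priority points."""
    ranking = []
    for i, (x, y, level) in enumerate(points):
        others = points[:i] + points[i + 1:]
        ranking.append((abs(level - idw_estimate((x, y), others)), i))
    return sorted(ranking)  # ascending: most redundant first
```

In a synthetic network where four points agree and a fifth is anomalous, the anomalous point gets the largest cross-validation error, i.e. the highest priority to keep.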
Abstract:
We develop analytical models for estimating the energy spent by stations (STAs) in infrastructure WLANs when performing TCP controlled file downloads. We focus on the energy spent in radio communication when the STAs are in the Continuously Active Mode (CAM), or in the static Power Save Mode (PSM). Our approach is to develop accurate models for obtaining the fractions of time the STA radios spend in idling, receiving and transmitting. We discuss two traffic models for each mode of operation: (i) each STA performs one large file download, and (ii) the STAs perform short file transfers. We evaluate the rate of STA energy expenditure with long file downloads, and show that static PSM is worse than just using CAM. For short file downloads we compute the number of file downloads that can be completed with a given battery capacity, and show that PSM performs better than CAM in this case. We provide a validation of our analytical models using the NS-2 simulator.
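Given the per-state time fractions that the models above produce, the energy computation reduces to a weighted sum of per-state radio powers. The sketch below uses typical-order power numbers as assumptions, not the paper's measured values; the time fractions in the example are likewise invented.

```python
def sta_energy_rate(frac_tx, frac_rx, frac_idle, frac_sleep=0.0,
                    p_tx=1.4, p_rx=0.9, p_idle=0.8, p_sleep=0.05):
    """Average radio power (W) of an STA, given the fraction of time
    spent transmitting, receiving, idling and sleeping. The per-state
    power values are datasheet-style assumptions."""
    assert abs(frac_tx + frac_rx + frac_idle + frac_sleep - 1.0) < 1e-9
    return (frac_tx * p_tx + frac_rx * p_rx
            + frac_idle * p_idle + frac_sleep * p_sleep)

def downloads_per_battery(battery_joules, energy_per_download_joules):
    """Number of complete short downloads a given battery supports."""
    return int(battery_joules // energy_per_download_joules)
```

The example below illustrates why sleep opportunities matter: replacing most idle time with sleep cuts the average power roughly in half, which is the mechanism behind PSM's advantage for short transfers.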
Abstract:
Empirical research available on technology transfer initiatives is either North American or European. Literature over the last two decades shows various research objectives, such as identifying the variables to be measured and the statistical methods to be used when studying university-based technology transfer initiatives. AUTM survey data from the years 1996 to 2008 provide insightful patterns about North American technology transfer initiatives; we use this data in our paper. This paper has three sections, namely a comparison of North American universities with (n=1129) and without medical schools (n=786), an analysis of the top 75th percentile of these samples, and a DEA analysis of these samples. We use 20 variables. Researchers have attempted to classify university-based technology transfer variables into multiple stages, namely disclosures, patents and license agreements. Using the same approach, though with minor variations, three stages are defined in this paper. The first stage concerns inputs from R&D expenditure and outputs, namely invention disclosures. The second stage takes invention disclosures as the input and patents issued as the output. The third stage takes patents issued as the input and technology transfers as the outcomes.
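The three-stage pipeline defined above can be expressed as per-stage output/input ratios; this is a crude stand-in for the DEA efficiency scores actually used, and the numbers in the example are invented.

```python
def stage_efficiencies(rd_expenditure_musd, disclosures, patents_issued, licenses):
    """Output/input ratio for each of the three stages defined above:
    R&D spend -> disclosures -> patents -> technology transfers."""
    return {
        "stage1_disclosures_per_musd_rd": disclosures / rd_expenditure_musd,
        "stage2_patents_per_disclosure": patents_issued / disclosures,
        "stage3_licenses_per_patent": licenses / patents_issued,
    }
```

A university spending $100M on R&D that generates 40 disclosures, 12 patents and 6 license agreements would score 0.4, 0.3 and 0.5 on the three stages respectively.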
Abstract:
Growing demand for urban built spaces has resulted in an unprecedented rise in the production and consumption of building materials in construction. Production of materials requires significant energy and contributes to pollution and greenhouse gas (GHG) emissions. Efforts aimed at reducing the energy consumption and pollution involved in the production of materials fundamentally require their quantification. Embodied energy (EE) of building materials comprises the total energy expenditure involved in material production, including all upstream processes such as raw material extraction and transportation. The current paper deals with the EE of a few common building materials consumed in bulk in the Indian construction industry. These values have been assessed based on actual industrial survey data. Current studies on the EE of building materials lack agreement, primarily with regard to the method of assessment and the energy supply assumptions (whether expressed in terms of end-use energy or primary energy). The current paper examines the suitability of two basic methods, process analysis and the input-output method, and identifies process analysis as appropriate for EE assessment in the Indian context. A comparison of EE values of building materials in terms of the two energy supply assumptions has also been carried out to investigate the associated discrepancy. The results revealed a significant difference in the EE of materials whose production involves significant electrical energy expenditure relative to thermal energy use. (C) 2014 Elsevier B.V. All rights reserved.
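The discrepancy between the two energy supply assumptions can be sketched as follows: on a primary-energy basis, each unit of end-use electricity is divided by the grid conversion efficiency, so electricity-intensive materials diverge most between the two bases. The 30% grid efficiency and the thermal/electrical splits below are illustrative assumptions, not the paper's survey data.

```python
def embodied_energy(thermal_mj, electrical_mj, basis="end_use", grid_efficiency=0.30):
    """Process-analysis embodied energy of a material, on either an
    end-use or a primary-energy basis. On the primary basis, end-use
    electricity is grossed up by the assumed grid conversion efficiency."""
    if basis == "end_use":
        return thermal_mj + electrical_mj
    if basis == "primary":
        return thermal_mj + electrical_mj / grid_efficiency
    raise ValueError(basis)
```

Two hypothetical materials with the same 100 MJ end-use total illustrate the effect: a thermal-heavy one (90 thermal / 10 electrical) grows to about 123 MJ on the primary basis, while an electricity-heavy one (10 / 90) grows to 310 MJ.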
Abstract:
Tradeoffs are examined between mitigating black carbon (BC) and carbon dioxide (CO2) for limiting peak global mean warming, using the following set of methods. A two-box climate model is used to simulate temperatures of the atmosphere and ocean for different rates of mitigation. Mitigation rates for BC and CO2 are characterized by respective timescales for e-folding reduction in the emissions intensity of gross global product. Respective emissions models force the box model, and a simple economics model is used, with the cost of mitigation varying inversely with emission intensity. A constant mitigation timescale corresponds to mitigation at a constant annual rate; for example, an e-folding timescale of 40 years corresponds to a 2.5% reduction each year. The discounted present cost depends only on the respective mitigation timescale and the respective mitigation cost at present levels of emission intensity. Least-cost mitigation is posed as choosing the respective e-folding timescales to minimize total mitigation cost under a temperature constraint (e.g. staying within 2 degrees C above preindustrial). Peak warming is more sensitive to the mitigation timescale for CO2 than for BC. Therefore, rapid mitigation of CO2 emission intensity is essential for limiting peak warming, but simultaneous mitigation of BC can reduce the total mitigation expenditure. (c) 2015 Elsevier B.V. All rights reserved.
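A minimal sketch of this setup, for CO2 only: emissions decay as exp(-t/tau), concentration accumulates with a fixed airborne fraction, and logarithmic forcing drives an atmosphere box coupled to an ocean box. All parameter values and the simplified carbon accounting are invented for illustration, not taken from the paper.

```python
import math

def peak_warming(tau_years, e0=10.0, airborne_fraction=0.45,
                 c0=400.0, forcing_per_doubling=3.7,
                 lam=1.2, gamma=0.7, c_atm=8.0, c_ocean=100.0,
                 dt=0.1, horizon=300.0):
    """Two-box toy model: emissions (GtC/yr) decay with e-folding
    timescale tau_years, CO2 concentration (ppm) accumulates, and the
    resulting log forcing warms an atmosphere box that leaks heat to
    an ocean box. Returns the peak atmospheric temperature anomaly."""
    conc, t_atm, t_oce, peak = c0, 0.0, 0.0, 0.0
    for k in range(int(horizon / dt)):
        t = k * dt
        emissions = e0 * math.exp(-t / tau_years)       # e-folding mitigation
        conc += airborne_fraction * emissions * dt * 0.47  # ~ppm per GtC, assumed
        forcing = forcing_per_doubling * math.log(conc / c0) / math.log(2.0)
        d_atm = (forcing - lam * t_atm - gamma * (t_atm - t_oce)) / c_atm
        d_oce = gamma * (t_atm - t_oce) / c_ocean
        t_atm += d_atm * dt
        t_oce += d_oce * dt
        peak = max(peak, t_atm)
    return peak
```

Shortening the e-folding timescale (faster mitigation) lowers cumulative emissions and hence the peak warming, which is the sensitivity the tradeoff analysis above exploits.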
Abstract:
In this paper, soft lunar landing with minimum fuel expenditure is formulated as a nonlinear optimal guidance problem. Pinpoint soft landing with terminal velocity and position constraints is achieved using Model Predictive Static Programming (MPSP). High accuracy of the terminal conditions is ensured because the MPSP formulation inherently poses the final conditions as a set of hard constraints. Its computational efficiency and fast convergence make MPSP preferable as a fixed-final-time onboard optimal guidance algorithm. It has also been observed that the minimum fuel requirement depends strongly on the choice of the final time (a critical point that is not given due importance in much of the literature). Hence, to select the final time optimally, a neural network is used to learn the mapping between various initial conditions in the domain of interest and the corresponding optimal flight time. To generate the training data set, the optimal final time is computed offline using a gradient-based optimization technique. The effectiveness of the proposed method is demonstrated with rigorous simulation results.
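The learned mapping from initial condition to optimal final time can be sketched with a nearest-neighbour lookup standing in for the neural network; the training triples (altitude, vertical speed, optimal final time) are hypothetical, in place of the offline gradient-based optimization described above.

```python
def optimal_tf_lookup(training_data, initial_condition):
    """Nearest-neighbour stand-in for the neural network: return the
    offline-computed optimal final time of the training sample whose
    initial condition (altitude m, vertical speed m/s) is closest."""
    alt, vel = initial_condition
    closest = min(training_data,
                  key=lambda rec: (rec[0] - alt) ** 2 + (rec[1] - vel) ** 2)
    return closest[2]  # optimal final time t_f in seconds
```

A query near one of the stored initial conditions returns that sample's final time, which a guidance loop would then hand to the fixed-final-time MPSP solver.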