54 results for Cost of debt

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

People in many countries are affected by fluorosis owing to the high levels of fluoride in drinking water. An inexpensive method for estimating the concentration of the fluoride ion in drinking water would be helpful in identifying safe sources of water and also in monitoring the performance of defluoridation techniques. For this purpose, a simple, inexpensive, and portable colorimeter has been developed in the present work. It is used in conjunction with the SPADNS method, which shows a color change in the visible region on addition of water containing fluoride to a reagent solution. Groundwater samples were collected from different parts of the state of Karnataka, India and analysed for fluoride. The results obtained using the colorimeter and the double beam spectrophotometer agreed fairly well. The costs of the colorimeter and of the chemicals required per test were about Rs. 250 (US$ 5) and Rs. 2.5 (US$ 0.05), respectively. In addition, the cost of the chemicals required for constructing the calibration curve was about Rs. 15 (US$ 0.3). (C) 2010 Elsevier B.V. All rights reserved.
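
As a rough illustration of how such colorimeter readings can be turned into a fluoride concentration, the sketch below fits a linear calibration curve to absorbance readings of standards and inverts it for a sample; the standard concentrations, readings and function names are illustrative assumptions, not data from the paper (Python).

    # Minimal sketch: linear calibration of a SPADNS-style colorimeter.
    # All standard concentrations and absorbance readings below are assumed values.
    import numpy as np

    standards_mg_per_l = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # known fluoride standards
    absorbance = np.array([0.62, 0.55, 0.48, 0.41, 0.35])      # SPADNS absorbance falls as fluoride rises

    # Least-squares line: absorbance = slope * concentration + intercept
    slope, intercept = np.polyfit(standards_mg_per_l, absorbance, 1)

    def fluoride_mg_per_l(sample_absorbance):
        """Invert the calibration line to estimate fluoride in a sample."""
        return (sample_absorbance - intercept) / slope

    print(round(fluoride_mg_per_l(0.50), 2))   # roughly 0.9 mg/L for the assumed readings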

Relevance:

100.00%

Publisher:

Abstract:

Synthesis of cost-optimal shell-and-tube heat exchangers is a difficult task since it involves a large number of parameters. An attempt is made in this article to simplify the process of choosing the parameter values that will minimize the cost of any heat exchanger satisfying a given heat duty and a particular set of constraints. The simplification is based on decoupling of the geometric and the thermal aspects of the problem. The concept of curves for cost-optimal design is introduced and is shown to simplify the synthesis process for shell-and-tube heat exchangers.
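
To make the decoupling idea concrete, the toy sketch below treats the thermal side as fixing the required area from the duty (Q = U·A·LMTD) and the geometric side as a cost model evaluated per candidate design; the numbers, candidate list and cost model are assumptions for illustration, not the paper's procedure (Python).

    # Toy illustration of decoupled synthesis: the thermal calculation fixes the area,
    # and cost is then a function of geometry alone. All values are assumed.
    Q = 500e3            # heat duty, W
    LMTD = 30.0          # log-mean temperature difference, K
    COST_PER_M2 = 150.0  # cost per square metre of tube surface, $

    def required_area(U):                 # thermal side: Q = U * A * LMTD
        return Q / (U * LMTD)

    def design_cost(area, tube_count):    # geometric/cost side (toy model)
        return COST_PER_M2 * area + 20.0 * tube_count

    candidates = [(800.0, 120), (1000.0, 200), (1200.0, 320)]   # (U in W/m2K, tube count)
    best = min(candidates, key=lambda c: design_cost(required_area(c[0]), c[1]))
    print(best, round(design_cost(required_area(best[0]), best[1])))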

Relevance:

100.00%

Publisher:

Abstract:

Energy harvesting sensor nodes are gaining popularity due to their ability to improve the network life time and are becoming a preferred choice supporting green communication. In this paper, we focus on communicating reliably over an additive white Gaussian noise channel using such an energy harvesting sensor node. An important part of this paper involves appropriate modeling of energy harvesting, as done via various practical architectures. Our main result is the characterization of the Shannon capacity of the communication system. The key technical challenge involves dealing with the dynamic (and stochastic) nature of the (quadratic) cost of the input to the channel. As a corollary, we find close connections between the capacity achieving energy management policies and the queueing theoretic throughput optimal policies.
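
For a sense of scale, if, as reported in related results on this channel model, the capacity coincides with that of an average-power-constrained AWGN channel whose power budget equals the mean harvested energy per channel use, it can be evaluated directly; the numerical values below are assumptions, not figures from the paper (Python).

    # Hedged numerical illustration of C = 0.5 * log2(1 + mean_harvest / noise_power),
    # assuming the capacity equals that of an AWGN channel with average power equal
    # to the mean harvested energy per channel use. Values are placeholders.
    import math

    mean_harvest_per_use = 2.0e-3   # joules harvested per channel use (assumed)
    noise_power = 1.0e-3            # AWGN noise variance (assumed)

    capacity = 0.5 * math.log2(1 + mean_harvest_per_use / noise_power)
    print(f"{capacity:.3f} bits per channel use")   # about 0.792 for these values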

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a combination of technologies to provide an Energy-on-Demand (EoD) service that enables low-cost innovation suitable for microgrid networks. The system is designed around the low-cost, simple Rural Energy Device (RED) Box, which, in combination with Short Message Service (SMS) communication, serves as an elementary proxy for the smart meters typically used in urban settings. Further, customer behavior and familiarity with such devices, gained from mobile-phone experience, have been incorporated into the design philosophy. Customers are incentivized to interact with the system, thus providing valuable behavioral and usage data to the Utility Service Provider (USP). Data collected over time can be used by the USP for analytics, envisioned to run on remote computing services, i.e. cloud computing. Cloud computing allows computational resources to be shared at the virtual level across several networks. The customer-system interaction is facilitated by a third-party Telecom Service Provider (TSP). The approximate cost of the RED Box is envisaged to be under USD 10 at production scale.
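
As a purely hypothetical sketch of the SMS-as-meter-proxy idea, the snippet below parses a made-up RED Box reading message at the USP back end; the message format, field names and regular expression are assumptions, not a specification from the paper (Python).

    # Hypothetical parser for a RED Box SMS reading; the "RED;<id>;<kWh>;<unix time>"
    # format is an illustrative assumption, not the system's actual protocol.
    import re

    SMS_PATTERN = re.compile(r"RED;(?P<device_id>\w+);(?P<kwh>\d+(?:\.\d+)?);(?P<timestamp>\d{10})")

    def parse_red_sms(body):
        """Extract device id, energy used (kWh) and Unix timestamp from one SMS."""
        match = SMS_PATTERN.match(body)
        if match is None:
            raise ValueError("unrecognised RED Box message")
        return {"device_id": match["device_id"],
                "kwh": float(match["kwh"]),
                "timestamp": int(match["timestamp"])}

    print(parse_red_sms("RED;RB0042;1.75;1700000000"))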

Relevance:

100.00%

Publisher:

Abstract:

Predation risk can strongly constrain how individuals use time and space. Grouping is known to reduce an individual's time investment in costly antipredator behaviours. Whether grouping might similarly provide a spatial release from antipredator behaviour and allow individuals to use risky habitat more and, thus, improve their access to resources is poorly known. We used mosquito larvae, Aedes aegypti, to test the hypothesis that grouping facilitates the use of high-risk habitat. We provided two habitats, one darker, low-risk and one lighter, high-risk, and measured the relative time spent in the latter by solitary larvae versus larvae in small groups. We tested larvae reared under different resource levels, and thus presumed to vary in body condition, because condition is known to influence risk taking. We also varied the degree of contrast in habitat structure. We predicted that individuals in groups should use high-risk habitat more than solitary individuals allowing for influences of body condition and contrast in habitat structure. Grouping strongly influenced the time spent in the high-risk habitat, but, contrary to our expectation, individuals in groups spent less time in the high-risk habitat than solitary individuals. Furthermore, solitary individuals considerably increased the proportion of time spent in the high-risk habitat over time, whereas individuals in groups did not. Both solitary individuals and those in groups showed a small increase over time in their use of riskier locations within each habitat. The differences between solitary individuals and those in groups held across all resource and contrast conditions. Grouping may, thus, carry a poorly understood cost of constraining habitat use. This cost may arise because movement traits important for maintaining group cohesion (a result of strong selection on grouping) can act to exaggerate an individual preference for low-risk habitat. Further research is needed to examine the interplay between grouping, individual movement and habitat use traits in environments heterogeneous in risk and resources. (C) 2015 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

We present here a calculation of the inertial mass of a moving vortex in cuprate superconductors. This is a poorly known basic quantity of obvious interest in vortex dynamics. The motion of a vortex causes a dipolar density distortion and an associated electric field which is screened. The energy cost of the density distortion as well as the related screened electric field contributes to the vortex mass, which is small because of efficient screening. As a preliminary, we present a discussion and calculation of the vortex mass using a microscopically derivable phase-only action functional for the far region which shows that the contribution from the far region is negligible and that most of it arises from the (small) core region of the vortex. A calculation based on a phenomenological Ginzburg-Landau functional is performed in the core region. Unfortunately such a calculation is unreliable; the reasons for it are discussed. A credible calculation of the vortex mass thus requires a fully microscopic non-coarse-grained theory. This is developed, and results are presented for an s-wave BCS-like gap, with parameters appropriate to the cuprates. The mass, about 0.5m(e) per layer, for a magnetic field along the c axis arises from deformation of quasiparticle states bound in the core and screening effects mentioned above. We discuss earlier results, possible extensions to d-wave symmetry, and observability of effects dependent on the inertial mass. [S0163-1829(97)05534-3].

Relevance:

90.00%

Publisher:

Abstract:

We present a new, generic method/model for multi-objective design optimization of laminated composite components using a novel multi-objective optimization algorithm developed on the basis of the Quantum-behaved Particle Swarm Optimization (QPSO) paradigm. QPSO is a variant of the popular Particle Swarm Optimization (PSO) algorithm and has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based failure criterion, the maximum-stress failure criterion and the Tsai-Wu failure criterion. The optimization method is validated for a number of different loading configurations - uniaxial, biaxial and bending loads. The design optimization has been carried out for both variable stacking sequences and fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Also, the performance of QPSO is compared with that of conventional PSO.
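
The sketch below shows, in simplified form, the kind of per-candidate evaluation such a multi-objective search relies on: weight and cost of a candidate laminate plus a maximum-stress screening of per-ply stresses. The material constants and ply stresses are assumed placeholders; the paper's actual evaluation uses classical lamination theory and three failure criteria (Python).

    # Simplified fitness evaluation for one candidate laminate (illustrative values only).
    DENSITY = 1600.0     # kg/m^3, assumed CFRP density
    COST_PER_KG = 40.0   # $/kg, assumed material cost
    X_T, Y_T, S = 1500e6, 40e6, 70e6   # allowable sigma1, sigma2, tau12 in Pa (assumed)

    def evaluate(ply_thicknesses_m, ply_stresses, area_m2=1.0):
        """Return (weight_kg, cost_usd, feasible) for one candidate stacking."""
        weight = DENSITY * area_m2 * sum(ply_thicknesses_m)
        cost = COST_PER_KG * weight
        feasible = all(abs(s1) <= X_T and abs(s2) <= Y_T and abs(t12) <= S
                       for (s1, s2, t12) in ply_stresses)
        return weight, cost, feasible

    # Four plies of 0.125 mm with illustrative in-plane ply stresses (Pa).
    print(evaluate([125e-6] * 4, [(900e6, 20e6, 30e6)] * 4))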

Relevance:

90.00%

Publisher:

Abstract:

This paper recasts the multiple data path assignment problem, solved by Torng and Wilhelm using the dynamic programming method [1], into a minimal covering problem following a switching-theoretic approach. The concept of bus compatibility for the data transfers is used to obtain the various ways of interconnecting the circuit modules with the minimum number of buses that allow concurrent data transfers. These have been called the feasible solutions of the problem. The minimal-cost solutions are obtained by assigning weights to the bus-compatible sets present in the feasible solutions. Minimization of the cost of the solution by increasing the number of buses is also discussed.
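
A small sketch of the covering view is given below: transfers that are never active in the same time step are treated as bus compatible, the compatible groups are enumerated, and a cover of the transfers yields the bus count. The example data and the greedy choice (rather than an exact minimal cover with weights) are simplifications for illustration (Python).

    # Illustrative covering formulation: compatible groups of transfers share one bus.
    from itertools import combinations

    # transfer -> time steps in which it is active (assumed example data)
    transfers = {"A": {1}, "B": {2}, "C": {1, 3}, "D": {2}}

    def compatible(group):
        """A group can share a bus if no two of its transfers are active simultaneously."""
        return all(transfers[x].isdisjoint(transfers[y]) for x, y in combinations(group, 2))

    groups = [set(g) for r in range(1, len(transfers) + 1)
              for g in combinations(transfers, r) if compatible(g)]

    uncovered, buses = set(transfers), []
    while uncovered:                      # greedy cover of all transfers
        best = max(groups, key=lambda g: len(g & uncovered))
        buses.append(best)
        uncovered -= best
    print(buses)                          # e.g. two buses: {'A', 'B'} and {'C', 'D'}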

Relevance:

90.00%

Publisher:

Abstract:

The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals, which makes this a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in (m choose K) ways when m > K, or (K choose m) ways when K > m. All possible assignments are feasible, i.e. a region can contain 0, 1, ..., m concentrators. Each possible assignment is taken to represent a state of the variable-structure stochastic automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution, and at each visit it selects a point inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated; the probabilities of all the states are then updated, being taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. A local gradient search within that state then determines the exact locations of the concentrators. The algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
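
A condensed sketch of the automaton-style search loop is given below: each state (a candidate assignment) keeps a running average of sampled costs, and the selection probabilities are made inversely proportional to those averages so the automaton settles on the cheapest state. The states and cost values are stand-ins, not the paper's concentrator formulation (Python).

    # Toy variable-structure stochastic automaton: selection probabilities kept
    # inversely proportional to the running average cost of each state (stand-in data).
    import random

    states = ["s1", "s2", "s3"]
    true_cost = {"s1": 10.0, "s2": 4.0, "s3": 7.0}     # sampled noisily by the search
    avg_cost = {s: 0.0 for s in states}
    visits = {s: 0 for s in states}
    probs = {s: 1.0 / len(states) for s in states}     # start uniform

    random.seed(1)
    for _ in range(500):
        s = random.choices(states, weights=[probs[x] for x in states])[0]
        sampled = true_cost[s] + random.uniform(-1.0, 1.0)   # "point inside the state"
        visits[s] += 1
        avg_cost[s] += (sampled - avg_cost[s]) / visits[s]   # running average cost
        inv = {x: (1.0 / avg_cost[x]) if visits[x] else 1.0 for x in states}
        total = sum(inv.values())
        probs = {x: inv[x] / total for x in states}          # inversely proportional to cost

    print(max(probs, key=probs.get))   # converges towards "s2", the lowest-cost state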

Relevance:

90.00%

Publisher:

Abstract:

Under certain special conditions natural selection can be effective at the level of local populations, or demes. Such interpopulation selection will favor genotypes that reduce the probability of extinction of their parent population even at the cost of a lowered inclusive fitness. Such genotypes may be characterized by altruistic traits only in a viscous population, i.e., in a population in which neighbors tend to be closely related. In a non-viscous population the interpopulation selection will instead favor spiteful traits when the populations are susceptible to extinction through the overutilization of the habitat, and cooperative traits when it is the newly established populations that are in the greatest danger of extinction.

Relevance:

90.00%

Publisher:

Abstract:

Wear of dies is a serious problem in the forging industry. The materials used for the dies are generally expensive steel alloys and the dies require costly heat treatment and surface finishing operations. Degeneration of the die profile implies rejection of forged components and necessitates resinking or replacement of the die. Measures which reduce wear of the die can therefore aid in the reduction of production costs. The work reported here is the first phase of a study of the causes of die wear in forging production where the batch size is small and the machine employed is a light hammer. This is a problem characteristic of the medium and small scale area of the forging industry where the cost of dies is a significant proportion of the total capital investment. For the same energy input and under unlubricated conditions, die wear has been found to be sensitive to forging temperature; in cold forging the yield strength of the die material is the prime factor governing the degeneration of the die profile, whilst in hot forging the wear resistance of the die material is the main factor which determines the rate of die wear. At an intermediate temperature, such as that characteristic of warm forging, the die wear is found to be less than that in both cold and hot forging. This preliminary study therefore points to the fact that the forging temperature must be taken into account in the selection of die material. Further, the forging industry must take serious note of the warm forging process, as it not only provides good surface finish, as claimed by many authors, but also has an inherent tendency to minimize die wear.

Relevance:

90.00%

Publisher:

Abstract:

We present a generic method/model for multi-objective design optimization of laminated composite components, based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work a modified version of the VEPSO algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based failure criterion, the maximum-stress failure criterion and the Tsai-Wu failure criterion. The optimization method is validated for a number of different loading configurations - uniaxial, biaxial and bending loads. The design optimization has been carried out for both variable stacking sequences and fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. (C) 2007 Elsevier Ltd. All rights reserved.
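
The toy sketch below illustrates only the vector-evaluated mechanism: one sub-swarm per objective, each steered by the best position found by the other sub-swarm. The two objective functions and the single continuous design variable are placeholders, not the paper's discrete composite formulation (Python).

    # Two cooperating sub-swarms, one per objective, exchanging their best positions.
    import random

    def weight(t): return 10.0 * t                 # toy objective 1
    def cost(t):   return 5.0 * t + 2.0 / t        # toy objective 2

    random.seed(0)
    swarms = [{"obj": weight, "pos": [random.uniform(0.1, 2.0) for _ in range(5)]},
              {"obj": cost,   "pos": [random.uniform(0.1, 2.0) for _ in range(5)]}]
    for s in swarms:
        s["vel"] = [0.0] * len(s["pos"])
        s["best"] = min(s["pos"], key=s["obj"])

    for _ in range(50):
        for i, s in enumerate(swarms):
            guide = swarms[1 - i]["best"]          # social guide comes from the other swarm
            for j, x in enumerate(s["pos"]):
                s["vel"][j] = (0.7 * s["vel"][j]
                               + 1.5 * random.random() * (s["best"] - x)
                               + 1.5 * random.random() * (guide - x))
                s["pos"][j] = max(0.1, x + s["vel"][j])
            s["best"] = min(s["pos"] + [s["best"]], key=s["obj"])

    print(round(swarms[0]["best"], 3), round(swarms[1]["best"], 3))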

Relevance:

90.00%

Publisher:

Abstract:

Submergence of land is a major impact of large hydropower projects. Such projects are often also dogged by siltation, delays in construction and heavy debt burdens - factors that are not considered in the project planning exercise. A simple constrained optimization model for the benefit-cost analysis of large hydropower projects that considers these features is proposed. The model is then applied to two sites in India. Using the potential productivity of an energy plantation on the submergible land is suggested as a reasonable approach to estimating the opportunity cost of submergence. Optimum project dimensions are calculated for various scenarios. Results indicate that the inclusion of submergence cost may lead to a substantial reduction in net present value and hence in project viability. Parameters such as project lifespan, construction time, discount rate and external debt burden are also of significance. The designs proposed by the planners are found to be uneconomic, while even the optimal design may not be viable for more typical scenarios. The concept of energy opportunity cost is useful for preliminary screening; some projects may require more detailed calculations. The optimization approach helps identify significant trade-offs between energy generation and land availability.
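
A back-of-envelope version of the costing idea is sketched below: the opportunity cost of submerged land is valued as the yearly output an energy plantation on that land could have delivered, and the project NPV is computed with that cost included. Every number is an assumed placeholder, not a figure from the paper (Python).

    # Toy NPV with the opportunity cost of submergence charged against generation benefits.
    def npv(cashflows, rate):
        """Discount yearly cashflows (year 0 first) at the given rate."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    capital = 800e6                   # construction cost, $ (assumed)
    generation_value = 90e6           # yearly value of hydropower output, $ (assumed)
    submerged_area_ha = 10_000        # land lost to the reservoir, ha (assumed)
    plantation_value_per_ha = 900.0   # yearly value of foregone plantation energy, $/ha (assumed)
    life_years, discount_rate = 40, 0.10

    yearly_net = generation_value - submerged_area_ha * plantation_value_per_ha
    cashflows = [-capital] + [yearly_net] * life_years
    # About -8 million USD for these assumptions, i.e. marginally unviable once submergence is costed.
    print(f"NPV with submergence cost: {npv(cashflows, discount_rate) / 1e6:,.0f} million USD")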

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we present a generic method/model for multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector-evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based failure criterion, the maximum-stress failure criterion and the Tsai-Wu failure criterion. The optimization method is validated for a number of different loading configurations - uniaxial, biaxial and bending loads. The design optimization has been carried out for both variable stacking sequences and fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated in comparison with other nature-inspired techniques, including Particle Swarm Optimization (PSO), Artificial Immune System (AIS) and Genetic Algorithm (GA). The performance of ABC is on par with that of PSO, AIS and GA for all the loading configurations. (C) 2009 Elsevier B.V. All rights reserved.