963 results for War, Cost of


Relevance:

90.00%

Publisher:

Abstract:

Fuel cell-based automobiles have gained attention in the last few years due to growing public concern about urban air pollution and the consequent environmental problems. From an analysis of the power and energy requirements of a modern car, it is estimated that a base sustainable power of ca. 50 kW, supplemented with short bursts of up to 80 kW, will suffice for most driving requirements. The energy demand depends greatly on driving characteristics but under normal usage is expected to be 200 Wh/km. The advantages and disadvantages of candidate fuel cell systems and various fuels are considered, together with the question of whether the fuel should be converted directly in the fuel cell or reformed to hydrogen onboard the vehicle. For fuel cell vehicles to compete successfully with conventional internal-combustion engine vehicles, it appears that direct-conversion fuel cells, using probably hydrogen but possibly methanol, are the only realistic contenders for road transportation. Among the available fuel cell technologies, polymer-electrolyte fuel cells fueled directly with hydrogen appear to be the best option for powering fuel cell vehicles, as there is every prospect that they will exceed the performance of internal-combustion engine vehicles in all respects but first cost. A target cost of $50/kW would be mandatory to make polymer-electrolyte fuel cells competitive with internal combustion engines, and can only be achieved with design changes that substantially reduce the quantity of materials used. At present, prominent car manufacturers are devoting substantial research and development effort to fuel cell vehicles and project the start of production by 2005.
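As a quick sanity check, the 200 Wh/km energy demand and the 50/80 kW power split quoted above can be related with elementary arithmetic. The 100 km/h cruise speed below is an assumed value for illustration, not a figure from the abstract.

```python
# Back-of-the-envelope check of the abstract's figures. The energy demand,
# peak power, and cost target come from the text; the cruise speed is an
# illustrative assumption.
energy_per_km_wh = 200.0      # Wh/km, stated typical energy demand
cruise_speed_kmh = 100.0      # km/h, assumed highway cruise speed

# Average power at cruise = energy per km * distance covered per hour
cruise_power_kw = energy_per_km_wh * cruise_speed_kmh / 1000.0  # kW
print(f"Cruise power at {cruise_speed_kmh:.0f} km/h: {cruise_power_kw:.0f} kW")

# Implied stack cost at the $50/kW target, sized for the 80 kW peak rating
target_cost_per_kw = 50.0     # $/kW, stated target
peak_power_kw = 80.0          # kW, stated short-burst requirement
stack_cost = target_cost_per_kw * peak_power_kw
print(f"Implied stack cost target: ${stack_cost:.0f}")
```

At the assumed 100 km/h, the 20 kW cruise power sits well below the 50 kW sustained rating, consistent with the base-plus-burst split described in the abstract.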

Relevance:

90.00%

Publisher:

Abstract:

In this article we study the problem of joint congestion control, routing, and MAC-layer scheduling in a multi-hop wireless mesh network where the nodes are subject to maximum energy expenditure rates. We model link contention in the wireless network using the contention graph, and we model the energy expenditure rate constraints of the nodes using the energy expenditure rate matrix. We formulate the problem as an aggregate utility maximization and apply duality theory to decompose it into two sub-problems: a network-layer routing and congestion control problem, and a MAC-layer scheduling problem. Each source adjusts its rate based on the cost of the least-cost path to its destination, where the cost of a path includes not only the prices of its links but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the link prices. We study the effects of the nodes' energy expenditure rate constraints on the optimal throughput of the network.
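The price-based decomposition described above can be illustrated with a minimal sketch: a log utility, a single source on a fixed two-link path, and a dual subgradient update on the link prices. Node energy prices and MAC-layer scheduling are omitted, and all constants are illustrative assumptions rather than the paper's model.

```python
# Minimal dual-decomposition sketch: the source solves max log(x) - x * price,
# which gives x = 1 / path_price; each link raises its price when demand
# exceeds capacity and lowers it otherwise (projected subgradient step).
capacities = [1.0, 2.0]        # capacities of the two links on the path
prices = [1.0, 1.0]            # dual variables (link prices)
step = 0.05                    # subgradient step size

for _ in range(2000):
    path_price = sum(prices)
    x = 1.0 / path_price       # maximizer of log(x) - x * path_price
    # Price update: keep prices non-negative (projection onto [0, inf))
    prices = [max(0.0, p + step * (x - c)) for p, c in zip(prices, capacities)]

path_price = sum(prices)
x = 1.0 / path_price
print(f"converged rate = {x:.3f}")  # approaches the bottleneck capacity 1.0
```

At the optimum only the bottleneck link carries a positive price, and the source rate settles at that link's capacity, which is the behavior the dual decomposition is designed to produce.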

Relevance:

90.00%

Publisher:

Abstract:

This paper is concerned with the dynamic analysis of flexible, non-linear multi-body beam systems. The focus is on problems where the strains within each elastic body (beam) remain small. Based on geometrically non-linear elasticity theory, the non-linear 3-D beam problem splits into either a linear or non-linear 2-D analysis of the beam cross-section and a non-linear 1-D analysis along the beam reference line. The splitting of the three-dimensional beam problem into two- and one-dimensional parts, called dimensional reduction, results in a tremendous saving of computational effort relative to the cost of three-dimensional finite element analysis, the only alternative for realistic beams. The analysis of beam-like structures made of laminated composite materials requires a much more complicated methodology. Hence, the analysis procedure based on the Variational Asymptotic Method (VAM), a tool to carry out the dimensional reduction, is used here. The analysis methodology can be viewed as a 3-step procedure. First, the sectional properties of beams made of composite materials are determined either by an asymptotic procedure that involves a 2-D non-linear finite element analysis of the beam cross-section to capture the trapeze effect, or by a strip-like beam analysis starting from Classical Laminated Shell Theory (CLST). Second, the dynamic response of non-linear, flexible multi-body beam systems is simulated within the framework of energy-preserving and energy-decaying time integration schemes that provide unconditional stability for non-linear beam systems. Finally, local 3-D responses in the beams are recovered based on the 1-D responses predicted in the second step. Numerical examples are presented and results from this analysis are compared with those available in the literature.

Relevance:

90.00%

Publisher:

Abstract:

An earlier CFD analysis of the performance of a gas-dynamically controlled laser cavity [1] suggested that the diffuser geometry could be optimized, reducing both the size and the cost of the system, by examining the diffuser's critical dimensional requirements. Consequently, an extensive CFD analysis has been carried out for a range of diffuser configurations by simulating the supersonic flow through the arrangement, including the laser cavity driven by a bank of converging-diverging nozzles and the diffuser. Numerical investigations with a 3-D RANS code capture the flow patterns through the diffuser downstream of the cavity, where multiple supersonic jet interactions with shocks produce a complex flow field. The length of the diffuser plates is the basic parameter of the study. The analysis reveals that the pressure-recovery pattern through the diffuser, which is critical for the performance of the laser device, depends only weakly on diffuser length beyond a critical lower limit; evaluating this limit provides a design guideline for a more efficient system configuration. The parametric study shows that the pressure-recovery transients in the near vicinity of the cavity are unaffected by reductions in the length of the diffuser plates down to 10% of the initial size, indicating that the first configuration tested experimentally had a large design margin. The flow stability in the laser cavity is found to be unaffected, since a strong, stable shock is located at the leading edge of the diffuser plates, while the downstream shock and flow patterns change, as one would expect.
Results are presented for diffuser lengths ranging from 10% of the full length up to the full length, with the experimentally tested configuration of the earlier study [1] as the reference. The conclusions drawn from the analysis are significant because they provide new design considerations, based on an understanding of the intricacies of the flow, that allow a hardware optimization leading to substantial size reduction of the device with no loss of performance.

Relevance:

90.00%

Publisher:

Abstract:

We consider the problem of scheduling a wireless channel (server) among several queues. Each queue has its own link (transmission) rate, which can vary randomly from slot to slot. The queue lengths and channel states of all users are known at the beginning of each slot. We show the existence of an optimal policy that minimizes the long-term (discounted) average sum of queue lengths. The optimal policy, in general, needs to be computed numerically. We then identify a greedy (one-step optimal) policy, MAX-TRANS, which is easy to implement and does not require the channel and traffic statistics. The cost of this policy is close to optimal and better than that of other well-known policies (when stable), although it is not throughput optimal for asymmetric systems. We (approximately) identify its stability region and obtain approximations for its mean queue lengths and mean delays. We also modify this policy to make it throughput optimal while retaining good performance.
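A toy simulation can illustrate the greedy rule: in each slot, MAX-TRANS serves the queue that can clear the most data, i.e. the one maximizing min(queue length, link rate). The two-user setup, arrival rates, and rate distribution below are illustrative assumptions, not the system model of the paper.

```python
import random

# Toy sketch of the greedy MAX-TRANS rule: serve the user with the largest
# amount of transmittable data, min(queue_length, link_rate). Arrival and
# rate distributions here are illustrative assumptions.
random.seed(1)
queues = [0.0, 0.0]
arrival_rates = [0.3, 0.3]             # mean arrivals per slot (fluid model)
rate_choices = [0.0, 1.0, 2.0]         # possible per-slot link rates

def max_trans_pick(queues, rates):
    """Index of the queue served by the greedy MAX-TRANS rule."""
    return max(range(len(queues)), key=lambda i: min(queues[i], rates[i]))

total_q = 0.0
slots = 5000
for _ in range(slots):
    rates = [random.choice(rate_choices) for _ in queues]
    served = max_trans_pick(queues, rates)
    queues[served] = max(0.0, queues[served] - rates[served])  # transmit
    for i, lam in enumerate(arrival_rates):
        queues[i] += lam                # fluid arrivals for simplicity
    total_q += sum(queues)

print(f"mean total queue length = {total_q / slots:.2f}")
```

The rule needs only the current queue lengths and channel states, which is the "easy to implement, no statistics required" property the abstract highlights.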

Relevance:

90.00%

Publisher:

Abstract:

It is being realized that the traditional closed-door, market-driven approach to drug discovery may not be the best-suited model for diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for these patients, it is necessary to formulate an alternative paradigm for the drug discovery process. The current model, constrained by limits on collaboration and on the confidential sharing of resources, hampers opportunities to bring in expertise from diverse fields, and these limitations hinder the possibility of lowering the cost of drug discovery. The Open Source Drug Discovery project, initiated by the Council of Scientific and Industrial Research, India, has adopted an open source model to enable wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, multi-faceted approaches, and the accrual of benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to sustain continuous development by generating a storehouse of alternatives in the continued pursuit of new drugs. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials in a non-exclusive manner through the participation of multiple companies, with majority funding from Open Source Drug Discovery. This will ensure the availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what Linux and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

The physical chemistry of "aluminothermic" reduction of calcium oxide in vacuum is analyzed. The basic thermodynamic data required for the analysis have been generated by a variety of experiments, including activity measurements in liquid Al-Ca alloys and determination of the Gibbs energies of formation of calcium aluminates. These data have been correlated with phase relations in the Ca-Al-O system at 1373 K. The various stages of reduction, the end products, and the corresponding equilibrium partial pressures of calcium have been established from thermodynamic considerations. In principle, the recovery of calcium can be improved by reducing the pressure in the reactor. However, the cost of a high-vacuum system and the longer reduction time needed to achieve higher yields make such a practice uneconomic. Aluminum contamination of the calcium also increases at low pressures. The best compromise is to carry the reduction up to the stage where 3CaO·Al2O3 is formed as the product. This corresponds to an equilibrium calcium partial pressure of 31.3 Pa at 1373 K and 91.6 Pa at 1460 K. Calcium can be extracted at this pressure using mechanical pumps in approximately 8 to 15 hr, depending on the size and fill ratio of the retort and the porosity of the charge briquettes.
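The two equilibrium pressures quoted above can be checked for mutual consistency with a two-point Clausius-Clapeyron estimate, treating the reduction as a single vaporization-like equilibrium. The apparent enthalpy extracted below is an illustrative calculation under that assumption, not a value reported in the work.

```python
import math

# Two-point Clausius-Clapeyron estimate from the equilibrium calcium
# pressures quoted in the abstract. The extracted "apparent enthalpy" is
# only a consistency check, not a quantity from the paper.
R = 8.314               # J/(mol K), gas constant
p1, T1 = 31.3, 1373.0   # Pa, K  (first quoted equilibrium point)
p2, T2 = 91.6, 1460.0   # Pa, K  (second quoted equilibrium point)

# ln(p2/p1) = (dH/R) * (1/T1 - 1/T2)  =>  solve for dH
dH = R * math.log(p2 / p1) / (1.0 / T1 - 1.0 / T2)   # J/mol
print(f"apparent enthalpy = {dH / 1000:.0f} kJ/mol")
```

The estimate comes out near 206 kJ/mol of calcium vapor, a strongly endothermic value consistent with the quoted pressures rising steeply over only 87 K.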

Relevance:

90.00%

Publisher:

Abstract:

There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a flow regime in which inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of the body that undergoes such undulatory motions. In the anguilliform mode, or the eel type, the entire body undergoes undulatory motion in the form of a travelling wave that passes from head to tail, while in the other extreme case, the thunniform mode, only the rear tail (caudal fin) undergoes lateral oscillations. The thunniform mode of swimming is essentially based on the lift force generated by the airfoil-like cross-section of the fish tail as it moves laterally through the water, while the anguilliform mode may be understood using the "reactive theory" of Lighthill. In pulsed-jet propulsion, adopted by squids and salps, there are two components to the thrust: the first due to the familiar ejection of momentum, and the other due to an over-pressure at the exit plane caused by the unsteadiness of the jet. The flow immediately downstream of the body in all three modes consists of vortex rings, the differentiating point being the vastly different orientations of the rings. However, since all these bodies are self-propelling, the thrust force must equal the drag force (at steady speed), implying no net force on the body; hence the wake, or flow downstream, must be momentumless. For such bodies, where there is no net force, it is difficult to define a propulsion efficiency directly, although it is possible to use other, very different measures like the "cost of transportation" to broadly judge performance.

Relevance:

90.00%

Publisher:

Abstract:

To reduce the cost of disposing of the large quantities of fly ash generated, and the environmental problems associated with it, efforts are made to utilize fly ash in geotechnical applications. The geotechnical properties of fly ash play a key role in enhancing such applications. Physical properties and chemical composition control the index properties and engineering behaviour. The paper brings out the role of surface area, surface characteristics, reactive silica, and lime content of fly ashes on their index, compaction, consolidation, and strength properties.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, a new proportional-navigation guidance law, called retro-proportional navigation, is proposed. The guidance law is designed to intercept targets of higher speed than the interceptor, a typical scenario in ballistic target interception. Capture region analyses for both the proportional-navigation and retro-proportional-navigation guidance laws are presented. The study shows that, at the cost of a higher intercept time, the retro-proportional-navigation guidance law demands lower terminal lateral acceleration than proportional navigation and can intercept high-velocity targets from many initial conditions from which classical proportional navigation cannot. The capture region of the retro-proportional-navigation guidance law is also shown to be larger than that of the classical proportional-navigation law.
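Classical proportional navigation, against which retro-PN is compared above, commands a lateral acceleration a = N · Vc · (dλ/dt), where λ is the line-of-sight angle and Vc the closing velocity. The 2-D point-mass sketch below simulates only the classical law against a slower, non-maneuvering target; the retro-PN modification for faster targets is not reproduced here, and all engagement parameters are illustrative assumptions.

```python
import math

# Minimal 2-D kinematic sketch of classical proportional navigation (PN):
# commanded lateral acceleration a = N * Vc * (d lambda / dt).
# All initial conditions, speeds, and gains are illustrative.
N = 4.0                                                   # navigation constant
dt = 0.001                                                # time step (s)
mx, my, mv, mhead = 0.0, 0.0, 300.0, math.radians(30.0)   # missile state
tx, ty, tv, thead = 5000.0, 3000.0, 200.0, math.pi        # slower target state

miss = float("inf")
for _ in range(200000):
    rx, ry = tx - mx, ty - my
    r = math.hypot(rx, ry)
    miss = min(miss, r)                            # track closest approach
    if r < 1.0:
        break
    # Relative kinematics: LOS rate and closing velocity
    vrx = tv * math.cos(thead) - mv * math.cos(mhead)
    vry = tv * math.sin(thead) - mv * math.sin(mhead)
    lam_dot = (rx * vry - ry * vrx) / (r * r)      # LOS angular rate
    vc = -(rx * vrx + ry * vry) / r                # closing velocity
    a_cmd = N * vc * lam_dot                       # PN acceleration command
    mhead += (a_cmd / mv) * dt                     # turn rate = a / v
    mx += mv * math.cos(mhead) * dt
    my += mv * math.sin(mhead) * dt
    tx += tv * math.cos(thead) * dt
    ty += tv * math.sin(thead) * dt

print(f"closest approach = {miss:.2f} m")
```

Against a faster target this same law loses much of its capture region, which is the gap the retro-PN law in the abstract is designed to close.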

Relevance:

90.00%

Publisher:

Abstract:

When people drink water with a fluoride (F-) concentration >1-1.5 mg/L over a long period of time, various ailments collectively referred to as fluorosis occur. Based on the design of Thomas (http://www.planetkerala.org), an inclined basin-type solar still containing sand and water has been used at Bangalore for defluoridation. For water samples with fluoride concentrations in the range 5-20 mg/L, the fluoride concentration in the distillate was usually <1.5 mg/L. During the periods October 2006-May 2007 and October 2007-May 2008, the volume of distillate showed a significant diurnal variation, ranging from 0.3 to 4.0 L/(m²·day). Based on the figures for 2006, the cost of the still was about Rs. 850 (US$16) for collector areas in the range 0.50-0.57 m². The occurrence of F- in the distillate merits further investigation. Overall, the still effectively removes F-, but a large collector area, in the range 2.5-25 m², is needed to produce the roughly 10 L of distilled water required for cooking and drinking. Rainwater falling on the upper surface of the still was collected, and its fluoride concentration was found to be below the desirable limit of 1 mg/L; hence it too can be used for cooking and drinking.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we estimate the solution of the electromigration diffusion equation (EMDE) in isotopically pure and impure metallic single-walled carbon nanotubes (SWCNTs) by considering self-heating. The EMDE for an SWCNT has been solved not only by invoking the dependence of the electromigration flux on the usual applied static electric field across its two ends, but also by considering a temperature-dependent thermal conductivity (κ), which results in a variable temperature distribution (T) along the length due to self-heating. By changing the length and isotopic impurity, we demonstrate a significant deviation in SWCNT electromigration performance. However, if κ is assumed to be temperature independent, the solution may lead to serious errors in performance estimation. We further exhibit a tradeoff between the effects of length and impurity on electromigration performance. It is suggested that, to reduce the vacancy concentration in longer interconnects of a few micrometers, one should opt for an isotopically impure SWCNT at the cost of lower κ, whereas for comparatively short interconnects, a pure SWCNT should be used. The tradeoff presented here can be treated as a way of obtaining a fairly good estimate of the vacancy concentration and mean time to failure in bundles of CNT-based interconnects. © 2012 IEEE.
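The role of a temperature-dependent κ in a self-heated conductor can be sketched with a steady 1-D heat balance. The 1/T conductivity law (κ = κ0·T0/T) and all numerical values below are assumptions for illustration, not the paper's SWCNT model; the point is only that ignoring the temperature dependence of κ underestimates the peak temperature.

```python
# Steady-state 1-D self-heating sketch: solve d/dx(kappa dT/dx) + q = 0 with
# fixed-temperature ends, comparing constant kappa against kappa ~ 1/T.
# The 1/T law and all constants are illustrative assumptions.
n = 21                       # grid points along the (normalized) length
dx = 1.0 / (n - 1)
T0 = 300.0                   # contact temperature at both ends (K)
kappa0 = 1.0                 # conductivity at T0 (normalized units)
q = 400.0                    # uniform Joule heating rate (normalized units)

def solve(kappa_of_T, sweeps=5000):
    """Gauss-Seidel solution with conductivities evaluated at cell faces."""
    T = [T0] * n
    for _ in range(sweeps):
        for i in range(1, n - 1):
            kl = kappa_of_T(0.5 * (T[i - 1] + T[i]))   # left-face kappa
            kr = kappa_of_T(0.5 * (T[i] + T[i + 1]))   # right-face kappa
            T[i] = (kl * T[i - 1] + kr * T[i + 1] + q * dx * dx) / (kl + kr)
    return T

T_const = solve(lambda T: kappa0)             # temperature-independent kappa
T_var = solve(lambda T: kappa0 * T0 / T)      # kappa falling as 1/T
print(f"peak T, constant kappa: {max(T_const):.1f} K")
print(f"peak T, kappa ~ 1/T:    {max(T_var):.1f} K")
```

With constant κ the discrete solution reproduces the parabolic profile peaking at T0 + qL²/(8κ) = 350 K; letting κ fall as 1/T raises the peak, which is the qualitative error the abstract warns about when κ is treated as temperature independent.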

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a decentralized, peer-to-peer parallel version of the vector evaluated particle swarm optimization (VEPSO) algorithm for multi-objective design optimization of laminated composite plates using the Message Passing Interface (MPI). The design optimization of laminated composite plates, being a combinatorially explosive constrained non-linear optimization problem (CNOP) with many design variables and a vast solution space, warrants the use of non-parametric, heuristic optimization algorithms like PSO. The optimization requires minimizing both the weight and the cost of these composite plates simultaneously, which renders the problem multi-objective. Hence VEPSO, a multi-objective variant of the PSO algorithm, is used. Despite the use of such a heuristic, the application problem, being computationally intensive, suffers from long execution times under sequential computation. Hence, a parallel version of the algorithm has been developed to run on several nodes of an IBM P720 cluster. The proposed parallel algorithm, using MPI's collective communication directives, establishes a peer-to-peer relationship between the constituent parallel processes, deviating from the more common master-slave approach, and reduces computation time by a factor of up to 10. Finally, we show the effectiveness of the proposed parallel algorithm by comparing it with a serial implementation of VEPSO and a parallel implementation of the vector evaluated genetic algorithm (VEGA) for the same design problem. (c) 2012 Elsevier Ltd. All rights reserved.
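The core VEPSO idea referenced above, one swarm per objective with each swarm's social attractor taken from the other swarm's best, can be sketched serially in a few lines. The two toy objectives below merely stand in for the plate weight and cost, and every parameter is an illustrative assumption; MPI parallelization and the plate constraints are omitted.

```python
import random

# Toy serial sketch of VEPSO: two swarms, one per objective; each swarm's
# velocity update uses the OTHER swarm's global best as the social term.
# Objectives and parameters are illustrative stand-ins, not the paper's model.
random.seed(0)
DIM, NP, ITERS = 2, 20, 300
W, C1, C2 = 0.7, 1.5, 1.5

def f_weight(x):            # stand-in "weight" objective (minimum at origin)
    return sum(v * v for v in x)

def f_cost(x):              # stand-in "cost" objective (minimum at (1, 1))
    return sum((v - 1.0) ** 2 for v in x)

objs = [f_weight, f_cost]

def new_swarm():
    pos = [[random.uniform(-5.0, 5.0) for _ in range(DIM)] for _ in range(NP)]
    vel = [[0.0] * DIM for _ in range(NP)]
    return pos, vel, [p[:] for p in pos]       # positions, velocities, pbests

swarms = [new_swarm(), new_swarm()]
gbest = [min(s[2], key=objs[k]) for k, s in enumerate(swarms)]
init_vals = [objs[k](gbest[k]) for k in range(2)]   # to verify improvement

for _ in range(ITERS):
    for k, (pos, vel, pbest) in enumerate(swarms):
        other_best = gbest[1 - k]   # VEPSO cross-swarm social attractor
        for i in range(NP):
            for d in range(DIM):
                vel[i][d] = (W * vel[i][d]
                             + C1 * random.random() * (pbest[i][d] - pos[i][d])
                             + C2 * random.random() * (other_best[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objs[k](pos[i]) < objs[k](pbest[i]):
                pbest[i] = pos[i][:]
        gbest[k] = min(pbest, key=objs[k])

print("best for weight objective:", [round(v, 3) for v in gbest[0]])
print("best for cost objective:  ", [round(v, 3) for v in gbest[1]])
```

The cross-swarm coupling pulls each swarm toward regions that trade off both objectives, which is what makes VEPSO a multi-objective variant; in the paper's parallel version each swarm would live in its own MPI process, exchanging bests via collective communication.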

Relevance:

90.00%

Publisher:

Abstract:

Groundwater management problems are typically solved by the simulation-optimization approach, in which complex numerical models simulate the groundwater flow and/or contaminant transport. These numerical models take a long time to run, making the management problems computationally expensive. In this study, Artificial Neural Network (ANN) and Particle Swarm Optimization (PSO) models were developed and coupled for the management of the groundwater of the Dore river basin in France. A flow model based on the Analytic Element Method (AEM) was developed and used to generate the dataset for training and testing the ANN model. The coupled ANN-PSO model was applied to minimize the pumping cost of the wells, including the cost of the pipeline. With the discharge and location of the pumping wells as decision variables, the ANN-PSO model was applied to find the optimal well locations. The results of the ANN-PSO model are similar to those obtained by the AEM-PSO model. The results show that the ANN model can reduce the computational burden significantly, as it is able to analyze different scenarios quickly, and that the ANN-PSO model is capable of identifying the optimal location of wells efficiently.
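The surrogate-plus-optimizer pattern described above can be sketched in a few lines: a cheap interpolant trained on samples of an "expensive" model stands in for the ANN, and a minimal gbest-PSO searches the surrogate instead of the model. The quadratic "pumping cost" and every constant below are illustrative assumptions, not the Dore basin model.

```python
import random

# Surrogate-assisted optimization sketch: sample an "expensive" model, fit a
# cheap surrogate (here a quadratic interpolant standing in for the ANN),
# then run a minimal gbest-PSO on the surrogate. All values are illustrative.
random.seed(3)

def true_cost(x):                  # stand-in for the expensive flow model
    return (x - 2.0) ** 2 + 1.0

# 1) Sample the "expensive" model (three points suffice here because the
#    stand-in is exactly quadratic; a real ANN would need many more).
xs = [-4.0, 0.0, 4.0]
ys = [true_cost(x) for x in xs]

# 2) Cheap surrogate: the quadratic through the samples (Lagrange form).
def surrogate(x):
    total = 0.0
    for j in range(3):
        term = ys[j]
        for m in range(3):
            if m != j:
                term *= (x - xs[m]) / (xs[j] - xs[m])
        total += term
    return total

# 3) Minimal gbest-PSO searching the cheap surrogate, not the model.
pos = [random.uniform(-5.0, 5.0) for _ in range(15)]
vel = [0.0] * 15
pbest = pos[:]
gbest = min(pbest, key=surrogate)
for _ in range(200):
    for i in range(15):
        vel[i] = (0.7 * vel[i]
                  + 1.4 * random.random() * (pbest[i] - pos[i])
                  + 1.4 * random.random() * (gbest - pos[i]))
        pos[i] += vel[i]
        if surrogate(pos[i]) < surrogate(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=surrogate)

print(f"surrogate optimum near x = {gbest:.3f} (true optimum at 2.0)")
```

Every PSO evaluation hits only the cheap surrogate, which is the source of the computational saving the abstract reports for the ANN-PSO coupling.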

Relevance:

90.00%

Publisher:

Abstract:

Ceramic/porcelain insulators are widely used in power transmission lines to provide mechanical support for high-voltage conductors and to withstand electrical stresses arising from lightning, switching, or temporary overvoltages that could initiate flashover under the worst weather conditions, while operating within interference limits. Given that the useful service life of the individual insulator elements making up insulator strings is hard to predict, the strings must be inspected periodically to ensure that adequate line reliability is maintained at all times. Over the years, utilities have adopted a few methods to detect defective discs in a string, and the faulty discs are subsequently replaced for smooth operation. However, if a defective insulator at some location in the string does not appreciably change the field configuration, replacement is unnecessary, saving manpower and replacement cost. Owing to the lack of electric-field data for existing string configurations, utilities are forced to replace discs that may not essentially require it. Hence, an effort is made in the present work to simulate the potential and electric field along normal strings and strings with induced faults, for system voltages up to 765 kV, using the Surface Charge Simulation Method (SCSM). A comparison between the simulated results and experimental and field data shows that the computed results are quite acceptable and useful.