960 results for Steam-engines.
Abstract:
Fuel cell-based automobiles have gained attention in the last few years due to growing public concern about urban air pollution and consequent environmental problems. From an analysis of the power and energy requirements of a modern car, it is estimated that a base sustainable power of ca. 50 kW, supplemented with short bursts up to 80 kW, will suffice for most driving requirements. The energy demand depends greatly on driving characteristics but under normal usage is expected to be 200 Wh/km. The advantages and disadvantages of candidate fuel-cell systems and various fuels are considered, together with the issue of whether the fuel should be converted directly in the fuel cell or reformed to hydrogen onboard the vehicle. For fuel cell vehicles to compete successfully with conventional internal-combustion engine vehicles, it appears that direct conversion fuel cells, fueled probably with hydrogen but possibly with methanol, are the only realistic contenders for road transportation applications. Among the available fuel cell technologies, polymer-electrolyte fuel cells directly fueled with hydrogen appear to be the best option for powering fuel cell vehicles, as there is every prospect that these will exceed the performance of internal-combustion engine vehicles in everything but their first cost. A target cost of $50/kW would be mandatory to make polymer-electrolyte fuel cells competitive with internal combustion engines and can only be achieved with design changes that substantially reduce the quantity of materials used. At present, prominent car manufacturers are deploying major research and development efforts to develop fuel cell vehicles and project the start of production by 2005.
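As a quick sanity check on the figures quoted in this abstract (the 200 Wh/km average demand and the 50 kW base power), the implied trip energy and break-even speed can be computed directly. The function names below are illustrative, not from the paper:

```python
# Illustrative arithmetic on the abstract's figures (200 Wh/km, 50 kW base power).
ENERGY_PER_KM_WH = 200.0   # average energy demand from the abstract, Wh/km
BASE_POWER_KW = 50.0       # sustained base power from the abstract, kW

def trip_energy_kwh(distance_km: float) -> float:
    """Energy for a trip at the average demand, in kWh."""
    return distance_km * ENERGY_PER_KM_WH / 1000.0

def sustained_speed_kmh() -> float:
    """Speed at which the base power exactly matches the average demand."""
    return BASE_POWER_KW * 1000.0 / ENERGY_PER_KM_WH

print(trip_energy_kwh(100))   # 20.0 kWh for a 100 km trip
print(sustained_speed_kmh())  # 250.0 km/h; only an upper bound, since demand rises sharply with speed
```

The 250 km/h figure is merely a consistency check: the 200 Wh/km value is an average over normal driving, not a constant-speed demand.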
Abstract:
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their ads. This auction is typically conducted for a number of rounds (say T). There are click probabilities mu_ij associated with agent-slot pairs. The search engine's goal is to maximize social welfare, that is, the sum of the values of the advertisers. The search engine knows neither the true value of an advertiser for a click to her ad nor the click probabilities mu_ij. A key problem for the search engine therefore is to learn these during the T rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentives issues have recently been introduced and are referred to as multi-armed-bandit (MAB) mechanisms. When m = 1, characterizations for truthful MAB mechanisms are available in the literature, and it has been shown that the regret for such mechanisms will be O(T^{2/3}). In this paper, we seek to derive a characterization in the realistic but nontrivial general case when m > 1 and obtain several interesting results.
Abstract:
Network processors today consist of multiple parallel processors (microengines) with support for multiple threads to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for an IPv4 forwarding application. Our study indicates that, in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit buffer allocation schemes, namely contiguous, strided, local, and global, which reduce the packet retransmission rate to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the in-built packet ordering schemes in the IXP processor by up to 35%.
Abstract:
A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing usability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPC-H-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels incurring only marginal increases in the estimated query processing costs.
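The greedy reduction described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the point and plan names are invented, and the swallowing rule used here (a plan is removed if every selectivity point it covers has a surviving alternative whose cost is within a (1 + lam) threshold) is an assumed simplification of the cost criterion:

```python
# Greedy plan-diagram reduction sketch (illustrative; not the paper's exact algorithm).
# A "diagram" maps each selectivity point to the estimated cost of each plan there.

def reduce_diagram(costs, lam=0.1):
    """costs: dict point -> dict plan_id -> cost. Returns the surviving plan ids."""
    # Start from the optimizer's choice: the cheapest plan at each point.
    assignment = {p: min(c, key=c.get) for p, c in costs.items()}
    plans = set(assignment.values())
    changed = True
    while changed:
        changed = False
        for plan in sorted(plans):
            covered = [p for p, pl in assignment.items() if pl == plan]
            # Can every covered point switch to another surviving plan cheaply enough?
            repl, ok = {}, True
            for p in covered:
                alts = {pl: c for pl, c in costs[p].items()
                        if pl != plan and pl in plans}
                best = min(alts, key=alts.get) if alts else None
                if best is None or alts[best] > (1 + lam) * costs[p][plan]:
                    ok = False
                    break
                repl[p] = best
            if ok and covered:  # swallow this plan
                assignment.update(repl)
                plans.discard(plan)
                changed = True
    return plans

costs = {"p0": {"A": 10.0, "B": 10.5}, "p1": {"B": 10.0, "A": 10.8}}
print(reduce_diagram(costs, lam=0.1))  # {'B'}: one plan covers both points within 10%
```

With lam = 0 no swallowing is possible in this example and both plans survive, which mirrors the cost-increase-threshold tradeoff the abstract's estimators navigate.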
Abstract:
Hydrogen is a clean energy carrier and the fuel with the highest energy density. The water gas shift (WGS) reaction, in which CO reacts with steam, is an important route to generating hydrogen. A new WGS catalyst, Ce(1-x)Ru(x)O(2-delta) (0 <= x <= 0.1), was prepared by a hydrothermal method using melamine as a complexing agent. The catalyst does not require any pre-treatment. Among the several compositions prepared and tested, Ce(0.95)Ru(0.05)O(2-delta) (5% Ru(4+) ion substituted in CeO(2)) showed very high WGS activity in terms of a high conversion rate (20.5 mu mol.g(-1).s(-1) at 275 °C) and a low activation energy (12.1 kcal/mol). Over 99% conversion of CO to CO(2) by H(2)O is observed with 100% H(2) selectivity at >= 275 °C. Even in the presence of externally fed CO(2) and H(2), complete conversion of CO to CO(2) was observed with 100% H(2) selectivity in the temperature range of 305-385 °C. The catalyst does not deactivate over long-duration on/off WGS reaction cycles, owing to the absence of surface carbon and carbonate formation and of Ru sintering. Due to the highly acidic nature of the Ru(4+) ion, surface carbonate formation is inhibited. Sintering of the noble metal (Ru) is avoided because Ru remains in the Ru(4+) ionic state in the Ce(1-x)Ru(x)O(2-delta) catalyst.
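The reported activation energy of 12.1 kcal/mol can be turned into an illustrative rate comparison via the Arrhenius equation; the pre-exponential factor is assumed identical at both temperatures (so it cancels in the ratio), which is a standard simplification, not a claim from the abstract:

```python
import math

# Illustrative Arrhenius rate ratio using the abstract's activation energy.
R = 1.987e-3  # gas constant, kcal/(mol*K)
EA = 12.1     # activation energy from the abstract, kcal/mol

def rate_ratio(t1_c: float, t2_c: float) -> float:
    """k(T2)/k(T1) for two temperatures given in Celsius; A-factor cancels."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(-EA / R * (1.0 / t2 - 1.0 / t1))

# Raising the temperature from 275 C to 305 C speeds the reaction by roughly 1.8x.
print(round(rate_ratio(275, 305), 2))  # 1.78
```

The modest ratio reflects the low activation energy: the catalyst's rate is relatively insensitive to temperature, consistent with its high activity already at 275 °C.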
Abstract:
Lime-fly ash mixtures are exploited for the manufacture of fly ash bricks, which find applications in load-bearing masonry. Lime-pozzolana reactions take place at a slow pace under ambient temperature conditions, and hence very long curing durations are required to achieve meaningful strength values. The present investigation examines the improvements in strength development in lime-fly ash compacts through low-temperature steam curing and the use of additives like gypsum. Results of density-strength-moulding water content relationships, the influence of the lime-fly ash ratio, steam curing, and the role of gypsum in strength development, and the characteristics of compacted lime-fly ash-gypsum bricks are discussed. The test results reveal that (a) strength increases with increase in density irrespective of lime content, type of curing, and moulding water content, (b) the optimum lime-fly ash ratio yielding maximum strength is about 0.75 under normal curing conditions, (c) 24 h of steam curing (at 80 °C) is sufficient to achieve nearly the maximum possible strength, (d) the optimum gypsum content yielding maximum compressive strength is 2%, and (e) with the gypsum additive it is possible to obtain lime-fly ash bricks or blocks having sufficient strength (> 10 MPa) at 28 days of normal wet burlap curing.
Abstract:
The present work describes steady and unsteady computations of reacting flow in a Trapped Vortex Combustor. The primary motivation of this study is to develop this concept into a working combustor for modern gas turbines. The present work is an effort towards the development of an experimental model test rig for further understanding the dynamics of a single-cavity trapped vortex combustor. Steady computations with and without combustion have been done for L/D of 0.8, 1, and 1.2; an unsteady non-reacting flow simulation has also been done for L/D of 1. The fuel used for the present study is methane, and the Eddy-Dissipation model has been used for the turbulence-combustion interaction. For L/D of 0.8, combustion efficiency is maximum and the pattern factor is minimum. Also, the primary vortex in the cavity is more stable and symmetric for L/D of 0.8. From the unsteady non-reacting flow simulations, it is found that there is no vortex shedding from the cavity, but there are oscillations in the span-wise direction of the combustor.
Abstract:
There are deficiencies in the current definition of the thermodynamic efficiency of fuel cells (η_th = ΔG/ΔH); an efficiency greater than unity is obtained when ΔS for the cell reaction is positive, and a negative efficiency is obtained for endothermic reactions. The origin of the flaw is identified. A new definition of thermodynamic efficiency is proposed that overcomes these limitations. Consequences of the new definition are examined. Against the conventional view that fuel cells are not Carnot-limited, several recent articles have argued that the second law of thermodynamics restricts fuel cell energy conversion in the same way as heat engines. This controversy is critically examined. A resolution is achieved in part from an understanding of the contextual assumptions in the different approaches and in part from identifying some conceptual limitations.
Abstract:
This article presents studies conducted on turbocharged producer gas engines designed originally for natural gas (NG) as the fuel. Producer gas, whose properties (stoichiometric ratio, calorific value, laminar flame speed, adiabatic flame temperature, and related parameters) differ from those of NG, is used as the fuel. Two engines having similar turbochargers are evaluated for performance. Detailed measurements of the mass flow rates of fuel and air and of the pressures and temperatures at various locations on the turbocharger were carried out. On both engines, the pressure ratio across the compressor was measured to be 1.40 +/- 0.05 and the density ratio across the turbocharger with after-cooler to be 1.35 +/- 0.05. Thermodynamic analysis of the data on both engines suggests a compressor efficiency of 70 per cent. The specific energy consumption at peak load is found to be 13.1 MJ/kWh with producer gas as the fuel. Compared with the naturally aspirated mode, the mass flow and the peak load in the turbocharged after-cooled condition increased by 35 per cent and 30 per cent, respectively. The pressure ratios obtained with NG and producer gas are compared against the corrected mass flow on the compressor map.
Abstract:
Thermoacoustic engines convert heat energy into high-amplitude sound waves, which can be used to drive thermoacoustic refrigerators or pulse tube cryocoolers, replacing mechanical pistons such as those in compressors. The increasing interest in thermoacoustic technology stems from its promise of no exotic materials, low cost, and high reliability compared to vapor compression refrigeration systems. The experimental setup has been built based on the linear thermoacoustic model and some simple design parameters. The engines produce acoustic energy at a temperature difference of 325-450 K imposed along the stack of the system. This work illustrates the influence of stack parameters such as plate thickness (PT) and plate spacing (PS), together with resonator length, on the performance of a thermoacoustic engine, measured in terms of onset temperature difference, resonance frequency, and pressure amplitude using air as the working fluid. The results obtained from the experiments are in good agreement with the theoretical results from DeltaEC. (C) 2012 Elsevier Ltd. All rights reserved.
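The resonance frequency mentioned above is set mainly by the resonator length and the sound speed of the working fluid. A back-of-the-envelope estimate, assuming an idealized half-wavelength air-filled resonator (the abstract does not specify the tube geometry, so this is purely illustrative):

```python
import math

# Half-wavelength resonator frequency estimate (illustrative assumption;
# the abstract does not say whether the rig is a half- or quarter-wave tube).

def sound_speed_air(temp_k: float) -> float:
    """Ideal-gas speed of sound in air, m/s."""
    gamma, R, M = 1.4, 8.314, 0.029  # heat capacity ratio, J/(mol*K), kg/mol
    return math.sqrt(gamma * R * temp_k / M)

def half_wave_frequency(length_m: float, temp_k: float = 300.0) -> float:
    """f = a / (2 L) for a half-wavelength resonator."""
    return sound_speed_air(temp_k) / (2.0 * length_m)

print(half_wave_frequency(1.0))  # about 173.5 Hz for a 1 m tube at 300 K
```

This is why varying the resonator length in the experiments shifts the measured resonance frequency.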
Abstract:
Real-time image reconstruction is essential for improving the temporal resolution of fluorescence microscopy. A number of unavoidable processes, such as optical aberration, noise, and scattering, degrade image quality, thereby making image reconstruction an ill-posed problem. Maximum likelihood is an attractive technique for data reconstruction, especially when the problem is ill-posed. The iterative nature of the maximum likelihood technique, however, precludes real-time imaging. Here we propose and demonstrate a compute unified device architecture (CUDA) based fast computing engine for real-time 3D fluorescence imaging. A maximum performance boost of 210x is reported. The easy availability of powerful computing engines is a boon and may accelerate the realization of real-time 3D fluorescence imaging. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4754604]
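The iterative maximum-likelihood reconstruction referred to above is commonly realized as Richardson-Lucy deconvolution, the standard choice for fluorescence microscopy. The abstract does not state which ML iteration its CUDA engine implements, so the 1-D NumPy sketch below is only an illustration of the kind of per-iteration work that a GPU engine would parallelize in 3-D:

```python
import numpy as np

# Minimal 1-D Richardson-Lucy (maximum-likelihood) deconvolution sketch.

def richardson_lucy(observed, psf, n_iter=100):
    psf = psf / psf.sum()          # normalize the point-spread function
    psf_flip = psf[::-1]           # adjoint of the blur operator
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    eps = 1e-12                    # avoid division by zero
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# A point source blurred by a small PSF is sharpened back toward a spike.
truth = np.zeros(21); truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, n_iter=100)
```

Each iteration is two convolutions and two element-wise operations, all embarrassingly parallel, which is what makes this family of algorithms such a good fit for CUDA.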
Abstract:
Thermoacoustic engines are energy conversion devices that convert thermal energy from a high-temperature heat source into useful work in the form of acoustic power while diverting waste heat into a cold sink; they can be used as drives for cryocoolers and refrigerators. Though the devices are simple to fabricate, it is very challenging to design an optimized thermoacoustic prime mover with good performance. The study presented here aims to optimize the thermoacoustic prime mover using response surface methodology. The influence of stack position and length, resonator length, plate thickness, and plate spacing on pressure amplitude and frequency in a thermoacoustic prime mover is investigated. For the desired frequency of 207 Hz, experiments were conducted at the optimized parameter values suggested by the response surface methodology, and simulations were also performed using DeltaEC. The experimental and simulation results showed similar output performance.
Abstract:
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their advertisements (ads for short). A sponsored search auction for a keyword is typically conducted for a number of rounds (say T). There are click probabilities mu(ij) associated with each agent-slot pair (agent i and slot j). The search engine would like to maximize the social welfare of the advertisers, that is, the sum of the values of the advertisers for the keyword. However, the search engine knows neither the true values advertisers have for a click to their respective advertisements nor the click probabilities. A key problem for the search engine therefore is to learn these click probabilities during the initial rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentives issues have recently been introduced. These mechanisms, due to their connection to the multi-armed bandit problem, are aptly referred to as multi-armed bandit (MAB) mechanisms. When m = 1, exact characterizations for truthful MAB mechanisms are available in the literature. Recent work has focused on the more realistic but non-trivial general case when m > 1, and a few promising results have started appearing. In this article, we consider this general case when m > 1 and prove several interesting results. Our contributions include: (1) When the mu(ij)'s are unconstrained, we prove that any truthful mechanism must satisfy strong pointwise monotonicity, and we show that the regret will be Theta(T^{2/3}) for such mechanisms. (2) When the clicks on the ads follow a certain click precedence property, we show that weak pointwise monotonicity is necessary for MAB mechanisms to be truthful.
(3) If the search engine has a certain coarse pre-estimate of mu(ij) values and wishes to update them during the course of the T rounds, we show that weak pointwise monotonicity and type-I separatedness are necessary while weak pointwise monotonicity and type-II separatedness are sufficient conditions for the MAB mechanisms to be truthful. (4) If the click probabilities are separable into agent-specific and slot-specific terms, we provide a characterization of MAB mechanisms that are truthful in expectation.
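For the single-slot case (m = 1), the T^{2/3} regret arises because truthful MAB mechanisms must separate exploration from exploitation: click probabilities are estimated during roughly T^{2/3} exploration rounds whose allocation cannot depend on bids. The toy sketch below illustrates only that exploration phase; the names, structure, and payment omission are all simplifications, not any paper's mechanism:

```python
import random

# Toy explore-then-exploit sketch for the single-slot (m = 1) case.
# About T^(2/3) bid-independent exploration rounds estimate the click
# probabilities; the remaining rounds would exploit the best estimate.

def run_auction(click_probs, T, rng):
    k = len(click_probs)
    explore_rounds = int(T ** (2 / 3))
    clicks = [0] * k
    shows = [0] * k
    for t in range(explore_rounds):          # round-robin exploration
        i = t % k
        shows[i] += 1
        clicks[i] += rng.random() < click_probs[i]
    estimates = [clicks[i] / max(shows[i], 1) for i in range(k)]
    winner = max(range(k), key=lambda i: estimates[i])  # exploited thereafter
    return winner, estimates

rng = random.Random(0)
winner, est = run_auction([0.1, 0.4, 0.25], T=100_000, rng=rng)
print(winner)  # the ad with the highest true click probability, w.h.p.
```

The regret intuition: with ~T^{2/3} exploration rounds the per-round estimation error after exploration is ~T^{-1/3}, so both the exploration cost and the exploitation error contribute O(T^{2/3}) to the regret, which balances the two phases.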
Abstract:
Users can rarely reveal their information need in full detail to a search engine within 1--2 words, so search engines need to "hedge their bets" and present diverse results within the precious 10 response slots. Diversity in ranking is of much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes so that the effective conductance from the center to the rest of the graph has maximum entropy. We give a cogent semantic justification for turning PageRank thus on its head. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently published diversity algorithms like subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
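The effective conductance central to this formulation is the reciprocal of the effective resistance between nodes, which is computable from the graph Laplacian. The sketch below fixes the edge conductances by hand purely for illustration (the paper learns them from training data):

```python
import numpy as np

# Effective-resistance sketch on a small resistive graph.

def laplacian(n, edges):
    """Build the weighted graph Laplacian. edges: list of (u, v, conductance)."""
    L = np.zeros((n, n))
    for u, v, c in edges:
        L[u, u] += c; L[v, v] += c
        L[u, v] -= c; L[v, u] -= c
    return L

def effective_resistance(L, u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), via the pseudoinverse."""
    Lp = np.linalg.pinv(L)
    e = np.zeros(L.shape[0]); e[u] = 1.0; e[v] = -1.0
    return float(e @ Lp @ e)

# Two unit-conductance edges in series behave like resistors: 1 + 1 = 2.
L = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0)])
print(effective_resistance(L, 0, 2))  # 2.0
```

Effective conductance between 0 and 2 is then 1/2; a center node "sees" the rest of the graph through such conductances, and the entropy of their distribution is what the proposed method maximizes.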