955 results for Traction-engines
Abstract:
The removal of noise and outliers from measurement signals is a major problem in jet engine health monitoring. Typical measurement signals found in most jet engines include low rotor speed, high rotor speed, fuel flow, and exhaust gas temperature. Deviations in these measurements from a baseline 'good' engine are often called measurement deltas, and these are the health signals used for fault detection, isolation, trending, and data mining. Linear filters such as the FIR moving average filter and the IIR exponential average filter are used in the industry to remove noise and outliers from the jet engine measurement deltas. However, the use of linear filters can lead to the loss of critical features in the signal that can contain information about maintenance and repair events, information that could be used by fault isolation algorithms to determine engine condition or by data mining algorithms to learn valuable patterns in the data. Non-linear filters such as the median and weighted median hybrid filters offer the opportunity to remove noise and gross outliers from signals while preserving features. In this study, a comparison of traditional linear filters popular in the jet engine industry is made with the median filter and the subfilter weighted FIR median hybrid (SWFMH) filter. Results using simulated data with implanted faults show that the SWFMH filter achieves a noise reduction of over 60 per cent, compared to only 20 per cent for FIR filters and 30 per cent for IIR filters. Preprocessing jet engine health signals using the SWFMH filter would greatly improve the accuracy of diagnostic systems. (C) 2002 Published by Elsevier Science Ltd.
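A minimal sketch (not from the paper) of how a moving-average (FIR), exponential-average (IIR), and median filter behave on a signal with an implanted step "event" and gross outliers; the window lengths, smoothing constant, and signal shape are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

# Illustrative measurement-delta signal: a step event at sample 100,
# Gaussian noise, and a few gross outliers (all values are assumptions).
rng = np.random.default_rng(0)
n = 200
signal = np.where(np.arange(n) < 100, 0.0, 2.0) + rng.normal(0, 0.3, n)
signal[[40, 90, 150]] += 5.0  # implanted outliers

def fir_moving_average(x, window=9):
    """FIR moving-average filter (boxcar convolution)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def iir_exponential_average(x, alpha=0.2):
    """IIR exponential-average filter: y[k] = alpha*x[k] + (1-alpha)*y[k-1]."""
    y = np.empty_like(x)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1 - alpha) * y[k - 1]
    return y

def median_filter(x, window=9):
    """Non-linear median filter: rejects outliers while keeping step features sharp."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

# The median output keeps the step at sample 100 sharp and suppresses the
# outliers, whereas the two linear filters smear both.
for name, y in [("FIR", fir_moving_average(signal)),
                ("IIR", iir_exponential_average(signal)),
                ("median", median_filter(signal))]:
    print(name, np.round(y[95:105], 2))
```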
Abstract:
Wear of metals in dry sliding is dictated by the material response to traction. This is demonstrated by considering the wear of aluminium and titanium alloys. In a regime of stable homogeneous deformation, the material approaching the surface from the bulk passes through microprocessing zones of flow, fracture, comminution, and compaction to generate a protective tribofilm that retains the interaction in the mild wear regime. If the response leads to microstructural instabilities such as adiabatic shear bands, the near-surface zone consists of stacks of 500 nm layers situated parallel to the sliding direction. Microcracks are generated below the surface and propagate normally away from it, through microvoids situated in the layers, until they reach a depth of 10-20 μm. A rectangular laminate debris particle consisting of a 20-40 layer stack is produced. The wear in this mode is severe.
Abstract:
A continuum model based on the critical-state theory of soil mechanics is used to generate stress, density, and velocity profiles, and to compute discharge rates for the flow of granular material in a mass flow bunker. The bin–hopper transition region is idealized as a shock across which all the variables change discontinuously. Comparison with the work of Michalowski (1987) shows that his experimentally determined rupture layer lies between his prediction and that of the present theory. However, it resembles the former more closely. The conventional condition involving a traction-free surface at the hopper exit is abandoned in favour of an exit shock below which the material falls vertically with zero frictional stress. The basic equations, which are not classifiable under any of the standard types, require excessive computational time. This problem is alleviated by the introduction of the Mohr–Coulomb approximation (MCA). The stress, density, and velocity profiles obtained by integration of the MCA converge to asymptotic fields on moving down the hopper. Expressions for these fields are derived by a perturbation method. Computational difficulties are encountered for bunkers with wall angles θw ≥ 15°; these are overcome by altering the initial conditions. Predicted discharge rates lie significantly below the measured values of Nguyen et al. (1980), ranging from 38% at θw = 15° to 59% at θw = 32°. The poor prediction appears to be largely due to the exit condition used here. Paradoxically, incompressible discharge rates lie closer to the measured values. An approximate semi-analytical expression for the discharge rate is obtained, which predicts values within 9% of the exact (numerical) ones in the compressible case, and 11% in the incompressible case. The approximate analysis also suggests that inclusion of density variation decreases the discharge rate. This is borne out by the exact (numerical) results – for the parameter values investigated, the compressible discharge rate is about 10% lower than the incompressible value. A preliminary comparison of the predicted density profiles with the measurements of Fickie et al. (1989) shows that the material within the hopper dilates more strongly than predicted. Surprisingly, just below the exit slot, there is good agreement between theory and experiment.
Abstract:
Fuel cell-based automobiles have gained attention in the last few years due to growing public concern about urban air pollution and consequent environmental problems. From an analysis of the power and energy requirements of a modern car, it is estimated that a base sustainable power of ca. 50 kW supplemented with short bursts up to 80 kW will suffice for most driving requirements. The energy demand depends greatly on driving characteristics but under normal usage is expected to be 200 Wh/km. The advantages and disadvantages of candidate fuel-cell systems and various fuels are considered, together with the issue of whether the fuel should be converted directly in the fuel cell or reformed to hydrogen onboard the vehicle. For fuel cell vehicles to compete successfully with conventional internal-combustion engine vehicles, it appears that direct conversion fuel cells, using probably hydrogen but possibly methanol, are the only realistic contenders for road transportation applications. Among the available fuel cell technologies, polymer-electrolyte fuel cells directly fueled with hydrogen appear to be the best option for powering fuel cell vehicles, as there is every prospect that these will exceed the performance of internal-combustion engine vehicles in everything but first cost. A target cost of $50/kW would be mandatory to make polymer-electrolyte fuel cells competitive with internal combustion engines and can only be achieved with design changes that substantially reduce the quantity of materials used. At present, prominent car manufacturers are deploying substantial research and development efforts to develop fuel cell vehicles and are projecting to start production by 2005.
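A back-of-envelope calculation using the figures quoted above; the trip distance and stack rating are assumptions chosen only to illustrate what the 200 Wh/km demand and $50/kW cost target imply.

```python
# Figures from the abstract; trip distance and stack rating are assumptions.
base_power_kw = 50          # sustained power requirement
peak_power_kw = 80          # short bursts
energy_per_km_wh = 200      # typical energy demand
target_cost_per_kw = 50     # $/kW target for competitiveness

trip_km = 100                                       # assumed trip length
trip_energy_kwh = energy_per_km_wh * trip_km / 1000.0
stack_cost_usd = target_cost_per_kw * base_power_kw

print(f"Energy for a {trip_km} km trip: {trip_energy_kwh:.1f} kWh")  # 20.0 kWh
print(f"50 kW stack cost at the target: ${stack_cost_usd}")          # $2500
```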
Abstract:
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their ads. This auction is typically conducted for a number of rounds (say T). There are click probabilities mu_ij associated with agent-slot pairs. The search engine's goal is to maximize social welfare, for example, the sum of values of the advertisers. The search engine does not know the true value of an advertiser for a click to her ad and also does not know the click probabilities mu_ij. A key problem for the search engine therefore is to learn these during the T rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentives issues have recently been introduced and are referred to as multi-armed-bandit (MAB) mechanisms. When m = 1, characterizations for truthful MAB mechanisms are available in the literature, and it has been shown that the regret for such mechanisms will be O(T^{2/3}). In this paper, we seek to derive a characterization in the realistic but nontrivial general case when m > 1 and obtain several interesting results.
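For intuition, the single-slot (m = 1) MAB mechanisms from the earlier literature cited above use an exploration-separated allocation: explore all advertisers uniformly for roughly T^(2/3) rounds, then commit to the advertiser with the highest estimated bid times CTR. The sketch below shows only that allocation rule (payments, which are what make the mechanism truthful, are omitted); it is not the mechanism proposed in this paper for m > 1, and all names and values are illustrative.

```python
import random

def explore_then_exploit(bids, true_ctrs, T):
    """Single-slot (m = 1) exploration-separated allocation sketch.

    Each of the k advertisers is shown in round-robin fashion during an
    exploration phase of ~T^(2/3) rounds; the advertiser with the highest
    estimated bid * CTR then holds the slot for the remaining rounds.
    Payments (required for truthfulness) are omitted in this sketch.
    """
    k = len(bids)
    explore_rounds = int(T ** (2 / 3))
    clicks = [0] * k
    shows = [0] * k

    for t in range(explore_rounds):
        i = t % k                                # round-robin exploration
        shows[i] += 1
        clicks[i] += random.random() < true_ctrs[i]

    est_ctr = [clicks[i] / max(shows[i], 1) for i in range(k)]
    winner = max(range(k), key=lambda i: bids[i] * est_ctr[i])

    # Expected welfare accumulated during exploration plus exploitation.
    welfare = sum(bids[t % k] * true_ctrs[t % k] for t in range(explore_rounds))
    welfare += (T - explore_rounds) * bids[winner] * true_ctrs[winner]
    return winner, welfare

winner, welfare = explore_then_exploit(bids=[2.0, 1.5, 3.0],
                                       true_ctrs=[0.10, 0.30, 0.05],
                                       T=10000)
print(winner, round(welfare, 1))
```

The T^(2/3)-long exploration phase is what produces the O(T^{2/3}) regret mentioned in the abstract: exploration wastes rounds, but too little of it risks committing to the wrong advertiser.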
Abstract:
Network processors today consist of multiple parallel processors (micro-engines) with support for multiple threads to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for an IPv4 forwarding application. Our study indicates that in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit buffer allocation schemes, namely contiguous, strided, local, and global, which reduce the packet retransmission rate to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the in-built packet ordering schemes in the IXP processor by up to 35%.
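A hypothetical sketch of the kind of reorder stage a "packet sort" scheme implies: packets carry a sequence number assigned at ingress, and the transmit stage only releases the next expected number, holding back any packet that finishes processing early. The class and method names are illustrative, not the IXP 2400 implementation.

```python
class PacketSorter:
    """Release packets strictly in ingress sequence order.

    Packets processed out of order by parallel micro-engines are held
    until every earlier packet has been handed to the transmit stage.
    """
    def __init__(self):
        self.next_seq = 0
        self.pending = {}          # seq -> packet buffered out of order

    def on_packet_done(self, seq, packet):
        """Called when a micro-engine finishes processing a packet."""
        self.pending[seq] = packet
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released            # packets safe to transmit, in order

sorter = PacketSorter()
print(sorter.on_packet_done(1, "pkt1"))   # []  -- still waiting for packet 0
print(sorter.on_packet_done(0, "pkt0"))   # ['pkt0', 'pkt1']
```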
Abstract:
A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing the usability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical, and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPC-H-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels while incurring only marginal increases in the estimated query processing costs.
Abstract:
The present work describes steady and unsteady computations of reacting flow in a Trapped Vortex Combustor. The primary motivation of this study is to develop this concept into a working combustor in modern gas turbines. The present work is an effort towards the development of an experimental model test rig for further understanding the dynamics of a single-cavity trapped vortex combustor. Steady computations with and without combustion have been done for L/D of 0.8, 1, and 1.2; an unsteady non-reacting flow simulation has also been done for L/D of 1. The fuel used for the present study is methane, and the Eddy-Dissipation model has been used for combustion-turbulence interactions. For L/D of 0.8, combustion efficiency is maximum and pattern factor is minimum. Also, the primary vortex in the cavity is more stable and symmetric for L/D of 0.8. From the unsteady non-reacting flow simulations, it is found that there is no vortex shedding from the cavity but there are oscillations in the span-wise direction of the combustor.
Abstract:
There are deficiencies in the current definition of the thermodynamic efficiency of fuel cells (η_th = ΔG/ΔH): an efficiency greater than unity is obtained when ΔS for the cell reaction is positive, and a negative efficiency is obtained for endothermic reactions. The origin of the flaw is identified. A new definition of thermodynamic efficiency is proposed that overcomes these limitations. Consequences of the new definition are examined. Against the conventional view that fuel cells are not Carnot limited, several recent articles have argued that the second law of thermodynamics restricts fuel cell energy conversion in the same way as heat engines. This controversy is critically examined. A resolution is achieved in part from an understanding of the contextual assumptions in the different approaches and in part from identifying some conceptual limitations.
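The two deficiencies can be made explicit by expanding the conventional definition with the standard identity ΔG = ΔH − TΔS (this is the textbook relation, not the authors' new definition):

```latex
\eta_{\mathrm{th}} \;=\; \frac{\Delta G}{\Delta H}
                  \;=\; \frac{\Delta H - T\,\Delta S}{\Delta H}
                  \;=\; 1 - \frac{T\,\Delta S}{\Delta H}.
```

For an exothermic cell reaction (ΔH < 0) with ΔS > 0, the last term is positive, so η_th exceeds unity; for an endothermic reaction (ΔH > 0) that is still spontaneous (ΔG < 0), η_th comes out negative. These are exactly the two anomalies the abstract cites.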
Abstract:
This article presents the studies conducted on turbocharged producer gas engines designed originally for natural gas (NG) as the fuel. Producer gas, whose properties such as stoichiometric ratio, calorific value, laminar flame speed, adiabatic flame temperature, and related parameters differ from those of NG, is used as the fuel. Two engines having similar turbochargers are evaluated for performance. Detailed measurements of the mass flow rates of fuel and air, and of the pressures and temperatures at various locations on the turbocharger, were carried out. On both engines, the pressure ratio across the compressor was measured to be 1.40 +/- 0.05 and the density ratio across the turbocharger with after-cooler to be 1.35 +/- 0.05. Thermodynamic analysis of the data on both engines suggests a compressor efficiency of 70 per cent. The specific energy consumption at peak load is found to be 13.1 MJ/kWh with producer gas as the fuel. Compared with the naturally aspirated mode, the mass flow and the peak load in the turbocharged after-cooled condition increased by 35 per cent and 30 per cent, respectively. The pressure ratios obtained with the use of NG and producer gas are compared with corrected mass flow on the compressor map.
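A rough illustration of how a compressor isentropic efficiency is inferred from a measured pressure ratio and temperature rise. Only the 1.40 pressure ratio comes from the abstract; the inlet and outlet temperatures and the gas properties below are assumptions chosen so the result lands near the quoted 70 per cent.

```python
# Only the pressure ratio (1.40) is from the abstract; temperatures and
# gamma are illustrative assumptions.
gamma = 1.4            # ratio of specific heats for air
T1 = 300.0             # compressor inlet temperature, K (assumed)
T2 = 343.0             # measured compressor outlet temperature, K (assumed)
pressure_ratio = 1.40

# Ideal (isentropic) outlet temperature for the same pressure ratio
T2s = T1 * pressure_ratio ** ((gamma - 1) / gamma)

# Isentropic efficiency = ideal temperature rise / actual temperature rise
eta_c = (T2s - T1) / (T2 - T1)
print(f"T2s = {T2s:.1f} K, compressor efficiency ~ {eta_c:.2f}")   # ~0.70
```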
Abstract:
Thermoacoustic engines convert heat energy into high-amplitude sound waves, which can be used to drive thermoacoustic refrigerators or pulse tube cryocoolers, replacing mechanical pistons such as compressors. The increasing interest in thermoacoustic technology stems from its potential for requiring no exotic materials, low cost, and high reliability compared with vapor compression refrigeration systems. The experimental setup has been built based on the linear thermoacoustic model and some simple design parameters. The engines produce acoustic energy at temperature differences of 325-450 K imposed along the stack of the system. This work illustrates the influence of stack parameters such as plate thickness (PT) and plate spacing (PS), together with resonator length, on the performance of the thermoacoustic engine, measured in terms of onset temperature difference, resonance frequency, and pressure amplitude using air as the working fluid. The results obtained from the experiments are in good agreement with the theoretical results from DeltaEC. (C) 2012 Elsevier Ltd. All rights reserved.
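As a rough cross-check of how resonance frequency scales with resonator length in such rigs, a quarter-wave estimate f ≈ a/4L can be computed from the sound speed of the working gas. The quarter-wave assumption, gas temperature, and resonator lengths below are illustrative assumptions, not the rig's actual geometry.

```python
import math

# Assumed values: air near room temperature, quarter-wave resonator.
gamma, R, M = 1.4, 8.314, 0.029          # air: gas constant J/(mol K), molar mass kg/mol
T = 300.0                                # mean gas temperature, K (assumed)
a = math.sqrt(gamma * R * T / M)         # speed of sound, ~347 m/s

for L in (0.5, 0.75, 1.0):               # resonator lengths in metres (assumed)
    f = a / (4 * L)                      # quarter-wave resonance estimate
    print(f"L = {L} m -> f ~ {f:.0f} Hz")
```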
Abstract:
Real-time image reconstruction is essential for improving the temporal resolution of fluorescence microscopy. A number of unavoidable processes, such as optical aberration, noise, and scattering, degrade image quality, thereby making image reconstruction an ill-posed problem. Maximum likelihood is an attractive technique for data reconstruction, especially when the problem is ill-posed. The iterative nature of the maximum likelihood technique precludes real-time imaging. Here we propose and demonstrate a compute unified device architecture (CUDA) based fast computing engine for real-time 3D fluorescence imaging. A maximum performance boost of 210x is reported. The easy availability of powerful computing engines is a boon and may accelerate the realization of real-time 3D fluorescence imaging. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4754604]
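The abstract does not spell out the estimator, but the standard maximum-likelihood reconstruction for Poisson-noise fluorescence data is the Richardson-Lucy iteration; the 2D CPU sketch below shows what each iteration computes (the paper's contribution is accelerating such iterations on a GPU via CUDA, and it works with 3D data). The PSF, iteration count, and test image are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=25):
    """Richardson-Lucy maximum-likelihood deconvolution (Poisson noise model)."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    eps = 1e-12
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)              # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Example with a synthetic blurred, Poisson-noisy image (all values assumed).
rng = np.random.default_rng(1)
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 100.0
g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
psf = np.outer(g, g); psf /= psf.sum()
observed = rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0)
restored = richardson_lucy(observed.astype(float), psf)
```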
Abstract:
Thermoacoustic engines are energy conversion devices that convert thermal energy from a high-temperature heat source into useful work in the form of acoustic power while diverting waste heat into a cold sink; they can be used to drive cryocoolers and refrigerators. Though the devices are simple to fabricate, it is very challenging to design an optimized thermoacoustic prime mover with good performance. The study presented here aims to optimize the thermoacoustic prime mover using response surface methodology. The influence of stack position and length, resonator length, plate thickness, and plate spacing on pressure amplitude and frequency in a thermoacoustic prime mover is investigated in this study. For the desired frequency of 207 Hz, experiments were conducted at the optimized parameter values suggested by the response surface methodology, and simulations were also performed using DeltaEC. The experimental and simulation results showed similar output performance.
Abstract:
In pay-per-click sponsored search auctions, which are currently extensively used by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for available slots (say m) to display their advertisements (ads for short). A sponsored search auction for a keyword is typically conducted for a number of rounds (say T). There are click probabilities mu(ij) associated with each agent-slot pair (agent i and slot j). The search engine would like to maximize the social welfare of the advertisers, that is, the sum of values of the advertisers for the keyword. However, the search engine does not know the true values advertisers have for a click to their respective advertisements and also does not know the click probabilities. A key problem for the search engine therefore is to learn these click probabilities during the initial rounds of the auction and also to ensure that the auction mechanism is truthful. Mechanisms for addressing such learning and incentives issues have recently been introduced. These mechanisms, due to their connection to the multi-armed bandit problem, are aptly referred to as multi-armed bandit (MAB) mechanisms. When m = 1, exact characterizations for truthful MAB mechanisms are available in the literature. Recent work has focused on the more realistic but non-trivial general case when m > 1, and a few promising results have started appearing. In this article, we consider this general case when m > 1 and prove several interesting results. Our contributions include: (1) When the mu(ij)s are unconstrained, we prove that any truthful mechanism must satisfy strong pointwise monotonicity and show that the regret will be Theta(T^{2/3}) for such mechanisms. (2) When the clicks on the ads follow a certain click precedence property, we show that weak pointwise monotonicity is necessary for MAB mechanisms to be truthful. (3) If the search engine has a certain coarse pre-estimate of the mu(ij) values and wishes to update them during the course of the T rounds, we show that weak pointwise monotonicity and type-I separatedness are necessary while weak pointwise monotonicity and type-II separatedness are sufficient conditions for the MAB mechanisms to be truthful. (4) If the click probabilities are separable into agent-specific and slot-specific terms, we provide a characterization of MAB mechanisms that are truthful in expectation.