127 results for Impregnated Filter-paper
in Cambridge University Engineering Department Publications Database
Abstract:
In order to improve drilling mud design to cater for specific well situations, a more comprehensive knowledge and understanding of filter cake failure is needed. This paper describes experimental techniques aimed at directly probing the mechanical properties of filter cakes, without having to take into account artefacts due to fluid flow in the substrate. The use of rheometers allows us to determine the shear yield stress and dynamic shear moduli of cakes grown on filter paper. A new scraping technique measures the strength and moisture profiles of typical filter cakes with a 0.1 mm resolution. This technique also allows us to probe the adhesion between the filter cake and its rock substrate. In addition, œdometer drained consolidation and unloading of a filter cake give us compression parameters useful for Cam Clay modelling. These independent measurements give consistent results for the elastic modulus of different filter cakes, showing an order of magnitude difference between water-based and oil-based cakes. We find that these standard cakes behave predominantly as purely elastic materials, with a sharp transition into plastic flow, allowing for the determination of a well-defined yield stress. The effect of solids loading on a given type of mud is also studied.
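The behaviour the abstract reports, predominantly elastic with a sharp transition into plastic flow, can be sketched as an elastic-perfectly-plastic law. This is only an illustration; the modulus and yield stress below are hypothetical values, not results from the paper.

```python
def cake_shear_stress(strain, modulus, yield_stress):
    """Elastic-perfectly-plastic response: stress grows linearly with
    strain up to the yield stress, then plastic flow at constant stress."""
    return min(modulus * strain, yield_stress)

# Hypothetical values: 1 MPa elastic modulus, 5 kPa yield stress.
still_elastic = cake_shear_stress(0.001, 1.0e6, 5.0e3)  # linear regime
past_yield = cake_shear_stress(0.1, 1.0e6, 5.0e3)       # capped at yield
```

The sharp elastic-plastic transition is what makes the yield stress well defined: the stress-strain curve has a clear kink rather than a gradual roll-off.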
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011], an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
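The SMC machinery underlying this kind of work can be illustrated with a plain bootstrap particle filter on a toy linear-Gaussian model. This sketch is not the filter-derivative algorithm of Poyiadjis et al.; all parameters and the model itself are invented for illustration.

```python
import math
import random

random.seed(0)

def bootstrap_filter(ys, n_particles=500, phi=0.9, sigma=0.5, tau=1.0):
    """Bootstrap particle filter for the toy linear-Gaussian model
    x_t = phi*x_{t-1} + sigma*v_t,  y_t = x_t + tau*w_t,  v, w ~ N(0, 1).
    Returns the posterior-mean estimates E[x_t | y_1..t]."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # Propagate particles through the state transition (proposal = prior).
        particles = [phi * p + sigma * random.gauss(0.0, 1.0) for p in particles]
        # Weight each particle by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((y - p) / tau) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling keeps the particle set from degenerating.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return means

# Simulate observations from the same model, then filter them.
xs, x = [], 0.0
for _ in range(50):
    x = 0.9 * x + 0.5 * random.gauss(0.0, 1.0)
    xs.append(x)
ys = [v + random.gauss(0.0, 1.0) for v in xs]
est = bootstrap_filter(ys)
```

The filter-derivative algorithm studied in the paper builds on exactly this propagate-weight-resample loop, additionally propagating derivative information alongside the particles.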
Abstract:
Optimal Bayesian multi-target filtering is in general computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), a SMC implementation of the PHD filter which employs auxiliary variables to enhance its efficiency was proposed by Whiteley et al. Numerical examples were presented for two scenarios, including a challenging nonlinear observation model, to support the claim. This paper studies the theoretical properties of this auxiliary particle implementation. $\mathbb{L}_p$ error bounds are established from which almost sure convergence follows.
Abstract:
A technique to measure wall flow variation in Diesel Particle Filters (DPFs) is described. In a recent paper, it was shown how the flow distribution in DPFs could be measured in a non-destructive manner. This involved measuring the progressive dilution of a tracer gas introduced at the "outlet" channel upstream end. In the present paper, a significant further improvement to this technique is described, in which only a single probe is required, rather than the two of the previous technique. The single, traversable, probe consists of a controllable flow sink, and slightly downstream, a tracer gas supply. By controlling the sink flow rate such that a very small concentration of tracer gas is aspirated into it, the total flow up to that location in the channel is determined. Typical results showing the axial variation in the wall flow for known wall blockage cases are presented. It is suggested that this technique could be used to interpret the soot loading in the filter channels in a non-intrusive way.
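The dilution principle behind the measurement can be illustrated with a simple steady-state mass balance. This is a schematic calculation only, not the paper's calibration procedure, and the numbers are invented.

```python
def channel_flow_from_dilution(q_tracer, c_supply, c_measured):
    """Steady-state tracer mass balance in a channel:
    q_tracer * c_supply = (q_channel + q_tracer) * c_measured,
    so q_channel = q_tracer * (c_supply / c_measured - 1)."""
    return q_tracer * (c_supply / c_measured - 1.0)

# Example: 0.01 L/min of pure tracer (concentration 1.0) diluted to a
# measured concentration of 0.5% implies roughly 1.99 L/min of channel
# flow upstream of the injection point.
q_channel = channel_flow_from_dilution(0.01, 1.0, 0.005)
```

The strength of the single-probe technique described above is that the same traversable probe supplies the tracer and aspirates the sample, so profiles like this can be built up along the channel axis.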
Abstract:
This paper reports on a switchable multi-band filter response achieved within a single micro-electro-mechanical device. A prototype device fabricated in a SOI process demonstrates a voltage programmable and tunable, dual-band, band-pass/band-stop response. Both analytical and finite element models are introduced in this paper to elucidate the operating principle of the filter and to guide filter design. Voltage programmability of the filter characteristic is demonstrated with the ability to independently tune the centre frequency and bandwidth for each band. A representative measurement shows that the minimum 3 dB-bandwidth (BW) is 155 Hz and 140 Hz, and the 20 dB-BW is 216 Hz and 203 Hz, for the upper-band and lower-band center frequencies located at 131.5 kHz and 130.7 kHz, respectively. © 2011 IEEE.
Abstract:
This paper deals with the experimental evaluation of a flow analysis system based on the integration between an under-resolved Navier-Stokes simulation and experimental measurements with the mechanism of feedback (referred to as Measurement-Integrated simulation), applied to the case of a planar turbulent co-flowing jet. The experiments are performed with an inner-to-outer-jet velocity ratio of around 2 and a Reynolds number based on the inner-jet height of about 10000. The measurement system is a high-speed PIV, which provides time-resolved data of the flow field on a field of view which extends to 20 jet heights downstream of the jet outlet. The experimental data can thus be used both for providing the feedback data for the simulations and for validation of the MI-simulations over a wide region. The effect of reduced data-rate and spatial extent of the feedback (i.e. measurements are not available at each simulation time-step or discretization point) was investigated. At first, simulations were run with full information in order to obtain an upper limit of the MI-simulation performance. The results show the potential of this methodology to reproduce first- and second-order statistics of the turbulent flow with good accuracy. Then, to deal with the reduced data, different feedback strategies were tested. It was found that for a small data-rate reduction the results are basically equivalent to the case of full-information feedback, but as the feedback data-rate is reduced further the error increases and tends to be localized in regions of high turbulent activity. Moreover, it is found that the spatial distribution of the error looks qualitatively different for different feedback strategies. Feedback gain distributions calculated by optimal control theory are presented and proposed as a means to make it possible to perform MI-simulations based on localized measurements only. So far, we have not been able to achieve low error between measurements and simulations by using these gain distributions.
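The feedback mechanism of a measurement-integrated simulation can be sketched in one dimension: a simulation running with a deliberately wrong model parameter is nudged toward noisy "measurements" by a proportional feedback term. This is a toy scalar illustration with invented dynamics and gain, not the paper's Navier-Stokes and PIV setup.

```python
import random

random.seed(1)

dt, steps = 0.01, 500

# "True" system: exponential decay at rate 1.0; measurements add noise.
truth, x = [], 1.0
for _ in range(steps):
    x += dt * (-1.0 * x)
    truth.append(x)
meas = [t + random.gauss(0.0, 0.02) for t in truth]

def mi_simulation(decay_rate, gain):
    """Run a simulation with the (possibly wrong) decay_rate, nudged by
    a feedback term gain * (measurement - state); return the mean
    absolute error against the true trajectory."""
    x, err = 1.0, 0.0
    for k in range(steps):
        x += dt * (-decay_rate * x) + gain * (meas[k] - x)
        err += abs(x - truth[k])
    return err / steps

err_no_feedback = mi_simulation(decay_rate=2.0, gain=0.0)
err_feedback = mi_simulation(decay_rate=2.0, gain=0.1)
```

Even with a model parameter that is wrong by a factor of two, the feedback term pulls the simulated trajectory toward the measurements; the reduced-data question studied in the paper amounts to what happens when `meas[k]` is only available at some time steps or spatial locations.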
Abstract:
This paper advocates 'reduce, reuse, recycle' as a complete energy savings strategy. While reduction has been common to date, there is growing need to emphasize reuse and recycling as well. We design a DC-DC buck converter to demonstrate the 3 techniques: reduce with low-swing and zero voltage switching (ZVS), reuse with supply stacking, and recycle with regulated delivery of excess energy to the output load. The efficiency gained from these 3 techniques helps offset the losses from operating drivers at the very high switching frequencies needed to move the output filter completely on-chip. A prototype was fabricated in 0.18μm CMOS, operates at 660MHz, and converts 2.2V to 0.75-1.0V at ∼50mA. © 2008 IEEE.
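For an ideal buck converter the conversion ratio fixes the duty cycle, which gives a feel for the operating point reported above. This is the ideal, lossless relation only; the real circuit's ZVS, stacking, and recycling change the detailed behaviour.

```python
def buck_duty_cycle(v_in, v_out):
    """Ideal (lossless) buck converter relation: V_out = D * V_in."""
    return v_out / v_in

# The 2.2 V input and 0.75-1.0 V output range reported above imply
# duty cycles of roughly 0.34 to 0.45.
d_min = buck_duty_cycle(2.2, 0.75)
d_max = buck_duty_cycle(2.2, 1.0)
```

At a 660 MHz switching frequency these duty cycles correspond to on-times well under a nanosecond, which is why driver losses dominate and the reuse/recycle techniques matter.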
Abstract:
The design and manufacture of a prototype chip level power supply is described, with both simulated and experimental results. Of particular interest is the inclusion of a fully integrated on-chip LC filter. A high switching frequency of 660MHz and the design of a device drive circuit reduce losses by supply stacking, low-swing signaling and charge recycling. The paper demonstrates that a chip level converter operating at high frequency can be built and shows how this can be achieved, using zero voltage switching techniques similar to those commonly used in larger converters. Both simulations and experimental data from a fabricated circuit in 0.18μm CMOS are included. The circuit converts 2.2V to 0.75∼1.0V at ∼55mA. ©2008 IEEE.
Abstract:
This paper discusses user target intention recognition algorithms for pointing-and-clicking tasks, aimed at reducing users' pointing time and difficulty. One of the first such algorithms, which predicts targets by comparing the bearing angles to targets [1], is compared with a Kalman filter prediction algorithm. Accuracy and sensitivity of prediction are used as performance criteria. The outcomes of a standard point-and-click experiment, collected from both able-bodied and impaired users, are used for performance comparison. © 2013 Springer-Verlag Berlin Heidelberg.
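The abstract does not give the paper's exact filter model, so the following is a generic 1-D constant-velocity Kalman filter over noisy cursor positions, with invented noise parameters, to show the kind of tracking such a predictor builds on.

```python
import random

random.seed(3)

def kalman_track(zs, dt=0.02, q=50.0, r=4.0):
    """1-D constant-velocity Kalman filter over noisy position samples zs.
    State is [position, velocity]; returns the filtered positions."""
    x = [zs[0], 0.0]                       # state estimate
    P = [[r, 0.0], [0.0, 100.0]]           # estimate covariance
    out = []
    for z in zs[1:]:
        # Predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q * dt**3 / 3
        p01 = P[0][1] + dt * P[1][1] + q * dt**2 / 2
        p10 = P[1][0] + dt * P[1][1] + q * dt**2 / 2
        p11 = P[1][1] + q * dt
        P = [[p00, p01], [p10, p11]]
        # Update with a position measurement (H = [1, 0]).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x[0])
    return out

# Noisy samples of a cursor moving at a constant 10 units/s.
zs = [10.0 * k * 0.02 + random.gauss(0.0, 1.0) for k in range(101)]
filtered = kalman_track(zs)
```

Once position and velocity are tracked, a predictor can extrapolate the motion forward or compare the estimated heading against candidate targets, which is the role the Kalman filter plays in the comparison above.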
Abstract:
Variable selection for regression is a classical statistical problem, motivated by concerns that too large a number of covariates may bring about overfitting and unnecessarily high measurement costs. Novel difficulties arise in streaming contexts, where the correlation structure of the process may be drifting, in which case it must be constantly tracked so that selections may be revised accordingly. A particularly interesting phenomenon is that non-selected covariates become missing variables, inducing bias on subsequent decisions. This raises an intricate exploration-exploitation tradeoff, whose dependence on the covariance tracking algorithm and the choice of variable selection scheme is too complex to be dealt with analytically. We hence capitalise on the strength of simulations to explore this problem, taking the opportunity to tackle the difficult task of simulating dynamic correlation structures. © 2008 IEEE.
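Tracking a drifting correlation structure, as described above, is often done with an exponentially weighted estimator. The following is a minimal illustrative sketch of that idea on a 2-D stream, not the scheme from the paper; the forgetting factor and the simulated stream are invented.

```python
import random

random.seed(2)

def ewma_mean_cov(stream, lam=0.98):
    """Exponentially weighted estimates of the mean and covariance of a
    2-D sample stream; the forgetting factor lam discounts old data so
    the estimate can follow a drifting correlation structure."""
    mu = [0.0, 0.0]
    cov = [[1.0, 0.0], [0.0, 1.0]]
    for x in stream:
        d = [x[0] - mu[0], x[1] - mu[1]]   # deviation from current mean
        mu = [mu[i] + (1.0 - lam) * d[i] for i in range(2)]
        for i in range(2):
            for j in range(2):
                cov[i][j] = lam * cov[i][j] + (1.0 - lam) * d[i] * d[j]
    return mu, cov

# Correlated stream: x2 = 0.8*x1 + noise, so cov[0][1] should settle
# near 0.8 (within sampling noise of the effective window).
stream = []
for _ in range(5000):
    u = random.gauss(0.0, 1.0)
    stream.append((u, 0.8 * u + 0.6 * random.gauss(0.0, 1.0)))
mu, cov = ewma_mean_cov(stream)
```

The exploration-exploitation tension in the abstract arises here too: covariates dropped from the selection stop contributing fresh samples, so their entries in a tracked covariance like this one go stale.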