901 results for Filter-rectify-filter-model
Abstract:
A computer model was developed to simulate cake formation and growth in cake filtration at the level of individual particles. The model was shown to be able to generate structural information and to quantify the cake thickness, average cake solidosity, filtrate volume, and either the filtrate flowrate (for constant-pressure filtration) or the pressure drop across the filter unit (for constant-rate filtration) as a function of filtration time. The effects of particle size distribution and of key operational variables such as initial filtration flowrate, maximum pressure drop and initial solidosity were examined on the basis of the simulated results, which are qualitatively comparable to those observed in physical experiments. The need for further development of the simulation was also discussed. (c) 2006 Elsevier Ltd. All rights reserved.
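The paper's simulation works at the level of individual particles; purely as a point of reference for the macroscopic quantities it reports (filtrate volume and flowrate versus time), the sketch below integrates the classical continuum relation for constant-pressure cake filtration, in which the cake resistance grows in proportion to the filtrate volume. All names and parameter values are illustrative assumptions and are not taken from the paper's particle-level model.

```python
import numpy as np

def constant_pressure_filtration(dp=1.0e5, area=1.0e-2, mu=1.0e-3,
                                 alpha=1.0e11, c=10.0, rm=1.0e10,
                                 t_end=600.0, dt=0.1):
    """Forward-Euler integration of the classical constant-pressure relation
    dV/dt = A^2 * dP / (mu * (alpha * c * V + A * Rm)),
    where alpha is the specific cake resistance, c the mass of solids deposited
    per unit filtrate volume and Rm the filter-medium resistance."""
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n + 1)
    v = np.zeros(n + 1)   # cumulative filtrate volume
    q = np.zeros(n + 1)   # instantaneous filtrate flowrate
    for k in range(n + 1):
        q[k] = area**2 * dp / (mu * (alpha * c * v[k] + area * rm))
        if k < n:
            v[k + 1] = v[k] + q[k] * dt   # accumulate filtrate over the step
    return t, v, q
```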
Abstract:
In various signal-channel-estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active or nonzero taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and subsequently provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms, which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm. © 2006 IEEE.
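For context, a minimal sketch of the baseline NLMS recursion that the detection-guided variants build on is given below: every tap is updated at every sample, which is exactly the behaviour that becomes slow for long, sparse channels. Variable names and the regularisation constant are illustrative, not taken from the paper.

```python
import numpy as np

def nlms_estimate(x, d, num_taps, mu=0.5, eps=1e-8):
    """Baseline NLMS adaptive FIR channel estimator
    (all taps adapted at every sample interval)."""
    x = np.asarray(x, dtype=float)   # channel input signal
    d = np.asarray(d, dtype=float)   # desired (channel output) signal
    w = np.zeros(num_taps)           # FIR tap estimates
    e = np.zeros(len(d))             # a-priori error signal
    for n in range(num_taps - 1, len(d)):
        u = x[n - num_taps + 1:n + 1][::-1]     # regressor: x[n], ..., x[n-L+1]
        e[n] = d[n] - w @ u                     # estimation error
        w = w + mu * e[n] * u / (u @ u + eps)   # normalised tap update
    return w, e
```

A detection-guided variant would restrict this update to the taps flagged as active by the structural-detection stage, which is where the computational saving for sparse channels comes from.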
Abstract:
Kalman inverse filtering is used to develop a methodology for real-time estimation of the forces acting at the interface between tyre and road on large off-highway mining trucks. The system model formulated is capable of estimating the three components of tyre-force at each wheel of the truck using a practical set of measurements and inputs. The estimated tyre-forces closely track those simulated by an ADAMS virtual-truck model. A sensitivity analysis determines the susceptibility of the tyre-force estimates to uncertainties in the truck's parameters.
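The truck-specific state and measurement models are not reproduced in the abstract; the sketch below only shows the generic linear Kalman predict/update recursion on which such an inverse-filtering force estimator is built, with all matrices supplied by the caller. This is an assumed, simplified setting rather than the paper's formulation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : prior state estimate and covariance
    z    : current measurement vector
    F, H : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances"""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with the new measurement
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new
```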
Abstract:
This paper reports preliminary progress on a principled approach to modelling nonstationary phenomena using neural networks. We are concerned with both parameter and model-order complexity estimation. The basic methodology assumes a Bayesian foundation. However, to allow the construction of pragmatic models, successive approximations have to be made to permit computational tractability. The lowest order corresponds to the (Extended) Kalman filter approach to parameter estimation, which has already been applied to neural networks. We illustrate some of the deficiencies of the existing approaches and discuss our preliminary generalisations by considering the application to nonstationary time series.
Abstract:
In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Using electricity load data and training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise and forgetting factors for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. We also find that a recently-proposed alternative novelty criterion, found to be more robust in stationary environments, does not fare so well in the non-stationary case due to the need for filter adaptability during training.
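As an illustration of the novelty criterion whose parameters the model-order increment procedure is found to be sensitive to, the sketch below shows the usual RAN-style growth test: a new RBF unit is added only when the input is far from every existing centre and the prediction error is large. The function and threshold names are illustrative, not the paper's notation.

```python
import numpy as np

def should_add_unit(x, centres, pred_error, dist_thresh, err_thresh):
    """RAN novelty test: grow the network only if the current input lies
    farther than dist_thresh from all existing centres AND the prediction
    error exceeds err_thresh; otherwise the existing parameters are adapted
    (e.g. with the extended Kalman filter)."""
    if len(centres) == 0:
        return True
    dists = np.linalg.norm(np.asarray(centres) - np.asarray(x), axis=1)
    return bool(dists.min() > dist_thresh and abs(pred_error) > err_thresh)
```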
Abstract:
A sieve plate distillation column has been constructed and interfaced to a minicomputer, with the necessary instrumentation for dynamics, estimation and control studies and with special bearing on low-cost and noise-free instrumentation. A dynamic simulation of the column with a binary liquid system has been compiled using deterministic models that include fluid dynamics via Brambilla's equation for tray liquid holdup calculations. The simulation predictions have been tested experimentally under steady-state and transient conditions. The simulator's predictions of the tray temperatures have shown reasonably close agreement with the measured values under steady-state conditions and in the face of a step change in the feed rate. A method of extending linear filtering theory to highly nonlinear systems with very nonlinear measurement functional relationships has been proposed and tested by simulation on binary distillation. The simulation results have proved that the proposed methodology can overcome the typical instability problems associated with Kalman filters. Three extended Kalman filters have been formulated and tested by simulation. The filters have been used to refine a much-simplified model sequentially and to estimate parameters such as the unmeasured feed composition using information from the column simulation. It is first assumed that corrupted tray composition measurements are made available to the filter; corrupted tray temperature measurements are then accessed instead. The simulation results have demonstrated the powerful capability of the Kalman filters to overcome the typical hardware problems associated with the operation of on-line analyzers in relation to distillation dynamics and control by, in effect, replacing them. A method of implementing estimator-aided feedforward (EAFF) control schemes has been proposed and tested by simulation on binary distillation. The results have shown that the EAFF scheme provides much better control and energy conservation than conventional feedback temperature control in the face of a sustained step change in the feed rate, or multiple changes in the feed rate, composition and temperature. Further extensions of this work are recommended as regards simulation, estimation and EAFF control.
Abstract:
This article examines whether UK portfolio returns are time varying, so that expected returns follow an AR(1) process as proposed by Conrad and Kaul for the USA. It explores this hypothesis for four portfolios formed on the basis of market capitalization. The portfolio returns are modelled using a Kalman filter signal-extraction model in which the unobservable expected return is the state variable and is allowed to evolve as a stationary first-order autoregressive process. It finds that this model is a good representation of returns and can account for most of the autocorrelation present in observed portfolio returns. The study concludes that UK portfolio returns are time varying and that the nature of the time variation introduces a substantial amount of autocorrelation into portfolio returns. Like Conrad and Kaul, it finds a link between the extent to which portfolio returns are time varying and the size of firms within a portfolio, but not the monotonic one found for the USA.
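A minimal sketch of the signal-extraction model described (observed return = unobserved AR(1) expected return + noise), filtered with a scalar Kalman recursion, is given below. The parameters phi, sigma_eta2 and sigma_eps2 are placeholders that would in practice be estimated by maximum likelihood.

```python
import numpy as np

def filter_expected_returns(r, phi, sigma_eta2, sigma_eps2):
    """Scalar Kalman filter for the state-space model
        r_t  = mu_t + eps_t,           eps_t ~ N(0, sigma_eps2)
        mu_t = phi * mu_{t-1} + eta_t, eta_t ~ N(0, sigma_eta2)
    where mu_t is the unobserved, time-varying expected return (|phi| < 1)."""
    r = np.asarray(r, dtype=float)
    mu = np.zeros(len(r))                      # filtered expected returns
    m, p = 0.0, sigma_eta2 / (1.0 - phi**2)    # stationary prior mean/variance
    for t in range(len(r)):
        m_pred = phi * m                       # predict the state
        p_pred = phi**2 * p + sigma_eta2
        k = p_pred / (p_pred + sigma_eps2)     # Kalman gain
        m = m_pred + k * (r[t] - m_pred)       # update with the observed return
        p = (1.0 - k) * p_pred
        mu[t] = m
    return mu
```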
Abstract:
In this article a partial-adjustment model, which shows how equity prices fail to adjust instantaneously to new information, is estimated using a Kalman filter. For the components of the Dow Jones Industrial 30 index I aim to identify whether overreaction or noise is the cause of serial correlation and high volatility associated with opening returns. I find that the tendency for overreaction in opening prices is much stronger than for closing prices; therefore, overreaction rather than noise may account for differences in the return behavior of opening and closing returns.
Abstract:
Purpose – A binary integer programming model for the simple assembly line balancing problem (SALBP), well known as SALBP-1, was formulated more than 30 years ago. Since then, a number of researchers have extended the model for variants of the assembly line balancing problem. The model is still prevalent nowadays, mainly because of the lower and upper bounds on task assignment; these properties avoid a significant increase in the number of decision variables. The purpose of this paper is to use an example to show that the model may lead to a confusing solution. Design/methodology/approach – The paper provides a remedial constraint set for the model to rectify the disordered sequence problem. Findings – The paper presents proof that the assembly line balancing model formulated by Patterson and Albracht may lead to a confusing solution. Originality/value – No one previously has found that the commonly used model is incorrect.
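For orientation, the generic textbook form of the SALBP-1 binary program referred to above is sketched here, with x_{jk} = 1 if task j is assigned to station k, task times t_j, cycle time c, precedence relation P, the station bounds E_j and L_j for each task (the bounds mentioned above), and a final, possibly dummy, task n whose station index is minimised. This is the standard formulation, not the paper's remedial constraint set, and the notation is assumed rather than the paper's own.

```latex
\begin{aligned}
\min \quad & \sum_{k=E_n}^{L_n} k\,x_{nk} \\
\text{s.t.} \quad
& \sum_{k=E_j}^{L_j} x_{jk} = 1 \quad \forall j
  && \text{(each task assigned to exactly one station)} \\
& \sum_{j} t_j\,x_{jk} \le c \quad \forall k
  && \text{(cycle-time limit at each station)} \\
& \sum_{k=E_h}^{L_h} k\,x_{hk} \;\le\; \sum_{k=E_j}^{L_j} k\,x_{jk}
  \quad \forall (h,j)\in P
  && \text{(task $h$ must precede task $j$)} \\
& x_{jk}\in\{0,1\}.
\end{aligned}
```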
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis-function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double-well system; and the multivariate, chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. These new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system's states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
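For readers unfamiliar with the benchmark systems, a short Euler-Maruyama sketch of the stochastic double-well test case is shown below, using the common drift 4x(θ - x²); the constants are illustrative and are not the settings used in the thesis.

```python
import numpy as np

def simulate_double_well(theta=1.0, sigma=0.5, x0=0.0,
                         dt=0.01, n_steps=10_000, seed=0):
    """Euler-Maruyama simulation of the double-well SDE
    dx = 4*x*(theta - x**2) dt + sigma dW  (illustrative parameterisation)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = 4.0 * x[k] * (theta - x[k]**2)
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

Trajectories generated this way, observed sparsely and with noise, are the kind of data on which variational smoothers and ensemble/unscented Kalman methods are typically compared.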
Abstract:
Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (1st derivative) filter, or as zero-crossings in the 2nd derivative (ZCs). We tested those ideas using a stimulus that has no local peaks of gradient and no ZCs, at any scale. The stimulus profile is analogous to the Mach ramp, but it is the luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux; the luminance profile is a blurred triangle-wave. For all image-blurs tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These Mach edges correspond to peaks and troughs in the 3rd derivative. Thus Mach edges are inconsistent with many standard edge-detection schemes, but are nicely predicted by a recent model that finds edge points with a 2-stage sequence of 1st then 2nd derivative operators, each followed by a half-wave rectifier.
Abstract:
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
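A minimal one-scale sketch of the two-stage cascade described above (Gaussian first-derivative filter, half-wave rectification, Gaussian second-derivative filter) is given below; the full model evaluates this response over a range of scales and reads edge location and blur from the peak of the scale-space map, which is omitted here. The scale values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def frf_response(luminance, sigma1=1.0, sigma2=2.0, polarity=+1):
    """One polarity channel of the filter-rectify-filter cascade:
    1st-derivative filtering, half-wave rectification, 2nd-derivative filtering."""
    gradient = gaussian_filter1d(luminance, sigma1, order=1)   # stage 1: gradient
    rectified = np.maximum(polarity * gradient, 0.0)           # half-wave rectifier
    return gaussian_filter1d(rectified, sigma2, order=2)       # stage 2: 2nd derivative

# illustrative use: locate a blurred step edge at the extremum of the response
signal = gaussian_filter1d((np.arange(1001) > 500).astype(float), 30.0)  # blurred step
response = frf_response(signal, sigma1=10.0, sigma2=20.0)
edge_index = int(np.argmax(np.abs(response)))   # lands close to sample 500, the true edge
```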
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (first-derivative) filter, or as zero-crossings (ZCs) in the second-derivative. A variety of multi-scale models are based on this idea. We tested this approach by devising a stimulus that has no local peaks of gradient and no ZCs, at any scale. Our stimulus profile is analogous to the classic Mach-band stimulus, but it is the local luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux. The luminance profile is a smoothed triangle wave and is obtained by integrating the gradient profile. Subjects used a cursor to mark the position and polarity of perceived edges. For all the ramp-widths tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These new Mach edges correspond to peaks and troughs in the third-derivative. They are analogous to Mach bands - light and dark bars are seen where there are no luminance peaks but there are peaks in the second derivative. Here, peaks in the third derivative were seen as light-to-dark edges, troughs as dark-to-light edges. Thus Mach edges are inconsistent with many standard edge detectors, but are nicely predicted by a new model that uses a (nonlinear) third-derivative operator to find edge points.
Abstract:
Marr's work offered guidelines on how to investigate vision (the theory - algorithm - implementation distinction), as well as specific proposals on how vision is done. Many of the latter have inevitably been superseded, but the approach was inspirational and remains so. Marr saw the computational study of vision as tightly linked to psychophysics and neurophysiology, but the last twenty years have seen some weakening of that integration. Because feature detection is a key stage in early human vision, we have returned to basic questions about representation of edges at coarse and fine scales. We describe an explicit model in the spirit of the primal sketch, but tightly constrained by psychophysical data. Results from two tasks (location-marking and blur-matching) point strongly to the central role played by second-derivative operators, as proposed by Marr and Hildreth. Edge location and blur are evaluated by finding the location and scale of the Gaussian-derivative 'template' that best matches the second-derivative profile ('signature') of the edge. The system is scale-invariant, and accurately predicts blur-matching data for a wide variety of 1-D and 2-D images. By finding the best-fitting scale, it implements a form of local scale selection and circumvents the knotty problem of integrating filter outputs across scales. [Supported by BBSRC and the Wellcome Trust]