899 results for "Models performance"
Abstract:
The analysis, optimization and design of flotation circuits using models and simulators have improved significantly over the last 15 years. Mineral flotation is now generally better understood through major advances in measuring and modeling the sub-processes within the flotation system. In addition, new and better methods have been derived to represent the floatability of particles as they move around a flotation circuit. A simulator has been developed that combines the effects of all of these sub-processes to predict the metallurgical performance of a flotation circuit. This paper presents an overview of the simulator, JKSimFloat V6.1PLUS, and its use in improving industrial flotation plant performance. The application of the simulator at various operations is discussed, with particular emphasis on the use of JKSimFloat V6.1PLUS in improving flotation circuit performance.
Abstract:
Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate the noise. The performance of the various methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when the noise is small; but, as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low to medium noise situations.
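To make the regularized solution and one of the parameter-selection rules concrete, here is a minimal sketch of zero-order Tikhonov regularization with GCV-based choice of the regularization parameter, computed via the SVD. It assumes a generic full-column-rank linear forward model A mapping epicardial sources to body surface potentials; the matrix, data and the helper name tikhonov_gcv are illustrative, not taken from the paper.

```python
import numpy as np

def tikhonov_gcv(A, b, lams):
    """Zero-order Tikhonov solutions and GCV scores via the SVD of A (A assumed full column rank)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                                  # projections of the data onto the left singular vectors
    m = A.shape[0]
    sols, gcv = [], []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)                  # Tikhonov filter factors
        x = Vt.T @ ((f / s) * beta)                 # regularized solution for this lambda
        resid2 = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)  # ||A x - b||^2
        trace = m - np.sum(f)                       # trace(I - influence matrix)
        sols.append(x)
        gcv.append(resid2 / trace**2)               # GCV function to be minimized over lambda
    best = int(np.argmin(gcv))
    return sols[best], lams[best]

# Hypothetical small test: A stands in for a source-to-body-surface transfer matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25))
x_true = rng.standard_normal(25)
b = A @ x_true + 0.05 * rng.standard_normal(40)     # noisy "body surface" measurements
x_hat, lam_hat = tikhonov_gcv(A, b, np.logspace(-4, 1, 60))
```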
Abstract:
Around the world, consumers and retailers of fresh produce are becoming more and more discerning about factors such as food safety and traceability, health, convenience and the sustainability of production systems, and in doing so they are changing the way in which fresh produce supply chains are configured and managed. When consumers demand fresh, safe, convenient, value-for-money produce, retailers in an increasingly competitive environment are attracted to those business models most capable of meeting these demands profitably. Traditional models are proving less and less able to deliver competitive advantage in such an environment. As a result, opportunistic, adversarial, price-based approaches to doing business between chain members are being replaced by approaches that are more strategic, collaborative and value-based. The shaping force behind this change is the need for producers, wholesalers, category managers, retailers and consumers to have more certainty about the performance of the supply chains upon which they rely. Certainty is generated through the supply chain's ability to create, deliver and share value. How to build supply chains that create, deliver and share value is arguably the single biggest challenge to the competitiveness of fresh produce firms, and therefore to the industries to which they belong.
Abstract:
There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, to train such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
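For readers who want a concrete picture of an EM-trained non-linear latent variable model of this kind, below is a minimal sketch in the spirit of the Generative Topographic Mapping family: a fixed RBF mapping from a 1-D latent grid to data space, an isotropic Gaussian noise model, and closed-form E and M steps. The toy data, grid sizes and the small ridge term lam are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy 1-D curve embedded in 2-D (a single hidden "cause" generating the data).
N = 400
t = rng.uniform(-1, 1, N)
X = np.column_stack([t + 0.05 * rng.standard_normal(N),
                     t ** 2 + 0.05 * rng.standard_normal(N)])

# Latent grid and fixed RBF basis functions (illustrative sizes).
K, M, D = 60, 12, X.shape[1]
u = np.linspace(-1, 1, K)                              # latent sample points
mu = np.linspace(-1, 1, M)                             # RBF centres
sigma = 2.0 * (mu[1] - mu[0])
Phi = np.exp(-0.5 * ((u[:, None] - mu[None, :]) / sigma) ** 2)   # K x M design matrix

W = 0.1 * rng.standard_normal((M, D))                  # mapping weights
beta = 1.0                                             # inverse noise variance
lam = 1e-3                                             # small ridge term for numerical stability

for _ in range(50):
    # E-step: responsibilities of each latent point for each data point.
    Y = Phi @ W                                        # K x D mixture centres
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)   # K x N squared distances
    logR = -0.5 * beta * d2
    logR -= logR.max(0, keepdims=True)
    R = np.exp(logR)
    R /= R.sum(0, keepdims=True)
    # M-step: weighted least squares for W, closed-form update for beta.
    G = R.sum(1)
    A = Phi.T @ (G[:, None] * Phi) + lam * np.eye(M)
    W = np.linalg.solve(A, Phi.T @ (R @ X))
    Y = Phi @ W
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)
    beta = N * D / (R * d2).sum()
```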
Abstract:
Deformable models are an attractive approach to recognizing objects, such as handwritten characters, that have considerable within-class variability. However, fitting the models to data involves severe search problems, which could be reduced if a better starting point for the search were available. We show that by training a neural network to predict how a deformable model should be instantiated from an input image, such improved starting points can be obtained. This method has been implemented for a system that recognizes handwritten digits using deformable models, and the results show that the search time can be significantly reduced without compromising recognition performance. © 1997 Academic Press.
Abstract:
This paper presents a forecasting technique for forward energy prices, one day ahead. The technique combines a wavelet transform with forecasting models such as multi-layer perceptrons, linear regression or GARCH. These techniques are applied to real data from the UK gas markets to evaluate their performance. The results show that the forecasting accuracy is improved significantly by using the wavelet transform. The methodology can also be applied to forecasting market clearing prices and electricity/gas loads.
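A common way to combine a wavelet transform with a forecaster is to split the series into additive approximation and detail components, forecast each component one step ahead, and sum the component forecasts. The sketch below illustrates that pattern under simplifying assumptions: it uses PyWavelets for the decomposition and a plain linear AR model per component, whereas the paper uses MLP, linear regression or GARCH forecasters; the synthetic prices series, wavelet, level and lag order are placeholders.

```python
import numpy as np
import pywt                      # PyWavelets (assumed available)

def additive_components(x, wavelet="db4", level=3):
    """Split a series into additive approximation/detail components, each of full length."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(keep, wavelet)[:len(x)])
    return comps                 # the components sum (approximately) to x

def ar_one_step(y, p=5):
    """Fit a linear AR(p) by least squares and return the one-step-ahead forecast."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    w, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return y[-p:] @ w

# Hypothetical day-ahead forecast for a forward gas price series.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.standard_normal(512)) + 50.0   # stand-in for the UK gas data
forecast = sum(ar_one_step(c) for c in additive_components(prices))
```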
Abstract:
The deficiencies of stationary models applied to financial time series are well documented. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We use dynamic switching (modelled by a hidden Markov model) combined with a linear dynamical system in a hybrid switching state space model (SSSM) and discuss the practical details of training such models with the variational EM algorithm of [Ghahramani and Hinton, 1998]. The performance of the SSSM is evaluated on several financial data sets and it is shown to improve on a number of existing benchmark methods.
Abstract:
Current methods for retrieving near-surface winds from scatterometer observations over the ocean surface require a forward sensor model which maps the wind vector to the measured backscatter. This paper develops a hybrid neural network forward model, which retains the physical understanding embodied in CMOD4 but incorporates greater flexibility, allowing a better fit to the observations. By introducing a separate model for the mid-beam and using a common model for the fore- and aft-beams, we show a significant improvement in local wind vector retrieval. The hybrid model also fits the scatterometer observations more closely. The model is trained in a Bayesian framework, accounting for the noise on the wind vector inputs. We show that adding more high wind speed observations to the training set improves wind vector retrieval at high wind speeds without compromising performance at medium or low wind speeds.
Abstract:
In the analysis and prediction of many real-world time series, the assumption of stationarity is not valid. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We introduce a new model which combines dynamic switching (controlled by a hidden Markov model) with a non-linear dynamical system. We show how to train this hybrid model within a maximum likelihood framework and evaluate its performance on both synthetic and financial data.
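As a rough illustration of the switching idea, the sketch below evaluates the likelihood of a two-regime switching autoregression using the standard HMM forward recursion. It deliberately simplifies the setup described above: each regime here is a linear AR(1) rather than a non-linear dynamical system, and all parameter values (transition matrix, AR coefficients, noise levels) are made-up placeholders; in a maximum likelihood treatment they would be fitted, e.g. by EM or by gradient ascent on this log-likelihood.

```python
import numpy as np
from scipy.stats import norm

def switching_ar_loglik(y, trans, a, sigma, p0):
    """Log-likelihood of a 2-regime switching AR(1) via the HMM forward recursion."""
    alpha = p0 * norm.pdf(y[1], a * y[0], sigma)      # per-regime joint weight at t = 1
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(2, len(y)):
        pred = trans.T @ alpha                         # predictive regime probabilities
        like = norm.pdf(y[t], a * y[t - 1], sigma)     # emission density under each regime
        alpha = pred * like
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Hypothetical regimes: persistent vs. mean-reverting dynamics with different noise levels.
trans = np.array([[0.95, 0.05], [0.10, 0.90]])         # regime transition matrix
a = np.array([0.8, -0.2])                              # AR(1) coefficient per regime
sigma = np.array([0.5, 1.5])                           # noise standard deviation per regime
p0 = np.array([0.5, 0.5])
rng = np.random.default_rng(0)
y = rng.standard_normal(200)                           # stand-in for a financial series
print(switching_ar_loglik(y, trans, a, sigma, p0))
```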
Abstract:
We employ the methods of statistical physics to study the performance of Gallager-type error-correcting codes. In this approach, the transmitted codeword comprises Boolean sums of the original message bits selected by two randomly constructed sparse matrices. We show that a broad range of these codes potentially saturate Shannon's bound but are limited by the decoding dynamics used. Other codes show sub-optimal performance but are not restricted by the decoding dynamics. We show how these codes may also be employed as a practical public-key cryptosystem, with performance competitive with modern cryptographic methods.
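One common reading of this construction (in the MacKay-Neal / Kanter-Saad style) is that, given two sparse binary matrices A and B, the codeword t satisfies B t = A s (mod 2), so every transmitted bit is a Boolean (mod-2) sum of selected message bits. The sketch below implements only this encoding step over GF(2); the matrix sizes and densities are toy values, and the decoding side (belief propagation / TAP, which the statistical physics analysis addresses) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_solve(M, y):
    """Solve M t = y over GF(2) by Gaussian elimination; return None if M is singular."""
    M, y = M.copy() % 2, y.copy() % 2
    n = M.shape[0]
    for col in range(n):
        rows = [r for r in range(col, n) if M[r, col]]
        if not rows:
            return None
        piv = rows[0]
        M[[col, piv]], y[[col, piv]] = M[[piv, col]], y[[piv, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] = (M[r] + M[col]) % 2
                y[r] = (y[r] + y[col]) % 2
    return y

def sparse_binary(rows, cols, k):
    """Random binary matrix with k ones per row (toy stand-in for a sparse matrix)."""
    M = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        M[r, rng.choice(cols, size=k, replace=False)] = 1
    return M

K, N, ones_per_row = 6, 12, 3          # K message bits, N codeword bits (rate K/N)
s = rng.integers(0, 2, size=K)         # message bits
A = sparse_binary(N, K, ones_per_row)
B = sparse_binary(N, N, ones_per_row)
t = gf2_solve(B, (A @ s) % 2)          # codeword: solves B t = A s (mod 2)
while t is None:                       # retry until B is invertible over GF(2)
    B = sparse_binary(N, N, ones_per_row)
    t = gf2_solve(B, (A @ s) % 2)
```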
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.
Abstract:
In this paper, the exchange rate forecasting performance of neural network models is evaluated against random walk, autoregressive moving average and generalised autoregressive conditional heteroskedasticity models. There are no guidelines available for choosing the parameters of neural network models, and therefore the parameters are chosen according to what the researcher considers to be best. Such an approach, however, implies that the risk of making bad decisions is extremely high, which could explain why, in many studies, neural network models do not consistently perform better than their time series counterparts. In this paper, through extensive experimentation, the level of subjectivity in building neural network models is considerably reduced, giving them a better chance of performing well. The results show that, in general, neural network models perform better than the traditionally used time series models in forecasting exchange rates.
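As a minimal illustration of this kind of comparison, the sketch below pits a naive random-walk forecast (which predicts a zero return) against a small multi-layer perceptron fed with lagged log-returns, and compares out-of-sample RMSE. The synthetic series, lag order, network size and train/test split are all illustrative assumptions; the paper's actual data, benchmark specifications (ARMA, GARCH) and model-selection procedure are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rates = 1.5 * np.exp(np.cumsum(0.003 * rng.standard_normal(600)))   # stand-in for a daily exchange rate

p, train = 5, 400
r = np.diff(np.log(rates))                                 # log-returns
X = np.column_stack([r[i:len(r) - p + i] for i in range(p)])   # p lagged returns as inputs
y = r[p:]                                                  # next-step return as target

nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
nn.fit(X[:train], y[:train])
pred = nn.predict(X[train:])

rmse_nn = np.sqrt(np.mean((pred - y[train:]) ** 2))
rmse_rw = np.sqrt(np.mean(y[train:] ** 2))                 # random walk forecasts a zero return
print(f"random walk RMSE={rmse_rw:.5f}  MLP RMSE={rmse_nn:.5f}")
```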
Abstract:
Linear models reach their limitations in applications with nonlinearities in the data. In this paper, new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as the linear forecasting models, whereas neural networks (NN) are used as the non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NNs. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed, to a certain extent, to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
Abstract:
This paper introduces a compact form for the maximum value of the non-Archimedean epsilon in Data Envelopment Analysis (DEA) models applied to technology selection, without the need to solve a linear program (LP). Using this method, the computational performance of the common-weight multi-criteria decision-making (MCDM) DEA model proposed by Karsak and Ahiska (International Journal of Production Research, 2005, 43(8), 1537-1554) is improved. This improvement is significant when computational issues and complexity analysis are a concern.
Abstract:
Ad hoc wireless sensor networks (WSNs) are formed from self-organising configurations of distributed, energy-constrained, autonomous sensor nodes. The service lifetime of such sensor nodes depends on the power supply and the energy consumption, which is typically dominated by the communication subsystem. One of the key challenges in unlocking the potential of such data-gathering sensor networks is conserving energy so as to maximise their post-deployment active lifetime. This thesis describes research carried out on the continued development of the novel energy-efficient Optimised grids algorithm, which increases WSN lifetime and improves QoS parameters, yielding higher throughput and lower latency and jitter for the next generation of WSNs. Based on the relationship between range and traffic, the Optimised grids algorithm provides a robust, traffic-dependent, energy-efficient grid size that minimises the cluster head energy consumption in each grid and balances energy use throughout the network. Efficient spatial reusability allows the Optimised grids algorithm to improve network QoS parameters. The most important advantage of this model is that it can be applied to all one- and two-dimensional traffic scenarios where the traffic load may fluctuate due to sensor activities. During traffic fluctuations the Optimised grids algorithm can be used to re-optimise the wireless sensor network to bring further benefits in energy reduction and improvement in QoS parameters. As idle energy becomes dominant at lower traffic loads, the new Sleep Optimised grids model incorporates sleep and idle energy duty cycles that can be implemented to achieve further network lifetime gains in all wireless sensor network models. Another key advantage of the Optimised grids algorithm is that it can be implemented alongside existing energy-saving protocols such as GAF, LEACH, SMAC and TMAC to further enhance network lifetimes and improve QoS parameters. The Optimised grids algorithm does not interfere with these protocols, but creates an overlay that optimises the grid sizes and hence the transmission range of the wireless sensor nodes.