970 results for Neural estimation
Abstract:
This paper demonstrates that artificial neural networks can be used effectively to estimate parameters related to the study of atmospheric conditions in the design of high-voltage substations. Specifically, the neural networks are used to compute the variation of electric field intensity and critical disruptive voltage in substations, taking into account several atmospheric factors such as pressure, temperature, and humidity. Simulation examples are presented to validate the proposed approach. The results obtained from experimental evidence and numerical simulations allowed verification of the influence of atmospheric conditions on the design of substations with respect to lightning.
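Since the abstract above turns on how atmospheric conditions shift the critical disruptive voltage, a brief numeric illustration may help. The sketch below is not the paper's neural model; it only computes the classical relative air density correction that such a model would have to learn. The reference conditions (20 °C, 101.3 kPa), the example values, and the omission of the humidity correction are assumptions made for brevity.

```python
# A minimal sketch (not the paper's neural model): the classical relative
# air density correction often used when scaling disruptive voltages to
# non-standard atmospheric conditions. Reference values are assumptions
# (20 degC, 101.3 kPa); the humidity correction is omitted for brevity.

def relative_air_density(pressure_kpa: float, temperature_c: float,
                         p0_kpa: float = 101.3, t0_c: float = 20.0) -> float:
    """Relative air density delta with respect to reference conditions."""
    return (pressure_kpa / p0_kpa) * (273.0 + t0_c) / (273.0 + temperature_c)

def corrected_disruptive_voltage(v_standard_kv: float,
                                 pressure_kpa: float,
                                 temperature_c: float) -> float:
    """First-order correction of a critical disruptive voltage."""
    return v_standard_kv * relative_air_density(pressure_kpa, temperature_c)

if __name__ == "__main__":
    # Example (hypothetical numbers): a 550 kV reference level at 94 kPa and 35 degC.
    print(corrected_disruptive_voltage(550.0, 94.0, 35.0))
```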
Abstract:
The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. This paper describes a novel method using artificial neural networks to solve robust parameter estimation problems for nonlinear models with unknown-but-bounded errors and uncertainties. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to its equilibrium points, and a solution to the robust estimation problem with unknown-but-bounded errors corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.
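As a rough illustration of the idea that an estimate can be read off as the equilibrium point of a network, the sketch below runs a generic gradient-type (Hopfield-style) dynamical system whose energy is a least-squares cost. It is not the modified Hopfield network of the abstract: the valid-subspace computation of the internal parameters and the unknown-but-bounded error treatment are not reproduced, and the synthetic data are an assumption.

```python
import numpy as np

# A minimal sketch, not the paper's modified Hopfield network: a gradient-type
# (Hopfield-style) dynamical system whose equilibrium point is the ordinary
# least-squares estimate of a linear model y = X @ theta + e.

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ theta_true + 0.05 * rng.normal(size=200)   # small, bounded-ish noise

theta = np.zeros(3)          # network state
eta = 1e-3                   # integration step / learning rate
for _ in range(5000):
    grad = X.T @ (X @ theta - y)   # gradient of the quadratic energy
    theta -= eta * grad            # discretized network dynamics
print(theta)                       # settles near theta_true
```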
Abstract:
A novel approach for solving robust parameter estimation problems is presented for processes with unknown-but-bounded errors and uncertainties. An artificial neural network is developed to calculate a membership set for the model parameters. Techniques from fuzzy logic control lead the network to its equilibrium points. Simulated examples are presented as an illustration of the proposed technique. The results represent a significant improvement over previously proposed methods. (C) 1999 IMACS/Elsevier B.V. All rights reserved.
Abstract:
This paper considers the role of automatic estimation of crowd density and its importance for the automatic monitoring of areas where crowds are expected to be present. A new technique is proposed which is able to estimate densities ranging from very low to very high concentrations of people, a difficult problem because in a crowd only parts of people's bodies appear. The new technique is based on the differences between the texture patterns of crowd images: images of low-density crowds tend to present coarse textures, while images of dense crowds tend to present fine textures. The image pixels are classified into different texture classes, and statistics of those classes are used to estimate the number of people. The texture classification and the estimation of people density are carried out by means of self-organising neural networks. Results obtained for the estimation of the number of people in a specific area of Liverpool Street Railway Station in London (UK) are presented. (C) 1998 Elsevier B.V. Ltd. All rights reserved.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Human beings perceive images through their properties, such as colour, shape, size, and texture. Texture is a fertile source of information about the physical environment: images of low-density crowds tend to present coarse textures, while images of dense crowds tend to present fine textures. This paper describes a new technique for automatic estimation of crowd density, which is part of the problem of automatic crowd monitoring, using texture information based on grey-level transition probabilities in digitised images. Crowd density feature vectors are extracted from such images and used by a self-organising neural network, which is responsible for the crowd density estimation. Results obtained for the estimation of the number of people in a specific area of Liverpool Street Railway Station in London (UK) are presented.
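To make the texture idea concrete, the sketch below computes grey-level transition (co-occurrence) probabilities for a horizontal pixel offset and reduces them to a small feature vector of the kind a self-organising network could consume. The quantisation to 8 grey levels, the particular statistics, and the random stand-in image are assumptions; the classifier itself is not shown.

```python
import numpy as np

# A minimal sketch of the texture side of the abstract: grey-level transition
# (co-occurrence) probabilities for the horizontal offset (0, 1), reduced to a
# small feature vector. levels=8 and the chosen statistics are assumptions.

def transition_matrix(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Normalized grey-level transition probabilities P(i -> j) for offset (0, 1)."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    counts = np.zeros((levels, levels))
    np.add.at(counts, (left, right), 1)
    return counts / counts.sum()

def texture_features(p: np.ndarray) -> np.ndarray:
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([contrast, energy, homogeneity, entropy])

# Example with a random 8-bit "image"; a real crowd image patch would go here.
img = np.random.default_rng(1).integers(0, 256, size=(128, 128))
print(texture_features(transition_matrix(img)))
```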
Abstract:
Localization is information of fundamental importance for carrying out various tasks in mobile robotics. The exact degree of precision required depends on the nature of the task. GPS provides global position estimation but is restricted to outdoor environments and has an inherent imprecision of a few meters. In indoor spaces, other sensors such as lasers and cameras are commonly used for position estimation, but these require landmarks (or maps) in the environment and a fair amount of computation to process complex algorithms; they also have a limited field of view. Wireless Networks (WN) are now widely available in indoor environments and can support efficient global localization with relatively low computing resources. However, the inherent instability of the wireless signal prevents it from being used for very accurate position estimation. The growth in the number of Access Points (AP) increases the overlap of signal coverage areas, and this could be a useful means of improving localization precision. In this paper we evaluate the impact of the number of Access Points on the localization of mobile nodes using Artificial Neural Networks (ANN). We use three to eight APs as signal sources and show how the ANNs learn and generalize from the data. In addition, we evaluate the robustness of the ANNs and a heuristic intended to decrease the localization error. To validate our approach, several ANN topologies have been evaluated in experimental tests conducted with a mobile node in an indoor space.
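A minimal sketch of the regression setting described above, assuming a synthetic log-distance signal model in place of the real measurements: an MLP maps a vector of RSSI readings from a handful of APs to a 2-D position. The network topology, noise level, and room size are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# A minimal sketch with synthetic data: an MLP maps RSSI readings from n_aps
# access points to a 2-D position. The log-distance model and all constants
# are assumptions used only to generate plausible inputs.

rng = np.random.default_rng(0)
n_aps, n_samples = 6, 2000
ap_xy = rng.uniform(0, 20, size=(n_aps, 2))            # AP positions (m)
pos = rng.uniform(0, 20, size=(n_samples, 2))          # mobile node positions (m)
d = np.linalg.norm(pos[:, None, :] - ap_xy[None], axis=2)
rssi = -40 - 20 * np.log10(d + 0.1) + rng.normal(0, 2, size=d.shape)  # dBm + noise

X_tr, X_te, y_tr, y_te = train_test_split(rssi, pos, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
err = np.linalg.norm(net.predict(X_te) - y_te, axis=1)
print(f"mean localization error: {err.mean():.2f} m")
```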
Abstract:
Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, some models have been developed using published exposure databases with their corresponding exposure determinants. These models are designed to be applied to reported exposure determinants obtained from study subjects or to exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained. In an effort to improve the prediction accuracy and generalizability of these models, and considering that the limitations encountered in previous studies might be due to limitations in the applicability of traditional statistical methods and concepts, the use of computer science-derived data analysis methods, predominantly machine learning approaches, was proposed and explored in this study. The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational outcomes based on literature-derived databases, and to compare, using cross-validation and data splitting techniques, the resulting prediction capacity to that of traditional regression models. Two cases were addressed: the categorical case, where the exposure level was measured as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where the result of the exposure is expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride, and trichloroethylene were used. When compared to regression estimates, the results showed better accuracy of decision tree/ensemble techniques for the categorical case, while neural networks were better for estimation of continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy. Estimations based on literature-based databases using machine learning techniques might provide an advantage when applied to methodologies that combine 'expert inputs' with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to more accurately estimate exposures from literature-based exposure databases might represent a starting point toward independence from expert judgment.
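The comparison described above can be sketched with off-the-shelf estimators and cross-validation; the snippet below does so on synthetic data, since the literature-derived exposure databases are not reproduced here. The feature matrix, target, and model settings are assumptions chosen only to show the shape of the evaluation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# A minimal sketch of the model comparison on synthetic data: the feature
# matrix stands in for exposure determinants and the target for a measured
# concentration; all settings are assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                 # hypothetical exposure determinants
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 2.0]) + 0.5 * rng.normal(size=300)

models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"{name}: MSE = {-scores.mean():.3f}")
```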
Abstract:
Over the last ten years, Salamanca has been considered among the most polluted cities in México. This paper presents a Self-Organizing Map (SOM) neural network application to classify pollution data and automate the determination of the air pollution level for sulphur dioxide (SO2) in Salamanca. Meteorological parameters are well known to be important factors contributing to air quality estimation and prediction. In order to observe the behavior of, and clarify the influence of, wind parameters on SO2 concentrations, a SOM neural network has been implemented over a one-year period. The main advantage of the SOM is that it allows data from different sensors to be integrated and provides readily interpretable results. In particular, it is a powerful mapping and classification tool, which presents information in an accessible way and facilitates the task of establishing an order of priority among the distinguished groups of concentrations depending on their need for further research or remediation actions in subsequent management steps. The results show a significant correlation between pollutant concentrations and some environmental variables.
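For readers unfamiliar with the method, the sketch below trains a small self-organising map from scratch on synthetic three-dimensional vectors standing in for measured (wind, meteorological, SO2) values. The grid size, neighbourhood schedule, and data are assumptions; the point is only how inputs get summarised by their best matching units.

```python
import numpy as np

# A minimal self-organising map sketch on synthetic data. The 6x6 grid, the
# learning-rate and neighbourhood schedules, and the fake sensor vectors are
# assumptions; the real application clusters measured pollution data.

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))                     # stand-in for sensor vectors
data = (data - data.mean(0)) / data.std(0)           # normalise features

grid = 6                                             # 6 x 6 map
w = rng.normal(size=(grid, grid, 3))                 # codebook vectors
gy, gx = np.indices((grid, grid))

n_iter = 3000
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    d = np.linalg.norm(w - x, axis=2)                # distance to every unit
    by, bx = np.unravel_index(np.argmin(d), d.shape) # best matching unit (BMU)
    lr = 0.5 * (1 - t / n_iter)                      # shrinking learning rate
    sigma = 3.0 * (1 - t / n_iter) + 0.5             # shrinking neighbourhood
    h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
    w += lr * h[..., None] * (x - w)                 # pull neighbourhood toward x

# each input is now summarised by the coordinates of its best matching unit
bmu = [np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=2)), (grid, grid))
       for x in data]
print(bmu[:5])
```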
Abstract:
One of the biggest challenges that software developers face is making an accurate estimate of project effort. In this work, radial basis function neural networks have been applied to software effort estimation using a NASA dataset. This paper evaluates and compares the radial basis function network against a regression model. The results show that the radial basis function neural network obtained a lower mean square error than the regression method.
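A minimal radial basis function network can be sketched in a few lines: k-means chooses the centres, Gaussian activations form the hidden layer, and the output weights are fitted by linear least squares. The synthetic effort data below stand in for the NASA dataset, and the centre count and kernel width are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# A minimal RBF network sketch: k-means centres, Gaussian hidden activations,
# linear least-squares output weights. Data, centre count, and width are
# assumptions (the study's NASA effort dataset is not reproduced here).

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))                  # e.g. project size (KLOC)
y = 2.5 * X[:, 0] ** 1.05 + rng.normal(0, 2, 100)      # synthetic effort values

k, width = 8, 2.0
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

def rbf_features(X):
    d = np.linalg.norm(X[:, None, :] - centres[None], axis=2)
    return np.exp(-(d ** 2) / (2 * width ** 2))

Phi = np.hstack([rbf_features(X), np.ones((len(X), 1))])   # add bias column
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
print("MSE:", np.mean((pred - y) ** 2))
```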
Abstract:
This Master's thesis is devoted to developing a model for estimating the technical state of a gas turbine engine (GTE). Approaches to data preparation, especially to handling unbalanced data, are presented in the thesis. In order to estimate model performance effectively, a suitable metric was chosen. The goal of the thesis is to analyze monitoring parameter data and to develop a model of GTE technical state estimation based on these data.
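The two issues the thesis highlights, unbalanced data and the choice of evaluation metric, can be illustrated on synthetic data as below. The abstract does not name the model or the metric, so the random forest with balanced class weights and the F1 / balanced-accuracy scores are stand-ins used purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, balanced_accuracy_score
from sklearn.model_selection import train_test_split

# A minimal sketch of handling an unbalanced label distribution (class weights)
# and scoring with metrics not dominated by the majority class. The synthetic
# "monitoring parameters" and the chosen model/metrics are assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                       # stand-in monitoring parameters
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 2.2).astype(int)  # rare class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("positive-class rate:", y.mean())
print("F1:", f1_score(y_te, pred))
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
```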
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
We present results concerning the application of the Good-Turing (GT) estimation method to the frequentist n-tuple system. We show that the Good-Turing method can, to a certain extent, rectify the Zero Frequency Problem by providing, within a formal framework, improved estimates of small tallies. We also show that it leads to better n-tuple system performance than Maximum Likelihood estimation (MLE). However, preliminary experimental results suggest that replacing zero tallies with an arbitrary constant close to zero before MLE yields better performance than that of the GT system.
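For reference, the core Good-Turing adjustment is r* = (r + 1) N_{r+1} / N_r, with N_1 / N reserved for unseen (zero-frequency) events. The sketch below applies the unsmoothed rule to a toy word-count example; real uses, including the n-tuple setting above, smooth the N_r frequencies first, which is omitted here.

```python
from collections import Counter

# A minimal sketch of (unsmoothed) Good-Turing count adjustment:
#   r* = (r + 1) * N_{r+1} / N_r,  with mass N_1 / N reserved for unseen events.
# Smoothing of the N_r frequencies is omitted; falling back to the raw count
# when N_{r+1} == 0 is a simplification.

def good_turing(counts):
    n_r = Counter(counts.values())          # N_r: how many items were seen r times
    total = sum(counts.values())
    adjusted = {}
    for item, r in counts.items():
        if n_r.get(r + 1):
            adjusted[item] = (r + 1) * n_r[r + 1] / n_r[r]
        else:
            adjusted[item] = r              # fall back to the raw count
    p_unseen = n_r.get(1, 0) / total        # mass reserved for zero-frequency events
    probs = {item: r_star / total for item, r_star in adjusted.items()}
    return probs, p_unseen

counts = Counter("the quick brown fox jumps over the lazy dog the end".split())
probs, p0 = good_turing(counts)
print(p0, probs["the"])
```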
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
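A minimal sketch of the maximum-likelihood variance estimation being discussed, reduced to its simplest form: a model with an input-dependent mean and an input-dependent log-variance trained by gradient descent on the Gaussian negative log-likelihood. A Mixture Density Network replaces these linear maps with an MLP and a mixture of Gaussians; the synthetic heteroscedastic data and step sizes below are assumptions.

```python
import numpy as np

# A minimal sketch of maximum-likelihood variance estimation: linear maps for
# the mean and for the log-variance, trained by full-batch gradient descent on
# the Gaussian negative log-likelihood 0.5 * (s + r**2 * exp(-s)), r = y - mu.

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=1000)
sigma_true = 0.1 + 0.3 * (x + 2)            # noise level grows with x
y = np.sin(x) + sigma_true * rng.normal(size=x.shape)

wm, bm, ws, bs = 0.0, 0.0, 0.0, 0.0          # mean and log-variance parameters
lr = 0.01
for _ in range(5000):
    mu = wm * x + bm
    s = ws * x + bs                          # s = log sigma^2
    inv_var = np.exp(-s)
    r = y - mu
    dmu = -(r * inv_var)                     # dNLL/dmu
    ds = 0.5 * (1.0 - r ** 2 * inv_var)      # dNLL/ds
    wm -= lr * np.mean(dmu * x); bm -= lr * np.mean(dmu)
    ws -= lr * np.mean(ds * x);  bs -= lr * np.mean(ds)

print("predicted sigma at x = -2, 0, 2:",
      np.exp(0.5 * (ws * np.array([-2.0, 0.0, 2.0]) + bs)))
```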
Abstract:
Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models to identify the lag (or delay) between different variables for such data. Adopting an information-theoretic approach, we develop a procedure for training HMMs to maximise the mutual information (MMI) between delayed time series. The method is used to model the oil drilling process. We show that cross-correlation gives no information and that the MMI approach outperforms maximum likelihood.
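The contrast between correlation and mutual information can be shown without the HMM machinery: the sketch below estimates the lag between two series by maximising a histogram-based mutual information over candidate delays, on a synthetic nonlinear coupling where cross-correlation is blind. The coupling y[t] = x[t-15]**2, the bin count, and the lag range are assumptions, not the oil-drilling data or the MMI training procedure of the abstract.

```python
import numpy as np

# A minimal sketch (not the HMM-based MMI training described above): pick the
# lag that maximises a histogram-based mutual information estimate, and compare
# with cross-correlation on a nonlinear, even coupling where correlation fails.

def mutual_information(a, b, bins=16):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
x = rng.normal(size=3000)
true_lag = 15
y = np.roll(x, true_lag) ** 2 + 0.1 * rng.normal(size=x.shape)   # nonlinear coupling

lags = np.arange(0, 40)
mi = [mutual_information(x[:-lag or None], y[lag:]) for lag in lags]
xc = [abs(np.corrcoef(x[:-lag or None], y[lag:])[0, 1]) for lag in lags]
print("MI argmax lag:", lags[int(np.argmax(mi))])                  # recovers 15
print("cross-correlation argmax lag:", lags[int(np.argmax(xc))])   # typically spurious
```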