17 results for kernel regression
in Bulgarian Digital Mathematics Library at IMI-BAS
Abstract:
A General Regression Neuro-Fuzzy Network, which combines the properties of the conventional General Regression Neural Network and the Adaptive Network-based Fuzzy Inference System, is proposed in this work. This network belongs to the so-called "memory-based networks" and is adjusted by a one-pass learning algorithm.
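At prediction time the General Regression Neural Network component named here is essentially Nadaraya-Watson kernel regression over the stored training pairs. The following is a minimal sketch of that memory-based, one-pass idea, illustrative code only and not the authors' network; the Gaussian kernel and the single smoothing parameter sigma are assumptions made for the example.

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # Memory-based prediction: every training pair is stored as-is
    # ("one-pass learning"); a query is answered by a kernel-weighted
    # average of the stored targets (Nadaraya-Watson form).
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to stored patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights (assumed kernel)
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))
    return np.array(preds)

# Toy usage: recover a noisy sine curve at one query point.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
print(grnn_predict(X, y, [[np.pi / 2]]))   # should be close to 1.0

In the paper's neuro-fuzzy construction the kernel weights would be replaced or shaped by fuzzy membership functions; the memory-based, one-pass character of the prediction step is what the sketch tries to convey.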
Abstract:
Let H be a real Hilbert space and T be a maximal monotone operator on H. A well-known algorithm, developed by R. T. Rockafellar [16], for solving the problem (P) "find x ∈ H such that 0 ∈ Tx" is the proximal point algorithm. Several generalizations have been considered by various authors: introduction of a perturbation, introduction of a variable metric in the perturbed algorithm, introduction of a pseudo-metric in place of the classical regularization, etc. We summarize some of these extensions by simultaneously taking into account a pseudo-metric as regularization and a perturbation in an inexact version of the algorithm.
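For orientation, Rockafellar's exact proximal point algorithm is the iteration written in the first line below; the second line is a standard generic shape for the variable-metric (or pseudo-metric), perturbed, inexact variants the abstract alludes to. The operator M_k and the error term e_k are generic placeholders and not necessarily the paper's exact formulation.

\begin{align*}
  x_{k+1} &= (I + c_k T)^{-1} x_k
    \quad\Longleftrightarrow\quad
    0 \in c_k\,T(x_{k+1}) + (x_{k+1} - x_k), \qquad c_k > 0, \\
  0 &\in c_k\,T(x_{k+1}) + M_k\,(x_{k+1} - x_k) + e_k,
\end{align*}

where M_k is a symmetric positive-definite operator playing the role of the (pseudo-)metric and e_k is the admissible error of the inexact version.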
Abstract:
The task of approximation and forecasting for a function represented by empirical data was investigated. A certain class of functions, the so-called RFT-transformers, was proposed as a forecasting tool. The Least Squares Method and superposition are the principal means for generating these functions. In addition, special classes of beam dynamics with delay were introduced and investigated to obtain classical results regarding gradients. These results were applied to optimize the RFT-transformers. The effectiveness of the forecast was demonstrated on empirical data from the Forex market.
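The RFT-transformers themselves are specific to the paper, but the generic step the abstract names, generating the forecasting function as a least-squares superposition of simpler components and extrapolating it, can be sketched as follows. The basis of a trend plus a few harmonics is purely an illustrative assumption, not the paper's construction.

import numpy as np

def design(t, n_harmonics=3):
    # Superposed components: constant, linear trend, and a few harmonics.
    cols = [np.ones_like(t), t]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(k * t))
        cols.append(np.cos(k * t))
    return np.column_stack(cols)

def fit_and_forecast(t, y, t_future):
    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)   # Least Squares Method
    return design(t_future) @ coef

# Toy usage on synthetic data standing in for an empirical series.
t = np.linspace(0.0, 10.0, 200)
y = 0.3 * t + np.sin(2.0 * t) + 0.1 * np.random.default_rng(1).standard_normal(200)
print(fit_and_forecast(t, y, np.array([10.1, 10.2])))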
Abstract:
A Quantified Autoepistemic Logic is axiomatized in a monotonic Modal Quantificational Logic whose modal laws are slightly stronger than S5. This Quantified Autoepistemic Logic obeys all the laws of First Order Logic, and its L predicate obeys the laws of S5 Modal Logic in every fixed-point. It is proven that this Logic has a kernel not containing L such that L holds for a sentence if and only if that sentence is in the kernel. This result is important because it shows that L is superfluous, thereby allowing the original equivalence to be simplified by eliminating L from it. It is also shown that the Kernel of Quantified Autoepistemic Logic is a generalization of Quantified Reflective Logic, which coincides with it in the propositional case.
Abstract:
Mathematics Subject Classification: 44A40, 45B05
Abstract:
Mathematics Subject Classification: Primary 30C40
Abstract:
2002 Mathematics Subject Classification: 62J05, 62G35.
Abstract:
2002 Mathematics Subject Classification: 62M20, 62-07, 62J05, 62P20.
Abstract:
2000 Mathematics Subject Classification: 62J12, 62K15, 91B42, 62H99.
Abstract:
2000 Mathematics Subject Classification: 62J12, 62P10.
Abstract:
2000 Mathematics Subject Classification: 62F10, 62J05, 62P30
Abstract:
Analysis of risk measures associated with price-series movements and their prediction is of strategic importance in the financial markets as well as to policy makers, in particular for short- and long-term planning when setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument to measure risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analysing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following normal or approximately normal distributions, being free of large outliers and satisfying the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian situations, especially when the errors follow fat-tailed distributions while still possessing a finite variance. This is the situation in risk analysis, which involves analysing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1, L2 and L∞-norm based linear regression models.
ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
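As a rough illustration of the comparison described above (not the authors' code), the L1, L2 and L∞-norm fits of a simple linear model and a historical-simulation VaR estimate can be sketched as follows. The synthetic fat-tailed returns, the Nelder-Mead solver and the 95% level are all assumptions made for the example; in practice the L1 and L∞ criteria are usually solved via their linear-programming formulations.

import numpy as np
from scipy.optimize import minimize

# Synthetic fat-tailed "returns" with a linear trend (assumption for the example).
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 200)
returns = 0.5 - 1.2 * x + 0.1 * rng.standard_t(3, size=200)

def lp_fit(p):
    # Fit returns ~ a + b*x by minimising the Lp-norm of the residuals.
    # A derivative-free method copes with the non-smooth L1 / L-infinity
    # objectives in this two-parameter toy problem.
    def loss(beta):
        resid = returns - (beta[0] + beta[1] * x)
        return np.max(np.abs(resid)) if np.isinf(p) else np.sum(np.abs(resid) ** p)
    return minimize(loss, x0=np.zeros(2), method="Nelder-Mead").x

for p in (1, 2, np.inf):
    print("L%s coefficients:" % p, lp_fit(p))

# Historical-simulation VaR: a lower-tail quantile of the return distribution.
print("VaR at 95%:", -np.quantile(returns, 0.05))

Comparing the three coefficient vectors on fat-tailed data shows the kind of divergence between the L2 (LSE) fit and the L1 / L∞ fits that the abstract discusses.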
Abstract:
2000 Mathematics Subject Classification: Primary 47B47, 47B10; Secondary 47A30.
Abstract:
2010 Mathematics Subject Classification: 68T50,62H30,62J05.
Abstract:
2010 Mathematics Subject Classification: 62P10.