866 results for Estimating functions
Abstract:
Department of Statistics, Cochin University of Science and Technology
Abstract:
This paper describes JERIM-320, a new 320-bit hash function for ensuring message integrity, and compares it with popular hash functions of similar design. JERIM-320 and FORK-256 operate on four parallel lines of message processing, while RIPEMD-320 operates on two parallel lines. Popular hash functions such as MD5 and SHA-1 use serial successive iteration in their compression functions and are hence less secure. The parallel branches help JERIM-320 achieve a higher level of security through multiple iterations and processing of the message blocks. The focus of this work is to demonstrate the ability of JERIM-320 to ensure the integrity of messages to a higher degree, suiting fast-growing Internet applications.
Abstract:
The Bieberbach conjecture about the coefficients of univalent functions of the unit disk was formulated by Ludwig Bieberbach in 1916 [Bieberbach1916]. The conjecture states that the coefficients of univalent functions are majorized by those of the Koebe function, which maps the unit disk onto a radially slit plane. The Bieberbach conjecture was quite a difficult problem, and it was surprisingly proved by Louis de Branges in 1984 [deBranges1985], at a time when some experts were rather trying to disprove it. It turned out that an inequality of Askey and Gasper [AskeyGasper1976] about certain hypergeometric functions played a crucial role in de Branges' proof. In this article I describe the historical development of the conjecture and the main ideas that led to the proof. The proof of Lenard Weinstein (1991) [Weinstein1991] follows, and it is shown how the two proofs are interrelated. Both proofs depend on polynomial systems that are directly related to the Koebe function. At this point algorithms of computer algebra come into play, and computer demonstrations are given that show how important parts of the proofs can be automated.
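The extremal role of the Koebe function can be made concrete: its Taylor coefficients are exactly a_n = n, the bounding values in the conjecture |a_n| <= n. A minimal sketch (plain power-series arithmetic, not the computer algebra demonstrations of the article) that recovers these coefficients:

```python
def koebe_coefficients(N):
    """Taylor coefficients a_0..a_{N-1} of the Koebe function k(z) = z/(1-z)^2.

    Since 1/(1-z) = sum_k z^k, squaring the geometric series by convolution
    gives 1/(1-z)^2 = sum_k (k+1) z^k; multiplying by z shifts the index by one.
    """
    geo = [1.0] * N                                            # coefficients of 1/(1-z)
    sq = [sum(geo[i] * geo[k - i] for i in range(k + 1)) for k in range(N)]
    return [0.0] + sq[:N - 1]                                  # multiply by z: shift

# the coefficients of z/(1-z)^2 are 0, 1, 2, 3, 4, 5, ...
```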
Abstract:
Student's t-distribution has found various applications in mathematical statistics. One of the main properties of the t-distribution is that it converges to the normal distribution as the number of samples tends to infinity. In this paper, by using a Cauchy integral we introduce a generalization of the t-distribution function with four free parameters and show that it again converges to the normal distribution. We provide a comprehensive treatment of the mathematical properties of this new distribution. Moreover, since the Fisher F-distribution has a close relationship with the t-distribution, we also introduce a generalization of the F-distribution and prove that it converges to the chi-square distribution as the number of samples tends to infinity. Finally, some particular sub-cases of these distributions are considered.
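The convergence of the classical t-distribution to the normal law (the property the generalization is shown to retain) can be checked numerically from the standard density formulas; log-gamma avoids overflow for large degrees of freedom:

```python
from math import lgamma, exp, log, pi, sqrt

def t_pdf(x, nu):
    """Density of Student's t with nu degrees of freedom."""
    log_c = lgamma((nu + 1.0) / 2.0) - lgamma(nu / 2.0) - 0.5 * log(nu * pi)
    return exp(log_c - (nu + 1.0) / 2.0 * log(1.0 + x * x / nu))

def normal_pdf(x):
    """Standard normal density, the nu -> infinity limit of t_pdf."""
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

# for nu = 1e6 the two densities are numerically indistinguishable
```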
Abstract:
In this dissertation we first present a generalization of the usual Sturm-Liouville problems with symmetric solutions and describe a more comprehensive class. We then introduce some new classes of orthogonal polynomials and special functions that can be derived from this symmetric generalization. As a special consequence of this generalization, we introduce a polynomial system with four free parameters and show that almost all classical symmetric orthogonal polynomials are contained in this system: the Legendre polynomials, the Chebyshev polynomials of the first and second kind, the Gegenbauer polynomials, the generalized Gegenbauer polynomials, the Hermite polynomials, the generalized Hermite polynomials, and two further new finite systems of orthogonal polynomials. All these polynomials can be expressed directly in terms of the newly introduced system. Furthermore, we determine all standard properties of the new system, in particular an explicit representation, a second-order differential equation, a generic orthogonality relation, and a generic three-term recurrence. We also use this extension to generalize the associated Legendre functions, which have many applications in physics and engineering, and we show that this generalization preserves the orthogonality property and the orthogonality interval. In a further chapter of the dissertation we study in detail the standard properties of finite orthogonal polynomial systems arising from the usual Sturm-Liouville theory, and we show that they are orthogonal with respect to the Fisher F-distribution, the inverse gamma distribution, and the generalized t-distribution. In the next part of the dissertation we consider a four-parameter generalization of Student's t-distribution.
We show that this distribution converges to the normal distribution as the sample size tends to infinity. A similar generalization of the Fisher F-distribution converges to the chi-square distribution. Furthermore, in the last part of the dissertation we introduce some new sequences of special functions that have applications in solving the classical potential equation, the heat equation, and the wave equation in spherical coordinates. Finally, we describe two new classes of rational orthogonal hypergeometric functions and, using the Fourier transform and Parseval's identity, show that they form finite orthogonal systems with weight functions of gamma type.
Abstract:
In this paper, we solve the duplication problem P_n(ax) = ∑_{m=0}^{n} C_m(n,a) P_m(x), where {P_n}_{n≥0} belongs to a wide class of polynomials, including the classical orthogonal polynomials (Hermite, Laguerre, Jacobi) as well as the classical discrete orthogonal polynomials (Charlier, Meixner, Krawtchouk) for the specific case a = −1. We give closed-form expressions as well as recurrence relations satisfied by the duplication coefficients.
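For the physicists' Hermite polynomials, the duplication coefficients are classical and can be checked numerically via the known multiplication theorem H_n(ax) = ∑_{i=0}^{⌊n/2⌋} a^{n−2i}(a²−1)^i C(n,2i) (2i)!/i! H_{n−2i}(x) (a standard identity, shown here only as an illustration, not the paper's general closed forms):

```python
from math import comb, factorial

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term recurrence
    H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def hermite_duplicated(n, a, x):
    """H_n(a*x) expanded in the basis H_m(x) via the multiplication theorem."""
    total = 0.0
    for i in range(n // 2 + 1):
        c = (a ** (n - 2 * i) * (a * a - 1.0) ** i * comb(n, 2 * i)
             * factorial(2 * i) / factorial(i))
        total += c * hermite(n - 2 * i, x)
    return total
```

Evaluating both sides at the same point confirms the expansion term by term.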
Abstract:
In a similar manner as in previous papers, which gave explicit algorithms for finding the differential equations satisfied by holonomic functions, in this paper we deal with the space of q-holonomic functions, the solutions of linear q-differential equations with polynomial coefficients. The sum and product of q-holonomic functions, as well as their composition with power functions, are again q-holonomic, and the resulting q-differential equations can be computed algorithmically.
Abstract:
The basic thermodynamic functions, the entropy, free energy, and enthalpy, for element 105 (hahnium) in electronic configurations d^3s^2, d^3sp, and d^4s^1 and for its +5 ionized state (5f^14) have been calculated as a function of temperature. The data are based on the results of the calculations of the corresponding electronic states of element 105 using the multiconfiguration Dirac-Fock method.
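The electronic contribution to such thermodynamic functions follows from the partition function over the calculated electronic levels. A minimal sketch of that step (the level data below are hypothetical placeholders, not the Dirac-Fock results of the paper):

```python
import math

R = 8.314462618      # gas constant, J/(mol*K)
K_B = 1.380649e-23   # Boltzmann constant, J/K

def electronic_entropy(levels, T):
    """Electronic entropy S_el = R*(ln q + <E>/(k_B*T)) from a list of
    (degeneracy, energy in joules) pairs, energies relative to the ground state."""
    q = sum(g * math.exp(-E / (K_B * T)) for g, E in levels)
    e_mean = sum(g * E * math.exp(-E / (K_B * T)) for g, E in levels) / q
    return R * (math.log(q) + e_mean / (K_B * T))

# hypothetical two-level spectrum: doubly degenerate ground state,
# one excited level 0.2 eV above it
levels = [(2, 0.0), (1, 0.2 * 1.602176634e-19)]
```

As a sanity check, a non-degenerate ground state alone gives S_el = 0, and a doubly degenerate one gives the familiar R ln 2.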
Abstract:
Evapotranspiration (ET) is a complex process in the hydrological cycle that influences the quantity of runoff and thus the irrigation water requirements. Numerous methods have been developed to estimate potential evapotranspiration (PET). Unfortunately, most of the reliable PET methods are parameter-rich models and therefore not feasible for application in data-scarce regions. On the other hand, the accuracy and reliability of simple PET models vary widely according to regional climate conditions. The objective of the present study was to evaluate the performance of three temperature-based and three radiation-based simple ET methods in estimating historical ET and projecting future ET at the Muda Irrigation Scheme in Kedah, Malaysia. Performance was measured by comparing those methods with the parameter-intensive Penman-Monteith method. It was found that radiation-based methods performed better than temperature-based methods in estimating ET in the study area. Future ET simulated from projected climate data obtained through statistical downscaling also showed that radiation-based methods project ET values closer to those of the Penman-Monteith method. It is expected that the study will guide the selection of suitable methods for estimating and projecting ET according to the availability of meteorological data.
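As an illustration of the simplest class compared here, the Hargreaves-Samani equation is a widely used temperature-based PET method (a generic sketch; the abstract does not state which three temperature-based methods the study actually used):

```python
from math import sqrt

def hargreaves_samani(t_mean, t_max, t_min, ra):
    """Reference evapotranspiration (mm/day) from the Hargreaves-Samani (1985)
    equation. Temperatures in degrees C; ra is extraterrestrial radiation in
    MJ/m^2/day, converted to its evaporation equivalent by dividing by the
    latent heat of vaporization (~2.45 MJ/kg)."""
    return 0.0023 * (ra / 2.45) * (t_mean + 17.8) * sqrt(t_max - t_min)

# e.g. a warm tropical day: Tmean 28 C, range 24-33 C, Ra ~ 38 MJ/m^2/day
```

Only temperature and latitude-dependent extraterrestrial radiation are needed, which is why such methods suit data-scarce regions.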
Abstract:
This paper describes a trainable system capable of tracking faces and facial features like eyes and nostrils and estimating basic mouth features such as degrees of openness and smile in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Similarly, unlike previous approaches, this system is entirely trained using examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection, and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
Abstract:
We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions, and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
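For a fixed sample, a regularization network with a Gaussian radial basis kernel reduces to solving a regularized linear system for the expansion coefficients. A minimal numpy sketch (kernel width and regularization strength are arbitrary illustrative choices):

```python
import numpy as np

def fit_regularization_network(X, y, lam=1e-6, gamma=1.0):
    """Fit f(x) = sum_i c_i exp(-gamma*||x - x_i||^2) by solving the
    regularized system (K + lam*I) c = y, where K is the kernel Gram matrix."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * d2)
    c = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return K, c

# tiny 1-D example: with small lam the network nearly interpolates the data
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
K, c = fit_regularization_network(X, y, lam=1e-8, gamma=50.0)
```

Larger lam trades data fit for smoothness, which is exactly the smoothness-prior interpretation discussed in the abstract.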
Abstract:
This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. We also show that the single tree classifier acts like an implicit feature selector, thus making the classification performance insensitive to irrelevant attributes. Experimental results demonstrate the excellent performance of the new model both in density estimation and in classification.
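The single-tree building block of such a mixture is the classical Chow-Liu construction: compute pairwise empirical mutual information and take a maximum-weight spanning tree. A small self-contained sketch (Prim's algorithm; the mixture, EM, and priors are omitted):

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete arrays."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_edges(data):
    """Edges of a maximum-weight spanning tree over pairwise MI (Prim's
    algorithm). data: (n_samples, n_vars) array of discrete values."""
    d = data.shape[1]
    w = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            w[i, j] = w[j, i] = mutual_information(data[:, i], data[:, j])
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        i, j = max(((i, j) for i in in_tree for j in range(d) if j not in in_tree),
                   key=lambda e: w[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

On data where one variable copies another, the tree picks up that strongest dependency first, which is how the structure adapts to the dependence relationships.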
Abstract:
We propose a nonparametric method for estimating derivative financial asset pricing formulae using learning networks. To demonstrate feasibility, we first simulate Black-Scholes option prices and show that learning networks can recover the Black-Scholes formula from a two-year training set of daily options prices, and that the resulting network formula can be used successfully to both price and delta-hedge options out-of-sample. For comparison, we estimate models using four popular methods: ordinary least squares, radial basis functions, multilayer perceptrons, and projection pursuit. To illustrate practical relevance, we also apply our approach to S&P 500 futures options data from 1987 to 1991.
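The simulation target here is the closed-form Black-Scholes call price, which the learning networks must recover; a minimal implementation of the standard formulas, including the delta used for hedging:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s, k, t, r, sigma):
    """Price and delta of a European call under Black-Scholes.
    s: spot, k: strike, t: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    price = s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
    delta = norm_cdf(d1)  # hedge ratio dPrice/dSpot
    return price, delta

# at-the-money call, 1 year, r = 0, sigma = 20%: price about 7.97
```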
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
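The Aitchison geometry referred to here can be made concrete through the centered log-ratio (clr) transform, which maps compositions isometrically into ordinary Euclidean space. A small sketch:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform: log of the parts relative to their
    geometric mean. Input parts must be strictly positive."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))
    return np.log(x / g)

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr images."""
    return float(np.linalg.norm(clr(x) - clr(y)))
```

Because compositions carry only relative information, rescaling a composition leaves the clr image, and hence the distance, unchanged.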