990 results for Empirical Functions
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: (1) monotonically increasing functions and (2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
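The adaptive strategy for the monotone case can be illustrated with a minimal sketch (not the authors' implementation; the greedy criterion and function names here are assumptions): for a monotonically increasing function, the worst-case uncertainty over an interval is bounded by the interval width times the observed variation, so an active learner can always query the midpoint of the interval with the largest such bound.

```python
import numpy as np

def active_sample_monotone(f, a, b, n_samples):
    """Greedy adaptive sampling of a monotonically increasing f on [a, b].

    At each step, query the midpoint of the interval whose worst-case
    uncertainty bound (interval width times observed variation) is largest.
    """
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_samples - 2):
        # uncertainty bound of interval i: (x[i+1] - x[i]) * (y[i+1] - y[i])
        gaps = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
                for i in range(len(xs) - 1)]
        i = int(np.argmax(gaps))
        mid = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return np.array(xs), np.array(ys)
```

On a function such as x^3, whose variation concentrates near 1, this scheme places most of its samples where the function changes fastest, which is exactly the advantage over uniform (passive) sampling.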
Abstract:
We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
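As a concrete illustration of the regularization-network view, a one-hidden-layer network of Gaussian radial basis units can be fit by regularized least squares on the kernel matrix. The sketch below is illustrative only; the function name and hyperparameter values are assumptions, not the paper's code.

```python
import numpy as np

def rbf_regularization_network(X, y, sigma=1.0, lam=1e-3):
    """Fit f(x) = sum_i c_i K(x, x_i) with a Gaussian kernel K.

    The coefficients solve the regularized least-squares problem
    (K + lam * I) c = y, i.e. a regularization network with one
    hidden unit per data point.
    """
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    c = np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(Xq):
        d2q = np.sum((Xq[:, None] - X[None, :]) ** 2, axis=-1)
        return np.exp(-d2q / (2 * sigma ** 2)) @ c

    return predict
```

The regularization parameter `lam` encodes the smoothness prior: larger values correspond to stronger smoothness assumptions on the approximating function.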
Abstract:
Impressive claims have been made for the performance of the SNoW algorithm on face detection tasks by Yang et al. [7]. In particular, by looking at both their results and those of Heisele et al. [3], one could infer that the SNoW system performed substantially better than an SVM-based system, even when the SVM used a polynomial kernel and the SNoW system used a particularly simplistic 'primitive' linear representation. We evaluated the two approaches in a controlled experiment, looking directly at performance on a simple, fixed-size test set, isolating 'infrastructure' issues related to detecting faces at various scales in large images. We found that SNoW performed about as well as linear SVMs, and substantially worse than polynomial SVMs.
Abstract:
We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. The algorithm is based upon a Support Vector Machine (SVM) approach to solving inverse operator problems. The algorithm is implemented and tested on simulated data from different distributions and different dimensionalities: Gaussians and Laplacians in $R^2$ and $R^{12}$. A comparison in performance is made with Gaussian Mixture Models (GMMs). Our algorithm does as well or better than the GMMs for the simulations tested and has the added advantage of being automated with respect to parameters.
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA is when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal components analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
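One way the compositional nature of densities can be respected in dimensionality reduction is to discretise each density on a common grid, apply a centred log-ratio (clr) transform, and run ordinary PCA on the result. The sketch below is illustrative only, not the paper's code, and computes the principal components via an SVD:

```python
import numpy as np

def clr_pca(densities, n_components=2):
    """Functional PCA of discretised densities after a clr transform.

    densities: (n_samples, n_bins) array of positive density values
    on a common grid; each row is treated as a composition.
    """
    logd = np.log(densities)
    clr = logd - logd.mean(axis=1, keepdims=True)   # clr transform, row-wise
    clr_centred = clr - clr.mean(axis=0)            # centre across samples
    # principal component scores and loadings from the SVD
    U, s, Vt = np.linalg.svd(clr_centred, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return scores, Vt[:n_components]
```

Because the clr transform is invariant to rescaling of each row, the result does not depend on how the discretised densities are normalised, which is the compositional property the method exploits.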
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomial-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. Thus we propose to use a weighted linear regression approach, where all k-order polynomials are used as predictor variables and weights are proportional to the reference density. Finally, for the case of second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, like their composition.
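For the normal-reference case, the weighted-regression idea can be sketched as follows: expand the log-ratio of a density against the standard normal in probabilists' Hermite polynomials and fit the coefficients by least squares with weights proportional to the reference density. This is an illustrative sketch under those assumptions; the basis here is the raw Hermite basis, not the normalised basis of the Hilbert-space construction.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def hermite_coordinates(log_ratio, grid, degree):
    """Coordinates of a log-density ratio in the probabilists' Hermite basis.

    log_ratio: values of log(p / phi) on `grid`, where phi is the standard
    normal reference density. The coefficients are fit by weighted least
    squares with weights proportional to phi, as a numerically stabler
    alternative to discretized scalar products.
    """
    phi = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
    V = hermevander(grid, degree)       # columns He_0, ..., He_degree
    w = np.sqrt(phi)                    # sqrt-weights for weighted LS
    coef, *_ = np.linalg.lstsq(V * w[:, None], log_ratio * w, rcond=None)
    return coef
```

As a sanity check, for p = N(mu, 1) the log-ratio against the standard normal is exactly mu·x − mu²/2, so the He_1 coordinate recovers the mean shift mu, illustrating the stated link between low-order coordinates and classical moments.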
Abstract:
Competence development is considered a preventive strategy against burnout. In an organizational context, some competences can be linked as precursors or consequences of others. In self-assessments of competence development, students perceive stress tolerance as a priority competence to improve. Moreover, employers and recruitment consultants agree that this is a genuine new challenge for organizations. The main reasons for this result are discussed, and this study considers the importance of competence development from a holistic point of view. In addition, it explores the relationship between stress tolerance and competence development according to Conservation of Resources (COR) theory (Hobfoll 1988, 1989, 1998, 2004), in which resource loss is considered the principal component of the stress process.
Abstract:
Abstract taken from the publication
Abstract:
Exam questions and solutions in LaTeX
Abstract:
Exam questions and solutions in PDF
Abstract:
Exercises and solutions about vector functions and curves.
Abstract:
id 34 additional quiz resource
Abstract:
When companies evaluate the results of their strategic and marketing planning, they often face scepticism about how that planning favoured or harmed the perception of the company and its products. Through the use of a computational simulation tool, this project proposes to reduce LG Electronics' uncertainty about the brand value perceived by the population of Bogotá D.C. for each of its product lines, and about the appropriate degree of investment in marketing, advertising, distribution and service. To that end, consumers are modelled as intelligent agents with the power to make recommendations, based mainly on the experience generated by the product and on the degree of influence of the marketing strategies that affect their purchase, preference and loyalty decisions. Additionally, the return on the company's marketing investments is measured in terms of profit and brand recall.
Abstract:
This research takes as its general framework the Policy of Social and Economic Reintegration of individuals and armed groups in Colombia. Based on a study of the conflict trajectories of a group of 9 ex-combatants, it addresses the relationship between the benefits granted by that policy and the factors that facilitated and motivated entry into, permanence in, and demobilization from the armed groups. A characterization and conceptual interpretation of the so-called conflict trajectories is presented; relationships and differences between the illegal organizations FARC and AUC are established; the perceptions that ex-combatants and professionals of the agency leading the process hold regarding the benefits of the reintegration program are reviewed; and, on that basis, the influence on the success of this policy of the individual characteristics of both the ex-combatants and the illegal armed organizations is argued.