Abstract:
Final report of an educational project that proposes organizing the curriculum of the various ESO areas around the history of women as an integrating axis, developing it through research methods and the staging of a theatrical production. It is carried out at IES Costa del Sol in Torremolinos, Málaga. The objectives are: to raise interest in the study of history in the fourth year of ESO; to bring the teaching-learning process of history closer to the students; to motivate and bring enthusiasm to the work of both the teaching staff and the students through this educational experience; to combat attitudes of disrespect and of failure to understand equality, in a society full of violent or unjust examples against women or simply against those who are different; to value music as an important field of culture and sensitivity; to use the technological means within our reach with the same enthusiasm; to carry out an interdisciplinary experience that gives a globalizing sense to the teaching-learning process; to awaken interest in well-executed research work, in the handling of sources, and in techniques and habits of individual and group work; to raise interest in study and in individual and collective self-esteem; and to educate for social integration, respect, tolerance and cultural diversity. The process consists of several phases: work with an extensive bibliography; awareness days; fieldwork; research projects on female figures; intercultural days; creation of a web page; and theatre.
Abstract:
This is the third of the literary projects carried out by IES Poeta Julián Andújar (Santomera, Murcia) and IES Manuel Tárraga Escribano (San Pedro del Pinatar, Murcia), coordinated by teachers from the Spanish Language and Literature departments. The projects have covered the narrative, descriptive, epistolary or journalistic, and dramatic literary genres, on this occasion through the work of Lorca.
Abstract:
Third installment of the notebooks for educating in the third millennium, devoted to the world of education and values through three thematic axes: the school and the demise of Christian values, educational experiences of schools of the Fundación Educación Católica, and Vincentian documents (San Vicente de Paúl) devoted to the figure of Federico Ozanam, a professor dedicated to university teaching. The topics addressed in the contributions that make up the volume are: patristics, the school today, postmodernity, school and values, reflections on education, television and the family, the person of Jesus, an integrated project in vocational training cycles, documents for working on values, values education in the Catholic school, educational research and in-service teacher training, the mystery of man, times of change, an event for peace, consumption, the vegetable garden, family involvement, and Christian values and health.
Abstract:
A workbook aimed at Primary Education pupils on the occasion of their visit to the El Valle hostel. The workbook is used as a field notebook. It is divided into nine activity blocks, which include workshops and excursions. The workbook includes an appendix with the location of, and commentary on, natural areas of the region. Its objectives include: discovering the landscape; learning about the flora and fauna; the history and the influence of people on these places; and the conservation of the environment.
Abstract:
The aim is to assess the level of risk prevention detected in Primary Education schools in the Región de Murcia in the actual delivery of classes at the Primary Education stage. The work is structured in two parts, a theoretical framework and an empirical study, and closes with the corresponding chapters of conclusions, reviewed bibliography and annexes (questionnaires for students and teachers). The first part opens with a compilation of legislation on risk prevention. After some initial considerations on the need for an initial risk assessment in schools and on the importance of the Prevention Coordinator as promoter of School Self-Protection Plans, a further section addresses accessibility and the preventive measures of the facilities used by students with specific educational support needs or by teachers with disabilities.
Abstract:
A document presented fundamentally as an instrument to facilitate a broad assessment of pupils with special educational needs and of their context (school and family). Its use is aimed especially at Infant and Primary Education teachers, as well as at support teachers and specialists involved in provision at these stages. It consists of thirteen information-and-reflection guides: learning style, school context, leisure time, autonomy habits, etc. They can be used in whole or in part, depending on the information to be obtained.
Abstract:
Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
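The algorithms themselves are not reproduced here, but the core mechanism, using EM to integrate over missing features via expected sufficient statistics, can be illustrated on a stripped-down case. The sketch below is a toy of our own, not the paper's mixture algorithm: it fits a single multivariate Gaussian to data with missing entries, whereas the paper combines this E-step reasoning with mixture-component responsibilities. Function names and data are hypothetical.

```python
# A minimal sketch: EM for a single multivariate Gaussian with missing features
# (the paper's framework uses mixtures; this isolates the missing-data part).
import numpy as np

def em_gaussian_missing(X, n_iter=50):
    """X: (n, d) array with np.nan marking missing features."""
    n, d = X.shape
    obs = ~np.isnan(X)
    mu = np.nanmean(X, axis=0)            # initialize from observed values
    Sigma = np.eye(d)
    for _ in range(n_iter):
        X_fill = np.zeros((n, d))
        C = np.zeros((d, d))              # accumulates E[x x^T]
        for i in range(n):
            o, m = obs[i], ~obs[i]
            x = X[i].copy()
            cond_cov = np.zeros((d, d))
            if m.any():
                # E-step: conditional mean/covariance of missing given observed.
                S_oo_inv = np.linalg.inv(Sigma[np.ix_(o, o)])
                S_mo = Sigma[np.ix_(m, o)]
                x[m] = mu[m] + S_mo @ S_oo_inv @ (X[i, o] - mu[o])
                cond_cov[np.ix_(m, m)] = (Sigma[np.ix_(m, m)]
                                          - S_mo @ S_oo_inv @ S_mo.T)
            X_fill[i] = x
            C += np.outer(x, x) + cond_cov
        # M-step: update parameters from expected sufficient statistics.
        mu = X_fill.mean(axis=0)
        Sigma = C / n - np.outer(mu, mu)
    return mu, Sigma

# Usage: recover the parameters of a toy Gaussian with ~30% of entries missing.
rng = np.random.default_rng(0)
true_mu = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(3, 3)); true_Sigma = A @ A.T + np.eye(3)
X = rng.multivariate_normal(true_mu, true_Sigma, size=500)
X[rng.random(X.shape) < 0.3] = np.nan
mu_hat, Sigma_hat = em_gaussian_missing(X)
print(mu_hat)
```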
Abstract:
Global temperature variations between 1861 and 1984 are forecast using regularization networks, multilayer perceptrons and linear autoregression. The regularization network, optimized by stochastic gradient descent associated with colored noise, gives the best forecasts. For all the models, prediction errors noticeably increase after 1965. These results are consistent with the hypothesis that the climate dynamics is characterized by low-dimensional chaos and that it may have changed at some point after 1965, which is also consistent with the recent idea of climate change.
Abstract:
We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
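As a rough illustration of what a regularization network computes, the sketch below implements the one-hidden-layer Gaussian case: a function f(x) = Σ_i c_i K(x, x_i) whose coefficients solve the regularized linear system (K + λI)c = y. This is a generic kernel-ridge sketch under our own assumptions, not code from the paper; names and parameter values are illustrative.

```python
# A minimal sketch: a regularization network with a Gaussian (RBF) kernel.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_regularization_network(X, y, lam=1e-2, sigma=1.0):
    # Coefficients c solve (K + lam*I) c = y; f(x) = sum_i c_i K(x, x_i).
    K = rbf_kernel(X, X, sigma)
    c = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ c

# Usage on a toy 1-D regression problem.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
f = fit_regularization_network(X, y, lam=1e-2, sigma=0.5)
print(f(np.array([[0.0], [1.5]])))
```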
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines (Vapnik, 1982) technique, and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which are about regression, and the different problem of pattern classification.
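A minimal sketch of the main observation, under simplifying assumptions of our own (a toy 1-D signal class, scikit-learn's SVR as the sparsifying regression, names and sizes hypothetical): the empirical correlation of the signal class serves as a precomputed kernel over sample positions, and the support vectors pick out a sparse set of locations from which the signal is reconstructed.

```python
# A minimal sketch: SVR with the class correlation function as a precomputed kernel.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 64)
def sample_signal():
    # A toy "class" of 1-D signals: random mixtures of two fixed bumps.
    a, b = rng.normal(size=2)
    return a * np.exp(-(grid - 0.3) ** 2 / 0.01) + b * np.exp(-(grid - 0.7) ** 2 / 0.01)

train = np.stack([sample_signal() for _ in range(500)])
R = train.T @ train / len(train)           # empirical correlation kernel R(x, x')

t = sample_signal()                         # a new signal to reconstruct
obs = rng.choice(len(grid), size=16, replace=False)

svr = SVR(kernel="precomputed", C=10.0, epsilon=0.01)
svr.fit(R[np.ix_(obs, obs)], t[obs])        # kernel between observed positions
t_hat = svr.predict(R[:, obs])              # kernel between all and observed positions
print("relative reconstruction error:", np.linalg.norm(t_hat - t) / np.linalg.norm(t))
```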
Abstract:
This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric that is used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_ε norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account high-order structure in images.
Abstract:
Support Vector Machine Regression (SVMR) is a regression technique recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error) but by a different loss function, Vapnik's ε-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise in which the variance and mean of the Gaussian are random variables. The probability distributions of the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function but to a much broader class of loss functions.
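The loss function at issue is simple to state; the snippet below (ours, not the authors') just evaluates Vapnik's ε-insensitive loss next to the quadratic loss, to make the comparison in the abstract concrete: residuals smaller than ε cost nothing, larger ones are penalized linearly rather than quadratically.

```python
# A minimal sketch of the two loss functions compared in the abstract.
import numpy as np

def eps_insensitive(residual, eps=0.1):
    # Vapnik's epsilon-insensitive loss: max(|r| - eps, 0).
    return np.maximum(np.abs(residual) - eps, 0.0)

def quadratic(residual):
    return residual ** 2

r = np.linspace(-1, 1, 5)
print(eps_insensitive(r, eps=0.2))
print(quadratic(r))
```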
Abstract:
In the first part of this paper we show a similarity between the Structural Risk Minimization (SRM) principle (Vapnik, 1982) and the idea of Sparse Approximation, as defined in Chen, Donoho and Saunders (1995) and Olshausen and Field (1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
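For readers who want to see the Sparse Approximation side concretely, the sketch below sets up a small Basis Pursuit De-Noising-style problem: a signal with a sparse representation in an overcomplete dictionary is recovered by an L1-penalized least-squares fit. It uses scikit-learn's Lasso, whose objective matches BPDN up to a rescaling of the penalty; the dictionary, sizes and penalty value are illustrative assumptions, not the paper's experiments.

```python
# A minimal sketch: sparse recovery over an overcomplete dictionary via an
# L1-penalized least-squares fit (the convex program behind BPDN).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 100, 300                       # overcomplete dictionary: p >> n
D = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[rng.choice(p, 5, replace=False)] = rng.normal(size=5)
y = D @ w_true + 0.01 * rng.normal(size=n)

model = Lasso(alpha=0.01, max_iter=10000).fit(D, y)
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```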
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints, in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Training problems arising in some real applications with large data sets are therefore impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
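The decomposition idea can be caricatured in a few lines. The sketch below is a simplified chunking loop of our own, not the authors' algorithm or their Reduced Gradient sub-problem solver: an off-the-shelf SVM is trained on a small working set, points of the full data set that violate the margin are added, and the loop repeats. The dataset, sizes and parameters are illustrative.

```python
# A minimal sketch: chunking-style decomposition, with scikit-learn's SVC
# playing the role of the QP sub-problem solver.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
work = rng.choice(len(X), size=200, replace=False)

for it in range(10):
    clf = SVC(kernel="rbf", C=1.0).fit(X[work], y[work])
    # Margin violations on the full set: y_i * f(x_i) < 1 (labels mapped to +/-1).
    margins = clf.decision_function(X) * (2 * y - 1)
    violators = np.where(margins < 1)[0]
    new = np.setdiff1d(violators, work)
    if len(new) == 0:
        break
    work = np.union1d(work, new[:200])    # grow the working set gradually

print("final working set size:", len(work),
      "support vectors:", int(clf.n_support_.sum()))
```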
Abstract:
In this paper we consider the problem of approximating a function belonging to some function space Φ by a linear combination of n translates of a given function G. Using a lemma by Jones (1990) and Barron (1991), we show that it is possible to define function spaces and functions G for which the rate of convergence to zero of the error is O(1/n) in any number of dimensions. The apparent avoidance of the "curse of dimensionality" is due to the fact that these function spaces are more and more constrained as the dimension increases. Examples include spaces of the Sobolev type, in which the number of weak derivatives is required to be larger than the number of dimensions. We give results both for approximation in the L2 norm and in the L∞ norm. The interesting feature of these results is that, thanks to the constructive nature of Jones' and Barron's lemma, an iterative procedure is defined that can achieve this rate.
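The constructive, iterative procedure mentioned at the end can be illustrated in a finite-dimensional toy (an assumption of ours, not the paper's function-space setting): at step n the approximation is updated as f_n = (1 - 1/n) f_{n-1} + (1/n) g_n, with g_n the dictionary element that most reduces the residual. The dictionary and target below are synthetic, and no rate claim is made for this toy.

```python
# A minimal sketch of the incremental greedy scheme behind Jones'/Barron's lemma.
import numpy as np

rng = np.random.default_rng(4)
d, m = 200, 1000
G = rng.normal(size=(m, d))
G /= np.linalg.norm(G, axis=1, keepdims=True)     # dictionary of unit-norm atoms
w = rng.random(m); w /= w.sum()
f = w @ G                                          # target in the convex hull of G

f_n = np.zeros(d)
for n in range(1, 51):
    alpha = 1.0 / n
    # Pick the atom that minimizes || f - ((1 - alpha) f_n + alpha g) ||.
    cand = (1 - alpha) * f_n + alpha * G
    g_idx = np.argmin(np.linalg.norm(f - cand, axis=1))
    f_n = cand[g_idx]
    if n % 10 == 0:
        print(n, np.linalg.norm(f - f_n))
```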