973 results for Gauss's Theorema Egregium


Relevância: 10.00%

Resumo:

This work studies elliptic curves viewed as plane algebraic curves, more precisely as smooth cubics in the complex projective plane. After the first part introduces the notions of compact orientable surfaces and algebraic curves, a preliminary classification is given via the classification theorem for compact surfaces, based on the genus of the surface and of the curve, respectively. From there follows the definition of elliptic curves and a more detailed study of their main properties, such as the possibility of defining them by an affine equation known as the Weierstrass equation, and their intrinsic abelian-group structure. A further classification of smooth cubics is then given, entirely different from the previous one, based instead on the modulus of the cubic, an invariant under projective transformations. Finally, a computational aspect of elliptic curves is considered, namely their application to cryptography. Thanks to the structure they assume over finite fields, under suitable hypotheses, public-key cryptosystems based on the discrete logarithm problem defined on elliptic curves allow, at the same level of security as classical cryptosystems, the use of shorter and therefore computationally cheaper keys. The definitions of the classical and elliptic-curve discrete logarithm problems are given, together with some examples of classical cryptographic algorithms defined on the latter.
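The abelian group structure mentioned above can be made concrete with a minimal sketch of the chord-and-tangent group law on a Weierstrass curve over a prime field. The curve and prime below are toy values chosen for illustration, not cryptographic parameters:

```python
# Minimal sketch: group law on the elliptic curve y^2 = x^3 + ax + b
# over a prime field F_p. Points are (x, y) tuples; None is the identity O.
# Toy parameters only; real ECC uses much larger, standardized curves.

P_MOD = 97          # small prime (toy value)
A, B = 2, 3         # curve y^2 = x^3 + 2x + 3 over F_97 (nonsingular)

def inv(n):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(n, P_MOD - 2, P_MOD)

def add(P, Q):
    """Add two points with the chord-and-tangent rule."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                      # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD    # tangent slope
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD           # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def mul(k, P):
    """Scalar multiplication k*P by double-and-add; inverting this map
    is the elliptic-curve discrete logarithm problem."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R
```

For example, `(0, 10)` lies on this curve, and `mul(5, P)` agrees with `add(mul(2, P), mul(3, P))`, as the group axioms require.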

Relevância: 10.00%

Resumo:

Abstract: Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate on photogrammetric concepts, namely UAV-Photogrammetry Systems (UAV-PS). Such systems are used in applications where both geospatial and visual information of the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated based on their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution and high-accuracy information about the environment (e.g. 3D modeling with less than one centimeter of accuracy and resolution). In other applications, lower levels of accuracy might be sufficient (e.g. wildlife management needing a few decimeters of resolution). Even in those applications, however, the specific characteristics of UAV-PSs should be carefully considered during both system development and application in order to yield satisfactory results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, with the objective of determining the challenges that remote-sensing applications of UAV systems currently face. This review also made it possible to recognize the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the first part of this thesis focuses on the methodological and experimental aspects of implementing a UAV-PS.
The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling in terms of scale changes and terrain relief variations, as well as structural and textural diversity. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justified our efforts to push the developed UAV-PS to its maximum capacity. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera, and an inertial navigation system. The software of the system included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction. The proposed solutions are alternatives to traditional aerial photogrammetry techniques, adapted to the specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method was its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques.
Secondly, a block bundle adjustment (BBA) strategy was proposed based on integrating the intrinsic camera calibration parameters as pseudo-observations into the Gauss-Helmert model. The principal advantage of this strategy was controlling the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allowed its application to data sets containing a large number of tie points. Finally, the concept of intrinsic curves was revisited for dense stereo matching. The proposed technique achieves a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space, as well as by internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
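The thesis's sparse-matching estimator is far more elaborate than anything shown here, but the generic hypothesize-and-verify idea that lets such estimators tolerate large outlier fractions can be sketched with a minimal RANSAC loop for line fitting (an illustration only, not the method described above):

```python
# Illustrative only: a minimal RANSAC loop for robust line fitting,
# showing the hypothesize-and-verify scheme that makes robust estimators
# (such as epipolar-geometry estimators) tolerant to many outliers.
import random

def ransac_line(points, n_iters=200, tol=0.1, seed=0):
    """Fit y = m*x + c to points, ignoring outliers.

    Returns (m, c, inliers)."""
    rng = random.Random(seed)
    best = (0.0, 0.0, [])
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                  # vertical pair, skip
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Verify the hypothesis: count points consistent with it
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best[2]):
            best = (m, c, inliers)
    return best

# Points on y = 2x + 1 plus three gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(x, 50 - 7 * x) for x in range(3)]
m, c, inl = ransac_line(pts)
```

Estimating a fundamental matrix replaces the two-point line hypothesis with a minimal point-correspondence sample, but the consensus-counting loop is the same.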

Relevância: 10.00%

Resumo:

This thesis addresses Batch Reinforcement Learning methods in robotics. This sub-class of Reinforcement Learning has shown promising results and has been the focus of recent research. Three contributions are proposed that aim to extend the state-of-the-art methods, allowing for the faster and more stable learning process required for learning in robotics. The Q-learning update rule is widely applied, since it allows learning without a model of the environment. However, this update rule is transition-based and does not take advantage of the underlying episodic structure of the collected batch of interactions. The Q-Batch update rule is proposed in this thesis to process experiences along the trajectories collected in the interaction phase. This allows a faster propagation of obtained rewards and penalties, resulting in faster and more robust learning. Non-parametric function approximators, such as Gaussian Processes, are also explored. This type of approximator allows prior knowledge about the latent function to be encoded in the form of kernels, providing a higher level of flexibility and accuracy. The application of Gaussian Processes in Batch Reinforcement Learning yielded higher performance in learning tasks than other function approximators used in the literature. Lastly, in order to extract more information from the experiences collected by the agent, model-learning techniques are incorporated to learn the system dynamics. In this way, it is possible to augment the set of collected experiences with experiences generated through planning using the learned models. Experiments were carried out mainly in simulation, with some tests on a physical robotic platform. The results show that the proposed approaches are able to outperform classical Fitted Q Iteration.
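The benefit of exploiting episodic structure can be sketched in a few lines. The code below is not the thesis's exact Q-Batch rule; it simply contrasts the standard transition-based Q-learning update with a backward sweep along a collected episode, which propagates a terminal reward to early states in a single pass:

```python
# Sketch, not the exact Q-Batch rule: standard Q-learning applied in
# transition order versus a backward sweep along the same episode.
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9

def q_update(Q, s, a, r, s2, actions):
    """Standard Q-learning update for a single transition."""
    target = r + GAMMA * max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def backward_sweep(Q, episode, actions):
    """Process an episode's transitions from last to first, so the
    reward obtained at the end reaches early states in one pass."""
    for s, a, r, s2 in reversed(episode):
        q_update(Q, s, a, r, s2, actions)

# A 4-state chain 0 -> 1 -> 2 -> 3 with reward only on reaching state 3.
actions = ["right"]
episode = [(0, "right", 0.0, 1), (1, "right", 0.0, 2), (2, "right", 1.0, 3)]

Q_fwd = defaultdict(float)
for tr in episode:                     # forward, transition order
    q_update(Q_fwd, *tr, actions)

Q_bwd = defaultdict(float)
backward_sweep(Q_bwd, episode, actions)
```

After one pass over the episode, the forward order leaves the initial state's value untouched, while the backward sweep has already propagated the terminal reward back to it.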

Relevância: 10.00%

Resumo:

A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented, to be used in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech in the presence of non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results on a database built from political speeches. The tests were performed by adding artificial and natural noise to the audio signals, and several algorithms are compared. In future work, the results will be extrapolated to the field of adaptive filtering of monophonic signals and to the analysis of speech pathologies.
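A minimal sketch of the general idea can be written with numpy alone: compute the amplitude envelope via the FFT-based analytic signal (the discrete Hilbert transform) and mark frames whose envelope exceeds a dynamic threshold. The paper's threshold is a modified convex combination; the simple min/mean weighting below is our own stand-in, and the frame size and lambda are illustrative values:

```python
# Sketch of a Hilbert-envelope voice activity detector with a dynamic
# threshold. The lambda-weighted min/mean combination is a simplified
# stand-in for the paper's modified convex combination.
import numpy as np

def envelope(x):
    """Amplitude envelope |x + j*H{x}| via the FFT-based analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def vad(x, frame=160, lam=0.7):
    """Flag frames whose mean envelope exceeds the dynamic threshold
    lam*min + (1-lam)*mean of the frame envelopes (a convex combination)."""
    env = envelope(x)
    n_frames = len(x) // frame
    e = np.array([env[i * frame:(i + 1) * frame].mean() for i in range(n_frames)])
    thr = lam * e.min() + (1.0 - lam) * e.mean()
    return e > thr

# Demo signal: low-level noise, then a tone acting as "speech"
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
sig = 0.01 * rng.standard_normal(fs)
sig[fs // 2:] += np.sin(2 * np.pi * 200 * t[fs // 2:])
active = vad(sig)
```

On this toy signal the detector flags only the second half, where the tone is present.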

Relevância: 10.00%

Resumo:

A deterministic model of tuberculosis in Cameroon is designed and analyzed with respect to its transmission dynamics. The model includes lack of access to treatment and weak diagnosis capacity, as well as both frequency- and density-dependent transmission. It is shown that the model is mathematically well-posed and epidemiologically reasonable. Solutions are non-negative and bounded whenever the initial values are non-negative. A sensitivity analysis of the model parameters is performed, and the most sensitive ones are identified by means of a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals with access to medical facilities are seen to have a large impact on the dynamics of the disease. The model predicts that a gradual increase of these parameters could significantly reduce the disease burden on the population within the next 15 years.
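The Gauss-Newton iteration used for such parameter studies can be sketched on a toy problem. The decay model below is a stand-in for illustration, not the tuberculosis model of the paper; the point is the structure of the iteration (residuals, Jacobian, normal equations):

```python
# Minimal Gauss-Newton iteration for nonlinear least squares, fitting the
# toy model y = a*exp(-b*t). Stand-in illustration: the paper applies
# Gauss-Newton to a full tuberculosis transmission model.
import numpy as np

def gauss_newton(t, y, p0, n_iter=50):
    """Minimize sum((y - a*exp(-b*t))^2) over p = (a, b)."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        a, b = p
        e = np.exp(-b * t)
        r = y - a * e                          # residuals
        # Jacobian of the residuals with respect to (a, b)
        J = np.column_stack([-e, a * t * e])
        # Normal equations: (J^T J) delta = -J^T r
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        p += delta
    return p

# Noise-free synthetic data with true parameters a = 2, b = 0.7
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-0.7 * t)
p_hat = gauss_newton(t, y, p0=(1.5, 0.6))
```

The same Jacobian columns, scaled by the residual norm, are what a local sensitivity analysis ranks to find the most influential parameters.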

Relevância: 10.00%

Resumo:

The proposed methodology aims to estimate the sensitivity of the revenues of three credit unions (caisses d'épargne et de crédit) to changes in interest rates. Several studies based on the CAPM analyze the risk-return relationship. Our methodology differs by including an interest-rate factor as an explanatory variable and by segmenting activities within a single credit union. Our approach proceeds as follows: presentation of the conceptual framework, including a brief review of the theoretical and empirical literature and a summary of how the subject has evolved; description of the procedure used, the sample, and the estimation; and analysis and interpretation of the results obtained for the three credit unions. A procedures manual explaining the program developed for the analysis (in GAUSS) is provided in the appendix.
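The two-factor regression behind this kind of sensitivity estimate can be sketched with ordinary least squares in numpy (the original analysis was programmed in GAUSS). The series and coefficients below are synthetic toy data, not the study's data:

```python
# Sketch of the two-factor return regression
#   r_t = alpha + beta_m * r_market_t + beta_i * d_rate_t + eps_t
# estimated by ordinary least squares. All series are synthetic.
import numpy as np

def two_factor_ols(r, r_market, d_rate):
    """Return (alpha, beta_market, beta_rate) from an OLS fit."""
    X = np.column_stack([np.ones_like(r), r_market, d_rate])
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    return coef

# Toy data: returns generated with known coefficients, no noise
rng = np.random.default_rng(1)
r_market = rng.standard_normal(200)
d_rate = rng.standard_normal(200)          # interest-rate changes
r = 0.01 + 1.2 * r_market - 0.5 * d_rate
alpha, beta_m, beta_i = two_factor_ols(r, r_market, d_rate)
```

Here `beta_i` is the interest-rate sensitivity the methodology is after; segmenting activities amounts to running this regression per business segment.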

Relevância: 10.00%

Resumo:

The high performance computing community has traditionally focused solely on reducing execution time, though in recent years the optimization of energy consumption has become a major concern. Reducing energy usage without degrading performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation in many scientific and engineering problems. Its relevance has motivated a substantial amount of work, and consequently high performance solvers are available for a wide variety of hardware platforms. In this work, we aim to develop a high performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal delivers important savings in both time and energy consumption when compared with the state-of-the-art solvers for the platform.
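Gauss-Huard is a variant of Gauss-Jordan elimination whose operation count matches Gaussian elimination. As a hedged stand-in (not the Gauss-Huard ordering, and without the CPU-GPU offloading of the paper), a plain Gauss-Jordan solve shows the underlying diagonalization idea in its simplest form:

```python
# Hedged stand-in: plain Gauss-Jordan elimination with partial pivoting.
# The paper's solvers implement the related Gauss-Huard algorithm, tuned
# for the Jetson TK1; this sketch only shows the diagonalization idea.
import numpy as np

def gauss_jordan_solve(A, b):
    """Solve A x = b by reducing the augmented matrix [A | b] to [I | x]."""
    M = np.column_stack([A.astype(float), b.astype(float)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting
        M[[k, p]] = M[[p, k]]                 # row swap
        M[k] /= M[k, k]                       # unit pivot
        for i in range(n):                    # clear column k above AND below
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, n]

# Toy system with a known solution
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
x_true = np.arange(1.0, 6.0)
x = gauss_jordan_solve(A, A @ x_true)
```

Eliminating above and below the pivot at once is what removes the separate back-substitution pass; Gauss-Huard reorganizes this loop to avoid Gauss-Jordan's extra arithmetic.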

Relevância: 10.00%

Resumo:

In this article we explore a professional-development strategy for primary-school teachers in charge of mathematics classes, based on deepening theoretical concepts and their use. The purpose was to identify elements of teaching practice when in-service teachers reflect on the reproducibility of teaching and learning situations that they designed and applied in different settings. These settings are different groups of students aged 12 to 13 in different schools. The study takes as its theoretical framework the Theory of Didactical Situations, which supports the construct of reproducibility, the Anthropological Theory of the Didactic, and the conceptualization of reflection. It is a case study following four teachers who design teaching and learning situations on the Pythagorean theorem within a professional-development course. Methodologically, the Lesson Study construct was used to carry out the reflection on the lesson designs and their reproducibility. The results reveal that the teachers evolve in their reflection on and discussion of their classroom practice, thereby improving the teaching and learning situations and their classroom management. In addition, it is found that the teachers fix certain elements in order to apply their classes or lessons in different settings and thus obtain similar results.

Relevância: 10.00%

Resumo:

The spreadsheet is a powerful environment for experimentation in the statistics classroom, comparable to the laboratory in the experimental sciences. Among its many applications is providing a means for the experimental verification of theoretical results. To illustrate this, we propose a model to verify Stein's theorem on the optimal estimation of a set of k > 2 means. The paradoxical character of this result makes it an ideal example for this kind of simulation.
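The spreadsheet simulation described above translates directly into a few lines of Python. The sketch below uses a toy setup (unit-variance noise, all true means zero, positive-part James-Stein shrinkage) to show that for k > 2 the shrinkage estimator beats the per-coordinate sample mean in total squared error:

```python
# Spreadsheet-style simulation of Stein's paradox: for k > 2 means,
# James-Stein shrinkage beats the per-coordinate estimate in total
# squared error. Toy setup with x ~ N(theta, I).
import numpy as np

def stein_demo(theta, n_trials=2000, seed=0):
    """Return (mse_mle, mse_js) over repeated draws x ~ N(theta, I)."""
    rng = np.random.default_rng(seed)
    k = len(theta)
    se_mle = se_js = 0.0
    for _ in range(n_trials):
        x = theta + rng.standard_normal(k)
        # Positive-part James-Stein shrinkage factor
        shrink = max(0.0, 1.0 - (k - 2) / np.dot(x, x))
        se_mle += np.sum((x - theta) ** 2)
        se_js += np.sum((shrink * x - theta) ** 2)
    return se_mle / n_trials, se_js / n_trials

mse_mle, mse_js = stein_demo(np.zeros(5))
```

With k = 5 and all true means at zero, the naive estimator's risk is about 5 while the shrinkage estimator's is well below it; the gap narrows as the true means move away from the shrinkage target.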

Relevância: 10.00%

Resumo:

The aim of this work is to explain in a simple way how dimensional analysis is used. The work is divided into three parts. The first part presents the foundations of dimensional analysis, describing how to use the most important theorems and how to solve problems. The second part lists the best-known dimensionless numbers that frequently appear in fluid mechanics and dynamics problems, and explains in which cases each one is important. Finally, the last part examines the main application of dimensional analysis: modeling.
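A minimal example of one of the dimensionless numbers from the second part: the Reynolds number Re = ρvL/μ. The units cancel, so Re comes out the same in any consistent unit system; the 2300 pipe-flow threshold used below is the commonly quoted transition value:

```python
# Minimal dimensional-analysis example: the Reynolds number.
# Dimensions cancel, so Re is dimensionless; 2300 is the commonly
# quoted laminar/turbulent transition value for pipe flow.

def reynolds(rho, v, L, mu):
    """rho [kg/m^3], v [m/s], L [m], mu [Pa*s] -> dimensionless Re."""
    return rho * v * L / mu

# Water in a 5 cm pipe at 1 m/s
Re = reynolds(rho=1000.0, v=1.0, L=0.05, mu=1.0e-3)
regime = "turbulent" if Re > 2300 else "laminar"
```

Matching Re between a scale model and the full-size system is exactly the modeling application treated in the third part.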

Relevância: 10.00%

Resumo:

"Probability Theory" contains definitions and terminology in frequent use in this branch of mathematics; it also presents various solution methods and the essential rules of combinatorics, which often provide a more convenient route to solving problems; in addition, Bayes' theorem and its companion, the law of total probability, are stated. All topics are illustrated with examples and solved problems; at the end there is a series of proposed exercises that the reader should attempt to solve. The "Lecciones de matemáticas" collection, an initiative of the Department of Basic Sciences of the Universidad de Medellín through its research group SUMMA, devotes each issue to a detailed exposition of a mathematical topic, treated in greater depth than in a regular course. The topics include algebra, trigonometry, calculus, statistics and probability, linear algebra, linear and numerical methods, history of mathematics, geometry, pure and applied mathematics, differential equations, and the use of various software packages for teaching mathematics.
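Bayes' theorem and the law of total probability, both mentioned above, combine in a standard worked example. The numbers below (prevalence, sensitivity, specificity) are illustrative values, not taken from the book:

```python
# Worked example of Bayes' theorem; the denominator P(E) comes from the
# law of total probability. Illustrative numbers only.

def bayes(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) = P(H) P(E|H) / P(E)."""
    # Law of total probability: P(E) = P(H)P(E|H) + P(not H)P(E|not H)
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# A 99%-sensitive, 95%-specific test for a condition with 1% prevalence:
posterior = bayes(prior=0.01, p_e_given_h=0.99, p_e_given_not_h=0.05)
```

Despite the accurate test, the posterior is only 1/6: with a rare condition, most positive results come from the large healthy group, which is the kind of counterintuitive outcome these lessons use solved problems to build intuition for.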

Relevância: 10.00%

Resumo:

This work studies the theory of quadrics in projective geometry, applying fundamental concepts, definitions, and theorems that lead to an understanding of the importance of their application in different branches of mathematics and of their graphical representations. Accordingly, the work develops topics aimed at understanding quadrics in projective geometry and their importance. The notion of projection is developed, with important definitions about projection and a description of what happens when the ideal points, or points at infinity, are added and taken as centers of projection, together with the enrichment these new concepts bring. The concepts of homogeneous coordinates are developed, which are fundamental to understanding the ideal points and which ease the algebraic handling of projective space; this study also includes complex points, the representation of the space in different dimensions, changes of coordinate frame, subspaces, hyperplanes, and duality. Among the most important theorems of Euclidean geometry developed here with projective geometry is Desargues' theorem, together with some additional important results. An introduction to projectivities, the cross-ratio, and linear transformations is also given. The richness of quadrics is shown by applying the concepts of projective geometry, together with their different representations. It is worth mentioning that such representations have long benefited humankind, facilitating the understanding of our surroundings, even when we are not aware of the mathematics involved.
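The homogeneous coordinates described above can be illustrated in a few lines: an affine point (x, y) becomes (x, y, 1), triples with last coordinate 0 are the ideal points (points at infinity), and a projective transformation can carry an ideal point to an ordinary one. The specific matrix below is an illustrative choice:

```python
# Small illustration of homogeneous coordinates: (x, y) -> (x, y, 1);
# triples with last coordinate 0 are the ideal points (points at
# infinity). A non-affine projective map can make an ideal point finite.
import numpy as np

def to_affine(p):
    """Dehomogenize (x, y, w) -> (x/w, y/w); w == 0 means ideal point."""
    if p[2] == 0:
        return None              # a point at infinity has no affine image
    return p[:2] / p[2]

# Direction of the line y = x, represented as an ideal point:
ideal = np.array([1.0, 1.0, 0.0])

# A projective transformation (the non-trivial last row makes it
# non-affine, so it can move the line at infinity):
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

image = H @ ideal                # -> (1, 1, 1), an ordinary point
```

This is the algebraic convenience the text refers to: points at infinity need no special cases, since they are just triples with w = 0 transformed by the same matrix.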

Relevância: 10.00%

Resumo:

This article analyzes the distributive characteristics of the Kaldor-Pasinetti process, assuming that the government sector runs persistent deficits that can be financed through different instruments, such as the issuance of bonds and money. Through this approach it is possible to study how government activity affects the distribution of income between capitalists and workers, and thus to obtain generalizations of the Cambridge Theorem in which earlier versions such as those of Steedman (1972), Pasinetti (1989), Dalziel (1991) and Faria (2000) arise as particular cases.

Relevância: 10.00%

Resumo:

Given a 2-manifold triangular mesh \(M \subset {\mathbb {R}}^3\) with border, a parameterization of \(M\) is a FACE or trimmed surface \(F=\{S,L_0,\ldots, L_m\}\). \(F\) is a connected subset or region of a parametric surface \(S\), bounded by a set of LOOPs \(L_0,\ldots ,L_m\) such that each \(L_i \subset S\) is a closed 1-manifold having no intersection with the other \(L_j\) LOOPs. The parametric surface \(S\) is a statistical fit of the mesh \(M\). \(L_0\) is the outermost LOOP bounding \(F\), and \(L_i\) is the LOOP of the i-th hole in \(F\) (if any). The problem of parameterizing triangular meshes is relevant for reverse engineering, tool path planning, feature detection, redesign, etc. State-of-the-art mesh procedures parameterize a rectangular mesh \(M\). To improve on such procedures, we report here the implementation of an algorithm which parameterizes meshes \(M\) presenting holes and concavities. We synthesize a parametric surface \(S \subset {\mathbb {R}}^3\) which approximates a superset of the mesh \(M\). Then, we compute a set of LOOPs trimming \(S\), thereby completing the FACE \(F=\{S,L_0,\ldots ,L_m\}\). Our algorithm gives satisfactory results for \(M\) having low Gaussian curvature (i.e., \(M\) being quasi-developable or developable). This assumption is a reasonable one, since \(M\) is the product of a manifold segmentation preprocessing step. Our algorithm computes: (1) a manifold learning mapping \(\phi : M \rightarrow U \subset {\mathbb {R}}^2\); (2) an inverse mapping \(S: W \subset {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^3\), with \(W\) being a rectangular grid containing and surpassing \(U\). To compute \(\phi\) we test IsoMap, Laplacian Eigenmaps and Hessian local linear embedding (best results with HLLE). For the back mapping (NURBS) \(S\), the crucial step is to find a control polyhedron \(P\), which is an extrapolation of \(M\). We calculate \(P\) by extrapolating radial basis functions that interpolate points inside \(\phi (M)\). We successfully test our implementation with several datasets presenting concavities and holes, including strongly non-developable ones. Ongoing work is being devoted to manifold segmentation, which facilitates mesh parameterization.
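The mapping \(\phi : M \rightarrow U \subset {\mathbb {R}}^2\) above is computed with IsoMap, Laplacian Eigenmaps or HLLE. As a loose, hedged stand-in for those methods, the sketch below flattens a quasi-developable patch by projecting onto its two leading principal directions (plain PCA), which already yields a usable 2D parameterization in the near-planar case:

```python
# Hedged stand-in for the manifold learning map phi: M -> U in R^2.
# The article uses IsoMap / Laplacian Eigenmaps / HLLE; for a
# developable patch even a PCA projection gives a usable flattening.
import numpy as np

def flatten_pca(points):
    """Map 3D points of a near-planar (quasi-developable) patch to 2D
    coordinates along the patch's two leading principal axes."""
    c = points - points.mean(axis=0)
    # Right singular vectors are the principal directions of the patch
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return c @ vt[:2].T

# Toy patch: a tilted planar grid embedded in R^3
u, v = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 2, 10))
pts = np.column_stack([u.ravel(), v.ravel(), 0.3 * u.ravel()])
uv = flatten_pca(pts)
```

For this exactly planar toy patch the flattening is an isometry (pairwise distances are preserved); the nonlinear embeddings tested in the article are what extend the idea to curved, quasi-developable meshes.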

Relevância: 10.00%

Resumo:

René Descartes published his famous Géométrie in 1637, a treatise in which he applies algebra to geometry and develops an original system of symbolic algebra. In the third book of the Géométrie he states, without proof, his celebrated rule of signs. For two centuries the mathematical world sought, without success, a proof both general and satisfactory by the standards of the time. Finally, Carl Friedrich Gauss proved it in full generality in 1828 using algebraic methods. In this article, we present the treatment the rule of signs receives in algebra textbooks and propose an alternative, original justification based on the idea of prediction which, as far as we know, has not been reported in the specialized literature.
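The statement of the rule is computational enough to sketch directly: the number of positive roots of a real polynomial equals the number of sign changes in its coefficient sequence, or falls short of it by an even number. The example polynomial below is an illustrative choice:

```python
# Descartes' rule of signs: the number of positive real roots equals the
# number of sign changes in the coefficient sequence, or is less than it
# by an even number.

def sign_changes(coeffs):
    """Count sign changes in a coefficient list (zero coefficients
    are skipped, as the rule prescribes)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 - 3x^2 - x + 3 = (x - 1)(x - 3)(x + 1)
# Signs + - - + give two changes, and p has exactly two positive roots.
bound = sign_changes([1, -3, -1, 3])
```

Gauss's 1828 contribution, referred to above, was precisely to prove that the count of positive roots can differ from this bound only by an even number, in full generality.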