391 results for Riemannian-manifolds


Relevance: 10.00%

Abstract:

2010 Mathematics Subject Classification: 30C60.

Relevance: 10.00%

Abstract:

2010 Mathematics Subject Classification: 37K40, 35Q15, 35Q51, 37K15.

Relevance: 10.00%

Abstract:

2000 Mathematics Subject Classification: 35B50, 35L15.

Relevance: 10.00%

Abstract:

2000 Mathematics Subject Classification: 49J15, 49J30, 53B50.

Relevance: 10.00%

Abstract:

2000 Mathematics Subject Classification: 53C40, 53C25.

Relevance: 10.00%

Abstract:

2000 Mathematics Subject Classification: 53B05, 53B99.

Relevance: 10.00%

Abstract:

We give a complete description of the four-dimensional Lie algebras g that satisfy Donaldson's tame-compatible question for all almost complex structures J on g. As a consequence, we give examples of (non-unimodular) four-dimensional Lie algebras carrying almost complex structures that are tamed by, but not compatible with, symplectic forms. Note that Donaldson asked his question for compact four-manifolds; in that context the problem is still open, but it is believed that any tamed almost complex structure is in fact compatible with a symplectic form. In this presentation, I will define the basic objects involved and give some insight into the proof. The key to the proof is translating the problem into a linear algebra setting. This is joint work with Dr. Draghici.
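For reference, here are the standard definitions of the two notions, which the abstract uses without stating. Given a 2-form $\omega$ and an almost complex structure $J$ on the same space, $J$ is tamed by $\omega$ if \[ \omega(X, JX) > 0 \quad \text{for all } X \neq 0, \] and $J$ is compatible with $\omega$ if, in addition, \[ \omega(JX, JY) = \omega(X, Y) \quad \text{for all } X, Y, \] i.e., if $g(X, Y) := \omega(X, JY)$ defines a Riemannian metric.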

Relevance: 10.00%

Abstract:

This book constitutes the refereed proceedings of the 14th International Conference on Parallel Problem Solving from Nature, PPSN 2016, held in Edinburgh, UK, in September 2016. The total of 93 revised full papers were carefully reviewed and selected from 224 submissions. The meeting began with four workshops, which offered an ideal opportunity to explore specific topics in intelligent transportation, landscape-aware heuristic search, natural computing in scheduling and timetabling, and advances in multi-modal optimization. PPSN XIV also included sixteen free tutorials: gray-box optimization in theory; theory of evolutionary computation; graph-based and Cartesian genetic programming; theory of parallel evolutionary algorithms; promoting diversity in evolutionary optimization: why and how; evolutionary multi-objective optimization; intelligent systems for smart cities; advances in multi-modal optimization; evolutionary computation in cryptography; evolutionary robotics: a practical guide to experimenting with real hardware; evolutionary algorithms and hyper-heuristics; a bridge between optimization over manifolds and evolutionary computation; implementing evolutionary algorithms in the cloud; the attainment-function approach to performance evaluation in EMO; runtime analysis of evolutionary algorithms: a basic introduction; and meta-model-assisted (evolutionary) optimization. The papers are organized in topical sections on adaptation, self-adaptation and parameter tuning; differential evolution and swarm intelligence; dynamic, uncertain and constrained environments; genetic programming; multi-objective, many-objective and multi-level optimization; parallel algorithms and hardware issues; real-world applications and modeling; theory; and diversity and landscape analysis.

Relevance: 10.00%

Abstract:

For any Legendrian knot in R^3 with the standard contact structure, we show that the existence of an augmentation to any field of the Chekanov-Eliashberg differential graded algebra over Z[t,t^{-1}] is equivalent to the existence of a normal ruling of the front diagram, generalizing results of Fuchs, Ishkhanov, and Sabloff. We also show that any even graded augmentation must send t to -1.

We extend the definition of a normal ruling from J^1(S^1) given by Lavrov and Rutherford to a normal ruling for Legendrian links in #^k(S^1\times S^2). We then show that for Legendrian links in J^1(S^1) and #^k(S^1\times S^2), the existence of an augmentation to any field of the Chekanov-Eliashberg differential graded algebra over Z[t,t^{-1}] is equivalent to the existence of a normal ruling of the front diagram. For Legendrian knots, we also show that any even graded augmentation must send t to -1. We use the correspondence to give nonvanishing results for the symplectic homology of certain Weinstein 4-manifolds.

Relevance: 10.00%

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
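Both error quantities in the paragraph above depend only on the principal angles between the subspaces. As a minimal NumPy sketch (function names are illustrative, not from the dissertation), the angles can be computed from the SVD of the product of orthonormal bases:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians, ascending) between the column spans
    of A (n x p) and B (n x q)."""
    QA, _ = np.linalg.qr(A)                        # orthonormal basis for span(A)
    QB, _ = np.linalg.qr(B)                        # orthonormal basis for span(B)
    cosines = np.linalg.svd(QA.T @ QB, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))  # clip guards against round-off

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))                   # two random 3-dim subspaces of R^50
B = rng.standard_normal((50, 3))
theta = principal_angles(A, B)
print(np.prod(np.sin(theta)))      # governs error when model mismatch is vanishingly small
print(np.sum(np.sin(theta) ** 2))  # governs error when model mismatch is more significant
```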

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine, so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
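A minimal NumPy sketch of the first approach, with a Laplacian smoothness penalty over a k-nearest-neighbor graph standing in for the manifold; the specific penalty and all names are assumptions for illustration, not the dissertation's exact regularizer:

```python
import numpy as np

def knn_adjacency(X, k=5):
    """Symmetric k-nearest-neighbor adjacency matrix for the rows of X,
    a crude proxy for the data manifold's local neighborhoods."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                    # exclude self-neighbors
    W = np.zeros_like(D)
    nearest = np.argsort(D, axis=1)[:, :k]
    W[np.repeat(np.arange(len(X)), k), nearest.ravel()] = 1.0
    return np.maximum(W, W.T)                      # symmetrize

def smoothness_penalty(f, W):
    """0.5 * sum_ij W_ij (f_i - f_j)^2 = f^T L f with L the graph Laplacian;
    small when the decisions f vary smoothly over graph neighborhoods."""
    L = np.diag(W.sum(axis=1)) - W
    return f @ L @ f

# Illustrative use: add lambda * smoothness_penalty(f, W) to the task loss.
X = np.random.default_rng(1).standard_normal((40, 10))
W = knn_adjacency(X, k=5)
f = X @ np.ones(10)                                # stand-in for model outputs
print(smoothness_penalty(f, W))
```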

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. Using more local neighborhoods improves the approximation but increases the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
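A single-scale sketch of the residual statistic described above, using an Oja-style subspace tracker in NumPy; the multiscale tree with node splitting and merging is omitted, and all names and thresholds are illustrative assumptions:

```python
import numpy as np

def track_and_flag(stream, d=2, lr=0.05, thresh=3.0):
    """Track a slowly varying d-dim subspace from streaming rows and flag
    rows whose residual (deviation from the learned model) is anomalously
    large relative to the running median."""
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((stream.shape[1], d)))
    residuals, flags = [], []
    for x in stream:
        coeff = U.T @ x                          # coordinates in current model
        r = np.linalg.norm(x - U @ coeff)        # deviation of this datum
        med = np.median(residuals) if residuals else r
        flags.append(r > thresh * med)           # anomaly statistic
        residuals.append(r)
        U, _ = np.linalg.qr(U + lr * np.outer(x, coeff))  # drift the subspace
    return np.array(residuals), np.array(flags)
```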

Relevance: 10.00%

Abstract:

A Finsler space is said to be geodesically reversible if each oriented geodesic can be reparametrized as a geodesic with the reverse orientation. A reversible Finsler space is geodesically reversible, but the converse need not be true. In this note, building on recent work of LeBrun and Mason, it is shown that a geodesically reversible Finsler metric of constant flag curvature on the 2-sphere is necessarily projectively flat. As a corollary, using a previous result of the author, it is shown that a reversible Finsler metric of constant flag curvature on the 2-sphere is necessarily a Riemannian metric of constant Gauss curvature, thus settling a long-standing problem in Finsler geometry.

Relevance: 10.00%

Abstract:

G2-monopoles are solutions to gauge-theoretical equations on G2-manifolds. If the G2-manifolds under consideration are compact, then any irreducible G2-monopole must have singularities. It is then important to understand which kinds of singularities G2-monopoles can have. We give examples (in the noncompact case) of non-Abelian monopoles with Dirac-type singularities, and examples of monopoles whose singularities are not of that type. We also give an existence result for Abelian monopoles with Dirac-type singularities on compact manifolds. This should be one of the building blocks in a gluing construction aimed at constructing non-Abelian ones.

Relevance: 10.00%

Abstract:

Every closed, oriented, real analytic Riemannian 3-manifold can be isometrically embedded as a special Lagrangian submanifold of a Calabi-Yau 3-fold, even as the real locus of an antiholomorphic, isometric involution. Every closed, oriented, real analytic Riemannian 4-manifold whose bundle of self-dual 2-forms is trivial can be isometrically embedded as a coassociative submanifold in a G_2-manifold, even as the fixed locus of an anti-G_2 involution. These results, when coupled with McLean's analysis of the moduli spaces of such calibrated submanifolds, yield a plentiful supply of examples of compact calibrated submanifolds with nontrivial deformation spaces.

Relevance: 10.00%

Abstract:

Let $M$ be a compact, oriented, even-dimensional Riemannian manifold and let $S$ be a Clifford bundle over $M$ with Dirac operator $D$. Then the Atiyah-Singer index theorem states \[ \text{Ind } D = \int_M \hat{\mathcal{A}}(TM) \wedge \text{ch}(\mathcal{V}), \] where $\mathcal{V} = \text{Hom}_{\mathbb{C}l(TM)}(\slashed{S}, S)$ is the twisting bundle and $\slashed{S}$ denotes the spinor bundle. We prove this statement by means of the heat kernel of the heat semigroup $e^{-tD^2}$. The first key result is the McKean-Singer theorem, which describes the index in terms of the supertrace of the heat kernel. The trace of the heat kernel is obtained from local geometric information; moreover, the asymptotic expansion of the kernel shows that only one term matters in the computation of the index. The Berezin formula tells us that the supertrace is nothing but the coefficient of the Clifford top part, and finally Getzler calculus enables us to evaluate the integral of these top parts in terms of characteristic classes.
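In standard notation (the grading is implicit in the summary above), with $S = S^{+} \oplus S^{-}$ the splitting induced by the chirality operator and $D^{\pm}$ the restrictions of $D$ to $\Gamma(S^{\pm})$, the McKean-Singer theorem reads \[ \text{Ind } D = \text{Str}\left(e^{-tD^2}\right) = \text{Tr}\left(e^{-tD^{-}D^{+}}\right) - \text{Tr}\left(e^{-tD^{+}D^{-}}\right) \quad \text{for every } t > 0, \] so the (topological) index can be read off from the small-$t$ asymptotics of the heat kernel.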