959 results for Orthogonal projections
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper deals with the orthogonal projection (in the Frobenius sense) AN of the identity matrix I onto the matrix subspace AS (A ∈ R^{n×n}, S being an arbitrary subspace of R^{n×n}). Lower and upper bounds on the normalized Frobenius condition number of the matrix AN are given. Furthermore, for every matrix subspace S ⊆ R^{n×n}, a new index bF(A, S), which generalizes the normalized Frobenius condition number of matrix A, is defined and analyzed...
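The projection described above can be illustrated for the simplest subspace choice. A minimal sketch (our own, not the paper's code), assuming S = R^{n×n}, in which case the column-by-column least-squares minimiser is the Moore–Penrose pseudoinverse:

```python
import numpy as np

# Hedged sketch: Frobenius-orthogonal projection AN of the identity I
# onto the matrix subspace A*S, for the illustrative choice S = R^{n x n}.
# Minimising ||A M - I||_F column by column is ordinary least squares,
# solved by M = pinv(A).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank-2 A
M = np.linalg.pinv(A)      # argmin over M of ||A M - I||_F
AN = A @ M                 # projection of I onto A * R^{n x n}

# Optimality check: the residual I - AN is Frobenius-orthogonal
# to every matrix of the form A @ X.
X = rng.standard_normal((n, n))
orth = np.trace((A @ X).T @ (np.eye(n) - AN))
```

Here AN is the orthogonal projector onto col(A): symmetric, idempotent, and with residual orthogonal to the whole subspace.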
Abstract:
We analyze the best approximation...
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
We study orthogonal projections of generic embedded hypersurfaces in R^4 with boundary to 2-spaces. To this end, we classify simple map germs from R^3 to the plane of codimension less than or equal to 4 with the source containing a distinguished plane which is preserved by coordinate changes. We also go into some detail on their geometrical properties in order to recognize the cases of codimension less than or equal to 1.
Abstract:
We apply the theory of Peres and Schlag to obtain generic lower bounds for the Hausdorff dimension of images of sets under orthogonal projections on simply connected two-dimensional Riemannian manifolds of constant curvature. As a consequence, we obtain appropriate versions of Marstrand's theorem, Kaufman's theorem, and Falconer's theorem in these geometric settings.
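For context, these are the classical Euclidean statements being transported to the constant-curvature setting (paraphrased from the standard literature; here π_θ denotes orthogonal projection onto the line in direction θ):

```latex
% Marstrand: for a Borel set E \subset \mathbb{R}^2 and a.e. direction \theta,
\dim_H \pi_\theta(E) \;=\; \min\{\dim_H E,\, 1\}.
% Kaufman (exceptional directions, when \dim_H E \le 1):
\dim_H \{\theta : \dim_H \pi_\theta(E) < \dim_H E\} \;\le\; \dim_H E.
% Falconer: if \dim_H E > 1, then \pi_\theta(E) has positive Lebesgue
% measure for a.e. \theta, and
\dim_H \{\theta : \mathcal{L}^1(\pi_\theta(E)) = 0\} \;\le\; 2 - \dim_H E.
```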
Abstract:
A method is proposed to characterize contraction of a set through orthogonal projections. For discrete-time multi-agent systems, quantitative estimates of the rate of convergence to consensus are provided by means of contracting convex sets. The convexity required of the sets containing the values taken by the agents' transition maps is understood in a sense more general than that of Euclidean geometry. © 2007 IEEE.
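The contraction mechanism can be illustrated with a toy example (our own sketch, not the paper's construction): with a row-stochastic transition matrix W, each update is a convex combination of the previous agent values, so the convex hull of those values contracts and the agents reach consensus:

```python
import numpy as np

# Toy discrete-time multi-agent system x(k+1) = W x(k) with a
# row-stochastic W: every new agent value lies in the convex hull of
# the old ones, so the hull (here, an interval) contracts to a point.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
x = np.array([0.0, 3.0, 9.0])
for _ in range(100):
    x = W @ x
spread = x.max() - x.min()   # diameter of the agents' values, -> 0
```

The contraction rate is governed by the second-largest eigenvalue of W (here 0.5), and the consensus value is the stationary-distribution average of the initial states.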
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities summing to one. Assume that this table has n rows and m columns and all probabilities are non-null. This kind of table can be seen as an element in the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams and the conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes' theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: a) given a table of probabilities, what is the nearest independent table to the initial one? b) what is the largest orthogonal projection of a row onto a column? or, equivalently, what is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models.
Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
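The stated result about the nearest independent table can be sketched directly (a toy 2×2 illustration under our own reading of the abstract, not the authors' code):

```python
import numpy as np

# Hedged sketch: in the Aitchison geometry of the simplex, the nearest
# independent table to a strictly positive probability table P is the
# closed (renormalised) product of its geometric marginals.
P = np.array([[0.10, 0.20],
              [0.30, 0.40]])

g_row = np.exp(np.log(P).mean(axis=1))   # geometric mean of each row
g_col = np.exp(np.log(P).mean(axis=0))   # geometric mean of each column
T = np.outer(g_row, g_col)
T /= T.sum()                             # closure: renormalise to sum 1
```

Because T is an outer product it has rank one, i.e. it is an independent two-way table.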
Abstract:
Two algorithms are presented for finding the point on a non-rational/rational Bézier curve at which the normal vector passes through a given external point. The algorithms are based on Bézier curve generation algorithms: de Casteljau's algorithm for non-rational Bézier curves and Farin's recursion for rational Bézier curves, respectively. Orthogonal projections from the external point are used to guide the directional search in the proposed iterative algorithms. Using Lyapunov's method, it is shown that each algorithm converges to a local minimum for both the non-rational and rational cases. It is also shown that, on convergence, the distance from the point on the curve to the external point reaches a local minimum for both approaches. Illustrative examples are included to demonstrate the effectiveness of the proposed approaches.
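The curve-generation primitive the algorithms build on is de Casteljau's recursion. Below is a minimal sketch of it, with the projection search replaced by a brute-force parameter scan rather than the paper's Lyapunov-guided iteration (the control points and external point are made up for illustration):

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a non-rational Bezier curve at parameter t by
    repeated linear interpolation of the control points."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Toy cubic Bezier curve and an external point (illustrative values).
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
p = np.array([2.0, 3.0])

# Brute-force stand-in for the projection search: sample the curve and
# take the parameter minimising the distance to p.
ts = np.linspace(0.0, 1.0, 1001)
curve = np.array([de_casteljau(ctrl, t) for t in ts])
t_star = ts[np.argmin(np.linalg.norm(curve - p, axis=1))]
foot = de_casteljau(ctrl, t_star)   # approximate orthogonal projection
```

At the minimiser, the vector from the foot point to p is (approximately) normal to the curve, which is the condition the paper's algorithms solve iteratively.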
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which simultaneously satisfies the smallest-training-error and smallest-output-weight-norm criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
Abstract:
The problem treated in this dissertation is to establish boundedness for the iterates of an iterative algorithm in...
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This thesis studies the variation of closed subspaces of a Hilbert space associated with isolated components of the spectra of self-adjoint operators under bounded additive perturbations. Of particular interest is the least restrictive condition on the norm of the perturbation ensuring that the difference of the corresponding orthogonal projections is a strict norm contraction. A survey of the results obtained so far is given. Based on an iteration approach, a general bound on the variation of the subspaces is obtained for perturbations depending smoothly on a real parameter. By introducing a coupling parameter, this result is applied to the case of additive perturbations, improving previously known results. In the additive case, the bounds on the variation of the subspaces are further sharpened by optimizing the interpolation points in the iteration approach. The resulting bounds are the best obtained to date.
Abstract:
This article presents a mathematical method for producing hard-chine ship hulls based on a set of numerical parameters that are directly related to the geometric features of the hull and uniquely define a hull form for this type of ship. The term planing hull is used generically to describe the majority of hard-chine boats being built today. This article is focused on unstepped, single-chine hulls. B-spline curves and surfaces were combined with constraints on the significant ship curves to produce the final hull design. The hard-chine hull geometry was modeled by decomposing the surface geometry into boundary curves, which were defined by design constraints or parameters. In planing hull design, these control curves are the center, chine, and sheer lines as well as their geometric features including position, slope, and, in the case of the chine, enclosed area and centroid. These geometric parameters have physical, hydrodynamic, and stability implications from the design point of view. The proposed method uses two-dimensional orthogonal projections of the control curves and then produces three-dimensional (3-D) definitions using B-spline fitting of the 3-D data points. The fitting considers maximum deviation from the curve to the data points and is based on an original selection of the parameterization. A net of B-spline curves (stations) is then created to match the previously defined 3-D boundaries. A final set of lofting surfaces of the previous B-spline curves produces the hull surface.
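One ingredient of the pipeline, fitting a B-spline through 3-D data points of a control curve, can be sketched with SciPy (an illustrative assumption on our part: the paper uses its own parameterisation and a maximum-deviation criterion, not `splprep`'s defaults, and the data below are synthetic):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Toy 3-D data points standing in for a sampled control curve
# (e.g. a chine line); the real method fits measured design curves.
t = np.linspace(0.0, 1.0, 20)
x, y, z = t, np.sin(np.pi * t), 0.1 * t**2

# s=0 requests an interpolating cubic B-spline; splprep also returns
# the parameter values u it assigned to the data points.
tck, u = splprep([x, y, z], s=0)

# Evaluate the fitted spline densely along the curve.
xs, ys, zs = splev(np.linspace(0.0, 1.0, 100), tck)
```

In the paper's setting the parameterisation is chosen to control the maximum deviation between the spline and the data points, which `splprep` does not do directly.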
Abstract:
Orthogonal neighborhood-preserving projection (ONPP) is a recently developed orthogonal linear algorithm for overcoming the out-of-sample problem in the well-known manifold learning algorithm locally linear embedding. It has been shown that ONPP is a strong analyzer of high-dimensional data. However, when applied to classification problems in a supervised setting, ONPP focuses only on the intraclass geometrical information while ignoring the interaction of samples from different classes. To enhance the performance of ONPP in classification, a new algorithm termed discriminative ONPP (DONPP) is proposed in this paper. DONPP 1) takes into account both intraclass and interclass geometries; 2) considers the neighborhood information of interclass relationships; and 3) follows the orthogonality property of ONPP. Furthermore, DONPP is extended to the semisupervised case, i.e., semisupervised DONPP (SDONPP), which uses unlabeled samples to improve the classification accuracy of the original DONPP. Empirical studies demonstrate the effectiveness of both DONPP and SDONPP.
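The core ONPP step the abstract refers to can be sketched as follows (a schematic with random toy weights, not the paper's algorithm; conventions for the weight matrix vary across references):

```python
import numpy as np

# Schematic ONPP core: keep LLE-style reconstruction weights W, but
# seek an explicit orthogonal linear map V so that new samples embed
# as V.T @ x. Minimising ||V.T @ X @ (I - W).T||_F^2 subject to
# V.T @ V = I selects the eigenvectors of X M X.T with the smallest
# eigenvalues, where M = (I - W).T @ (I - W).
rng = np.random.default_rng(0)
n, d, k = 50, 5, 2                       # samples, input dim, target dim
X = rng.standard_normal((d, n))          # columns are samples
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)        # toy weights, rows sum to 1
M = (np.eye(n) - W).T @ (np.eye(n) - W)
evals, evecs = np.linalg.eigh(X @ M @ X.T)
V = evecs[:, :k]                         # k smallest eigenvectors
Y = V.T @ X                              # k-dimensional embedding
```

The orthogonality of V is what makes the mapping directly applicable to out-of-sample points, which is the property DONPP and SDONPP inherit.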