992 results for Vector space
Abstract:
A natural generalization of the classical Moore-Penrose inverse is presented. The so-called S-Moore-Penrose inverse of an m × n complex matrix A, denoted by A_S, is defined for any linear subspace S of the matrix vector space C^{n×m}. The S-Moore-Penrose inverse A_S is characterized using either the singular value decomposition or (for the nonsingular square case) the orthogonal complements with respect to the Frobenius inner product. These results are applied to the preconditioning of linear systems based on Frobenius norm minimization and to the linearly constrained linear least squares problem.
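A minimal numpy sketch of the classical Moore-Penrose inverse computed from the SVD, i.e. the unrestricted baseline that the S-Moore-Penrose inverse generalizes; the subspace-restricted inverse A_S itself is not reproduced here.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Classical Moore-Penrose inverse A^+ from the SVD A = U S V^H.
    (Baseline case only, not the S-restricted inverse of the paper.)"""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    # Invert only the numerically nonzero singular values
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

# Sanity check against numpy's built-in pseudoinverse on a random complex matrix
A = np.random.randn(5, 3) + 1j * np.random.randn(5, 3)
assert np.allclose(pinv_via_svd(A), np.linalg.pinv(A))
```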
Abstract:
This work investigates versions of Mikhlin's theorem for pseudodifferential operators with non-regular Banach-space-valued symbols, and their applications to the generation of analytic semigroups of such operators on vector-valued Sobolev spaces W_p(R^n).
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no 'natural' mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently layout typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography, and illustrate its use with practical examples from numerous open-source case studies.
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no 'natural' mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently layout typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.
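As a rough illustration of the layout pipeline described in the two abstracts above (LSI followed by MDS), the following scikit-learn sketch maps a toy set of invented artifact vocabularies to 2D map coordinates; the artifact names, vocabularies and the number of LSI dimensions are placeholders, not data from the papers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import MDS

# Toy corpus: each "document" is the vocabulary of one (hypothetical) artifact
artifacts = {
    "Parser.java":   "token grammar parse tree syntax rule",
    "Lexer.java":    "token character stream scan syntax",
    "Renderer.java": "pixel draw canvas color shape",
    "Shape.java":    "shape color area polygon draw",
    "Compiler.java": "grammar parse code generate rule",
}

# LSI: term-document matrix reduced to a low-rank vector space
tfidf = TfidfVectorizer().fit_transform(artifacts.values())
lsi = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)

# MDS: project the LSI vector space down to two dimensions for the map layout
coords = MDS(n_components=2, random_state=0).fit_transform(lsi)
for name, (x, y) in zip(artifacts, coords):
    print(f"{name:14s} ({x:+.2f}, {y:+.2f})")
```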
Abstract:
An introduction to Fourier Series based on the minimization of the least-squares error between an approximate series representation and the exact function.
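A small numpy sketch of the idea, under the assumption of a dense uniform sampling of [-pi, pi): fitting a truncated trigonometric series by least squares recovers, approximately, the classical Fourier coefficients, shown here for f(x) = x.

```python
import numpy as np

# Least-squares fit of a truncated Fourier series to f(x) = x on [-pi, pi):
# minimize sum_j ( f(x_j) - [a_0 + sum_k (a_k cos kx_j + b_k sin kx_j)] )^2
N = 5
x = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
f = x

cols = [np.ones_like(x)]                        # basis: 1, cos kx, sin kx
for k in range(1, N + 1):
    cols += [np.cos(k * x), np.sin(k * x)]
Phi = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(Phi, f, rcond=None)  # least-squares coefficients
b = coef[2::2]                                  # sine coefficients b_1..b_N
print(np.round(b, 2))   # close to the classical values 2, -1, 0.67, -0.5, 0.4
```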
Abstract:
Twitter lists organise Twitter users into multiple, often overlapping, sets. We believe that these lists capture some form of emergent semantics, which may be useful to characterise. In this paper we describe an approach for such characterisation, which consists of deriving semantic relations between lists and users by analyzing the co-occurrence of keywords in list names. We use the vector space model and Latent Dirichlet Allocation to obtain similar keywords according to co-occurrence patterns. These results are then compared to similarity measures relying on WordNet and to existing Linked Data sets. Results show that keyword co-occurrence based on list membership produces more synonyms and results that correlate more closely with WordNet similarity measures.
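A toy scikit-learn sketch of the two ingredients mentioned above, a co-occurrence vector space with cosine similarity and LDA topics; the user/list keyword data are invented placeholders, not the paper's Twitter corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy data: each "document" is the bag of keywords from the names of the
# Twitter lists a user appears in (invented examples).
users = [
    "python programming developer code opensource",
    "developer software programming tech",
    "music indie rock bands festival",
    "rock music guitar bands",
]

# Vector space model: keyword-by-user counts; each row of K describes a
# keyword by its co-occurrence pattern across users.
vec = CountVectorizer()
X = vec.fit_transform(users)             # users x keywords
K = X.T                                  # keywords x users

sim = cosine_similarity(K)               # similarity in the co-occurrence space
terms = list(vec.get_feature_names_out())
i, j = terms.index("programming"), terms.index("developer")
print("VSM similarity(programming, developer) =", round(sim[i, j], 2))

# LDA: keywords that load on the same topic are treated as related
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for t, topic in enumerate(lda.components_):
    top = [terms[k] for k in topic.argsort()[-4:][::-1]]
    print(f"topic {t}: {top}")
```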
Abstract:
Flutter is a vibratory phenomenon arising from the interaction between inertial, elastic and aerodynamic forces. It consists of an exchange of energy, observed as a change in damping, between two or more structural modes, called critical modes, whose frequencies tend to approach each other (frequency coalescence). Flight flutter testing involves high risk because of the possibility of an abrupt loss of aeroelastic stability (hard flutter) that may lead to the destruction of the aircraft. Moreover, associated phenomena such as LCO (Limit Cycle Oscillation) and coupling with the flight controls may appear. Because of this, exhaustive analyses, including GVT (Ground Vibration Testing), have to be performed before the flight tests begin, and the tests themselves must follow robust procedures.
The test objective is to delimit the stability boundary without reaching it, always keeping the aircraft inside the stable flight envelope. To achieve this, flutter prediction methods are required, the most widely used being the "Flutter Margin". In order to know how much aeroelastic stability remains and how far the aircraft is from the stability boundary (through the prediction methods), the modal parameters, in particular frequency and damping, are of vital importance. The flight test therefore consists of exciting the structure at different flight conditions, measuring the response and analysing it to obtain these two parameters. A great deal of effort is devoted to real-time analysis of the signals as a means of reducing the risk of this type of testing. Numerous Modal Analysis methods exist, but few are capable of analysing the signals produced by flutter tests, owing to their special characteristics. A novel method, based on Singular Value Decomposition (SVD) and QR factorization, has been developed and applied to the analysis of signals from F-18 flutter flights. The method is capable of identifying the frequency and damping of the critical modes. The algorithm relies on the capability of the SVD to analyse, model and predict data series with periodic features and to identify the rank of a matrix, as well as on the ability of the QR factorization to select, from a set of vectors, the best basis for representing the vector space they span. The analysis of simulated and real flutter signals demonstrates, under certain conditions, the effectiveness, robustness, noise immunity and potential for automation of the proposed method.
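The thesis' SVD-plus-QR algorithm itself is not reproduced here, but the following sketch, a generic ERA/matrix-pencil-style identification applied to a synthetic 5 Hz, 2% damped signal standing in for flight data, illustrates how the SVD of Hankel matrices built from a measured response yields frequency and damping estimates of the dominant modes.

```python
import numpy as np

def modal_id(y, dt, order=2):
    """ERA-style estimate of modal frequencies [Hz] and damping ratios from a
    free-decay response y sampled at interval dt (generic sketch, not the
    SVD/QR method of the thesis)."""
    m = len(y) // 2
    H0 = np.array([y[i:i + m] for i in range(m)])            # Hankel matrix H(0)
    H1 = np.array([y[i + 1:i + 1 + m] for i in range(m)])    # shifted matrix H(1)
    U, s, Vt = np.linalg.svd(H0)
    # Keep the dominant singular values; their count reveals the model order
    U1, S, V1 = U[:, :order], s[:order], Vt[:order, :].T
    A = np.diag(S**-0.5) @ U1.T @ H1 @ V1 @ np.diag(S**-0.5)
    lam = np.log(np.linalg.eigvals(A)) / dt                  # continuous-time poles
    freq = np.abs(lam) / (2 * np.pi)                         # modal frequencies [Hz]
    zeta = -lam.real / np.abs(lam)                           # damping ratios
    return freq, zeta

# Synthetic damped mode: 5 Hz, 2 % damping
dt = 0.01
t = np.arange(0, 5, dt)
wn = 2 * np.pi * 5
y = np.exp(-0.02 * wn * t) * np.cos(wn * np.sqrt(1 - 0.02**2) * t)
print(modal_id(y, dt))    # approximately ([5., 5.], [0.02, 0.02])
```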
Abstract:
The aim of the present research is to explore new implementation techniques for Neural Networks, based on graphs, in order to simplify and optimize their architectures and computational complexity. We have focused our attention on a well-known class of Neural Networks: Recursive Neural Networks, also known as Hopfield Neural Networks.
The general problem of constructing the synaptic matrix associated with a Recursive Neural Network, imposing a given set of vectors as fixed points, is far from completely solved: the number of prototype vectors (learning patterns) that can be stored using Hebb's law is rather limited, and the memory quickly reaches saturation if new prototypes are continuously acquired over time. Hebb's law therefore needs to be revised in order to allow new prototypes to be stored at the expense of older ones. Some approaches to this problem have already been developed. We have developed a new way of implementing a Recursive Neural Network in order to solve these problems. The synaptic matrix is obtained by superposing the components of the prototype vectors over the vertices of a graph, which may also be interpreted as a coloring of that graph. When training is finished, the adjacency matrix of the resulting graph, or matrix of weights, presents certain properties for which such matrices will be called tetrahedral. The energy associated with any possible state of the net is represented as a point (a, b) in R^2. All energy points associated with state vectors at the same Hamming distance from the zero vector lie on the same energy line in R^2. The state-vector space can therefore be classified into n classes corresponding to the n different possible distances from any state vector to the zero vector. The (n × n) matrix of weights can in turn be reduced to an n-vector of weights; in this way, both the computation time and the memory space required to store the weights are simplified and optimized. In the recall stage a parameter vector is introduced and used to control the capacity of the net: we prove that the larger the component a_i, the smaller the number of fixed points belonging to the energy line R_i. Once the capacity of the net has been controlled by this parameter, another parameter, defined as the relative weight-vector deviation, is introduced in order to noticeably reduce the number of spurious states. Throughout the text an example is developed that serves to corroborate the theoretical results; the algorithms are given in pseudocode and have also been implemented with the Mathematica 2.2 package, and are shown in a supplementary volume to the text.
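A minimal numpy sketch of the classical Hebbian Hopfield construction that this abstract takes as its starting point, the baseline whose limited storage capacity motivates the graph-based, tetrahedral-matrix approach; it is not the method proposed in the thesis, and the prototype patterns are invented for illustration.

```python
import numpy as np

def hebb_weights(prototypes):
    """Hebbian synaptic matrix of a Hopfield net (classical baseline whose
    limited capacity is what the graph-based construction aims to improve)."""
    P = np.array(prototypes)                # prototype patterns of +1/-1
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0)                  # no self-connections
    return W

def recall(W, state, steps=20):
    """Synchronous recall: iterate s <- sign(W s) until a fixed point."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = s[s_new == 0]   # keep previous value on ties
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

protos = [[1, -1, 1, -1, 1, -1], [1, 1, -1, -1, 1, 1]]
W = hebb_weights(protos)
noisy = [1, -1, 1, -1, -1, -1]              # first prototype with one bit flipped
print(recall(W, noisy))                     # recovers protos[0]
```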
Abstract:
Quantum mechanics associates to certain symplectic manifolds M a quantum model Q(M), which is a Hilbert space. The space Q(M) is the quantum mechanical analogue of the classical phase space M. We discuss here relations between the volume of M and the dimension of the vector space Q(M). Analogues for convex polyhedra are considered.
Abstract:
This paper provides new versions of the Farkas lemma characterizing those inequalities of the form f(x) ≥ 0 which are consequences of a composite convex inequality (S ◦ g)(x) ≤ 0 on a closed convex subset of a given locally convex topological vector space X, where f is a proper lower semicontinuous convex function defined on X, S is an extended sublinear function, and g is a vector-valued S-convex function. In parallel, associated versions of a stable Farkas lemma, considering arbitrary linear perturbations of f, are also given. These new versions of the Farkas lemma, and their corresponding stable forms, are established under the weakest constraint qualification conditions (the so-called closedness conditions), and they are actually equivalent to each other, as well as equivalent to an extended version of the so-called Hahn–Banach–Lagrange theorem, and its stable version, correspondingly. It is shown that any of them implies analytic and algebraic versions of the Hahn–Banach theorem and the Mazur–Orlicz theorem for extended sublinear functions.
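For orientation, the classical finite-dimensional Farkas lemma that these versions extend can be stated as follows (linear case only, not the composite convex form of the paper):

```latex
% Classical (finite-dimensional, linear) Farkas lemma:
% for $A \in \mathbb{R}^{m\times n}$ and $c \in \mathbb{R}^{n}$,
\[
  \bigl(\, c^{\top}x \ge 0 \ \text{ for every } x \text{ with } Ax \ge 0 \,\bigr)
  \iff
  \bigl(\, \exists\, y \ge 0 \ \text{ such that } c = A^{\top}y \,\bigr).
\]
```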
Abstract:
This paper proves that every zero of any n-th (n ≥ 2) partial sum of the Riemann zeta function provides a vector space of basic solutions of the functional equation f(x) + f(2x) + ⋯ + f(nx) = 0, x ∈ R. The continuity of the solutions depends on the sign of the real part of each zero.
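A sketch of the observation presumably underlying this result: a zero of the partial sum translates directly into an exponential-type solution of the functional equation.

```latex
% If $s$ is a zero of the $n$-th partial sum
% $\zeta_n(s)=\sum_{k=1}^{n}k^{-s}$, then for $x>0$ the function
% $f(x)=x^{-s}$ satisfies
\[
  \sum_{k=1}^{n} f(kx) \;=\; \sum_{k=1}^{n} (kx)^{-s}
  \;=\; x^{-s}\,\zeta_n(s) \;=\; 0 ,
\]
% and real solutions follow by taking real and imaginary parts; extending
% them to all of $\mathbb{R}$ is where, per the abstract, the sign of
% $\operatorname{Re}(s)$ governs continuity.
```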
Abstract:
Paraconsistent logic admits that a contradiction can be true. Let p be the truth value of a proposition P. In paraconsistent logic, the truth value of the contradiction satisfies an equation that has no real roots but admits complex roots. This result leads to the development of a multivalued logic with complex truth values. Since such truth values are isomorphic to vectors of the plane, it is natural to relate the valuation function V to the metric of the vector space R^2; we adopt the norms of vectors as valuations. The main objective of this paper is to establish a theory of truth-value evaluation for paraconsistent logics, with the goal of using it in analyzing ideological, mythical, religious and mystic belief systems.
Abstract:
Given a convex optimization problem (P) in a locally convex topological vector space X with an arbitrary number of constraints, we consider three possible dual problems of (P), namely, the usual Lagrangian dual (D), the perturbational dual (Q), and the surrogate dual (Δ), the last one recently introduced in a previous paper of the authors (Goberna et al., J Convex Anal 21(4), 2014). As shown by simple examples, these dual problems may be all different. This paper provides conditions ensuring that inf(P)=max(D), inf(P)=max(Q), and inf(P)=max(Δ) (dual equality and existence of dual optimal solutions) in terms of the so-called closedness regarding a set. Sufficient conditions guaranteeing min(P)=sup(Q) (dual equality and existence of primal optimal solutions) are also provided, for the nominal problems and also for their perturbational relatives. The particular cases of convex semi-infinite optimization problems (in which either the number of constraints or the dimension of X, but not both, is finite) and linear infinite optimization problems are analyzed. Finally, some applications to the feasibility of convex inequality systems are described.
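For reference, a standard statement of the primal problem and its Lagrangian dual (D), under the usual convention for infinitely many constraints that multipliers vanish for all but finitely many indices; this is background, not the paper's surrogate or perturbational duals.

```latex
% Convex objective $f$ and convex constraints $g_t$, $t \in T$:
\[
  (P)\quad \inf_{x\in X}\ \{\, f(x) \;:\; g_t(x)\le 0,\ t\in T \,\},
  \qquad
  (D)\quad \sup_{\lambda\ge 0}\ \inf_{x\in X}\ \Bigl( f(x) + \sum_{t\in T}\lambda_t\,g_t(x) \Bigr),
\]
% with weak duality $\sup(D) \le \inf(P)$ always; the closedness conditions
% studied in the paper are what guarantee the strong form $\inf(P)=\max(D)$.
```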