992 results for S-matrix theory
Abstract:
Deep learning algorithms form a new set of powerful machine learning methods. The idea is to combine layers of latent factors into hierarchies. This often entails a higher computational cost and also increases the number of model parameters. Applying these methods to larger-scale problems therefore requires reducing their cost as well as improving their regularization and optimization. This thesis addresses the question from these three perspectives. We first study the problem of reducing the cost of certain deep algorithms. We propose two methods for training restricted Boltzmann machines and denoising auto-encoders on sparse high-dimensional distributions, which is important for applying these algorithms to natural language processing. Both methods (Dauphin et al., 2011; Dauphin and Bengio, 2013) use importance sampling to sample the objective of these models. We observe that this significantly reduces training time, with speed-ups reaching two orders of magnitude on several benchmarks. Second, we introduce a powerful regularizer for deep methods. Experimental results show that a good regularizer is crucial for obtaining good performance with large networks (Hinton et al., 2012). In Rifai et al. (2011), we propose a new regularizer that combines unsupervised learning with tangent propagation (Simard et al., 1992). This method exploits geometric principles and achieved state-of-the-art results at the time of publication. Finally, we consider the problem of optimizing high-dimensional non-convex surfaces such as those of neural networks. Traditionally, the abundance of local minima was considered the main difficulty in these problems.
In Dauphin et al. (2014a) we argue, from results in statistical physics, random matrix theory and neural network theory as well as from experimental evidence, that a deeper difficulty stems from the proliferation of saddle points. In that paper we also propose a new method for non-convex optimization.
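The importance-sampling idea mentioned above can be illustrated with a minimal, hypothetical sketch (not the authors' code): a sum over a huge, sparse output vector is estimated by sampling a few coordinates from a proposal distribution and reweighting each draw. The proposal `q` and the toy loss vector are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_sum(f_vals, q, n_samples, rng):
    """Estimate sum_j f_vals[j] by sampling indices from proposal q
    and reweighting each draw by 1/q[j] (unbiased importance sampling)."""
    idx = rng.choice(len(f_vals), size=n_samples, p=q)
    return np.mean(f_vals[idx] / q[idx])

# Toy example: a sparse, high-dimensional per-coordinate loss vector.
d = 100_000
f_vals = np.zeros(d)
nz = rng.choice(d, size=50, replace=False)
f_vals[nz] = rng.random(50)

# Proposal that puts half its mass on the (known) non-zero coordinates.
q = np.full(d, 0.5 / (d - 50))
q[nz] = 0.5 / 50

est = importance_sum(f_vals, q, n_samples=2000, rng=rng)
print(est, f_vals.sum())  # the estimate should be close to the exact sum
```

With only 2000 samples instead of 100,000 terms, the estimate converges because the proposal concentrates on the coordinates that actually contribute, which is the source of the speed-ups on sparse targets.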
Abstract:
This thesis is entitled On Infinite Graphs and Related Matrices. In the last two decades graph theory has attracted wide attention as a mathematical model for any system involving a binary relation. The theory is intimately related to many other branches of mathematics, including matrix theory, group theory, probability, topology and combinatorics, and has applications in many other disciplines. Any study of infinite graphs naturally involves an attempt to extend the well-known results on the much more familiar finite graphs. A graph is completely determined by either its adjacencies or its incidences, and a matrix can convey this information completely. This makes a proper labelling of the vertices, edges and any other elements considered an inevitable process. Many types of labelling of finite graphs, such as cordial labelling, Egyptian labelling, arithmetic labelling and magical labelling, are available in the literature. The number of matrices associated with a finite graph is too large for a study of this type to be exhaustive. A large number of theorems have been established by various authors for finite matrices. The extension of these results to infinite matrices associated with infinite graphs is neither obvious nor always possible, due to convergence problems. In this thesis our attempt is to obtain theorems of a similar nature on infinite graphs and infinite matrices. We consider the three most commonly used matrices or operators, namely, the adjacency matrix
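The point that a matrix can convey a graph's adjacencies completely can be shown with a small sketch (illustrative only; the vertex labelling and the 4-cycle example are not from the thesis):

```python
import numpy as np

def adjacency_matrix(n, edges):
    """Adjacency matrix of an undirected graph on vertices 0..n-1.
    A[i, j] = 1 iff {i, j} is an edge; fixing a labelling of the
    vertices fixes the matrix, and the matrix in turn determines
    the graph completely."""
    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

# A 4-cycle: the symmetric 0/1 matrix encodes the whole graph.
A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(A)
```

For an infinite graph the same array becomes an infinite matrix, and it is precisely the convergence questions for such operators that the thesis addresses.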
Polarization and correlation phenomena in the radiative electron capture by bare highly-charged ions
Abstract:
This work investigates the interaction between a photon and an electron in the strong Coulomb field of an atomic nucleus, using the example of radiative electron capture in collisions of highly charged particles. In recent years this charge-exchange process has been studied extensively, both experimentally and theoretically, in particular for relativistic ion–atom collisions. The focus has mainly been on total and differential cross sections. More recently, spin and polarization effects as well as correlation effects in these collision processes have increasingly been discussed. These effects are expected to be very sensitive to relativistic effects in the collision, so that they provide an excellent method for determining them. Moreover, such measurements could also indirectly allow the polarization of the ion beam to be determined, which would open up new experimental possibilities in both atomic and nuclear physics. In this dissertation these first investigations of spin, polarization and correlation effects are systematically summarized. Density-matrix theory provides the appropriate method for this purpose. With this method the general equations for two-step recombination are derived. In this process an electron is first captured radiatively into an excited state, which then, in a second step, decays to the ground state with emission of a second (characteristic) photon. These equations can of course be extended to arbitrary multi-step and one-step processes. For direct electron capture into the ground state, the linear polarization of the recombination photons was investigated, and it was shown that this offers a way to determine the polarization of the particles in the entrance channel of the heavy-ion collision. Calculations for recombination with bare U92+ projectiles show, for example, that the spin polarization of the incident electrons leads to a rotation of the linear polarization of the emitted photons out of the scattering plane. This polarization rotation can be measured with newly developed position- and polarization-sensitive solid-state detectors, which yields a method for measuring the polarization of the incident electrons and of the ion beam. K-shell recombination is a simple example of a one-step process. The best-known example of two-step recombination is electron capture into the 2p3/2 state of a bare ion followed by the Lyman-α1 decay (2p3/2 → 1s1/2). Within density-matrix theory, both the angular distribution and the linear polarization of the characteristic photons were investigated. Both (measurable) quantities are considerably affected by the interference of the E1 (electric dipole) channel with the much weaker M2 (magnetic quadrupole) channel. For the angular distribution of the Lyman-α1 decay in hydrogen-like uranium this E1–M2 mixing leads to a 30% effect. Taking this interference into account resolves the previously existing discrepancy between theory and experiment for the alignment of the 2p3/2 state. Besides these one-particle cross sections (measurement of the capture photon or of the characteristic photon), the correlation between the two was also calculated. These correlations should be observable in X–X coincidence measurements. The main focus of these investigations was the photon–photon angular correlation, which is experimentally the easiest to measure. In this work detailed calculations of the coincident X–X angular distributions were carried out for electron capture into the 2p3/2 state of the bare uranium ion and the subsequent Lyman-α1 transition.
As already mentioned, the angular distribution of the characteristic photon depends not only on the angle of the recombination photon but also strongly on the spin polarization of the incident particles. This opens up a second possibility for measuring the polarization of the incident ion beam or of the incident electrons.
Abstract:
This document is an export plan for Escobar y Martínez, in particular for its GOLTY brand. The first part is an analysis and diagnosis of the company based on the Boston Consulting Group matrix, followed by a market analysis carried out with secondary sources, which identifies the United Kingdom as a market with a growing awareness of sports and a willingness to pay for high quality, similar to what has occurred in Spain and France. Market selection is performed using the Proexport matrix, based on the results of the secondary-source research. Finally, one of the most important theories of management, matrix theory, is used to suggest strategies to follow in order to achieve a successful export process to the markets of the United Kingdom, Spain and France.
Abstract:
Implicit dynamic-algebraic equations, known in control theory as descriptor systems, arise naturally in many applications. Such systems may not be regular (they are then often referred to as singular). In that case the equations may not have unique solutions for consistent initial conditions and arbitrary inputs, and the system may not be controllable or observable. Many control systems can be regularized by proportional and/or derivative feedback. We present an overview of mathematical theory and numerical techniques for regularizing descriptor systems using feedback controls. The aim is to provide stable numerical techniques for analyzing and constructing regular control and state estimation systems and for ensuring that these systems are robust. State and output feedback designs for regularizing linear time-invariant systems are described, including methods for disturbance decoupling and mixed output problems. Extensions of these techniques to time-varying linear and nonlinear systems are discussed in the final section.
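A minimal sketch of the notion of regularity underlying the abstract, using the standard definition that the pencil (E, A) of a descriptor system E x' = A x + B u is regular when det(λE − A) is not identically zero. The specific matrices, the randomized test, and the particular feedback gain are illustrative assumptions, not the paper's algorithms:

```python
import numpy as np

def is_regular_pencil(E, A, trials=5, rng=None):
    """Heuristically test whether the pencil (E, A) is regular, i.e.
    det(lambda*E - A) is not identically zero, by sampling lambda.
    A polynomial vanishing at several random points is, with
    probability one, the zero polynomial."""
    rng = rng or np.random.default_rng(0)
    return any(abs(np.linalg.det(l * E - A)) > 1e-10
               for l in rng.standard_normal(trials))

# A singular descriptor system: E and A share a zero row.
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Derivative feedback u = -F x' replaces E by E + B @ F.
F = np.array([[0.0, 1.0]])
print(is_regular_pencil(E, A))          # False: the pencil is singular
print(is_regular_pencil(E + B @ F, A))  # True: feedback regularized it
```

This is the basic mechanism the paper builds on: choosing feedback gains so that the closed-loop pencil is regular, and doing so with numerically stable transformations rather than determinant evaluations.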
Abstract:
In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor and apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint on the combination parameters is also applied, with the aim of achieving sparsity over the multiple models so that only a subset of models is selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term, so that at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is made computationally efficient by exploiting matrix theory. The effectiveness of the approach has been demonstrated using both simulated and real time-series examples.
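A hedged sketch of the core step, in batch rather than recursive form and without the forgetting factor or the sparsity term: the sum-to-one constraint admits a closed-form solution via a Lagrange multiplier. All names and the toy data below are hypothetical, not from the paper.

```python
import numpy as np

def sum_to_one_ls(X, y, ridge=1e-8):
    """Least-squares combination weights under the sum-to-one constraint,
    solved in closed form with a Lagrange multiplier. A tiny ridge term
    keeps the normal matrix invertible. This is a plain batch analogue
    of the recursive update described in the abstract."""
    R = X.T @ X + ridge * np.eye(X.shape[1])
    Rinv = np.linalg.inv(R)
    ones = np.ones(X.shape[1])
    w_uc = Rinv @ (X.T @ y)                       # unconstrained solution
    lam = (1.0 - ones @ w_uc) / (ones @ Rinv @ ones)
    return w_uc + lam * (Rinv @ ones)             # constrained solution

# Three submodel prediction streams combined into one forecast.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))                 # submodel predictions
y = X @ np.array([0.7, 0.2, 0.1])                 # true weights sum to one
w = sum_to_one_ls(X, y)
print(w, w.sum())
```

In the recursive setting the inverse is not recomputed; rank-one updates (matrix inversion lemma) refresh `Rinv` per sample, which is the "matrix theory" efficiency the abstract refers to.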
Abstract:
This dissertation presents two papers on how simple systemic risk measures can be used to assess portfolio risk characteristics. The first paper deals with the Granger-causation of systemic risk indicators based on correlation matrices of stock returns. Special focus is devoted to the Eigenvalue Entropy, as some previous literature indicated strong results but did not consider different macroeconomic scenarios; the Index Cohesion Force and the Absorption Ratio are also considered. For the S&P500, there is no evidence of Granger-causation from the Eigenvalue Entropies or the Index Cohesion Force. The Absorption Ratio Granger-caused both the S&P500 and the VIX index, being the only simple measure that passed this test. The second paper develops this measure to capture the regimes underlying the American stock market. New indicators are built using filtering and random matrix theory. The returns of the S&P500 are modelled as a mixture of normal distributions, with the activation of each normal distribution governed by a Markov chain whose transition probabilities are a function of the indicators. The model shows that a Herfindahl-Hirschman Index of the normalized eigenvalues exhibits the best fit to the returns from 1998-2013.
Abstract:
Under the sociability of capital, the challenges to consolidating social security as a public policy become significant, with implications for social security services and particularly for social workers, who work for the safeguarding and fulfilment of social rights. In this context of denial of rights, the work of the social worker becomes relevant, as a professional committed to the ethical-political project and to the Theoretical and Methodological Matrix of Social Work, which strengthen actions capable of establishing articulated professional strategies for the collective struggles for equality in society. This study therefore examines the instrumentality of social work in the contemporary world and its contribution to the realization of rights. To this end, we conducted a literature review drawing on authors dealing with the issue, such as Behring (2008), Boschetti (2003), Mota (1995) and Guerra (2007), among others, as well as documentary research through laws, decrees, normative instructions and internal guidelines, and especially an analysis of the Matrix of Social Work in social security itself. Of paramount importance to our analysis, we also conducted field research, using techniques such as semi-structured interviews and questionnaires. The research identifies important aspects of the subject studied, such as professionals' understanding of the instrumentality of social work in its ethical-political, theoretical-methodological and technical-operative dimensions. The demands made by managers on the profession in the socio-occupational space have exceeded the powers and duties set out in the law regulating the profession and in the Matrix of Social Work in social security. The subjects of this study emphasize the role of the professional category within the National Institute of Social Security and of the Federal Council of Social Service in the defense of social work.
Knowledge of the social and institutional framework is critical to building strategies that strengthen social security as a public policy and a guarantor of social rights for workers in Brazil.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We use the framework of noncommutative geometry to define a discrete model for fluctuating geometry. Instead of considering ordinary geometry and its metric fluctuations, we consider generalized geometries where topology and dimension can also fluctuate. The model describes the geometry of spaces with a countable number n of points. The spectral principle of Connes and Chamseddine is used to define dynamics. We show that this simple model has two phases. The expectation value
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Aims: Periodic leg movements in sleep (PLMS) are a frequent finding in polysomnography. Most patients with restless legs syndrome (RLS) display PLMS. However, since PLMS are also often recorded in healthy elderly subjects, the clinical significance of PLMS is still discussed controversially. Leg movements seen concurrently with arousals in obstructive sleep apnoea (OSA) may also appear periodically. Quantitative assessment of the periodicity of LM/PLM as measured by inter-movement intervals (IMI) is difficult. This is mainly due to influencing factors such as sleep architecture and sleep stage, medication, inter- and intra-patient variability, and the arbitrary amplitude and sequence criteria, which tend to broaden the IMI distributions or even make them multi-modal. Methods: Here a statistical method is presented that eliminates such effects from the raw data before the statistics of the IMI are analysed. Rather than studying the absolute size of the IMI (measured in seconds), we focus on the shape of their distribution (suitably normalized IMI). To this end we employ methods developed in Random Matrix Theory (RMT). Patients: The periodicity of the leg movements (LM) of four patient groups (10 to 15 each) showing LM without PLMS (group 1), OSA without PLMS (group 2), PLMS and OSA (group 3), and PLMS without OSA (group 4) is compared. Results: The IMI of patients without PLMS (groups 1 and 2) and with PLMS (groups 3 and 4) are statistically different. In patients without PLMS the distribution of normalized IMI closely resembles that of random events. In contrast, the IMI of PLMS patients show features of periodic systems (e.g. a pendulum) when studied in this normalized manner. Conclusions: For quantifying PLMS periodicity, proper normalization of the IMI is crucial. Without this procedure important features are hidden when grouping LM/PLM over whole nights or across patients.
The clinical significance of PLMS might be elucidated by properly separating random LM from LM that show features of periodic systems.
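The normalization step described in the Methods section can be sketched in simplified form (illustrative only; the RMT unfolding used in the study is more involved than dividing by the mean, and the synthetic interval data are assumptions):

```python
import numpy as np

def normalized_imi(intervals):
    """Normalize inter-movement intervals to unit mean so that records
    with different overall movement rates become comparable; only the
    *shape* of the IMI distribution is retained."""
    intervals = np.asarray(intervals, dtype=float)
    return intervals / intervals.mean()

rng = np.random.default_rng(0)
# Random (Poisson-like) events: exponential spacings, widely spread.
random_imi = normalized_imi(rng.exponential(30.0, size=500))
# Periodic (pendulum-like) events: tightly clustered around one period.
periodic_imi = normalized_imi(25.0 + rng.normal(0.0, 1.0, size=500))

# After normalization, the spread alone separates the two regimes.
print(random_imi.std(), periodic_imi.std())
```

In this toy version the normalized exponential intervals have a standard deviation near 1, while the periodic intervals collapse to a narrow peak, mirroring the random-versus-periodic distinction drawn in the Results.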
Abstract:
We calculate near-threshold bound states and Feshbach resonance positions for atom–rigid-rotor models of the highly anisotropic systems Li+CaH and Li+CaF. We perform statistical analysis on the resonance positions to compare with the predictions of random matrix theory. For Li+CaH with total angular momentum J=0 we find fully chaotic behavior in both the nearest-neighbor spacing distribution and the level number variance. However, for J>0 we find different behavior due to the presence of a nearly conserved quantum number. Li+CaF (J=0) also shows apparently reduced levels of chaotic behavior despite its stronger effective coupling. This may indicate the development of another good quantum number relating to a bending motion of the complex. However, continuously varying the rotational constant over a wide range shows unexpected structure in the degree of chaotic behavior, including a dramatic reduction around the rotational constant of CaF. This demonstrates the complexity of the relationship between coupling and chaotic behavior.
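As an illustration of the kind of level statistics used (not the paper's code), nearest-neighbour behaviour can be probed with consecutive-spacing ratios, which distinguish level repulsion (chaotic, GOE-like) from uncorrelated (Poisson) spectra without any unfolding step:

```python
import numpy as np

def spacing_ratios(levels):
    """Ratios r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) of consecutive
    level spacings. Unlike raw spacings, the ratios need no unfolding;
    the mean is about 0.39 for Poisson levels and about 0.53 for GOE."""
    s = np.diff(np.sort(levels))
    return np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])

rng = np.random.default_rng(0)

# GOE matrix: eigenvalues repel each other (chaotic-like statistics).
n = 400
M = rng.standard_normal((n, n))
goe = np.linalg.eigvalsh((M + M.T) / 2)

# Independent (Poisson) levels: no repulsion between neighbours.
poisson = rng.random(n)

print(spacing_ratios(goe).mean(), spacing_ratios(poisson).mean())
```

A partially conserved quantum number, as found here for J>0, splits the spectrum into nearly independent blocks and pushes the statistics from the GOE value back toward the Poisson value.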
Abstract:
According to the Katz-Sarnak philosophy, the distribution of the zeros of $L$-functions is predicted by the behaviour of the eigenvalues of random matrices. In particular, the behaviour of the zeros near the central point reveals the symmetry type of the family of $L$-functions. Once the symmetry is identified, the Katz-Sarnak philosophy conjectures that several statistics associated with the zeros will be modelled by the eigenvalues of random matrices from the corresponding group. This thesis studies the distribution of the zeros near the central point of the family of elliptic curves over $\mathbb{Q}[i]$. Brumer carried out these computations in 1992 for the family of elliptic curves over $\mathbb{Q}$. The new issues involved in generalizing his work to a number field are highlighted.