931 results for upper bound
Abstract:
This thesis studies the amplitude amplification algorithm and its applications in the field of property testing. We use amplitude amplification to propose the most efficient quantum algorithm to date for testing the linearity of Boolean functions, and we generalize our new algorithm to test whether a function between two finite abelian groups is a homomorphism. The best known quantum algorithm for testing the symmetry of Boolean functions is also improved, and we use this new algorithm to test the quasi-symmetry of Boolean functions. We then deepen the study of the number of black-box queries made by the amplitude amplification algorithm when the initial amplitude is unknown. A rigorous description of the random variable representing this number is presented, followed by the previously known upper bound on its expectation. New results on the variance of this variable follow. In particular, we show that in the general case the variance is infinite, but also that, for an appropriate choice of parameters, it becomes bounded above.
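For illustration, here is a minimal classical Monte Carlo of the query-count random variable for amplitude amplification with unknown initial amplitude, following the standard exponential schedule of Boyer, Brassard, Høyer, and Tapp with growth factor λ = 6/5; this is a sketch of the textbook procedure, not necessarily the exact variant analyzed in the thesis.

```python
import math
import random

def bbht_query_count(a, lam=6/5, rng=random):
    """Simulate the number of oracle queries used by exponential-schedule
    amplitude amplification when the initial success probability
    a = sin^2(theta) is unknown. Returns the total number of Grover
    iterations performed before the first successful measurement."""
    theta = math.asin(math.sqrt(a))
    m = 1.0
    queries = 0
    while True:
        j = rng.randrange(math.ceil(m))   # iterate count uniform on {0, ..., ceil(m)-1}
        queries += j
        # After j Grover iterations the success probability is sin^2((2j+1) theta).
        if rng.random() < math.sin((2 * j + 1) * theta) ** 2:
            return queries
        m *= lam                          # grow the schedule geometrically

# Sample the query-count random variable for a small initial amplitude.
samples = [bbht_query_count(a=0.001) for _ in range(10_000)]
print(sum(samples) / len(samples))        # empirical mean, scaling like 1/sqrt(a)
```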
Abstract:
We introduce a new model of two-party communication in which we are interested in the time it takes two participants to carry out a task over a channel with delay d. We establish some upper and lower bounds and compare this new model with the classical and quantum communication models studied in the literature. We show that the communication complexity of a function over a channel with delay is upper-bounded by its communication complexity up to a multiplicative factor of d/lg d. We then present some examples of functions for which a clever strategy exploiting the dead time yields an advantage over a naive implementation of a communication protocol that is optimal in terms of communication complexity. Finally, we show that a channel with delay makes it possible to carry out a cryptographic bit exchange, but that, by itself, it is insufficient to realize the cryptographic primitive of oblivious transfer.
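In display form, and with hypothetical notation (writing $D(f)$ for the ordinary communication complexity of $f$ and $T_d(f)$ for the time to compute $f$ over a channel with delay $d$; these symbols are ours, not necessarily the thesis's), the stated upper bound reads:

```latex
\[
  T_d(f) \;\le\; O\!\left(\frac{d}{\lg d}\right) \cdot D(f).
\]
```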
Abstract:
The purpose of this work is to study extensions of subcopulas. An important use of such extensions is the nonparametric estimation of a copula by smoothing a subcopula (the empirical copula). When the resulting estimator is a copula, this estimator is an extension of the subcopula. Chapter 2 of the thesis presents the construction and the uniform convergence of a bona fide estimator of a copula or of a copula density. This estimator is an extension of empirical-copula type based on smoothing by the tensor product of spline distribution functions. Chapter 3 gives the characterization of the set of possible extensions of a subcopula. This subject has been treated in the past, but the constructions proposed there do not apply to dependence in very general spaces. Chapter 4 tackles the following problem, posed by [Carley, 2002]: finding the upper bound of the three-dimensional extensions of a subcopula with finite domain.
Abstract:
A profile is a finite sequence of vertices of a graph. The set of all vertices of the graph that minimize the sum of the distances to the vertices of the profile is the median of the profile. Any subset of the vertex set that is the median of some profile is called a median set. The number of median sets of a graph is defined to be the median number of the graph. In this paper, we identify the median sets of various classes of graphs such as $K_p - e$, $K_{p,q}$ for $p > 2$, wheel graphs, and so forth. The median numbers of these graphs and of hypercubes are determined, and an upper bound for the median number of even cycles is established. We also express the median number of a product graph in terms of the median numbers of its factors.
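As a sketch of the central definition, the following Python computes the median of a profile directly from breadth-first-search distances; the helper names are ours, not the paper's.

```python
from collections import deque

def bfs_distances(adj, source):
    """Single-source distances in an unweighted graph given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def median_of_profile(adj, profile):
    """Vertices minimizing the sum of distances to the (multiset) profile."""
    totals = {v: sum(bfs_distances(adj, v)[p] for p in profile) for v in adj}
    best = min(totals.values())
    return {v for v, t in totals.items() if t == best}

# Example: the even cycle C4; the profile (0, 2) has the whole vertex set as its median.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(median_of_profile(c4, (0, 2)))   # {0, 1, 2, 3}
```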
Abstract:
The present thesis is about the inverse problem in differential Galois theory. Given a differential field, the inverse problem asks which linear algebraic groups can be realized as differential Galois groups of Picard-Vessiot extensions of this field. In this thesis we concentrate on the realization of the classical groups as differential Galois groups. We introduce a method for a very general realization of these groups. That is, for the classical groups of Lie rank $l$ we present explicit linear differential equations whose coefficients are differential polynomials in $l$ differential indeterminates over an algebraically closed field of constants $C$; in other words, our differential ground field is purely differentially transcendental over the constants. For the groups of type $A_l$, $B_l$, $C_l$, $D_l$ and $G_2$ we were able to carry out these realizations at the same time in the spirit of Abhyankar's program 'Nice Equations for Nice Groups'. Here the choice of the defining matrix is important. We found that a judicious choice of $l$ negative roots for the parametrization, together with the positive simple roots, leads to a nice differential equation and at the same time defines a sufficiently general element of the Lie algebra. Unfortunately, for the groups of type $F_4$ and $E_6$ the linear differential equations for such elements are of enormous length. Therefore, in the case of $F_4$ and $E_6$ we keep the defining matrix differential equation, which also has a simple and pleasant shape. The basic idea of the realization is to apply an upper and a lower bound criterion for the differential Galois group to our parameter equations and to show that the two bounds coincide. An upper and a lower bound criterion can be found in the literature. Here we use only the upper bound, since the application of the lower bound criterion requires an important condition to be satisfied. If the differential ground field is $C_1$, e.g., $C(z)$ with the standard derivation, this condition is automatically satisfied. Since our differential ground field is purely differentially transcendental over $C$, we have no information on whether this condition holds. The main part of this thesis is the development of an alternative lower bound criterion and its application. We introduce the specialization bound. It states that the differential Galois group of a specialization of the parameter equation is contained in the differential Galois group of the parameter equation. Thus, for its application we need a differential equation over $C(z)$ with a given differential Galois group. A modification of a result of Mitschi and Singer yields such an equation over $C(z)$ up to differential conjugation, i.e., up to transformation to the required shape. The transformation of their equation into a specialization of our parameter equation is carried out for each of the above groups in the respective transformation lemma.
Abstract:
This dissertation introduces and investigates systems of parallel communicating restarting automata (PCRA systems for short). Two well-known concepts from formal language theory and automata theory are combined: the model of restarting automata and the so-called PC systems (systems of parallel communicating components). A PCRA system consists of finitely many restarting automata which, on the one hand, perform local computations in parallel and independently of one another and, on the other hand, may communicate with each other. Communication follows a fixed communication protocol realized by means of special communication states. An essential feature of the communication structure in systems of cooperating components is whether communication is centralized or non-centralized. While in a non-centralized communication structure every component may communicate with every other component, in a centralized communication structure all communication takes place exclusively with a designated master component. One of the main results of this work shows that centralized and non-centralized systems have the same computational power (which is generally not the case for PC systems). Moreover, using multicast or broadcast communication in addition to point-to-point communication does not increase the computational power either. Furthermore, the expressive power of PCRA systems is investigated and compared with that of PC systems of finite automata and with that of multi-head automata. PC systems of finite automata are known to have the same expressive power as one-way multi-head automata and form a lower bound for the expressive power of PCRA systems with one-way components. In fact, PCRA systems are stronger than PC systems of finite automata even when the components, taken by themselves, have the same expressive power, i.e., characterize the regular languages. For PCRA systems with two-way components, the language classes of two-way multi-head automata in the deterministic and in the nondeterministic case are shown to be lower bounds; these in turn correspond to the well-known complexity classes L (deterministic logarithmic space) and NL (nondeterministic logarithmic space). The class of context-sensitive languages is shown to be an upper bound. In addition, extensions of restarting automata are considered (the non-forgetting property and the shrinking property) which increase the computational power of individual components but do not increase the power of systems. The language classes characterized by PCRA systems are closed under various language operations, and some of these classes are even abstract families of languages (so-called AFLs). Finally, problems specific to PCRA systems are examined with regard to their decidability. It is shown that emptiness, universality, inclusion, equivalence, and finiteness are not even semi-decidable for systems with two restarting automata of the weakest type. The word problem is shown to be decidable in quadratic time in the deterministic case and in exponential time in the nondeterministic case.
Abstract:
The subject of this work is the analysis of various formalisms for computing binary word relations. The basis of all considerations carried out here is the model of restarting automata, introduced in 1995 by Jancar et al. On the one hand, the concept of input/output and proper relations, already known for restarting automata, is investigated further and extended to systems of two restarting automata working in parallel and communicating with each other (PC systems). On the other hand, a variant of restarting automata is introduced that is modeled on classical automaton models for computing relations. With the help of these mechanisms it can be shown that some classes defined by input/output and proper relations of restarting automata coincide with the traditional relation classes of rational relations and pushdown relations. Furthermore, it turns out that the concept of parallel communicating automata is extremely powerful, since already the class of proper relations of monotone PC systems comprises all computable relations. The main part of the work deals with so-called restarting transducers, which are restarting automata extended by an output function. It turns out that this model in particular, with its various extensions and restrictions, is well suited to establishing a comprehensive hierarchy of relation classes. First and foremost, the various types of monotone restarting transducers should be mentioned here, with whose help many interesting new and known relation classes within the length-bounded pushdown relations are characterized. Finally, in contrast to the preceding models, the concept of "Transducing by Observing", which is not based on restarting automata, is introduced for computing relations. This mechanism, not dissimilar to restarting transducers, is used in the broadest sense to take a different perspective on the relations defined by restarting transducers, as well as to obtain an upper bound on the computational power of restarting transducers.
Abstract:
The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: decide whether a given sound is a possible sound of a given language; disambiguate a sequence of words; and compute the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternative linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.
Abstract:
When triangulating a belief network we aim to obtain a junction tree of minimum state space. Searching for the optimal triangulation can be cast as a search over all the permutations of the network's variables. Our approach is to embed the discrete set of permutations in a convex continuous domain D. By suitably extending the cost function over D and solving the continuous nonlinear optimization task we hope to obtain a good triangulation with respect to the aforementioned cost. In this paper we introduce an upper bound on the total junction tree weight as the cost function. The appropriateness of this choice is discussed and explored by simulations. Then we present two ways of embedding the new objective function into continuous domains and show that they perform well compared to the best known heuristic.
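As a sketch of the underlying cost, the following Python evaluates one permutation: eliminating vertices in a given order creates cliques whose summed table sizes upper-bound the weight of the induced junction tree. This is the standard elimination construction, offered as an assumed stand-in; the paper's continuous embedding of the permutation set is not reproduced here.

```python
from math import prod

def triangulation_weight(adj, card, order):
    """Total state space of the cliques created by eliminating vertices in
    `order` from the moral graph `adj` (dict: vertex -> set of neighbors).
    `card` maps each vertex to the number of states of its variable.
    The sum over elimination cliques upper-bounds the junction tree weight."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    total = 0
    for v in order:
        clique = adj[v] | {v}
        total += prod(card[u] for u in clique)
        for u in adj[v]:                          # add fill-in edges, then drop v
            adj[u] |= adj[v] - {u}
            adj[u].discard(v)
        del adj[v]
    return total

# Example: a 4-cycle A-B-C-D with binary variables.
g = {'A': {'B', 'D'}, 'B': {'A', 'C'}, 'C': {'B', 'D'}, 'D': {'A', 'C'}}
card = dict.fromkeys(g, 2)
print(triangulation_weight(g, card, ['A', 'B', 'C', 'D']))  # 8 + 8 + 4 + 2 = 22
```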
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by $k$--means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well--founded, but also superior in a practical application.
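A rough modern re-creation of the comparison, assuming scikit-learn, the bundled digits data in place of the US postal service set, and a convex linear read-out in place of error backpropagation for the $k$-means RBF network; parameter values are illustrative guesses.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SV machine with Gaussian kernel: centers (support vectors) chosen automatically.
svm = SVC(kernel="rbf", gamma=0.001, C=10).fit(X_train, y_train)
print("SVC accuracy:", svm.score(X_test, y_test))

# Classical RBF network: centers from k-means, linear output layer trained on
# the RBF features (a convex stand-in for error backpropagation).
centers = KMeans(n_clusters=60, n_init=10, random_state=0).fit(X_train).cluster_centers_
phi_train = rbf_kernel(X_train, centers, gamma=0.001)
phi_test = rbf_kernel(X_test, centers, gamma=0.001)
rbf_net = LogisticRegression(max_iter=2000).fit(phi_train, y_train)
print("k-means RBF accuracy:", rbf_net.score(phi_test, y_test))
```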
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate CI coefficients $B_K$ of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each $K$ is a linear combination of configuration state functions (CSFs) over all degenerate elements of $K$. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, $\Delta E_K = (E - H_{KK})B_K^2/(1 - B_K^2)$, with $B_K$ determined from coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, $\Delta E_{\mathrm{dis}}$, is approximated by the sum of the $\Delta E_K$ of all discarded $K$s. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space $M$, a usual upper bound $E_S$ is computed by CI in a selected space $S$, and $E_M = E_S + \Delta E_{\mathrm{dis}} + \delta E$, where $\delta E$ is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm$^{-1}$) is achieved in a model space $M$ of $1.4 \times 10^9$ CSFs ($1.1 \times 10^{12}$ determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of $6.5 \times 10^{12}$ CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since $\Delta E_{\mathrm{dis}}$ can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of $E_S$ is taken up in a companion paper.
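As a sketch of the selection step, the following Python ranks disconnected configurations by Brown's energy formula and accumulates the truncation error of the discarded ones; all arrays and numbers are hypothetical toy inputs, not data from the paper.

```python
import numpy as np

def select_disconnected(E, H_diag, B, threshold):
    """Apply Brown's energy formula
        dE_K = (E - H_KK) * B_K^2 / (1 - B_K^2)
    to split disconnected configurations into kept and discarded sets; the
    summed dE_K of the discarded ones approximates the truncation error.

    E       : current energy estimate
    H_diag  : diagonal elements H_KK, one per configuration K
    B       : estimated CI coefficients B_K (from S/D coefficient products)
    """
    dE = (E - H_diag) * B**2 / (1.0 - B**2)
    keep = np.abs(dE) >= threshold
    dE_dis = dE[~keep].sum()          # truncation error from discarded configurations
    return keep, dE_dis

# Toy numbers (hypothetical): five disconnected configurations.
E = -128.9
H_diag = np.array([-127.1, -126.8, -125.9, -126.2, -125.5])
B = np.array([0.02, 0.004, 0.0005, 0.008, 0.0002])
keep, dE_dis = select_disconnected(E, H_diag, B, threshold=1e-5)
print(keep, dE_dis)
```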
Abstract:
In this paper extensions to an existing tracking algorithm are described. These extensions implement adaptive tracking constraints in the form of regional upper-bound displacements and an adaptive track smoothness constraint. Together, these constraints make the tracking algorithm more flexible than the original algorithm (which used fixed tracking parameters) and provide greater confidence in the tracking results. The result of applying the new algorithm to high-resolution ECMWF reanalysis data is shown as an example of its effectiveness.
Abstract:
There are various situations in which it is natural to ask whether a given collection of $k$ functions, $\rho_j(r_1,\dots,r_j)$, $j = 1,\dots,k$, defined on a set $X$, are the first $k$ correlation functions of a point process on $X$. Here we describe some necessary and sufficient conditions on the $\rho_j$'s for this to be true. Our primary examples are $X = \mathbb{R}^d$, $X = \mathbb{Z}^d$, and $X$ an arbitrary finite set. In particular, we extend a result by Ambartzumian and Sukiasian showing realizability at sufficiently small densities $\rho_1(r)$. Typically if any realizing process exists there will be many (even an uncountable number); in this case we prove, when $X$ is a finite set, the existence of a realizing Gibbs measure with $k$-body potentials which maximizes the entropy among all realizing measures. We also investigate in detail a simple example in which a uniform density $\rho$ and a translation-invariant $\rho_2$ are specified on $\mathbb{Z}$; there is a gap between our best upper bound on possible values of $\rho$ and the largest $\rho$ for which realizability can be established.
Abstract:
In the Radiative Atmospheric Divergence Using ARM Mobile Facility GERB and AMMA Stations (RADAGAST) project we calculate the divergence of radiative flux across the atmosphere by comparing fluxes measured at each end of an atmospheric column above Niamey, in the African Sahel region. The combination of broadband flux measurements from geostationary orbit and the deployment for over 12 months of a comprehensive suite of active and passive instrumentation at the surface eliminates a number of sampling issues that could otherwise affect divergence calculations of this sort. However, one sampling issue that challenges the project is the fact that the surface flux data are essentially measurements made at a point, while the top-of-atmosphere values are taken over a solid angle that corresponds to an area at the surface of some 2500 km$^2$. Variability of cloud cover and aerosol loading in the atmosphere means that the downwelling fluxes, even when averaged over a day, will not be an exact match to the area-averaged value over that larger area, although we might expect them to be an unbiased estimate thereof. The heterogeneity of the surface, for example fixed variations in albedo, further means that there is likely a systematic difference in the corresponding upwelling fluxes. In this paper we characterize and quantify this spatial sampling problem. We bound the root-mean-square error in the downwelling fluxes by exploiting a second set of surface flux measurements from a site that was run in parallel with the main deployment. The differences between the two sets of fluxes lead us to one upper bound on the sampling uncertainty, and their correlation leads to another, which is probably optimistic as it requires certain other conditions to be met. For the upwelling fluxes we use data products from a number of satellite instruments to characterize the relevant heterogeneities and so estimate the systematic effects that arise from the flux measurements having to be taken at a single point. The sampling uncertainties vary with the season, being higher during the monsoon period. We find that the sampling errors for the daily average flux are small for the shortwave irradiance, generally less than 5 W m$^{-2}$ under relatively clear skies, but these increase to about 10 W m$^{-2}$ during the monsoon. For the upwelling fluxes, again taking daily averages, systematic errors are of order 10 W m$^{-2}$ as a result of albedo variability. The uncertainty on the longwave component of the surface radiation budget is smaller than that on the shortwave component in all conditions, but a bias of 4 W m$^{-2}$ is calculated to exist in the surface-leaving longwave flux.
Abstract:
The experimental variogram computed in the usual way by the method of moments and the Haar wavelet transform are similar in that they filter data and yield informative summaries that may be interpreted. The variogram filters out constant values; wavelets can filter variation at several spatial scales and thereby provide a richer repertoire for analysis, and they demand no assumptions other than that of finite variance. This paper compares the two functions, identifying that part of the Haar wavelet transform that gives it its advantages. It goes on to show that the generalized variogram of order $k = 1$, 2, and 3 filters linear, quadratic, and cubic polynomials from the data, respectively, which correspond with more complex wavelets in Daubechies's family. The additional filter coefficients of the latter can reveal features of the data that are not evident in its usual form. Three examples in which data recorded at regular intervals on transects are analyzed illustrate the extended form of the variogram. The apparent periodicity of gilgais in Australia seems to be accentuated as filter coefficients are added, but otherwise the analysis provides no new insight. Analysis of hyperspectral data with a strong linear trend showed that the wavelet-based variograms filtered it out. Adding filter coefficients in the analysis of the topsoil across the Jurassic scarplands of England changed the upper bound of the variogram; it then resembled the within-class variogram computed by the method of moments. To elucidate these results, we simulated several series of data to represent a random process with values fluctuating about a mean, data with a long-range linear trend, data with local trend, and data with stepped transitions. The results suggest that the wavelet variogram can filter out the effects of long-range trend, but not the effects of local trend or of transitions from one class to another, as across boundaries.
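The comparison at the finest scale can be sketched in a few lines of Python: the method-of-moments variogram over many lags picks up a long-range linear trend, while the Haar detail coefficients, being local differences, largely filter it out. The simulated series and parameters are illustrative assumptions, not the paper's data.

```python
import numpy as np

def variogram(z, max_lag):
    """Experimental semivariogram by the method of moments:
    gamma(h) = mean((z[i+h] - z[i])^2) / 2 over all pairs at lag h."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

def haar_detail_variance(z):
    """Mean squared Haar detail coefficient at the finest scale,
    d_i = (z[2i] - z[2i+1]) / sqrt(2); comparable to gamma(1)."""
    z = z[: len(z) // 2 * 2]                     # truncate to even length
    d = (z[0::2] - z[1::2]) / np.sqrt(2.0)
    return np.mean(d ** 2)

rng = np.random.default_rng(0)
n = 512
z = 0.05 * np.arange(n) + rng.normal(size=n)     # long-range linear trend + noise
gammas = variogram(z, 100)
print(gammas[[0, 9, 49, 99]])    # lags 1, 10, 50, 100: the trend inflates long lags
print(haar_detail_variance(z))   # stays near the noise variance: trend filtered out
```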