15 results for Lagrangian bounds

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. 
Furthermore, in the case of directed rooted trees, reducing a k-colouring to a 3-colouring requires at least log* k + 1 rounds for some k and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
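
The colour-reduction primitive discussed above can be illustrated with a minimal sketch of one step of the classic Cole-Vishkin technique (the textbook method, not the exact algorithm or constants analysed in the thesis): each node compares its colour with its successor's colour, finds the lowest bit position where they differ, and encodes that position together with its own bit value.

```python
def reduction_step(colours):
    """One synchronous round of Cole-Vishkin style colour reduction on a
    directed cycle: a proper colouring with colours below 2^b is mapped
    to a proper colouring with colours below 2*b."""
    n = len(colours)
    new = []
    for v in range(n):
        c, succ = colours[v], colours[(v + 1) % n]
        diff = c ^ succ                      # nonzero, since the colouring is proper
        i = (diff & -diff).bit_length() - 1  # lowest bit position where they differ
        new.append(2 * i + ((c >> i) & 1))   # encode position and own bit value
    return new
```

Iterating this step shrinks the palette exponentially in each round, which is where the log* k round complexity comes from; a constant number of extra rounds then brings the palette down to three colours.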

Relevance:

10.00%

Publisher:

Abstract:

This dissertation analyses the notions of progress and common good in Swedish political language during the Age of Liberty (1719–1772). The method used is conceptual analysis, but this study is also a contribution to the history of political ideas and political culture, aiming at a broader understanding of how the bounds of political community were conceptualised and represented in eighteenth-century Sweden. The research is based on the official documents of the regime, such as the fundamental laws and the solemn speeches made at the opening and closing of the Diet, on normative or alternative descriptions of society such as histories and economic literature, and on practical political writings by the Diet and its members. The rhetoric of common good and particular interest is thus examined both in its consensual and theoretical contexts and in practical politics. Central political issues addressed include the extent of economic liberties, the question of freedom to print, the meaning of privilege, the position of particular estates or social groups, and the economic interests of particular areas or persons. This research shows that the modern Swedish word for progress (framsteg) was still only rarely used in the eighteenth century, while the notion of progress, growth and success existed in a variety of closely related terms and metaphorical expressions. The more traditional concept of common good (allmänna bästa) was used in several variants, some of which explicitly related to utility and interest. The combination of public utility and private interest in political discourse challenged traditional ideals of political morality, in which virtue had been the foundation of the common good. The progress of society was also presented as linked to the progress of liberty, knowledge and wealth in a way that can be described as characteristic of the Age of Enlightenment but which also points to the emergence of early liberal thought.

Relevance:

10.00%

Publisher:

Abstract:

The efforts to combine quantum theory with general relativity have been great and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes and has profound implications for results widely accepted in traditional, commutative, theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality in that it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next the notion of noncommutativity is introduced and its implications studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing for the first time that the light-wedge microcausality condition is compatible with the twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, dramatic as it is for the whole structure of space-time.

Relevance:

10.00%

Publisher:

Abstract:

The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity: find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains the given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by assuming only that the sectional curvature has a certain upper bound; a sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set, from above and from below, by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.

Relevance:

10.00%

Publisher:

Abstract:

Quasiconformal mappings are natural generalizations of conformal mappings: homeomorphisms with 'bounded distortion', a notion that can be approached in several ways. In this work we study the dimension-distortion properties of quasiconformal mappings both in the plane and in the higher-dimensional Euclidean setting. The thesis consists of a summary and three research articles. A basic property of quasiconformal mappings is local Hölder continuity. It has long been conjectured that this regularity holds at the Sobolev level (Gehring's higher integrability conjecture). Optimal regularity would also provide sharp bounds for the distortion of Hausdorff dimension. The higher integrability conjecture was solved in the plane by Astala in 1994 and is still open in higher dimensions. Thus in the plane we have a precise description of how Hausdorff dimension changes under quasiconformal deformations for general sets. The first two articles contribute to two remaining issues in the planar theory. The first concerns the distortion of more special sets: for rectifiable sets we expect improved bounds to hold. The second issue consists of understanding distortion of dimension on a finer level, namely on the level of Hausdorff measures. In the third article we study flatness properties of quasiconformal images of spheres in a quantitative way. These also lead to nontrivial bounds for their Hausdorff dimension, even in the n-dimensional case.

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can search for optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate.
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, that is, geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
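
Task (iii) builds on the minimum dominating set problem mentioned above. As a toy centralised illustration of that underlying problem (a standard greedy heuristic, not one of the local algorithms contributed by the thesis; the function name is ours):

```python
def greedy_dominating_set(adj):
    """Greedy heuristic for minimum dominating set: repeatedly pick the
    vertex that dominates the most not-yet-dominated vertices.

    adj maps each vertex to the set of its neighbours."""
    undominated = set(adj)
    dom = set()
    while undominated:
        # A vertex dominates itself and all of its neighbours.
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom
```

On a star graph the heuristic picks the centre; in general graphs this greedy approach is known to achieve a logarithmic approximation ratio.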

Relevance:

10.00%

Publisher:

Abstract:

Cosmological inflation is the dominant paradigm in explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but it typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry.
Another low-scale model considered in the thesis is the curvaton scenario, in which the primordial perturbations originate from quantum fluctuations of a curvaton field that is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information about non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. -- In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it.
Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds for histogram-type plots as well as quantile-quantile and probability-probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
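
The basic construction behind quantile residuals can be sketched in a few lines (a minimal illustration for a univariate continuous model with known parameters; the thesis additionally accounts for the uncertainty from parameter estimation): each observation is pushed through the model's cumulative distribution function and then through the standard normal quantile function.

```python
from statistics import NormalDist

def quantile_residuals(ys, model_cdf):
    """Quantile residuals: if model_cdf is the true CDF of the data, the
    returned values are (approximately) i.i.d. standard normal."""
    std = NormalDist()
    return [std.inv_cdf(model_cdf(y)) for y in ys]

# For a correctly specified Gaussian model the quantile residual reduces
# to the familiar standardised residual (y - mu) / sigma.
model = NormalDist(mu=2.0, sigma=3.0)
residuals = quantile_residuals([2.0, 5.0, -1.0], model.cdf)
```

For a mixture model, model_cdf would be the mixture CDF, which is exactly the setting where Pearson's residuals fail but quantile residuals remain approximately standard normal.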

Relevance:

10.00%

Publisher:

Abstract:

Models of Maximal Flavor Violation (MxFV) in elementary particle physics may contain at least one new scalar $SU(2)$ doublet field $\Phi_{FV} = (\eta^0,\eta^+)$ that couples the first and third generation quarks ($q_1,q_3$) via a Lagrangian term $\mathcal{L}_{FV} = \xi_{13} \Phi_{FV} q_1 q_3$. These models have a distinctive signature of same-charge top-quark pairs and evade flavor-changing limits from meson mixing measurements. Data corresponding to 2 fb$^{-1}$ collected by the CDF II detector in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV are analyzed for evidence of the MxFV signature. For a neutral scalar $\eta^0$ with $m_{\eta^0} = 200$ GeV/$c^2$ and coupling $\xi_{13}=1$, $\sim$ 11 signal events are expected over a background of $2.1 \pm 1.8$ events. Three events are observed in the data, consistent with background expectations, and limits are set on the coupling $\xi_{13}$ for $m_{\eta^0} = 180-300$ GeV/$c^2$.

Relevance:

10.00%

Publisher:

Abstract:

Self-similarity, a concept taken from mathematics, is gradually becoming a keyword in musicology. Although a polysemic term, self-similarity often refers to multi-scalar feature repetition in a set of relationships, and it is commonly valued as an indication of musical coherence and consistency. This investigation provides a theory of musical meaning formation in the context of intersemiosis, that is, the translation of meaning from one cognitive domain to another (e.g. from mathematics to music, or to speech or graphic forms). From this perspective, the degree of coherence of a musical system relies on a synecdochic intersemiosis: a system of related signs within other comparable and correlated systems. This research analyzes the modalities of such correlations, exploring their general and particular traits and their operational bounds. Looking forward in this direction, the notion of analogy is used as a rich concept through its two definitions quoted in the classical literature, proportion and paradigm, which are enormously valuable in establishing criteria of measurement, likeness and affinity. Using quantitative and qualitative methods, evidence is presented to justify a parallel study of different modalities of musical self-similarity. For this purpose, original arguments by Benoît B. Mandelbrot are revised, alongside a systematic critique of the literature on the subject. Furthermore, connecting Charles S. Peirce's synechism with Mandelbrot's fractality is one of the main developments of the present study. This study provides elements for explaining Bolognesi's (1983) conjecture, which states that the most primitive, intuitive and basic musical device is self-reference, extending its functions and operations to self-similar surfaces.
In this sense, this research suggests that, with various modalities of self-similarity, synecdochic intersemiosis acts as a system of systems in coordination, with greater or lesser development of structural consistency and greater or lesser contextual dependence.

Relevance:

10.00%

Publisher:

Abstract:

An edge dominating set for a graph G is a set D of edges such that each edge of G is in D or adjacent to at least one edge in D. This work studies deterministic distributed approximation algorithms for finding minimum-size edge dominating sets. The focus is on anonymous port-numbered networks: there are no unique identifiers, but a node of degree d can refer to its neighbours by integers 1, 2, ..., d. The present work shows that in the port-numbering model, edge dominating sets can be approximated as follows: in d-regular graphs, to within 4 − 6/(d + 1) for an odd d and to within 4 − 2/d for an even d; and in graphs with maximum degree Δ, to within 4 − 2/(Δ − 1) for an odd Δ and to within 4 − 2/Δ for an even Δ. These approximation ratios are tight for all values of d and Δ: there are matching lower bounds.
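
A useful classical fact lies behind the problem above (a centralised sketch for intuition only; the algorithms of this work run in the port-numbering model and achieve the ratios stated): any maximal matching is an edge dominating set, since an edge with both endpoints unmatched could still be added to the matching.

```python
def greedy_maximal_matching(edges):
    """Greedily build a maximal matching; the result is also an edge
    dominating set: every edge shares an endpoint with a matched edge."""
    matched = set()   # endpoints already used by the matching
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

Centrally, this yields a 2-approximation of the minimum edge dominating set; the higher ratios above reflect the extra cost of working distributively without unique identifiers.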

Relevance:

10.00%

Publisher:

Abstract:

This work is a case study of applying nonparametric statistical methods to corpus data. We show how to use ideas from permutation testing to answer linguistic questions related to morphological productivity and type richness. In particular, we study the use of the suffixes -ity and -ness in the 17th-century part of the Corpus of Early English Correspondence within the framework of historical sociolinguistics. Our hypothesis is that the productivity of -ity, as measured by type counts, is significantly low in letters written by women. To test such hypotheses, and to facilitate exploratory data analysis, we take the approach of computing accumulation curves for types and hapax legomena. We have developed an open-source computer program which uses Monte Carlo sampling to compute the upper and lower bounds of these curves for one or more levels of statistical significance. By comparing the type accumulation from women's letters with the bounds, we are able to confirm our hypothesis.

Relevance:

10.00%

Publisher:

Abstract:

A local algorithm with local horizon r is a distributed algorithm that runs in r synchronous communication rounds; here r is a constant that does not depend on the size of the network. As a consequence, the output of a node in a local algorithm only depends on the input within r hops from the node. We give tight bounds on the local horizon for a class of local algorithms for combinatorial problems on unit-disk graphs (UDGs). Most of our bounds are due to a refined analysis of existing approaches, while others are obtained by suggesting new algorithms. The algorithms we consider are based on network decompositions guided by a rectangular tiling of the plane. The algorithms are applied to matching, independent set, graph colouring, vertex cover, and dominating set. We also study local algorithms on quasi-UDGs, which are a popular generalisation of UDGs, aimed at more realistic modelling of communication between the network nodes. Analysing the local algorithms on quasi-UDGs allows one to assume that the nodes know their coordinates only approximately, up to an additive error. Despite the localisation error, the quality of the solution to problems on quasi-UDGs remains the same as for the case of UDGs with perfect location awareness. We analyse the increase in the local horizon that comes along with moving from UDGs to quasi-UDGs.
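
The unit-disk graph model referred to above is easy to make concrete (a minimal sketch; the function name is ours): nodes are points in the plane, and two nodes are adjacent exactly when their Euclidean distance is at most the radio range.

```python
def unit_disk_graph(points, radius=1.0):
    """Edges of the unit-disk graph on the given planar points: i and j
    are adjacent iff their Euclidean distance is at most radius."""
    edges = set()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            # Compare squared distances to avoid a square root.
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2:
                edges.add((i, j))
    return edges
```

In one common definition, a quasi-UDG relaxes this rule: pairs closer than some parameter r < 1 are always adjacent, pairs farther than 1 are never adjacent, and in between either may hold, which models imprecise location and irregular radio ranges.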

Relevance:

10.00%

Publisher:

Abstract:

Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations of the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems in both the homogeneous and the non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results, utilizing the full force of non-homogeneous analysis. The final article concerns sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak-type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.