904 results for Lipschitzian bounds


Relevance:

10.00%

Publisher:

Abstract:

Self-similarity, a concept taken from mathematics, is gradually becoming a keyword in musicology. Although a polysemic term, self-similarity often refers to multi-scalar feature repetition in a set of relationships, and it is commonly valued as an indication of musical coherence and consistency. This investigation provides a theory of musical meaning formation in the context of intersemiosis, that is, the translation of meaning from one cognitive domain to another (e.g. from mathematics to music, or to speech or graphic forms). From this perspective, the degree of coherence of a musical system relies on a synecdochic intersemiosis: a system of related signs within other comparable and correlated systems. This research analyzes the modalities of such correlations, exploring their general and particular traits, and their operational bounds. Along these lines, the notion of analogy is employed through its two definitions from the Classical literature, proportion and paradigm, which are enormously valuable in establishing criteria of measurement, likeness, and affinity. Using quantitative and qualitative methods, evidence is presented to justify a parallel study of different modalities of musical self-similarity. For this purpose, original arguments by Benoît B. Mandelbrot are revisited, alongside a systematic critique of the literature on the subject. Furthermore, connecting Charles S. Peirce's synechism with Mandelbrot's fractality is one of the main developments of the present study. This study provides elements for explaining Bolognesi's (1983) conjecture, which states that the most primitive, intuitive and basic musical device is self-reference, extending its functions and operations to self-similar surfaces.
In this sense, this research suggests that, with various modalities of self-similarity, synecdochic intersemiosis acts as a system of systems in coordination, with greater or lesser development of structural consistency and greater or lesser contextual dependence.


General relativity makes very specific predictions for the gravitational waveforms from inspiralling compact binaries, obtained using the post-Newtonian (PN) approximation. We investigate the extent to which measurement of the PN coefficients, possible with second-generation gravitational-wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and third-generation detectors such as the Einstein Telescope (ET), could be used to test post-Newtonian theory and to put bounds on a subclass of parametrized post-Einsteinian theories, which differ from general relativity in a parametrized sense. We demonstrate this possibility by employing the best inspiralling waveform model for nonspinning compact binaries, which is 3.5PN accurate in phase and 3PN in amplitude. Within the class of theories considered, Advanced LIGO can test the theory at 1.5PN and thus the leading tail term. Future observations of stellar-mass black hole binaries by ET can test the consistency between the various PN coefficients in the gravitational-wave phasing over the mass range 11–44 M⊙. The choice of the lower frequency cutoff is important for testing post-Newtonian theory using the ET. The bias in the test arising from the assumption of nonspinning binaries is indicated.


An edge dominating set for a graph G is a set D of edges such that each edge of G is in D or adjacent to at least one edge in D. This work studies deterministic distributed approximation algorithms for finding minimum-size edge dominating sets. The focus is on anonymous port-numbered networks: there are no unique identifiers, but a node of degree d can refer to its neighbours by integers 1, 2, ..., d. The present work shows that in the port-numbering model, edge dominating sets can be approximated as follows: in d-regular graphs, to within 4 − 6/(d + 1) for an odd d and to within 4 − 2/d for an even d; and in graphs with maximum degree Δ, to within 4 − 2/(Δ − 1) for an odd Δ and to within 4 − 2/Δ for an even Δ. These approximation ratios are tight for all values of d and Δ: there are matching lower bounds.
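The distributed port-numbering algorithms themselves are beyond the scope of this summary, but the baseline they are measured against is classical: any maximal matching is an edge dominating set, and a 2-approximation of a minimum one. A minimal centralized sketch of that baseline (not the paper's distributed algorithm):

```python
def maximal_matching(edges):
    """Greedily build a maximal matching; its edges form an edge
    dominating set, since every edge shares an endpoint with one."""
    matched = set()   # vertices already covered by the matching
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path graph on 5 vertices: 0-1-2-3-4
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
eds = maximal_matching(edges)

def dominated(e, eds):
    """An edge is dominated if it is in eds or touches an edge in eds."""
    return any(set(e) & set(f) for f in eds)

assert all(dominated(e, eds) for e in edges)
```

The greedy pass is sequential; the point of the paper is precisely how close to this quality a constant-round anonymous distributed algorithm can get.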


This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate.
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
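As a concrete illustration of the sleep scheduling idea, the toy sketch below (a naive greedy heuristic on a hypothetical instance, not one of the thesis's algorithms) partitions the sensors into disjoint groups that each cover all targets; with unit batteries, the number of groups is the achieved network lifetime:

```python
def sleep_schedule(coverage, targets):
    """Greedily extract disjoint sensor groups that each cover all
    targets; each group is kept awake for one time slot while the
    rest sleep, so the number of groups is the network lifetime."""
    available = set(coverage)
    schedule = []
    while True:
        uncovered, group = set(targets), []
        for s in sorted(available):          # deterministic order
            if uncovered & coverage[s]:      # sensor s still helps
                group.append(s)
                uncovered -= coverage[s]
            if not uncovered:
                break
        if uncovered:                        # remaining sensors cannot cover all
            break
        schedule.append(group)
        available -= set(group)
    return schedule

# Hypothetical instance: sensor -> set of targets it monitors
coverage = {'a': {1, 2}, 'b': {2, 3}, 'c': {1, 3}, 'd': {1, 2, 3}}
plan = sleep_schedule(coverage, {1, 2, 3})   # two disjoint covering groups
```

Finding the longest such schedule is the hard part; the greedy pass only illustrates the problem structure.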


This work is a case study of applying nonparametric statistical methods to corpus data. We show how to use ideas from permutation testing to answer linguistic questions related to morphological productivity and type richness. In particular, we study the use of the suffixes -ity and -ness in the 17th-century part of the Corpus of Early English Correspondence within the framework of historical sociolinguistics. Our hypothesis is that the productivity of -ity, as measured by type counts, is significantly low in letters written by women. To test such hypotheses, and to facilitate exploratory data analysis, we take the approach of computing accumulation curves for types and hapax legomena. We have developed an open source computer program which uses Monte Carlo sampling to compute the upper and lower bounds of these curves for one or more levels of statistical significance. By comparing the type accumulation from women’s letters with the bounds, we are able to confirm our hypothesis.
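The open-source program mentioned above is not reproduced here, but the core idea of Monte Carlo accumulation-curve bounds can be sketched as follows (the token data is a made-up miniature, and min/max envelopes stand in for the tool's significance-level quantiles):

```python
import random

def accumulation_curves(tokens, n_runs=200, seed=0):
    """Monte Carlo type accumulation: shuffle the token sequence many
    times and record, at each position, how many distinct types have
    appeared so far; return per-position min/max envelopes across runs
    as crude lower/upper bounds on the accumulation curve."""
    rng = random.Random(seed)
    n = len(tokens)
    lo, hi = [n] * n, [0] * n
    for _ in range(n_runs):
        seq = tokens[:]
        rng.shuffle(seq)
        seen = set()
        for i, t in enumerate(seq):
            seen.add(t)
            lo[i] = min(lo[i], len(seen))
            hi[i] = max(hi[i], len(seen))
    return lo, hi

# Hypothetical miniature corpus of -ity / -ness types
tokens = ["purity", "darkness", "vanity", "kindness",
          "vanity", "goodness", "darkness", "purity"]
lo, hi = accumulation_curves(tokens)
```

A type accumulation curve from a subcorpus (e.g. women's letters) that falls below the lower envelope would then indicate significantly low productivity.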


A local algorithm with local horizon r is a distributed algorithm that runs in r synchronous communication rounds; here r is a constant that does not depend on the size of the network. As a consequence, the output of a node in a local algorithm only depends on the input within r hops from the node. We give tight bounds on the local horizon for a class of local algorithms for combinatorial problems on unit-disk graphs (UDGs). Most of our bounds follow from a refined analysis of existing approaches, while others are obtained by designing new algorithms. The algorithms we consider are based on network decompositions guided by a rectangular tiling of the plane. The algorithms are applied to matching, independent set, graph colouring, vertex cover, and dominating set. We also study local algorithms on quasi-UDGs, which are a popular generalisation of UDGs aimed at more realistic modelling of communication between the network nodes. Analysing local algorithms on quasi-UDGs allows us to model nodes that know their coordinates only approximately, up to an additive error. Despite the localisation error, the quality of the solution to problems on quasi-UDGs remains the same as in the case of UDGs with perfect location awareness. We analyse the increase in the local horizon that comes along with moving from UDGs to quasi-UDGs.
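A unit-disk graph is fully determined by the node coordinates and the radio range; a minimal sketch of the model (the coordinates are hypothetical):

```python
from math import hypot

def unit_disk_graph(points, radius=1.0):
    """Nodes are points in the plane; two nodes are adjacent iff their
    Euclidean distance is at most `radius`. (A quasi-UDG relaxes this:
    edges at distance <= r are mandatory, edges beyond R > r are
    forbidden, and in between either choice is allowed.)"""
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if hypot(x1 - x2, y1 - y2) <= radius:
                edges.append((i, j))
    return edges

pts = [(0.0, 0.0), (0.8, 0.0), (2.5, 0.0)]  # hypothetical coordinates
```

The rectangular tiling mentioned above exploits exactly this geometry: nodes in far-apart tiles can never be adjacent, so tiles can be processed with bounded coordination.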


Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations for the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems both in the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results, utilizing the full force of non-homogeneous analysis. The final article concerns sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.


We consider the ergodic control for a controlled nondegenerate diffusion when m other (m finite) ergodic costs are required to satisfy prescribed bounds. Under a condition on the cost functions that penalizes instability, the existence of an optimal stable Markov control is established by convex analytic arguments.


Observational studies indicate that the convective activity of monsoon systems undergoes intraseasonal variations with multi-week time scales. The zone of maximum monsoon convection exhibits substantial transient behavior, with successive propagations from the North Indian Ocean to the heated continent. Over South Asia the zone achieves its maximum intensity. These propagations may extend over 3000 km in latitude, and perhaps twice that distance in longitude, and remain coherent entities for periods greater than 2-3 weeks. Attempts to explain this phenomenon using simple ocean-atmosphere models of the monsoon system had concluded that the interactive ground hydrology so modifies the total heating of the atmosphere that a steady-state solution is not possible, thus promoting lateral propagation. That is, the ground hydrology forces the total heating of the atmosphere and the vertical velocity to be slightly out of phase, causing a migration of the convection towards the region of maximum heating. Whereas the lateral scale of the variations produced by the Webster (1983) model was essentially correct, they occurred at twice the frequency of the observed events and were formed near the coastal margin, rather than over the ocean. Webster's (1983) model, used to pose the theories, was deficient in a number of aspects. In particular, both the ground moisture content and the thermal inertia of the model were severely underestimated. At the same time, the sea surface temperatures produced by the model between the equator and the model's land-sea boundary were far too cool. Both the atmosphere and the ocean model were therefore modified to include a better hydrological cycle and ocean structure. The convective events produced by the modified model possessed the observed frequency and were generated well south of the coastline. The improved simulation of monsoon variability allowed the hydrological cycle feedback to be generalized.
It was found that monsoon variability is constrained to lie within the bounds of a positive gradient of a convective intensity potential (I). This function depends primarily on the surface temperature, the availability of moisture, and the stability of the lower atmosphere, which varies very slowly on the time scale of months. The oscillations of the monsoon perturb the mean convective intensity potential, causing local enhancements of the gradient. These perturbations are caused by the hydrological feedbacks discussed above, or by the modification of the air-sea fluxes by variations of the low-level wind during convective events. The final result is the slow northward propagation of convection within an even slower convective regime. Although it is considered premature to use the model to conduct simulations of the African monsoon system, the ECMWF analyses show very similar behavior of the convective intensity potential, suggesting, at least, that the same processes control the low-frequency structure of the African monsoon. The implications of these hypotheses for numerical weather prediction of monsoon phenomena are discussed.


A von Mises truss with stochastically varying material properties is investigated for snap-through instability. The variability of the snap-through load is calculated analytically as a function of the material property variability, represented as a stochastic process. Bounds are established that are independent of the complete description of the correlation structure, which is seldom obtainable from experimental data. Two processes are considered to represent the material property variability, and the results are presented graphically.


The effect of uncertainties on performance predictions of a helicopter is studied in this article. The aeroelastic parameters such as the air density, blade profile drag coefficient, main rotor angular velocity, main rotor radius, and blade chord are considered as uncertain variables. The propagation of these uncertainties in the performance parameters such as thrust coefficient, figure of merit, induced velocity, and power required are studied using Monte Carlo simulation and the first-order reliability method. The Rankine-Froude momentum theory is used for performance prediction in hover, axial climb, and forward flight. The propagation of uncertainty causes large deviations from the baseline deterministic predictions, which undoubtedly affect both the achievable performance and the safety of the helicopter. The numerical results in this article provide useful bounds on helicopter power requirements.
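The hover case of this propagation can be sketched with momentum theory alone; the thrust level and the uncertainty magnitudes below are hypothetical placeholders, not the article's values:

```python
import random
from math import pi, sqrt

def hover_power(thrust, rho, radius):
    """Ideal hover power from Rankine-Froude momentum theory:
    induced velocity v_i = sqrt(T / (2*rho*A)), power P = T*v_i."""
    area = pi * radius ** 2
    return thrust * sqrt(thrust / (2.0 * rho * area))

rng = random.Random(42)
T = 30000.0                                  # thrust [N], hypothetical
samples = [
    hover_power(T,
                rng.gauss(1.225, 0.05),      # air density [kg/m^3], assumed spread
                rng.gauss(5.0, 0.1))         # rotor radius [m], assumed spread
    for _ in range(10000)
]
baseline = hover_power(T, 1.225, 5.0)        # deterministic prediction
p_lo, p_hi = min(samples), max(samples)      # crude bounds on required power
```

The spread of `samples` around `baseline` is the kind of deviation from the deterministic prediction that the article quantifies (there with FORM as well as Monte Carlo, and for climb and forward flight too).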


There is a huge knowledge gap in our understanding of many terrestrial carbon cycle processes. In this paper, we investigate the bounds on terrestrial carbon uptake over India that arise solely due to CO2-fertilization. For this purpose, we use a terrestrial carbon cycle model and consider two extreme scenarios: in one case, unlimited CO2-fertilization is allowed for the terrestrial vegetation, with the CO2 concentration at 735 ppm; in the other, CO2-fertilization is capped at the year-1975 level. Our simulations show that, under equilibrium conditions, modeled carbon stocks in natural potential vegetation increase by 17 Gt-C with unlimited fertilization for CO2 levels and climate change corresponding to the end of the 21st century, but they decline by 5.5 Gt-C if fertilization is limited to 1975 levels of CO2 concentration. The carbon stock changes are dominated by forests. The area covered by natural potential forests increases by about 36% in the unlimited-fertilization case but decreases by 15% in the fertilization-capped case. Thus, the assumption regarding CO2-fertilization has the potential to alter the sign of terrestrial carbon uptake over India. Our model simulations also imply that the maximum potential terrestrial sequestration over India, under equilibrium conditions and the best-case scenario of unlimited CO2-fertilization, is only 18% of the 21st-century SRES A2 scenario emissions from India. The limited uptake potential of the natural potential vegetation suggests that reduction of CO2 emissions and afforestation programs should be top priorities.


Hydrogen storage in three-dimensional carbon foams is analyzed using classical grand canonical Monte Carlo simulations. The calculated storage capacities of the foams meet the material-based DOE targets and are comparable to the capacities of a bundle of well-separated open nanotubes of similar diameter. The pore sizes in the foams are optimized for the best hydrogen uptake. The capacity depends sensitively on the C–H2 interaction potential; therefore, the results are presented for its "weak" and "strong" choices, to offer lower and upper bounds for the expected capacities. Furthermore, quantum effects on the effective C–H2 as well as H2–H2 interaction potentials are considered. We find that the quantum effects noticeably change the adsorption properties of foams and must be accounted for even at room temperature.
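The "weak"/"strong" bounding strategy can be illustrated with a 12-6 Lennard-Jones form for the effective C–H2 pair interaction; the well depths and length parameter below are hypothetical, chosen only to show how two potential choices bracket the adsorption energy:

```python
def lj(r, epsilon, sigma=3.0):
    """12-6 Lennard-Jones pair potential; epsilon is the well depth
    and sigma the zero-crossing distance (units are arbitrary here)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

WEAK, STRONG = 0.003, 0.006        # hypothetical well depths
r_min = 2 ** (1 / 6) * 3.0         # location of the 12-6 minimum
e_weak, e_strong = lj(r_min, WEAK), lj(r_min, STRONG)
```

Running the same GCMC simulation with each choice then yields the lower and upper capacity estimates quoted in the abstract.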


Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a1, b1] × [a2, b2] × ... × [ak, bk]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(χ(G(P)) − 1) ≤ dim(P) ≤ 2 box(G(P)), where χ(G(P)) is the chromatic number of G(P) and χ(G(P)) ≠ 1. The second result of the paper relates the boxicity of a graph G to a natural partial order associated with its extended double cover, denoted G_c, whose vertex set consists of two copies A and B of V(G). Let P_c be the natural height-2 poset associated with G_c, obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 ≤ dim(P_c) ≤ 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) ≤ 2 box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Δ is O(Δ log² Δ), which is an improvement over the best known upper bound of Δ² + 2. (2) There exist graphs with boxicity Ω(Δ log Δ).
This disproves the conjecture that the boxicity of a graph is O(Δ). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5−ε)) for any ε > 0, unless NP = ZPP.
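The box-intersection definition above is easy to make concrete. The sketch below (a hand-built example, not from the paper) represents the 4-cycle a-b-c-d-a with 2-dimensional boxes; since C4 is not an interval graph, box(C4) = 2, and this representation witnesses the upper bound:

```python
def boxes_intersect(b1, b2):
    """Axis-parallel boxes, given as lists of closed intervals (one per
    dimension), intersect iff their intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

# 2-dimensional boxes realising the 4-cycle a-b-c-d-a:
# dimension 1 separates a from c, dimension 2 separates b from d.
boxes = {
    'a': [(0, 1), (0, 3)],
    'b': [(0, 3), (0, 1)],
    'c': [(2, 3), (0, 3)],
    'd': [(0, 3), (2, 3)],
}
```

Each non-edge needs at least one dimension in which the two boxes are disjoint; boxicity measures how many dimensions are needed to realise all non-edges simultaneously.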


Presented here, in a vector formulation, is an O(mn²) direct, concise algorithm that prunes/identifies the linearly dependent (ld) rows of an arbitrary m × n matrix A and computes its reflexive-type minimum-norm inverse A_mr^−, which will be the true inverse A⁻¹ if A is nonsingular and the Moore-Penrose inverse A⁺ if A is of full row rank. The algorithm, without any additional computation, produces the projection operator P = (I − A_mr^− A), which provides a means to compute any of the solutions of the consistent linear equation Ax = b, since the general solution may be expressed as x = A_mr^− b + Pz, where z is an arbitrary vector. The rank r of A is also produced in the process. Some of the salient features of this algorithm are that (i) the algorithm is concise, (ii) the minimum-norm least-squares solution for consistent/inconsistent equations is readily computable when A is of full row rank (otherwise, a minimum-norm solution for consistent equations is obtainable), (iii) the algorithm identifies ld rows, if any, reducing the computation concerned and improving the accuracy of the result, (iv) error bounds for the inverse as well as for the solution x of Ax = b are readily computable, (v) error-free computation of the inverse, solution vector, rank, and projection operator, and its inherent parallel implementation, are straightforward, (vi) it is suitable for vector (pipeline) machines, and (vii) the inverse produced by the algorithm can be used to solve under-/overdetermined linear systems.
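The relationship between the minimum-norm inverse, the projection operator, and the general solution can be checked numerically with an SVD-based pseudoinverse (NumPy's `pinv`, not the O(mn²) algorithm of the article); the rank-deficient matrix below is a hypothetical example:

```python
import numpy as np

# Hypothetical 3x3 system with one linearly dependent row: row 3 = row 1 + row 2
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])       # consistent, since b[2] = b[0] + b[1]

A_pinv = np.linalg.pinv(A)          # Moore-Penrose inverse (via SVD here)
P = np.eye(3) - A_pinv @ A          # projector onto the null space of A

x0 = A_pinv @ b                     # minimum-norm solution of A x = b
z = np.array([1.0, -2.0, 0.5])      # arbitrary vector
x = x0 + P @ z                      # another solution, per x = A^- b + P z
```

Both `x0` and `x` satisfy Ax = b, and `x0` has the smallest norm among all solutions, mirroring the general-solution formula in the abstract.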