11 results for Mathématiques

in DI-fusion - The institutional repository of Université Libre de Bruxelles


Relevance:

10.00%

Publisher:

Abstract:

The problem of achieving super-resolution, i.e. resolution beyond the classical Rayleigh distance of half a wavelength, is a real challenge in several imaging problems. The development of computer-assisted instruments and the possibility of inverting the recorded data have clearly modified the traditional concept of the resolving power of an instrument. We show that, in the framework of inverse problem theory, the achievable resolution limit no longer arises from a universal rule but from a practical limitation due to noise amplification in the data inversion process. We analyze under what circumstances super-resolution can be achieved and we show how to assess the actual resolution limits in a given experiment, as a function of the noise level and of the available a priori knowledge about the object function. We emphasize the importance of a priori knowledge of the object's effective support and we show that significant super-resolution can be achieved for "subwavelength sources", i.e. objects which are smaller than the probing wavelength.
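
A minimal 1-D sketch (not from the paper, all parameters illustrative) of the noise-amplification mechanism the abstract describes: a Gaussian blur matrix stands in for a band-limited imaging operator, and inverting it by truncated SVD shows that the usable number of singular components, and hence the resolution, is set by the noise level.

```python
import numpy as np

# Hypothetical 1-D illustration: a Gaussian point-spread function blurs
# two close point sources; small singular values of the operator amplify
# noise, so only the leading components can be inverted safely.
rng = np.random.default_rng(0)
n = 64
x = np.linspace(-1.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1**2))
A /= A.sum(axis=1, keepdims=True)            # normalized blur operator

obj = np.zeros(n)
obj[28] = obj[33] = 1.0                      # two sub-resolution point sources
data = A @ obj + 1e-3 * rng.standard_normal(n)   # noisy recorded image

U, s, Vt = np.linalg.svd(A)

def tsvd_inverse(k):
    """Invert the data keeping only the k largest singular components."""
    return Vt[:k].T @ ((U[:, :k].T @ data) / s[:k])

e_matched = np.linalg.norm(tsvd_inverse(20) - obj)  # truncation matched to noise
e_naive = np.linalg.norm(tsvd_inverse(60) - obj)    # near-full inversion blows up
```

Keeping nearly all components divides the noise by tiny singular values and destroys the reconstruction, while a truncation level matched to the noise recovers a stable (if band-limited) estimate.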

Relevance:

10.00%

Publisher:

Abstract:

We investigate the problem of introducing consistent self-couplings in free theories for mixed tensor gauge fields whose symmetry properties are characterized by Young diagrams made of two columns of arbitrary (but different) lengths. We prove that, in flat space, these theories admit no local, Poincaré-invariant, smooth, self-interacting deformation with at most two derivatives in the Lagrangian. Relaxing the derivative and Lorentz-invariance assumptions, there is still no deformation that modifies the gauge algebra, and in most cases no deformation that alters the gauge transformations. Our approach is based on a Becchi-Rouet-Stora-Tyutin (BRST) cohomology deformation procedure. © 2005 American Institute of Physics.

Relevance:

10.00%

Publisher:

Abstract:

Whereas the resolving power of an ordinary optical microscope is determined by the classical Rayleigh distance, significant super-resolution, i.e. resolution improvement beyond that Rayleigh limit, has been achieved by confocal scanning light microscopy. Furthermore, it has been shown that the resolution of a confocal scanning microscope can still be significantly enhanced by measuring, for each scanning position, the full diffraction image by means of an array of detectors and by inverting these data to recover the value of the object at the focus. We discuss the associated inverse problem and show how to generalize the data inversion procedure by allowing, for reconstructing the object at a given point, the use also of the diffraction images recorded at other scanning positions. This leads us to a whole family of generalized inversion formulae, which contains as special cases some previously known formulae. We also show how these exact inversion formulae can be implemented in practice.

Relevance:

10.00%

Publisher:

Abstract:

A singular perturbation method is applied to a non-conservative system of two weakly coupled, strongly nonlinear, non-identical oscillators. For certain parameters, localized solutions exist for which the amplitude of one oscillator is an order of magnitude smaller than the other. It is shown that these solutions are described by coupled equations for the phase difference and scaled amplitudes. Three types of localized solutions are obtained as solutions to these equations, corresponding to phase locking, phase drift, and phase entrainment. Quantitative results for the phases and amplitudes of the oscillators and the stability of these phenomena are expressed in terms of the parameters of the model.
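
A toy analogue (not the paper's model) of the phase-difference dynamics the abstract mentions: the simplest equation of this kind, dψ/dt = Δω − K·sin ψ, already exhibits two of the three regimes — phase locking when K > |Δω| and phase drift when K < |Δω|. The parameter values below are purely illustrative.

```python
import numpy as np

def phase_difference(domega, K, t_end=200.0, dt=0.01):
    """Integrate dpsi/dt = domega - K*sin(psi) by forward Euler."""
    psi = 0.0
    for _ in range(int(t_end / dt)):
        psi += dt * (domega - K * np.sin(psi))
    return psi

psi_locked = phase_difference(domega=0.5, K=1.0)  # K > |domega|: locks at arcsin(0.5)
psi_drift = phase_difference(domega=1.0, K=0.5)   # K < |domega|: grows without bound
```

In the locked regime the phase difference settles at the stable fixed point ψ* = arcsin(Δω/K); in the drifting regime it winds indefinitely at a mean rate √(Δω² − K²).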

Relevance:

10.00%

Publisher:

Abstract:

In this paper we consider the problems of object restoration and image extrapolation, according to the regularization theory of improperly posed problems. In order to take into account the stochastic nature of the noise and to introduce the main concepts of information theory, great attention is devoted to the probabilistic methods of regularization. The kind of continuity that is restored is investigated in detail; in particular, we prove that, while image extrapolation presents Hölder-type stability, object restoration has only logarithmic continuity. © 1979 American Institute of Physics.

Relevance:

10.00%

Publisher:

Abstract:

SCOPUS: ed.j

Relevance:

10.00%

Publisher:

Abstract:

An 18-module Cherenkov detector with a total sensitive area of 2.3 m², having silica aerogel as radiator, is being tested in a particle beam at the CERN PS. The modules, each with a sensitive area of 23 × 55 cm², typically give a Cherenkov signal of 12 photoelectrons for β = 1 particles, for silica aerogel of refractive index 1.03 and a thickness of 15 cm. © 1981 IOP Publishing Ltd.

Relevance:

10.00%

Publisher:

Abstract:

An extended formulation of a polyhedron P is a linear description of a polyhedron Q together with a linear map π such that π(Q)=P. These objects are of fundamental importance in polyhedral combinatorics and optimization theory, and the subject of a number of studies. Yannakakis' factorization theorem (Yannakakis in J Comput Syst Sci 43(3):441–466, 1991) provides a surprising connection between extended formulations and communication complexity, showing that the smallest size of an extended formulation of P equals the nonnegative rank of its slack matrix S. Moreover, Yannakakis also shows that the nonnegative rank of S is at most 2^c, where c is the complexity of any deterministic protocol computing S. In this paper, we show that the latter result can be strengthened when we allow protocols to be randomized. In particular, we prove that the base-2 logarithm of the nonnegative rank of any nonnegative matrix equals the minimum complexity of a randomized communication protocol computing the matrix in expectation. Using Yannakakis' factorization theorem, this implies that the base-2 logarithm of the smallest size of an extended formulation of a polytope P equals the minimum complexity of a randomized communication protocol computing the slack matrix of P in expectation. We show that allowing randomization in the protocol can be crucial for obtaining small extended formulations. Specifically, we prove that for the spanning tree and perfect matching polytopes, small variance in the protocol forces large size in the extended formulation.
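
A minimal illustration (not from the paper) of the slack matrix that Yannakakis' theorem is about: for a polytope given by facets a_i·x ≤ b_i and vertices v_j, the slack matrix has entries S[i, j] = b_i − a_i·v_j, and the smallest extended formulation has size equal to the nonnegative rank of S. The unit square is used here purely as a worked example.

```python
import numpy as np

# Facets of the unit square [0,1]^2, written as a_i . x <= b_i
A = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]])   # facet normals
b = np.array([0, 0, 1, 1])                          # facet offsets
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])      # vertices

# Slack matrix: S[i, j] = b_i - a_i . v_j, entrywise nonnegative by construction
S = b[:, None] - A @ V.T

# The ordinary rank lower-bounds the nonnegative rank; here rank(S) = 3,
# while the nonnegative rank is trivially at most 4 (S has 4 rows).
```

For the square the bounds already pin the answer down tightly; for polytopes like the perfect matching polytope it is exactly the gap between rank-type lower bounds and the true nonnegative rank that the communication-protocol view addresses.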

Relevance:

10.00%

Publisher:

Abstract:

We develop a framework for proving approximation limits of polynomial size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^(1/2-ε))-approximations for CLIQUE require LPs of size 2^(n^Ω(ε)). This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs. Our main technical ingredient is a quantitative improvement of Razborov's [38] rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we extend recent results of Fiorini et al. on the extension complexity of the cut polytope and related polyhedra. We first describe a lifting argument to show exponential extension complexity for a number of NP-complete problems including subset-sum and three-dimensional matching. We then obtain a relationship between the extension complexity of the cut polytope of a graph and that of its graph minors. Using this we are able to show exponential extension complexity for the cut polytope of a large number of graphs, including those used in quantum information and suspensions of cubic planar graphs.

Relevance:

10.00%

Publisher:

Abstract:

This paper provides an agent-based software exploration of the well-known free market efficiency/equality trade-off. Our study simulates the interaction of agents producing, trading and consuming goods in the presence of different market structures, and looks at how efficient the producers/consumers mapping turns out to be, as well as the resulting distribution of welfare among agents at the end of an arbitrarily large number of iterations. Two market mechanisms are compared: the competitive market (a double auction market in which agents outbid each other in order to buy and sell products) and the random one (in which products are allocated randomly). Our results confirm that the superior efficiency of the competitive market (an effective and never-stopping producers/consumers mapping and a superior aggregate welfare) comes at a very high price in terms of inequality (especially when severe budget constraints are in play).
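
A toy analogue (not the paper's simulation) of the competitive-versus-random comparison: buyers with valuations and sellers with costs each trade one unit, the competitive mechanism matches the highest remaining valuation with the lowest remaining cost, and the random mechanism pairs agents as drawn. All distributions and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
valuations = rng.uniform(0, 1, 100)   # buyers' willingness to pay
costs = rng.uniform(0, 1, 100)        # sellers' production costs

def surplus(vals, csts):
    """Total gains over pairs that actually trade (valuation above cost)."""
    gains = vals - csts
    return gains[gains > 0].sum()

# Competitive mechanism: assortative matching (highest value, lowest cost).
competitive = surplus(np.sort(valuations)[::-1], np.sort(costs))
# Random mechanism: keep the arrival order, i.e. an arbitrary pairing.
random_market = surplus(valuations, costs)
```

Sorting before matching can only increase the realized surplus (a convexity/exchange argument on the positive-part function), which is the efficiency side of the trade-off; the inequality side would additionally require tracking each agent's accumulated wealth across iterations.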