979 results for classical summation theorems


Relevance: 100.00%

Abstract:

MSC 2010: 33C20

Relevance: 80.00%

Abstract:

We present an envelope theorem for establishing first-order conditions in decision problems involving continuous and discrete choices. Our theorem accommodates general dynamic programming problems, even with unbounded marginal utilities. And, unlike classical envelope theorems that focus only on differentiating value functions, we accommodate other endogenous functions such as default probabilities and interest rates. Our main technical ingredient is how we establish the differentiability of a function at a point: we sandwich the function between two differentiable functions from above and below. Our theory is widely applicable. In unsecured credit models, neither interest rates nor continuation values are globally differentiable. Nevertheless, we establish an Euler equation involving marginal prices and values. In adjustment cost models, we show that first-order conditions apply universally, even if optimal policies are not (S,s). Finally, we incorporate indivisible choices into a classic dynamic insurance analysis.
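
The sandwich argument for differentiability can be illustrated on a toy function (an independent sketch, not the paper's decision problem): f(x) = x² sin(1/x), with f(0) = 0, is squeezed at 0 between -x² and x², which agree in value and derivative there, forcing f'(0) = 0.

```python
import numpy as np

# Toy illustration of the "sandwich" technique for differentiability at a
# point (not the paper's model): f(x) = x^2 sin(1/x), f(0) = 0, is squeezed
# between g(x) = -x^2 and h(x) = x^2, both differentiable at 0 with
# g'(0) = h'(0) = 0, so f'(0) exists and equals 0.
def f(x):
    return x**2 * np.sin(1.0 / x) if x != 0 else 0.0

# The difference quotients are bounded by |h| and so tend to 0.
hs = [1e-1, 1e-2, 1e-3, 1e-4]
quotients = [(f(h) - f(0)) / h for h in hs]
print(quotients)
assert all(abs(q) <= h for q, h in zip(quotients, hs))
```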

Relevance: 80.00%

Abstract:

A novel method is presented for obtaining rigorous upper bounds on the finite-amplitude growth of instabilities to parallel shear flows on the beta-plane. The method relies on the existence of finite-amplitude Liapunov (normed) stability theorems, due to Arnol'd, which are nonlinear generalizations of the classical stability theorems of Rayleigh and Fjørtoft. Briefly, the idea is to use the finite-amplitude stability theorems to constrain the evolution of unstable flows in terms of their proximity to a stable flow. Two classes of general bounds are derived, and various examples are considered. It is also shown that, for a certain kind of forced-dissipative problem with dissipation proportional to vorticity, the finite-amplitude stability theorems (which were originally derived for inviscid, unforced flow) remain valid (though they are no longer strictly Liapunov); the saturation bounds therefore continue to hold under these conditions.
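
The bounding idea can be written schematically (a paraphrase for orientation, not the paper's precise statement):

```latex
% Schematic of the saturation-bound argument. Let U_s be a flow that is
% nonlinearly stable in the sense of Arnol'd, with a norm \|\cdot\| in which
% \|u(t)-U_s\| \le C\,\|u(0)-U_s\| for all t. For a disturbance to an
% unstable flow U, the triangle inequality gives
\[
  \|u(t)-U\| \;\le\; \|u(t)-U_s\| + \|U_s-U\|
            \;\le\; C\,\|u(0)-U_s\| + \|U_s-U\|,
\]
% so the finite-amplitude growth of the instability is bounded by the
% proximity of U to the stable flow U_s, optimized over admissible U_s.
```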

Relevance: 80.00%

Abstract:

We study the asset pricing implications of an endowment economy when agents can default on contracts that would leave them otherwise worse off. We specialize and extend the environment studied by Kocherlakota (1995) and Kehoe and Levine (1993) to make it comparable to standard studies of asset pricing. We completely characterize efficient allocations for several special cases. We introduce a competitive equilibrium with complete markets and with endogenous solvency constraints. These solvency constraints are such as to prevent default, at the cost of reduced risk sharing. We show a version of the classical welfare theorems for this equilibrium definition. We characterize the pricing kernel and compare it with the one for economies without participation constraints: interest rates are lower and risk premia can be bigger depending on the covariance of the idiosyncratic and aggregate shocks. Quantitative examples show that for reasonable parameter values the relevant marginal rates of substitution fall within the Hansen-Jagannathan bounds.
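
The Hansen-Jagannathan bound referred to above can be sketched numerically (a toy calibration with an illustrative CRRA discount factor, not the paper's quantitative examples): any stochastic discount factor m that prices an excess return must satisfy sigma(m)/E[m] >= Sharpe ratio.

```python
import numpy as np

# Hansen-Jagannathan bound check with toy numbers (not the paper's economy).
# Any SDF m pricing an excess return R^e (E[m R^e] = 0) must satisfy
# sigma(m)/E[m] >= |E[R^e]| / sigma(R^e), i.e. its Sharpe ratio.
rng = np.random.default_rng(0)

# Illustrative lognormal consumption growth and CRRA SDF m = beta * g^-gamma.
beta, gamma = 0.99, 2.0
g = np.exp(rng.normal(0.02, 0.02, 100_000))   # gross consumption growth
m = beta * g**(-gamma)

sdf_ratio = m.std() / m.mean()                # volatility bound, about 0.04

# A hypothetical excess return with Sharpe ratio 0.03 respects the bound;
# an equity-like Sharpe ratio of 0.5 would violate it at these parameters.
assert 0.03 <= sdf_ratio
print(f"sigma(m)/E[m] = {sdf_ratio:.4f}")
```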

Relevance: 30.00%

Abstract:

We study a general stochastic rumour model in which an ignorant individual has a certain probability of becoming a stifler immediately upon hearing the rumour. We refer to this special kind of stifler as an uninterested individual. Our model also includes distinct rates for meetings between two spreaders in which both become stiflers or only one does, so that particular cases are the classical Daley-Kendall and Maki-Thompson models. We prove a Law of Large Numbers and a Central Limit Theorem for the proportions of those who ultimately remain ignorant and those who have heard the rumour but become uninterested in it.
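
The Law of Large Numbers can be seen in simulation for the classical Maki-Thompson special case (an illustrative sketch, not the paper's proof): the final ignorant proportion converges to the root of x = exp(-2(1-x)), approximately 0.2032.

```python
import random

# Monte Carlo sketch of the classical Maki-Thompson rumour model (the special
# case with no "uninterested" individuals). A spreader contacts a uniformly
# chosen other individual; an ignorant contact becomes a spreader, otherwise
# the initiating spreader becomes a stifler. The LLN limit of the final
# ignorant fraction is the root of x = exp(-2(1-x)), about 0.2032.
def maki_thompson(n, seed=1):
    random.seed(seed)
    ignorant, spreaders = n - 1, 1
    while spreaders > 0:
        # the contact is ignorant with probability ignorant / (n - 1)
        if random.random() < ignorant / (n - 1):
            ignorant -= 1
            spreaders += 1
        else:
            spreaders -= 1          # initiating spreader becomes a stifler
    return ignorant / n

frac = maki_thompson(100_000)
print(frac)                          # close to 0.2032 for large n
assert abs(frac - 0.2032) < 0.01
```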

Relevance: 30.00%

Abstract:

In 1994, Astala published the celebrated area distortion theorem for quasiconformal mappings, a groundbreaking result that has spawned many further results in this branch of analysis over the last decade. We focus on its consequences for the distortion of Hausdorff measure. We follow the proof by Lacey, Sawyer and Uriarte-Tuero of the distortion of Hausdorff content, clarifying some of its points and changing the approach to the boundedness of the Beurling transform, where we draw on ideas of Astala, Clop, Tolsa, Uriarte-Tuero and Verdera.
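
For orientation, Astala's theorem and its dimension-distortion companion are commonly stated as follows (standard formulations; the thesis's precise normalizations may differ):

```latex
% Astala's area distortion theorem (1994), standard statement. If
% \varphi : \mathbb{D} \to \mathbb{D} is K-quasiconformal with
% \varphi(0) = 0, then for every Borel set E \subset \mathbb{D},
\[
  |\varphi(E)| \;\le\; C(K)\, |E|^{1/K},
\]
% and the resulting two-sided bound on Hausdorff dimension reads
\[
  \frac{1}{K}\left(\frac{1}{\dim E}-\frac{1}{2}\right)
  \;\le\; \frac{1}{\dim \varphi(E)}-\frac{1}{2}
  \;\le\; K\left(\frac{1}{\dim E}-\frac{1}{2}\right).
\]
```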

Relevance: 30.00%

Abstract:

In this thesis we focus on a cryptographic primitive known as secret sharing. We explore both the classical and the quantum setting of these primitives, crowning our study with the presentation of a new quantum secret sharing protocol that requires a minimal number of quantum shares, i.e. a single quantum share per participant. We open with a preliminary chapter surveying the mathematical notions underlying quantum information theory, whose primary purpose is to fix the notation used in this manuscript, together with a summary of the mathematical properties of the Greenberger-Horne-Zeilinger (GHZ) state, frequently used in quantum cryptography and communication games. As mentioned above, however, cryptography remains the focal point of this study. In the second chapter we turn to the theory of classical and quantum error-correcting codes, which in turn are of central importance when the theory of quantum secret sharing is introduced in the following chapter. In the first part of the third chapter we concentrate on classical secret sharing, presenting a general theoretical framework for the construction of these primitives and illustrating the concepts throughout with examples chosen for their historical as well as pedagogical interest. This paves the way for our exposition of the theory of quantum secret sharing, the focus of the second part of the same chapter. We then present the most general theorems and definitions known to date concerning the construction of these primitives, with particular attention to threshold quantum secret sharing.
We show the close link between the theory of quantum error-correcting codes and that of secret sharing, a link so close that quantum error-correcting codes can be regarded as closer analogues of quantum secret sharing schemes than classical secret sharing schemes are. Finally, we present one of our three results published in A. Broadbent, P.-R. Chouha, A. Tapp (2009): a secure and minimal threshold quantum secret sharing protocol (the other two results, not treated here, concern communication complexity and the classical simulation of the GHZ state).
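
The classical threshold schemes surveyed in the third chapter can be illustrated by Shamir's (k, n) scheme over a prime field (an illustrative sketch of a standard construction, not the thesis's quantum protocol):

```python
import random

# Shamir (k, n) threshold secret sharing over GF(P): any k shares determine
# the degree-(k-1) polynomial and hence the secret; fewer reveal nothing.
# This is a standard classical example, not the quantum protocol above.
P = 2**13 - 1            # small Mersenne prime

def make_shares(secret, k, n, seed=7):
    random.seed(seed)
    # random degree-(k-1) polynomial with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def eval_at(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(1234, k=3, n=5)
assert reconstruct(shares[:3]) == 1234      # any 3 shares suffice
assert reconstruct(shares[1:4]) == 1234
```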

Relevance: 30.00%

Abstract:

This study examines the effect of changing the ordering of the Fourier system on Szegő's classical observations about the asymptotic distribution of eigenvalues of finite Toeplitz forms. This is done by checking the proofs and Szegő's properties in the new setting. Since the Fourier system is unconditional [19], any ordering of it forms a basis for the Hilbert space L2[−π, π]. We first study the classical Szegő theorem, then formulate a Szegő-type theorem for operators in L2(R+) and check its validity for certain multiplication operators. Since the trigonometric basis is not available in L2(R+) or in L2(R), we consider classes of orderings of the Haar system in L2(R+) and in L2(R) for which a Szegő-type theorem holds for certain multiplication operators. The study is divided into two sections. In the first section, an ordering of the Haar system in L2(R+) is given, and it is proved that with respect to this ordering a Szegő-type theorem holds for a general class of multiplication operators Tƒ with multiplier ƒ ∈ L2(R+), subject to some conditions on ƒ. In the second section, more general classes of orderings of the Haar system in L2(R+) and in L2(R) are identified for which the asymptotic distribution of eigenvalues exists for certain classes of multiplication operators.
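
Szegő's classical theorem can be seen numerically (an independent sketch in the trigonometric setting, not the study's Haar-system setting): for a symbol f, the eigenvalues of the finite Toeplitz forms distribute like the values of f.

```python
import numpy as np

# Numerical illustration of Szegő's classical theorem: for the symbol
# f(t) = 2 + 2*cos(t), the n x n Toeplitz matrix T_n(f) is tridiagonal with
# 2 on the diagonal and 1 off it, and (1/n) * sum F(lambda_k) tends to the
# average value (1/2pi) * integral of F(f(t)) dt.
n = 1000
T = 2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
eig = np.linalg.eigvalsh(T)

# First two moments of the symbol: (1/2pi) int f dt = 2, (1/2pi) int f^2 dt = 6.
assert abs(eig.mean() - 2.0) < 1e-6
assert abs((eig**2).mean() - 6.0) < 0.01
print(eig.mean(), (eig**2).mean())
```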

Relevance: 30.00%

Abstract:

In this work, we have mainly achieved the following: 1. we review the main methods used for the computation of the connection and linearization coefficients between orthogonal polynomials of a continuous variable and, using a new approach, solve the duplication problem for these polynomial families; 2. we review the main methods used for the computation of the connection and linearization coefficients of orthogonal polynomials of a discrete variable, and solve the duplication and linearization problems for all orthogonal polynomials of a discrete variable; 3. we propose a method to generate the connection, linearization and duplication coefficients for q-orthogonal polynomials; 4. we propose a unified method to obtain these coefficients in a generic way for orthogonal polynomials on quadratic and q-quadratic lattices. Our algorithmic approach to computing linearization, connection and duplication coefficients is based on the one used by Koepf and Schmersau and on the NaViMa algorithm. Our main technique is to use explicit formulas for structural identities of classical orthogonal polynomial systems, and we obtain our results by applying computer algebra. The major algorithmic tools for our development are Zeilberger's algorithm, the q-Zeilberger algorithm, the Petkovšek-van-Hoeij algorithm, the q-Petkovšek-van-Hoeij algorithm, and Algorithm 2.2 (p. 20) of Koepf's book "Hypergeometric Summation", together with its q-analogue.
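
A tiny instance shows what a linearization coefficient is (a numeric check of a classical identity, not the symbolic algorithms used in the work): the product of two Chebyshev polynomials expanded back in the Chebyshev basis.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Linearization example for a classical family: expand T_2(x) * T_3(x) in
# the Chebyshev basis. The identity T_m T_n = (T_{m+n} + T_{|m-n|}) / 2
# gives T_2 T_3 = (T_5 + T_1) / 2, i.e. linearization coefficients 1/2, 1/2.
t2 = [0, 0, 1]        # coefficients of T_2 in the Chebyshev basis
t3 = [0, 0, 0, 1]     # coefficients of T_3
product = C.chebmul(t2, t3)
print(product)
assert np.allclose(product, [0, 0.5, 0, 0, 0, 0.5])
```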

Relevance: 30.00%

Abstract:

The physical content of General Relativity is expressed by the Equivalence Principle, which establishes the equivalence of geometry and gravitation. The theory predicts the existence of black holes, the simplest macroscopic objects found in nature: they are described by just a few parameters, whose variations obey laws analogous to those of thermodynamics. Black hole thermodynamics is put on solid ground by quantum mechanics, through the phenomenon known as Hawking radiation. These results shed light on a possible quantum theory of gravitation, but to date such a theory remains far off. In this thesis we set out to study black holes in both their classical and quantum aspects. The first two chapters are devoted to the main results obtained in the theory: in particular, we dwell on the singularity theorems, the laws of black hole mechanics, and Hawking radiation. The third chapter, which extends the discussion of singularities, presents the theory of non-singular black holes, viewed as an effective model for the removal of singularities. Finally, the fourth chapter explores further consequences of quantum mechanics for black hole dynamics, through the notion of entanglement entropy.
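
The Hawking temperature that underpins black hole thermodynamics can be evaluated directly from the standard formula (a quick numeric check for one solar mass):

```python
import math

# Hawking temperature T_H = hbar * c^3 / (8 * pi * G * M * k_B),
# evaluated for a black hole of one solar mass.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 / (kg s^2)
k_B = 1.380649e-23       # J / K
M_sun = 1.98892e30       # kg

T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(f"{T_H:.2e} K")    # about 6e-8 K: far colder than the CMB
assert 5e-8 < T_H < 7e-8
```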

Relevance: 30.00%

Abstract:

Over the last ten years our understanding of early spatial vision has improved enormously. The long-standing model of probability summation amongst multiple independent mechanisms with static output nonlinearities responsible for masking is obsolete. It has been replaced by a much more complex network of additive, suppressive, and facilitatory interactions and nonlinearities across eyes, area, spatial frequency, and orientation that extend well beyond the classical receptive field (CRF). A review of a substantial body of psychophysical work performed by ourselves (20 papers), and others, leads us to the following tentative account of the processing path for signal contrast. The first suppression stage is monocular, isotropic, non-adaptable, accelerates with RMS contrast, most potent for low spatial and high temporal frequencies, and extends slightly beyond the CRF. Second and third stages of suppression are difficult to disentangle but are possibly pre- and post-binocular summation, and involve components that are scale invariant, isotropic, anisotropic, chromatic, achromatic, adaptable, interocular, substantially larger than the CRF, and saturated by contrast. The monocular excitatory pathways begin with half-wave rectification, followed by a preliminary stage of half-binocular summation, a square-law transducer, full binocular summation, pooling over phase, cross-mechanism facilitatory interactions, additive noise, linear summation over area, and a slightly uncertain decision-maker. The purpose of each of these interactions is far from clear, but the system benefits from area and binocular summation of weak contrast signals as well as area and ocularity invariances above threshold (a herd of zebras doesn't change its contrast when it increases in number or when you close one eye). One of many remaining challenges is to determine the stage or stages of spatial tuning in the excitatory pathway.

Relevance: 30.00%

Abstract:

Contrast sensitivity is better with two eyes than one. The standard view is that thresholds are about 1.4 (√2) times better with two eyes, and that this arises from monocular responses that, near threshold, are proportional to the square of contrast, followed by binocular summation of the two monocular signals. However, estimates of the threshold ratio in the literature vary from about 1.2 to 1.9, and many early studies had methodological weaknesses. We collected extensive new data, and applied a general model of binocular summation to interpret the threshold ratio. We used horizontal gratings (0.25-4 cycles deg⁻¹) flickering sinusoidally (1-16 Hz), presented to one or both eyes through frame-alternating ferroelectric goggles with negligible cross-talk, and used a 2AFC staircase method to estimate contrast thresholds and psychometric slopes. Four naive observers completed 20 000 trials each, and their mean threshold ratios were 1.63, 1.69, 1.71, 1.81 - grand mean 1.71 - well above the classical √2. Mean ratios tended to be slightly lower (~1.60) at low spatial or high temporal frequencies. We modelled contrast detection very simply by assuming a single binocular mechanism whose response is proportional to (L^m + R^m)^p, followed by fixed additive noise, where L, R are contrasts in the left and right eyes, and m, p are constants. Contrast-gain-control effects were assumed to be negligible near threshold. On this model the threshold ratio is 2^(1/m), implying that m = 1.3 on average, while the Weibull psychometric slope (median 3.28) equals 1.247mp, yielding p = 2.0. Together, the model and data suggest that, at low contrasts across a wide spatiotemporal frequency range, monocular pathways are nearly linear in their contrast response (m close to 1), while a strongly accelerating nonlinearity (p = 2, a 'soft threshold') occurs after binocular summation. [Supported by EPSRC project grant GR/S74515/01]
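
The model arithmetic reported above can be checked in a few lines: the threshold ratio 2^(1/m) and the Weibull-slope relation 1.247mp recover the quoted parameter values.

```python
import math

# Back-of-envelope check of the model arithmetic: with binocular response
# (L^m + R^m)^p and fixed additive noise, the monocular/binocular threshold
# ratio is 2^(1/m), and the Weibull psychometric slope equals 1.247*m*p.
ratio = 1.71                       # grand-mean threshold ratio
slope = 3.28                       # median psychometric slope

m = math.log(2) / math.log(ratio)  # from ratio = 2^(1/m)
p = slope / (1.247 * m)

print(m, p)
assert abs(m - 1.3) < 0.05         # nearly linear monocular transduction
assert abs(p - 2.0) < 0.05         # accelerating nonlinearity after summation
```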

Relevance: 30.00%

Abstract:

Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.
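
The dB figures above can be anchored by the predictions of standard pooling rules for doubling the amount of signal (illustrative arithmetic only, not the paper's fitted model): linear summation of contrast predicts about 6 dB, summation of contrast energy about 3 dB, and probability summation with a typical Minkowski exponent of 4 only about 1.5 dB.

```python
import math

# Sensitivity gains predicted by standard pooling rules when the signal
# doubles, in dB (20 * log10 of the threshold-contrast ratio). Illustrative
# arithmetic, not the paper's model fit.
def db(factor):
    return 20 * math.log10(factor)

gains = {
    "linear summation of contrast":      db(2 ** (1 / 1)),  # ~6.0 dB
    "linear summation of energy (c^2)":  db(2 ** (1 / 2)),  # ~3.0 dB
    "probability summation (beta = 4)":  db(2 ** (1 / 4)),  # ~1.5 dB
}
for rule, g in gains.items():
    print(f"{rule}: {g:.1f} dB")

assert round(gains["linear summation of contrast"], 1) == 6.0
assert round(gains["linear summation of energy (c^2)"], 1) == 3.0
assert round(gains["probability summation (beta = 4)"], 1) == 1.5
```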

Relevance: 30.00%

Abstract:

Vision must analyze the retinal image over both small and large areas to represent fine-scale spatial details and extensive textures. The long-range neuronal convergence that this implies might lead us to expect that contrast sensitivity should improve markedly with the contrast area of the image. But this is at odds with the orthodox view that contrast sensitivity is determined merely by probability summation over local independent detectors. To address this puzzle, I aimed to assess the summation of luminance contrast without the confounding influence of area-dependent internal noise. I measured contrast detection thresholds for novel Battenberg stimuli that had identical overall dimensions (to clamp the aggregation of noise) but were constructed from either dense or sparse arrays of micro-patterns. The results unveiled a three-stage visual hierarchy of contrast summation involving (i) spatial filtering, (ii) long-range summation of coherent textures, and (iii) pooling across orthogonal textures. Linear summation over local energy detectors was spatially extensive (as much as 16 cycles) at Stage 2, but the resulting model is also consistent with earlier classical results of contrast summation (J. G. Robson & N. Graham, 1981), where co-aggregation of internal noise has obscured these long-range interactions.

Relevance: 30.00%

Abstract:

Here we prove results about Riesz summability of classical Laguerre series, locally uniformly or on the Lebesgue set of the function f such that (∫(1 + x)^(mp) |f(x)|^p dx )^(1/p) < ∞, for some p and m satisfying 1 ≤ p ≤ ∞, −∞ < m < ∞.
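
For orientation, one common form of the Riesz means referred to above is the following (the paper's precise normalization may differ):

```latex
% Riesz means for the Laguerre expansion f \sim \sum_k a_k \mathcal{L}_k:
% the Riesz mean of order \delta > 0 is
\[
  \sigma_R^{\delta} f(x)
    \;=\; \sum_{k < R} \Bigl(1 - \frac{k}{R}\Bigr)^{\delta} a_k\,\mathcal{L}_k(x),
\]
% and Riesz summability means \sigma_R^{\delta} f \to f as R \to \infty,
% locally uniformly or at the Lebesgue points of f.
```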