11 results for Plancherel Theorem
at Brock University, Canada
Abstract:
This work consists of a theoretical part and an experimental one. The first part provides a simple treatment of the celebrated von Neumann minimax theorem as formulated by Nikaidô and Sion. It also discusses its relationships with fundamental theorems of convex analysis. The second part is about externality in sponsored search auctions. It shows that in these auctions, advertisers have externality effects on each other which influence their bidding behavior. It presents Hal R. Varian's model and shows how adding externality to this model affects its properties. In order to gain a better understanding of the interaction among advertisers in online auctions, it studies the structure of the Google advertisement network and shows that it is a small-world, scale-free network.
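For reference, Sion's minimax theorem in its standard textbook form (the usual statement in the literature, not necessarily the exact formulation treated in the thesis):

```latex
% Sion's minimax theorem (standard form).
% Let X be a compact convex subset of a linear topological space and Y a convex
% subset of a linear topological space. If f : X x Y -> R is upper semicontinuous
% and quasi-concave in x for every fixed y, and lower semicontinuous and
% quasi-convex in y for every fixed x, then
\[
  \max_{x \in X} \; \inf_{y \in Y} f(x,y) \;=\; \inf_{y \in Y} \; \max_{x \in X} f(x,y).
\]
```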
Abstract:
Dynamic logic is an extension of modal logic originally intended for reasoning about computer programs. The method of proving correctness of properties of a computer program using the well-known Hoare Logic can be implemented by utilizing the robustness of dynamic logic. For a very broad range of languages and applications in program verification, a theorem prover named KIV (Karlsruhe Interactive Verifier) has already been developed. However, its high degree of automation and its complexity make it difficult to use for educational purposes. My research work is directed towards the design and implementation of a similar interactive theorem prover with educational use as its main design criterion. As the key purpose of this system is to serve as an educational tool, it is a self-explanatory system that explains every step of creating a derivation, i.e., proving a theorem. This deductive system is implemented in the platform-independent programming language Java. In addition, a very popular combination of a lexical analyzer generator, JFlex, and the parser generator BYacc/J is used for parsing formulas and programs.
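The link between Hoare Logic and dynamic logic mentioned above is the standard embedding of partial-correctness triples into dynamic logic formulas:

```latex
% A partial-correctness Hoare triple {P} alpha {Q} corresponds to the dynamic
% logic formula below: if P holds before the program alpha runs, then Q holds
% after every terminating execution of alpha.
\[
  \{P\}\,\alpha\,\{Q\} \quad\Longleftrightarrow\quad P \rightarrow [\alpha]\,Q .
\]
```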
Abstract:
Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The valuable advantage that these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of the structures, and hence valuable information about the structural relationships, the geometrical, and the morphological aspects of the specimen. The achievable lateral and axial resolutions of confocal and two-photon microscopy, as for other optical imaging systems, are both defined by the diffraction theorem. Any aberration or imperfection present during imaging results in broadening of the calculated theoretical resolution, blurring, and geometrical distortions in the acquired images that interfere with the analysis of the structures, and lowers the fluorescence collected from the specimen. The aberrations may have different causes, and they can be classified by their sources, such as specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement. The goal of this thesis was approached in two different directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced to determine the proper setting of the lens's correction collar. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of our imaging tasks, different numerical apertures must be used. The deformed beam cross section of the single-photon excitation source was corrected, and the enhancement of the resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of the image enhancement process by deconvolution techniques. Although deconvolution algorithms are widely used to improve the quality of images, how well a deconvolution algorithm performs depends highly on the point spread function (PSF) of the imaging system supplied to the algorithm and on its level of accuracy. We investigated approaches to obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce the noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained using each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
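As an illustration of the kind of PSF-based deconvolution the abstract refers to, the following is a minimal sketch of the classical Richardson-Lucy algorithm (a widely used choice; the thesis itself may use different algorithms and measured in situ PSFs, and all names and values here are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25, eps=1e-12):
    """Iteratively refine an estimate of the object so that, when blurred by the
    PSF, it reproduces the observed image (classical Richardson-Lucy update)."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]          # flipped PSF for the correction step
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + eps)   # eps avoids division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# Synthetic usage: blur two point sources with a Gaussian PSF, then deconvolve.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
obj = np.zeros((64, 64)); obj[32, 32] = 1.0; obj[20, 40] = 0.5
blurred = fftconvolve(obj, psf, mode='same')
restored = richardson_lucy(blurred, psf)
```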
Abstract:
The frequency dependence of the electron-spin fluctuation spectrum, P(Q), is calculated in the finite bandwidth model. We find that for Pd, which has a nearly full d-band, the magnitude, the range, and the peak frequency of P(Q) are greatly reduced from those in the standard spin fluctuation theory. The electron self-energy due to spin fluctuations is calculated within the finite bandwidth model. Vertex corrections are examined, and we find that Migdal's theorem is valid for spin fluctuations in the nearly full band. The conductance of a normal metal-insulator-normal metal tunnel junction is examined when spin fluctuations are present in one electrode. We find that for the nearly full band, the momentum independent self-energy due to spin fluctuations enters the expression for the tunneling conductance with approximately the same weight as the self-energy due to phonons. The effect of spin fluctuations on the tunneling conductance is slight within the finite bandwidth model for Pd. The effect of spin fluctuations on the tunneling conductance of a metal with a less full d-band than Pd may be more pronounced. However, in this case the tunneling conductance is not simply proportional to the self-energy.
Abstract:
Roots and root finding are concepts familiar to most branches of mathematics. In graph theory, H is a square root of G, and G is the square of H, if two vertices x, y are adjacent in G if and only if they are at distance at most two in H. The graph square is a basic operation, with a number of results about its properties in the literature. We study the characterization and recognition problems of graph powers. There are algorithmic and computational approaches to answering the decision problem of whether a given graph is a certain power of some graph. There are polynomial-time algorithms to solve this problem for squares of graphs with girth at least six, while NP-completeness has been proven for squares of graphs with girth at most four. The girth-parameterized problem of root finding has been open in the case of squares of graphs with girth five. We settle the conjecture that recognition of squares of graphs with girth 5 is NP-complete. This result provides the complete dichotomy theorem for the square root finding problem.
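The definition of the graph square used above can be made concrete with a short sketch (plain adjacency-dict representation assumed; names are illustrative):

```python
from itertools import combinations

def graph_square(adj):
    """Square of a graph H given as {vertex: set_of_neighbours}: two distinct
    vertices are adjacent in the square iff their distance in H is at most two."""
    square = {v: set() for v in adj}
    for u, v in combinations(adj, 2):
        # distance 1 (an edge of H) or distance 2 (a common neighbour in H)
        if v in adj[u] or adj[u] & adj[v]:
            square[u].add(v)
            square[v].add(u)
    return square

# Example: the path a-b-c-d; its square gains the edges ac and bd.
H = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(graph_square(H))
```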
Abstract:
According to the List Colouring Conjecture, if G is a multigraph then χ'(G) = χ'_l(G). In this thesis, we discuss a relaxed version of this conjecture, that every simple graph G is edge-(∆ + 1)-choosable, since by Vizing's Theorem ∆(G) ≤ χ'(G) ≤ ∆(G) + 1. We prove that if G is a planar graph without 7-cycles with ∆(G) ≠ 5, 6, or without adjacent 4-cycles with ∆(G) ≠ 5, or with no 3-cycles adjacent to 5-cycles, then G is edge-(∆ + 1)-choosable.
Abstract:
Heyting categories, a variant of Dedekind categories, and Arrow categories provide a convenient framework for expressing and reasoning about fuzzy relations and about programs based on such relations. In this thesis we present an implementation of Heyting and Arrow categories suitable for reasoning and program execution using Coq, an interactive theorem prover based on Higher-Order Logic (HOL) with dependent types. This implementation can be used to specify and develop correct software based on L-fuzzy relations, such as fuzzy controllers. We give an overview of lattices, L-fuzzy relations, category theory, and dependent type theory before describing our implementation. In addition, we provide examples of program executions based on our framework.
Abstract:
Let f(x) be a complex rational function. In this work, we study conditions under which f(x) cannot be written as the composition of two rational functions which are not units under the operation of function composition. In this case, we say that f(x) is prime. We give sufficient conditions for complex rational functions to be prime in terms of their degrees and their critical values, and we derive some conditions for the case of complex polynomials. We also consider the divisibility of integral polynomials, and we present a generalization of a theorem of Nieto. We show that if f(x) and g(x) are integral polynomials such that the content of g divides the content of f and g(n) divides f(n) for an integer n whose absolute value is larger than a certain bound, then g(x) divides f(x) in Z[x]. In addition, given an integral polynomial f(x), we provide a method to determine whether f is irreducible over Z and, if not, to find one of its divisors in Z[x].
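A small sketch of the divisibility statement described above, checked with SymPy over Z[x] (the polynomials f and g are hypothetical examples, not taken from the thesis):

```python
from sympy import Poly, symbols

x = symbols('x')
f = Poly(2*x**3 + 4*x**2 + 2*x, x, domain='ZZ')   # f(x) = 2x(x+1)^2
g = Poly(x**2 + 2*x + 1, x, domain='ZZ')          # g(x) = (x+1)^2

# Content condition: the content of g must divide the content of f.
print(g.content(), 'divides', f.content())        # 1 divides 2

# Value condition at a single integer n of large absolute value: g(n) | f(n).
n = 10**6
print(f.eval(n) % g.eval(n) == 0)                 # True

# Conclusion of the theorem, verified directly: g divides f in Z[x].
q, r = f.div(g)
print(q, r)                                       # quotient 2*x, remainder 0
```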
Abstract:
Symmetry group methods are applied to obtain all explicit group-invariant radial solutions to a class of semilinear Schrödinger equations in dimensions n ≥ 1. Both focusing and defocusing cases of a power nonlinearity are considered, including the special case of the pseudo-conformal power p = 4/n relevant for critical dynamics. The methods involve, first, reduction of the Schrödinger equations to group-invariant semilinear complex 2nd-order ordinary differential equations (ODEs) with respect to an optimal set of one-dimensional point symmetry groups, and second, use of inherited symmetries, hidden symmetries, and conditional symmetries to solve each ODE by quadratures. Through Noether's theorem, all conservation laws arising from these point symmetry groups are listed. Some group-invariant solutions are found to exist for values of n other than just positive integers, and in such cases an alternative two-dimensional form of the Schrödinger equations involving an extra modulation term with a parameter m = 2 − n ≠ 0 is discussed.
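For concreteness, a semilinear Schrödinger equation with power nonlinearity of the type described, written for radial solutions u(t, r) in n dimensions (assumed notation and normalisation; the thesis may use a different convention):

```latex
\[
  i\,u_t + u_{rr} + \frac{n-1}{r}\,u_r + \sigma\,|u|^{p}\,u = 0 ,
  \qquad \sigma = \pm 1 ,
\]
% with \sigma = +1 the focusing case, \sigma = -1 the defocusing case, and
% p = 4/n the pseudo-conformal (critical) power mentioned in the abstract.
```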
Abstract:
For inviscid fluid flow in any n-dimensional Riemannian manifold, new conserved vorticity integrals generalizing helicity, enstrophy, and entropy circulation are derived for lower-dimensional surfaces that move along fluid streamlines. Conditions are determined for which the integrals yield constants of motion for the fluid. In the case when an inviscid fluid is isentropic, these new constants of motion generalize Kelvin’s circulation theorem from closed loops to closed surfaces of any dimension.
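The classical statement being generalized is Kelvin's circulation theorem: for inviscid, isentropic flow with velocity field u, the circulation around any closed material loop C(t) advected by the flow is constant in time:

```latex
\[
  \frac{d}{dt}\oint_{C(t)} \mathbf{u}\cdot d\boldsymbol{\ell} \;=\; 0 .
\]
```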
Abstract:
Consider an undirected graph G and a subgraph H of G. A q-backbone k-colouring of (G,H) is a mapping f: V(G) → {1, 2, ..., k} such that G is properly coloured and, for each edge of H, the colours of its endpoints differ by at least q. The minimum number k for which there is a q-backbone k-colouring of (G,H) is the backbone chromatic number, BBCq(G,H). It has been proved that the backbone chromatic number of (G,T) is at most 4 if G is a connected C4-free planar graph, or a non-bipartite C5-free planar graph, or a Cj-free planar graph, j ∈ {6,7,8}, without adjacent triangles. In this thesis we improve the results mentioned above and prove, using a discharging method, that the 2-backbone chromatic number of any connected planar graph without adjacent triangles is at most 4. In the second part of this thesis we further improve these results by proving that for any graph G with χ(G) ≥ 4, BBC(G,T) = χ(G). In fact, we prove the stronger result that a backbone tree T in G exists such that, for every uv ∈ T, |f(u)-f(v)| = 2 or |f(u)-f(v)| ≥ k-2, where k = χ(G). In the case that G is a planar graph, by the Four Colour Theorem χ(G) = 4, so BBC(G,T) = 4.
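The definition of a q-backbone k-colouring given above translates directly into a short check (illustrative representation; the edge lists, colour map, and function name are assumed for the example):

```python
def is_q_backbone_colouring(G_edges, H_edges, f, q):
    """f must properly colour G, and the endpoint colours of every backbone edge
    of H must differ by at least q (the definition from the abstract)."""
    proper = all(f[u] != f[v] for u, v in G_edges)
    backbone = all(abs(f[u] - f[v]) >= q for u, v in H_edges)
    return proper and backbone

# Example: G a 4-cycle a-b-c-d-a, backbone H the spanning path a-b-c-d, q = 2.
G = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]
H = [('a', 'b'), ('b', 'c'), ('c', 'd')]
f = {'a': 1, 'b': 3, 'c': 1, 'd': 3}
print(is_q_backbone_colouring(G, H, f, 2))   # True
```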