565 results for Conjecture de Yau
Abstract:
Helmke et al. have recently given a formula for the number of reachable pairs of matrices over a finite field. We give a new and elementary proof of the same formula by solving the equivalent problem of determining the number of so-called zero kernel pairs over a finite field. We show that the problem is equivalent to certain other enumeration problems and outline a connection with some recent results of Guo and Yang on the natural density of rectangular unimodular matrices over F_q[x]. We also propose a new conjecture on the density of unimodular matrix polynomials.
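As a sanity check on counts of this kind, reachability can be tested directly from the standard criterion: a pair (A, B) in F_q^(n x n) x F_q^(n x m) is reachable exactly when the controllability matrix [B, AB, ..., A^(n-1)B] has rank n. Below is a minimal brute-force counter, a sketch only: it assumes q is a prime p (so arithmetic mod p is field arithmetic), and the function names are ours, not the paper's.

```python
# Hedged sketch: brute-force count of reachable (controllable) matrix pairs
# over a small prime field F_p, usable as a cross-check on a closed-form count.
# Only feasible for tiny n, m, p: there are p**(n*n + n*m) pairs in total.
from itertools import product
import numpy as np

def rank_mod_p(M, p):
    """Rank of an integer matrix over F_p by Gaussian elimination (p prime)."""
    M = M.copy() % p
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]          # move pivot row up
        inv = pow(int(M[rank, c]), p - 2, p)         # Fermat inverse, needs p prime
        M[rank] = (M[rank] * inv) % p
        for r in range(rows):
            if r != rank:
                M[r] = (M[r] - M[r, c] * M[rank]) % p
        rank += 1
    return rank

def count_reachable_pairs(n, m, p):
    """Count pairs (A, B) whose controllability matrix [B AB ... A^(n-1)B] has rank n."""
    count = 0
    for a in product(range(p), repeat=n * n):
        A = np.array(a).reshape(n, n)
        for b in product(range(p), repeat=n * m):
            B = np.array(b).reshape(n, m)
            blocks = [B]
            for _ in range(n - 1):
                blocks.append(A @ blocks[-1] % p)
            if rank_mod_p(np.hstack(blocks), p) == n:
                count += 1
    return count

print(count_reachable_pairs(2, 1, 2))  # prints 24: a hand check gives 3 * 8 = 24
```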
Abstract:
Injection and combustion of vaporized kerosene were experimentally investigated in a Mach 2.5 model combustor at various fuel temperatures and injection pressures. A unique kerosene heating and delivery system, which can prepare heated kerosene up to 820 K at a pressure of 5.5 MPa with negligible fuel coking, was developed. A three-species surrogate was employed to simulate the thermophysical properties of kerosene, and the calculated properties of the surrogate provided insight into the fuel flow control in the experiments. Kerosene jet structures at various preheat temperatures were characterized for injection into both a quiescent environment and a Mach 2.5 crossflow. The results showed that vaporized kerosene injection holds the potential of enhancing fuel-air mixing and promoting overall burning. Supersonic combustion tests further confirmed this conjecture by comparing the combustor performance of supercritical kerosene with that of liquid kerosene and of effervescent atomization with hydrogen barbotage. Under similar flow conditions and overall kerosene equivalence ratios, the experimental results showed that the combustion efficiency of supercritical kerosene was approximately 10-15% higher than that of liquid kerosene, and comparable to that of effervescent atomization.
Abstract:
During the 18th annual SAIL meeting in 2008 at the Smithsonian Tropical Research Institute in Panama, Vielka Chang-Yau, librarian, mentioned the need to digitize and make available through the Aquatic Commons some of the early documents related to the U.S. biological survey of Panama from 1910 to 1912. With the assistance of SAIL, a regional marine librarians' group, a digital project was developed, and this select bibliography represents the sources used for the project. It will assist researchers and librarians in finding online open-access documents written during the construction of the Panama Canal, specifically between 1910 and 1912. As the project progressed, other items covering the region and its biological diversity were discovered and included. The project team expects that the coverage will continue to expand over time. (PDF contains 9 pages)
Abstract:
This thesis consists of three separate studies of roles that black holes might play in our universe.
In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
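For orientation (our summary, not the thesis's derivation), the low-redshift expansion of the luminosity distance shows how the method separates the parameters:

  d_L(z) = (c/H_0) [ z + (1/2)(1 - q_0) z^2 + O(z^3) ],   q_0 = Ω_m/2 - Ω_Λ.

A joint measurement of d_L and z thus fixes H_0 at leading order, while the mean density and cosmological constant enter only through the quadratic and higher terms; this is one way to see why their errors depend so strongly on how deep in redshift the detectable binaries lie.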
In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.
In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.
Abstract:
Let l be any odd prime, and ζ a primitive l-th root of unity. Let C_l be the l-Sylow subgroup of the ideal class group of Q(ζ). The Teichmüller character ω : Z_l → Z^*_l is given by ω(x) ≡ x (mod l), where ω(x) is an (l-1)-st root of unity and x ∈ Z_l. Under the action of this character, C_l decomposes as a direct sum of eigenspaces C^((i))_l, where C^((i))_l is the eigenspace corresponding to ω^i. Let the order of C^((3))_l be l^(h_3). The main result of this thesis is the following: for every n ≥ max(1, h_3), the equation x^(l^n) + y^(l^n) + z^(l^n) = 0 has no integral solutions (x, y, z) with l ∤ xyz. The same result is also proven with n ≥ max(1, h_5), under the assumption that C^((5))_l is a cyclic group of order l^(h_5). Applications of the methods used to prove the above results to the second case of Fermat's Last Theorem and to a Fermat-like equation in four variables are given.
The proof uses a series of ideas of H. S. Vandiver ([V1],[V2]) along with a theorem of M. Kurihara [Ku] and some consequences of the proof of Iwasawa's main conjecture for cyclotomic fields by B. Mazur and A. Wiles [MW]. In [V1] Vandiver claimed that the first case of Fermat's Last Theorem held for l if l does not divide the class number h^+ of the maximal real subfield of Q(e^(2πi/l)). The crucial gap in Vandiver's attempted proof, which has been known to experts, is explained, and complete proofs of all the results used from his papers are given.
Abstract:
The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be the incidence matrix of edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
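As a concrete, hedged illustration of the objects involved (a small cousin of the matrices treated in the thesis, not the general N_t(H)): the inclusion matrix of t-subsets versus k-subsets of an n-set can be built and its Smith normal form computed directly with sympy, assuming a reasonably recent sympy that handles rectangular matrices.

```python
# Hedged sketch: Smith normal form of a small inclusion matrix W, with rows
# indexed by t-subsets and columns by k-subsets of {0,...,n-1}, and
# W[S, T] = 1 iff S is contained in T. Illustrative only.
from itertools import combinations
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def inclusion_matrix(n, t, k):
    rows = list(combinations(range(n), t))
    cols = list(combinations(range(n), k))
    return Matrix([[1 if set(S) <= set(T) else 0 for T in cols] for S in rows])

W = inclusion_matrix(6, 2, 3)        # a 15 x 20 matrix
D = smith_normal_form(W, domain=ZZ)  # diagonal entries are the invariant factors
print([D[i, i] for i in range(min(D.shape))])
```

For matrices of this size the computation is immediate; results like those in the thesis replace such computations with closed-form diagonal entries.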
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
For a toric del Pezzo surface S, a new instance of mirror symmetry, termed relative mirror symmetry, is introduced and developed. On the A-model side, this relative mirror symmetry conjecture concerns the genus 0 relative Gromov-Witten invariants of maximal tangency of S. These correspond, on the B-model side, to relative periods of the mirror to S. Furthermore, for S not necessarily toric, two conjectures on BPS state counts are related. It is proven that the integrality of the BPS state counts of the total space of the canonical bundle on S implies the integrality of the relative BPS state counts of S. Finally, a prediction of homological mirror symmetry for the open complement is explored. The B-model prediction is calculated in all cases and matches the known A-model computation for the projective plane.
Abstract:
A classical question in combinatorics is the following: given a partial Latin square P, when can we complete P to a Latin square L? In this paper, we investigate the class of ε-dense partial Latin squares: partial Latin squares in which each symbol, row, and column contains no more than εn nonblank cells. Based on a conjecture of Nash-Williams, Daykin and Häggkvist conjectured that all 1/4-dense partial Latin squares are completable. In this paper, we discuss the proof methods and results used in previous attempts to resolve this conjecture, introduce a novel technique derived from a paper by Jacobson and Matthews on generating random Latin squares, and use this technique to study ε-dense partial Latin squares that contain no more than δn^2 filled cells in total.
In Chapter 2, we construct completions for all ε-dense partial Latin squares containing no more than δn^2 filled cells in total, given that ε < 1/12 and δ < (1 - 12ε)^2/10409. In particular, we show that all 9.8 · 10^-5-dense partial Latin squares are completable. In Chapter 4, we improve these results by roughly a factor of two using probabilistic techniques. These results improve prior work by Gustavsson, which required ε = δ ≤ 10^-7, as well as by Chetwynd and Häggkvist, which required ε = δ = 10^-5, with n even and greater than 10^7.
If we omit the probabilistic techniques noted above, we further show that such completions can always be found in polynomial time. This contrasts with a result of Colbourn, which states that completing an arbitrary partial Latin square is an NP-complete task. In Chapter 3, we strengthen Colbourn's result to the claim that completing an arbitrary (1/2 + ε)-dense partial Latin square is NP-complete, for any ε > 0.
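For contrast with both results, here is the naive completion procedure, a minimal backtracking sketch (our code, not the thesis's algorithm): it runs in exponential time in the worst case, consistent with NP-completeness, whereas the thesis's constructions for sufficiently sparse instances are polynomial.

```python
# Hedged sketch: complete a partial Latin square by backtracking.
# Exponential in the worst case; illustrative only.
def complete(square):
    """square: n x n list of lists, entries in 0..n-1 or None for blank.
    Returns a completed Latin square (in place) or None if none exists."""
    n = len(square)
    try:
        r, c = next((i, j) for i in range(n) for j in range(n)
                    if square[i][j] is None)
    except StopIteration:
        return square  # no blanks left: done
    used = {square[r][j] for j in range(n)} | {square[i][c] for i in range(n)}
    for s in range(n):
        if s not in used:
            square[r][c] = s
            if complete(square) is not None:
                return square
            square[r][c] = None  # undo and try the next symbol
    return None

P = [[0, None, None], [None, 1, None], [None, None, 2]]
print(complete(P))  # [[0, 2, 1], [2, 1, 0], [1, 0, 2]]
```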
Colbourn's result hinges heavily on a connection between triangulations of tripartite graphs and Latin squares. Motivated by this, we use our results on Latin squares to prove that any tripartite graph G = (V_1, V_2, V_3) such that (i) |V_1| = |V_2| = |V_3| = n, (ii) for every vertex v ∈ V_i, deg_+(v) = deg_-(v) ≥ (1 - ε)n, and (iii) |E(G)| > (1 - δ) · 3n^2, admits a triangulation if ε < 1/132 and δ < (1 - 132ε)^2/83272. In particular, this holds when ε = δ = 1.197 · 10^-5. This strengthens results of Gustavsson, which require ε = δ = 10^-7.
In an unrelated vein, Chapter 6 explores the class of quasirandom graphs, a notion first introduced by Chung, Graham and Wilson in 1989. Roughly speaking, a sequence of graphs is called "quasirandom" if it has a number of properties possessed by the random graph, all of which turn out to be equivalent. In this chapter, we study possible extensions of these results to random k-edge colorings, and create an analogue of Chung, Graham and Wilson's result for such colorings.
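One of the equivalent properties in the Chung-Graham-Wilson framework is spectral, and it is easy to observe empirically. The sketch below (our code, illustrative parameters) samples G(n, p) and checks that the top adjacency eigenvalue is near pn while all the others are O(√n), i.e., o(n).

```python
# Hedged sketch: spectral quasirandomness check on a sampled G(n, p).
# For a quasirandom sequence of density p: lambda_1 ~ pn, all others o(n).
import numpy as np

rng = np.random.default_rng(0)
n, p = 800, 0.5
upper = rng.random((n, n)) < p        # independent coin flips per pair
A = np.triu(upper, 1).astype(float)   # keep the strict upper triangle
A = A + A.T                           # symmetrize: simple undirected graph
eigs = np.linalg.eigvalsh(A)          # ascending order
print("lambda_1 =", eigs[-1], " vs pn =", p * n)
print("max |lambda_i|, i >= 2:", max(-eigs[0], eigs[-2]), " vs sqrt(n) =", np.sqrt(n))
```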
Abstract:
How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?
We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.
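To make the sampling problem concrete (a toy restatement in our own notation, not the thesis's construction): for a Boolean function f on n bits, Parseval's identity makes Pr[y] = fhat(y)^2 a probability distribution over {0,1}^n, where fhat is the Fourier (Hadamard) transform of (-1)^f. A quantum circuit samples it with one oracle call and O(n) Hadamard gates; the obvious classical sampler below does exponential work, which is the point.

```python
# Hedged sketch: brute-force classical sampler for the Fourier-sampling
# distribution Pr[y] = fhat(y)^2 of a Boolean function f on n bits.
# Parseval guarantees the weights sum to 1. Cost is ~4^n, vs poly(n) quantumly.
import numpy as np

def fourier_sample(f, n, rng):
    xs = np.arange(2 ** n)
    signs = np.array([(-1) ** f(int(x)) for x in xs], dtype=float)
    probs = np.empty(2 ** n)
    for y in xs:
        dot = np.array([bin(int(x) & int(y)).count("1") for x in xs]) % 2
        fhat = np.mean(signs * (-1.0) ** dot)   # Hadamard transform coefficient
        probs[y] = fhat ** 2
    return rng.choice(xs, p=probs)

f = lambda x: bin(x & 0b101).count("1") % 2     # a toy parity-type function
print(fourier_sample(f, 3, np.random.default_rng(1)))
```

For the parity-type f shown, the transform is a point mass, so the sampler deterministically prints 5; generic choices of f give genuinely spread distributions.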
Abstract:
The emphasis in reactor physics research has shifted toward investigations of fast reactors. The effects of high energy neutron processes have thus become fundamental to our understanding, and one of the most important of these processes is nuclear inelastic scattering. In this research we include inelastic scattering as a primary energy transfer mechanism, and study the resultant neutron energy spectrum in an infinite medium. We assume that the moderator material has a high mass number, so that in a laboratory coordinate system the energy loss of an inelastically scattered neutron may be taken as discrete. It is then consistent to treat elastic scattering with an age theory expansion. Mathematically these assumptions lead to balance equations of the differential-difference type.
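Schematically (our notation, for a single inelastic level with discrete lethargy loss Δ and absorption neglected), the balance equation has the form

  d/du [ξ Σ_s(u) φ(u)] = Σ_in φ(u - Δ) - Σ_in φ(u) + S(u),

where u is lethargy, φ the flux, ξ the mean logarithmic energy decrement of elastic scattering, Σ_s and Σ_in the macroscopic elastic and inelastic cross sections, and S the source: the derivative term is the age-theory treatment of elastic slowing down, and the difference terms are the discrete inelastic transfer.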
The steady state problem is explored first by way of Laplace transformation of the energy variable. We then develop another steady state technique, valid for multiple inelastic level excitations, which depends on the level structure satisfying a physically reasonable constraint. In all cases the solutions we generate are compared with results obtained by modeling inelastic scattering with a separable, evaporative kernel.
The time-dependent problem presents some new difficulties. By modeling the elastic scattering cross section in a particular way, we generate solutions to this more interesting problem. We conjecture that the method of characteristics may be useful in analyzing time-dependent problems with general cross sections. These ideas are briefly explored.
Abstract:
We will prove that, for a 2- or 3-component L-space link L, HFL^- is completely determined by the multi-variable Alexander polynomials of all the sublinks of L, as well as the pairwise linking numbers of all the components of L. We will also give some restrictions on the multi-variable Alexander polynomial of an L-space link. Finally, we use the methods in this paper to prove a conjecture of Yajing Liu classifying all 2-bridge L-space links.
Abstract:
This thesis studies Frobenius traces in Galois representations from two different directions. In the first problem we explore how often they vanish in Artin-type representations. We give an upper bound for the density of the set of vanishing Frobenius traces in terms of the multiplicities of the irreducible components of the adjoint representation. Towards that, we construct an infinite family of representations of finite groups with an irreducible adjoint action.
In the second problem we partially extend to Hilbert modular forms a result of Coleman and Edixhoven that the Hecke eigenvalues a_p of classical elliptic modular newforms f of weight 2 are never extremal, i.e., a_p is strictly less than 2√p. The generalization currently applies only to prime ideals p of degree one, though we expect it to hold for p of any odd degree. However, an even-degree prime can be extremal for f. We prove our result in each of the following instances: when one can move to a Shimura curve defined by a quaternion algebra, when f is a CM form, when the crystalline Frobenius is semisimple, and when the strong Tate conjecture holds for a product of two Hilbert modular surfaces (or quaternionic Shimura surfaces) over a finite field.
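In the notation of the abstract (our gloss): for a prime ideal p with residue field of size N(p), the Ramanujan-Petersson (Weil) bound in parallel weight 2 reads

  |a_p| ≤ 2√(N(p)),

and "extremal" means equality. For a degree-one prime, N(p) = p, so extremality would force a_p = ±2√p in the Hecke field; for an even-degree prime the bound 2√(N(p)) is a rational integer, which is why extremal even-degree primes can occur.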
Abstract:
In Part I, we construct a symmetric stress-energy-momentum pseudo-tensor for the gravitational fields of Brans-Dicke theory, and use this to establish rigorously conserved integral expressions for the energy-momentum P^i and angular momentum J^ik. Application of the two-dimensional surface integrals to the exact static spherical vacuum solution of Brans leads to an identification of our conserved mass with the active gravitational mass. Application to the distant fields of an arbitrary stationary source reveals that P^i and J^ik have the same physical interpretation as in general relativity. For gravitational waves whose wavelength is small on the scale of the background radius of curvature, averaging over several wavelengths in the Brill-Hartle-Isaacson manner produces a stress-energy-momentum tensor for gravitational radiation which may be used to calculate the changes in the P^i and J^ik of their source.
In Part II, we develop strong evidence in favor of a conjecture by Penrose: that, in the Brans-Dicke theory, relativistic gravitational collapse in three dimensions produces black holes identical to those of general relativity. After pointing out that any black hole solution of general relativity also satisfies Brans-Dicke theory, we establish the Schwarzschild and Kerr geometries as the only possible spherical and axially symmetric black hole exteriors, respectively. We also show that a Schwarzschild geometry is necessarily formed in the collapse of an uncharged sphere.
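The first observation can be made precise schematically (our summary, in units G = c = 1): the Brans-Dicke scalar obeys

  □φ = 8π T / (3 + 2ω),

so in vacuum (T = 0) one may take φ = const, and the scalar-field terms in the metric field equations then vanish, leaving R_μν = 0, the vacuum Einstein equations; hence Schwarzschild and Kerr are automatically Brans-Dicke solutions.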
Appendices discuss relationships among relativistic gravity theories and an example of a theory in which black holes do not exist.