905 results for Search space reduction
Abstract:
In this thesis we study Galois representations corresponding to abelian varieties with certain reduction conditions. We show that these conditions force the image of the representations to be "big," so that the Mumford-Tate conjecture (MT) holds. We also prove that the set of abelian varieties satisfying these conditions is dense in a corresponding moduli space.
The main results of the thesis are the following two theorems.
Theorem A: Let A be an absolutely simple abelian variety with End°(A) = k an imaginary quadratic field, and let g = dim(A). Assume either dim(A) ≤ 4, or that A has bad reduction at some prime ϕ with the dimension of the toric part of the reduction equal to 2r, where gcd(r, g) = 1 and (r, g) ≠ (15, 56) or (m − 1, m(m+1)/2). Then MT holds.
Theorem B: Let M be the moduli space of abelian varieties with fixed polarization, level structure and a k-action. It is defined over a number field F. The subset of M(Q) corresponding to absolutely simple abelian varieties with a prescribed stable reduction at a large enough prime ϕ of F is dense in M(C) in the complex topology. In particular, the set of simple abelian varieties having bad reductions with fixed dimension of the toric parts is dense.
Besides this, we also establish the following results:
(1) MT holds for some other classes of abelian varieties with similar reduction conditions. For example, if A is an abelian variety with End°(A) = Q and the dimension of the toric part of its reduction is prime to dim(A), then MT holds.
(2) MT holds for Ribet-type abelian varieties.
(3) The Hodge and the Tate conjectures are equivalent for abelian 4-folds.
(4) MT holds for abelian 4-folds of type II, III, IV (Theorem 5.0(2)) and some 4-folds of type I.
(5) For some abelian varieties either MT or the Hodge conjecture holds.
Abstract:
A standard question in the study of geometric quantization is whether symplectic reduction interacts nicely with the quantized theory, and in particular whether “quantization commutes with reduction.” Guillemin and Sternberg first proposed this question, and answered it in the affirmative for the case of a free action of a compact Lie group on a compact Kähler manifold. Subsequent work has focused mainly on extending their proof to non-free actions and non-Kähler manifolds. For realistic physical examples, however, it is desirable to have a proof which also applies to non-compact symplectic manifolds.
In this thesis we give a proof of the quantization-reduction problem for general symplectic manifolds. This is accomplished by working in a particular wavefunction representation, associated with a polarization that is in some sense compatible with reduction. While the polarized sections described by Guillemin and Sternberg are nonzero on a dense subset of the Kähler manifold, the ones considered here are distributional, having support only on regions of the phase space associated with certain quantized, or “admissible”, values of momentum.
We first propose a reduction procedure for the prequantum geometric structures that “covers” symplectic reduction, and demonstrate how both symplectic and prequantum reduction can be viewed as examples of foliation reduction. Consistency of prequantum reduction imposes the above-mentioned admissibility conditions on the quantized momenta, which can be seen as analogues of the Bohr-Wilson-Sommerfeld conditions for completely integrable systems.
We then describe our reduction-compatible polarization, and demonstrate a one-to-one correspondence between polarized sections on the unreduced and reduced spaces.
Finally, we describe a factorization of the reduced prequantum bundle, suggested by the structure of the underlying reduced symplectic manifold. This in turn induces a factorization of the space of polarized sections that agrees with its usual decomposition by irreducible representations, and so proves that quantization and reduction do indeed commute in this context.
A significant omission from the proof is the construction of an inner product on the space of polarized sections, and a discussion of its behavior under reduction. In the concluding chapter of the thesis, we suggest some ideas for future work in this direction.
Abstract:
Motivated by recent Mars Science Laboratory (MSL) results in which the ablation rate of the Phenolic Impregnated Carbon Ablator (PICA) heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Due to the difficulties of performing flight tests in the hypervelocity regime, a new ground testing facility, the vertical expansion tunnel (VET), is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure to the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be used for hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of each constraint is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were identified: 1) a reduction in the number of equations that must be solved to model a reacting flow; 2) a reduction in the stiffness of the system of equations; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, and to reuse the same table across multiple simulations.
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures inside a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases it produces is established. This link, combined with a detailed investigation of a reacting pyrolysis gas mixture, permits a much-needed consistent and thorough description of many of the physical phenomena occurring in a PICA heatshield, and of their implications.
Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.
Abstract:
Planetary atmospheres exist in a seemingly endless variety of physical and chemical environments. There are an equally diverse number of methods by which we can study and characterize atmospheric composition. In order to better understand the fundamental chemistry and physical processes underlying all planetary atmospheres, my research of the past four years has focused on two distinct topics. First, I focused on the data analysis and spectral retrieval of observations obtained by the Ultraviolet Imaging Spectrograph (UVIS) instrument onboard the Cassini spacecraft while in orbit around Saturn. These observations consisted of stellar occultation measurements of Titan's upper atmosphere, probing the chemical composition in the region 300 to 1500 km above Titan's surface. I examined the relative abundances of Titan's two most prevalent chemical species, nitrogen and methane. I also focused on the aerosols that are formed through chemistry involving these two major species, and determined the vertical profiles of aerosol particles as a function of time and latitude. Moving beyond our own solar system, my second topic of investigation involved analysis of infra-red light curves from the Spitzer space telescope, obtained as it measured the light from stars hosting planets of their own. I focused on both transit and eclipse modeling during Spitzer data reduction and analysis. In my initial work, I utilized the data to search for transits of planets a few Earth masses in size. In more recent research, I analyzed secondary eclipses of three exoplanets and constrained the range of possible temperatures and compositions of their atmospheres.
Abstract:
The prime thrust of this dissertation is to advance the development of fuel cell dioxygen reduction cathodes that employ some variant of the multicopper oxidase enzymes as the catalyst. The low earth-abundance of platinum and its correspondingly high market cost have prompted a general search amongst chemists and materials scientists for reasonable alternatives to this metal for catalytic dioxygen reduction chemistry. The multicopper oxidases (MCOs), a class of enzyme that naturally catalyzes the reaction O2 + 4H+ + 4e− → 2H2O, provide a promising set of biochemical contenders for fuel cell cathode catalysts. In MCOs, a substrate reduces a copper atom at the type 1 site, from which charge is transferred to a trinuclear copper cluster consisting of a mononuclear type 2, or "normal copper," site and a binuclear type 3 copper site. Once all four copper atoms in the enzyme have been reduced, dioxygen binds to the trinuclear copper cluster and is reduced to water in two two-electron steps. We identified an MCO, a laccase from the hyperthermophilic bacterium Thermus thermophilus strain HB27, as a promising candidate for cathodic fuel cell catalysis. This protein is resilient at high temperatures, exhibiting no denaturing transition at temperatures as high as 95°C, conditions relevant to typical polymer electrolyte fuel cell operation.
In Chapter I of this thesis, we discuss initial efforts to physically characterize the enzyme when operating as a heterogeneous cathode catalyst. Following this, in Chapter II we then outline the development of a model capable of describing the observed electrochemical behavior of this enzyme when operating on porous carbon electrodes. Developing a rigorous mathematical framework with which to describe this system had the potential to improve our understanding of MCO electrokinetics, while also providing a level of predictive power that might guide any future efforts to fabricate MCO cathodes with optimized electrochemical performance. In Chapter III we detail efforts to reduce electrode overpotentials through site-directed mutagenesis of the inner and outer-sphere ligands of the Cu sites in laccase, using electrochemical methods and electronic spectroscopy to try and understand the resultant behavior of our mutant constructs. Finally, in Chapter IV, we examine future work concerning the fabrication of enhanced MCO cathodes, exploring the possibility of new cathode materials and advanced enzyme deposition techniques.
Abstract:
While synoptic surveys in the optical and at high energies have revealed a rich discovery phase space of slow transients, a similar yield is still awaited in the radio. The majority of past blind surveys, carried out with radio interferometers, have suffered from a low yield of slow transients, ambiguous transient classifications, and contamination by false positives. The newly refurbished Karl G. Jansky Very Large Array (Jansky VLA) offers wider bandwidths for accurate RFI excision as well as substantially improved sensitivity and survey speed compared with the old VLA. The Jansky VLA thus eliminates the pitfalls of interferometric transient searching by facilitating sensitive, wide-field, and near-real-time radio surveys, enabling a systematic exploration of the dynamic radio sky. This thesis carries out blind Jansky VLA surveys to characterize radio variable and transient sources at frequencies of a few GHz and on timescales between days and years. Through joint radio and optical surveys, the thesis addresses outstanding questions pertaining to the rates of slow radio transients (e.g. radio supernovae, tidal disruption events, binary neutron star mergers, stellar flares), the false-positive foreground relevant to radio and optical counterpart searches for gravitational wave sources, and the beaming factor of gamma-ray bursts. The need for rapid processing of Jansky VLA data and near-real-time transient searching motivated the development of state-of-the-art software infrastructure. This thesis demonstrates that the Jansky VLA is a powerful transient search instrument and serves as a pathfinder for the transient surveys planned for the SKA-mid pathfinder facilities, viz. ASKAP, MeerKAT, and WSRT/Apertif.
Abstract:
In a paper published in 1961, L. Cesari [1] introduces a method which extends certain earlier existence theorems of Cesari and Hale ([2] to [6]) for perturbation problems to strictly nonlinear problems. Various authors ([1], [7] to [15]) have now applied this method to nonlinear ordinary and partial differential equations. The basic idea of the method is to use the contraction principle to reduce an infinite-dimensional fixed point problem to a finite-dimensional problem which may be attacked using the methods of fixed point indexes.
The following is my formulation of the Cesari fixed point method:
Let B be a Banach space and let S be a finite-dimensional linear subspace of B. Let P be a projection of B onto S, and suppose Г ⊆ B is such that PГ is compact and, for every x in PГ, P⁻¹x ∩ Г is closed. Let W be a continuous mapping from Г into B. The Cesari method gives sufficient conditions for the existence of a fixed point of W in Г.
Let I denote the identity mapping in B. Clearly y = Wy for some y in Г if and only if both of the following conditions hold:
(i) Py = PWy.
(ii) y = (P + (I - P)W)y.
Definition. The Cesari fixed point method applies to (Г, W, P) if and only if the following three conditions are satisfied:
(1) For each x in PГ, P + (I − P)W is a contraction from P⁻¹x ∩ Г into itself. Let y(x) be the element (uniqueness follows from the contraction principle) of P⁻¹x ∩ Г which satisfies the equation y(x) = Py(x) + (I − P)Wy(x).
(2) The function y just defined is continuous from PГ into B.
(3) There are no fixed points of PWy on the boundary of PГ, so that the (finite-dimensional) fixed point index i(PWy, int PГ) is defined.
Definition. If the Cesari fixed point method applies to (Г, W, P) then define i(Г, W, P) to be the index i(PWy, int PГ).
The three theorems of this thesis can now be easily stated.
Theorem 1 (Cesari). If i(Г, W, P) is defined and i(Г, W, P) ≠ 0, then there is a fixed point of W in Г.
Theorem 2. Let the Cesari fixed point method apply to both (Г, W, P1) and (Г, W, P2). Assume that P2P1 = P1P2 = P1 and that either of the following two conditions holds:
(1) For every b in B and every z in the range of P2, we have ‖b − P2b‖ ≤ ‖b − z‖.
(2) P2Г is convex.
Then i(Г, W, P1) = i(Г, W, P2).
Theorem 3. If Ω is a bounded open set and W is a compact operator defined on Ω so that the (infinite-dimensional) Leray-Schauder index iLS(W, Ω) is defined, and if the Cesari fixed point method applies to (Ω, W, P), then i(Ω, W, P) = iLS(W, Ω).
Theorems 2 and 3 are proved using mainly a homotopy theorem and a reduction theorem for the finite-dimensional and the Leray-Schauder indexes. These and other properties of indexes will be listed before the theorem in which they are used.
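The two-level structure of the method — an inner contraction on each fibre P⁻¹x ∩ Г, followed by an outer finite-dimensional fixed-point problem x = PWy(x) — can be sketched numerically. The following toy illustration is not from the thesis: B = R², S is the first coordinate axis, P picks off the first coordinate, and the map W is an assumption chosen so that both iterations contract.

```python
import math

def W(y1, y2):
    # A toy nonlinear map on B = R^2 (an assumption for illustration).
    return (math.cos(y2), 0.5 * math.sin(y1 + y2))

def inner_fixed_point(x, tol=1e-12):
    # Solve y = Py + (I - P)Wy on the fibre P^{-1}x: the first coordinate
    # y1 = x is held fixed, and y2 <- 0.5*sin(x + y2) is a contraction
    # (its derivative in y2 is bounded by 0.5), so iteration converges.
    y2 = 0.0
    for _ in range(200):
        y2_new = 0.5 * math.sin(x + y2)
        if abs(y2_new - y2) < tol:
            break
        y2 = y2_new
    return y2

# Outer (finite-dimensional) problem: find x with x = PWy(x) = cos(y2(x)).
x = 0.5
for _ in range(200):
    x = math.cos(inner_fixed_point(x))

y = (x, inner_fixed_point(x))
print(y, W(*y))  # the two pairs agree: y is (approximately) a fixed point of W
```

Here the outer iteration stands in for the degree-theoretic argument: in the thesis the finite-dimensional problem is attacked via the fixed point index rather than by simple iteration.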
Abstract:
Unsteady ejectors can be driven by a wide range of driver jets. These vary from pulse detonation engines, which typically have a long gap between each slug of fluid exiting the detonation tube (mark-space ratios in the range 0.1-0.2), to the exit of a pulsejet, where the mean mass flow rate leads to a much shorter gap between slugs (mark-space ratios in the range 2-3). The aim of this paper is to investigate the effect of mark-space ratio on the thrust augmentation of an unsteady ejector. Experimental testing was undertaken using a driver jet with a sinusoidal exit velocity profile. The mean value, amplitude and frequency of the velocity profile could be changed, allowing the length-to-diameter ratio of the fluid slugs L/D and the mark-space ratio (the ratio of slug length to the spacing between slugs) L/S to be varied. The setup allowed the L/S of the jet to vary from 0.8 to 2.3, while the L/D ratio of the slugs could take any value between 3.5 and 7.5. This paper shows that as the mark-space ratio of the driver jet is increased, the thrust augmentation drops. Across the range of mark-space ratios tested, there is a drop in thrust augmentation of 0.1. The physical cause of this reduction in thrust augmentation is shown to be a decrease in the percentage of time over which the ejector entrains ambient fluid. This is the direct result of the space between consecutive slugs in the driver jet decreasing. The one-dimensional model reported in Heffer et al. [1] is extended to include the effect of varying L/S and is shown to accurately capture the experimentally measured behavior of the ejector. Copyright © 2010 by the American Institute of Aeronautics and Astronautics, Inc.
Abstract:
Despite its importance, choosing the structural form of the kernel in nonparametric regression remains a black art. We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. We present a method for searching over this space of structures which mirrors the scientific discovery process. The learned structures can often decompose functions into interpretable components and enable long-range extrapolation on time-series datasets. Our structure search method outperforms many widely used kernels and kernel combination methods on a variety of prediction tasks.
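The compositional space described above can be made concrete: closing a few base kernels under addition and multiplication already yields expressive structures such as "locally periodic plus trend." A minimal pure-Python sketch follows; the kernel forms are standard, but the fixed hyperparameter values are assumptions for illustration only.

```python
import math

# Base kernels as functions k(x, x') on scalar inputs.
def se(x, xp, ell=1.0):        # squared-exponential (smooth local variation)
    return math.exp(-0.5 * (x - xp) ** 2 / ell ** 2)

def lin(x, xp):                # linear kernel (trends)
    return x * xp

def per(x, xp, p=1.0):         # periodic kernel (repeating structure)
    return math.exp(-2.0 * math.sin(math.pi * abs(x - xp) / p) ** 2)

# Closure under addition and multiplication generates the structure space
# that the search explores.
def add(k1, k2):
    return lambda x, xp: k1(x, xp) + k2(x, xp)

def mul(k1, k2):
    return lambda x, xp: k1(x, xp) * k2(x, xp)

# Example composite: (SE x Periodic) + Linear, i.e. a locally periodic
# component superposed on a linear trend.
k = add(mul(se, per), lin)
print(k(0.3, 0.5))
```

A structure search of the kind described would score candidate compositions (e.g. by marginal likelihood) and greedily expand the best one; only the construction of the space is sketched here.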
Abstract:
The interaction of a turbulent eddy with a semi-infinite poroelastic edge is examined with respect to the effects of both elasticity and porosity on the efficiency of aerodynamic noise generation. The edge is modelled as a thin poroelastic plate, which is known to admit fifth-, sixth-, and seventh-power noise dependences on a characteristic velocity U of the turbulent eddy. The associated acoustic scattering problem is solved using the Wiener-Hopf technique for the case of constant plate properties. For the special cases of porous-rigid and impermeable-elastic plate conditions, asymptotic analysis of the Wiener-Hopf kernel function furnishes the parameter groups, and their ranges, where U^5, U^6, and U^7 behaviours are expected to occur. Results from this analysis are intended to help guide the search for passive edge treatments to reduce trailing-edge noise, inspired by the wing features of silently flying owls. Furthermore, the appropriateness of the present model to the owl noise problem is discussed with respect to the acoustic frequencies of interest, wing chord-lengths, and foraging behaviour across a representative set of owl species.
Abstract:
We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
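The reformulation described above rests on the closed-form differential entropy of a Gaussian, H[N(μ, σ²)] = ½ log(2πeσ²): the value of a candidate evaluation point is the expected drop in the entropy of the predictive distribution. A toy numeric sketch follows; the variance values are assumptions for illustration, not results from the paper.

```python
import math

def gaussian_entropy(var):
    # Differential entropy of N(mu, var): 0.5 * log(2*pi*e*var).
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Information gain at a candidate x is H[p(y|D)] minus the expected entropy
# after conditioning on the location of the global maximum. Suppose (as an
# assumption) that conditioning shrinks the predictive variance 1.0 -> 0.6:
prior_H = gaussian_entropy(1.0)
post_H = gaussian_entropy(0.6)
info_gain = prior_H - post_H   # equals 0.5 * log(1.0 / 0.6) > 0
print(info_gain)
```

In PES itself the posterior over the maximizer's location is intractable and must be approximated; only the entropy bookkeeping that the acquisition function optimizes is illustrated here.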
Abstract:
The Gaussian process latent variable model (GP-LVM) has been identified as an effective probabilistic approach for dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and a maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM uses class label information effectively and thus consistently outperforms both the GP-LVM and its discriminative extension. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
Abstract:
The catalytic performance of Ir-based catalysts was investigated for the reduction of NO under lean-burn conditions over binderless Ir/ZSM-5 monoliths, which were prepared by a vapor phase transport (VPT) technique. The catalytic activity was found to depend not only on the Ir content but also on the ZSM-5 loading of the monolith: NO conversion increased as the Ir content decreased or the ZSM-5 loading increased. When the ZSM-5 loading on the cordierite monolith was raised to ca. 11% and the Ir content was about 5 g/l, the NO conversion reached its maximum value of 73% at 533 K and a space velocity of 20 000 h⁻¹. Furthermore, neither the presence of 10% water vapor in the feed gas nor variation of the space velocity of the reaction gases had much effect on the NO conversion. A comparative test between Ir/ZSM-5 and Cu/ZSM-5, together with variation of the feed gas composition, revealed that Ir/ZSM-5 is very active for the reduction of NO by CO under lean conditions, although it is a poor catalyst for the C3H8-SCR process. This unique property makes Ir/ZSM-5 superior to the traditional three-way catalyst (TWC) for NO reduction under lean conditions. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Lloyd, Noel G., and Pearson, Jane M., 'Space saving calculation of symbolic resultants', Mathematics in Computer Science, 1 (2007), 267-290.
Abstract:
Families of missing people are often understood as inhabiting a particular space of ambiguity, captured in the phrase ‘living in limbo’ (Holmes, 2008). To explore this uncertain ground, we interviewed 25 family members to consider how human absence is acted upon and not just felt within this space ‘in between’ grief and loss (Wayland, 2007). In the paper, we represent families as active agents in spatial stories of ‘living in limbo’, and we provide insights into the diverse strategies of search/ing (technical, physical and emotional) in which they engage to locate either their missing member or news of them. Responses to absence are shown to be intimately bound up with unstable spatial knowledges of the missing person and emotional actions that are subject to change over time. We suggest that practices of search are not just locative actions, but act as transformative processes providing insights into how families inhabit emotional dynamism and transition in response to the on-going ‘missing situation’ and ambiguous loss (Boss, 1999, 2013).