6 results for "Optimality conditions"

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

In this thesis we study Galois representations corresponding to abelian varieties with certain reduction conditions. We show that these conditions force the image of the representations to be "big," so that the Mumford-Tate conjecture (MT) holds. We also prove that the set of abelian varieties satisfying these conditions is dense in a corresponding moduli space.

The main results of the thesis are the following two theorems.

Theorem A: Let A be an absolutely simple abelian variety with End°(A) = k an imaginary quadratic field, and let g = dim(A). Assume either that dim(A) ≤ 4, or that A has bad reduction at some prime ϕ with the dimension of the toric part of the reduction equal to 2r, where gcd(r, g) = 1 and (r, g) ≠ (15, 56) or (m−1, m(m+1)/2). Then MT holds.

Theorem B: Let M be the moduli space of abelian varieties with fixed polarization, level structure, and k-action; it is defined over a number field F. The subset of M(Q) corresponding to absolutely simple abelian varieties with a prescribed stable reduction at a large enough prime ϕ of F is dense in M(C) in the complex topology. In particular, the set of simple abelian varieties having bad reduction with a fixed dimension of the toric part is dense.

In addition, we establish the following results:

(1) MT holds for some other classes of abelian varieties with similar reduction conditions. For example, if A is an abelian variety with End°(A) = Q and the dimension of the toric part of its reduction is prime to dim(A), then MT holds.

(2) MT holds for Ribet-type abelian varieties.

(3) The Hodge and the Tate conjectures are equivalent for abelian 4-folds.

(4) MT holds for abelian 4-folds of type II, III, IV (Theorem 5.0(2)) and some 4-folds of type I.

(5) For some abelian varieties either MT or the Hodge conjecture holds.

Abstract:

The technique of variable-angle electron energy-loss spectroscopy has been used to study the electronic spectroscopy of the diketene molecule. The experiment was performed using incident electron-beam energies of 25 eV and 50 eV and scattering angles between 10° and 90°. The energy-loss region from 2 eV to 11 eV was examined. One spin-forbidden transition has been observed at 4.36 eV, and three spin-allowed transitions have been located at 5.89 eV, 6.88 eV, and 7.84 eV. Based on the intensity variation of these transitions with impact energy and scattering angle, and through analogy with simpler molecules, the first three transitions are tentatively assigned as an n → π* transition, a π → σ*(3s) Rydberg transition, and a π → π* transition.

Thermal decomposition of chlorodifluoromethane, chloroform, dichloromethane, and chloromethane under flash-vacuum-pyrolysis conditions (900-1100°C) was investigated by electron energy-loss spectroscopy, using an impact energy of 50 eV and a scattering angle of 10°. The pyrolytic reaction follows a hydrogen-chloride α-elimination pathway. The difluoromethylene radical was produced from chlorodifluoromethane pyrolysis at 900°C and identified by its X̃¹A₁ → Ã¹B₁ band at 5.04 eV.

Finally, a number of exploratory studies have been performed. The thermal decomposition of diketene was studied at flash-vacuum pressures (1-10 mTorr) and temperatures ranging from 500°C to 1000°C. Complete decomposition of the diketene molecule into two ketene molecules was achieved at 900°C. Pyrolysis of trifluoromethyl iodide at 1000°C produced an electron energy-loss spectrum with several sharp iodine-atom peaks and only a small shoulder at 8.37 eV as a possible trifluoromethyl-radical feature. The electron energy-loss spectrum of trichlorobromomethane at 900°C mainly showed features from the bromine atom, the chlorine molecule, and tetrachloroethylene. Hexachloroacetone decomposed partially at 900°C, but showed well-defined features from chlorine, carbon monoxide, and tetrachloroethylene. Bromodichloromethane was investigated at 1000°C and produced a congested electron energy-loss spectrum with bromine-atom, hydrogen-bromide, hydrogen-chloride, and tetrachloroethylene features.

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem has no solution in the classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but modestly sized systems.
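The exponential growth described above is easy to make concrete. The short sketch below (the resolution and dimensions are illustrative numbers, not taken from the thesis) counts the grid points a naive tensor-product discretization of the state space would need:

```python
# Grid points needed for a naive tensor-product discretization of the
# state space: (points per axis) ** (number of state dimensions).
points_per_axis = 100  # hypothetical resolution per state variable

for dim in (1, 2, 4, 6, 12):
    total = points_per_axis ** dim
    print(f"{dim:2d} state dims -> {total:.0e} grid points")
# Even at a modest 100 points per axis, a 6-dimensional state space
# already needs 10^12 grid points -- far beyond tractable storage.
```

This is precisely why grid-based PDE solvers fail beyond a handful of state dimensions, motivating the structure-exploiting methods developed in the thesis.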

In the last decade researchers have found that, under fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov decision processes (MDPs). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
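One way to see this linearity in the discretized MDP analogue is through Todorov-style linearly solvable MDPs, where the "desirability" z(x) = exp(−V(x)) satisfies a linear fixed-point equation rather than the nonlinear Bellman equation. The toy chain below is a minimal sketch under invented data (state count, costs, and passive random-walk dynamics are all hypothetical), not the thesis's construction:

```python
import math

# Linearly solvable MDP on a chain 0..N; state N is an absorbing goal.
# The desirability z(x) = exp(-V(x)) satisfies the *linear* equation
#   z(x) = exp(-q(x)) * sum_x' p(x'|x) z(x'),   with z(goal) = 1,
# where p is the passive (uncontrolled) random-walk dynamics.
N = 5                      # states 0..5, goal at 5 (illustrative size)
q = 0.2                    # state cost per step at non-goal states

z = [0.0] * N + [1.0]      # desirability pinned to 1 at the goal
for _ in range(1000):      # fixed-point iteration on the linear map
    new = z[:]
    for x in range(N):
        left = z[max(x - 1, 0)]    # reflecting boundary at state 0
        right = z[min(x + 1, N)]
        new[x] = math.exp(-q) * 0.5 * (left + right)
    z = new

V = [-math.log(zx) for zx in z]    # recover the value function
# V decreases monotonically toward the goal, as expected.
```

Because the update is a linear map on z, the same solution could be obtained by solving one sparse linear system, which is the computational leverage the linear HJB provides in the continuous setting.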

This is done by weaving together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem, and an SOS relaxation is then taken of the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving suboptimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. These results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for uncertainty quantification (UQ) of systems governed by partial differential constraints. Domain-decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
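At its core, an SOS certificate rewrites a polynomial as a sum of squares, which can be verified by polynomial arithmetic alone. The toy check below (a hypothetical univariate example, far simpler than the thesis's semidefinite hierarchy over PDE constraints) verifies such a certificate by expanding the squares and comparing coefficients:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    out = [0.0] * max(len(p), len(q))
    for i, a in enumerate(p):
        out[i] += a
    for j, b in enumerate(q):
        out[j] += b
    return out

# Target: p(x) = x^4 + 2x^2 + 1, with conjectured certificate p = (x^2 + 1)^2.
p = [1.0, 0.0, 2.0, 0.0, 1.0]
squares = [[1.0, 0.0, 1.0]]          # one square term: (1 + x^2)

cert = [0.0]
for s in squares:
    cert = poly_add(cert, poly_mul(s, s))

assert cert == p   # certificate checks out: p is SOS, so p(x) >= 0 everywhere
```

In practice the squares are not guessed but found by a semidefinite program over a Gram matrix, which is what makes the relaxation hierarchy computationally tractable.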

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied-mathematics community. This technique solves systems of equations through a low-rank decomposition, yielding algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as the quadcopter and VTOL aircraft. The SR technique may be combined with the SOS approach, yielding not only a numerical method but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.

The analysis of the linear HJB is completed by a study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance toward optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. Such problems are shown to be reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows control-policy primitives to be pre-computed and then composed, at essentially zero cost, to satisfy a complex temporal-logic specification.

Abstract:

I. Foehn winds of southern California.
An investigation of the hot, dry, and dust-laden winds occurring in the late fall and early winter in the Los Angeles Basin, attributed in the past to the influence of the desert regions to the north, revealed that these currents are of a foehn nature. Their properties were found to be entirely due to dynamical heating produced in the descent from the high-level areas of the interior to the lower Los Angeles Basin. Any dust associated with the phenomenon was found to be acquired from the Los Angeles area rather than transported from the desert. The frequency of occurrence of a mild foehn of this type during this season was found sufficient to warrant its classification as a winter monsoon. This results from the topography of the Los Angeles region, which allows air from the interior an easy entrance through the low-level mountain passes north of the area. This monsoon provides the mild winter climate of southern California, since temperatures associated with the foehn currents are far higher than those experienced when maritime air from the adjacent Pacific Ocean occupies the region.

II. Foehn wind cyclogenesis.
Intense anticyclones frequently build up over the high-level regions of the Great Basin and Columbia Plateau, which lie between the Sierra Nevada and Cascade Mountains to the west and the Rocky Mountains to the east. The outflow from these anticyclones produces extensive foehns east of the Rockies in the comparatively low-level areas of the Middle West and the Canadian provinces of Alberta and Saskatchewan. Normally at this season of the year very cold polar continental (Pc) air masses are present over this territory, and with the occurrence of these foehns marked discontinuity surfaces arise between the warm foehn current, which is obliged to slide over the colder mass, and the Pc air to the east. Cyclones are easily produced from this phenomenon and take the form of unstable waves that propagate along the discontinuity surface between the two dissimilar masses. A continual series of such cyclones was found to occur as long as the Great Basin anticyclone was maintained with undiminished intensity.

III. Weather conditions associated with the Akron disaster.
This situation illustrates the speedy development and propagation of young disturbances in the eastern United States during the spring of the year under the influence of the conditionally unstable tropical maritime air masses that characterize the region. It also furnishes an excellent example of the superiority of air-mass and frontal methods of weather prediction for aircraft operation over the older methods based upon pressure distribution.

IV. The Los Angeles storm of December 30, 1933 to January 1, 1934.
This discussion points out some of the fundamental interactions occurring between air masses of the North Pacific Ocean in connection with Pacific Coast storms, and the value of topographic and aerological considerations in predicting them. Estimates of rainfall intensity and duration may be made from analyses of this type and would prove very valuable in the Los Angeles area in connection with flood-control problems.

Abstract:

The problem of global optimization of M phase-incoherent signals in N complex dimensions is formulated. Then, by using the geometric approach of Landau and Slepian, conditions for optimality are established for N = 2 and the optimal signal sets are determined for M = 2, 3, 4, 6, and 12.

The method is the following. The signals are assumed to be equally probable and to have equal energy, and thus are represented by points $\dot{s}_i$, $i = 1, 2, \ldots, M$, on the unit sphere $S_1$ in $\mathbb{C}^N$. If $W_{ik}$ is the halfspace determined by $\dot{s}_i$ and $\dot{s}_k$ and containing $\dot{s}_i$, i.e.,
$$W_{ik} = \{\dot{r} \in \mathbb{C}^N : |\langle \dot{r}, \dot{s}_i \rangle| \ge |\langle \dot{r}, \dot{s}_k \rangle|\},$$
then the maximum-likelihood decision regions $R_i = \bigcap_{k \ne i} W_{ik}$, $i = 1, 2, \ldots, M$, partition $S_1$. For additive complex Gaussian noise $\dot{n}$ and a received signal $\dot{r} = \dot{s}_i e^{i\Theta} + \dot{n}$, where $\Theta$ is uniformly distributed over $[0, 2\pi]$, the probability of correct decoding is
$$P_C = \frac{1}{\pi^N} \int_0^\infty r^{2N-1} e^{-(r^2+1)} U(r)\,dr, \qquad U(r) = \frac{1}{M} \sum_{i=1}^{M} \int_{R_i \cap S_1} I_0\bigl(2r\,|\langle \dot{s}, \dot{s}_i \rangle|\bigr)\,d\sigma(\dot{s}),$$
where $r = \|\dot{r}\|$ and $I_0$ is the modified Bessel function of the first kind.

For $N = 2$, it is proved that
$$U(r) \le \int_{C_\alpha} I_0\bigl(2r\,|\langle \dot{s}, \dot{s}_i \rangle|\bigr)\,d\sigma(\dot{s}) - \frac{2K}{M}\, h\!\left(\frac{1}{2K}\bigl[M\sigma(C_\alpha) - \sigma(S_1)\bigr]\right),$$
where $C_\alpha = \{\dot{s} \in S_1 : |\langle \dot{s}, \dot{s}_i \rangle| \ge \alpha\}$, $K$ is the total number of boundaries of the net on $S_1$ determined by the decision regions, and $h$ is the strictly increasing, strictly convex function of $\sigma(C_\alpha \cap W)$ (where $W$ is a halfspace not containing $\dot{s}_i$) given by
$$h\bigl(\sigma(C_\alpha \cap W)\bigr) = \int_{C_\alpha \cap W} I_0\bigl(2r\,|\langle \dot{s}, \dot{s}_i \rangle|\bigr)\,d\sigma(\dot{s}).$$
Conditions for equality are established, and these give rise to the globally optimal signal sets for $M = 2, 3, 4, 6,$ and $12$.

Abstract:

We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's law is global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
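The closed-form subproblem updates described above are the hallmark of ADMM when each block is a small quadratic. The minimal consensus example below (a toy objective with made-up data, not the OPF decomposition itself) shows the pattern of alternating closed-form updates plus a dual step:

```python
# ADMM for:  minimize 0.5*(x - a)^2 + 0.5*(z - b)^2  subject to  x = z.
# Both subproblems are scalar quadratics, so each update is closed form --
# the same structural property the distributed OPF algorithms exploit.
a, b = 0.0, 4.0      # toy data; the optimum is x = z = (a + b) / 2
rho = 1.0            # penalty parameter
x = z = u = 0.0      # u is the scaled dual variable

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)   # x-update: closed form
    z = (b + rho * (x + u)) / (1.0 + rho)   # z-update: closed form
    u = u + x - z                           # dual ascent on the consensus gap

print(x, z)   # both converge to 2.0
```

In the distributed OPF setting each bus solves its own small subproblem in parallel, and because the updates are closed form (or tiny eigenvalue problems) there is no inner iterative solver, which is where the reported speedups come from.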