972 results for Strictly hyperbolic polynomial


Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems, and presents conditions on the objective function and information constraints under which a convex formulation exists. Since the size of the optimization problem can become quite large, strategies for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
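As a concrete illustration of the final step (not taken from the thesis), the following cvxpy sketch bounds a univariate quartic from below with a sums-of-squares certificate; the polynomial and all names are illustrative.

```python
import cvxpy as cp

# Certify a lower bound gamma on p(x) = x^4 - 3x^2 + 2x + 5 by writing
# p(x) - gamma = z^T Q z with z = [1, x, x^2] and Q positive semidefinite.
# Matching coefficients of 1, x, x^2, x^3, x^4 gives affine constraints
# on Q, so the best such bound is a small SDP.
Q = cp.Variable((3, 3), symmetric=True)
gamma = cp.Variable()
constraints = [
    Q >> 0,
    Q[0, 0] == 5 - gamma,         # constant term
    2 * Q[0, 1] == 2,             # x
    2 * Q[0, 2] + Q[1, 1] == -3,  # x^2
    2 * Q[1, 2] == 0,             # x^3
    Q[2, 2] == 1,                 # x^4
]
prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve()  # needs an SDP-capable solver, e.g. SCS (ships with cvxpy)
print(gamma.value)  # SOS lower bound on min_x p(x)
```

For univariate polynomials the certificate is tight (a nonnegative univariate polynomial is always a sum of squares); in the multivariate OUQ setting the same pattern yields the convex bounds described above.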

Abstract:

The energy loss of protons and deuterons in D_2O ice has been measured over the energy range E_p = 18-541 keV. The double-focusing magnetic spectrometer was used to measure the energy of the particles after they had traversed a known thickness of the ice target. One method of measurement is used to determine relative values of the stopping cross section as a function of energy; another measures absolute values. The results are in very good agreement with the values calculated from Bethe's semi-empirical formula. Possible sources of error are considered, and the accuracy of the measurements is estimated to be ±4%.

The D(d,p)H^3 cross section has been measured by two methods. For E_D = 200-500 keV the spectrometer was used to obtain the momentum spectrum of the protons and tritons. From the yield and the stopping cross section, the reaction cross section at 90° has been obtained.

For E_D = 35-550 keV the proton yield from a thick target was differentiated to obtain the cross section. Both thin- and thick-target methods were used to measure the yield at each of ten angles. The angular distribution is expressed in terms of a Legendre polynomial expansion. The various sources of experimental error are considered in detail, and the probable error of the cross section measurements is estimated to be ±5%.
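The Legendre-expansion step is easy to reproduce with standard tools; here is a short numpy sketch on made-up angle and cross-section values (the thesis's measured data are not reproduced here).

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical yields at ten lab angles; placeholders, not measured data.
theta_deg = np.linspace(15, 165, 10)
x = np.cos(np.radians(theta_deg))
sigma = 1.0 + 0.8 * x**2            # illustrative cross-section values

# Fit sigma(theta) = sum_k a_k P_k(cos theta), here up to order 2.
coeffs = L.legfit(x, sigma, deg=2)  # returns a_0, a_1, a_2
sigma_fit = L.legval(x, coeffs)     # evaluate the expansion
print(coeffs)
```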

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how automata-based synthesis techniques for discrete abstractions of dynamical systems can be extended to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such controllers can be computed at little extra computational cost compared to a merely feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve its quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid the potentially computationally expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
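To give a flavor of the mixed-integer encoding, here is a minimal cvxpy sketch (illustrative only: a toy double integrator rather than the thesis's systems, and a single "eventually reach the goal box" operator encoded with big-M constraints).

```python
import cvxpy as cp
import numpy as np

# Toy double integrator x_{t+1} = A x_t + B u_t with a 0.1 s step.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
T, M = 30, 100.0                                   # horizon, big-M constant
lo, hi = np.array([4.0, -1.0]), np.array([5.0, 1.0])  # assumed goal box

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))
z = cp.Variable(T + 1, boolean=True)  # z[t] = 1 forces x[:, t] into the box

cons = [x[:, 0] == np.zeros(2)]
for t in range(T):
    cons += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t]]
for t in range(T + 1):
    # Big-M: the box constraints are active only when z[t] = 1.
    cons += [x[:, t] >= lo - M * (1 - z[t]),
             x[:, t] <= hi + M * (1 - z[t])]
cons += [cp.sum(z) >= 1]              # LTL "eventually": some z[t] is 1

prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(u))), cons)
prob.solve(solver=cp.GLPK_MI)  # needs a MILP-capable solver (GLPK_MI, SCIP, ...)
print(prob.status)
```

Richer LTL formulas compose such binary indicators with conjunctions, disjunctions, and until-style orderings in the same mixed-integer style.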

Abstract:

The interaction of shaped laser pulses with plasmas is studied in a strict theoretical framework that does not adopt the slowly varying envelope approximation (SVEA). Each physical quantity involved in the interaction is written as a sum of real quantities with respective phases, and the relationships among these phases and the corresponding moduli are analyzed rigorously. This analysis leads to a more exact equation set for the three-dimensional envelope of the laser pulse that is not based on the SVEA. With this equation set, self-focusing, Raman, and modulation instabilities can be discussed in a unified framework. Its solutions for the laser envelope reveal many possible multicolor laser modes in plasmas: the energy and the shape of a pulse determine whether it propagates through the plasma in a multicolor mode or in a monochromatic mode. A global growth rate is introduced to measure the speed of the transition from the monochromatic mode in vacuum to a possible mode in plasmas. (c) 2006 American Institute of Physics.

Abstract:

Metal complexes that utilize the 9,10-phenanthrenequinone diimine (phi) moiety bind to DNA through the major groove. These metallointercalators can recognize DNA sites and perform reactions on DNA as a substrate. The site-specific metallointercalator Λ-1-Rh(MGP)_2phi^(5+) competitively disrupts the major-groove binding of a transcription factor, yAP-1, at an oligonucleotide that contains a common binding site. The demonstration that metal complexes can prevent transcription factor binding to DNA site-specifically is an important step toward using metallointercalators as therapeutics.

The distinctive photochemistry of metallointercalators can also be applied to promote long-range charge transport in DNA. Experiments using duplexes containing 4- to 10-nucleotide regions of strictly adenine and thymine bases in varying order showed that radical migration depends more on the sequence of the bases than on the distance between the guanine doublets. This result suggests that mechanistic proposals for long-range charge transport must involve all the bases.

RNA/DNA hybrids show charge migration to guanines from a remote site, thus demonstrating that nucleic acid stacking other than B-form can serve as a radical bridge. Double crossover DNA assemblies also provide a medium for charge transport at distances up to 100 Å from the site of radical introduction by a tethered metal complex. This radical migration was found to be robust to mismatches, and limited to individual, electronically distinct base stacks. In single DNA crossover assemblies, which have considerably greater flexibility, charge migration proceeds to both base stacks due to conformational isomers not present in the rigid and tightly annealed double crossovers.

Finally, a rapid, efficient, gel-based technique was developed to investigate thymine dimer repair. Two oligonucleotides, one radioactively labeled, are photoligated via the bases of a thymine-thymine interface; reversal of this ligation is easily visualized by gel electrophoresis. This assay was used to show that the repair of thymine dimers from a distance through DNA charge transport can be accomplished with different photooxidants.

Thus, nucleic acids that support long-range charge transport have been shown to include A-tract DNA, RNA/DNA hybrids, and single and double crossovers, and a method for detecting thymine dimer repair using charge transport was developed. These observations underscore and extend the remarkable finding that DNA can serve as a medium for charge transport via the heteroaromatic base stack.

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps, but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this problem it can be carried out with the aid of the Reduce algebra manipulation program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at the annihilation-in-flight threshold, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
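A toy analogue of this strategy, in sympy rather than Reduce, keeps a recurring denominator as a redundant polynomial variable until the final substitution; the expressions and names are illustrative only.

```python
import sympy as sp

# Let a redundant variable D stand for the denominator 1/(p**2 - k**2),
# so intermediate manipulations stay polynomial in (p, k, D); the
# rational value is restored only at the end.
p, k, D = sp.symbols('p k D')

amplitude = sp.expand((p + k) * D + (p - k) * D**2)  # polynomial in p, k, D
collected = sp.collect(amplitude, D)                 # cheap polynomial ops

result = sp.simplify(collected.subs(D, 1 / (p**2 - k**2)))
print(result)
```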

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be programmed straightforwardly in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them, primarily dispersion techniques, are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model.

Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case, and assume that the regressor matrix is random and independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution.

We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time, so the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the probability distribution of a finite-state Markov chain converges to its stationary distribution, this model concludes that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network at that time. Convergence to the origin of this epidemic map implies extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin; as a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point, whose stability we analyze for both discrete-time and continuous-time models. Returning to the Markov chain model, we claim that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain: we show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
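A minimal numerical sketch of the linearization argument, on a hypothetical three-node SIS-style network with assumed infection and recovery rates (not the dissertation's models or data):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)  # adjacency of a toy contact network
beta, delta = 0.2, 0.7                   # assumed infection / recovery rates
n = A.shape[0]

def epidemic_map(p):
    # Nonlinear map on marginal infection probabilities: node i is newly
    # infected by some neighbor, or stays infected without recovering.
    infect = 1 - np.prod(1 - beta * A * p, axis=1)
    return (1 - p) * infect + (1 - delta) * p

# Linear upper bound at the origin: p_{t+1} <= (beta*A + (1-delta)*I) p_t,
# so spectral radius < 1 implies global stability of the origin (extinction).
M = beta * A + (1 - delta) * np.eye(n)
print("spectral radius:", max(abs(np.linalg.eigvals(M))))  # 0.7 here

p = np.full(n, 0.5)
for _ in range(200):
    p = epidemic_map(p)
print("long-run infection probabilities:", p)  # -> 0 when the bound is stable
```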

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedup over other methods.
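The EC2 selection rule itself can be sketched in a few lines. The toy below is noise-free and uses hypothetical theories, tests, and predictions (BROAD handles noisy responses and far larger design spaces): each theory forms its own equivalence class, edges between classes carry the product of prior weights, and the greedy rule picks the test with the largest expected weight of edges cut.

```python
import itertools

# Hypothetical priors over four candidate theories, and deterministic
# binary predictions of each theory on three candidate choice tests.
prior = {"EV": 0.25, "PT": 0.25, "CRRA": 0.25, "MOM": 0.25}
classes = {"EV": 0, "PT": 1, "CRRA": 2, "MOM": 3}  # one class per theory
predict = {
    "t1": {"EV": 0, "PT": 1, "CRRA": 0, "MOM": 1},
    "t2": {"EV": 0, "PT": 0, "CRRA": 1, "MOM": 1},
    "t3": {"EV": 0, "PT": 0, "CRRA": 0, "MOM": 1},
}

def edge_weight(alive):
    # Weight of uncut edges: pairs of surviving hypotheses in different classes.
    return sum(prior[a] * prior[b]
               for a, b in itertools.combinations(alive, 2)
               if classes[a] != classes[b])

def expected_cut(test, alive):
    # Expected edge weight removed by observing this test's outcome.
    exp_after = 0.0
    for outcome in (0, 1):
        consistent = [h for h in alive if predict[test][h] == outcome]
        exp_after += sum(prior[h] for h in consistent) * edge_weight(consistent)
    return edge_weight(alive) - exp_after

alive = list(prior)
best = max(predict, key=lambda t: expected_cut(t, alive))
print("most informative first test:", best)
```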

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Abstract:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions by studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer but likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
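To make the sampling task concrete, the following brute-force sketch (exponential-time, illustrative only) tabulates the Fourier-sampling distribution of a small Boolean function; a quantum computer can sample such distributions efficiently, while the thesis argues that classical machines in the Polynomial Time Hierarchy likely cannot, even approximately.

```python
import numpy as np

n = 3
# Hypothetical efficiently computable function f: {0,1}^n -> {+1,-1}
# (a 3-bit threshold), tabulated by brute force for illustration.
f = np.array([1 if bin(x).count("1") <= n // 2 else -1
              for x in range(2**n)])

# Boolean Fourier coefficients f_hat(s) = 2^{-n} sum_x f(x) (-1)^{x . s}.
fhat = np.array([
    sum(f[x] * (-1)**bin(x & s).count("1") for x in range(2**n)) / 2**n
    for s in range(2**n)
])

# By Parseval these squared coefficients sum to 1, so they form a
# distribution; Fourier sampling draws s with probability f_hat(s)^2.
probs = fhat**2
rng = np.random.default_rng(0)
s = rng.choice(2**n, p=probs / probs.sum())
print(probs, "sampled s =", s)
```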

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013], where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.

Abstract:

This study is directed at affirming the objective nature of the civil liability of the Public Administration for omissive acts. It argues for the correct application of Article 37, paragraph 6, of the Constitution of the Republic, which established the objective liability of the Administration in every case in which a causal nexus is configured between its conduct, commissive or omissive, and an unjust harm that has occurred. The focus that guides civil reparation is new: no longer the activity performed by the agent, but the consequences suffered by the victim of the unjust harm. In the field of the civil liability of the State, understood in the broad sense, this change of focus seems even more logical, in view of the guiding principle of the State's duty to repair, namely the equitable distribution of the burdens of the Administration. Indeed, whenever state administrative activity, exercised for the benefit of the entire community, generates unjust harm to a specific individual, the State's responsibility to repair that harm is configured, since, if the conduct that generated the harm was adopted in the name of the community, that is precisely the main idea of the aforementioned principle. This is why the verification of the subjective element of fault, in matters of liability of the Public Power, has been rendered entirely extraneous to the analysis. The correct reading of the constitutional provision, with recognition of the objective liability of the State in cases of both commissive and omissive acts of the Public Administration, also realizes the principle of social solidarity, which implies the preponderance of the injured victim's interest in reparation over the interest of the agent who, by action or omission, performed the harmful act. Herein lies the legitimacy of the theory of administrative risk adopted: a coherent verification of the causal nexus, with admission of defenses that exclude liability. Moreover, as between the victim and the author of the unjust harm, the former generally obtains no benefit from the fact or activity from which the harm originated. If this is so, the configuration of the Public Administration's duty to indemnify will depend only on proof, in the concrete case, of three cumulative requirements: the State's conduct, the configuration of the unjust harm, and the causal nexus. Reference is made to the Spanish case law devoted to the rule of objective liability of the Public Administration for omissive acts, with considerations on the response of that country's courts to the corresponding normative provision. We have thus sought to set out the basic elements for understanding the topic, as well as the essential premises for affirming the objective nature of the liability of the Public Administration for omissive acts, which are, primarily, an understanding of the foundation of the constitutional rule and the correct delimitation of the concepts of omission and omissive causation. Having highlighted the premises necessary for a correct understanding of the topic, we conclude by affirming the objective nature of the liability of the Public Administration for unjust harm arising from an omissive act, provided the act is truly such, by which we mean that this liability does not dispense with the configuration of a causal nexus between the omissive conduct that occurred and the unjust harm to be repaired.

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
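A minimal sketch of the discrete MDP analogue (a linearly solvable first-exit problem on a toy chain, with all parameters illustrative): the desirability z = exp(-V) satisfies a linear fixed-point equation, so value iteration becomes a linear recursion.

```python
import numpy as np

# First-exit problem on a 1-D chain of N states with absorbing ends.
N = 11
q = 0.1 * np.ones(N)             # assumed state costs
q[0], q[-1] = 0.0, 1.0           # cheap exit at the left, expensive at the right

# Passive (uncontrolled) dynamics: an unbiased random walk.
P = np.zeros((N, N))
for i in range(1, N - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 0] = P[-1, -1] = 1.0

# Desirability z = exp(-V) satisfies the LINEAR equation z = exp(-q) * (P z)
# on interior states, with z pinned to exp(-q) at the exits.
z = np.exp(-q).copy()
for _ in range(1000):
    z = np.exp(-q) * (P @ z)
    z[0], z[-1] = np.exp(-q[0]), np.exp(-q[-1])

V = -np.log(z)                   # optimal value function
print(np.round(V, 3))
```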

This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, yielding algorithms that scale linearly with dimensionality. Its application to stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. The technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.

The analysis of the linear HJB concludes with a study of its implications in applications. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity can be made. Analytical solutions to the HJB in these settings are available in simplified domains, providing guidance toward optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems reduce to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and it empirically converges to optimal deferrable load schedules within 15 iterations.
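For illustration, here is a small valley-filling sketch in the spirit of Algorithm 1 (this is not the thesis's code: network constraints are ignored, the projection is a simple simplex projection, and all demands are made up). Each load takes a gradient step against the aggregate demand and projects back onto its own energy requirement.

```python
import numpy as np

T, n_loads = 24, 5
rng = np.random.default_rng(1)
base = 10 + 3 * np.sin(np.linspace(0, 2 * np.pi, T))  # inflexible demand
energy = rng.uniform(5, 15, n_loads)                   # per-load energy need
x = np.zeros((n_loads, T))                             # deferrable schedules

def project(xi, e):
    # Project onto {x >= 0, sum(x) = e} by bisecting on the multiplier mu.
    mu_lo, mu_hi = xi.min() - e, xi.max()
    for _ in range(60):
        mu = 0.5 * (mu_lo + mu_hi)
        if np.maximum(xi - mu, 0).sum() > e:
            mu_lo = mu
        else:
            mu_hi = mu
    return np.maximum(xi - mu_hi, 0)

step = 0.5 / n_loads
for it in range(15):                       # the text reports ~15 iterations
    agg = base + x.sum(axis=0)             # broadcast aggregate demand
    for i in range(n_loads):               # each load updates locally
        x[i] = project(x[i] - step * agg, energy[i])

print(np.round(base + x.sum(axis=0), 2))   # flattened net demand profile
```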

We then extend Algorithm 1 to a real-time setup in which deferrable loads arrive over time and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model predictive control: it uses updated predictions of renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expectation of the future deferrable loads' total energy request.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge for load control, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and one that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
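A two-bus DistFlow example (numbers illustrative, not from the thesis's test networks) shows the SOCP relaxation pattern: the quadratic equality relating power, current, and voltage is relaxed to an inequality, which loss minimization tends to make tight in radial networks.

```python
import cvxpy as cp

r, x = 0.01, 0.02          # assumed line impedance (p.u.)
p, q = 0.8, 0.3            # assumed load at bus 1 (p.u.)
v0 = 1.0                   # substation voltage squared

P = cp.Variable()          # sending-end real power
Q = cp.Variable()          # sending-end reactive power
l = cp.Variable()          # squared line current
v1 = cp.Variable()         # squared voltage at bus 1

cons = [
    P - r * l == p,                                   # real power balance
    Q - x * l == q,                                   # reactive power balance
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    cp.square(P) + cp.square(Q) <= v0 * l,            # relaxed: == becomes <=
    v1 >= 0.9**2, v1 <= 1.1**2,                       # voltage regulation
]
prob = cp.Problem(cp.Minimize(r * l), cons)           # minimize line losses
prob.solve()
print(P.value, l.value, v1.value)
```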

To seek a locally optimal load schedule, a distributed gradient descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, derived under two assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a 70+ times speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part.

For the first area, we study the so-called entropy vectors via finite group theory, as sketched below, together with the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot.

For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite-state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
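Returning to the entropy-vector construction in the first part: for subgroups G1, ..., G4 of a finite group G, h(S) = log2(|G| / |intersection of the G_i, i in S|) is always an entropy vector, and the Ingleton expression can be checked directly. A toy sketch (the group below is abelian, so Ingleton holds; the thesis's violating group is not reproduced here):

```python
import math

# Group-entropy construction on Z4 x Z2 with four assumed subgroups.
G = [(a, b) for a in range(4) for b in range(2)]
subgroups = {
    1: {(a, 0) for a in range(4)},                        # <(1,0)>
    2: {(0, b) for b in range(2)},                        # <(0,1)>
    3: {(2 * a, b) for a in range(2) for b in range(2)},  # <(2,0),(0,1)>
    4: {(a, a % 2) for a in range(4)},                    # <(1,1)>
}

def h(S):
    # Entropy of subset S: h(S) = log2(|G| / |intersection of G_i, i in S|).
    inter = set(G)
    for i in S:
        inter &= subgroups[i]
    return math.log2(len(G) / len(inter))

# Ingleton: h(12)+h(13)+h(14)+h(23)+h(24) >= h(1)+h(2)+h(34)+h(123)+h(124).
lhs = h({1, 2}) + h({1, 3}) + h({1, 4}) + h({2, 3}) + h({2, 4})
rhs = h({1}) + h({2}) + h({3, 4}) + h({1, 2, 3}) + h({1, 2, 4})
print("Ingleton satisfied:", lhs >= rhs - 1e-9)
```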

Abstract:

A noncommutative 2-torus is one of the main toy models of noncommutative geometry, and a noncommutative n-torus is a straightforward generalization of it. In 1980, Pimsner and Voiculescu [17] described a 6-term exact sequence that allows for the computation of the K-theory of noncommutative tori. It follows that both the even and odd K-groups of n-dimensional noncommutative tori are free abelian groups on 2^(n-1) generators. In 1981, the Powers-Rieffel projector was described [19], which, together with the class of the identity, generates the even K-theory of noncommutative 2-tori. In 1984, Elliott [10] computed the trace and Chern character on these K-groups. According to Rieffel [20], the odd K-theory of a noncommutative n-torus coincides with the group of connected components of the invertible elements of the algebra. In particular, generators of K-theory can be chosen to be invertible elements of the algebra. In Chapter 1, we derive an explicit formula for the first nontrivial generator of the odd K-theory of noncommutative tori. This gives the full set of generators for the odd K-theory of noncommutative 3-tori and 4-tori.
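For reference, the defining relations of the noncommutative n-torus A_theta and the K-groups just described can be written as follows (standard notation, not quoted verbatim from the thesis):

```latex
\[
  U_k\,U_j \;=\; e^{2\pi i\,\theta_{jk}}\,U_j\,U_k ,
  \qquad 1 \le j,k \le n,
\]
\[
  K_0(A_\theta)\;\cong\;K_1(A_\theta)\;\cong\;\mathbb{Z}^{\,2^{\,n-1}}.
\]
```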

In Chapter 2, we apply the graded-commutative framework of differential geometry to the polynomial subalgebra of the noncommutative torus algebra. We use the framework of differential geometry described in [27], [14], [25], [26]. In order to apply this framework to the noncommutative torus, the notion of a graded-commutative algebra has to be generalized: the "signs" should be allowed to take values in U(1), rather than just {-1,1}. Such a generalization is well known (see, e.g., [8] in the context of linear algebra). We reformulate the relevant results of [27], [14], [25], [26] using this extended notion of sign. We show how this framework can be used to construct differential operators, differential forms, and jet spaces on noncommutative tori. Then we compare the constructed differential forms to the ones obtained from the spectral triple of the noncommutative torus. Sections 2.1-2.3 recall the basic notions from [27], [14], [25], [26], with the required change of the notion of "sign". In Section 2.4, we apply these notions to the polynomial subalgebra of the noncommutative torus algebra. This polynomial subalgebra is similar to a free graded-commutative algebra. We show that, when restricted to the polynomial subalgebra, Connes' construction of differential forms gives the same answer as the one obtained from graded-commutative differential geometry. One may try to extend these notions to the smooth noncommutative torus algebra, but this was not done in this work.

A reconstruction of the Beilinson-Bloch regulator (for curves) via Fredholm modules was given by Eugene Ha in [12]. However, the proof in [12] contains a critical gap; in Chapter 3, we close this gap. More specifically, we do so by obtaining some technical results and by proving Property 4 of Section 3.7 (see Theorem 3.9.4), which implies that such a reformulation is indeed possible. The main motivation for this reformulation is the longer-term goal of finding possible analogues of the second K-group (in the context of algebraic geometry and K-theory of rings) and of the regulators for noncommutative spaces. This work should be seen as a necessary preliminary step toward that goal.

For the convenience of the reader, we also give a short description of the results from [12], as well as some background material on central extensions and Connes-Karoubi character.

Abstract:

We will prove that, for a 2- or 3-component L-space link, HFL^- is completely determined by the multi-variable Alexander polynomials of all the sublinks of L, as well as the pairwise linking numbers of all the components of L. We will also give some restrictions on the multi-variable Alexander polynomial of an L-space link. Finally, we use the methods in this paper to prove a conjecture of Yajing Liu classifying all 2-bridge L-space links.