44 results for Prove
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
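As an illustration of the SNR-dependent attenuation the "noise filters" apply at each harmonic, the following minimal Python sketch uses a Wiener-style gain per frequency bin; the flat noise-power model and the function name are assumptions for illustration, not the thesis's actual routine.

```python
import numpy as np

def noise_filtered_accelerogram(accel, noise_power):
    """Attenuate each harmonic of a digitized accelerogram in proportion to its
    estimated signal-to-noise ratio (a Wiener-style 'noise filter' sketch)."""
    spectrum = np.fft.rfft(accel)
    measured_power = np.abs(spectrum) ** 2
    # Estimated SNR per harmonic, assuming a flat (white) noise power.
    snr = np.maximum(measured_power / noise_power - 1.0, 0.0)
    gain = snr / (1.0 + snr)  # near 1 where signal dominates, near 0 where noise dominates
    return np.fft.irfft(gain * spectrum, n=len(accel))
```

Even with such a gain, the correction at low frequencies remains small, which is consistent with the abstract's observation that noise filtering alone does not remove the drifts in the integrated time histories.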
Abstract:
The ability to regulate gene expression is of central importance for the adaptability of living organisms to changes in their internal and external environment. At the transcriptional level, binding of transcription factors (TFs) in the vicinity of promoters can modulate the rate at which transcripts are produced, and as such plays an important role in gene regulation. TFs with regulatory action at multiple promoters are the rule rather than the exception, with examples ranging from TFs like the cAMP receptor protein (CRP) in E. coli that regulates hundreds of different genes, to situations involving multiple copies of the same gene, such as on plasmids or viral DNA. When the number of TFs greatly exceeds the number of binding sites, TF binding to each promoter can be regarded as independent. However, when the number of TF molecules is comparable to the number of binding sites, TF titration will result in coupling ("entanglement") between transcription of different genes. The last few decades have seen rapid advances in our ability to quantitatively measure such effects, which calls for biophysical models to explain these data. Here we develop a statistical mechanical model which takes the TF titration effect into account and use it to predict both the level of gene expression and the resulting correlation in transcription rates for a general set of promoters. To test these predictions experimentally, we create genetic constructs with known TF copy number, binding site affinities, and gene copy number, hence avoiding the need for free fit parameters. Our results clearly prove the TF titration effect and show that the statistical mechanical model can accurately predict the fold change in gene expression for the studied cases. We also generalize these experimental efforts to cover systems with multiple different genes, using the method of mRNA fluorescence in situ hybridization (FISH). Interestingly, we can use the TF titration effect as a tool to measure the plasmid copy number at different points in the cell cycle, as well as the plasmid copy number variance. Finally, we investigate the strategies of transcriptional regulation used in a real organism by analyzing the thousands of known regulatory interactions in E. coli. We introduce a "random promoter architecture model" to identify overrepresented regulatory strategies, such as TF pairs which coregulate the same genes more frequently than would be expected by chance, indicating a related biological function. Furthermore, we investigate whether promoter architecture has a systematic effect on gene expression by linking the regulatory data of E. coli to genome-wide expression censuses.
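A hedged sketch of the kind of statistical mechanical titration calculation described above: P repressor-like TFs are partitioned between N identical target promoters and a nonspecific genomic background, and the fold change follows from the average occupancy. The simple-repression reading of fold change, the flat nonspecific background, and all names are illustrative assumptions, not the constructs or parameters used in the thesis.

```python
from math import lgamma, exp

def log_binom(n, k):
    """log of the binomial coefficient C(n, k), valid for large (even non-integer) n."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fold_change(P, N, dE, N_NS=4.6e6):
    """Fold change in expression when P TF copies are shared by N identical
    target promoters, relative to no TF.  dE is the specific-minus-nonspecific
    binding energy in kT units (negative = specific binding is stronger).
    Assumes a repressor that fully blocks transcription when bound and weak
    RNAP binding."""
    log_weights, counts = [], []
    for n in range(0, min(P, N) + 1):          # n TFs bound to specific sites
        log_weights.append(log_binom(N, n) + log_binom(N_NS, P - n) - n * dE)
        counts.append(n)
    m = max(log_weights)
    weights = [exp(w - m) for w in log_weights]
    mean_bound = sum(n * w for n, w in zip(counts, weights)) / sum(weights)
    p_bound = mean_bound / N                   # occupancy of any one promoter
    return 1.0 - p_bound
```

In a model of this form, raising N at fixed P visibly raises the fold change of each promoter, which is the titration-induced coupling between genes that the abstract refers to.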
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD which leads to orders-of-magnitude speedups over other methods.
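The greedy step at the heart of such a design loop can be sketched compactly. Below is a simplified, noiseless version of the EC2 objective (the expected weight of "disagreement" edges cut between hypotheses belonging to different theories); the data layout and names are illustrative and omit the noise handling and accelerated greedy machinery of BROAD.

```python
import itertools

def expected_ec2_gain(prior, theory, predictions):
    """Expected weight of cut edges for one candidate test (noiseless EC2 sketch).
    prior[h]: belief in hypothesis h; theory[h]: which theory h belongs to;
    predictions[h]: the choice hypothesis h predicts for this test."""
    def edge_weight(hyps):
        # weight of edges between hypotheses that belong to different theories
        return sum(prior[i] * prior[j]
                   for i, j in itertools.combinations(hyps, 2)
                   if theory[i] != theory[j])
    all_hyps = range(len(prior))
    total = edge_weight(all_hyps)
    gain = 0.0
    for outcome in set(predictions):
        consistent = [h for h in all_hyps if predictions[h] == outcome]
        p_outcome = sum(prior[h] for h in consistent)
        gain += p_outcome * (total - edge_weight(consistent))
    return gain

def choose_next_test(prior, theory, prediction_table):
    """Greedy step: pick the test whose observation cuts the most edge weight
    in expectation.  prediction_table[t][h] = hypothesis h's prediction on test t."""
    gains = [expected_ec2_gain(prior, theory, preds) for preds in prediction_table]
    return gains.index(max(gains))
```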
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
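One way such a loss-averse discrete choice model can be written down, purely as an illustrative sketch: a multinomial logit whose price term is reference-dependent, with losses (prices above the reference) weighted more heavily than equal-sized gains. The functional form and parameter names are assumptions, not the specification estimated on the retailer's data.

```python
import numpy as np

def choice_probabilities(prices, ref_prices, alpha, eta, lam):
    """Logit choice probabilities with a reference-dependent price term.
    alpha: ordinary price sensitivity; eta: weight of the gain-loss term;
    lam > 1: loss aversion, so price increases above the reference hurt more
    than equal-sized discounts help."""
    gains = np.maximum(ref_prices - prices, 0.0)   # discounts relative to reference
    losses = np.maximum(prices - ref_prices, 0.0)  # surcharges relative to reference
    utility = -alpha * prices + eta * gains - eta * lam * losses
    expu = np.exp(utility - utility.max())
    return expu / expu.sum()
```

In a specification of this shape, ending a discount turns the formerly discounted item's price term into a "loss" and shifts predicted demand toward its substitutes, which is the asymmetry described above.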
In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?
We make progress toward understanding these questions by studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer but cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer but likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.
Abstract:
Using the correction terms in Heegaard Floer homology, we prove that if a knot in S^3 admits a positive integral T-, O-, or I-type surgery, it must have the same knot Floer homology as one of the knots given in our complete list, and the resulting manifold is orientation-preservingly homeomorphic to the p-surgery on the corresponding knot.
Abstract:
Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, it is expected that a large number of small residential loads such as air conditioners, dishwashers, and electric vehicles will also participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time, and can thus be used (in aggregate) to compensate for the random fluctuations in renewable generation.
In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model predictive in nature, i.e., at every time step, the algorithm minimizes the expected variance-to-go with updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
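As a rough illustration of one model-predictive step of this kind, the sketch below spreads an aggregate deferrable energy budget over the look-ahead window so as to flatten the predicted net load; it is a centralized water-filling toy, not the distributed algorithm analyzed in the thesis.

```python
import numpy as np

def mpc_step(net_load_forecast, energy_remaining, tol=1e-9):
    """One model-predictive step: minimize sum_t (net_load_t + x_t)^2 subject to
    sum_t x_t = energy_remaining and x_t >= 0, by water filling.  Only x[0] is
    applied; the rest is re-planned at the next step with updated forecasts."""
    lo = net_load_forecast.min()
    hi = net_load_forecast.max() + energy_remaining
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        if np.maximum(level - net_load_forecast, 0.0).sum() > energy_remaining:
            hi = level
        else:
            lo = level
    return np.maximum(0.5 * (lo + hi) - net_load_forecast, 0.0)
```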
Abstract:
On the materials scale, thermoelectric efficiency is defined by the dimensionless figure of merit zT. This value is made up of three material components in the form zT = Tα²/ρκ, where α is the Seebeck coefficient, ρ is the electrical resistivity, and κ is the total thermal conductivity. Therefore, improving zT requires reducing κ and ρ while increasing α. However, due to the interrelation of the electrical and thermal properties of materials, typical routes to thermoelectric enhancement come in one of two forms. The first is to isolate the electronic properties and increase α without negatively affecting ρ. Techniques like electron filtering, quantum confinement, and density-of-states distortions have been proposed to enhance the Seebeck coefficient in thermoelectric materials. However, it has been difficult to prove the efficacy of these techniques. More recently, efforts to manipulate the band degeneracy in semiconductors have been explored as a means to enhance α.
The other route to thermoelectric enhancement is through minimizing the thermal conductivity, κ. More specifically, the thermal conductivity can be broken into two parts, an electronic and a lattice term, κe and κl respectively. From a functional-materials standpoint, a reduction in the lattice thermal conductivity should have a minimal effect on the electronic properties. Most routes therefore incorporate techniques that focus on reducing the lattice thermal conductivity. The components that make up κl (κl = (1/3)Cνl) are the heat capacity (C), the phonon group velocity (ν), and the phonon mean free path (l). Since altering the heat capacity and group velocity is extremely difficult, the phonon mean free path is most often the target of reduction.
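The two defining relations above are simple enough to compute directly; the following small helpers (with illustrative SI units and a hypothetical example) make the arithmetic concrete.

```python
def figure_of_merit(T, alpha, rho, kappa):
    """zT = T * alpha^2 / (rho * kappa): T in K, alpha (Seebeck) in V/K,
    rho (resistivity) in ohm*m, kappa (total thermal conductivity) in W/(m*K)."""
    return T * alpha**2 / (rho * kappa)

def lattice_thermal_conductivity(C, v, l):
    """kappa_l = (1/3) * C * v * l: C is the volumetric heat capacity in J/(m^3*K),
    v the phonon group velocity in m/s, l the phonon mean free path in m."""
    return C * v * l / 3.0

# A hypothetical material at 600 K with alpha = 200 uV/K, rho = 1e-5 ohm*m,
# and kappa = 1.5 W/(m*K) would have zT = 1.6.
print(figure_of_merit(600.0, 200e-6, 1e-5, 1.5))
```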
Past routes to decreasing the phonon mean free path have been alloying and grain size reduction. In alloying, however, the electron mobility is often negatively affected, because any perturbation to the periodic potential can cause additional, adverse carrier scattering. Grain size reduction has been another successful route to enhancing zT because of the significant difference in electron and phonon mean free paths. However, grain size reduction is erratic in anisotropic materials due to their orientation-dependent transport properties. Microstructure formation in both equilibrium and nonequilibrium processing routes can instead be used to effectively reduce the phonon mean free path as a route to enhancing the figure of merit.
This work starts with a discussion of several different deliberate microstructure varieties. Control of the morphology and, finally, the structure size and spacing is discussed at length. Since the material example used throughout this thesis is anisotropic, a short primer on zone melting is presented as an effective route to growing homogeneous and oriented polycrystalline material. The resulting microstructure formation and control are presented specifically for the case of In2Te3-Bi2Te3 composites, and the transport properties pertinent to thermoelectric materials are presented. Finally, the transport properties of iodine-doped Bi2Te3 are presented and discussed as a re-evaluation of the literature data in light of what is known today.
Abstract:
Nicotinic receptors are the target of nicotine in the brain. They are pentameric ion channels, and the pentameric structure allows many receptor subtypes to be formed from different subunit combinations. These various subtypes exhibit specific properties determined by their subunit composition. Each brain region contains a fixed complement of nicotinic receptor subunits. The midbrain region is of particular interest because its dopaminergic neurons express several subtypes of nicotinic receptors, and these dopaminergic neurons are important for the rewarding effects of nicotine. The α6 nicotinic receptor subunit has garnered intense interest because it is present in dopaminergic neurons but in very few other brain regions. With its specific and limited presence in the brain, targeting this subtype of nicotinic receptor may prove advantageous as a method for smoking cessation. However, we do not fully understand the trafficking and membrane localization of this receptor or its effects on dopamine release in the striatum. We hypothesized that lynx1, a known modulator of other nicotinic receptor subtypes, is important for the proper function of α6 nicotinic receptors. lynx1 has been found to act upon several classes of nicotinic receptors, such as α4β2 and α7, the two most common subtypes in the brain. To determine whether lynx1 affects α6-containing nicotinic receptors, we used biochemistry, patch-clamp electrophysiology, fast-scan cyclic voltammetry, and mouse behavior. We found that lynx1 has effects on α6-containing nicotinic receptors, but the effects were subtle. This thesis will detail the observed effects of lynx1 on α6 nicotinic receptors.
Abstract:
I. Foehn winds of southern California.
An investigation of the hot, dry and dust-laden winds occurring in the late fall and early winter in the Los Angeles Basin, and attributed in the past to the influences of the desert regions to the north, revealed that these currents were of a foehn nature. Their properties were found to be entirely due to dynamical heating produced in the descent from the high-level areas in the interior to the lower Los Angeles Basin. Any dust associated with the phenomenon was found to be acquired from the Los Angeles area rather than transported from the desert. It was found that the frequency of occurrence of a mild type foehn of this nature during this season was sufficient to warrant its classification as a winter monsoon. This results from the topography of the Los Angeles region, which allows an easy entrance to the air from the interior by virtue of the low-level mountain passes north of the area. This monsoon provides the mild winter climate of southern California, since temperatures associated with the foehn currents are far higher than those experienced when maritime air from the adjacent Pacific Ocean occupies the region.
II. Foehn wind cyclogenesis.
Intense anticyclones frequently build up over the high-level regions of the Great Basin and Columbia Plateau, which lie between the Sierra Nevada and Cascade Mountains to the west and the Rocky Mountains to the east. The outflow from these anticyclones produces extensive foehns east of the Rockies in the comparatively low-level areas of the Middle West and the Canadian provinces of Alberta and Saskatchewan. Normally at this season of the year very cold polar continental air masses are present over this territory, and with the occurrence of these foehns marked discontinuity surfaces arise between the warm foehn current, which is obliged to slide over a colder mass, and the Pc air to the east. Cyclones are easily produced from this phenomenon and take the form of unstable waves which propagate along the discontinuity surface between the two dissimilar masses. A continual series of such cyclones was found to occur as long as the Great Basin anticyclone was maintained with undiminished intensity.
III. Weather conditions associated with the Akron disaster.
This situation illustrates the speedy development and propagation of young disturbances in the eastern United States during the spring of the year under the influence of the conditionally unstable tropical maritime air masses which characterise the region. It also furnishes an excellent example of the superiority of air mass and frontal methods of weather prediction for aircraft operation over the older methods based upon pressure distribution.
IV. The Los Angeles storm of December 30, 1933 to January 1, 1934.
This discussion points out some of the fundamental interactions occurring between air masses of the North Pacific Ocean in connection with Pacific Coast storms, and the value of topographic and aerological considerations in predicting them. Estimates of rainfall intensity and duration from analyses of this type may be made and would prove very valuable in the Los Angeles area in connection with flood control problems.
Abstract:
In the five chapters that follow, I delineate my efforts over the last five years to synthesize structurally and chemically relevant models of the Oxygen Evolving Complex (OEC) of Photosystem II. The OEC is nature's only water oxidation catalyst: it forms the dioxygen in our atmosphere necessary for oxygenic life. Therefore, understanding its structure and function is of deep fundamental interest and could provide design elements for artificial photosynthesis and man-made water oxidation catalysts. Synthetic endeavors towards OEC mimics have been an active area of research since the mid-1970s and have evolved in tandem with biochemical and spectroscopic studies, affording ever-refined proposals for the structure of the OEC and the mechanism of water oxidation. This research has culminated in the most recent proposal: a low-symmetry Mn4CaO5 cluster with a distorted Mn3CaO4 cubane bridged to a fourth, dangling Mn. To give context for how my graduate work fits into this rich history of OEC research, Chapter 1 provides a historical timeline of proposals for the OEC structure, emphasizing the role that synthetic Mn and MnCa clusters have played, and ending with our Mn3CaO4 heterometallic cubane complexes.
In Chapter 2, the triarylbenzene ligand framework used throughout my work is introduced, and trinuclear clusters of Mn, Co, and Ni are discussed. The ligand scaffold consistently coordinates three metals in close proximity while leaving coordination sites open for further modification through ancillary ligand binding. The ancillary ligands could be varied, with a range of carboxylates and some less-coordinating anions studied. These complexes' structures, magnetic behavior, and redox properties are discussed.
Chapter 3 explores the redox chemistry of the trimanganese system more thoroughly in the presence of a fourth Mn equivalent, finding a range of oxidation states and oxide incorporation dependent on oxidant, solvent, and Mn salt. Oxidation states from MnII4 to MnIIIMnIV3 were observed, with 1-4 O2– ligands incorporated, modeling the photoactivation of the OEC. These complexes were studied by X-ray diffraction, EPR, XAS, magnetometry, and CV.
As Ca2+ is a necessary component of the OEC, Chapter 4 discusses synthetic strategies for making highly structurally accurate models of the OEC containing both Mn and Ca in the Mn3CaO4 cubane + dangling Mn geometry. Structural and electrochemical characterization of the first Mn3CaO4 heterometallic cubane complex—and comparison to an all-Mn Mn4O4 analog—suggests a role for Ca2+ in the OEC. Modification of the Mn3CaO4 system by ligand substitution affords low-symmetry Mn3CaO4 complexes that are the most accurate models of the OEC to date.
Finally, in Chapter 5 the reactivity of the Mn3CaO4 cubane complexes toward O-atom transfer is discussed. The metal M strongly affects the reactivity. The mechanisms of O-atom transfer and water incorporation from and into Mn4O4 and Mn4O3 clusters, respectively, are studied through computation and 18O-labeling studies. The μ3-oxos of the Mn4O4 system prove fluxional, lending support to proposals of O2– fluxionality within the OEC.
Abstract:
The Supreme Court’s decision in Shelby County has severely limited the power of the Voting Rights Act. I argue that Congressional attempts to pass a new coverage formula are unlikely to gain the necessary Republican support. Instead, I propose a new strategy that takes a “carrot and stick” approach. As the stick, I suggest amending Section 3 to eliminate the need to prove that discrimination was intentional. For the carrot, I envision a competitive grant program similar to the highly successful Race to the Top education grants. I argue that this plan could pass the currently divided Congress.
Without Congressional action, Section 2 is more important than ever before. A successful Section 2 suit requires evidence that voting in the jurisdiction is racially polarized. Accurately and objectively assessing the level of polarization has been and continues to be a challenge for experts. Existing ecological inference methods require estimating polarization levels in individual elections. This is a problem because the Courts want to see a history of polarization across elections.
I propose a new two-step method to estimate racially polarized voting in a multi-election context. The procedure builds upon the Rosen, Jiang, King, and Tanner (2001) multinomial-Dirichlet model. After obtaining election-specific estimates, I suggest regressing those results on election-specific variables, namely candidate quality, incumbency, and the ethnicity of the minority candidate of choice. This allows researchers to estimate the baseline level of support for candidates of choice and to test whether the ethnicity of the candidates affected how voters cast their ballots.
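A minimal sketch of the proposed second stage, assuming the election-specific support estimates from the multinomial-Dirichlet first stage are already in hand; the plain OLS fit, the variable names, and the covariate coding are illustrative only.

```python
import numpy as np

def second_stage(support, candidate_quality, incumbency, coethnic_candidate):
    """Regress election-specific estimates of minority support for the candidate
    of choice on election-level covariates.  The intercept is read as a baseline
    level of polarized support; the coethnic coefficient tests whether the
    candidate's ethnicity shifts how voters cast their ballots."""
    X = np.column_stack([np.ones_like(support), candidate_quality,
                         incumbency, coethnic_candidate])
    coef, *_ = np.linalg.lstsq(X, support, rcond=None)
    return dict(zip(["baseline", "quality", "incumbency", "coethnic"], coef))
```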
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On the one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for these uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
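A toy sketch in the spirit of such a distributed gradient descent, assuming a coordinator that broadcasts the aggregate load once per iteration and loads that each hold a fixed energy requirement and a rate limit; the step size, the projection, and all names are illustrative, not the thesis's Algorithm 1.

```python
import numpy as np

def project_to_energy(profile, energy, rate_max):
    """Project a tentative consumption profile onto {0 <= x <= rate_max,
    sum(x) = energy} by bisecting on a uniform shift (assumes the energy
    requirement is feasible, i.e. energy <= rate_max * len(profile))."""
    lo, hi = -profile.max(), rate_max - profile.min()
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if np.clip(profile + mu, 0.0, rate_max).sum() > energy:
            hi = mu
        else:
            lo = mu
    return np.clip(profile + 0.5 * (lo + hi), 0.0, rate_max)

def distributed_valley_filling(base_load, loads, iterations=15, gamma=0.1):
    """Each deferrable load repeatedly takes a projected gradient step toward
    flattening the broadcast aggregate load.  `loads` is a list of
    (energy, rate_max) pairs; `base_load` is the non-deferrable net demand."""
    profiles = [np.zeros_like(base_load, dtype=float) for _ in loads]
    for _ in range(iterations):
        aggregate = base_load + sum(profiles)           # broadcast once per iteration
        for i, (energy, rate_max) in enumerate(loads):
            tentative = profiles[i] - gamma * aggregate  # gradient of 0.5*||aggregate||^2
            profiles[i] = project_to_energy(tentative, energy, rate_max)
    return profiles
```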
We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model predictive control: Algorithm 2 uses updated predictions of renewable generation as the true values, and computes a pseudo load to represent future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and one that seeks a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to zero for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70x speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
Abstract:
This thesis is divided into three chapters. In the first chapter we study the smooth sets with respect to a Borel equivalence relation E on a Polish space X. The collection of smooth sets forms a σ-ideal. We think of smooth sets as analogs of countable sets, and we show that an analog of the perfect set theorem for Σ^1_1 sets holds in the context of smooth sets. We also show that the collection of Σ^1_1 smooth sets is Π^1_1 on the codes. The analogs of thin sets are called sparse sets. We prove that there is a largest Π^1_1 sparse set, and we give a characterization of it. We show that in L there is a Π^1_1 sparse set which is not smooth. These results are analogs of the results known for the ideal of countable sets, but it remains open to determine whether large cardinal axioms imply that Π^1_1 sparse sets are smooth. Some more specific results are proved for the case of a countable Borel equivalence relation. We also study I(E), the σ-ideal of closed E-smooth sets. Among other things, we prove that E is smooth iff I(E) is Borel.
In chapter 2 we study σ-ideals of compact sets. We are interested in the relationship between some descriptive set theoretic properties, like thinness, strong calibration, and the covering property. We also study products of σ-ideals from the same point of view. In chapter 3 we show that if a σ-ideal I has the covering property (which is an abstract version of the perfect set theorem for Σ^1_1 sets), then there is a largest Π^1_1 set in I_int (i.e., a largest Π^1_1 set every closed subset of which is in I). For σ-ideals on 2^ω we present a characterization of this set in a similar way as for C_1, the largest thin Π^1_1 set. As a corollary we get that if there are only countably many reals in L, then the covering property holds for Σ^1_2 sets.
Abstract:
We prove that, for a 2- or 3-component L-space link L, HFL^- is completely determined by the multi-variable Alexander polynomials of all the sublinks of L, together with the pairwise linking numbers of all the components of L. We also give some restrictions on the multi-variable Alexander polynomial of an L-space link. Finally, we use the methods in this paper to prove a conjecture of Yajing Liu classifying all 2-bridge L-space links.