13 results for DETERMINES in CaltechTHESIS


Relevance: 10.00%

Publisher:

Abstract:

The first thesis topic is a perturbation method for resonantly coupled nonlinear oscillators. By successive near-identity transformations of the original equations, one obtains new equations with simple structure that describe the long time evolution of the motion. This technique is related to two-timing in that secular terms are suppressed in the transformation equations. The method has some important advantages. Appropriate time scalings are generated naturally by the method and do not need to be guessed as in two-timing. Furthermore, by continuing the procedure to higher order, one extends (formally) the time scale of valid approximation. Examples illustrate these claims. Using this method, we investigate resonance in conservative, non-conservative, and time-dependent problems. Each example is chosen to highlight a certain aspect of the method.

The second thesis topic concerns the coupling of nonlinear chemical oscillators. The first problem is the propagation of chemical waves of an oscillating reaction in a diffusive medium. Using two-timing, we derive a nonlinear equation that determines how spatial variations in the phase of the oscillations evolve in time. This result is the key to understanding the propagation of chemical waves. In particular, we use it to account for certain experimental observations on the Belousov-Zhabotinskii reaction.

Next, we analyse the interaction between a pair of coupled chemical oscillators. This time, we derive an equation for the phase shift, which measures how much the oscillators are out of phase. This result is the key to understanding M. Marek's and I. Stuchl's results on coupled reactor systems. In particular, our model accounts for synchronization and its bifurcation into rhythm splitting.
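The specific phase-shift equation derived here is not reproduced in the abstract; as a rough illustration of the locking/rhythm-splitting dichotomy it describes, the sketch below integrates a generic Adler-type equation dψ/dt = Δω − K sin ψ (a standard stand-in, not the thesis's equation), whose long-time behavior switches from phase locking to phase drift as the detuning Δω crosses the coupling strength K.

```python
# Illustrative only: a generic Adler-type phase-difference equation, not the
# specific equation derived in the thesis.
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0  # coupling strength (hypothetical)

def phase_shift_rhs(t, psi, delta_omega):
    # dpsi/dt = detuning - coupling * sin(phase shift)
    return delta_omega - K * np.sin(psi)

for delta_omega in (0.5, 1.5):   # below / above the locking threshold K
    sol = solve_ivp(phase_shift_rhs, (0.0, 50.0), [0.0], args=(delta_omega,))
    regime = "locked (synchronized)" if delta_omega < K else "drifting (rhythm splitting)"
    print(f"delta_omega = {delta_omega}: psi(50) = {sol.y[0, -1]:.2f} -> {regime}")
```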

Finally, we analyse large systems of coupled chemical oscillators. Using a continuum approximation, we demonstrate mechanisms that cause auto-synchronization in such systems.

Relevance: 10.00%

Publisher:

Abstract:

This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be an incidence matrix with edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
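The general formula itself is stated in the thesis; as a small illustration of the kind of object involved, the sketch below builds the edge-versus-triangle incidence matrix of the complete graph K_5 (a toy stand-in for N_t(H), with H a triangle) and computes its Smith normal form with SymPy. The choice of K_5 and of H is purely illustrative.

```python
# Toy example: edges of K_5 versus triangles of K_5, as a stand-in for N_t(H).
from itertools import combinations
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

vertices = range(5)
edges = list(combinations(vertices, 2))        # 10 edges
triangles = list(combinations(vertices, 3))    # 10 triangles

# Incidence matrix: entry (i, j) is 1 if edge i lies in triangle j.
N = Matrix(len(edges), len(triangles),
           lambda i, j: int(set(edges[i]) <= set(triangles[j])))

print(smith_normal_form(N, domain=ZZ))
```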

As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.

One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.

Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.

Relevance: 10.00%

Publisher:

Abstract:

This thesis introduces fundamental equations and numerical methods for manipulating surfaces in three dimensions via conformal transformations. Conformal transformations are valuable in applications because they naturally preserve the integrity of geometric data. To date, however, there has been no clearly stated and consistent theory of conformal transformations that can be used to develop general-purpose geometry processing algorithms: previous methods for computing conformal maps have been restricted to the flat two-dimensional plane, or other spaces of constant curvature. In contrast, our formulation can be used to produce---for the first time---general surface deformations that are perfectly conformal in the limit of refinement. It is for this reason that we commandeer the title Conformal Geometry Processing.

The main contribution of this thesis is analysis and discretization of a certain time-independent Dirac equation, which plays a central role in our theory. Given an immersed surface, we wish to construct new immersions that (i) induce a conformally equivalent metric and (ii) exhibit a prescribed change in extrinsic curvature. Curvature determines the potential in the Dirac equation; the solution of this equation determines the geometry of the new surface. We derive the precise conditions under which curvature is allowed to evolve, and develop efficient numerical algorithms for solving the Dirac equation on triangulated surfaces.

From a practical perspective, this theory has a variety of benefits: conformal maps are desirable in geometry processing because they do not exhibit shear, and therefore preserve textures as well as the quality of the mesh itself. Our discretization yields a sparse linear system that is simple to build and can be used to efficiently edit surfaces by manipulating curvature and boundary data, as demonstrated via several mesh processing applications. We also present a formulation of Willmore flow for triangulated surfaces that permits extraordinarily large time steps and apply this algorithm to surface fairing, geometric modeling, and construction of constant mean curvature (CMC) surfaces.
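The discretized Dirac operator is not reproduced here; the numerical pattern it leads to, finding the eigenvector of a large sparse symmetric system nearest zero, looks roughly like the sketch below. The matrix used is a generic Laplacian-like placeholder, not the actual quaternionic Dirac operator assembled on a triangle mesh.

```python
# Generic pattern only: smallest-magnitude eigenpair of a sparse symmetric
# operator via shift-invert, as arises when solving a discretized equation of
# the form (D - rho) x = 0 on a mesh. The matrix below is a placeholder.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")  # placeholder operator

vals, vecs = spla.eigsh(A, k=1, sigma=0.0, which="LM")  # eigenpair nearest 0
print("smallest eigenvalue:", vals[0])
```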

Relevance: 10.00%

Publisher:

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, from patient zero to the estimated 650,000 to 900,000 Americans now infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment, to the present era of highly active antiretroviral therapy (HAART). Using statistical analysis of clinical data, this paper examines where we were, where we are, and where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets used for the analyses. The primary database was collected by the author from an outpatient HIV clinic and spans 1984 to the present. The second database is the public dataset of the Multicenter AIDS Cohort Study (MACS), which covers the period from 1984 to October 1992. Comparisons are made between the two datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian, so distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection there are high levels of immunosuppression.
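The chapter's analyses are not reproduced here; the basic step of checking a marker's distribution against normality can be sketched as follows, using synthetic, hypothetical CD4 counts (real CD4 data are typically right-skewed and often analyzed on a square-root or log scale).

```python
# Sketch with synthetic data: is a "CD4 count" sample consistent with a Gaussian?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cd4 = rng.gamma(shape=4.0, scale=120.0, size=500)   # hypothetical right-skewed counts (cells/uL)

stat, p = stats.normaltest(cd4)                     # D'Agostino-Pearson omnibus test
print(f"raw counts:     p = {p:.2e}")               # small p => reject normality

stat_s, p_s = stats.normaltest(np.sqrt(cd4))        # common variance-stabilizing transform
print(f"sqrt transform: p = {p_s:.2e}")
```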

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test the efficacy of an antiretroviral regimen. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found to dramatically and significantly reduce HIV RNA levels in the blood when used in combination with other antiretroviral agents. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was the presence of an AIDS-defining illness. A high level of clinical failure, or progression to an endpoint, was found.
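The survival analysis itself is not reproduced here; a minimal Kaplan-Meier estimate of time to an AIDS-defining endpoint, using the lifelines package and entirely synthetic follow-up data, might look like this (all values below are hypothetical, not the thesis cohort).

```python
# Minimal Kaplan-Meier sketch with synthetic follow-up data (not the thesis cohort).
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
n = 213
months_followed = rng.exponential(scale=36.0, size=n)   # hypothetical follow-up times
reached_endpoint = rng.random(n) < 0.4                   # AIDS-defining illness observed?

kmf = KaplanMeierFitter()
kmf.fit(durations=months_followed, event_observed=reached_endpoint,
        label="time to AIDS-defining illness")
print(kmf.survival_function_.tail())
print("median time to endpoint (months):", kmf.median_survival_time_)
```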

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. It first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. The direct lifetime cost of treating each HIV-infected patient with HAART is estimated to be between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved, however, is only $101,000, which is comparable to the incremental cost per year of life saved by coronary artery bypass surgery.

Policymakers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As the patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance: 10.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969, and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
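The actual cost curves are tabulated in the thesis; the structure of this least-cost subproblem (choose control measures that meet emission targets at minimum annualized cost) can be sketched as a small linear program with SciPy. The control options, costs, and targets below are entirely hypothetical; only the 1975 base emission levels are taken from the text.

```python
# Hypothetical least-cost emission control LP (illustrative numbers only).
import numpy as np
from scipy.optimize import linprog

# Candidate controls: annualized cost ($M/yr) and tons/day removed (RHC, NOx).
cost       = np.array([20.0, 35.0, 15.0])      # used cars, aircraft, stationary (hypothetical)
rhc_remove = np.array([250.0, 60.0, 100.0])
nox_remove = np.array([150.0, 40.0, 140.0])

base   = np.array([670.0, 790.0])              # 1975 base emissions (RHC, NOx), tons/day
target = np.array([450.0, 600.0])              # hypothetical emission targets

# Minimize cost.x  subject to  base - R x <= target,  0 <= x <= 1.
A_ub = -np.vstack([rhc_remove, nox_remove])
b_ub = target - base
res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
print("control fractions:", res.x, " annualized cost ($M/yr):", res.fun)
```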

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 10.00%

Publisher:

Abstract:

This thesis has two major parts. The first part of the thesis describes a high energy cosmic ray detector -- the High Energy Isotope Spectrometer Telescope (HEIST). HEIST is a large area (0.25 m² sr) balloon-borne isotope spectrometer designed to make high-resolution measurements of isotopes in the element range from neon to nickel (10 ≤ Z ≤ 28) at energies of about 2 GeV/nucleon. The instrument consists of a stack of 12 NaI(Tl) scintillators, two Cerenkov counters, and two plastic scintillators. Each of the 2-cm thick NaI disks is viewed by six 1.5-inch photomultipliers whose combined outputs measure the energy deposition in that layer. In addition, the six outputs from each disk are compared to determine the position at which incident nuclei traverse each layer to an accuracy of ~2 mm. The Cerenkov counters, which measure particle velocity, are each viewed by twelve 5-inch photomultipliers using light integration boxes.

HEIST-2 determines the mass of individual nuclei by measuring both the change in the Lorentz factor (Δγ) that results from traversing the NaI stack and the energy loss (ΔE) in the stack. Since the total energy of an isotope is given by E = γM, the mass M can be determined as M = ΔE/Δγ. The instrument is designed to achieve a typical mass resolution of 0.2 amu.
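Since E = γM (with c = 1 and M expressed in energy units), the measured energy loss and the change in Lorentz factor give the mass directly; a toy numerical check with made-up values:

```python
# Toy check of M = dE / dgamma; the "measurements" below are made up.
AMU_MEV = 931.5                    # 1 amu in MeV/c^2

delta_E = 26_000.0                 # MeV deposited in the NaI stack (hypothetical)
delta_gamma = 0.50                 # change in Lorentz factor (hypothetical)

mass_amu = delta_E / delta_gamma / AMU_MEV
print(f"estimated mass: {mass_amu:.1f} amu")   # ~55.8 amu, i.e. an iron-group nucleus
```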

The second part of this thesis presents an experimental measurement of the isotopic composition of the fragments from the breakup of high energy 40Ar and 56Fe nuclei. Cosmic ray composition studies rely heavily on semi-empirical estimates of the cross-sections for the nuclear fragmentation reactions which alter the composition during propagation through the interstellar medium. Experimentally measured yields of isotopes from the fragmentation of 40Ar and 56Fe are compared with calculated yields based on semi-empirical cross-section formulae. There are two sets of measurements. The first set, made at the Lawrence Berkeley Laboratory Bevalac using a beam of 287 MeV/nucleon 40Ar incident on a CH2 target, achieves excellent mass resolution (σm ≤ 0.2 amu) for isotopes of Mg through K using a Si(Li) detector telescope. The second set, also made at the Lawrence Berkeley Laboratory Bevalac using a beam of 583 MeV/nucleon 56Fe incident on a CH2 target, resolved Cr, Mn, and Fe fragments with a typical mass resolution of ~0.25 amu using the Heavy Isotope Spectrometer Telescope (HIST), which was later carried into space on ISEE-3 in 1978. The general agreement between calculation and experiment is good, but some significant differences are reported here.

Relevance: 10.00%

Publisher:

Abstract:

Transcription factor p53 is the most commonly altered gene in human cancer. As a redox-active protein in direct contact with DNA, p53 can directly sense oxidative stress through DNA-mediated charge transport. Electron hole transport occurs with a shallow distance dependence over long distances through the π-stacked DNA bases, leading to the oxidation and dissociation of DNA-bound p53. The extent of p53 dissociation depends upon the redox potential of the response element DNA in direct contact with each p53 monomer. The DNA sequence dependence of p53 oxidative dissociation was examined by electrophoretic mobility shift assays using radiolabeled oligonucleotides containing both synthetic and human p53 response elements with an appended anthraquinone photooxidant. Greater p53 dissociation is observed from DNA sequences containing low redox potential purine regions, particularly guanine triplets, within the p53 response element. Using denaturing polyacrylamide gel electrophoresis of irradiated anthraquinone-modified DNA, the DNA damage sites, which correspond to locations of preferred electron hole localization, were determined. The resulting DNA damage preferentially localizes to guanine doublets and triplets within the response element. In the presence of p53, however, oxidative DNA damage is inhibited only at DNA sites within the response element, and therefore in direct contact with p53. From these data, predictions about the sensitivity of human p53-binding sites to oxidative stress, as well as possible biological implications, have been made. On the basis of our data, the guanine pattern within the purine region of each p53-binding site determines the response of p53 to DNA-mediated oxidation, yielding for some sequences the oxidative dissociation of p53 from a distance and thereby providing another potential role for DNA charge transport chemistry within the cell.

To determine whether the change in p53 response element occupancy observed in vitro also occurs in cellulo, chromatin immunoprecipitation (ChIP) and quantitative PCR (qPCR) were used to directly quantify p53 binding to certain response elements in HCT116N cells. The HCT116N cells, which contain wild type p53, were treated with the photooxidant [Rh(phi)2bpy]3+ and with Nutlin-3 to upregulate p53, and were subsequently irradiated to induce oxidative genomic stress. To covalently tether p53 interacting with DNA, the cells were fixed with disuccinimidyl glutarate and formaldehyde. The nuclei of the harvested cells were isolated, sonicated, and immunoprecipitated using magnetic beads conjugated with a monoclonal p53 antibody. The purified immunoprecipitated DNA was then quantified via qPCR and genomic sequencing. The ChIP results varied significantly over ten experimental trials, but one overall trend is observed: greater variation of p53 occupancy is observed in response elements from which oxidative dissociation would be expected, while significantly less change in p53 occupancy occurs for response elements from which oxidative dissociation would not be anticipated.
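The qPCR quantification step of a ChIP experiment is commonly reported as percent of input; a generic sketch of that calculation (the standard formula, not necessarily the exact pipeline or Ct values used here) is:

```python
# Generic ChIP-qPCR "percent of input" calculation; Ct values are hypothetical.
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent of input chromatin recovered in the IP for one amplicon."""
    # Adjust the input Ct for the fraction of chromatin saved as input (e.g. 1%).
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

# Hypothetical Ct values for one p53 response element, with and without photooxidation.
print(percent_input(ct_ip=26.8, ct_input=24.0))   # untreated
print(percent_input(ct_ip=28.5, ct_input=24.1))   # after oxidative stress
```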

The chemical oxidation of transcription factor p53 via DNA CT was also investigated at the amino acid level. Transcription factor p53 plays a critical role in the cellular response to stress stimuli, which may be modulated through the redox modulation of conserved cysteine residues within the DNA-binding domain. The residues within p53 that enable oxidative dissociation are investigated here. Of the 8 mutants studied by electrophoretic mobility shift assay (EMSA), only the C275S mutation significantly decreased the protein affinity (KD) for the Gadd45 response element. EMSA experiments on p53 oxidative dissociation, promoted by photoexcitation of anthraquinone-tethered Gadd45 oligonucleotides, were used to determine the influence of p53 mutations on oxidative dissociation; the C275S mutation severely attenuates oxidative dissociation, while C277S substantially attenuates dissociation. Differential thiol labeling was used to determine the oxidation states of cysteine residues within p53 after DNA-mediated oxidation. Reduced cysteines were iodoacetamide labeled, while oxidized cysteines participating in disulfide bonds were 13C2D2-iodoacetamide labeled. Intensities of the respective iodoacetamide-modified peptide fragments were analyzed using a QTRAP 6500 LC-MS/MS system, quantified with Skyline, and directly compared. A distinct shift in peptide labeling toward 13C2D2-iodoacetamide-labeled cysteines is observed in oxidized samples compared with the respective controls. All of the observable cysteine residues trend toward the heavy label under conditions of DNA CT, indicating the formation of multiple disulfide bonds, potentially among C124, C135, C141, C182, C275, and C277. Based on these data, it is proposed that disulfide formation involving C275 is critical for inducing oxidative dissociation of p53 from DNA.

Relevance: 10.00%

Publisher:

Abstract:

Oxygenic photosynthesis fundamentally transformed our planet by releasing molecular oxygen and altering major biogeochemical cycles, and this exceptional metabolism relies on a redox-active cubane cluster of four manganese atoms. Not only is manganese essential for producing oxygen, it is also oxidized only by oxygen and oxygen-derived species. Thus the history of manganese oxidation provides a valuable perspective on our planet's environmental past, the ancient availability of oxygen, and the evolution of oxygenic photosynthesis. Broadly, the geologic record of manganese deposition is a chronicle of ancient manganese oxidation: manganese is introduced into the fluid Earth as Mn(II) and remains only a trace component in sedimentary rocks until it is oxidized, forming insoluble Mn(III,IV) precipitates that become concentrated in the rock record. Because these manganese oxides are highly favorable electron acceptors, they often undergo reduction in sediments through anaerobic respiration and abiotic reaction pathways.

The following dissertation presents five chapters investigating manganese cycling, both by examining ancient examples of manganese enrichments in the geologic record and by exploring the mineralogical products of various pathways of manganese oxide reduction that may occur in sediments. The first chapter explores the mineralogical record of manganese and reports abundant manganese reduction recorded in six representative manganese-enriched sedimentary sequences. The second chapter further analyzes the earliest significant manganese deposit, 2.4 billion years ago, and determines that it predated the origin of oxygenic photosynthesis, thus supporting manganese-oxidizing photosynthesis as an evolutionary precursor to oxygenic photosynthesis. The lack of oxygen during this early manganese deposition was partially established using oxygen-sensitive detrital grains, so a third chapter delves into what these grains mean for oxygen constraints using a mathematical model. The fourth chapter returns to processes affecting manganese post-deposition and explores the relationships between manganese mineral products and (bio)geochemical reduction processes to understand how various manganese minerals can reveal ancient environmental conditions and biological metabolisms. Finally, a fifth chapter considers whether manganese can be mobilized and enriched in sedimentary rocks and determines that manganese was concentrated secondarily in a 2.5 billion-year-old example from South Africa. Overall, this thesis demonstrates how microbial processes, namely photosynthesis and metal oxide-reducing metabolisms, are linked to and recorded in the rich complexity of the manganese mineralogical record.

Relevance: 10.00%

Publisher:

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution. This optimal training distribution depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. Benefits of using this distribution are exemplified in both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weights on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, for a given set of weights, whether the out-of-sample performance will improve or not in a practical setting. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
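The Targeted Weighting algorithm is not reproduced here; for context, the standard covariate-shift correction it reasons about reweights each training point by an estimate of p_test(x)/p_train(x), for example obtained by discriminating test from training inputs:

```python
# Background sketch: standard importance weighting for covariate shift
# (not the thesis's Targeted Weighting algorithm).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 1))          # labeled training inputs
X_test_unlabeled = rng.normal(loc=1.0, scale=1.0, size=(500, 1))  # unlabeled test inputs

# Estimate w(x) ~ p_test(x) / p_train(x) by classifying test vs. training points.
X = np.vstack([X_train, X_test_unlabeled])
z = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test_unlabeled))])
clf = LogisticRegression().fit(X, z)
p_test_given_x = clf.predict_proba(X_train)[:, 1]
weights = p_test_given_x / (1.0 - p_test_given_x)

# These weights would then be passed to a learner, e.g. fit(X, y, sample_weight=weights).
print(weights[:5])
```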

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset, the Netflix dataset. Their favorable computational complexity is the main advantage over previous algorithms proposed in the covariate shift literature.

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example according to their genetic line.

Relevance: 10.00%

Publisher:

Abstract:

We are at the cusp of a historic transformation of both our communication systems and our electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare the newly proposed algorithm with existing MP-TCP algorithms.

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semi-definite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed form solutions for these subproblems, which speeds up convergence by a factor of 1000 in simulations. For unbalanced networks, the subproblems reduce to either closed form solutions or eigenvalue problems whose size remains constant as the network scales up, and the computation time is reduced by a factor of 100 compared with iterative methods.
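The ADMM decomposition itself is not reproduced here; for a single line serving one load, the SOCP relaxation of the branch-flow (DistFlow) equations that these algorithms build on can be written in a few lines with CVXPY. All per-unit numbers below are hypothetical.

```python
# Toy SOCP relaxation of the branch-flow (DistFlow) model for one line feeding
# one load; not the distributed ADMM solver developed in the thesis.
import cvxpy as cp

r, x = 0.01, 0.02             # line resistance/reactance (p.u., hypothetical)
p_load, q_load = 0.8, 0.3     # load at bus 1 (p.u., hypothetical)
v0 = 1.0                      # squared substation voltage (fixed)

P = cp.Variable()             # real power sent into the line
Q = cp.Variable()             # reactive power sent into the line
l = cp.Variable(nonneg=True)  # squared line current
v1 = cp.Variable()            # squared voltage at bus 1

constraints = [
    P - r * l == p_load,                             # real power balance at bus 1
    Q - x * l == q_load,                             # reactive power balance at bus 1
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    cp.quad_over_lin(cp.hstack([P, Q]), l) <= v0,    # SOC relaxation of P^2 + Q^2 <= l * v0
    v1 >= 0.9**2, v1 <= 1.1**2,                      # voltage limits
]
prob = cp.Problem(cp.Minimize(P), constraints)       # minimizing injection minimizes losses
prob.solve()
print(P.value, l.value, v1.value)
```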

Relevance: 10.00%

Publisher:

Abstract:

Let F = Q(ζ + ζ^(-1)) be the maximal real subfield of the cyclotomic field Q(ζ), where ζ is a primitive qth root of unity and q is an odd rational prime. The numbers u_1 = -1 and u_k = (ζ^k - ζ^(-k))/(ζ - ζ^(-1)), k = 2, …, p, where p = (q - 1)/2, are units in F and are called the cyclotomic units. In this thesis the sign distribution of the conjugates in F of the cyclotomic units is studied.

Let G(F/Q) denote the Galois group of F over Q, and let V denote the units in F. For each σ ∈ G(F/Q) and μ ∈ V define a mapping sgn_σ : V → GF(2) by sgn_σ(μ) = 1 iff σ(μ) < 0 and sgn_σ(μ) = 0 iff σ(μ) > 0. Let {σ_1, …, σ_p} be a fixed ordering of G(F/Q). The matrix M_q = (sgn_σ_j(v_i)), i, j = 1, …, p, is called the matrix of cyclotomic signatures. The rank of this matrix determines the sign distribution of the conjugates of the cyclotomic units. The matrix of cyclotomic signatures is associated with an ideal in the ring GF(2)[x]/(x^p + 1) in such a way that the rank of the matrix equals the GF(2)-dimension of the ideal. It is shown that if p = (q - 1)/2 is a prime and if 2 is a primitive root mod p, then M_q is non-singular. Also, let p be arbitrary, let ℓ be a primitive root mod q, and let L = {i | 0 ≤ i ≤ p - 1, the least positive residue of ℓ^i mod q is greater than p}. Let H_q(x) ∈ GF(2)[x] be defined by H_q(x) = gcd((Σ_{i ∈ L} x^i)(x + 1) + 1, x^p + 1). It is shown that the rank of M_q equals p - deg H_q(x).
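Concretely, under the embedding σ_j of F induced by ζ ↦ ζ^j one has σ_j(u_k) = sin(2πjk/q)/sin(2πj/q), with u_1 = -1 negative under every embedding, so the signature matrix and its rank over GF(2) can be checked numerically for a small prime; a rough sketch (the choice q = 11 is illustrative):

```python
# Rough numerical sketch: signs of the conjugates of the cyclotomic units and
# the rank of the signature matrix over GF(2) for a small prime q (illustrative).
import numpy as np
from math import sin, pi

q = 11                       # odd prime (hypothetical choice); p = (q - 1) / 2
p = (q - 1) // 2

def sign_bit(j, k):
    """1 if sigma_j(u_k) < 0, else 0; u_1 = -1 is negative under every embedding."""
    if k == 1:
        return 1
    return 1 if sin(2 * pi * j * k / q) / sin(2 * pi * j / q) < 0 else 0

M = np.array([[sign_bit(j, k) for j in range(1, p + 1)]
              for k in range(1, p + 1)], dtype=np.uint8)

def rank_gf2(A):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = A.copy()
    rank = 0
    for c in range(A.shape[1]):
        rows = np.nonzero(A[rank:, c])[0]
        if rows.size == 0:
            continue
        pivot = rank + rows[0]
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
        if rank == A.shape[0]:
            break
    return rank

print(f"q = {q}, p = {p}, rank of M_q over GF(2) = {rank_gf2(M)}")
```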

Further results are obtained by using the reciprocity theorem of class field theory. The reciprocity maps for a certain abelian extension of F and for the infinite primes in F are associated with the signs of conjugates. The product formula for the reciprocity maps is used to associate the signs of conjugates with the reciprocity maps at the primes which lie above (2). The case when (2) is a prime in F is studied in detail. Let T denote the group of totally positive units in F, and let U be the group generated by the cyclotomic units. Assume that (2) is a prime in F and that p is odd. Let F(2) denote the completion of F at (2) and let V(2) denote the units in F(2). The following statements are shown to be equivalent: 1) the matrix of cyclotomic signatures is non-singular; 2) U ∩ T = U²; 3) U ∩ F(2)² = U²; 4) V(2)/V(2)² = ⟨v_1 V(2)²⟩ ⊕ … ⊕ ⟨v_p V(2)²⟩ ⊕ ⟨3 V(2)²⟩.

The rank of M_q was computed for 5 ≤ q ≤ 929, and the results appear in tables. On the basis of these results and additional calculations, the following conjecture is made: if q and p = (q - 1)/2 are both primes, then M_q is non-singular.

Relevance: 10.00%

Publisher:

Abstract:

The problem considered is that of minimizing the drag of a symmetric plate in infinite cavity flow under the constraints of fixed arclength and fixed chord. The flow is assumed to be steady, irrotational, and incompressible. The effects of gravity and viscosity are ignored.

Using complex variables, expressions for the drag, arclength, and chord are derived in terms of two hodograph variables, Γ (the logarithm of the speed) and β (the flow angle), and two real parameters: a magnification factor and a parameter which determines how much of the plate is a free streamline.

Two methods are employed for optimization:

(1) The parameter method. Γ and β are expanded in finite orthogonal series of N terms. Optimization is performed with respect to the N coefficients in these series and the magnification and free-streamline parameters. This method is carried out for the case N = 1, and minimum-drag profiles and drag coefficients are found for all values of the ratio of arclength to chord.

(2) The variational method. A variational calculus method for minimizing integral functionals of a function and its finite Hilbert transform is introduced. This method is applied to functionals of quadratic form, and a necessary condition for the existence of a minimum solution is derived. The variational method is applied to the minimum drag problem, and a nonlinear integral equation is derived but not solved.

Relevance: 10.00%

Publisher:

Abstract:

Techniques are developed for estimating activity profiles in fixed bed reactors and catalyst deactivation parameters from operating reactor data. These techniques are applicable, in general, to most industrial catalytic processes. The catalytic reforming of naphthas is taken as a broad example to illustrate the estimation schemes and to signify the physical meaning of the kinetic parameters of the estimation equations. The work is described in two parts. Part I deals with the modeling of kinetic rate expressions and the derivation of the working equations for estimation. Part II concentrates on developing various estimation techniques.

Part I: The reactions used to describe naphtha reforming are dehydrogenation and dehydroisomerization of cycloparaffins; isomerization, dehydrocyclization and hydrocracking of paraffins; and the catalyst deactivation reactions, namely coking on alumina sites and sintering of platinum crystallites. The rate expressions for the above reactions are formulated, and the effects of transport limitations on the overall reaction rates are discussed in the appendices. Moreover, various types of interaction between the metallic and acidic active centers of reforming catalysts are discussed as characterizing the different types of reforming reactions.

Part II: In catalytic reactor operation, the activity distribution along the reactor determines the kinetics of the main reaction and is needed for predicting the effect of changes in the feed state and the operating conditions on the reactor output. In the case of a monofunctional catalyst and of bifunctional catalysts in limiting conditions, the cumulative activity is sufficient for predicting steady reactor output. The estimation of this cumulative activity can be carried out easily from measurements at the reactor exit. For a general bifunctional catalytic system, the detailed activity distribution is needed for describing the reactor operation, and some approximation must be made to obtain practicable estimation schemes. This is accomplished by parametrization techniques using measurements at a few points along the reactor. Such parametrization techniques are illustrated numerically with a simplified model of naphtha reforming.

To determine long term catalyst utilization and regeneration policies, it is necessary to estimate catalyst deactivation parameters from the current operating data. For a first order deactivation model with a monofunctional catalyst, or with a bifunctional catalyst in special limiting circumstances, analytical techniques are presented to transform the partial differential equations to ordinary differential equations which admit more feasible estimation schemes. Numerical examples include the catalytic oxidation of butene to butadiene and a simplified model of naphtha reforming. For a general bifunctional system, or in the case of a monofunctional catalyst subject to general power law deactivation, the estimation can only be accomplished approximately. The basic feature of an appropriate estimation scheme involves approximating the activity profile by certain polynomials and then estimating the deactivation parameters from the integrated form of the deactivation equation by regression techniques. Different bifunctional systems must be treated by different estimation algorithms, which are illustrated by several cases of naphtha reforming with different feed or catalyst compositions.
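As a generic illustration of the last step, regressing deactivation parameters from the integrated form of the deactivation law, the sketch below fits a first-order model a(t) = exp(-k_d t) to synthetic activity estimates; the data and rate constant are made up, and the thesis's multi-parameter schemes are more involved.

```python
# Generic sketch: estimate a first-order deactivation constant k_d from
# activity estimates a(t) ~ exp(-k_d t). Synthetic data, not plant data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
t_days = np.linspace(0.0, 120.0, 25)
k_true = 0.02                                            # hypothetical, 1/day
activity = np.exp(-k_true * t_days) + rng.normal(0.0, 0.02, t_days.size)

def first_order(t, k_d):
    return np.exp(-k_d * t)

(k_hat,), _ = curve_fit(first_order, t_days, activity, p0=[0.01])
print(f"estimated k_d = {k_hat:.4f} per day (true value {k_true})")
```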