14 results for "Existence and multiplicity of solutions" in CaltechTHESIS


Relevance: 100.00%

Abstract:

In this study we investigate the existence, uniqueness, and asymptotic stability of solutions of a class of nonlinear integral equations which are representations for some time-dependent nonlinear partial differential equations. Sufficient conditions are established which allow one to infer the stability of the nonlinear equations from the stability of the linearized equations. Improved estimates of the domain of stability are obtained using a Liapunov functional approach. These results are applied to some nonlinear partial differential equations governing the behavior of nonlinear continuous dynamical systems.

Relevance: 100.00%

Abstract:

A general class of single degree of freedom systems possessing rate-independent hysteresis is defined. The hysteretic behavior in a system belonging to this class is depicted as a sequence of single-valued functions; at any given time, the current function is determined by some set of mathematical rules concerning the entire previous response of the system. Existence and uniqueness of solutions are established and boundedness of solutions is examined.

An asymptotic solution procedure is used to derive an approximation to the response of viscously damped systems with a small hysteretic nonlinearity and trigonometric excitation. Two properties of the hysteresis loops associated with any given system completely determine this approximation to the response: the area enclosed by each loop, and the average of the ascending and descending branches of each loop.

The approximation, supplemented by numerical calculations, is applied to investigate the steady-state response of a system with limited slip. Such features as disconnected response curves and jumps in response exist for a certain range of system parameters for any finite amount of slip.

To further understand the response of this system, solutions of the initial-value problem are examined. The boundedness of solutions is investigated first. Then the relationship between initial conditions and the resulting steady-state solution is examined when multiple steady-state solutions exist. Using the approximate analysis and numerical calculations, it is found that significant regions of the initial condition plane lead to the different asymptotically stable steady-state solutions.

Relevance: 100.00%

Abstract:

The problem of the existence and stability of periodic solutions of infinite-lag integro-differential equations is considered. Specifically, the integrals involved are of the convolution type with the dependent variable being integrated over the range (-∞, t), as occurs in models of population growth. It is shown that Hopf bifurcation of periodic solutions from a steady state can occur when a pair of eigenvalues crosses the imaginary axis. Also considered is the existence of traveling wave solutions of a model population equation allowing spatial diffusion in addition to the usual temporal variation. Lastly, the stability of the periodic solutions resulting from Hopf bifurcation is determined with the aid of a Floquet theory.
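A representative member of this class is the distributed-delay logistic equation, a standard population model of exactly this convolution type (shown here for illustration; the notation is generic and not necessarily the thesis's):

```latex
\frac{dN}{dt} = r\,N(t)\left[\,1 - \frac{1}{K}\int_{-\infty}^{t} k(t-s)\,N(s)\,ds\,\right],
\qquad \int_{0}^{\infty} k(u)\,du = 1,
```

where N(t) is the population size, r the intrinsic growth rate, K the carrying capacity, and k the convolution kernel weighting the past history over (-∞, t). The Hopf bifurcation described above is then detected by linearizing about the steady state N = K and locating the parameter value at which a complex-conjugate pair of eigenvalues of the linearization crosses the imaginary axis.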

The first chapter is devoted to linear integro-differential equations with constant coefficients utilizing the method of semi-groups of operators. The second chapter analyzes the Hopf bifurcation providing an existence theorem. Also, the two-timing perturbation procedure is applied to construct the periodic solutions. The third chapter uses two-timing to obtain traveling wave solutions of the diffusive model, as well as providing an existence theorem. The fourth chapter develops a Floquet theory for linear integro-differential equations with periodic coefficients again using the semi-group approach. The fifth chapter gives sufficient conditions for the stability or instability of a periodic solution in terms of the linearization of the equations. These results are then applied to the Hopf bifurcation problem and to a certain population equation modeling periodically fluctuating environments to deduce the stability of the corresponding periodic solutions.

Relevance: 100.00%

Abstract:

Long linear polymers that are end-functionalized with associative groups were studied as additives to hydrocarbon fluids to mitigate the fire hazard associated with the presence of mist in a crash scenario. These polymers were molecularly designed to overcome both the shear-degradation of long polymer chains in turbulent flows, and the chain collapse induced by the random placement of associative groups along polymer backbones. Architectures of associative groups on the polymer chain ends that were tested included clusters of self-associative carboxyl groups and pairs of hetero-complementary associative units.

Linear polymers with clusters of discrete numbers of carboxyl groups on their chain ends were investigated first: an innovative synthetic strategy was devised to achieve unprecedented backbone lengths and precise control of the number of carboxyl groups on chain ends (N). We found that a very narrow range of N allows the co-existence of sufficient end-association strength and polymer solubility in apolar media. A subsequent steady-flow rheological study of the solution behavior of such soluble polymers in apolar media revealed that the end-association of very long chains leads to the formation of flower-like micelles interconnected by bridging chains, which trap a significant fraction of polymer chains in looped structures that contribute little to mist control. The efficacy of very long 1,4-polybutadiene chains end-functionalized with clusters of four carboxyl groups as mist-control additives for jet fuel was further tested. In addition to being shear-resistant, the polymer was found capable of providing fire protection to jet fuel at concentrations as low as 0.3 wt%. We also found that this polymer has excellent solubility in jet fuel over a wide range of temperatures (-30 to +70°C) and negligible interference with dewatering of jet fuel. It does not cause an adverse increase in viscosity at concentrations where mist-control efficacy exists.

Four pairs of hetero-complementary associative end-groups of varying strengths were subsequently investigated, in the hopes of achieving supramolecular aggregates with both mist-control ability and better utilization of polymer building blocks. Rheological study of solutions of the corresponding complementary associative polymer pairs in apolar media revealed the strength of complementary end-association required to achieve supramolecular aggregates capable of modulating rheological properties of the solution.

Both self-associating and complementary associating polymers have therefore been found to resist shear degradation. The successful strategy of building soluble, end-associative polymers with either self-associative or complementary associative groups will guide the next generation of mist-control technology.

Relevance: 100.00%

Abstract:

This work concerns itself with the possibility of solutions, both cooperative and market-based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems, and in part exacerbated them. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transactions costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits is reduced toward the level of historical emissions, prices increase. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to mis-report emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.

Relevance: 100.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over a period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (in parentheses), determined by the Student's t test, are:

Colombia-Ecuador earthquake (12 December 1979): d = 11 km (9, 24) km
Santa Cruz Is. earthquake (17 July 1980): d = 36 km (18, 46) km
Samoa earthquake (1 September 1981): d = 15 km (9, 26) km
Playa Azul, Mexico earthquake (25 October 1981): d = 41 km (28, 49) km
El Salvador earthquake (19 June 1982): d = 49 km (41, 55) km
New Ireland earthquake (18 March 1983): d = 75 km (72, 79) km
Chagos Bank earthquake (30 November 1983): d = 31 km (16, 41) km
Valparaiso, Chile earthquake (3 March 1985): d = 44 km (15, 54) km
Michoacan, Mexico earthquake (19 September 1985): d = 24 km (12, 34) km

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki and 1977 Sumbawa, Indonesia earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. The depths obtained from overtone Rayleigh waves are consistent with the depths determined from fundamental Rayleigh waves for the two earthquakes. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 100.00%

Abstract:

Nucleic acids are a useful substrate for engineering at the molecular level. Designing the detailed energetics and kinetics of interactions between nucleic acid strands remains a challenge. Building on previous algorithms to characterize the ensemble of dilute solutions of nucleic acids, we present a design algorithm that allows optimization of structural features and binding energetics of a test tube of interacting nucleic acid strands. We extend this formulation to handle multiple thermodynamic states and combinatorial constraints to allow optimization of pathways of interacting nucleic acids. In both design strategies, low-cost estimates to thermodynamic properties are calculated using hierarchical ensemble decomposition and test tube ensemble focusing. These algorithms are tested on randomized test sets and on example pathways drawn from the molecular programming literature. To analyze the kinetic properties of designed sequences, we describe algorithms to identify dominant species and kinetic rates using coarse-graining at the scale of a small box containing several strands or a large box containing a dilute solution of strands.

Relevance: 100.00%

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.

Relevance: 100.00%

Abstract:

Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.

Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.
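The least-squares core of such a program can be sketched as a single Gauss-Newton (Geiger) iteration. The sketch below is illustrative only: a uniform-velocity medium stands in for the multiregional travel-time tables, and all names and coordinates are placeholders rather than the thesis's implementation.

```python
import numpy as np

# Sketch of hypocenter location by iterated least squares (Geiger's method).
# Assumption: a uniform P velocity replaces the multiregional travel-time
# tables described above; units are km and seconds.
def travel_time(hypo, station, v=6.0):
    """Arrival time at a station for hypocenter (x, y, z, origin time t0)."""
    return np.linalg.norm(station - hypo[:3]) / v + hypo[3]

def geiger_step(hypo, stations, t_obs, v=6.0):
    """One Gauss-Newton update of (x, y, z, t0) from arrival-time residuals."""
    r = t_obs - np.array([travel_time(hypo, s, v) for s in stations])
    G = np.zeros((len(stations), 4))
    for i, s in enumerate(stations):
        d = np.linalg.norm(s - hypo[:3])
        G[i, :3] = -(s - hypo[:3]) / (d * v)   # spatial partial derivatives
        G[i, 3] = 1.0                           # origin-time partial derivative
    dm, *_ = np.linalg.lstsq(G, r, rcond=None)
    return hypo + dm
```

Iterating `geiger_step` from a rough starting guess converges to the hypocenter when the station geometry constrains depth; the modifications described above (depth from the normal equations, variable travel times) address the cases where a naive iteration does not.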

It is shown that several crustal phases provide enough control to obtain good solutions in depth for nuclear explosions, though not all the recording stations are in the region where crustal corrections are considered. The use of the European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved to be more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.

Relevance: 100.00%

Abstract:

(1) Equation of State of Komatiite

The equation of state (EOS) of a molten komatiite (27 wt% MgO) was determined in the 5 to 36 GPa pressure range via shock wave compression from 1550°C and 0 bar. Shock wave velocity, US, and particle velocity, UP, in km/s follow the linear relationship US = 3.13(±0.03) + 1.47(±0.03) UP. Based on a calculated density at 1550°C and 0 bar of 2.745±0.005 g/cc, this US-UP relationship gives the isentropic bulk modulus KS = 27.0 ± 0.6 GPa, and its first and second isentropic pressure derivatives, K'S = 4.9 ± 0.1 and K″S = -0.109 ± 0.003 GPa⁻¹.
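As a quick consistency check (a sketch using the standard identities for a linear US-UP fit, not code from the thesis), KS = ρ0·C0² and K'S = 4S - 1 reproduce the quoted moduli:

```python
# Consistency check (illustrative, not from the thesis): for a linear
# shock-velocity fit Us = C0 + S*up, the zero-pressure isentropic bulk
# modulus is Ks = rho0 * C0**2 and its pressure derivative is K's = 4*S - 1.
rho0 = 2745.0   # kg/m^3, i.e., 2.745 g/cc at 1550 C and 0 bar
C0 = 3130.0     # m/s, the 3.13 km/s intercept
S = 1.47        # dimensionless slope

Ks_GPa = rho0 * C0**2 / 1e9   # ~26.9 GPa, cf. 27.0 +/- 0.6 GPa above
Kprime_s = 4.0 * S - 1.0      # 4.88, cf. 4.9 +/- 0.1 above
```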

The calculated liquidus compression curve agrees within error with the static compression results of Agee and Walker [1988a] to 6 GPa. We determine that olivine (Fo94) will be neutrally buoyant in komatiitic melt of the composition we studied near 8.2 GPa. Clinopyroxene would also be neutrally buoyant near this pressure. Liquidus garnet-majorite may be less dense than this komatiitic liquid in the 20-24 GPa interval; however, pyropic garnet and perovskite phases are denser than this komatiitic liquid in their respective liquidus pressure intervals to 36 GPa. Liquidus perovskite may be neutrally buoyant near 70 GPa.

At 40 GPa, the density of shock-compressed molten komatiite would be approximately equal to the calculated density of an equivalent mixture of dense solid oxide components. This observation supports the model of Rigden et al. [1989] for compressibilities of liquid oxide components. Using their theoretical EOS for liquid forsterite and fayalite, we calculate the densities of a spectrum of melts from basaltic through peridotitic that are related to the experimentally studied komatiitic liquid by addition or subtraction of olivine. At low pressure, olivine fractionation lowers the density of basic magmas, but above 14 GPa this trend is reversed. All of these basic to ultrabasic liquids are predicted to have similar densities at 14 GPa, and this density is approximately equal to that of the bulk (PREM) mantle. This suggests that melts derived from a peridotitic mantle may be inhibited from ascending from depths greater than 400 km.

The EOS of ultrabasic magmas was used to model adiabatic melting in a peridotitic mantle. If komatiites are formed by >15% partial melting of a peridotitic mantle, then komatiites generated by adiabatic melting come from source regions in the lower transition zone (≈500-670 km) or the lower mantle (>670 km). The great depth of incipient melting implied by this model, and the melt density constraint mentioned above, suggest that komatiitic volcanism may be gravitationally hindered. Although komatiitic magmas are thought to separate from their coexisting crystals at a temperature ≈200°C greater than that for modern MORBs, their ultimate sources are predicted to be diapirs that, if adiabatically decompressed from initially solid mantle, were more than 700°C hotter than the sources of MORBs and derived from great depth.

We considered the evolution of an initially molten mantle, i.e., a magma ocean. Our model considers the thermal structure of the magma ocean, density constraints on crystal segregation, and approximate phase relationships for a nominally chondritic mantle. Crystallization will begin at the core-mantle boundary. Perovskite buoyancy at >70 GPa may lead to a compositionally stratified lower mantle with iron-enriched magnesiowüstite content increasing with depth. The upper mantle may be depleted in perovskite components. Olivine neutral buoyancy may lead to the formation of a dunite septum in the upper mantle, partitioning the ocean into upper and lower reservoirs, but this septum must be permeable.

(2) Viscosity Measurement with Shock Waves

We have examined in detail the analytical method for measuring shear viscosity from the decay of perturbations on a corrugated shock front. The relevance of initial conditions, finite shock amplitude, bulk viscosity, and the sensitivity of the measurements to the shock boundary conditions are discussed. The validity of the viscous perturbation approach is examined by numerically solving the second-order Navier-Stokes equations. These numerical experiments indicate that shock instabilities may occur even when the Kontorovich-D'yakov stability criteria are satisfied. The experimental results for water at 15 GPa are discussed, and it is suggested that the large effective viscosity determined by this method may reflect the existence of ice VII on the Rayleigh path of the Hugoniot. This interpretation reconciles the experimental results with estimates and measurements obtained by other means, and is consistent with the relationship of the Hugoniot with the phase diagram for water. Sound waves are generated at 4.8 MHz in the water experiments at 15 GPa. The existence of anelastic absorption modes near this frequency would also lead to large effective viscosity estimates.

(3) Equation of State of Molybdenum at 1400°C

Shock compression data to 96 GPa for pure molybdenum, initially heated to 1400°C, are presented. Finite strain analysis of the data gives a bulk modulus at 1400°C, K0S, of 244±2 GPa and its pressure derivative, K'0S, of 4. A fit of shock velocity to particle velocity gives the coefficients of US = C0 + S·UP to be C0 = 4.77±0.06 km/s and S = 1.43±0.05. From the zero-pressure sound speed, C0, a bulk modulus of 232±6 GPa is calculated that is consistent with extrapolation of ultrasonic elasticity measurements. The temperature derivative of the bulk modulus at zero pressure, ∂K0S/∂T|P, is approximately -0.012 GPa/K. A thermodynamic model is used to show that the thermodynamic Grüneisen parameter is proportional to the density and independent of temperature. The Mie-Grüneisen equation of state adequately describes the high-temperature behavior of molybdenum under the present range of shock loading conditions.

Relevance: 100.00%

Abstract:

Picric acid possesses the property, rare among strong electrolytes, of having a convenient distribution ratio between water and certain organic solvents such as benzene, chloroform, etc. Because of this property, picric acid offers peculiar advantages for studying the well-known deviations of strong electrolytes from the law of mass action; for, by means of distribution experiments, the activities of picric acid in various aqueous solutions may be compared.

In order to interpret the results of such distribution experiments, it is necessary to know the degree of ionization of picric acid in aqueous solutions.

At least three series of determinations of the equivalent conductance of picric acid have been published, but the results are not concordant; therefore, the degree of ionization cannot be calculated with any degree of certainty.

The object of the present investigation was to redetermine the conductance of picric acid solutions in order to obtain satisfactory data from which the degrees of ionization of its solutions might be calculated.
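The classical reduction the thesis has in view can be sketched as follows (an illustration of the Arrhenius degree of ionization and the Ostwald dilution law; the conductance values below are placeholders, not measurements for picric acid):

```python
# Sketch of the classical conductance analysis (illustrative numbers only).
def degree_of_ionization(equiv_conductance, limiting_conductance):
    """Arrhenius estimate: alpha = Lambda / Lambda_0."""
    return equiv_conductance / limiting_conductance

def mass_action_constant(c, alpha):
    """Ostwald dilution law: K = c * alpha**2 / (1 - alpha).
    For a strong electrolyte such as picric acid, K drifts with
    concentration; that drift is the deviation discussed above."""
    return c * alpha**2 / (1.0 - alpha)

alpha = degree_of_ionization(350.0, 380.0)   # placeholder conductances
K = mass_action_constant(0.01, alpha)        # placeholder 0.01 N solution
```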

Relevance: 100.00%

Abstract:

In four chapters, various aspects of the earthquake source are studied.

Chapter I

Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.

Chapter II

The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface wave and S-wave spectral analysis. The seismic moment, M0, was found to be related to local magnitude, ML, by log M0 = 1.7 ML + 15.1. The source length vs. magnitude relation for the San Andreas system was found to be ML = 1.9 log L - 6.7. The surface wave envelope parameter AR gives the moment according to log M0 = log AR300 + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between surface wave magnitude MS and ML is proposed to be MS = 1.7 ML - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region by the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
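Two of the relations above are simple enough to state as code (a sketch with illustrative helper names; the moment units are assumed to be dyne-cm, as the 15.1 intercept suggests, though the abstract does not state them):

```python
# Sketch of two empirical magnitude relations quoted above
# (helper names are illustrative; moment assumed in dyne-cm).
def moment_from_ML(ML):
    """Seismic moment from local magnitude: log10(M0) = 1.7*ML + 15.1."""
    return 10.0 ** (1.7 * ML + 15.1)

def MS_from_ML(ML):
    """Surface-wave magnitude from local magnitude: MS = 1.7*ML - 4.1."""
    return 1.7 * ML - 4.1

# Example: an ML 6.0 event
M0 = moment_from_ML(6.0)   # 10**25.3
MS = MS_from_ML(6.0)       # ~6.1
```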

Chapter III

Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964, and distant fault slippage observed after the Borrego Mountain, California earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."

Chapter IV

The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.

Relevance: 100.00%

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly-varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local. In this thesis, we focus on the two ends of this spectrum. In the first half (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven both reliable and computationally efficient in dealing with the non-convexity of the power flow equations in radial and weakly meshed distribution networks. We then apply the results to the fast-timescale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
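The abstract does not spell out the branch flow equations themselves. As a rough illustration of the kind of model a BFM is, the well-known DistFlow recursion for a single radial line segment propagates branch power and squared voltage from the sending end to the receiving end while accounting for losses (the line parameters and loads below are invented for demonstration and are not from the thesis):

```python
# A minimal forward sweep of the DistFlow branch equations on one line
# segment of a radial feeder. All quantities are in per-unit.

def distflow_step(v_i, P, Q, r, x):
    """Given squared sending-end voltage v_i and sending-end branch
    flows P, Q, return the squared receiving-end voltage and the
    flows delivered at the receiving end after line losses."""
    ell = (P**2 + Q**2) / v_i                 # squared branch current magnitude
    v_j = v_i - 2 * (r * P + x * Q) + (r**2 + x**2) * ell
    P_recv = P - r * ell                      # real power after resistive loss
    Q_recv = Q - x * ell                      # reactive power after reactive loss
    return v_j, P_recv, Q_recv

# Example: 1.0 p.u. substation voltage, a modest load, a short line
v_j, P_r, Q_r = distflow_step(v_i=1.0, P=0.5, Q=0.2, r=0.01, x=0.02)
print(round(v_j**0.5, 4))  # receiving-end voltage magnitude (p.u.)
```

The non-convexity mentioned in the text enters through the quadratic coupling `ell = (P**2 + Q**2) / v_i`; relaxation approaches replace this equality with an inequality to obtain a convex problem.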

The second half (chapters 4 and 5), however, is dedicated to studying local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. The aim of this dissertation is to tackle some key problems in these two areas and to contribute a rigorous theoretical basis for future work.
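A piecewise linear volt/var curve of the kind mentioned here maps the locally measured voltage to a reactive-power setpoint: inject vars when voltage is low, absorb when high, with a deadband around nominal. A generic sketch follows; the breakpoints and capacity limit are illustrative defaults, not the specific curves analyzed in the thesis:

```python
def volt_var(v, v_min=0.95, v_dead_lo=0.98, v_dead_hi=1.02, v_max=1.05, q_max=0.44):
    """Generic piecewise linear volt/var droop. Returns the reactive
    power setpoint q (p.u. of inverter capacity): positive = injection
    at low voltage, negative = absorption at high voltage, zero in the
    deadband. All breakpoints are illustrative."""
    if v <= v_min:
        return q_max
    if v < v_dead_lo:
        return q_max * (v_dead_lo - v) / (v_dead_lo - v_min)   # low-side ramp
    if v <= v_dead_hi:
        return 0.0                                             # deadband
    if v < v_max:
        return -q_max * (v - v_dead_hi) / (v_max - v_dead_hi)  # high-side ramp
    return -q_max

print(volt_var(0.96))  # sagging voltage -> partial var injection
print(volt_var(1.00))  # nominal voltage -> no var exchange
```

The "reverse and forward engineering" viewpoint asks what optimization problem such a curve implicitly solves and whether the resulting closed loop, with many inverters reacting to each other through the network, converges to a stable equilibrium.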


Abstract:

I. The binding of the intercalating dye ethidium bromide to closed circular SV 40 DNA causes an unwinding of the duplex structure and a simultaneous, quantitatively equivalent unwinding of the superhelices. The buoyant densities and sedimentation velocities of both intact (I) and singly nicked (II) SV 40 DNAs were measured as a function of free dye concentration. The buoyant density data were used to determine the binding isotherms over a dye concentration range extending from 0 to 600 µg/ml in 5.8 M CsCl. At high dye concentrations all of the binding sites in II, but not in I, are saturated. At free dye concentrations less than 5.4 µg/ml, I has a greater affinity for dye than II. At a critical amount of bound dye I and II have equal affinities, and at higher dye concentrations I has a lower affinity than II. The number of superhelical turns, τ, present in I is calculated at each dye concentration using Fuller and Waring's (1964) estimate of the angle of duplex unwinding per intercalation. The results reveal that SV 40 DNA I contains about -13 superhelical turns in concentrated salt solutions.
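The bookkeeping behind the τ calculation can be sketched schematically. This assumes the commonly cited Fuller-Waring figure of roughly 12° of duplex unwinding per intercalated dye molecule; in a covalently closed circle that unwinding must be compensated by an equal and opposite change in superhelical turns. The numbers below illustrate the arithmetic only and are not data from the thesis:

```python
# Assumption: each intercalated dye unwinds the duplex by about 12 degrees
# (the Fuller & Waring estimate referenced in the text). In closed circular
# DNA this is compensated by a change in the superhelical turn count.
UNWIND_DEG = 12.0  # assumed unwinding angle per intercalation, degrees

def superhelical_turns(tau_0, n_bound):
    """Superhelical turns remaining after n_bound intercalations,
    starting from tau_0 (negative tau = negative superhelices)."""
    return tau_0 + n_bound * UNWIND_DEG / 360.0

tau_native = -13.0  # value inferred for SV 40 DNA I in the text

# Bound dye molecules per DNA needed to remove all superhelical turns
n_critical = -tau_native * 360.0 / UNWIND_DEG
print(n_critical)
```

At this critical binding level the closed circle is fully relaxed, which is where molecules I and II show equal dye affinities in the measurements described above.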

The free energy of superhelix formation is calculated as a function of τ from a consideration of the effect of the superhelical turns upon the binding isotherm of ethidium bromide to SV 40 DNA I. The value of the free energy is about 100 kcal/mole DNA in the native molecule. The free energy estimates are used to calculate the pitch and radius of the superhelix as a function of the number of superhelical turns. The pitch and radius of the native I superhelix are 430 Å and 135 Å, respectively.

A buoyant density method for the isolation and detection of closed circular DNA is described. The method is based upon the reduced binding of the intercalating dye, ethidium bromide, by closed circular DNA. In an application of this method it is found that HeLa cells contain, in addition to closed circular mitochondrial DNA of mean length 4.81 microns, a heterogeneous group of smaller DNA molecules which vary in size from 0.2 to 3.5 microns and a paucidisperse group of multiples of the mitochondrial length.

II. The general theory is presented for the sedimentation equilibrium of a macromolecule in a concentrated binary solvent in the presence of an additional reacting small molecule. Equations are derived for the calculation of the buoyant density of the complex and for the determination of the binding isotherm of the reagent to the macrospecies. The standard buoyant density, a thermodynamic function, is defined and the density gradients which characterize the four component system are derived. The theory is applied to the specific cases of the binding of ethidium bromide to SV 40 DNA and of the binding of mercury and silver to DNA.