11 results for power series distribution

in CaltechTHESIS


Relevance:

90.00%

Publisher:

Abstract:

The design of a two-stream wind tunnel was undertaken to allow the simulation and study of certain features of the flow field around the blades of high-speed axial-flow turbomachines. The mixing of the two parallel streams, with design Mach numbers of 1.4 and 0.7 respectively, will simulate the transonic Mach number distribution generally obtained along the tips of the first-stage blades in large bypass-fan engines.

The GALCIT hypersonic compressor plant will be used as an air supply for the wind tunnel, and consequently the calculations contained in the first chapter are derived from the characteristics and the performance of this plant.

The transonic part of the nozzle is computed by using a method developed by K. O. Friedrichs. This method consists essentially of expanding the coordinates and the characteristics of the flow in power series. The development begins with prescribing, more or less arbitrarily, a Mach number distribution along the centerline of the nozzle. This method has been programmed for an IBM 360 computer to define the wall contour of the nozzle.
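Friedrichs' expansion itself is involved, but its starting step, prescribing a Mach number distribution along the centerline and recovering a wall contour from it, can be sketched in quasi-one-dimensional form. This is an illustrative simplification, not the thesis's method, and the polynomial for M(x) is invented for the example:

```python
import math

def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* for Mach number M (standard quasi-1D relation)."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def centerline_mach(x):
    """Hypothetical smooth centerline Mach distribution with M = 1 at the throat x = 0."""
    return 1.0 + 0.8 * x - 0.2 * x * x   # illustrative polynomial only

# Approximate (inviscid) wall contour: area ratio at each station downstream of the throat.
xs = [i * 0.1 for i in range(11)]        # stations x = 0.0 .. 1.0
contour = [area_ratio(centerline_mach(x)) for x in xs]
```

A boundary layer correction of the kind described in the next paragraph would then be added on top of this inviscid contour.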

A further computation is carried out to correct the contour for boundary layer buildup. This boundary layer analysis includes geometry, pressure gradient, and Mach number effects. The subsonic nozzle is calculated (including boundary layer buildup) by using the same computer programs. Finally, the mixing zone downstream of the splitter plate was investigated to prescribe the wall contour correction necessary to ensure a constant-pressure test section.

Relevance:

90.00%

Publisher:

Abstract:

This thesis examines two problems concerned with surface effects in simple molecular systems. The first is the problem associated with the interaction of a fluid with a solid boundary, and the second originates from the interaction of a liquid with its own vapor.

For a fluid in contact with a solid wall, two sets of integro-differential equations, involving the molecular distribution functions of the system, are derived. One of these is a particular form of the well-known Bogolyubov-Born-Green-Kirkwood-Yvon equations. For the second set, the derivation, in contrast with the formulation of the B.B.G.K.Y. hierarchy, is independent of the pair-potential assumption. The density of the fluid, expressed as a power series in the uniform fluid density, is obtained by solving these equations under the requirement that the wall be ideal.

The liquid-vapor interface is analyzed with the aid of equations that describe the density and pair-correlation function. These equations are simplified and then solved by employing the superposition and the low vapor density approximations. The solutions are substituted into formulas for the surface energy and surface tension, and numerical results are obtained for selected systems. Finally, the liquid-vapor system near the critical point is examined by means of the lowest order B.B.G.K.Y. equation.

Relevance:

80.00%

Publisher:

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of environmental uncertainty. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must obey these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic: ribosomes must be used to make more ribosomal proteins, and the higher the protein content of the ribosome, the stronger the autocatalysis. We show that this autocatalysis destabilizes the system, slows its response, and constrains its performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, visible in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network is exponential. We then examine previously proposed evolutionary models and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
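A toy version of such a combined duplication/horizontal-transfer growth process can be sketched as follows. The mixing probability, link counts, and seed network are illustrative, not taken from the thesis:

```python
import random
from collections import Counter

def grow_network(n_nodes, p_dup=0.7, n_hgt_links=2, seed=1):
    """Toy growth model: each new gene either duplicates an existing node
    (inheriting its regulatory out-links) or arrives by horizontal transfer
    with links to random existing targets. All parameters are illustrative."""
    random.seed(seed)
    out_links = {0: set(), 1: {0}}          # tiny seed network
    for new in range(2, n_nodes):
        if random.random() < p_dup:
            src = random.randrange(new)     # duplication: copy out-links
            out_links[new] = set(out_links[src])
        else:                               # horizontal transfer: random targets
            out_links[new] = {random.randrange(new) for _ in range(n_hgt_links)}
    return out_links

def in_degree_distribution(out_links):
    """Map each observed in-degree to the number of nodes attaining it."""
    indeg = Counter(t for links in out_links.values() for t in links)
    return Counter(indeg.values())

net = grow_network(2000)
dist = in_degree_distribution(net)
```

Plotting `dist` on log-log axes is the usual way to compare such a model's tail against a power law versus an exponential.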

Relevance:

80.00%

Publisher:

Abstract:

The use of transmission matrices and lumped parameter models for describing continuous systems is the subject of this study. Non-uniform continuous systems, which play important roles in practical vibration problems (e.g., torsional oscillations in bars and transverse bending vibrations of beams), are of primary importance.

A new approach for deriving closed form transmission matrices is applied to several classes of non-uniform continuous segments of one dimensional and beam systems. A power series expansion method is presented for determining approximate transmission matrices of any order for segments of non-uniform systems whose solutions cannot be found in closed form. This direct series method is shown to give results comparable to those of the improved lumped parameter models for one dimensional systems.
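The series idea can be illustrated on the simplest case, a uniform rod in longitudinal vibration, where the truncated power series for the transmission matrix can be checked against the known closed form. This is only a sketch under assumed notation (EA for axial stiffness, rhoA for mass per unit length, state vector (displacement, axial force)); the thesis applies the method to non-uniform segments where no closed form exists:

```python
import math

def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transmission_series(EA, rhoA, omega, L, order=30):
    """Transmission matrix of a uniform rod segment via the truncated
    power series sum_{n<=order} (A*L)^n / n! (the matrix exponential)."""
    A = [[0.0, 1.0 / EA], [-rhoA * omega ** 2, 0.0]]
    AL = [[A[i][j] * L for j in range(2)] for i in range(2)]
    T = [[1.0, 0.0], [0.0, 1.0]]            # n = 0 term (identity)
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, order + 1):
        term = mat_mul(term, AL)            # dividing the running product by n
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]  # gives (A*L)^n / n!
        T = [[T[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return T

def transmission_exact(EA, rhoA, omega, L):
    """Closed-form transmission matrix of the same uniform segment."""
    beta = omega * math.sqrt(rhoA / EA)
    c, s = math.cos(beta * L), math.sin(beta * L)
    return [[c, s / (EA * beta)], [-EA * beta * s, c]]
```

For a non-uniform segment, A would depend on x and the series coefficients would be built by iterated integration rather than simple powers, but the truncation-order tradeoff is the same.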

Four types of lumped parameter models are evaluated on the basis of the uniform continuous one dimensional system by comparing the behavior of the frequency root errors. The lumped parameter models which are based upon a close fit to the low frequency approximation of the exact transmission matrix, at the segment level, are shown to be superior. On this basis an improved lumped parameter model is recommended for approximating non-uniform segments. This new model is compared to a uniform segment approximation, and error curves are presented for systems whose areas vary quadratically and linearly. The effect of varying segment lengths is investigated for one dimensional systems; results indicate very little improvement in comparison to the use of equal-length segments. For completeness, a brief summary is given of various lumped parameter models and other techniques which have previously been used to approximate the uniform Bernoulli-Euler beam.

Relevance:

80.00%

Publisher:

Abstract:

If R is a ring with identity, let N(R) denote the Jacobson radical of R. R is local if R/N(R) is an artinian simple ring and ∩N(R)^i = 0. It is known that if R is complete in the N(R)-adic topology then R is equal to (B)_n, the full n by n matrix ring over B, where B/N(B) is a division ring. The main results of the thesis deal with the structure of such rings B. In fact we have the following.

If B is a complete local algebra over F where B/N(B) is a finite dimensional normal extension of F and N(B) is finitely generated as a left ideal by k elements, then there exist automorphisms g1,…,gk of B/N(B) over F such that B is a homomorphic image of B/N(B)[[x1,…,xk;g1,…,gk]], the power series ring over B/N(B) in noncommuting indeterminates xi, where xib = gi(b)xi for all b ∈ B/N(B).

Another theorem generalizes this result to complete local rings which have suitable commutative subrings. As a corollary of this we have the following. Let B be a complete local ring with B/N(B) a finite field. If N(B) is finitely generated as a left ideal by k elements then there exist automorphisms g1,…,gk of a v-ring V such that B is a homomorphic image of V [[x1,…,xk;g1,…,gk]].
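The defining relation x·b = g(b)·x of such a skew power series ring can be made concrete with truncated series over the complex numbers, taking complex conjugation as the automorphism (conjugation fixes the reals, so this is an automorphism over R; the thesis works over division rings and v-rings, so this base ring is only an illustrative stand-in):

```python
def skew_mul(f, h, aut, trunc=8):
    """Multiply truncated skew power series f * h, where the indeterminate x
    satisfies x*b = aut(b)*x.  Series are coefficient lists [c0, c1, ...]
    (coefficient of x^n at index n), truncated at degree trunc."""
    def aut_pow(b, i):
        for _ in range(i):
            b = aut(b)
        return b
    out = [0] * trunc
    for i, a in enumerate(f):
        for j, b in enumerate(h):
            if i + j < trunc:
                out[i + j] += a * aut_pow(b, i)   # x^i * b = aut^i(b) * x^i
    return out

conj = lambda z: z.conjugate()
x = [0, 1]                           # the indeterminate x
b = [2 + 3j]                         # a constant series
left = skew_mul(x, b, conj)          # x * b
right = skew_mul([2 - 3j], x, conj)  # conj(b) * x  -- should agree with left
```

The test that `left == right` is exactly the relation xb = g(b)x, and `b * x != x * b` exhibits the noncommutativity.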

In both these results it is essential to know the structure of N(B) as a two sided module over a suitable subring of B.

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we are concerned with finding representations of the algebra of SU(3) vector and axial-vector charge densities at infinite momentum (the "current algebra") to describe the mesons, idealizing the real continua of multiparticle states as a series of discrete resonances of zero width. Such representations would describe the masses and quantum numbers of the mesons, the shapes of their Regge trajectories, their electromagnetic and weak form factors, and (approximately, through the PCAC hypothesis) pion emission or absorption amplitudes.

We assume that the mesons have internal degrees of freedom equivalent to being made of two quarks (one an antiquark) and look for models in which the mass is SU(3)-independent and the current is a sum of contributions from the individual quarks. Requiring that the current algebra, as well as conditions of relativistic invariance, be satisfied turns out to be very restrictive, and, in fact, no model has been found which satisfies all requirements and gives a reasonable mass spectrum. We show that using more general mass and current operators but keeping the same internal degrees of freedom will not make the problem any more solvable. In particular, in order for any two-quark solution to exist it must be possible to solve the "factorized SU(2) problem," in which the currents are isospin currents and are carried by only one of the component quarks (as in the K meson and its excited states).

In the free-quark model the currents at infinite momentum are found using a manifestly covariant formalism and are shown to satisfy the current algebra, but the mass spectrum is unrealistic. We then consider a pair of quarks bound by a potential, finding the current as a power series in 1/m where m is the quark mass. Here it is found impossible to satisfy the algebra and relativistic invariance with the type of potential tried, because the current contributions from the two quarks do not commute with each other to order 1/m³. However, it may be possible to solve the factorized SU(2) problem with this model.

The factorized problem can be solved exactly in the case where all mesons have the same mass, using a covariant formulation in terms of an internal Lorentz group. For a more realistic, nondegenerate mass there is difficulty in covariantly solving even the factorized problem; one model is described which almost works but appears to require particles of spacelike 4-momentum, which seem unphysical.

Although the search for a completely satisfactory model has been unsuccessful, the techniques used here might eventually reveal a working model. There is also a possibility of satisfying a weaker form of the current algebra with existing models.

Relevance:

40.00%

Publisher:

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly-varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local ones. In this thesis, we will focus on the two ends of this spectrum. In the first half of this thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly-meshed distribution networks. We will then apply the results to the fast-timescale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
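One standard single-line form of such a branch flow model (often called DistFlow) can be sketched as follows, with v and ell denoting squared voltage and squared current magnitudes; the per-unit numbers in the example are illustrative:

```python
def branch_flow_voltage(v_i, P, Q, r, x):
    """One-line branch flow (DistFlow) relation: given the squared sending-end
    voltage v_i and the branch power flow (P, Q) over an impedance r + jx,
    return the squared receiving-end voltage v_j and the squared branch
    current magnitude ell."""
    ell = (P ** 2 + Q ** 2) / v_i                       # |I|^2 = |S|^2 / |V|^2
    v_j = v_i - 2.0 * (r * P + x * Q) + (r ** 2 + x ** 2) * ell
    return v_j, ell

# Illustrative per-unit example: 1 p.u. of real power over a short line.
v_j, ell = branch_flow_voltage(v_i=1.0, P=1.0, Q=0.0, r=0.01, x=0.02)
```

The relaxation step mentioned above replaces the equality defining `ell` with the inequality ell >= (P² + Q²)/v_i, which convexifies the feasible set; the thesis's results concern when that relaxation is exact.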

The second half (chapters 4 and 5), however, is dedicated to studying local control approaches, as they are the only options available for immediate implementation on today's distribution networks that lack sufficient monitoring and communication infrastructure. In particular, we will follow a reverse and forward engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and contribute by providing a rigorous theoretical basis for future work.
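A piecewise linear volt/var curve of the kind studied maps the locally measured voltage to a reactive power setpoint. The sketch below uses breakpoints and a var limit loosely patterned on common inverter-standard defaults; they are illustrative, not taken from the thesis:

```python
def volt_var(v, v1=0.95, v2=0.98, v3=1.02, v4=1.05, q_max=0.44):
    """Piecewise-linear volt/var droop curve (per-unit): full capacitive
    injection below v1, a deadband on [v2, v3], full inductive absorption
    above v4, and linear interpolation in between. Breakpoints illustrative."""
    if v <= v1:
        return q_max          # maximum var injection (voltage too low)
    if v < v2:
        return q_max * (v2 - v) / (v2 - v1)
    if v <= v3:
        return 0.0            # deadband: no var support
    if v < v4:
        return -q_max * (v - v3) / (v4 - v3)
    return -q_max             # maximum var absorption (voltage too high)
```

The "reverse engineering" question in the thesis is, roughly, what network-wide optimization such local curves implicitly solve when every inverter runs one.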

Relevance:

30.00%

Publisher:

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first one is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We will also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focused on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a clean separation between the decomposition of the system-level objective (game design) and the specific local decision rules (distributed learning algorithms). This separation gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
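The game-design idea can be illustrated with a small congestion game, a classic exact potential game in which best-response dynamics (a simple distributed learning rule) are guaranteed to terminate at a Nash equilibrium. This is a toy instance, not the thesis's methodology; resource weights and agent counts are illustrative:

```python
def agent_cost(i, profile, weights):
    """Congestion cost: weight of the chosen resource times its load."""
    r = profile[i]
    return weights[r] * profile.count(r)

def best_response_dynamics(n_agents, resources, weights):
    """Round-robin best response; terminates because each strict improvement
    decreases the game's exact potential (finite improvement property)."""
    profile = [resources[0]] * n_agents          # everyone starts on resource 0
    changed = True
    while changed:
        changed = False
        for i in range(n_agents):
            best = min(resources,
                       key=lambda r: agent_cost(i, profile[:i] + [r] + profile[i+1:], weights))
            if agent_cost(i, profile[:i] + [best] + profile[i+1:], weights) \
                    < agent_cost(i, profile, weights):
                profile[i] = best                # strict improvement: move
                changed = True
    return profile

# Four agents, two identical resources: the equilibrium is an even 2/2 split.
profile = best_response_dynamics(4, [0, 1], {0: 1.0, 1: 1.0})
```

The point of the (i)/(ii) design criteria above is that equilibria of a game like this can be made to coincide with the system-level optimum, after which any convergent learning rule completes the control design.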

Relevance:

30.00%

Publisher:

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. 
Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance:

30.00%

Publisher:

Abstract:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
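For tiny domains the distribution in question can of course be computed by brute force, which makes the object concrete: the probabilities are the squared magnitudes of the normalized discrete Fourier transform of the function's values. This exponential-time classical computation is only a sketch of the definition; the thesis's claim concerns the absence of an efficient classical sampler:

```python
import cmath

def qft_distribution(f_values):
    """Brute-force classical computation of the Fourier-sampling distribution:
    normalize f as a quantum state, apply the (unitary) DFT, and return the
    squared magnitudes.  Feasible only for tiny domains."""
    N = len(f_values)
    norm = sum(abs(v) ** 2 for v in f_values) ** 0.5
    amps = []
    for y in range(N):
        a = sum(f_values[x] * cmath.exp(2j * cmath.pi * x * y / N)
                for x in range(N)) / (norm * N ** 0.5)
        amps.append(a)
    return [abs(a) ** 2 for a in amps]

# The uniform function Fourier-transforms to a point mass at frequency 0.
probs = qft_distribution([1, 1, 1, 1])
```

Because the DFT (with the 1/√N factor) is unitary, the output is always a valid probability distribution.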

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.

Relevance:

30.00%

Publisher:

Abstract:

We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new proposed algorithm Balia with existing MP-TCP algorithms.
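A caricature of such a fluid model, with window increase coupled across subflows and multiplicative decrease per subflow, can be sketched as follows. This is emphatically not the Balia update rule; the coupling, loss rates, and RTTs are illustrative stand-ins:

```python
def coupled_mptcp(rtt=(1.0, 1.0), loss=(0.01, 0.02), steps=5000, dt=0.01):
    """Toy fluid model of coupled multipath congestion control: each subflow's
    window grows at a rate coupled through the total window and shrinks
    multiplicatively in proportion to a deterministic loss rate."""
    w = [1.0, 1.0]
    for _ in range(steps):
        total = w[0] + w[1]
        for i in range(2):
            rate = w[i] / rtt[i]                    # subflow sending rate
            incr = (1.0 / total) * rate / w[i]      # coupled window increase
            decr = loss[i] * rate * w[i] / 2.0      # multiplicative decrease
            w[i] = max(w[i] + dt * (incr - decr), 0.1)
    return w

# At equilibrium the lower-loss subflow carries the larger window.
w = coupled_mptcp()
```

Analyses like the thesis's study the equilibrium and stability of exactly this kind of ODE system, but for the real update rules of LIA, OLIA, and Balia.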

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results that suggest solving for a globally optimal solution of OPF over a radial network through a second-order cone program (SOCP) or semi-definite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, speeding up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
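The benefit of closed-form ADMM subproblem updates can be illustrated on a minimal consensus problem, min (x−a)²/2 + (z−b)²/2 subject to x = z, where every update is a one-line quadratic minimization. This is a toy instance, not the OPF decomposition itself:

```python
def admm_consensus(a, b, rho=1.0, iters=100):
    """Scaled-form ADMM for min (x-a)^2/2 + (z-b)^2/2  s.t.  x = z.
    Both subproblems are quadratic, so each update is closed form --
    the structural feature the dissertation exploits for distributed OPF."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # argmin_x (x-a)^2/2 + rho/2*(x - z + u)^2
        z = (b + rho * (x + u)) / (1.0 + rho)   # argmin_z (z-b)^2/2 + rho/2*(x - z + u)^2
        u = u + x - z                           # scaled dual update
    return x, z

# The optimum is x = z = (a + b) / 2.
x, z = admm_consensus(0.0, 4.0)
```

When each subproblem instead requires an inner iterative solver, every ADMM iteration becomes expensive; closed-form updates like these are what yield the reported 100x-1000x speedups, at OPF scale rather than in this two-variable toy.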