11 results for "ideal preemptive distribution"

in CaltechTHESIS


Relevance: 30.00%

Abstract:

Let F = Q(ζ + ζ^(-1)) be the maximal real subfield of the cyclotomic field Q(ζ), where ζ is a primitive qth root of unity and q is an odd rational prime. The numbers u_1 = -1, u_k = (ζ^k - ζ^(-k))/(ζ - ζ^(-1)), k = 2, …, p, p = (q-1)/2, are units in F and are called the cyclotomic units. In this thesis the sign distribution of the conjugates in F of the cyclotomic units is studied.

Let G(F/Q) denote the Galois group of F over Q, and let V denote the units in F. For each σ ∈ G(F/Q) and μ ∈ V define a mapping sgn_σ: V → GF(2) by sgn_σ(μ) = 1 iff σ(μ) < 0 and sgn_σ(μ) = 0 iff σ(μ) > 0. Let {σ_1, …, σ_p} be a fixed ordering of G(F/Q). The matrix M_q = (sgn_{σ_j}(u_i)), i, j = 1, …, p, is called the matrix of cyclotomic signatures. The rank of this matrix determines the sign distribution of the conjugates of the cyclotomic units. The matrix of cyclotomic signatures is associated with an ideal in the ring GF(2)[x]/(x^p + 1) in such a way that the rank of the matrix equals the GF(2)-dimension of the ideal. It is shown that if p = (q-1)/2 is a prime and if 2 is a primitive root mod p, then M_q is non-singular. Also let p be arbitrary, let ℓ be a primitive root mod q, and let L = {i | 0 ≤ i ≤ p-1, the least positive residue of ℓ^i mod q is greater than p}. Let H_q(x) ∈ GF(2)[x] be defined by H_q(x) = g.c.d.((Σ_{i ∈ L} x^i)(x+1) + 1, x^p + 1). It is shown that the rank of M_q equals the difference p - deg H_q(x).
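As a concrete illustration (a numerical sketch, not taken from the thesis): under the real embeddings, the conjugate σ_a(u_k) equals sin(2πak/q)/sin(2πa/q) for k ≥ 2 (and σ_a(u_1) = -1 always), so M_q and its GF(2)-rank can be computed directly. The ordering of G(F/Q) and the small elimination routine below are my own choices.

```python
import math

def signature_matrix(q):
    """M_q[i][j] = sgn_{sigma_j}(u_i) for the cyclotomic units u_1..u_p."""
    p = (q - 1) // 2
    M = []
    for k in range(1, p + 1):              # rows: units u_k (u_1 = -1)
        row = []
        for a in range(1, p + 1):          # columns: embeddings sigma_a
            if k == 1:
                val = -1.0                 # u_1 = -1 is negative everywhere
            else:
                val = (math.sin(2 * math.pi * a * k / q)
                       / math.sin(2 * math.pi * a / q))
            row.append(1 if val < 0 else 0)
        M.append(row)
    return M

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), rows packed into integers."""
    rows = [int("".join(map(str, r)), 2) for r in M]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue                       # dependent row
        rank += 1
        lead = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> lead) & 1 else r for r in rows]
    return rank

for q in (5, 7, 11, 13, 23):               # small odd primes
    M = signature_matrix(q)
    print(f"q = {q:2d}: rank M_q = {gf2_rank(M)}  (p = {len(M)})")
```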

Further results are obtained by using the reciprocity theorem of class field theory. The reciprocity maps for a certain abelian extension of F and for the infinite primes in F are associated with the signs of conjugates. The product formula for the reciprocity maps is used to associate the signs of conjugates with the reciprocity maps at the primes which lie above (2). The case when (2) is a prime in F is studied in detail. Let T denote the group of totally positive units in F. Let U be the group generated by the cyclotomic units. Assume that (2) is a prime in F and that p is odd. Let F(2) denote the completion of F at (2) and let V(2) denote the units in F(2). The following statements are shown to be equivalent. 1) The matrix of cyclotomic signatures is non-singular. 2) U ∩ T = U^2. 3) U ∩ F(2)^2 = U^2. 4) V(2)/V(2)^2 = ⟨u_1 V(2)^2⟩ ⊕ … ⊕ ⟨u_p V(2)^2⟩ ⊕ ⟨3 V(2)^2⟩.

The rank of M_q was computed for 5 ≤ q ≤ 929 and the results appear in tables. On the basis of these results and additional calculations the following conjecture is made: if q and p = (q-1)/2 are both primes, then M_q is non-singular.

Relevance: 20.00%

Abstract:

Six topics in incompressible, inviscid fluid flow involving vortex motion are presented. The stability of the unsteady flow field due to the vortex filament expanding under the influence of an axial compression is examined in the first chapter as a possible model of the vortex bursting observed in aircraft contrails. The filament with a stagnant core is found to be unstable to axisymmetric disturbances. For initial disturbances with the form of axisymmetric Kelvin waves, the filament with a uniformly rotating core is neutrally stable, but the compression causes the disturbance to undergo a rapid increase in amplitude. The time at which the increase occurs is, however, later than the observed bursting times, indicating the bursting phenomenon is not caused by this type of instability.

In the second and third chapters the stability of a steady vortex filament deformed by two-dimensional strain and shear flows, respectively, is examined. The steady deformations are in the plane of the vortex cross-section. Disturbances which deform the filament centerline into a wave which does not propagate along the filament are shown to be unstable and a method is described to calculate the wave number and corresponding growth rate of the amplified waves for a general distribution of vorticity in the vortex core.

In Chapter Four exact solutions are constructed for two-dimensional potential flow over a wing with a free ideal vortex standing over the wing. The loci of positions of the free vortex are found and the lift is calculated. It is found that the lift on the wing can be significantly increased by the free vortex.

The two-dimensional trajectories of an ideal vortex pair near an orifice are calculated in Chapter Five. Three geometries are examined, and the criteria for the vortices to travel away from the orifice are determined.
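The orifice calculations depend on the specific geometry, but the underlying kinematics is that of ideal point vortices advected by each other's induced velocity. A minimal free-space sketch under stated assumptions (hypothetical circulations and step size; the orifice walls would enter through image vortices or a conformal map, which this sketch omits):

```python
import math

def velocities(z, gamma):
    """Ideal point-vortex kinematics:
    conj(dz_k/dt) = sum_{j != k} Gamma_j / (2*pi*i*(z_k - z_j))."""
    out = []
    for k, zk in enumerate(z):
        w = sum(g / (2j * math.pi * (zk - zj))
                for j, (zj, g) in enumerate(zip(z, gamma)) if j != k)
        out.append(w.conjugate())
    return out

def advance(z, gamma, dt, steps):
    """Forward-Euler trajectories (small dt, for illustration only)."""
    for _ in range(steps):
        z = [zk + vk * dt for zk, vk in zip(z, velocities(z, gamma))]
    return z

# A counter-rotating pair separated by d translates at speed
# Gamma/(2*pi*d); with Gamma = 1 and d = 1 that is ~0.159 per unit time.
print(advance([0.5j, -0.5j], [1.0, -1.0], dt=0.01, steps=1000))
```

After 1000 steps of 0.01, both vortices should have moved about 1.59 units in x while keeping their separation, as the exact solution predicts.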

Finally, Chapter Six reproduces completely the paper, "Structure of a linear array of hollow vortices of finite cross-section," co-authored with G. R. Baker and P. G. Saffman. Free streamline theory is employed to construct an exact steady solution for a linear array of hollow, or stagnant cored vortices. If each vortex has area A and the separation is L, then there are two possible shapes if A^(1/2)/L is less than 0.38 and none if it is larger. The stability of the shapes to two-dimensional, periodic and symmetric disturbances is considered for hollow vortices. The more deformed of the two possible shapes is found to be unstable, while the less deformed shape is stable.

Relevance: 20.00%

Abstract:

Data were taken in 1979-80 by the CCFRR high energy neutrino experiment at Fermilab. A total of 150,000 neutrino and 23,000 antineutrino charged-current events in the approximate energy range 25 < E_ν < 250 GeV are measured and analyzed. The structure functions F_2 and xF_3 are extracted for three assumptions about R = σ_L/σ_T: R = 0, R = 0.1, and R = a QCD-based expression. Systematic errors are estimated and their significance is discussed. Comparisons of the x and Q^2 behaviour of the structure functions with results from other experiments are made.

We find that statistical errors currently dominate our knowledge of the valence quark distribution, which is studied in this thesis. xF_3 from different experiments has, within errors and apart from level differences, the same dependence on x and Q^2, except for the HPWF results. The CDHS F_2 shows a clear fall-off at low x relative to the CCFRR and EMC results, again apart from level differences which are calculable from cross-sections.

The result for the GLS sum rule is found to be 2.83 ± .15 ± .09 ± .10, where the first error is statistical, the second is an overall level error, and the third covers the rest of the systematic errors. QCD studies of xF_3 to leading and second order have been done. The QCD evolution of xF_3, which is independent of R and the strange sea, does not depend on the gluon distribution, and fits yield

Λ_(LO) = 88^(+163)_(-78) ^(+113)_(-70) MeV

The systematic errors are smaller than the statistical errors. Second-order fits give somewhat different values of Λ, although α_s (at Q^2_0 = 12.6 GeV^2) is not so different.

A fit using the better-determined F_2 in place of xF_3 for x > 0.4, i.e., assuming q̄ = 0 in that region, gives

Λ_(LO) = 266^(+114)_(-104) ^(+85)_(-79) MeV

Again, the statistical errors are larger than the systematic errors. An attempt to measure R was made and the measurements are described. Utilizing the inequality q̄(x) ≥ 0, we find that in the region x > 0.4, R is less than 0.55 at the 90% confidence level.
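For reference, the GLS quantity quoted above is the Gross-Llewellyn Smith sum rule integral, ∫_0^1 F_3 dx = ∫_0^1 (xF_3)/x dx, which counts valence quarks (3 at leading order, reduced by QCD corrections). A toy evaluation with a made-up valence shape (the parametrization and exponents below are illustrative, not the thesis fit):

```python
from scipy.integrate import quad

A, b, c = 5.0, 0.5, 3.0                 # hypothetical valence-shape parameters

def xF3(x):
    return A * x**b * (1 - x)**c        # toy xF_3(x)

# F_3 = xF_3 / x has an integrable x^(b-1) singularity at x = 0,
# which adaptive quadrature handles.
gls, err = quad(lambda x: xF3(x) / x, 0.0, 1.0)
print(f"GLS integral = {gls:.3f}  (quadrature error ~ {err:.1e})")
```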

Relevance: 20.00%

Abstract:

In noncooperative cost-sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
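A small sketch of the two named rules on a single resource (the welfare set function here is hypothetical). The Shapley value averages an agent's marginal contribution over all arrival orders and is budget-balanced; the marginal-contribution rule charges each agent its marginal value with respect to the full set and in general is not:

```python
import math
from itertools import permutations

def shapley(players, W):
    """Average marginal contribution over all arrival orders."""
    shares = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = set()
        for i in order:
            shares[i] += W(coalition | {i}) - W(coalition)
            coalition.add(i)
    n_fact = math.factorial(len(players))
    return {i: s / n_fact for i, s in shares.items()}

def marginal_contribution(players, W):
    full = set(players)
    return {i: W(full) - W(full - {i}) for i in players}

players = (1, 2, 3)
W = lambda S: len(S) ** 2                  # hypothetical welfare function
sv = shapley(players, W)
mc = marginal_contribution(players, W)
print(sv, "sum =", sum(sv.values()))       # sums to W(N) = 9: budget-balanced
print(mc, "sum =", sum(mc.values()))       # sums to 15 != 9: not balanced
```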

Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules guaranteeing equilibrium existence in all welfare-sharing games are generalized weighted Shapley values (GWSVs); the proof exhibits a specific 'worst-case' welfare function that forces GWSV rules to be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, within a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.

We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget-balance that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.

We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget balance against computational tractability in deciding which rule to implement.

Relevance: 20.00%

Abstract:

Long paleoseismic histories are necessary for understanding the full range of behavior of faults, as the most destructive events often have recurrence intervals longer than local recorded history. The Sunda megathrust, the interface along which the Australian plate subducts beneath Southeast Asia, provides an ideal natural laboratory for determining a detailed paleoseismic history over many seismic cycles. The outer-arc islands above the seismogenic portion of the megathrust cyclically rise and subside in response to processes on the underlying megathrust, providing uncommonly good illumination of megathrust behavior. Furthermore, the growth histories of coral microatolls, which record tectonic uplift and subsidence via relative sea level, can be used to investigate the detailed coseismic and interseismic deformation patterns. One particularly interesting area is the Mentawai segment of the megathrust, which has been shown to characteristically fail in a series of ruptures over decades, rather than a single end-to-end rupture. This behavior has been termed a seismic “supercycle.” Prior to the current rupture sequence, which began in 2007, the segment previously ruptured during the 14th century, the late 16th to late 17th century, and most recently during historical earthquakes in 1797 and 1833. In this study, we examine each of these previous supercycles in turn.

First, we expand upon previous analysis of the 1797–1833 rupture sequence with a comprehensive review of previously published coral microatoll data and the addition of a significant amount of new data. We present detailed maps of coseismic uplift during the two great earthquakes and of interseismic deformation during the periods 1755–1833 and 1950–1997 and models of the corresponding slip and coupling on the underlying megathrust. We derive magnitudes of Mw 8.7–9.0 for the two historical earthquakes, and determine that the 1797 earthquake fundamentally changed the state of coupling on the fault for decades afterward. We conclude that while major earthquakes generally do not involve rupture of the entire Mentawai segment, they undoubtedly influence the progression of subsequent ruptures, even beyond their own rupture area. This concept is of vital importance for monitoring and forecasting the progression of the modern rupture sequence.

Turning our attention to the 14th century, we present evidence of a shallow slip event in approximately A.D. 1314, which preceded the “conventional” megathrust rupture sequence. We calculate a suite of slip models, slightly deeper and/or larger than the 2010 Pagai Islands earthquake, that are consistent with the large amount of subsidence recorded at our study site. Sea-level records from older coral microatolls suggest that these events occur at least once every millennium, but likely far less frequently than their great downdip neighbors. The revelation that shallow slip events are important contributors to the seismic cycle of the Mentawai segment further complicates our understanding of this subduction megathrust and our assessment of the region’s exposure to seismic and tsunami hazards.

Finally, we present an outline of the complex intervening rupture sequence that took place in the 16th and 17th centuries, which involved at least five distinct uplift events. We conclude that each of the supercycles had unique features, and all of the types of fault behavior we observe are consistent with highly heterogeneous frictional properties of the megathrust beneath the south-central Mentawai Islands. We conclude that the heterogeneous distribution of asperities produces terminations and overlap zones between fault ruptures, resulting in the seismic “supercycle” phenomenon.

Relevance: 20.00%

Abstract:

A research program was designed (1) to map regional lithological units of the lunar surface based on measurements of spatial variations in spectral reflectance, and (2) to establish the sequence of formation of such lithological units from measurements of the accumulated effects of impacting bodies.

Spectral reflectance data were obtained by scanning luminance variations over the lunar surface at three wavelengths (0.4µ, 0.52µ, and 0.7µ). These luminance measurements were reduced to normalized spectral reflectance values relative to a standard area in Mare Serenitatis. The spectral type of each lunar area was identified from the shape of its reflectance spectrum. From these data, lithological units, or regions of constant color, were identified. The maria fall into two major spectral classes: circular maria like Mare Serenitatis contain S-type or red material, and thin, irregular, expansive maria like Mare Tranquillitatis contain T-type or blue material. Four distinct subtypes of S-type reflectances and two of T-type reflectances exist. As these six subtypes occur in a number of lunar regions, it is concluded that they represent specific types of material rather than some homologous set of a few end members.
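In outline, the reduction amounts to ratioing each scan against the standard area and classifying by the slope of the resulting spectrum. A schematic sketch with invented luminance values (the actual calibration and classification in the thesis are more involved):

```python
WAVELENGTHS = (0.40, 0.52, 0.70)                   # microns

def normalized_reflectance(lum, lum_standard):
    """Ratio each band against the Mare Serenitatis standard area."""
    return [l / s for l, s in zip(lum, lum_standard)]

def spectral_class(r):
    """Redder (S-type) spectra rise toward 0.7 micron relative to the
    standard; bluer (T-type) spectra rise toward 0.4 micron."""
    return "S-type (red)" if r[2] > r[0] else "T-type (blue)"

standard = [0.85, 0.90, 1.00]                      # invented scan values
tranquillitatis = [0.95, 0.91, 0.92]               # invented scan values
r = normalized_reflectance(tranquillitatis, standard)
print([round(v, 3) for v in r], "->", spectral_class(r))
```

With these made-up numbers, Mare Tranquillitatis comes out bluer than the standard, matching its T-type classification above.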

The relative ages, or sequence of formation, of these mare units were established from measurements of the accumulated impacts which have occurred since mare formation. A model was developed which relates the integrated flux of particles which have impacted a surface to the distribution of craters as functions of size and shape. Erosion of craters is caused chiefly by small bodies which produce negligible individual changes in crater shape. Hence the shape of a crater can be used to estimate the total number of small impacts that have occurred since the crater was formed. Relative ages of a surface can then be obtained from measurements of the slopes of the walls of the oldest craters formed on the surface. The results show that different maria and regions within them were emplaced at different times. An approximate absolute time scale was derived from Apollo 11 crystallization ages under an assumption of a constant rate of impacting for the last 4 x 10^9 yrs. Assuming constant flux, the period of mare formation lasted from over 4 x 10^9 yrs ago to about 1.5 x 10^9 yrs ago.

A synthesis of the results of relative age measurements and of spectral reflectance mapping shows that (1) the formation of the lunar maria occurred in three stages; material of only one spectral type was deposited in each stage, (2) two distinct kinds of maria exist, each type distinguished by morphology, structure, gravity anomalies, time of formation, and spectral reflectance type, and (3) individual maria have complicated histories; they contain a variety of lithic units emplaced at different times.

Relevance: 20.00%

Abstract:

In the measurement of the Higgs boson decaying into two photons, the parametrization of an appropriate background model is essential for fitting the Higgs signal mass peak over a continuous background. This diphoton background modeling is crucial in the statistical process of calculating exclusion limits and the significance of observations in comparison to a background-only hypothesis. It is therefore ideal to obtain knowledge of the physical shape of the background mass distribution, as the use of an improper function can lead to biases in the observed limits. Using an Information-Theoretic (I-T) approach for valid inference, we apply the Akaike Information Criterion (AIC) as a measure of the separation of a fitting model from the data. We then implement a multi-model inference ranking method to build a fit model that most closely represents the Standard Model background in 2013 diphoton data recorded by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). Potential applications and extensions of this model-selection technique are discussed with reference to CMS detector performance measurements as well as potential physics analyses at future detectors.
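A minimal sketch of the AIC ranking step under stated assumptions: a toy falling mass spectrum, three candidate background families chosen for illustration (not the CMS candidate set), and the least-squares form of AIC:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
m = np.linspace(100, 180, 160)                        # diphoton mass (GeV)
y = 5000 * np.exp(-0.02 * m) * rng.normal(1.0, 0.02, m.size)  # toy spectrum

candidates = {
    "exponential": (lambda x, a, b: a * np.exp(-b * x),      [5e3, 0.02]),
    "power law":   (lambda x, a, b: a * x ** (-b),           [3e8, 2.8]),
    "quadratic":   (lambda x, a, b, c: a + b * x + c * x**2, [1e3, -10, 0.02]),
}

for name, (f, p0) in candidates.items():
    popt, _ = curve_fit(f, m, y, p0=p0, maxfev=20000)
    rss = float(np.sum((y - f(m, *popt)) ** 2))
    k = len(popt) + 1                                 # parameters + variance
    aic = m.size * np.log(rss / m.size) + 2 * k       # least-squares AIC
    print(f"{name:12s} k = {k}  AIC = {aic:8.1f}")
```

Lower AIC means less estimated information loss; differences between candidates drive the multi-model ranking, with AIC weights over the set giving each model's relative support.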

Relevance: 20.00%

Abstract:

With continuing advances in CMOS technology, feature sizes of modern Silicon chip-sets have gone down drastically over the past decade. In addition to desktops and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels leading to billions of transistors co-existing on a single chip, it also makes these Silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly due to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF or millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high performance RF/mm-wave Silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting edge processes are geared towards digital system implementation and as such there is little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error Silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique which attempts to counter the detrimental effects of these variations, thereby improving both performance and yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate it back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To effectively demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.
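A conceptual sketch of the sense, detect, and actuate loop (the actuator knobs, sensor model, and greedy search below are hypothetical stand-ins for the on-chip blocks, not the dissertation's algorithm):

```python
import itertools

def self_heal(sense, actuate, knobs, target):
    """Sweep actuator settings, keep the best sensed figure of merit,
    and stop early once performance is back within spec."""
    best, best_fom = None, float("-inf")
    for setting in itertools.product(*knobs):
        actuate(setting)
        fom = sense()
        if fom > best_fom:
            best, best_fom = setting, fom
        if fom >= target:
            break                          # healed: performance restored
    actuate(best)
    return best, best_fom

# Toy "chip": a process/environment shift has moved the optimum bias
# and phase settings away from their nominal values.
state = {"bias": 0, "phase": 0}
def actuate(s): state.update(bias=s[0], phase=s[1])
def sense():                               # peaks at bias=3, phase=2
    return 10.0 - (state["bias"] - 3) ** 2 - (state["phase"] - 2) ** 2

print(self_heal(sense, actuate, [range(8), range(8)], target=9.5))
```

A real implementation would use a smarter on-chip search, but the loop structure is the same: sense the figure of merit, compare against spec, actuate.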

We demonstrate a high-power mm-wave segmented power-mixer-array-based transmitter architecture that is capable of generating high-speed, non-constant-envelope modulations at higher efficiencies than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements performed across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase-synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.

Relevance: 20.00%

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc. into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, the comparisons were repeated in a program more capable of highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis up through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend toward ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
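As an illustration of what a free-vibration comparison extracts, the sketch below recovers frequency and damping ratio from a synthetic decay trace via the logarithmic decrement (the signal is generated here, not STEEL or ETABS output):

```python
import numpy as np

zeta_true, f_true = 0.05, 1.2                # damping ratio, frequency (Hz)
t = np.linspace(0.0, 10.0, 5000)
wn = 2 * np.pi * f_true                      # natural circular frequency
x = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)

# locate successive positive peaks of the decaying trace
peaks = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

delta = float(np.mean(np.log(x[peaks[:-1]] / x[peaks[1:]])))  # log decrement
zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)               # damping ratio
freq = 1.0 / float(np.mean(np.diff(t[peaks])))                # damped freq.
print(f"estimated zeta = {zeta:.4f}, frequency = {freq:.3f} Hz")
```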

Following this, a final study was done on Hall's U20 structure [1], where the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.

Relevance: 20.00%

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly-varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future, with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local. In this thesis, we focus on the two ends of this spectrum. In the first half of this thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly-meshed distribution networks. We then apply the results to the fast time-scale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.

The second half (chapters 4 and 5), however, is dedicated to studying local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute a rigorous theoretical basis for future work.
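For concreteness, here's what a piecewise linear volt/var curve of the kind studied looks like (the breakpoints and limits below are illustrative placeholders, not values from the thesis or any standard): each inverter measures its local voltage and commands reactive power through a droop characteristic with a deadband.

```python
def volt_var(v, v_min=0.95, v_lo=0.98, v_hi=1.02, v_max=1.05, q_max=1.0):
    """Piecewise-linear volt/var curve: reactive power command (p.u.)
    as a function of locally measured voltage (p.u.)."""
    if v <= v_min:
        return q_max                        # inject vars: lift low voltage
    if v < v_lo:
        return q_max * (v_lo - v) / (v_lo - v_min)   # ramp down to zero
    if v <= v_hi:
        return 0.0                          # deadband around nominal
    if v < v_max:
        return -q_max * (v - v_hi) / (v_max - v_hi)  # ramp into absorption
    return -q_max                           # absorb vars: pull voltage down

for v in (0.94, 0.96, 1.00, 1.03, 1.06):
    print(f"V = {v:.2f} p.u. -> Q = {volt_var(v):+.2f} p.u.")
```

Roughly speaking, the reverse-engineering question is what objective such a local curve implicitly optimizes, and the forward-engineering question is whether the closed loop it creates converges.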

Relevance: 20.00%

Abstract:

This thesis examines two problems concerned with surface effects in simple molecular systems. The first is the problem associated with the interaction of a fluid with a solid boundary, and the second originates from the interaction of a liquid with its own vapor.

For a fluid in contact with a solid wall, two sets of integro-differential equations, involving the molecular distribution functions of the system, are derived. One of these is a particular form of the well-known Bogolyubov-Born-Green-Kirkwood-Yvon equations. For the second set, the derivation, in contrast with the formulation of the B.B.G.K.Y. hierarchy, is independent of the pair-potential assumption. The density of the fluid, expressed as a power series in the uniform fluid density, is obtained by solving these equations under the requirement that the wall be ideal.

The liquid-vapor interface is analyzed with the aid of equations that describe the density and pair-correlation function. These equations are simplified and then solved by employing the superposition and the low vapor density approximations. The solutions are substituted into formulas for the surface energy and surface tension, and numerical results are obtained for selected systems. Finally, the liquid-vapor system near the critical point is examined by means of the lowest order B.B.G.K.Y. equation.
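For reference, the lowest-order member of the equilibrium B.B.G.K.Y. hierarchy referred to above can be written in its standard Yvon-Born-Green form (assuming a pair potential φ and an external wall potential u; the notation here is generic rather than the thesis's):

```latex
k_B T \,\nabla_{1}\rho^{(1)}(\mathbf{r}_1)
  = -\rho^{(1)}(\mathbf{r}_1)\,\nabla_{1}u(\mathbf{r}_1)
    - \int \rho^{(2)}(\mathbf{r}_1,\mathbf{r}_2)\,
      \nabla_{1}\varphi\!\left(|\mathbf{r}_1-\mathbf{r}_2|\right)\,
      d\mathbf{r}_2
```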