11 results for weighted PageRank in CaltechTHESIS
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems motivated by power systems are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
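As a minimal sketch of the estimation step described above (assuming scikit-learn's GraphicalLasso and synthetic data in place of the thesis's circuit measurements, not the author's modified algorithm), a sparse inverse covariance estimate can be read off as a graph topology:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical stand-in for nodal voltage measurements: n_samples x n_nodes.
rng = np.random.default_rng(0)
true_precision = np.array([[2.0, -0.8, 0.0],
                           [-0.8, 2.0, -0.8],
                           [0.0, -0.8, 2.0]])  # a 3-node chain topology
cov = np.linalg.inv(true_precision)
samples = rng.multivariate_normal(np.zeros(3), cov, size=500)

# Graphical lasso: L1-penalized sparse estimate of the precision matrix.
model = GraphicalLasso(alpha=0.05).fit(samples)
precision = model.precision_

# Read the estimated topology off the nonzero off-diagonal entries.
edges = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(precision[i, j]) > 1e-3]
print("estimated edges:", edges)
```

With a well-conditioned covariance (as here), the nonzero pattern recovers the chain; the thesis's modification targets the ill-conditioned case this sketch does not handle.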
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an operating point of a power network that minimizes the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
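To make the “power over-delivery” notion concrete, a schematic form of the relaxation (our notation, not the thesis's) replaces each nodal balance equality with an inequality:

$$
\text{exact balance: } p_k^{\mathrm{gen}} - p_k^{\mathrm{dem}} = \sum_{l \in \mathcal{N}(k)} p_{kl}
\qquad\longrightarrow\qquad
\text{relaxed: } p_k^{\mathrm{gen}} - p_k^{\mathrm{dem}} \ge \sum_{l \in \mathcal{N}(k)} p_{kl},
$$

where $p_{kl}$ denotes the flow leaving bus $k$ toward neighbor $l$; the network may thus deliver more power than demanded, which is what convexifies the feasible set.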
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation of the power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimization problems, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex-valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks is polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
The Daya Bay Reactor Antineutrino Experiment observed the disappearance of reactor $\bar{\nu}_e$ from six $2.9~GW_{th}$ reactor cores in Daya Bay, China. The experiment consists of six functionally identical $\bar{\nu}_e$ detectors, which detect $\bar{\nu}_e$ by inverse beta decay using a total of about 120 metric tons of Gd-loaded liquid scintillator as the target volume. These $\bar{\nu}_e$ detectors were installed in three underground experimental halls, two near halls and one far hall, under the mountains near Daya Bay, with overburdens of 250 m.w.e., 265 m.w.e., and 860 m.w.e. and flux-weighted baselines of 470 m, 576 m, and 1648 m. A total of 90179 $\bar{\nu}_e$ candidates were observed in the six detectors over a period of 55 days: 57549 at the Daya Bay near site, 22169 at the Ling Ao near site, and 10461 at the far site. By performing a rate-only analysis, the value of $\sin^2 2\theta_{13}$ was determined to be $0.092 \pm 0.017$.
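For context, a rate-only determination compares the observed far-site deficit against the standard two-flavor survival probability (the textbook oscillation formula, not a result specific to this thesis):

$$
P_{\bar{\nu}_e \to \bar{\nu}_e} \approx 1 - \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{1.267\,\Delta m^2_{31}\,[\mathrm{eV}^2]\,L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right),
$$

where $L$ is the flux-weighted baseline and $E$ the antineutrino energy; fitting only the event rates at the three halls yields $\sin^2 2\theta_{13}$.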
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; canonical examples are the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget-balance that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization—all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose—they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
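For readers unfamiliar with the baseline rule, the sketch below computes the ordinary (unweighted) Shapley value by direct enumeration—a special case of the GWSV family discussed above; the welfare function shown is a hypothetical example, not one from the thesis:

```python
from itertools import permutations

def shapley_values(players, welfare):
    """Unweighted Shapley value: average each player's marginal
    contribution over all orderings (exponential cost; illustration only)."""
    values = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = welfare(frozenset(coalition))
            coalition.add(p)
            values[p] += welfare(frozenset(coalition)) - before
    return {p: v / len(orderings) for p, v in values.items()}

# Hypothetical local welfare function on three agents (depends on coalition size).
def welfare(S):
    return {0: 0.0, 1: 2.0, 2: 3.0, 3: 4.0}[len(S)]

# Budget-balanced: the shares sum to the welfare of the grand coalition (4.0).
print(shapley_values([0, 1, 2], welfare))
```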
Abstract:
This thesis comprises three chapters, each of which is concerned with properties of allocational mechanisms that include voting procedures as part of their operation. The theme of interaction between economic and political forces recurs in the three chapters, as described below.
Chapter One demonstrates existence of a non-controlling interest shareholders' equilibrium for a stylized one-period stock market economy with fewer securities than states of the world. The economy has two decision mechanisms: owners vote to change firms' production plans across states, fixing shareholdings; and individuals trade shares and the current production/consumption good, fixing production plans. A shareholders' equilibrium is a production plan profile together with a shares/current-good allocation that is stable under both mechanisms. In equilibrium, no (Kramer direction-restricted) plan revision is supported by a share-weighted majority, and there exists no Pareto-superior reallocation.
Chapter Two addresses efficient management of stationary-site, fixed-budget, partisan voter registration drives. Sufficient conditions are obtained for unique optimal registrar deployment within contested districts. Each census tract is assigned an index of expected net plurality return to registration investment, computed from estimates of registration, partisanship, and turnout. Optimal registration intensity is a logarithmic transformation of a tract's index. These conditions are tested using a merged data set that includes both census variables and Los Angeles County Registrar data from several 1984 Assembly registration drives. Marginal registration spending benefits, registrar compensation, and the general campaign problem are also discussed.
The last chapter considers social decision procedures at a higher level of abstraction. Chapter Three analyzes the structure of decisive coalition families, given a quasitransitive-valued social decision procedure satisfying the universal domain and IIA axioms. By identifying those alternatives X* ⊆ X on which the Pareto principle fails, imposition in the social ranking is characterized. Every coalition is weakly decisive for X* over X~X*, and weakly antidecisive for X~X* over X*; therefore, alternatives in X~X* are never socially ranked above X*. Repeated filtering of the alternatives causing Pareto failure shows that states in X^{n*}~X^{(n+1)*} are never socially ranked above X^{(n+1)*}. Limiting results of iterated application of the *-operator are also discussed.
Abstract:
Isotope dilution thorium and uranium analyses of the Harleton chondrite show a larger scatter than previously observed in equilibrated ordinary chondrites (EOC). The linear correlation of Th/U with 1/U in Harleton (and all EOC data) is produced by variation in the chlorapatite to merrillite mixing ratio. Apatite variations control the U concentrations. Phosphorus variations are compensated by inverse variations in U to preserve the Th/U vs. 1/U correlation. Because the Th/U variations reflect phosphate sampling, a weighted Th/U average should converge to an improved solar system Th/U. We obtain Th/U = 3.53 (1σ mean = 0.10), significantly lower and more precise than previous estimates.
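As a simple illustration of the averaging step (not the thesis's actual data or weighting scheme), an inverse-variance weighted mean would look like:

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard error."""
    w = 1.0 / np.asarray(sigmas) ** 2
    mean = np.sum(w * values) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err

# Hypothetical Th/U measurements with 1-sigma uncertainties.
thu = [3.41, 3.60, 3.55, 3.49]
sig = [0.12, 0.15, 0.09, 0.20]
print(weighted_mean(thu, sig))
```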
To test whether apatite also produces Th/U variation in CI and CM chondrites, we performed P analyses on the solutions from leaching experiments of Orgueil and Murchison meteorites.
A linear Th/U vs. 1/U correlation in CI can be explained by redistribution of hexavalent U by aqueous fluids into carbonates and sulfates.
Unlike CI and EOC, whole-rock Th/U variations in CMs are mostly due to Th variations. A Th/U vs. 1/U linear correlation suggested by previous data for CMs is not real. We distinguish four components responsible for the whole-rock Th/U variations: (1) P- and actinide-depleted matrix containing small amounts of U-rich carbonate/sulfate phases (similar to CIs); (2) CAIs and (3) chondrules, which are major reservoirs for actinides; and (4) an easily leachable phase of high Th/U, likely carbonate produced by CAI alteration. Phosphates play a minor role as actinide and P carrier phases in CM chondrites.
Using our Th/U and minimum galactic ages from halo globular clusters, we calculate relative supernova production rates for 232Th/238U and 235U/238U for different models of r-process nucleosynthesis. For uniform galactic production, the beginning of r-process nucleosynthesis must be less than 13 Gyr ago. Exponentially decreasing production is also consistent with a 13 Gyr age, but very slow decay is required (decay time constant greater than 35 Gyr), approaching uniform production. The 15 Gyr Galaxy requires either fast initial production growth (infall time constant less than 0.5 Gyr) followed by a very slow decrease (decay time constant greater than 100 Gyr), or the fastest possible decrease (≈8 Gyr) preceded by slow infall (≈7.5 Gyr).
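For orientation, the uniform-production case rests on a standard relation (our notation, not the thesis's specific model): if isotope $i$ with decay constant $\lambda_i$ is produced at a constant rate $P_i$ over a duration $T$, its present abundance is

$$
N_i(T) = \frac{P_i}{\lambda_i}\left(1 - e^{-\lambda_i T}\right),
$$

so the measured $^{232}\mathrm{Th}/^{238}\mathrm{U}$, together with an assumed production ratio, constrains $T$.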
Abstract:
This thesis introduces new tools for geometric discretization in computer graphics and computational physics. Our work builds upon the duality between weighted triangulations and power diagrams to provide concise yet expressive discretizations of manifolds and differential operators. Our exposition begins with a review of the construction of power diagrams, followed by novel optimization procedures to fully control the local volume and spatial distribution of power cells. Based on this power diagram framework, we develop a new family of discrete differential operators, an effective stippling algorithm, as well as a new fluid solver for Lagrangian particles. We then turn our attention to applications in geometry processing. We show that orthogonal primal-dual meshes augment the notion of local metric in non-flat discrete surfaces. In particular, we introduce a reduced set of coordinates for the construction of orthogonal primal-dual structures of arbitrary topology, and provide alternative metric characterizations through convex optimizations. We finally leverage these novel theoretical contributions to generate well-centered primal-dual meshes, sphere packing on surfaces, and self-supporting triangulations.
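A minimal sketch of the weighted-triangulation/power-diagram duality mentioned above, via the classic lifting construction (standard computational geometry, not the thesis's optimization machinery; the points and weights are hypothetical):

```python
import numpy as np
from scipy.spatial import ConvexHull

def regular_triangulation(points, weights):
    """Weighted Delaunay (regular) triangulation of 2D points: lift each
    point p with weight w to z = |p|^2 - w, take the 3D convex hull, and
    keep the downward-facing facets. The power diagram is its dual."""
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    lifted = np.column_stack([pts, (pts ** 2).sum(axis=1) - w])
    hull = ConvexHull(lifted)
    lower = hull.equations[:, 2] < 0  # facets whose outward normal points down
    return hull.simplices[lower]

# Hypothetical example: four points, one carrying a larger weight.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
wts = [0.0, 0.0, 0.0, 0.3]
print(regular_triangulation(pts, wts))
```

Raising a point's weight enlarges its power cell, which is the handle the thesis's volume-control optimizations exploit.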
Abstract:
This thesis is a theoretical work on the space-time dynamic behavior of a nuclear reactor without feedback. Diffusion theory with G energy groups is used.
In the first part the accuracy of the point kinetics (lumped-parameter description) model is examined. The fundamental approximation of this model is the splitting of the neutron density into a product of a known function of space and an unknown function of time; the properties of the system can then be averaged in space through the use of appropriate weighting functions, and as a result a set of ordinary differential equations is obtained for the description of time behavior. It is clear that changes of the shape of the neutron-density distribution due to space-dependent perturbations are neglected. This introduces an error in the eigenvalues, and it is for this error that bounds are derived. This is done by using the method of weighted residuals to reduce the original eigenvalue problem to that of a real asymmetric matrix. Gershgorin-type theorems are then used to find discs in the complex plane in which the eigenvalues are contained. The radii of the discs depend on the perturbation in a simple manner.
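The disc-bounding step is easy to illustrate. A minimal sketch (the matrix below is a hypothetical stand-in for the reduced weighted-residuals matrix, not one from the thesis):

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs; by Gershgorin's theorem every
    eigenvalue of A lies in the union of these discs in the complex plane."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

# Hypothetical reduced matrix from a weighted-residuals discretization.
A = np.array([[-1.0, 0.1, 0.0],
              [0.2, -2.0, 0.1],
              [0.0, 0.3, -3.0]])
for c, r in gershgorin_discs(A):
    print(f"disc: center {c}, radius {r:.2f}")
print("eigenvalues:", np.linalg.eigvals(A))
```

The off-diagonal entries grow with the perturbation, so the disc radii bound the eigenvalue error in the simple manner the abstract describes.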
In the second part the effect of delayed neutrons on the eigenvalues of the group-diffusion operator is examined. The delayed neutrons cause a shifting of the prompt-neutron eigenvalues and the appearance of the delayed eigenvalues. Using a simple perturbation method, this shifting is calculated and the delayed eigenvalues are predicted with good accuracy.
Abstract:
The δD values of nitrated cellulose from a variety of trees covering a wide geographic range have been measured. These measurements have been used to ascertain which factors are likely to cause δD variations in cellulose C-H hydrogen.
It is found that a primary source of tree δD variation is the δD variation of the environmental precipitation. Superimposed on this are isotopic variations caused by the transpiration of the leaf water incorporated by the tree. The magnitude of this transpiration effect appears to be related to relative humidity.
Within a single tree, it is found that the hydrogen isotope variations which occur for a ring sequence in one radial direction may not be exactly the same as those which occur in a different direction. Such heterogeneities appear most likely to occur in trees with asymmetric ring patterns that contain reaction wood. In the absence of reaction wood such heterogeneities do not seem to occur. Thus, hydrogen isotope analyses of tree ring sequences should be performed on trees which do not contain reaction wood.
Comparisons of tree δD variations with variations in local climate are performed on two levels: spatial and temporal. It is found that the δD values of 20 North American trees from a wide geographic range are reasonably well-correlated with the corresponding average annual temperature. The correlation is similar to that observed for a comparison of the δD values of annual precipitation of 11 North American sites with annual temperature. However, it appears that this correlation is significantly disrupted by trees which grew on poorly drained sites such as those in stagnant marshes. Therefore, site selection may be important in choosing trees for climatic interpretation of δD values, although proper sites do not seem to be uncommon.
The measurement of δD values in 5-year samples from the tree ring sequences of 13 trees from 11 North American sites reveals a variety of relationships with local climate. As it was for the spatial δD vs. climate comparison, site selection is also apparently important for temporal tree δD vs. climate comparisons. Again, it seems that poorly-drained sites are to be avoided. For nine trees from different "well-behaved" sites, it was found that the local climatic variable best related to the δD variations was not the same for all sites.
Two of these trees showed a strong negative correlation with the amount of local summer precipitation. Consideration of factors likely to influence the isotopic composition of summer rain suggests that rainfall intensity may be important. The higher the intensity, the lower the δD value. Such an effect might explain the negative correlation of δD vs. summer precipitation amount for these two trees. A third tree also exhibited a strong correlation with summer climate, but in this instance it was a positive correlation of δD with summer temperature.
The remaining six trees exhibited the best correlation between δD values and local annual climate. However, in none of these six cases was it annual temperature that was the most important variable. In fact annual temperature commonly showed no relationship at all with tree δD values. Instead, it was found that a simple mass balance model incorporating two basic assumptions yielded parameters which produced the best relationships with tree δD values. First, it was assumed that the δD values of these six trees reflected the δD values of annual precipitation incorporated by these trees. Second, it was assumed that the δD value of the annual precipitation was a weighted average of two seasonal isotopic components: summer and winter. Mass balance equations derived from these assumptions yielded combinations of variables that commonly showed a relationship with tree δD values where none had previously been discerned.
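Schematically, following the two assumptions above (our notation), the mass balance reads

$$
\delta D_{\mathrm{annual}} = f_s\,\delta D_{\mathrm{summer}} + (1 - f_s)\,\delta D_{\mathrm{winter}},
$$

where $f_s$ is the summer fraction of annual precipitation; combinations of the seasonal amounts and compositions, rather than annual temperature, are the parameters that track tree δD.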
It was found for these "well-behaved" trees that not all sample intervals in a δD vs. local climate plot fell along a well-defined trend. These departures from the local δD vs. climate norm were defined as "anomalous". Some of these anomalous intervals were common to trees from different locales. When such widespread commonality of an anomalous interval occurred, it was observed that the interval corresponded to an interval in which drought had existed in the North American Great Plains.
Consequently, there appears to be a combination of both local and large-scale climatic information in the δD variations of tree cellulose C-H hydrogen.
Abstract:
Thermoelectric materials have attracted significant attention for their ability to convert waste heat directly to electricity with no moving parts. A resurgence in thermoelectrics research has led to significant enhancements in the thermoelectric figure of merit, zT, even for materials that were already well studied. This thesis approaches thermoelectric zT optimization by developing a detailed understanding of the electronic structure using a combination of electronic/thermoelectric properties, optical properties, and ab-initio computed electronic band structures. This is accomplished by applying these techniques to three important classes of thermoelectric materials: IV-VI materials (the lead chalcogenides), half-Heuslers (XNiSn, where X = Zr, Ti, or Hf), and CoSb3 skutterudites.
In the IV-VI materials (PbTe, PbSe, PbS) I present a shifting temperature-dependent optical absorption edge which correlates well with the computed ab-initio molecular dynamics result. Contrary to prior literature that suggests convergence of the primary and secondary bands at 400 K, I suggest higher convergence temperatures of 700, 900, and 1000 K for PbTe, PbSe, and PbS, respectively. This finding can help guide electronic-property modelling by providing a concrete value for the band gap and valence band offset as a function of temperature.
Another important thermoelectric material, ZrNiSn (half-Heusler), is analyzed for both its optical and electronic properties; transport properties indicate a largely different band gap depending on whether the material is doped n-type or p-type. By measuring and reporting the optical band gap value of 0.13 eV, I resolve the discrepancy in the gap calculated from electronic properties (maximum Seebeck and resistivity) by correlating these estimates to the electron-to-hole weighted mobility ratio, A, in narrow-gap materials (A is found to be approximately 5.0 in ZrNiSn).
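For reference, the standard Goldsmid–Sharp relation underlying such transport-based gap estimates (a textbook result, not specific to this thesis) is

$$
E_g \approx 2\,e\,|S|_{\max}\,T_{\max},
$$

where $|S|_{\max}$ is the maximum thermopower and $T_{\max}$ the temperature at which it occurs; a weighted mobility ratio $A$ far from unity skews this estimate, which is the discrepancy resolved above.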
I also show that CoSb3 contains multiple conduction bands that contribute to the thermoelectric properties. These bands are also observed to shift towards each other with temperature, eventually reaching effective convergence for T > 500 K. This implies that the electronic structure in CoSb3 is critically important (and possibly engineerable) with regard to its high thermoelectric figure of merit.
Abstract:
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
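In schematic form (our notation), a summation rule replaces the full lattice sum with a weighted quadrature over a set of sampling atoms $\mathcal{S}$:

$$
E(\mathbf{q}) = \sum_{a=1}^{N} E_a(\mathbf{q}) \;\approx\; \sum_{b \in \mathcal{S}} w_b\, E_b(\mathbf{q}), \qquad |\mathcal{S}| \ll N,
$$

where $E_a$ is the site energy of atom $a$ and the weights $w_b$ define the particular rule; the energy and spurious-force errors analyzed above measure how well this quadrature reproduces the exact sum.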
Abstract:
The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel’dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration compared to analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flow of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c refers to the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.
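Schematically (standard definitions, our notation), the model ties the pressure profile to the total mass via hydrostatic equilibrium, and the quoted limit constrains the nonthermal share of that pressure:

$$
\frac{dP_{\mathrm{tot}}}{dr} = -\rho_{\mathrm{gas}}\,\frac{G\,M(<r)}{r^2}, \qquad
f_{\mathrm{nt}}(r) = \frac{P_{\mathrm{nt}}(r)}{P_{\mathrm{tot}}(r)}, \qquad
f_{\mathrm{nt}}(r_{500c}) < 0.11 \;\; (95\%\ \mathrm{CL}).
$$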
The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14 arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well-suited for studies of dusty star forming galaxies, galaxy clusters via the SZ effect, and galactic star formation.
MUSIC employs a number of novel detector technologies: broadband phased-arrays of slot dipole antennas for beam formation, on-chip lumped element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light to electric signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and frequency of the resonator. This is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large scale, frequency domain multiplexing combined with relatively simple fabrication makes MKIDs a promising low temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast microsecond response time and modest energy resolution.
In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible. MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model.
Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading. The measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz). Fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams. The electronics noise is removed by using probe signals centered off-resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
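A minimal sketch of the template-based atmospheric removal described above (synthetic data and uniform weights, assumed for illustration; not the thesis's actual pipeline):

```python
import numpy as np

def remove_atmosphere(timestreams, weights):
    """Common-mode (atmospheric) template removal: build a template from
    a weighted average of all detector timestreams, then fit and subtract
    each detector's response to it. timestreams: (n_det, n_samp)."""
    d = np.asarray(timestreams, dtype=float)
    w = np.asarray(weights, dtype=float)
    template = (w[:, None] * d).sum(axis=0) / w.sum()
    # Least-squares coefficient of each detector against the template.
    coeff = d @ template / (template @ template)
    return d - np.outer(coeff, template)

# Hypothetical data: 3 detectors sharing a common slow atmospheric drift.
rng = np.random.default_rng(1)
n_samp = 6000
atm = np.cumsum(rng.normal(0, 0.01, n_samp))  # slow 1/f-like drift
data = np.stack([g * atm + rng.normal(0, 0.05, n_samp) for g in (0.8, 1.0, 1.2)])
cleaned = remove_atmosphere(data, weights=np.ones(3))
print("rms before:", data.std(), "after:", cleaned.std())
```

Because the atmosphere is (nearly) common-mode across detectors while the intrinsic detector noise is not, subtracting the fitted template suppresses the low-frequency noise while leaving the per-detector signal band largely intact.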