4 results for low degree graph in CaltechTHESIS
Abstract:
Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
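To make the degree-preservation property concrete (this is the standard general statement, not the thesis's specific parameter choices): if the sampled curve is C(t) = (c_1(t), ..., c_m(t)) with deg c_j ≤ d in every coordinate, and f ∈ F_q[x_1, ..., x_m] has total degree at most D, then

\[
\deg_t f\bigl(C(t)\bigr) \;\le\; d\,D,
\]

so the restriction of f to the curve is a univariate polynomial whose degree is still small whenever d and D are.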
The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], where curve samplers with near-optimal randomness complexity were obtained.
In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^{O(1)} in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
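For orientation on the randomness budget, below is a minimal sketch of the naive curve sampler over F_q^m (not the thesis's construction; the function name and the restriction to prime q are assumptions made here). It draws the d+1 coefficient vectors of a random degree-d curve and returns the q sampled points, spending about (d+1)·m·log2(q) random bits, i.e. roughly (d+1)·log N, versus the O(log N + log(1/δ)) target discussed above.

```python
import random

def naive_curve_sampler(q, m, d, rng=random):
    """Return the q sample points of a uniformly random degree-d curve
    C: F_q -> F_q^m (q assumed prime so that arithmetic mod q is field arithmetic).

    Randomness cost: (d+1)*m*ceil(log2 q) bits -- the naive baseline that
    randomness-efficient curve samplers improve upon.
    """
    # Random coefficient vectors c_0, ..., c_d, each in F_q^m.
    coeffs = [[rng.randrange(q) for _ in range(m)] for _ in range(d + 1)]

    def C(t):
        # Evaluate each coordinate polynomial at t by Horner's rule.
        point = [0] * m
        for c in reversed(coeffs):
            point = [(p * t + ci) % q for p, ci in zip(point, c)]
        return tuple(point)

    return [C(t) for t in range(q)]

# Example: degree-2 curves in F_101^3, so the domain size is N = 101**3.
samples = naive_curve_sampler(q=101, m=3, d=2)
```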
Abstract:
The Earth's largest geoid anomalies occur at the lowest spherical harmonic degrees, or longest wavelengths, and are primarily the result of mantle convection. Thermal density contrasts due to convection are partially compensated by boundary deformations due to viscous flow whose effects must be included in order to obtain a dynamically consistent model for the geoid. These deformations occur rapidly with respect to the timescale for convection, and we have analytically calculated geoid response kernels for steady-state, viscous, incompressible, self-gravitating, layered Earth models which include the deformation of boundaries due to internal loads. Both the sign and magnitude of geoid anomalies depend strongly upon the viscosity structure of the mantle as well as the possible presence of chemical layering.
Correlations of various global geophysical data sets with the observed geoid can be used to construct theoretical geoid models which constrain the dynamics of mantle convection. Surface features such as topography and plate velocities are not obviously related to the low-degree geoid, with the exception of subduction zones, which are characterized by geoid highs (degrees 4-9). Recent models for seismic heterogeneity in the mantle provide additional constraints, and much of the low-degree (2-3) geoid can be attributed to seismically inferred density anomalies in the lower mantle. The Earth's largest geoid highs are underlain by low density material in the lower mantle, thus requiring compensating deformations of the Earth's surface. A dynamical model for whole mantle convection with a low viscosity upper mantle can explain these observations and successfully predicts more than 80% of the observed geoid variance.
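For reference, statements such as "predicts more than 80% of the observed geoid variance" are usually quantified with the standard spherical-harmonic degree correlation and variance reduction; the definitions below are the conventional ones, not necessarily the exact formulas used in the thesis:

\[
r_\ell = \frac{\sum_{m=-\ell}^{\ell} A_{\ell m}\,B_{\ell m}}
{\left(\sum_{m} A_{\ell m}^{2}\right)^{1/2}\left(\sum_{m} B_{\ell m}^{2}\right)^{1/2}},
\qquad
\mathrm{VR} = 1 - \frac{\sum_{\ell,m}\left(A_{\ell m}-B_{\ell m}\right)^{2}}{\sum_{\ell,m} A_{\ell m}^{2}},
\]

where A_lm are the observed geoid coefficients and B_lm those of the dynamical model; explaining more than 80% of the variance over the low degrees considered corresponds to VR > 0.8.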
Temperature variations associated with density anomalies in the mantle cause lateral viscosity variations whose effects are not included in the analytical models. However, perturbation theory and numerical tests show that broad-scale lateral viscosity variations are much less important than radial variations; in this respect, geoid models, which depend upon steady-state surface deformations, may provide more reliable constraints on mantle structure than inferences from transient phenomena such as postglacial rebound. Stronger, smaller-scale viscosity variations associated with mantle plumes and subducting slabs may be more important. On the basis of numerical modelling of low viscosity plumes, we conclude that the global association of geoid highs (after slab effects are removed) with hotspots and, perhaps, mantle plumes, is the result of hot, upwelling material in the lower mantle; this conclusion does not depend strongly upon plume rheology. The global distribution of hotspots and the dominant, low-degree geoid highs may correspond to a dominant mode of convection stabilized by the ancient Pangean continental assemblage.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled as approximately sparse, and exploiting this fact can be very beneficial. This has led to immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically: (i) focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model; (ii) we show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously; (iii) we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For (i) and (ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For (i) and (iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
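As a purely illustrative companion to item (i), the sketch below solves the lasso, the l1-regularized convex relaxation of sparse recovery, by proximal gradient descent (ISTA); dropping the l1 term leaves ordinary least squares. The solver choice, function names, and toy problem sizes are assumptions made here, not the formulation analyzed in the thesis.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(A, b, lam, n_iters=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (ISTA).

    A plain least-squares solve would use only the quadratic term; the l1
    penalty is what encodes the sparse signal model.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse recovery: 200 unknowns, only 5 nonzero, 80 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = lasso_ista(A, b, lam=0.02)
```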
Abstract:
The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity.
Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). GBM's comparably coarse error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes rather than the standard mechanism of internal shocks within an ultra-relativistic jet. On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.
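As one hedged illustration of what "sky locations uncertain by hundreds of square degrees" means operationally, the sketch below (not code from the thesis's pipeline; the function name and toy map are assumptions) computes the smallest sky area containing 90% of a posterior probability sky map, the kind of credible region a rapid Bayesian localization code reports to optical follow-up teams.

```python
import numpy as np

def credible_area(prob, pixel_area_deg2, level=0.9):
    """Smallest sky area (deg^2) containing `level` of the posterior probability.

    `prob` is a per-pixel posterior probability map summing to 1;
    `pixel_area_deg2` is the area of one pixel in square degrees.
    """
    order = np.argsort(prob)[::-1]          # most probable pixels first
    cum = np.cumsum(prob[order])
    n_pixels = np.searchsorted(cum, level) + 1
    return n_pixels * pixel_area_deg2

# Toy example: a 10000-pixel map with ~4.1 deg^2 pixels (whole sky ~41253 deg^2).
rng = np.random.default_rng(1)
prob = rng.exponential(size=10000)
prob /= prob.sum()
area_90 = credible_area(prob, pixel_area_deg2=41253.0 / 10000, level=0.9)
```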