9 results for variance

in CaltechTHESIS


Relevance: 10.00%

Abstract:

Advances in optical techniques have enabled many breakthroughs in biology and medicine. However, light scattering by biological tissues remains a great obstacle, restricting the use of optical methods to thin ex vivo sections or superficial layers in vivo. In this thesis, we present two related methods that overcome the optical depth limit—digital time reversal of ultrasound encoded light (digital TRUE) and time reversal of variance-encoded light (TROVE). These two techniques share the same principle of using acousto-optic beacons for time reversal optical focusing within highly scattering media such as biological tissues. Ultrasound, unlike light, is not significantly scattered in soft biological tissues, allowing for ultrasound focusing. In addition, a fraction of the scattered optical wavefront that passes through the ultrasound focus is frequency-shifted via the acousto-optic effect, essentially creating a virtual source of frequency-shifted light within the tissue. The scattered ultrasound-tagged wavefront can be selectively measured outside the tissue and time-reversed to converge at the location of the ultrasound focus, enabling optical focusing within deep tissues. In digital TRUE, we time reverse ultrasound-tagged light with an optoelectronic time reversal device (the digital optical phase conjugate mirror, DOPC). The use of the DOPC enables high optical gain, allowing for high intensity optical focusing and focal fluorescence imaging in thick tissues at a lateral resolution of 36 µm by 52 µm. The resolution of the TRUE approach is fundamentally limited by the wavelength of the ultrasound. The ultrasound focus (~ tens of microns wide) usually contains hundreds to thousands of optical modes, such that the scattered wavefront measured is a linear combination of the contributions of all these optical modes.
In TROVE, we make use of our ability to digitally record, analyze and manipulate the scattered wavefront to demix the contributions of these spatial modes using variance encoding. In essence, we encode each spatial mode inside the scattering sample with a unique variance, allowing us to computationally derive the time reversal wavefront that corresponds to a single optical mode. In doing so, we uncouple the system resolution from the size of the ultrasound focus, demonstrating optical focusing and imaging between highly diffusing samples at an unprecedented, speckle-scale lateral resolution of ~ 5 µm. Our methods open up the possibility of fully exploiting the prowess and versatility of biomedical optics in deep tissues.
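The linear-algebra intuition behind variance encoding can be sketched numerically. In the toy model below (the dimensions, frame count, and doubling variance assignment are all illustrative assumptions, not the thesis's actual TROVE reconstruction), each focus mode's amplitude fluctuates with a distinct variance; the leading eigenvector of the recorded field covariance then approximates the output speckle pattern of the highest-variance mode, whose phase conjugate would refocus on that single mode.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 400, 10, 2000   # camera pixels, optical modes in the focus, recorded frames (assumed)

# toy "transmission" columns: the output speckle field produced by each focus mode
T = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
T /= np.linalg.norm(T, axis=0)                     # unit-norm columns

sigma2 = 2.0 ** np.arange(M)                       # a distinct variance encoded on each mode
A = np.sqrt(sigma2)[:, None] * (rng.normal(size=(M, K))
                                + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
E = T @ A                                          # K measured speckle fields

C = E @ E.conj().T / K                             # empirical covariance ~ T diag(sigma2) T^H
_, V = np.linalg.eigh(C)
v = V[:, -1]                                       # leading eigenvector

overlap = abs(np.vdot(T[:, -1], v))                # fidelity w.r.t. the highest-variance mode
print(f"overlap with the highest-variance mode: {overlap:.3f}")
```

Because the speckle columns of a random scattering medium are nearly orthogonal, the covariance is close to a sum of rank-one terms weighted by the encoded variances, and eigendecomposition picks out one mode at a time.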

Relevance: 10.00%

Abstract:

Cosmic birefringence (CB)---a rotation of photon-polarization plane in vacuum---is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB-rotation angle larger than a degree.

In this thesis, we develop a minimum-variance--estimator formalism for reconstructing direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with incoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of a cross-correlation between CMB temperature and the rotation for detecting the CB signal, for different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.
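The effect being estimated can be illustrated at the level of individual polarization amplitudes. The sketch below (the angle, sample size, and simple moment estimator are illustrative; the thesis's estimator is the full-sky minimum-variance construction) shows how a rotation of the polarization plane converts E-modes into B-modes and how the angle can be read back from the induced EB correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = np.deg2rad(0.5)                 # a hypothetical uniform rotation angle
E = rng.normal(size=100_000)            # toy E-mode amplitudes; primordial B assumed zero

# rotating the polarization plane by alpha mixes E and B at angle 2*alpha:
E_rot = E * np.cos(2 * alpha)
B_rot = E * np.sin(2 * alpha)

# without rotation <EB> = 0; with it, <E'B'> = (1/2) sin(4a) <EE> and
# <E'E'> - <B'B'> = cos(4a) <EE>, so the angle can be read off directly:
alpha_hat = 0.25 * np.arctan2(2 * np.mean(E_rot * B_rot),
                              np.mean(E_rot ** 2) - np.mean(B_rot ** 2))
print(f"recovered rotation: {np.rad2deg(alpha_hat):.3f} deg")  # → 0.500 deg
```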

We then consider other parity-violating physics beyond standard models---in particular, a chiral inflationary-gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance--limited experiment would be capable of detecting only a large parity violation. In case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.

We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should start approaching interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.

In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that PDFs are insensitive to details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of ionized bubbles.

Finally, we discuss prospects for related future investigations.

Relevance: 10.00%

Abstract:

Crustal structure in Southern California is investigated using travel times from over 200 stations and thousands of local earthquakes. The data are divided into two sets of first arrivals representing a two-layer crust. The Pg arrivals have paths that refract at depths near 10 km and the Pn arrivals refract along the Moho discontinuity. These data are used to find lateral and azimuthal refractor velocity variations and to determine refractor topography.
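The two-layer geometry implies the textbook head-wave travel-time relations. A minimal sketch (the velocities and crustal thickness below are illustrative assumptions, not values from the thesis) shows where the Moho-refracted Pn overtakes the direct arrival:

```python
import numpy as np

v1, v2, h = 6.2, 7.8, 30.0   # crustal velocity, Moho (Pn) velocity [km/s], crustal thickness [km]

def t_direct(x):
    """Direct (Pg-like) arrival for a surface source and receiver."""
    return x / v1

def t_head(x):
    """Head wave refracted along the Moho (Pn): slope 1/v2 plus an intercept time."""
    return x / v2 + 2 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)

# crossover distance beyond which Pn is the first arrival:
x_cross = 2 * h * np.sqrt((v2 + v1) / (v2 - v1))
print(f"crossover distance: {x_cross:.1f} km")
```

Beyond the crossover distance the first-arrival picks sample the Moho refractor, which is why the Pg and Pn data sets can be separated and inverted independently.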

In Chapter 2 the Pn raypaths are modeled using linear inverse theory. This enables statistical verification that static delays, lateral slowness variations and anisotropy are all significant parameters. However, because of the inherent size limitations of inverse theory, the full array data set could not be processed and the possible resolution was limited. The tomographic backprojection algorithm developed for Chapters 3 and 4 avoids these size problems. This algorithm allows us to process the data sequentially and to iteratively refine the solution. The variance and resolution for tomography are determined empirically using synthetic structures.
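The flavor of such an iterative backprojection scheme can be shown on a synthetic structure, the same empirical strategy the chapter uses to gauge variance and resolution. The grid size, ray geometry, and Landweber-style update below are assumptions for illustration, not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_rays = 25, 120
G = rng.uniform(0.0, 1.0, size=(n_rays, n_cells))         # toy ray path lengths per cell
s_true = 1.0 / 6.0 + 0.01 * rng.standard_normal(n_cells)  # slowness near 1/6 s/km
t_obs = G @ s_true                                        # synthetic travel times

# iterative backprojection: repeatedly map travel-time residuals back along the rays
s = np.full(n_cells, 1.0 / 6.0)                           # starting model
step = 1.0 / np.linalg.norm(G, 2) ** 2                    # stable step size
for _ in range(5000):
    s += step * (G.T @ (t_obs - G @ s))

rms = np.sqrt(np.mean((s - s_true) ** 2))
print(f"model RMS error: {rms:.2e}")
```

Unlike forming and inverting the full normal equations, each pass touches the data sequentially, which is what sidesteps the size limits of formal inverse theory.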

The Pg results spectacularly image the San Andreas Fault, the Garlock Fault and the San Jacinto Fault. The Mojave has slower velocities near 6.0 km/s while the Peninsular Ranges have higher velocities of over 6.5 km/s. The San Jacinto block has velocities only slightly above the Mojave velocities. It may have overthrust Mojave rocks. Surprisingly, the Transverse Ranges are not apparent at Pg depths. The batholiths in these mountains are possibly only surficial.

Pn velocities are fast in the Mojave, slow in the Southern California Peninsular Ranges, and slow north of the Garlock Fault. Pn anisotropy of 2% with a NWW fast direction exists in Southern California. A region of thin crust (22 km) centers around the Colorado River, where the crust has undergone basin and range type extension. Station delays reflect the Ventura and Los Angeles Basins but not the Salton Trough, where high velocity rocks underlie the sediments. The Transverse Ranges have a root in their eastern half but not in their western half. The Southern Coast Ranges also have a thickened crust but the Peninsular Ranges have no major root.

Relevance: 10.00%

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
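The random inputs are modeled by an Ornstein-Uhlenbeck process, whose exact discrete-time update and stationary variance σ²/(2θ) can be checked with a quick Monte Carlo sketch (the rate, noise strength, and step size below are illustrative, not the thesis's values):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, dt = 1.0, 0.5, 0.01     # mean-reversion rate, noise strength, time step
n_steps, n_paths = 1000, 2000         # 10 relaxation times, independent sample paths

# exact discretization of dx = -theta * x dt + sigma dW
decay = np.exp(-theta * dt)
noise_std = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))

x = np.zeros(n_paths)                 # all paths start at the nominal trajectory
for _ in range(n_steps):
    x = decay * x + noise_std * rng.standard_normal(n_paths)

print(f"empirical variance:  {x.var():.4f}")
print(f"stationary variance: {sigma**2 / (2 * theta):.4f}")
```

The same Monte Carlo machinery, wrapped around a filter and controller, is how guidance schemes like the one in the thesis are compared in simulation.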

Relevance: 10.00%

Abstract:

The ability to regulate gene expression is of central importance for the adaptability of living organisms to changes in their internal and external environment. At the transcriptional level, binding of transcription factors (TFs) in the vicinity of promoters can modulate the rate at which transcripts are produced, and as such plays an important role in gene regulation. TFs with regulatory action at multiple promoters are the rule rather than the exception, with examples ranging from TFs like the cAMP receptor protein (CRP) in E. coli that regulates hundreds of different genes, to situations involving multiple copies of the same gene, such as on plasmids or viral DNA. When the number of TFs greatly exceeds the number of binding sites, TF binding to each promoter can be regarded as independent. However, when the number of TF molecules is comparable to the number of binding sites, TF titration will result in coupling ("entanglement") between transcription of different genes. The last few decades have seen rapid advances in our ability to quantitatively measure such effects, which calls for biophysical models to explain these data. Here we develop a statistical mechanical model which takes the TF titration effect into account and use it to predict both the level of gene expression and the resulting correlation in transcription rates for a general set of promoters. To test these predictions experimentally, we create genetic constructs with known TF copy number, binding site affinities, and gene copy number, thereby avoiding the need for free fit parameters. Our results clearly demonstrate the TF titration effect and show that the statistical mechanical model accurately predicts the fold change in gene expression for the studied cases. We also generalize these experimental efforts to cover systems with multiple different genes, using the method of mRNA fluorescence in situ hybridization (FISH).
Interestingly, we can use the TF titration effect as a tool to measure the plasmid copy number at different points in the cell cycle, as well as the plasmid copy number variance. Finally, we investigate the strategies of transcriptional regulation used in a real organism by analyzing the thousands of known regulatory interactions in E. coli. We introduce a "random promoter architecture model" to identify overrepresented regulatory strategies, such as TF pairs which coregulate the same genes more frequently than would be expected by chance, indicating a related biological function. Furthermore, we investigate whether promoter architecture has a systematic effect on gene expression by linking the regulatory data of E. coli to genome-wide expression censuses.
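The titration effect itself follows from a small canonical-ensemble calculation: TFs partition between specific binding sites and a nonspecific genomic background, so adding binding sites (gene copies) depletes the shared pool. The sketch below is a toy version with assumed copy numbers, binding weight, and background size, not the thesis's fitted model:

```python
import numpy as np
from math import comb

def mean_occupancy(n_tf, n_sites, w, n_ns=10_000):
    """Mean occupancy per specific site when n_tf TF copies partition between
    n_sites identical specific sites (Boltzmann weight w each, relative to a
    nonspecific site) and n_ns nonspecific background sites."""
    ks = np.arange(min(n_tf, n_sites) + 1)          # number of specifically bound TFs
    weights = np.array([comb(n_sites, k) * comb(n_ns, n_tf - k) * w**k
                        for k in ks], dtype=float)  # canonical partition-function terms
    p = weights / weights.sum()
    return float((ks * p).sum()) / n_sites

# titration: raising the binding-site copy number depletes the shared TF pool
occ_few = mean_occupancy(50, 10, 1000)    # few sites: high occupancy
occ_many = mean_occupancy(50, 200, 1000)  # many sites: occupancy capped by TF number
print(f"occupancy, 10 sites: {occ_few:.2f}; 200 sites: {occ_many:.2f}")
```

With more sites than TFs, occupancy is capped near n_tf/n_sites regardless of affinity, which is the promoter-promoter coupling ("entanglement") in miniature.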

Relevance: 10.00%

Abstract:

Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, a large number of small residential loads such as air conditioners, dishwashers, and electric vehicles are expected to participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time, and thus be used (in aggregate) to compensate for the random fluctuations in renewable generation.

In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model predictive in nature, i.e., at every time step it minimizes the expected variance-to-go given updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
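In the offline limit of perfect predictions, minimizing aggregate-load variance reduces to classic valley-filling: deferrable consumption is poured into the low-net-load slots up to a common "water level". The sketch below illustrates that limit with made-up numbers and a simple bisection; the thesis's algorithm is the online, model-predictive version of this idea:

```python
import numpy as np

def valley_fill(base, energy, tol=1e-9):
    """Allocate `energy` across time slots (nonnegative rates) so that the
    variance of base + allocation is minimized: the valley-filling solution."""
    lo, hi = base.min(), base.max() + energy   # bracket the water level
    while hi - lo > tol:                       # bisect on the level
        level = 0.5 * (lo + hi)
        if np.clip(level - base, 0.0, None).sum() > energy:
            hi = level
        else:
            lo = level
    return np.clip(0.5 * (lo + hi) - base, 0.0, None)

base = np.array([5.0, 3.0, 1.0, 2.0, 4.0])    # net load (load minus renewables), illustrative
u = valley_fill(base, 4.0)                    # 4 units of deferrable energy to place
print(np.round(base + u, 3))                  # valleys filled toward a flat profile
```

Because the total deferrable energy is fixed, the mean of the aggregate load is fixed too, so minimizing the sum of squares (water-filling) and minimizing the variance coincide.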

Relevance: 10.00%

Abstract:

This work quantifies the nature of delays in genetic regulatory networks and their effect on system dynamics. It is known that a time lag can emerge from a sequence of biochemical reactions. Applying this modeling framework to protein production processes, delay distributions are derived in both a stochastic setting (as a probability density function) and a deterministic setting (as an impulse function), and the two are shown to be equivalent under appropriate assumptions. The dependence of the distribution properties on rate constants, gene length, and time-varying temperatures is investigated. Overall, the distribution of the delay in the context of protein production processes is shown to be highly dependent on the size of the genes and mRNA strands as well as the reaction rates. Results suggest that longer genes have delay distributions with a smaller relative variance, and hence less uncertainty in the completion times; however, they also lead to larger delays. On the other hand, large uncertainties may actually play a positive role, as broader distributions can lead to larger stability regions when this formalization of the protein production delays is incorporated into a feedback system.
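The headline scaling is easy to check: the completion time of n sequential exponential steps is Erlang-distributed with mean n/k and variance n/k², so the relative variance falls as 1/n while the mean delay grows with n. A quick sketch with assumed step counts and rates:

```python
import numpy as np

rng = np.random.default_rng(4)

def transcription_delay(n_steps, rate, n_samples=20_000):
    """Total completion time of n_steps sequential exponential reactions:
    an Erlang distribution with mean n/k and variance n/k^2, so CV^2 = 1/n."""
    return rng.exponential(1.0 / rate, size=(n_samples, n_steps)).sum(axis=1)

short_gene = transcription_delay(50, rate=25.0)    # a "short gene" (illustrative step count)
long_gene = transcription_delay(500, rate=25.0)    # a 10x longer gene, same step rate

cv2_short = short_gene.var() / short_gene.mean() ** 2   # ~1/50
cv2_long = long_gene.var() / long_gene.mean() ** 2      # ~1/500
print(f"short: mean delay {short_gene.mean():.2f}, relative variance {cv2_short:.4f}")
print(f"long:  mean delay {long_gene.mean():.2f}, relative variance {cv2_long:.4f}")
```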

Furthermore, evidence suggests that delays may serve as an explicit design element in existing control mechanisms. Accordingly, the recurring dual-feedback motif is also investigated with delays incorporated into the feedback channels. The dual-delayed feedback is shown to have stabilizing effects through a control theoretic approach. Lastly, a distributed-delay-based controller design method is proposed as a potential design tool. In a preliminary study, the dual-delayed feedback system re-emerges as an effective controller design.


Relevance: 10.00%

Abstract:

Part 1 of this thesis is about the 24 November 1987 Superstition Hills earthquakes. The Superstition Hills earthquakes occurred in the western Imperial Valley in southern California. The earthquakes took place on a conjugate fault system consisting of the northwest-striking right-lateral Superstition Hills fault and the previously unknown Elmore Ranch fault, a northeast-striking left-lateral structure defined by surface rupture and a lineation of hypocenters. The earthquake sequence consisted of foreshocks, the M_s 6.2 first main shock, and aftershocks on the Elmore Ranch fault, followed by the M_s 6.6 second main shock and aftershocks on the Superstition Hills fault. There was dramatic surface rupture along the Superstition Hills fault in three segments: the northern segment, the southern segment, and the Wienert fault.

In Chapter 2, M_L≥4.0 earthquakes from 1945 to 1971 that have Caltech catalog locations near the 1987 sequence are relocated. It is found that none of the relocated earthquakes occur on the southern segment of the Superstition Hills fault and many occur at the intersection of the Superstition Hills and Elmore Ranch faults. Also, some other northeast-striking faults may have been active during that time.

Chapter 3 discusses the Superstition Hills earthquake sequence using data from the Caltech-U.S.G.S. southern California seismic array. The earthquakes are relocated and their distribution correlated to the type and arrangement of the basement rocks. The larger earthquakes occur only where continental crystalline basement rocks are present. The northern segment of the Superstition Hills fault has more aftershocks than the southern segment.

In Chapter 4, long-period teleseismic data from the second main shock of the 1987 sequence, on the Superstition Hills fault, are inverted. Most of the long-period seismic energy seen teleseismically is radiated from the southern segment of the Superstition Hills fault. The fault dip is near vertical along the northern segment of the fault and steeply southwest-dipping along the southern segment.

Chapter 5 is a field study of slip and afterslip measurements made along the Superstition Hills fault following the second mainshock. Slip and afterslip measurements were started only two hours after the earthquake. In some locations, afterslip more than doubled the coseismic slip. The northern and southern segments of the Superstition Hills fault differ in the proportion of coseismic and postseismic slip to the total slip.

The northern segment of the Superstition Hills fault had more aftershocks, more historic earthquakes, released less teleseismic energy, and had a smaller proportion of afterslip to total slip than the southern segment. The boundary between the two segments lies at a step in the basement that separates a deeper metasedimentary basement to the south from a shallower crystalline basement to the north.

Part 2 of the thesis deals with the three-dimensional velocity structure of southern California. In Chapter 7, an a priori three-dimensional crustal velocity model is constructed by partitioning southern California into geologic provinces, with each province having a consistent one-dimensional velocity structure. The one-dimensional velocity structures of the provinces were then assembled into a three-dimensional model, which was calibrated by forward modeling of explosion travel times.

In Chapter 8, the three-dimensional velocity model is used to locate earthquakes. For about 1000 earthquakes relocated in the Los Angeles basin, the three-dimensional model yields a travel time residual variance 47 per cent lower than that of the catalog locations found using a standard one-dimensional velocity model. Other than the 1987 Whittier earthquake sequence, little correspondence is seen between these earthquake locations and elements of a recent structural cross section of the Los Angeles basin. The Whittier sequence involved rupture of a north-dipping thrust fault bounded on at least one side by a strike-slip fault. The 1988 Pasadena earthquake was a deep left-lateral event on the Raymond fault. The 1989 Montebello earthquake was a thrust event on a structure similar to that on which the Whittier earthquake occurred. The 1989 Malibu earthquake was a thrust or oblique-slip event adjacent to the 1979 Malibu earthquake.

At least two of the largest recent thrust earthquakes (San Fernando and Whittier) in the Los Angeles basin have had the extent of their thrust plane ruptures limited by strike-slip faults. This suggests that the buried thrust faults underlying the Los Angeles basin are segmented by strike-slip faults.

Earthquake and explosion travel times are inverted for the three-dimensional velocity structure of southern California in Chapter 9. The inversion reduced the variance of the travel time residuals by 47 per cent compared to the starting model, a reparameterized version of the forward model of Chapter 7. The Los Angeles basin is well resolved, with seismically slow sediments atop a crust of granitic velocities. Moho depth is between 26 and 32 km.