17 results for Problem Resolution

in CaltechTHESIS


Relevance:

30.00%

Publisher:

Abstract:

The resolution of the so-called thermodynamic paradox is presented in this paper. It is shown, in direct contradiction to the results of several previously published papers, that the cutoff modes (evanescent modes having complex propagation constants) can carry power in a waveguide containing ferrite. The errors in all previous “proofs” which purport to show that the cutoff modes cannot carry power are uncovered. The boundary value problem underlying the paradox is studied in detail; it is shown that, although the solution is somewhat complicated, there is nothing paradoxical about it.

The general problem of electromagnetic wave propagation through rectangular guides filled inhomogeneously in cross-section with transversely magnetized ferrite is also studied. Application of the standard waveguide techniques reduces the TM part to the well-known self-adjoint Sturm Liouville eigenvalue equation. The TE part, however, leads in general to a non-self-adjoint eigenvalue equation. This equation and the associated expansion problem are studied in detail. Expansion coefficients and actual fields are determined for a particular problem.

Relevance:

20.00%

Publisher:

Abstract:

Abstract to Part I

The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting S-to-P amplitude ratios. Effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful in solving for spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.
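The iterative back-projection idea can be sketched numerically. The following is a hedged illustration, not the thesis's implementation: each ray accumulates a path attenuation t* that is linear in the cell values of 1/Q, and data residuals are smeared back along the ray paths (a SIRT-style update). The matrix `L` and values below are made up for illustration.

```python
import numpy as np

# Minimal sketch of iterative back-projection for attenuation
# tomography.  Each ray i accumulates t*_i = sum_j L[i, j] * q[j],
# where L[i, j] is the (velocity-normalized) path length of ray i in
# cell j and q[j] = 1/Q_j.  Residuals are back-projected along rays.

def back_project(L, t_obs, n_iter=200):
    """L: (n_rays, n_cells) path-length matrix; t_obs: observed t* per ray."""
    q = np.zeros(L.shape[1])
    row = L.sum(axis=1)                   # total path length per ray
    col = L.sum(axis=0)                   # total path length per cell
    for _ in range(n_iter):
        resid = t_obs - L @ q             # data misfit per ray
        q += (L.T @ (resid / row)) / col  # normalized back-projection
    return q

# Tiny synthetic example: three rays crossing two cells.
L = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
q_true = np.array([0.1, 0.2])             # 1/Q per cell
q_est = back_project(L, L @ q_true)
print(q_est)                              # converges toward q_true
```

The update is the standard simultaneous iterative reconstruction (SIRT) step; for a consistent, well-covered system it converges toward the true cell values.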

Back-projection attenuation tomography is applied to two cases in southern California: Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than in the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.

No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.

Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, the East Rift Zone, and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error that is due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test performed at the location of the chamber.

Abstract to Part II

Long-period seismograms recorded at Pasadena from earthquakes occurring along a profile to Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward-modeling exercise to derive the Green's functions (SH displacements at Pasadena due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both the timing and the waveforms of the records are matched by synthetics calculated from two-dimensional velocity models. The best two-dimensional section begins at Imperial Valley with a thin crust containing the basin structure and thickens toward Pasadena. The detailed nature of the transition zone at the base of the crust controls the early-arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer-period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.

Relevance:

20.00%

Publisher:

Abstract:

The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:

i) the mean exit time

ii) the phase-space distribution of exit locations.

When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.

Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.

The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
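The Monte Carlo check described above can be sketched with a simple model system. The scalar system below is an illustrative assumption, not one of the thesis's systems: the drift confines the phase point near x = 0, and small Gaussian white noise eventually drives it out of (-1, 1) "against the flow". Euler-Maruyama paths estimate the mean exit time.

```python
import numpy as np

# Illustrative model: dx = -x dt + sqrt(2*eps) dW, exit from (-1, 1).
# All paths start at the stable point x = 0; noise alone causes exit.

def mean_exit_time(eps, n_paths=300, dt=1e-3, boundary=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    t_exit = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    t = 0.0
    while alive.any():
        n = int(alive.sum())
        # Euler-Maruyama step for the still-confined paths
        x[alive] += -x[alive] * dt + np.sqrt(2 * eps * dt) * rng.standard_normal(n)
        t += dt
        escaped = alive & (np.abs(x) >= boundary)
        t_exit[escaped] = t
        alive[escaped] = False
    return t_exit.mean()

print(mean_exit_time(1.0), mean_exit_time(0.25))
```

As the noise strength eps shrinks, the mean exit time grows rapidly (Kramers-type scaling), which is what makes the singular-perturbation asymptotics relevant.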

Relevance:

20.00%

Publisher:

Abstract:

Measuring electrical activity in large numbers of cells with high spatial and temporal resolution is a fundamental problem for the study of neural development and information processing. To address this problem, we have constructed FlaSh: a novel, genetically encoded probe that can be used to measure trans-membrane voltage in single cells. We fused a modified green fluorescent protein (GFP) into a voltage-sensitive potassium channel so that voltage-dependent rearrangements in the potassium channel induce changes in the fluorescence of GFP. A voltage sensor encoded in DNA has the advantage that it may be introduced into an organism non-invasively and targeted to specific developmental stages, brain regions, cell types, and sub-cellular compartments.

We also describe modifications to FlaSh that shift its color, kinetics, and dynamic range. We used multiple green fluorescent proteins to produce variants of the FlaSh sensor that generate ratiometric signal output via fluorescence resonance energy transfer (FRET). Finally, we describe initial work toward FlaSh variants that are sensitive to G-protein coupled receptor (GPCR) activation. These sensors can be used to design functional assays for receptor activation in living cells.

Relevance:

20.00%

Publisher:

Abstract:

We consider the following singularly perturbed linear two-point boundary-value problem:

Ly(x) ≡ Ω(ε)D_x y(x) − A(x,ε)y(x) = f(x,ε),  0 ≤ x ≤ 1  (1a)

By ≡ L(ε)y(0) + R(ε)y(1) = g(ε),  ε → 0^+  (1b)

Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and whose last m elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, and g, we assume the lower right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions, a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0^+ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a), together with initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
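The boundary-layer behavior can be seen in a scalar toy problem. The example below is an assumption for illustration, not the system studied above: eps·y'(x) = -y(x) + x with y(0) = 1 has the exact solution y(x) = x - eps + (1 + eps)·exp(-x/eps), and formally setting eps = 0 gives the reduced solution y0(x) = x.

```python
import numpy as np

# Exact solution of the scalar toy problem eps*y' = -y + x, y(0) = 1.
# Away from the boundary layer at x = 0, it converges to the reduced
# solution y0(x) = x as eps -> 0+.

def exact(x, eps):
    return x - eps + (1.0 + eps) * np.exp(-x / eps)

x = np.linspace(0.2, 1.0, 9)               # closed subinterval avoiding the layer
errs = [np.max(np.abs(exact(x, e) - x)) for e in (1e-1, 1e-2, 1e-3)]
print(errs)                                 # error shrinks as eps -> 0+
```

On [0.2, 1] the maximum deviation from the reduced solution decreases roughly in proportion to eps, while near x = 0 the full solution must climb from the layer value to satisfy y(0) = 1.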

A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b), it is shown that for stepsizes much larger than ε the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.

Furthermore, the existence of a similarity transformation that block-diagonalizes a matrix is presented, as well as exponential bounds on certain fundamental solution matrices associated with problem (1).

Relevance:

20.00%

Publisher:

Abstract:

The majority of young, low-mass stars are surrounded by optically thick accretion disks. These circumstellar disks provide large reservoirs of gas and dust that will eventually be transformed into planetary systems. Theory and observations suggest that the earliest stage toward planet formation in a protoplanetary disk is the growth of particles, from sub-micron-sized grains to centimeter-sized pebbles. Theory indicates that small interstellar grains are well coupled to the gas and are incorporated into the disk during the proto-stellar collapse. These dust particles settle toward the disk mid-plane and simultaneously grow through collisional coagulation on a very short timescale. Observationally, grain growth can be inferred by measuring the spectral energy distribution at long wavelengths, which traces the continuum dust emission spectrum and hence the dust opacity. Several observational studies have indicated that the dust component in protoplanetary disks has evolved relative to interstellar medium dust particles, suggesting at least 4 orders of magnitude in particle-size growth. However, the limited angular resolution and poor sensitivity of previous observations have not allowed for further exploration of this astrophysical process.

As part of my thesis, I embarked on an observational program to search for evidence of radial variations in the dust properties across a protoplanetary disk, which may be indicative of grain growth. By making use of high angular resolution observations obtained with CARMA, VLA, and SMA, I searched for radial variations in the dust opacity inside protoplanetary disks. These observations span more than an order of magnitude in wavelength (from sub-millimeter to centimeter wavelengths) and attain spatial resolutions down to 20 AU. I characterized the radial distribution of the circumstellar material and constrained radial variations of the dust opacity spectral index, which may originate from particle growth in these circumstellar disks. Furthermore, I compared these observational constraints with simple physical models of grain evolution that include collisional coagulation, fragmentation, and the interaction of these grains with the gaseous disk (the radial drift problem). For the parameters explored, these observational constraints are in agreement with a population of grains limited in size by radial drift. Finally, I also discuss future endeavors with forthcoming ALMA observations.
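The spectral-index diagnostic behind this kind of measurement can be sketched in a few lines. This is a generic textbook relation, not the thesis's analysis, and the flux values below are invented for illustration: for optically thin dust emission in the Rayleigh-Jeans regime, F_ν ∝ ν^α with α ≈ 2 + β, where β is the dust opacity spectral index (β ≈ 1.7 for ISM-like grains, while β < 1 is commonly taken as a sign of grain growth).

```python
import math

def spectral_index(f1, nu1, f2, nu2):
    """Spectral index alpha from two flux densities, assuming F ∝ nu^alpha."""
    return math.log(f2 / f1) / math.log(nu2 / nu1)

# Hypothetical fluxes: 1.0 Jy at 100 GHz and 12.0 Jy at 300 GHz.
alpha = spectral_index(1.0, 100e9, 12.0, 300e9)
beta = alpha - 2.0          # optically thin, Rayleigh-Jeans approximation
print(round(alpha, 2), round(beta, 2))
```

A measured β well below the ISM value at some disk radius is the kind of radial opacity variation the observations above are designed to constrain.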

Relevance:

20.00%

Publisher:

Abstract:

Motivated by needs in molecular diagnostics and advances in microfabrication, researchers have turned to microfluidic technology, as it provides approaches that achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digitized signal. The most commonly used example of this conversion is digital PCR, where, by counting the number of reacted compartments (triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
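The Poisson correction behind digital quantification can be written out directly. This is the generic formula, not this dissertation's specific assay: a positive compartment only indicates "one or more targets", so the fraction of negative compartments e^(-λ) fixes the mean occupancy λ.

```python
import math

def digital_count(n_total, n_positive):
    """Estimate total input targets from a digital assay readout."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)    # mean targets per compartment (Poisson)
    return lam * n_total        # estimated total input targets

# 632 positives out of 1000 compartments -> roughly 1000 input copies,
# noticeably more than the naive count of 632, because some positive
# compartments held more than one target.
print(round(digital_count(1000, 632)))
```

The correction matters most near saturation: as the positive fraction approaches 1, the estimate (and its uncertainty) grows sharply, which bounds the usable dynamic range of a digital assay.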

However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy is examined from two angles: efficiency and robustness. The former is a critical factor in ensuring the accuracy of absolute quantification methods, and the latter is the premise for such technology to be practically implemented in diagnosis beyond the laboratory. The two angles are further framed into a “fate” and “rate” determination scheme, where the influence of each parameter is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms are used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.

This dissertation also contributes toward developing point-of-care (POC) tests in limited-resource settings. On one hand, it makes the tests more accessible by incorporating massively producible, low-cost plastic materials and by integrating new features that allow instant result acquisition and feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns such as cystatin C quantification, HIV/HCV detection and treatment monitoring, and HCV genotyping.

Relevance:

20.00%

Publisher:

Abstract:

Protein structure prediction has remained a major challenge in structural biology for more than half a century. Accelerated and cost-efficient sequencing technologies have allowed researchers to sequence new organisms and discover new protein sequences. Novel protein structure prediction technologies will allow researchers to study the structure of proteins, determine their roles in the underlying biological processes, and develop novel therapeutics.

The difficulty of the problem is twofold: (a) describing the energy landscape that corresponds to the protein structure, commonly referred to as the force-field problem; and (b) sampling the energy landscape to find the lowest-energy configuration, which is hypothesized to be the native state of the structure in solution. The two problems are interwoven and must be solved simultaneously. This thesis comprises three major contributions. In the first chapter we describe a novel high-resolution protein structure refinement algorithm called GRID. In the second chapter we present REMCGRID, an algorithm for generating low-energy decoy sets. In the third chapter, we present a machine learning approach to ranking decoys by incorporating coarse-grained features of protein structures.

Relevance:

20.00%

Publisher:

Abstract:

The final object of this research was to prepare m-nitrobenzoyl malic acid and to separate it, if possible, into the four stereoisomers predicted by Huggins' theory of the benzene ring. Inasmuch as the quantity of m-nitrobenzoyl chloride available was limited, it was thought better first to prepare i-benzoyl malic acid and then attempt to resolve it. The resolution of m-nitrobenzoyl malic acid could probably be accomplished by a similar method.

Relevance:

20.00%

Publisher:

Abstract:

Techniques are described for mounting and visualizing biological macromolecules for high resolution electron microscopy. Standard techniques are included in a discussion of new methods designed to provide the highest structural resolution. Methods are also discussed for handling samples on the grid, for making accurate size measurements at the 20 Å level, and for photographically enhancing image contrast.

The application of these techniques to the study of the binding of DNA polymerase to DNA is described. It is shown that the electron micrographs of this material are in agreement with the model proposed by Dr. Arthur Kornberg. A model is described which locates several active sites on the enzyme.

The chromosomal material of the protozoan Tetrahymena has been isolated and characterized by biochemical techniques and by electron microscopy. This material is shown to be typical of the chromatin of higher organisms.

Comparison with other chromatins discloses that the genome of Tetrahymena is highly template-active and has a relatively simple genetic construction.

High resolution electron microscope procedures developed in this work have been combined with standard biochemical techniques to give a comprehensive picture of the structure of interphase chromosome fibers. The distribution of the chromosomal proteins along its DNA is discussed.

Relevance:

20.00%

Publisher:

Abstract:

Methods of filtering an n.m.r. spectrum that can improve the resolution by as much as a factor of ten are examined. They include linear filters based on an information-theory approach and nonlinear filters based on a statistical approach. The appropriate filter is determined by the nature of the problem. Once programmed on a digital computer, both are simple to use.
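One common linear resolution-enhancement scheme can be sketched as follows. This is a hypothetical illustration, not necessarily either of the filters examined above: weighting the time-domain signal (FID) by a rising exponential cancels the natural decay, narrowing Lorentzian lines at the price of amplified noise at long acquisition times.

```python
import numpy as np

def resolution_enhance(spectrum, decay_rate, dt):
    """Undo an exp(-decay_rate * t) envelope on the FID underlying a spectrum."""
    fid = np.fft.ifft(spectrum)              # spectrum -> time domain
    t = np.arange(fid.size) * dt
    return np.fft.fft(fid * np.exp(decay_rate * t))

# Synthetic Lorentzian line: a decaying complex oscillation on bin 50.
dt, n = 1e-3, 1024
t = np.arange(n) * dt
fid = np.exp(-5.0 * t) * np.exp(2j * np.pi * 50 * np.arange(n) / n)
spec = np.fft.fft(fid)
sharp = resolution_enhance(spec, 5.0, dt)
print(np.abs(spec).max() < np.abs(sharp).max())   # enhanced peak is taller
```

In noise-free data the filtered line becomes taller and narrower; with real data the exponent must be chosen to trade line narrowing against the noise amplified at the tail of the FID, which is where the statistical (nonlinear) approaches come in.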

These filters are applied to some examples from ¹³C and ¹⁵N n.m.r. spectra.

Relevance:

20.00%

Publisher:

Abstract:

No abstract.

Relevance:

20.00%

Publisher:

Abstract:

The present work deals with the problem of the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite, homogeneous, isotropic, nonmagnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e., interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple-scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ, the linear dimension of a particle a, and the linear dimension of the region occupied by the particles D. It is found that for constant λ/a, D is proportional to λ, and that |Δχ|, where Δχ is the difference in dielectric susceptibility between particle and medium, has to lie within a certain range.

The total scattered field is obtained as a series whose terms represent the corresponding multiple-scattering orders. The first term is the single-scattering term. The ensemble average of the total scattered intensity is then obtained as a series that involves no terms due to products between terms of different orders. Thus the waves corresponding to different orders are independent, and their Stokes parameters add.

The second- and third-order intensity terms are explicitly computed. The method used suggests a general approach for computing any order. It is found that in general the first-order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20). For large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.

The first-order polarization of the scattered wave is determined. The ensemble average of the Stokes parameters of the scattered wave is explicitly computed for the second order; a similar method can be applied to any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized, then the first-order scattered wave is elliptically polarized, but in the Θ = π/2 direction it is linearly polarized. If the incident wave is circularly polarized, the first-order scattered wave is elliptically polarized except in the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized, the first-order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, regardless of the polarization of the incident wave. However, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to those of the second order.

If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_im D, where k_im is the imaginary part of the wave vector k and D is a linear characteristic dimension of the region occupied by the particles. Thus moderately extended regions and small losses give (k_im D)² ≪ 1, and the lossy character of the medium does not alter the results of the lossless case. In general, the presence of losses tends to reduce the forward scattering.

Relevance:

20.00%

Publisher:

Abstract:

The Everett interpretation of quantum mechanics is an increasingly popular alternative to the traditional Copenhagen interpretation, but a few major issues prevent its widespread adoption. One of these issues is the origin of probabilities in the Everett interpretation, which this thesis will attempt to survey. The most successful resolution of the probability problem thus far is the decision-theoretic program, which attempts to frame probabilities as the outcomes of rational decision making. This marks a departure from orthodox interpretations of probability in the physical sciences, where probabilities are thought to be objective, stemming from symmetry considerations. This thesis will attempt to offer an evaluation of the decision-theoretic program.