10 results for FORMULATIONS

in CaltechTHESIS


Abstract:

The properties of capillary-gravity waves of permanent form on deep water are studied. Two different formulations of the problem are given. The theory of simple bifurcation is reviewed. For small-amplitude waves a formal perturbation series is used. The Wilton ripple phenomenon is reexamined and shown to be associated with a bifurcation in which a wave of permanent form can double its period. It is shown further that Wilton's ripples are a special case of a more general phenomenon in which bifurcation into subharmonics and factorial higher harmonics can occur. Numerical procedures for the calculation of waves of finite amplitude are developed. Bifurcation and limit lines are calculated. Pure and combination waves are continued to maximum amplitude. It is found that the height is limited in all cases by the surface enclosing one or more bubbles. Results for the shape of gravity waves are obtained by solving an integro-differential equation. It is found that the family of solutions giving the wave height or equivalent parameter has bifurcation points. Two bifurcation points and the branches emanating from them are found specifically, corresponding to a doubling and tripling of the wavelength. Solutions on the new branches are calculated.
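
For orientation, a classical background fact (not a result of this thesis): on deep water, a linear capillary-gravity wave of wavenumber k has phase speed

\[ c^{2}(k) \;=\; \frac{g}{k} + \frac{T k}{\rho}, \]

where T is the surface tension and ρ the density, and Wilton's ripples occur at the second-harmonic resonance c(k) = c(2k), i.e. at k = \sqrt{\rho g / (2T)}, where the fundamental and its second harmonic travel at the same speed. This degeneracy is the linear counterpart of the period-multiplying bifurcations examined above.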

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns is less than the number of equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (the number of unknowns exceeds the number of equations). However, in recent times, a large body of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
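
Schematically, and in the standard notation of the compressive-sensing literature rather than that of the thesis, sparsity turns such an underdetermined linear model into a well-posed recovery problem via l1 minimization (basis pursuit):

\[ \hat{x} \;=\; \arg\min_{x \in \mathbb{R}^{n}} \|x\|_{1} \quad \text{subject to} \quad y = A x, \qquad A \in \mathbb{R}^{m \times n}, \; m \ll n, \]

which recovers the sparsest consistent solution under suitable conditions on the measurement matrix A (e.g., incoherence or restricted isometry).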

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
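
A minimal sketch of the kind of sparse geometry involved (illustrative only, with hypothetical parameters; not the thesis code): a coprime pair of uniform subarrays produces a difference coarray with far more distinct lags than physical sensors, which is what correlation (second-order) priors exploit.

```python
import numpy as np

# Illustrative coprime-array sketch (hypothetical parameters, not thesis code).
M, N = 3, 5                           # coprime integers
subarray_1 = N * np.arange(M)         # sensors at 0, 5, 10 (in units of the base spacing)
subarray_2 = M * np.arange(2 * N)     # sensors at 0, 3, 6, ..., 27
sensors = np.unique(np.concatenate([subarray_1, subarray_2]))

# Difference coarray: all pairwise sensor separations. Correlation-based
# estimators effectively operate on these virtual lags, so more sources than
# physical sensors can be identified.
lags = np.unique((sensors[:, None] - sensors[None, :]).ravel())

print(f"{sensors.size} physical sensors -> {lags.size} distinct coarray lags")
```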

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
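
Schematically (a generic statement of the framework, in notation assumed here rather than taken from the thesis), OUQ seeks the tightest bound consistent with the available information about the unknown distribution:

\[ \overline{U}(\mathcal{A}) \;=\; \sup_{\mu \in \mathcal{A}} \; \mathbb{E}_{\mu}\!\left[ q(X) \right], \]

where q is the quantity of interest and \mathcal{A} is the set of probability measures compatible with the known constraints (ranges, moments, independence structure, etc.). Evaluating this supremum is, in general, the non-convex optimization problem referred to above.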

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
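
For reference, the classical bound being compared against is Hoeffding's inequality (a standard result, not derived in the thesis): for independent X_i with X_i \in [a_i, b_i],

\[ \mathbb{P}\!\left( \sum_{i=1}^{n} \left( X_i - \mathbb{E} X_i \right) \ge t \right) \;\le\; \exp\!\left( - \frac{2 t^{2}}{\sum_{i=1}^{n} (b_i - a_i)^{2}} \right), \]

and the sums-of-squares relaxations described above are reported to yield significantly tighter bounds in the same setting.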

Abstract:

The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N = 1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation. Only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N > 1 theories have been written using constrained superfields. No solutions of the nonlinear constraint equations were known.

In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N=1 Yang-Mills theory in four dimensions.
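
For orientation, the textbook N = 1 example (standard superspace material, not the linearization method introduced in this work): in four-dimensional N = 1 supersymmetric Yang-Mills theory the representation-preserving constraints on the gauge-covariant spinor derivatives are solved by a single unconstrained prepotential V, e.g. in the chiral representation

\[ \{\nabla_{\alpha}, \nabla_{\beta}\} = \{\bar{\nabla}_{\dot{\alpha}}, \bar{\nabla}_{\dot{\beta}}\} = 0 \quad \text{is solved by} \quad \bar{\nabla}_{\dot{\alpha}} = \bar{D}_{\dot{\alpha}}, \qquad \nabla_{\alpha} = e^{-V} D_{\alpha} \, e^{V}, \]

which is the prototype of the passage from constrained to unconstrained superfields discussed here.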

N=2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N=2 extended superspace. We construct a superfield calculus for six-dimensional superspace, and show that known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N=2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.

Abstract:

The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
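
A minimal sketch of the generic idea behind a summation rule (a toy Python illustration with hypothetical potentials and weights, not the thesis implementation): the total energy over all atoms is approximated by a weighted sum of site energies over a small set of sampling atoms.

```python
import numpy as np

def site_energy(xi, neighbor_positions):
    """Toy harmonic pair potential for one atom; a real QC code would use an
    interatomic potential such as Lennard-Jones or EAM."""
    r = np.linalg.norm(neighbor_positions - xi, axis=1)
    return 0.5 * np.sum((r - 1.0) ** 2)

def summed_energy(positions, sample_atoms, weights, neighbors):
    """Summation rule: E_total ~= sum_i w_i * E_i over sampling atoms i,
    instead of summing the site energy of every atom in the ensemble."""
    return sum(
        w * site_energy(positions[i], positions[neighbors[i]])
        for i, w in zip(sample_atoms, weights)
    )

# Hypothetical usage: four sampling atoms standing in for a much larger ensemble.
rng = np.random.default_rng(0)
positions = rng.random((1000, 3)) * 10.0
sample_atoms = [0, 10, 200, 999]
weights = [250.0, 250.0, 250.0, 250.0]                  # each represents ~250 atoms
neighbors = {i: np.arange(20) for i in sample_atoms}    # placeholder neighbor lists
print(summed_energy(positions, sample_atoms, weights, neighbors))
```

The quality of a summation rule lies in how the sampling atoms and weights are chosen; the new rules described above are constructed so that affine deformations are integrated accurately and spurious force artifacts vanish in the large-element limit.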

Abstract:

Let F(θ) be a separable extension of degree n of a field F. Let Δ and D be integral domains with quotient fields F(θ) and F respectively. Assume that Δ ⊇ D. A mapping φ of Δ into the n × n D matrices is called a Δ/D rep if (i) it is a ring isomorphism and (ii) it maps d onto dIn whenever d ∈ D. If the matrices are also symmetric, φ is a Δ/D symrep.

Every Δ/D rep can be extended uniquely to an F(θ)/F rep. This extension is completely determined by the image of θ. Two Δ/D reps are called equivalent if the images of θ differ by a D unimodular similarity. There is a one-to-one correspondence between classes of Δ/D reps and classes of Δ ideals having an n element basis over D.

The condition that a given Δ/D rep class contain a Δ/D symrep can be phrased in various ways. Using these formulations it is possible to (i) bound the number of symreps in a given class, (ii) count the number of symreps if F is finite, (iii) establish the existence of an F(θ)/F symrep when n is odd, F is an algebraic number field, and F(θ) is totally real if F is formally real (for n = 3 see Sapiro, “Characteristic polynomials of symmetric matrices” Sibirsk. Mat. Ž. 3 (1962) pp. 280-291), and (iv) study the case D = Z, the integers (see Taussky, “On matrix classes corresponding to an ideal and its inverse” Illinois J. Math. 1 (1957) pp. 108-113 and Faddeev, “On the characteristic equations of rational symmetric matrices” Dokl. Akad. Nauk SSSR 58 (1947) pp. 753-754).

The case D = Z and n = 2 is studied in detail. Let Δ’ be an integral domain also having quotient field F(θ) and such that Δ’ ⊇ Δ. Let φ be a Δ/Z symrep. A method is given for finding a Δ’/Z symrep ʘ such that the Δ’ ideal class corresponding to the class of ʘ is an extension to Δ’ of the Δ ideal class corresponding to the class of φ. The problem of finding all Δ/Z symreps equivalent to a given one is studied.
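
An illustrative example of the n = 2, D = Z case (constructed here for concreteness; it is not taken from the thesis): for Δ = Z[√5] with quotient field F(θ) = Q(√5), the map

\[ \varphi\!\left(a + b\sqrt{5}\right) \;=\; a I_{2} + b \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}, \qquad a, b \in \mathbb{Z}, \]

is a Δ/Z symrep: the symmetric matrix representing √5 squares to 5I_2, so φ is a ring isomorphism onto its image, and every integer d maps to dI_2.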

Abstract:

We have sought to determine the nature of the free-radical precursors to ring-opened hydrocarbon 5 and ring-closed hydrocarbon 6. Reasonable alternative formulations involve the postulation of hydrogen abstraction (a) by a pair of rapidly equilibrating classical radicals (the ring-opened allylcarbinyl-type radical 3 and the ring-closed cyclopropylcarbinyl-type 4), or (b) by a nonclassical radical such as homoallylic radical 7.

[Figure not reproduced.]

Entry to the radical system is gained via degassed thermal decomposition of peresters having the ring-opened and the ring-closed structures. The ratio of 6:5 is essentially independent of the hydrogen donor concentration for decomposition of the former at 125° in the presence of triethyltin hydride. A deuterium labeling study showed that the α and β methylene groups in 3 (or the equivalent) are rapidly interchanged under these conditions.

Existence of two (or more) product-forming intermediates is indicated (a) by dependence of the ratio 6:5 on the tin hydride concentration for decomposition of the ring-closed perester at 10 and 35°, and (b) by formation of cage products having largely or wholly the structure (ring-opened or ring-closed) of the starting perester.

Relative rates of hydrogen abstraction by 3 could be inferred by comparison of ratios of rate constants for hydrogen abstraction and ortho-ring cyclization:

[Figure not reproduced.]

At 100° values of ka/kr are 0.14 for hydrogen abstraction from 1,4-cyclohexadiene and 7 for abstraction from triethyltin hydride. The ratio 6:5 at the same temperature is ~0.0035 for hydrogen abstraction from 1,4-cyclohexadiene, ~0.078 for abstraction from the tin hydride, and ≥ 5 for abstraction from cyclohexadienyl radicals. These data indicate that abstraction of hydrogen from triethyltin hydride is more rapid than from 1,4-cyclohexadiene by a factor of ~1000 for 4, but only ~50 for 3.
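
One way to recover the quoted selectivity factors from these numbers (an illustrative back-calculation, not the thesis's own presentation): for 3 the donor selectivity follows directly from the two ka/kr values, and for 4 it can be estimated by scaling that factor by the change in the 6:5 ratio between the two donors,

\[ \textbf{3:}\quad \frac{7}{0.14} = 50; \qquad \textbf{4:}\quad \frac{0.078}{0.0035} \times 50 \;\approx\; 1.1 \times 10^{3} \;\approx\; 1000. \]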

Measurements of product ratios at several temperatures allowed the construction of an approximate energy-level scheme. A major inference is that isomerization of 3 to 4 is exothermic by 8 ± 3 kcal/mole, in good agreement with expectations based on bond dissociation energies. Absolute rate-constant estimates are also given.

The results are nicely compatible with a classical-radical mechanism, but an attempted interpretation of the product ratios, even those formed from equilibrated radical intermediates, in terms of a nonclassical radical precursor leads, it is argued, to serious difficulties.

The roles played by hydrogen abstraction from 1,4-cyclohexadiene and from the derived cyclohexadienyl radicals were probed by fitting observed ratios of 6:5 and 5:10 in the least-squares sense to expressions derived for a complex mechanistic scheme. Some 30 to 40 measurements on each product ratio, obtained under a variety of experimental conditions, could be fit with an average deviation of ~6%. Significant systematic deviations were found, but these could largely be redressed by assuming (a) that the rate constant for reaction of 4 with cyclohexadienyl radical is inversely proportional to the viscosity of the medium (i.e., is diffusion-controlled), and (b) that ka/kr for hydrogen abstraction from 1,4-cyclohexadiene depends slightly on the composition of the medium. An average deviation of 4.4% was thereby attained.

Degassed thermal decomposition of the ring-opened perester in the presence of the triethyltin hydride occurs primarily by attack on perester of triethyltin radicals, presumably at the –O-O- bond, even at 0.01 M tin hydride at 100 and 125°. Tin ester and tin ether are apparently formed in closely similar amounts under these conditions, but the tin ester predominates at room temperature in the companion air-induced decomposition, indicating that attack on perester to give the tin ether requires an activation energy approximately 5 kcal/mole in excess of that for the formation of tin ester.

Abstract:

The thesis is divided into two parts. Part I generalizes a self-consistent calculation of residue shifts from SU3 symmetry, originally performed by Dashen, Dothan, Frautschi, and Sharp, to include the effects of non-linear terms. Residue factorizability is used to transform an overdetermined set of equations into a variational problem, which is designed to take advantage of the redundancy of the mathematical system. The solution of this problem automatically satisfies the requirement of factorizability and comes close to satisfying all the original equations.
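
Schematically (a generic illustration of the strategy, with notation assumed here rather than taken from the thesis): if the self-consistency conditions overdetermine a residue matrix that factorizability requires to have rank one, R_{ij} = g_i g_j, the equations R_{ij} = C_{ij} can be replaced by the variational problem

\[ \min_{g} \; \sum_{i,j} \left( C_{ij} - g_i \, g_j \right)^{2}, \]

whose minimizer satisfies factorizability by construction and comes as close to the original equations as the redundancy of the system permits.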

Part II investigates some consequences of direct channel Regge poles and treats the problem of relating Reggeized partial wave expansions made in different reaction channels. An analytic method is introduced which can be used to determine the crossed-channel discontinuity for a large class of direct-channel Regge representations, and this method is applied to some specific representations.

It is demonstrated that the multi-sheeted analytic structure of the Regge trajectory function can be used to resolve apparent difficulties arising from infinitely rising Regge trajectories. Also discussed are the implications of large collections of "daughter trajectories."

Two things are of particular interest: first, the threshold behavior in direct and crossed channels; second, the potentialities of Reggeized representations for use in self-consistent calculations. A new representation is introduced which surpasses previous formulations in these two areas, automatically satisfying direct-channel threshold constraints while being capable of reproducing a reasonable crossed-channel discontinuity. A scalar model is investigated for low energies, and a relation is obtained between the mass of the lowest bound state and the slope of the Regge trajectory.

Abstract:

In this thesis, a collection of novel numerical techniques culminating in a fast, parallel method for the direct numerical simulation of incompressible viscous flows around surfaces immersed in unbounded fluid domains is presented. At the core of all these techniques is the use of the fundamental solutions, or lattice Green’s functions, of discrete operators to solve inhomogeneous elliptic difference equations arising in the discretization of the three-dimensional incompressible Navier-Stokes equations on unbounded regular grids. In addition to automatically enforcing the natural free-space boundary conditions, these new lattice Green’s function techniques facilitate the implementation of robust staggered-Cartesian-grid flow solvers with efficient nodal distributions and fast multipole methods. The provable conservation and stability properties of the appropriately combined discretization and solution techniques ensure robust numerical solutions. Numerical experiments on thin vortex rings, low-aspect-ratio flat plates, and spheres are used to verify the accuracy, physical fidelity, and computational efficiency of the present formulations.
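
The core idea can be stated compactly (a generic statement of the lattice Green's function approach, in notation assumed here): each inhomogeneous elliptic difference equation posed on the unbounded grid is solved as a discrete convolution with the fundamental solution of the discrete operator,

\[ (L u)_{\mathbf{n}} = f_{\mathbf{n}}, \;\; \mathbf{n} \in \mathbb{Z}^{3}, \qquad u_{\mathbf{n}} = \sum_{\mathbf{m} \in \mathbb{Z}^{3}} G_{\mathbf{n}-\mathbf{m}} \, f_{\mathbf{m}}, \qquad (L G)_{\mathbf{n}} = \delta_{\mathbf{n},\mathbf{0}}, \]

so the decay of G at infinity enforces the free-space boundary conditions automatically, and for compactly supported f the convolution can be evaluated rapidly with fast multipole methods.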

Abstract:

Cancer chemotherapy has advanced from highly toxic drugs to more targeted treatments in the last 70 years. Chapter 1 opens with an introduction to targeted therapy for cancer. The benefits of using a nanoparticle to deliver therapeutics are discussed. We move on to siRNA in particular, and why it would be advantageous as a therapy. siRNA delivery faces specific challenges, such as nuclease degradation, rapid clearance from circulation, the need to enter cells, and the need to reach the cytosol. We propose the development of a nanoparticle delivery system to tackle these challenges so that siRNA can be effective.

Chapter 2 of this thesis discusses the synthesis and analysis of a cationic mucic acid polymer (cMAP), which condenses siRNA to form a nanoparticle. Various methods of adding polyethylene glycol (PEG) to stabilize the nanoparticle in physiologic solutions are examined, including binding a boronic acid to the diols on mucic acid, forming a copolymer of cMAP with PEG, and creating a triblock with mPEG on both ends of cMAP. The goal of these various pegylation strategies was to increase the circulation time of the siRNA nanoparticle in the bloodstream to allow more of the nanoparticle to reach tumor tissue by the enhanced permeation and retention effect. We found that the triblock mPEG-cMAP-PEGm polymer condensed siRNA to form very stable 30-40 nm particles that circulated for the longest time: almost 10% of the formulation remained in the bloodstream of mice 1 h after intravenous injection.

Chapter 3 explores the use of an antibody as a targeting agent for nanoparticles. Some antibodies of the IgG1 subtype are able to recruit natural killer cells that effect antibody-dependent cellular cytotoxicity (ADCC) to kill the targeted cell to which the antibody is bound. There is evidence that the ADCC effect remains in antibody-drug conjugates, so we wanted to know whether the ADCC effect is preserved when the antibody is bound to a nanoparticle, which is a much larger and more complex entity. We utilized antibodies against the epidermal growth factor receptor with similar binding and pharmacokinetics, cetuximab and panitumumab, which differ in that cetuximab is an IgG1 and panitumumab is an IgG2 (which does not cause ADCC). Although a natural killer cell culture model showed that gold nanoparticles with a full antibody targeting agent can elicit target cell lysis, we found that this effect was not preserved in vivo. Whether this is because the antibody is not accessible to immune cells or because the natural killer cells are inactivated in a tumor xenograft remains unknown. It is possible that using a full antibody still has value if there are immune functions, intact in an in vitro system, that are altered in a complex in vivo environment, so the value of using a full antibody as a targeting agent versus an antibody fragment or a protein such as transferrin remains open to further exploration.

In chapter 4, nanoparticle targeting and endosomal escape are further discussed with respect to the cMAP nanoparticle system. A diboronic acid entity, which binds cMAP an order of magnitude more strongly than boronic acid does because of the vicinal diols in mucic acid, was synthesized, attached to 5 kD or 10 kD PEG, and conjugated to either transferrin or cetuximab. A histidine was incorporated into the triblock polymer between cMAP and the PEG blocks to allow for siRNA endosomal escape. Nanoparticle size remained 30-40 nm, with a slightly negative zeta potential of ca. -3 mV, both with the histidine-containing triblock polymer and when targeting agents were added. Greater mRNA knockdown was seen with the endosomal escape mechanism than without it. The nanoparticle formulations were able to knock down the targeted mRNA in vitro. In vivo, mixed effects suggestive of function were observed.

Chapter 5 summarizes the project and provides an outlook on siRNA delivery as well as targeted combination therapies for the future of personalized medicine in cancer treatment.