943 results for Branch and bounds


Relevance:

30.00%

Publisher:

Abstract:

This study owes its inception to the wisdom and experience of the staff of the Northeast Fisheries Science Center who, after several decades of surveys in the New York Bight, recognized a unique opportunity to capitalize on the decision to stop ocean dumping of sewage sludge and designed an innovative field study to evaluate effects on living marine resources and their habitats. For decades ocean dumping was viewed as a cheap and effective means for disposal of wastes generated by urbanized coastal areas. Even after the 12-mile site was closed, sewage sludge continued to be dumped at Deepwater Dumpsite 106. The 6-mile site off the New Jersey coast is still used as a dumpsite for dredged material from New York Harbor areas. Discussions continue on the propriety of using the deep ocean spaces for disposal of a variety of material, including low-level radioactive wastes. Consequently, managers are still faced with critical decisions in this area. It is to be hoped that the results from the 12-mile study will provide the information with which these managers can evaluate future risks associated with ocean waste disposal. (PDF file contains 270 pages.)

Relevance:

30.00%

Publisher:

Abstract:

Distinct structures delineating the introns of Simian Virus 40 T-antigen and Adenovirus 2 E1A genes have been discovered. The structures, which are centered around the branch points of the genes inserted in supercoiled double-stranded plasmids, are specifically targeted through photoactivated strand cleavage by the metal complex tris(4,7-diphenyl-1,10-phenanthroline)rhodium(III). The DNA sites that are recognized lack sequence homology but are similar in demarcating functionally important sites on the RNA level. The single-stranded DNA fragments corresponding to the coding strands of the genes were also found to fold into a structure apparently identical to that in the supercoiled genes, based on the recognition by the metal complex. Further investigation of different single-stranded DNA fragments with other structural probes, such as another metal complex, bis(1,10-phenanthroline)(phenanthrenequinone diimine)rhodium(III), AMT (4'-aminomethyl-4,5',8-trimethylpsoralen), the restriction enzyme Mse I, and mung bean nuclease, showed that the structures require the sequences at both ends of the intron plus the flanking sequences, but not the middle of the intron. The two ends form independent helices which interact with each other to form the global tertiary structures. Both of the intron structures share similarities with the structure of the Holliday junction, which is also known to be specifically targeted by the former metal complex. These structures may have arisen from early RNA intron structures and may have been used to facilitate the evolution of genes through exon shuffling by acting as target sites for recombinase enzymes.

Relevance:

30.00%

Publisher:

Abstract:

RNA interference (RNAi) is a powerful biological pathway allowing for sequence-specific knockdown of any gene of interest. While RNAi is a proven tool for probing gene function in biological circuits, it is limited in that it is constitutively ON, executing the logical operation: silence gene Y. To provide greater control over post-transcriptional gene silencing, we propose engineering a biological logic gate to implement “conditional RNAi.” Such a logic gate would silence gene Y only upon the expression of gene X, a completely unrelated gene, executing the logic: if gene X is transcribed, silence independent gene Y. Silencing of gene Y could be confined to a specific time and/or tissue by appropriately selecting gene X.

To implement the logic of conditional RNAi, we present the design and experimental validation of three nucleic acid self-assembly mechanisms which detect a sub-sequence of mRNA X and produce a Dicer substrate specific to gene Y. We introduce small conditional RNAs (scRNAs) to execute the signal transduction under isothermal conditions. scRNAs are small RNAs which change conformation, leading to both shape and sequence signal transduction, in response to hybridization to an input nucleic acid target. While all three conditional RNAi mechanisms execute the same logical operation, they explore various design alternatives for nucleic acid self-assembly pathways, including the use of duplex and monomer scRNAs, stable versus metastable reactants, multiple methods of nucleation, and 3-way and 4-way branch migration.

We demonstrate the isothermal execution of the conditional RNAi mechanisms in a test tube with recombinant Dicer. These mechanisms execute the logic: if mRNA X is detected, produce a Dicer substrate targeting independent mRNA Y. Only the final Dicer substrate, not the scRNA reactants or intermediates, is efficiently processed by Dicer. Additional work in human whole-cell extracts and a model tissue-culture system delves into both the promise and challenge of implementing conditional RNAi in vivo.

Relevance:

30.00%

Publisher:

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
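
For reference, the classical inequality underlying this setting states that for independent random variables $X_1,\dots,X_n$ with $X_i \in [a_i, b_i]$ almost surely,

\[
\Pr\!\left[\sum_{i=1}^n \left(X_i - \mathbb{E}[X_i]\right) \ge t\right] \le \exp\!\left(\frac{-2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right),
\]

and the OUQ approach described above seeks tighter bounds by optimizing directly over all distributions consistent with the available information.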

Relevance:

30.00%

Publisher:

Abstract:

Since the discovery of D-branes as non-perturbative, dynamic objects in string theory, various configurations of branes in type IIA/B string theory and M-theory have been considered to study their low-energy dynamics described by supersymmetric quantum field theories.

One example of such a construction is based on the description of Seiberg-Witten curves of four-dimensional N = 2 supersymmetric gauge theories as branes in type IIA string theory and M-theory. This enables us to study the gauge theories in strongly-coupled regimes. Spectral networks are another tool for utilizing branes to study non-perturbative regimes of two- and four-dimensional supersymmetric theories. Using spectral networks of a Seiberg-Witten theory we can find its BPS spectrum, which is protected from quantum corrections by supersymmetry, and also the BPS spectrum of a related two-dimensional N = (2,2) theory whose (twisted) superpotential is determined by the Seiberg-Witten curve. When we don’t know the perturbative description of such a theory, its spectrum obtained via spectral networks is a useful piece of information. In this thesis we illustrate these ideas with examples of the use of Seiberg-Witten curves and spectral networks to understand various two- and four-dimensional supersymmetric theories.

First, we examine how the geometry of a Seiberg-Witten curve serves as a useful tool for identifying various limits of the parameters of the Seiberg-Witten theory, including Argyres-Seiberg duality and Argyres-Douglas fixed points. Next, we consider the low-energy limit of a two-dimensional N = (2,2) supersymmetric theory from an M-theory brane configuration whose (twisted) superpotential is determined by the geometry of the branes. We show that, when the two-dimensional theory flows to its infrared fixed point, particular cases realize Kazama-Suzuki coset models. We also study the BPS spectrum of an Argyres-Douglas type superconformal field theory on the Coulomb branch by using its spectral networks. We provide strong evidence of the equivalence of superconformal field theories from different string-theoretic constructions by comparing their BPS spectra.

Relevance:

30.00%

Publisher:

Abstract:

The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.

If dim M = dim H - 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.

The projective bound Q, defined to be the supremum of the operator norm of P_M over all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.
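
For reference, in these terms the projective bound is

\[
Q \;=\; \sup_{M}\, \|P_M\| \;=\; \sup_{M}\, \sup_{x \neq 0} \frac{\|P_M x\|}{\|x\|},
\]

and the Euclidean norm is the standard example of the Q = 1 case (a well-known fact, not stated in the abstract): there every P_M is the orthogonal projector onto M, so ∥P_M∥ = 1 for every nontrivial subspace and hence Q = 1, consistent with the linearity result above.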

If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection on (kernel P_M)^⊥ by the dual norm. The projective bounds of a norm and its dual are equal.

The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to prove that (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken using dual norms.

In all results, the real and complex cases are handled in a completely parallel fashion.

Relevance:

30.00%

Publisher:

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance and no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
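
A minimal sketch of the computation just described, with θ denoting a model in the uncertain class Θ, p(θ) its probability, F the failure event, and D the response data (this notation is assumed here, not taken from the thesis):

\[
P(F) = \int_{\Theta} P(F \mid \theta)\, p(\theta)\, d\theta,
\qquad
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{\int_{\Theta} p(D \mid \theta')\, p(\theta')\, d\theta'}.
\]

The first integral is the probable performance evaluated via the asymptotic approximation; the second is the Bayes's Theorem update applied once response data become available.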

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance:

30.00%

Publisher:

Abstract:

The works presented in this thesis explore a variety of extensions of the standard model of particle physics which are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton whose decay would violate baryon number by one unit and lepton number by an odd number of units.

The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated, but the proton does not decay as a result. The second paper extends this analysis to explore models in which baryon number is conserved, but lepton flavor violation is present. Special attention is given to the processes of μ to e conversion and μ → eγ, which are constrained by existing experimental limits and relevant to future experiments.

The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.

Relevance:

30.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically:

  i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model.
  ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously.
  iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model.

We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
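
For concreteness, one common way to write the two estimators contrasted in i) (the thesis's exact formulation may differ) is

\[
\hat{x}_{\mathrm{LS}} = \arg\min_{x} \|Ax - y\|_2^2,
\qquad
\hat{x}_{\mathrm{lasso}} = \arg\min_{x} \|Ax - y\|_2^2 + \lambda \|x\|_1,
\]

where the ℓ1 penalty encodes the sparse signal model and λ > 0 trades data fidelity against sparsity.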

Relevance:

30.00%

Publisher:

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
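
For reference, the quantum relation alluded to is the familiar position-momentum uncertainty bound

\[
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\]

and the lower bound derived in this work can be read as an analogue of this relation arising purely from lossless classical dynamics and measurement back action.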

Relevance:

30.00%

Publisher:

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
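
A minimal sketch of a scheduler in this spirit (hypothetical code, not the thesis's Algorithm 1: the quadratic flattening objective, the step size, and the energy-only projection are all assumptions here, and per-slot power limits would require a more careful projection step):

    import numpy as np

    # Hypothetical sketch of a distributed gradient-descent load scheduler.
    # Each deferrable load n must consume total energy E[n] over T slots;
    # every load reacts to the same broadcast aggregate-demand signal, so
    # each update can be computed locally.
    rng = np.random.default_rng(0)
    T, N = 24, 10                          # time slots, number of loads
    b = 1.0 + 0.5 * rng.random(T)          # non-deferrable net demand profile
    E = rng.uniform(2.0, 5.0, size=N)      # total energy request of each load
    p = np.tile((E / T)[:, None], (1, T))  # start from flat schedules

    step = 0.1 / N                         # small step size for stability
    for _ in range(15):                    # ~15 iterations, as reported above
        agg = b + p.sum(axis=0)            # aggregate demand at each slot
        p -= step * 2.0 * agg              # gradient step on sum_t agg[t]^2
        # project each schedule back onto its energy budget: sum_t p[n,t] = E[n]
        p += (E - p.sum(axis=1))[:, None] / T

    flat_start = b + E.sum() / T
    print("aggregate variance: %.4f -> %.6f"
          % (np.var(flat_start), np.var(b + p.sum(axis=0))))

With this step size the aggregate's deviation from flat shrinks by a constant factor each iteration, consistent with the fast empirical convergence reported above.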

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 uses updated predictions of renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expectation of the future deferrable loads' total energy request.
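
Continuing the hypothetical sketch above (and reusing its numpy import), the pseudo-load construction described here might look like the following; the even spread over future slots is an assumed starting point, since the model-predictive controller re-optimizes it at every step:

    def pseudo_load(expected_future_energy, T_remaining):
        # Stand-in for deferrable loads that have not yet arrived: it
        # consumes nothing in the current slot (index 0), and its total
        # energy equals the expected future energy requests, so the
        # scheduler reserves capacity for them in later slots.
        # Assumes T_remaining >= 2.
        profile = np.zeros(T_remaining)
        profile[1:] = expected_future_energy / (T_remaining - 1)
        return profile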

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but numerically is much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
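
For orientation, one standard single-phase form of the branch flow model from the literature (the notation here is assumed; the thesis's BFM-SDP is its multiphase semidefinite counterpart) associates with each line (i, j) the sending-end flows P_ij and Q_ij, the squared current magnitude ℓ_ij, and squared voltage magnitudes v_i:

\[
\begin{aligned}
p_j &= P_{ij} - r_{ij}\,\ell_{ij} - \sum_{k:\,(j,k)} P_{jk}, \\
q_j &= Q_{ij} - x_{ij}\,\ell_{ij} - \sum_{k:\,(j,k)} Q_{jk}, \\
v_j &= v_i - 2\,(r_{ij} P_{ij} + x_{ij} Q_{ij}) + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij}, \\
\ell_{ij} &= \frac{P_{ij}^2 + Q_{ij}^2}{v_i}.
\end{aligned}
\]

The convex relaxations discussed next replace the last (nonconvex) equality with the second-order cone constraint ℓ_ij ≥ (P_ij² + Q_ij²)/v_i; the relaxation is exact precisely when this inequality is tight at the optimum.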

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived with the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70-fold speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
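
Under assumption 1, neglecting the loss terms (ℓ_ij ≈ 0) in the branch flow equations above yields the widely used linearized model (again a standard single-phase sketch; the thesis's multiphase version under assumption 2 is more involved):

\[
p_j \approx P_{ij} - \sum_{k:\,(j,k)} P_{jk},
\qquad
q_j \approx Q_{ij} - \sum_{k:\,(j,k)} Q_{jk},
\qquad
v_j \approx v_i - 2\,(r_{ij} P_{ij} + x_{ij} Q_{ij}),
\]

in which flows and voltages are linear in the injections, so the gradients needed by Algorithm 9 can be estimated in closed form.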

Relevance:

30.00%

Publisher:

Abstract:

The Los Angeles Harbor at San Pedro, with its natural advantages and the great development of these now under way, will very soon be the key to the traffic routes of Southern California. The Atchison, Topeka, and Santa Fe railway company, realizing this and not wishing to be caught asleep, has planned to build a line from El Segundo to the harbor. The developments at the harbor are not the only developments taking place in these localities, and the proposed new line is intended to serve these as well.

Relevance:

30.00%

Publisher:

Abstract:

Part I

Chapter 1.....A physicochemical study of the DNA molecules from the three bacteriophages, N1, N5, and N6, which infect the bacterium M. lysodeikticus, has been made. The molecular weights, as measured by both electron microscopy and sedimentation velocity, are 23 × 10^6 for N5 DNA and 31 × 10^6 for N1 and N6 DNA's. All three DNA's are capable of thermally reversible cyclization. N1 and N6 DNA's have identical or very similar base sequences as judged by membrane filter hybridization and by electron microscope heteroduplex studies. They have identical or similar cohesive ends. These results are in accord with the close biological relation between N1 and N6 phages. N5 DNA is not closely related to N1 or N6 DNA. The denaturation Tm of all three DNA's is the same and corresponds to a (GC) content of 70%. However, the buoyant densities in CsCl of N1 and N6 DNA's are lower than expected, corresponding to predicted GC contents of 64 and 67%. The buoyant densities in Cs2SO4 are also somewhat anomalous. The buoyant density anomalies are probably due to the presence of odd bases. However, direct base composition analysis of N1 DNA by anion exchange chromatography confirms a GC content of 70%, and, in the elution system used, no peaks due to odd bases are present.

Chapter 2.....A covalently closed circular DNA form has been observed as an intracellular form during both productive and abortive infection processes in M. lysodeikticus. This species has been isolated by the method of CsCl-ethidium bromide centrifugation and examined with an electron microscope.

Chapter 3.....A minute circular DNA has been discovered as a homogeneous population in M. lysodeikticus. Its length and molecular weight as determined by electron microscopy are 0.445 μ and 0.88 × 10^6 daltons, respectively. There is about one minicircle per bacterium.

Chapter 4.....Several strains of E. coli 15 harbor a prophage. Viral growth can be induced by exposing the host to mitomycin C or to UV irradiation. The coliphage 15 particles from E. coli 15 and E. coli 15 T- appear as normal phage with head and tail structure; the particles from E. coli 15 TAU are tailless. The complete particles exert a colicinogenic activity on E. coli 15 and 15 T-; the tailless particles do not. No host for a productive viral infection has been found and the phage may be defective. The properties of the DNA of the virus have been studied, mainly by electron microscopy. After induction but before lysis, a closed circular DNA with a contour length of about 11.9 μ is found in the bacterium; the mature phage DNA is a linear duplex and 7.5% longer than the intracellular circular form. This suggests the hypothesis that the mature phage DNA is terminally repetitious and circularly permuted. The hypothesis was confirmed by observing that denaturation and renaturation of the mature phage DNA produce circular duplexes with two single-stranded branches corresponding to the terminal repetition. The contour length of the mature phage DNA was measured relative to φX RFII DNA and λ DNA; the calculated molecular weight is 27 × 10^6. The length of the single-stranded terminal repetition was compared to the length of φX174 DNA under conditions where single-stranded DNA is seen in an extended form in electron micrographs. The length of the terminal repetition is found to be 7.4% of the length of the nonrepetitious part of the coliphage 15 DNA. The number of base pairs in the terminal repetition is variable in different molecules, with a fractional standard deviation of 0.18 of the average number in the terminal repetition. A new phenomenon termed "branch migration" has been discovered in renatured circular molecules; it results in forked branches, with two emerging single strands, at the position of the terminal repetition. The distribution of branch separations between the two terminal repetitions in the population of renatured circular molecules was studied. The observed distribution suggests that there is an excluded volume effect in the renaturation of a population of circularly permuted molecules such that strands with close beginning points preferentially renature with each other. This selective renaturation and the phenomenon of branch migration both affect the distribution of branch separations; the observed distribution does not contradict the hypothesis of a random distribution of beginning points around the chromosome.

Chapter 5....Some physicochemical studies on the minicircular DNA species in E. coli 15 (0.670 μ, 1.47 × 10^6 daltons) have been made. Electron microscopic observations showed multimeric forms of the minicircle, which amount to 5% of total DNA species, and also showed presumably replicating forms of the minicircle. A renaturation kinetic study showed that the minicircle is a unique DNA species in its size and base sequence. A study of minicircle replication has been made under conditions in which host DNA synthesis is synchronized. Despite the experimental uncertainties involved, it seems that minicircle replication is random and that the number of minicircles increases continuously throughout a generation of the host, regardless of host DNA synchronization.

Part II

The flow dichroism of dilute DNA solutions (A260 ≈ 0.1) has been studied in a Couette-type apparatus with the outer cylinder rotating and with the light path parallel to the cylinder axis. Shear gradients in the range of 5-160 sec^-1 were studied. The DNA samples were whole, "half," and "quarter" molecules of T4 bacteriophage DNA, and linear and circular λb2b5c DNA. For the linear molecules, the fractional flow dichroism is a linear function of molecular weight. The dichroism for linear λ DNA is about 1.8 times that of the circular molecule. For a given DNA, the dichroism is an approximately linear function of shear gradient, but with a slight upward curvature at low values of G, and some trend toward saturation at larger values of G. The fractional dichroism increases as the supporting electrolyte concentration decreases.

Relevance:

30.00%

Publisher:

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
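
For reference, the Ingleton inequality mentioned above can be written for four jointly distributed random variables X1, X2, X3, X4 as

\[
I(X_1; X_2) \;\le\; I(X_1; X_2 \mid X_3) + I(X_1; X_2 \mid X_4) + I(X_3; X_4),
\]

a bound that every entropy vector arising from a linear network code must satisfy; group-characterizable vectors that violate it are therefore certificates that nonlinear codes can potentially do better.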

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
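
Concretely, for a unit-norm frame {f_i}, i = 1, ..., N, in C^d, the quantity being minimized is the coherence

\[
\mu \;=\; \max_{i \neq j} \left| \langle f_i, f_j \rangle \right|,
\]

which for N > d can be no smaller than the Welch bound \sqrt{(N-d)/(d(N-1))} (a standard benchmark, cited here for context rather than taken from the thesis).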

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
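
As a point of reference for these distance bounds: an unconstrained [n, k] linear code obeys the classical Singleton bound

\[
d \;\le\; n - k + 1,
\]

with equality attained by Reed-Solomon codes; the constrained bounds derived in the thesis quantify how much of this distance can survive when each parity symbol may depend only on its prescribed subset of message symbols.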