10 results for minimum message length

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, we discuss on-chip links with communication ranges of a few millimeters.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. I/O data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth problem of electrical channels. A linear and low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which leads to high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
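
The charge-domain circuit is the thesis's contribution; as a behavioral reference only, the operation a DFE performs can be sketched in a few lines of Python. The one-sample-per-bit model and the known tap weights below are illustrative assumptions, not a description of the fabricated receiver.

```python
def dfe_slice(samples, taps):
    """Behavioral model of an N-tap decision feedback equalizer.

    samples : channel-output samples, one per bit
    taps    : feedback tap weights; taps[k] cancels ISI from the (k+1)-th past bit
    Returns the recovered +1/-1 decisions.
    """
    history = [0.0] * len(taps)             # past decisions, most recent first
    decisions = []
    for x in samples:
        # Summer: subtract the ISI estimated from previous decisions.
        y = x - sum(w * d for w, d in zip(taps, history))
        d_hat = 1.0 if y >= 0.0 else -1.0   # slicer
        decisions.append(d_hat)
        history = [d_hat] + history[:-1]
    return decisions
```

In the proposed receiver the subtraction is realized by redistributing charge rather than summing currents, which is what enables the reported power savings.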

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area savings.
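
At the level of the decision rule, and only as an assumption about how these two techniques combine rather than a description of the fabricated front-end, double sampling resolves each bit from the difference of consecutive front-end samples, while dynamic offset modulation injects a small offset whose polarity depends on the previous decision so that runs of identical bits still produce a resolvable difference. A toy sketch follows; the sign convention and the `offset` parameter are arbitrary illustrative choices.

```python
def double_sample_decode(samples, offset):
    """Toy model of double-sampling detection with dynamic offset modulation.

    samples : front-end voltages sampled once per bit
    offset  : magnitude of the data-dependent offset; its polarity is set by
              the previous decision (the sign convention here is illustrative)
    Returns bit decisions (1/0). Sketch only, not the thesis's circuit.
    """
    bits = []
    prev_v, prev_bit = samples[0], 1
    for v in samples[1:]:
        polarity = 1 if prev_bit == 1 else -1
        diff = (v - prev_v) + polarity * offset
        bit = 1 if diff >= 0.0 else 0
        bits.append(bit)
        prev_v, prev_bit = v, bit
    return bits
```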

As the technology scales, the number of transistors on the chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves a data rate of up to 20Gb/s (12.5Gb/s/$\mu$m) with better than 136fJ/b power efficiency.

Relevance:

20.00%

Publisher:

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
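
As an illustration of the objective, under one simple access model in which each node is accessed independently and recovery succeeds when the accessed coded data totals at least the size of the object (the thesis analyzes several allocation and access models, so these modeling choices are assumptions), the recovery probability of a given allocation can be estimated by Monte Carlo:

```python
import random

def recovery_probability(allocation, access_prob, trials=100_000):
    """Estimate the probability that a data collector recovers the object.

    allocation  : amount of coded data stored on each node (object size = 1)
    access_prob : probability that any given node is accessed, independently
    Recovery is assumed to succeed when the accessed amounts sum to >= 1.
    """
    successes = 0
    for _ in range(trials):
        accessed = sum(x for x in allocation if random.random() < access_prob)
        if accessed >= 1.0:
            successes += 1
    return successes / trials

# Spreading a budget of 2 over four nodes vs. concentrating it on two nodes.
print(recovery_probability([0.5, 0.5, 0.5, 0.5], access_prob=0.7))
print(recovery_probability([1.0, 1.0, 0.0, 0.0], access_prob=0.7))
```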

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
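
For concreteness, the LRU cache replacement used in the baseline policy can be sketched in a few lines; this is a generic implementation, not the thesis's simulator code.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement for cached data objects."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()            # ordered from least to most recent

    def get(self, name):
        if name not in self.store:
            return None                       # cache miss
        self.store.move_to_end(name)          # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used item
```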

Relevance:

20.00%

Publisher:

Abstract:

The genomes of many positive stranded RNA viruses and of all retroviruses are translated as large polyproteins which are proteolytically processed by cellular and viral proteases. Viral proteases are structurally related to two families of cellular proteases, the pepsin-like and trypsin-like proteases. This thesis describes the proteolytic processing of several nonstructural proteins of dengue 2 virus, a representative member of the Flaviviridae, and describes methods for transcribing full-length genomic RNA of dengue 2 virus. Chapter 1 describes the in vitro processing of the nonstructural proteins NS2A, NS2B and NS3. Chapter 2 describes a system that allows identification of residues within the protease that are directly or indirectly involved with substrate recognition. Chapter 3 describes methods to produce genome length dengue 2 RNA from cDNA templates.

The nonstructural protein NS3 is structurally related to viral trypsin-like proteases from the alpha-, picorna-, poty-, and pestiviruses. The hypothesis that the flavivirus nonstructural protein NS3 is a viral proteinase that generates the termini of several nonstructural proteins was tested using an efficient in vitro expression system and antisera specific for the nonstructural proteins NS2B and NS3. A series of cDNA constructs was transcribed using T7 RNA polymerase and the RNA was translated in reticulocyte lysates. Proteolytic processing occurred in vitro to generate NS2B and NS3. The amino termini of NS2B and NS3 produced in vitro were found to be the same as the termini of NS2B and NS3 isolated from infected cells. Deletion analysis of cDNA constructs localized the protease domain necessary and sufficient for correct cleavage to the first 184 amino acids of NS3. Kinetic analysis of processing events in vitro and experiments to examine the sensitivity of processing to dilution suggested that an intramolecular cleavage between NS2A and NS2B preceded an intramolecular cleavage between NS2B and NS3. The data from these expression experiments confirm that NS3 is the viral proteinase responsible for cleavage events generating the amino termini of NS2B and NS3 and presumably for cleavages generating the termini of NS4A and NS5 as well.

Biochemical and genetic experiments using viral proteinases have defined the sequence requirements for cleavage site recognition, but have not identified residues within proteinases that interact with substrates. A biochemical assay was developed that could identify residues which were important for substrate recognition. Chimeric proteases between yellow fever and dengue 2 were constructed that allowed mapping of regions involved in substrate recognition, and site directed mutagenesis was used to modulate processing efficiency.

Expression in vitro revealed that the dengue protease domain efficiently processes the yellow fever polyprotein between NS2A and NS2B and between NS2B and NS3, but that the reciprocal construct is inactive. The dengue protease processes yellow fever cleavage sites more efficiently than dengue cleavage sites, suggesting that suboptimal cleavage efficiency may be used to increase levels of processing intermediates in vivo. By mutagenizing the putative substrate binding pocket it was possible to change the substrate specificity of the yellow fever protease; changing a minimum of three amino acids in the yellow fever protease enabled it to recognize dengue cleavage sites. This system allows identification of residues which are directly or indirectly involved with enzyme-substrate interaction, does not require a crystal structure, and can define the substrate preferences of individual members of a viral proteinase family.

Full-length cDNA clones, from which infectious RNA can be transcribed, have been developed for a number of positive strand RNA viruses, including the flavivirus type virus, yellow fever. The technology necessary to transcribe genomic RNA of dengue 2 virus was developed in order to better understand the molecular biology of the dengue subgroup. A 5' structural region clone was engineered to transcribe authentic dengue RNA that contains an additional 1 or 2 residues at the 5' end. A 3' nonstructural region clone was engineered to allow production of run-off transcripts, and to allow directional ligation with the 5' structural region clone. In vitro ligation and transcription produce full-length genomic RNA which is noninfectious when transfected into mammalian tissue culture cells. Alternative methods for constructing cDNA clones and recovering live dengue virus are discussed.

Relevance:

20.00%

Publisher:

Abstract:

Although numerous theoretical efforts have been put forth, a systematic, unified and predictive theoretical framework that is able to capture all the essential physics of the interfacial behaviors of ions, such as the Hofmeister series effect, the Jones-Ray effect and the salt effect on bubble coalescence, remains an outstanding challenge. The most common approach to treating electrostatic interactions in the presence of salt ions is the Poisson-Boltzmann (PB) theory. However, there are many systems for which the PB theory fails to offer even a qualitative explanation of the behavior, especially for ions distributed in the vicinity of an interface with dielectric contrast between the two media (like the water-vapor/oil interface). A key factor missing in the PB theory is the self energy of the ion.
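
For reference, a minimal statement of the PB theory for a symmetric z:z electrolyte of bulk concentration $c_0$ (a standard textbook form, not a result of this thesis) is the mean-field equation for the electrostatic potential $\psi$,

$$
\varepsilon\varepsilon_0\,\nabla^2\psi(\mathbf{r}) = 2\,z e\,c_0\,\sinh\!\left(\frac{z e\,\psi(\mathbf{r})}{k_B T}\right),
$$

which contains no self-energy term for the ions; restoring that term is the starting point of the theory developed here.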

In this thesis, we develop a self-consistent theory that treats the electrostatic self energy (including both the short-range Born solvation energy and the long-range image charge interactions), the nonelectrostatic contribution of the self energy, the ion-ion correlation, and the screening effect systematically in a single framework. By assuming a finite charge spread of the ion instead of using the point-charge model, the self energy obtained by our theory is free of divergence problems and is continuous across the interface. This continuous feature allows ions on the water side and the vapor/oil side of the interface to be treated in a unified framework. The theory involves a minimal set of parameters of the ion, such as the valency, radius, and polarizability, together with the dielectric constants of the media, all of which are intrinsic and readily available. The general theory is first applied to study the thermodynamic properties of the bulk electrolyte solution, showing good agreement with experimental results for the activity coefficient and the osmotic coefficient.

Next, we address the effect of the local Born solvation energy on the bulk thermodynamics and interfacial properties of electrolyte solution mixtures. We show that the difference in solvation energy between cations and anions naturally gives rise to local charge separation near the interface and a finite Galvani potential between the two coexisting solutions. The miscibility of the mixture can either increase or decrease depending on the competition between the solvation energy and the translational entropy of the ions. The interfacial tension shows a non-monotonic dependence on the salt concentration: it increases linearly with the salt concentration at higher concentrations, and decreases approximately as the square root of the salt concentration for dilute solutions, in agreement with the Jones-Ray effect observed in experiments.

Next, we investigate the image effects on the double layer structure and interfacial properties near a single charged plate. We show that the image charge repulsion creates a depletion boundary layer that cannot be captured by a regular perturbation approach. The correct weak-coupling theory must include the self-energy of the ion due to the image charge interaction. The image force qualitatively alters the double layer structure and properties, and gives rise to many non-PB effects, such as a nonmonotonic dependence of the surface energy on concentration and charge inversion. The image charge effect is then studied for electrolyte solutions between two plates. For two neutral plates, we show that depletion of the salt ions by the image charge repulsion results in short-range attractive and long-range repulsive forces. If cations and anions are of different valency, the asymmetric depletion leads to the formation of an induced electrical double layer. For two charged plates, the competition between the surface charge and the image charge effect can give rise to like-charge attraction.

Then, we study the inhomogeneous screening effect near the dielectric interface due to the anisotropic and nonuniform ion distribution. We show that the double layer structure and interfacial properties are drastically affected by the inhomogeneous screening if the bulk Debye screening length is comparable to or smaller than the Bjerrum length. The width of the depletion layer is characterized by the Bjerrum length, independent of the salt concentration. We predict that the negative adsorption of ions at the interface increases linearly with the salt concentration, which cannot be captured by either the bulk screening approximation or the WKB approximation. For asymmetric salts, the inhomogeneous screening enhances the charge separation in the induced double layer and significantly increases the value of the surface potential.
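
For concreteness, the two lengths being compared here have the standard definitions (written for a 1:1 electrolyte of bulk number density $c_0$ in a medium of dielectric constant $\varepsilon$):

$$
l_B = \frac{e^2}{4\pi\varepsilon\varepsilon_0 k_B T},
\qquad
\kappa^{-1} = \left(8\pi\, l_B\, c_0\right)^{-1/2},
$$

so the inhomogeneous-screening regime described above corresponds to $\kappa^{-1} \lesssim l_B$.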

Finally, to account for the ion specificity, we study the self energy of a single ion across the dielectric interface. The ion is considered to be polarizable: its charge distribution can be self-adjusted to the local dielectric environment to minimize the self energy. Using intrinsic parameters of the ions, such as the valency, radius, and polarizability, we predict the specific ion effect on the interfacial affinity of halogen anions at the water/air interface, and the strong adsorption of hydrophobic ions at the water/oil interface, in agreement with experiments and atomistic simulations.

The theory developed in this work represents the most systematic theoretical technique for weak-coupling electrolytes. We expect the theory to be useful for studying a wide range of structural and dynamic properties in physicochemical, colloidal, soft-matter and biophysical systems.

Relevance:

20.00%

Publisher:

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
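
For reference, the Ingleton inequality for four jointly distributed random variables can be written in terms of Shannon information measures as

$$
I(X_1;X_2) \le I(X_1;X_2 \mid X_3) + I(X_1;X_2 \mid X_4) + I(X_3;X_4);
$$

entropy vectors arising from linear network codes satisfy it, which is why exhibiting group-characterizable vectors that violate it points toward networks where nonlinear codes could outperform linear ones.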

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
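
As a concrete, deliberately tiny illustration of the quantity being minimized, the coherence of a unit-norm frame is the largest magnitude of an inner product between distinct frame vectors. The sketch below computes it for columns obtained from a few rows of the Fourier (character) matrix of the cyclic group Z_7; the choice of group and rows is purely illustrative and is not one of the constructions from the thesis.

```python
import numpy as np

def coherence(F):
    """Coherence of a frame whose columns are unit-norm vectors."""
    G = np.abs(F.conj().T @ F)     # magnitudes of all pairwise inner products
    np.fill_diagonal(G, 0.0)       # ignore |<f_i, f_i>| = 1
    return G.max()

# Frame from a (hypothetical) subset of rows of the Z_7 Fourier matrix.
d, n = 3, 7
rows = [1, 2, 4]                   # illustrative choice of rows
dft = np.exp(-2j * np.pi * np.outer(range(n), range(n)) / n)
F = dft[rows, :] / np.sqrt(d)      # n = 7 unit-norm columns in C^3
print(coherence(F))
```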

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.

Relevance:

20.00%

Publisher:

Abstract:

We study the behavior of granular materials at three length scales. At the smallest length scale, the grain-scale, we study inter-particle forces and "force chains". Inter-particle forces are the natural building blocks of constitutive laws for granular materials. Force chains are a key signature of the heterogeneity of granular systems. Despite their fundamental importance for calibrating grain-scale numerical models and elucidating constitutive laws, inter-particle forces have not been fully quantified in natural granular materials. We present a numerical force inference technique for determining inter-particle forces from experimental data and apply the technique to two-dimensional and three-dimensional systems under quasi-static and dynamic load. These experiments validate the technique and provide insight into the quasi-static and dynamic behavior of granular materials.

At a larger length scale, the mesoscale, we study the emergent frictional behavior of a collection of grains. Properties of granular materials at this intermediate scale are crucial inputs for macro-scale continuum models. We derive friction laws for granular materials at the mesoscale by applying averaging techniques to grain-scale quantities. These laws portray the nature of steady-state frictional strength as a competition between steady-state dilation and grain-scale dissipation rates. The laws also directly link the rate of dilation to the non-steady-state frictional strength.

At the macro-scale, we investigate continuum modeling techniques capable of simulating the distinct solid-like, liquid-like, and gas-like behaviors exhibited by granular materials in a single computational domain. We propose a Smoothed Particle Hydrodynamics (SPH) approach for granular materials with a viscoplastic constitutive law. The constitutive law uses a rate-dependent and dilation-dependent friction law. We provide a theoretical basis for a dilation-dependent friction law using similar analysis to that performed at the mesoscale. We provide several qualitative and quantitative validations of the technique and discuss ongoing work aiming to couple the granular flow with gas and fluid flows.
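
As a point of reference, one widely used rate-dependent friction law for granular flow, stated here only as a representative example and not necessarily the specific law adopted in this thesis, is the $\mu(I)$ rheology, in which the friction coefficient interpolates between a quasi-static value $\mu_s$ and a limiting value $\mu_2$ as a function of the inertial number $I$:

$$
\mu(I) = \mu_s + \frac{\mu_2 - \mu_s}{1 + I_0/I},
\qquad
I = \frac{\dot{\gamma}\, d}{\sqrt{P/\rho_s}},
$$

where $\dot{\gamma}$ is the shear rate, $d$ the grain diameter, $P$ the confining pressure, $\rho_s$ the grain density, and $I_0$ a fitted constant; a dilation dependence of the kind described above would enter as an additional term or internal state variable.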

Relevance:

20.00%

Publisher:

Abstract:

The problem considered is that of minimizing the drag of a symmetric plate in infinite cavity flow under the constraints of fixed arclength and fixed chord. The flow is assumed to be steady, irrotational, and incompressible. The effects of gravity and viscosity are ignored.

Using complex variables, expressions for the drag, arclength, and chord are derived in terms of two hodograph variables, Γ (the logarithm of the speed) and β (the flow angle), and two real parameters, a magnification factor and a parameter which determines how much of the plate is a free streamline.

Two methods are employed for optimization:

(1) The parameter method. Γ and β are expanded in finite orthogonal series of N terms. Optimization is performed with respect to the N coefficients in these series and the magnification and free-streamline parameters. This method is carried out for the case N = 1, and minimum-drag profiles and drag coefficients are found for all values of the ratio of arclength to chord.

(2) The variational method. A variational calculus method for minimizing integral functionals of a function and its finite Hilbert transform is introduced. This method is applied to functionals of quadratic form, and a necessary condition for the existence of a minimum solution is derived. The variational method is applied to the minimum-drag problem and a nonlinear integral equation is derived but not solved.
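
For reference, the finite Hilbert transform appearing in these functionals is the principal-value integral (written here on the normalized interval $(-1,1)$; the interval and normalization are conventions and may differ from those used in the thesis):

$$
(Tf)(x) = \frac{1}{\pi}\,\mathrm{P}\!\int_{-1}^{1} \frac{f(t)}{t - x}\, dt,
\qquad -1 < x < 1.
$$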

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a topology optimization methodology for the systematic design of optimal multifunctional silicon anode structures in lithium-ion batteries. In order to develop next-generation, high-performance lithium-ion batteries, key design challenges relating to the silicon anode structure must be addressed, namely the lithiation-induced mechanical degradation and the low intrinsic electrical conductivity of silicon. As such, this work considers two design objectives: minimum compliance under design-dependent volume expansion, and maximum electrical conduction through the structure, both of which are subject to a constraint on material volume. Density-based topology optimization methods are employed in conjunction with regularization techniques, a continuation scheme, and mathematical programming methods. The objectives are first considered individually, during which the iteration history, mesh independence, and influence of the prescribed volume fraction and minimum length scale are investigated. The methodology is subsequently extended to a bi-objective formulation to simultaneously address both the compliance and conduction design criteria. A weighting method is used to derive the Pareto fronts, which demonstrate a clear trade-off between the competing design objectives. Furthermore, a systematic parameter study is undertaken to determine the influence of the prescribed volume fraction and minimum length scale on the optimal combined topologies. The developments presented in this work provide a foundation for the informed design and development of silicon anode structures for high-performance lithium-ion batteries.
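
A hedged sketch of the kind of weighted-sum scalarization described above is given below; the normalizations $C_0$ and $R_0$ and the use of a resistance-type measure $R$ for the conduction objective are illustrative assumptions rather than the exact formulation used in the thesis:

$$
\min_{\boldsymbol{\rho}} \;\; w\,\frac{C(\boldsymbol{\rho})}{C_0} + (1-w)\,\frac{R(\boldsymbol{\rho})}{R_0}
\quad \text{subject to} \quad
\sum_{e} \rho_e\, v_e \le V^{*}, \qquad 0 \le \rho_e \le 1,
$$

where $C$ is the compliance under the lithiation-induced (design-dependent) expansion load, $R$ penalizes poor electrical conduction, $\rho_e$ and $v_e$ are the element densities and volumes, $V^{*}$ is the prescribed material volume, and sweeping the weight $w \in [0,1]$ traces out the Pareto front.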

Relevance:

20.00%

Publisher:

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even from a high level would usually require a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside of this is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reached there) and distorted (due to multiple scatterings of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing down the simulation time from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground-truth of the simulated images at the same time, because we dictate the structures at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersection, and a parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
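
To make the simulation ingredients concrete, the sketch below shows the elementary Monte Carlo kernel for photon transport: exponential free-path sampling, absorption, and (here) isotropic re-scattering in a semi-infinite homogeneous medium. It is a toy illustration of the operations such a simulator repeats for every launched photon, not the thesis's simulator, which additionally uses importance sampling, photon splitting, a voxel-based mesh, and parallel computation of A-scans.

```python
import math, random

def backscattered_path_length(mu_s, mu_a, max_steps=10_000):
    """Trace one photon through a semi-infinite homogeneous medium.

    mu_s, mu_a : scattering and absorption coefficients (per unit length)
    Returns the optical path length if the photon re-emerges through the
    surface (z < 0), or None if it is absorbed or the step budget runs out.
    """
    mu_t = mu_s + mu_a                      # total interaction coefficient
    x = y = z = 0.0                         # photon enters at the origin
    ux, uy, uz = 0.0, 0.0, 1.0              # initially travelling into the medium
    path = 0.0
    for _ in range(max_steps):
        step = -math.log(1.0 - random.random()) / mu_t   # free path ~ Exp(mu_t)
        x, y, z = x + ux * step, y + uy * step, z + uz * step
        path += step
        if z < 0.0:                         # photon left the medium: "detected"
            return path
        if random.random() < mu_a / mu_t:   # absorbed at this interaction
            return None
        # Isotropic re-scattering: new direction drawn uniformly on the sphere.
        uz = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        s = math.sqrt(1.0 - uz * uz)
        ux, uy = s * math.cos(phi), s * math.sin(phi)
    return None
```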

Next, we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the lengths of the different layers, thereby reconstructing the ground-truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
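
A minimal sketch of the two-stage "committee of experts" pipeline described above, assuming off-the-shelf random forests and flattened images as features; both are assumptions for illustration, and the thesis's specific models, features, and training sets differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class CommitteeOfExperts:
    """Stage 1 classifies the structure of an OCT image; stage 2 hands the
    image to a regressor trained only on that structure class to predict the
    layer geometry (all images within a class share the same number of layers).
    """

    def __init__(self):
        self.classifier = RandomForestClassifier(n_estimators=100)
        self.regressors = {}                    # one expert regressor per class

    def fit(self, images, structure_labels, layer_params):
        X = np.asarray(images).reshape(len(images), -1)
        y = np.asarray(structure_labels)
        self.classifier.fit(X, y)
        for cls in np.unique(y):
            idx = np.where(y == cls)[0]
            Y_cls = np.asarray([layer_params[i] for i in idx])
            reg = RandomForestRegressor(n_estimators=100)
            reg.fit(X[idx], Y_cls)
            self.regressors[cls] = reg
        return self

    def predict(self, image):
        x = np.asarray(image).reshape(1, -1)
        cls = self.classifier.predict(x)[0]              # stage 1: which structure?
        return cls, self.regressors[cls].predict(x)[0]   # stage 2: layer geometry
```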

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since previously the lower half of an OCT image (i.e., greater depth) could hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at a pixel level. Even attempting this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance:

20.00%

Publisher:

Abstract:

Proper encoding of transmitted information can improve the performance of a communication system. To recover the information at the receiver, it is necessary to decode the received signal. For many codes the complexity and slowness of the decoder are so severe that the code is not feasible for practical use. This thesis considers the decoding problem for one such class of codes, the comma-free codes related to the first-order Reed-Muller codes.

A factorization of the code matrix is found which leads to a simple, fast, minimum-memory decoder. The decoder is modular and only n modules are needed to decode a code of length $2^n$. The relevant factorization is extended to any code defined by a sequence of Kronecker products.
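
For context, the classical instance of this kind of Kronecker factorization is the fast Hadamard transform decoder for the first-order Reed-Muller codes, which computes the correlation of a received word of length $2^n$ with every codeword in $n$ identical stages. The sketch below shows that classical decoder; treating it as a stand-in for the comma-free variant developed in the thesis is an assumption.

```python
import numpy as np

def fast_hadamard_transform(r):
    """Correlate a +/-1 word of length 2**n with all rows of the Hadamard
    matrix, using H_(2^n) = H_2 x H_2 x ... x H_2 (n Kronecker factors),
    i.e. n identical stages ("modules")."""
    r = np.array(r, dtype=float)
    n = int(np.log2(len(r)))
    assert len(r) == 2 ** n
    h = 1
    for _ in range(n):                        # one stage per Kronecker factor
        for start in range(0, len(r), 2 * h):
            for j in range(start, start + h):
                a, b = r[j], r[j + h]
                r[j], r[j + h] = a + b, a - b
        h *= 2
    return r

# Maximum-likelihood decoding of a length-8 biorthogonal codeword:
# the index of the largest-magnitude correlation identifies the codeword,
# and its sign distinguishes the codeword from its complement.
received = [1, -1, 1, 1, 1, -1, 1, -1]
corr = fast_hadamard_transform(received)
best = int(np.argmax(np.abs(corr)))
print(best, corr[best] < 0)
```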

The problem of monitoring the correct synchronization position is also considered. A general answer seems to depend upon more detailed knowledge of the structure of comma-free codes. However, a technique is presented which gives useful results in many specific cases.