7 results for Cost-Distance

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance; each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication, then discuss board-to-board links, which demand a longer communication range, and finally turn to on-chip links with communication ranges of a few millimeters.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. I/O data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth of electrical channels. A linear, low-power summer is the central block of a DFE. Conventional approaches implement the summer with current-mode techniques, which consume considerable power. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through a prototype implemented in 65nm CMOS, which achieves up to 20Gb/s while consuming less than 10mW.
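
For readers unfamiliar with DFE, the following is a minimal behavioral sketch (in Python, with an idealized channel and made-up tap values; it illustrates only the feedback summation, not the charge-domain circuit proposed above): past decisions are weighted by the post-cursor channel taps and subtracted from the incoming sample before the slicer.

    import numpy as np

    def dfe_receive(rx, taps):
        """Recover bits from ISI-corrupted samples with a simple DFE.

        rx   : received samples, one per bit (first channel tap is the cursor)
        taps : post-cursor ISI coefficients cancelled by the feedback path
        """
        decisions = []
        for y in rx:
            # Summer: subtract the ISI predicted from previously decided bits
            for k, h in enumerate(taps):
                if k < len(decisions):
                    y -= h * decisions[-(k + 1)]
            decisions.append(1.0 if y > 0 else -1.0)  # slicer
        return decisions

    # Idealized channel: unit cursor plus two post-cursor taps (ISI)
    channel = [1.0, 0.45, 0.2]
    bits = np.random.choice([-1.0, 1.0], 32)
    rx = np.convolve(bits, channel)[:len(bits)]
    assert dfe_receive(rx, channel[1:]) == list(bits)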

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with better than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area savings.
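
For scale, energy per bit converts directly to power: at the quoted 24Gb/s, an efficiency better than 0.4pJ/b corresponds to at most about $0.4\,\mathrm{pJ/b} \times 24\,\mathrm{Gb/s} \approx 9.6\,\mathrm{mW}$ per channel.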

As the technology scales, the number of transistors on a chip grows, which necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and a dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/$\mu$m) with better than 136fJ/b of power efficiency.
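
By the usual definition of bandwidth density (aggregate data rate per unit wiring width), these figures imply that each 20Gb/s link occupies roughly $20/12.5 = 1.6\,\mu$m of wire width.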

Relevance: 20.00%

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory, respectively. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
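
To make the rebuilding question concrete, here is a toy sketch (a hypothetical (4,2) code over the rationals, chosen only for readability; it is not the array-code construction developed in Part I): any two symbols determine the data, so correcting two erasures necessarily touches all surviving information, and the question above asks how much less must be read when only one symbol is erased.

    from fractions import Fraction as F

    # Toy (n = 4, k = 2) MDS code: symbol i is the inner product of row i
    # of G with the data (x1, x2).  Any two rows of G are invertible, so
    # r = 2 redundancy symbols sustain any 2 erasures.
    G = [[F(1), F(0)],   # systematic symbol x1
         [F(0), F(1)],   # systematic symbol x2
         [F(1), F(1)],   # parity p1 = x1 + x2
         [F(1), F(2)]]   # parity p2 = x1 + 2*x2

    def encode(x):
        return [g[0] * x[0] + g[1] * x[1] for g in G]

    def decode_from(pair, codeword):
        """Recover (x1, x2) from any two surviving symbol indices."""
        i, j = pair
        a, b = G[i], G[j]
        det = a[0] * b[1] - a[1] * b[0]
        yi, yj = codeword[i], codeword[j]
        return [(yi * b[1] - yj * a[1]) / det, (yj * a[0] - yi * b[0]) / det]

    data = [F(7), F(3)]
    cw = encode(data)
    # Two erasures (symbols 0 and 1): the decoder reads *all* the rest.
    assert decode_from((2, 3), cw) == data
    # Part I asks how small a fraction of the surviving data must be read
    # to repair a *single* erasure; the constructions there meet the 1/2
    # lower bound, which this toy example does not attempt to show.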

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only some of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states and using only ``push-to-the-top'' operations. These Gray codes turn out to solve an open combinatorial problem on universal cycles, that is, sequences of integers generating all possible partial permutations.
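
The following is a small sketch of the basic rank-modulation idea (a generic illustration with made-up charge levels, not the bounded or partial variants proposed in Part II): the stored symbol is the permutation obtained by ranking the cells' analog charge levels, and the only write primitive is pushing one cell above the current maximum.

    def induced_permutation(levels):
        """Rank modulation: the data is the permutation of cell indices
        ordered from highest to lowest charge level."""
        return sorted(range(len(levels)), key=lambda i: levels[i], reverse=True)

    def push_to_top(levels, i, margin=1.0):
        """Program cell i just above the current maximum; since no exact
        target level is needed, overshoot errors are avoided."""
        levels[i] = max(levels) + margin
        return levels

    levels = [2.7, 1.1, 3.4, 0.6]          # analog charge on n = 4 cells
    print(induced_permutation(levels))      # [2, 0, 1, 3]
    push_to_top(levels, 3)                  # raise cell 3 above all others
    print(induced_permutation(levels))      # [3, 2, 0, 1]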

Relevance: 20.00%

Abstract:

In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource, which depends on the set of agents that choose that resource, is distributed among them. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; well-known examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
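
As a concrete illustration of these two canonical rules (a generic sketch with a made-up welfare function at a single resource, not one of the games studied in the thesis): the Shapley value averages each agent's marginal contribution over all orderings and always distributes exactly the generated welfare, while the marginal-contribution rule pays each agent its marginal value with respect to all others and in general does not.

    from itertools import permutations

    def shapley(agents, welfare):
        """Shapley value: average marginal contribution over all arrival orders."""
        shares = {a: 0.0 for a in agents}
        orders = list(permutations(agents))
        for order in orders:
            seen = set()
            for a in order:
                shares[a] += welfare(seen | {a}) - welfare(seen)
                seen.add(a)
        return {a: s / len(orders) for a, s in shares.items()}

    def marginal_contribution(agents, welfare):
        """Marginal-contribution rule: welfare(everyone) - welfare(everyone but me)."""
        full = set(agents)
        return {a: welfare(full) - welfare(full - {a}) for a in agents}

    welfare = lambda S: 10 * len(S) ** 0.5   # made-up concave welfare at a resource
    agents = ["a", "b", "c"]
    sv = shapley(agents, welfare)
    mc = marginal_contribution(agents, welfare)
    print(sum(sv.values()))   # equals welfare(all agents): budget-balanced
    print(sum(mc.values()))   # smaller than welfare(all agents) here: not budget-balanced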

Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) that guarantee equilibrium existence for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.

We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction of budget-balance that limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.

We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result follows from a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
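
One classical, unweighted instance of such a transformation is the Hart-Mas-Colell potential: the Shapley value of a welfare function equals the marginal contribution computed on its potential. The sketch below verifies this numerically for a made-up welfare function; the generalized weighted setting used in the thesis is more involved, so this is only meant to convey the flavor of the GWSV/GWMC equivalence.

    from itertools import combinations
    from math import factorial

    def shapley_value(players, v):
        """Shapley value via the standard subset-weight formula."""
        n = len(players)
        sh = {}
        for p in players:
            others = [q for q in players if q != p]
            total = 0.0
            for r in range(n):
                for S in combinations(others, r):
                    S = frozenset(S)
                    w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                    total += w * (v(S | {p}) - v(S))
            sh[p] = total
        return sh

    def potential(S, v, memo=None):
        """Hart--Mas-Colell potential: |S| * P(S) = v(S) + sum over i in S of P(S - {i})."""
        memo = {} if memo is None else memo
        S = frozenset(S)
        if not S:
            return 0.0
        if S not in memo:
            memo[S] = (v(S) + sum(potential(S - {i}, v, memo) for i in S)) / len(S)
        return memo[S]

    players = ("a", "b", "c")
    v = lambda S: len(S) ** 2 + (2 if "a" in S else 0)   # made-up welfare function
    full = frozenset(players)
    sh = shapley_value(players, v)
    for p in players:
        # Shapley value of v == marginal contribution on the transformed
        # welfare function P (its potential): the unweighted analogue of
        # the GWSV/GWMC equivalence described above.
        assert abs(sh[p] - (potential(full, v) - potential(full - {p}, v))) < 1e-9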

Relevance: 20.00%

Abstract:

The rate of electron transport between distant sites was studied. The rate depends crucially on the chemical details of the donor, acceptor, and surrounding medium. These reactions involve electron tunneling through the intervening medium and are, therefore, profoundly influenced by the geometry and energetics of the intervening molecules. The dependence of rate on distance was considered for several rigid donor-acceptor "linkers" of experimental importance. Interpretation of existing experiments and predictions for new experiments were made.
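
For context, the distance dependence in such problems is usually framed in the standard nonadiabatic (golden-rule) picture sketched below; the specific coupling and Franck-Condon models treated in the thesis may differ in detail:

$$ k_{ET} \;=\; \frac{2\pi}{\hbar}\,|T_{DA}|^{2}\,(\mathrm{FC}), \qquad |T_{DA}| \;\propto\; e^{-\beta (R - R_0)/2} \;\;\Longrightarrow\;\; k_{ET} \propto e^{-\beta (R - R_0)}, $$

where $T_{DA}$ is the donor-acceptor electronic coupling mediated by the intervening medium, (FC) is a Franck-Condon weighted density of nuclear states, and the decay constant $\beta$ depends on the energetics and geometry of the bridging molecules.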

The electronic and nuclear motion in molecules is correlated. A Born-Oppenheimer separation is usually employed in quantum chemistry to separate this motion. Long distance electron transfer rate calculations require the total donor wave function when the electron is very far from its binding nuclei. The Born-Oppenheimer wave functions at large electronic distance are shown to be qualitatively wrong. A model which correctly treats the coupling was proposed. The distance and energy dependence of the electron transfer rate was determined for such a model.

Relevance: 20.00%

Abstract:

Successful management has been defined as the art of spending money wisely and well. Profits may not be the be-all and end-all of business, but they are certainly the test of practicality. Everything worthwhile should pay for itself. One proposal is no better than another, except as in the working out it yields better results.

Relevance: 20.00%

Abstract:

A series of meso-phenyloctamethylporphyrins covalently bonded at the 4'-phenyl position to quinones via rigid bicyclo[2.2.2]octane spacers were synthesized for the study of the dependence of electron transfer reaction rate on solvent, distance, temperature, and energy gap. A general and convergent synthesis was developed based on the condensation of a,c-biladienes with masked quinone-spacer-benzaldehydes. From picosecond fluorescence spectroscopy, emission lifetimes were measured in seven solvents of varying polarity. Rate constants were determined to vary from $5.0 \times 10^{9}\,\mathrm{sec}^{-1}$ in N,N-dimethylformamide to $1.15 \times 10^{10}\,\mathrm{sec}^{-1}$ in benzene, and were observed to rise at most by about a factor of three with decreasing solvent polarity. Experiments at low temperature in 2-MTHF glass (77K) revealed fast, nearly temperature-independent electron transfer characterized by non-exponential fluorescence decays, in contrast to monophasic behavior in fluid solution at 298K. This example evidently represents the first photosynthetic model system not based on proteins to display nearly temperature-independent electron transfer at high temperatures (nuclear tunneling). Low temperatures appear to freeze out the rotational motion of the chromophores, and the observed non-exponential fluorescence decays may be explained as a result of electron transfer from an ensemble of rotational conformations. The non-exponentiality demonstrates the sensitivity of the electron transfer rate to the precise magnitude of the electronic matrix element, which supports the expectation that electron transfer is nonadiabatic in this system. The addition of a second bicyclooctane moiety (15 Å vs. 18 Å edge-to-edge between porphyrin and quinone) reduces the transfer rate by at least a factor of 500-1500. Porphyrin-quinones with variously substituted quinones allowed an examination of the dependence of the electron transfer rate constant $\kappa_{ET}$ on reaction driving force. The classical trend of increasing rate with increasing exothermicity occurs from $0.7\,\mathrm{eV} \le |\Delta G^{0\prime}(R)| \le 1.0\,\mathrm{eV}$ until a maximum is reached ($\kappa_{ET} = 3 \times 10^{8}\,\mathrm{sec}^{-1}$ rising to $1.15 \times 10^{10}\,\mathrm{sec}^{-1}$ in acetonitrile). The rate remains insensitive to $\Delta G^{0}$ for ~300 mV, from $1.0\,\mathrm{eV} \le |\Delta G^{0\prime}(R)| \le 1.3\,\mathrm{eV}$, and then slightly decreases in the most exothermic case studied (cyanoquinone, $\kappa_{ET} = 5 \times 10^{9}\,\mathrm{sec}^{-1}$).
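
The driving-force dependence summarized above is commonly interpreted with the classical Marcus expression (quoted here only as the standard framework, not necessarily the analysis used in this work):

$$ \kappa_{ET} \;=\; A \exp\!\left[ -\frac{(\Delta G^{0} + \lambda)^{2}}{4 \lambda k_{B} T} \right], $$

in which the rate rises with increasing exothermicity until $-\Delta G^{0} \approx \lambda$ and falls thereafter (the inverted region); a maximum near $|\Delta G^{0\prime}| \approx 1.0$-$1.3$ eV would place the reorganization energy $\lambda$ roughly in that range.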

Relevance: 20.00%

Abstract:

In this thesis, I develop velocity and structure models for the Los Angeles Basin and Southern Peru. The ultimate goal is to better understand the geological processes involved in basin and subduction zone dynamics. The results are obtained from seismic interferometry using ambient noise and from receiver functions using earthquake-generated waves. Some unusual signals specific to the local structures are also studied. The main findings are summarized as follows:

(1) Los Angeles Basin

The shear wave velocities range from 0.5 to 3.0 km/s in the sediments, with lateral gradients at the Newport-Inglewood, Compton-Los Alamitos, and Whittier Faults. The basin reaches a maximum depth of 8 km along the profile, and the Moho rises to a depth of 17 km under the basin. The basin has a stretch factor of 2.6 in the center, decreasing to 1.3 at the edges, and is in approximate isostatic equilibrium. This "high-density" (~1 km spacing), "short-duration" (~1.5 month) experiment may serve as a prototype for covering other basins with this type of low-cost survey.

(2) Peruvian subduction zone

Two prominent mid-crust structures are revealed in the 70 km thick crust under the Central Andes: a low-velocity zone interpreted as partially molten rocks beneath the Western Cordillera – Altiplano Plateau, and the underthrusting Brazilian Shield beneath the Eastern Cordillera. The low-velocity zone is oblique to the present trench, and possibly indicates the location of the volcanic arcs formed during the steepening of the Oligocene flat slab beneath the Altiplano Plateau.

The Nazca slab changes from normal-dipping (~25 degrees) subduction in the southeast to flat subduction in the northwest of the study area. In the flat subduction regime, the slab subducts to ~100 km depth and then remains flat for ~300 km before it resumes a normal dipping geometry. The flat part closely follows the topography of the continental Moho above, indicating a strong suction force between the slab and the overriding plate. A high-velocity mantle wedge exists above the western half of the flat slab, which indicates the lack of melting and thus explains the cessation of the volcanism above. The velocity returns to normal values before the slab steepens again, indicating a possible resumption of dehydration and eclogitization.

(3) Some unusual signals

Strong higher-mode Rayleigh waves due to the basin structure are observed at periods less than 5 s. The particle motions provide a good test for distinguishing between the fundamental and higher modes. Precursor and coda waves relative to the interstation Rayleigh waves are observed, and are modeled with a strong scatterer located in the active volcanic area in Southern Peru. In contrast with the usual receiver function analysis, multiples are used extensively in this thesis: in the LA Basin, a good image is obtained only from PpPs multiples, while in Peru, PpPp multiples contribute significantly to the final results.
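
As an illustration of the ambient-noise interferometry step underlying the velocity models, here is a minimal sketch with synthetic noise and a made-up interstation delay; the real processing additionally involves preprocessing, long-term stacking, and dispersion measurement:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 20.0                     # samples per second
    n = 6000                      # five minutes of synthetic "noise"
    delay_s = 3.0                 # assumed interstation travel time

    # Station A records a noise wavefield; station B sees a delayed,
    # attenuated copy of it plus its own local noise.
    source = rng.standard_normal(n)
    shift = int(delay_s * fs)
    station_a = source + 0.5 * rng.standard_normal(n)
    station_b = 0.6 * np.roll(source, shift) + 0.5 * rng.standard_normal(n)
    station_b[:shift] = 0.5 * rng.standard_normal(shift)   # remove wrap-around

    # Cross-correlating the two noise records concentrates energy at the
    # interstation travel time -- an empirical Green's function from which
    # Rayleigh-wave dispersion, and hence shear velocity, can be measured.
    xcorr = np.correlate(station_b, station_a, mode="full")
    lags = np.arange(-n + 1, n) / fs
    print("peak lag ~ %.2f s" % lags[np.argmax(xcorr)])    # close to 3.0 s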