974 results for sampling rate


Relevance: 20.00%

Abstract:

Using an unperturbed scattering theory, the characteristics of H-atom photoionization are studied for both a linearly and a circularly polarized one-cycle laser pulse sequence. The asymmetry between photoelectrons emitted in two opposite directions is investigated. The asymmetry degree is found to vary with the carrier-envelope (CE) phase, the laser intensity, and the kinetic energy of the photoelectrons. For linear polarization, the maximal ionization rate varies with the CE phase, and the asymmetry degree follows a sine-like dependence on the CE phase. For circular polarization, the maximal ionization rate remains constant across CE phases, but the asymmetry degree still varies in a sine-like pattern.
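The asymmetry degree discussed above is conventionally defined from the photoelectron yields in the two opposite emission directions; a minimal sketch, where the amplitude A0 and phase offset phi0 are hypothetical placeholders standing in for the computed CE-phase dependence:

```python
import math

def asymmetry(n_plus, n_minus):
    """Asymmetry degree between photoelectron yields in two
    opposite emission directions."""
    return (n_plus - n_minus) / (n_plus + n_minus)

# Sine-like CE-phase dependence, A(phi) = A0 * sin(phi + phi0);
# A0 and phi0 are illustrative, not values from the paper.
A0, phi0 = 0.3, 0.0
for k in range(4):
    phi = k * math.pi / 2                     # carrier-envelope phase
    print(f"phi = {phi:4.2f} rad  A = {A0 * math.sin(phi + phi0):+.3f}")
```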

Relevance: 20.00%

Abstract:

The growth response, feed conversion ratio and cost benefits of hybrid catfish (Heterobranchus longifilis x Clarias gariepinus) fed five maggot-meal-based diets were evaluated for 56 days in outdoor concrete tanks. Twenty-five fingerlings of the hybrid fish were stocked in each of ten outdoor concrete tanks measuring 1.2 m x 0.13 m x 0.18 m, coded MM1-MM5 according to diet. Five isonitrogenous and isocaloric maggot-meal-based diets were used: MM1 (0% maggot meal), MM2 (25%), MM3 (50%), MM4 (75%) and MM5 (100%). The higher the proportion of maggot meal, the higher the ether extract and crude fiber. No significant difference (P>0.05) was found between the ash contents of the experimental diets. Fish fed diet MM2 showed the best growth performance and the highest mean growth rate (MGR), significantly different (P<0.05) from all other diets. No significant differences (P>0.05) were found between the growth parameters for diets MM1, MM3 and MM4. A positive correlation (r=1.0, P<0.05) was found between the growth parameters for the different experimental diets, with the strongest correlation (r^2=0.9981, P<0.05) between MGR values within the treatments. There was no significant difference (P>0.05) in expenditure between the trials, but the profit indices and incidence of cost differed significantly. Diet MM2 gave the best yield cost and net profit. Maggot meal can therefore be recommended without reservation in hybrid catfish diets at inclusion levels up to 75% for growth and profitability.
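The economic metrics evaluated here reduce to simple ratios; a minimal sketch using the standard aquaculture definitions (which we assume the study follows) and hypothetical numbers, not the study's data:

```python
def feed_conversion_ratio(feed_intake_g, weight_gain_g):
    """FCR: dry feed fed per unit of wet weight gained (lower is better)."""
    return feed_intake_g / weight_gain_g

def profit_index(value_of_fish, cost_of_feed):
    """Value of fish produced per unit cost of feed (higher is better)."""
    return value_of_fish / cost_of_feed

def incidence_of_cost(cost_of_feed, weight_gain_g):
    """Feed cost incurred per unit of weight gained."""
    return cost_of_feed / weight_gain_g

# Hypothetical tank over the 56-day trial (illustrative only):
print(feed_conversion_ratio(180.0, 100.0))  # → 1.8
print(profit_index(300.0, 120.0))           # → 2.5
```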

Relevance: 20.00%

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, we discuss on-chip links with communication ranges of a few millimeters.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth of electrical channels. A linear, low-power summer is the central block of a DFE. Conventional approaches implement the summer with current-mode techniques, which require high power consumption. To achieve low-power operation, we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through a prototype implemented in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
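As a behavioral illustration of the feedback loop the summer sits in (not the charge-domain circuit itself), a DFE can be sketched in a few lines; the tap weight and received samples below are hypothetical:

```python
def dfe_receive(samples, taps):
    """Behavioral decision feedback equalizer: before slicing, each
    incoming sample is summed with weighted past decisions -- the
    summation performed by the summer described above."""
    decisions = []
    for x in samples:
        # Subtract the estimated postcursor ISI from past decisions;
        # taps[0] weights the most recent decision.
        isi = sum(t * d for t, d in
                  zip(taps, reversed(decisions[-len(taps):])))
        y = x - isi
        decisions.append(1 if y > 0 else -1)  # slicer
    return decisions

# Channel with one postcursor tap of 0.5: transmitted +1,-1,+1,+1
# arrives as [1.0, -0.5, 0.5, 1.5]; the DFE recovers the bits.
print(dfe_receive([1.0, -0.5, 0.5, 1.5], [0.5]))  # → [1, -1, 1, 1]
```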

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and cross-talk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.
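A behavioral sketch of the double-sampling idea, with a simplified stand-in for dynamic offset modulation (the per-bit threshold flip below is our modeling assumption, not the circuit described above):

```python
def double_sampling_rx(v, offset):
    """Slice bits by comparing consecutive samples of the slowly
    integrating front-end voltage (double sampling). The threshold
    flips sign with the previous decision -- a toy stand-in for
    dynamic offset modulation, which keeps runs of identical bits
    resolvable as the integrating node approaches its rail."""
    bits, prev = [], 0
    for n in range(1, len(v)):
        dv = v[n] - v[n - 1]                  # double sampling
        thr = -offset if prev else offset     # offset tracks last bit
        bit = 1 if dv > thr else 0
        bits.append(bit)
        prev = bit
    return bits

# Toy RC front-end: the node charges toward the rail for a 1,
# discharges for a 0, by a fixed fraction alpha per bit time.
bits_tx = [1, 1, 1, 0, 1, 0, 0, 1]
v, alpha = [0.5], 0.3
for b in bits_tx:
    v.append(v[-1] + ((1.0 if b else 0.0) - v[-1]) * alpha)
print(double_sampling_rx(v, 0.05) == bits_tx)  # → True
```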

As the technology scales, the number of transistors on a chip grows, necessitating a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and a dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/µm) with better than 136fJ/b of power efficiency.

Relevance: 20.00%

Abstract:

Intramolecular electron transfer in partially reduced cytochrome c oxidase has been studied by means of perturbed equilibrium techniques. We have prepared a three-electron-reduced, CO-inhibited form of the enzyme in which cytochrome a and copper A are partially reduced and in intramolecular redox equilibrium. When these samples were photolyzed using a nitrogen laser (0.6 µs, 1.0 mJ pulses), changes in absorbance at 598 nm and 830 nm were observed which are consistent with fast electron transfer from cytochrome a to copper A. The absorbance changes at 598 nm have an apparent rate of 17,200 ± 1,700 s^(-1) (1σ) at pH 7.0 and 25.5 °C. These changes were not observed in either the CO mixed-valence or CO-inhibited fully reduced forms of the enzyme. The rate is fastest at about pH 8.0 and falls off in either direction, and there is a small but clear temperature dependence. The process was also observed in the cytochrome c -- cytochrome c oxidase high-affinity complex.
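An apparent rate like the one above is typically extracted by fitting the post-flash absorbance transient to a single exponential; a minimal sketch, assuming a clean single-exponential relaxation and synthetic data at the reported rate:

```python
import math

def relaxation_rate(times_s, delta_abs):
    """Apparent rate k of a single-exponential relaxation
    dA(t) = dA0 * exp(-k t), from the least-squares slope
    of ln|dA| versus t."""
    ys = [math.log(abs(a)) for a in delta_abs]
    n = len(times_s)
    xbar, ybar = sum(times_s) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(times_s, ys))
    sxx = sum((x - xbar) ** 2 for x in times_s)
    return -sxy / sxx  # k in s^-1

# Synthetic trace at the apparent rate reported above, k = 17,200 s^-1:
k_true = 17200.0
ts = [i * 1e-5 for i in range(1, 8)]          # 10-70 µs after the flash
dA = [0.010 * math.exp(-k_true * t) for t in ts]
print(round(relaxation_rate(ts, dA)))          # → 17200
```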

This rate is far faster than any rate measured or inferred previously for the cytochrome a -- copper A electron equilibration, but the interpretation of these results is hampered by the fact that the relaxation could only be followed during the time before CO rebound to the oxygen-binding site. The meaning of our measured rate is discussed, along with other reported rates for this process. In addition, a temperature-jump experiment on the same system is discussed.

We have also prepared a partially reduced, cyanide inhibited form of the enzyme in which cytochrome a, copper A and copper B are partially reduced and in redox equilibrium. Warming these samples produced absorbance changes at 605 nm which indicate that cytochrome a was becoming more oxidized, but there were no parallel changes in absorbance at 830 nm as would be expected if copper A was becoming reduced. We concluded that electrons were being redistributed from cytochrome a to copper B. The kinetics of the absorbance changes at 605 nm were investigated by temperature-jump methods. Although a rate could not be resolved, we concluded that the process must occur with an (apparent) rate larger than 10,000 s^(-1).

During the course of the temperature-jump experiments, we also found that non-redox-related, temperature-dependent absorbance changes in fully reduced CO-inhibited cytochrome c oxidase, and in the cyanide mixed-valence enzyme, took place with an (apparent) rate faster than 30,000 s^(-1).

Relevance: 20.00%

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered an underdetermined data model (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has emerged, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that despite the huge volume of data arising in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This raises the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
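A toy instance of why sparsity rescues an underdetermined system: with 3 equations and 6 unknowns the system below is hopeless in general, but if the unknown vector is 1-sparse, simple matched filtering against the (normalized) columns finds it. The matrix and signal are illustrative, not from the thesis:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def recover_1sparse(A, y):
    """Recover a 1-sparse x from the underdetermined system y = A x:
    correlate y with each normalized column; the best match carries
    the single nonzero entry."""
    m, n = len(A), len(A[0])                  # m equations, n > m unknowns
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    score = [abs(dot(c, y)) / math.sqrt(dot(c, c)) for c in cols]
    j = max(range(n), key=score.__getitem__)
    coeff = dot(cols[j], y) / dot(cols[j], cols[j])
    return j, coeff                            # support index, coefficient

# 3 measurements, 6 unknowns; the truth is x = 2.5 * e_4.
A = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]
y = [2.5, 0.0, 2.5]                            # y = 2.5 * (column 4 of A)
print(recover_1sparse(A, y))                   # → (4, 2.5)
```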

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
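One concrete instance of such a sparse spatial sampling scheme is the coprime array. The sketch below uses a common textbook construction (not necessarily the exact geometry proposed in this thesis) to show how 6 physical sensors yield 17 distinct correlation lags, which is why correlation priors let the array identify more sources than sensors:

```python
def coprime_array(M, N):
    """Sensor positions (in half-wavelength units) of a coprime array:
    N sensors at multiples of M plus 2M sensors at multiples of N,
    with M and N coprime."""
    return sorted(set(M * n for n in range(N)) |
                  set(N * m for m in range(2 * M)))

def difference_coarray(positions):
    """Distinct lags p - q seen by a correlation-aware method: each lag
    acts as one virtual measurement of the source covariance."""
    return sorted(set(p - q for p in positions for q in positions))

pos = coprime_array(2, 3)
print(pos, len(pos))                        # → [0, 2, 3, 4, 6, 9] 6
print(len(difference_coarray(pos)))         # → 17
```

With M = 2, N = 3 the coarray covers the contiguous lags -7…7 plus ±9, so 6 physical sensors behave like a much longer virtual array for covariance-based estimation.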

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance: 20.00%

Abstract:

In this thesis I apply paleomagnetic techniques to paleoseismological problems. I investigate the use of secular-variation magnetostratigraphy to date prehistoric earthquakes; I identify liquefaction remanent magnetization (LRM); and I quantify coseismic deformation within a fault zone by measuring the rotation of paleomagnetic vectors.

In Chapter 2 I construct a secular-variation reference curve for southern California. For this curve I measure three new well-constrained paleomagnetic directions: two from the Pallett Creek paleoseismological site at A.D. 1397-1480 and A.D. 1465-1495, and one from Panum Crater at A.D. 1325-1365. To these three directions I add the best nine data points from the Sternberg secular-variation curve, five data points from Champion, and one point from the A.D. 1480 eruption of Mt. St. Helens. I derive the error due to the non-dipole field that is added to these data by the geographical correction to southern California. Combining these yields a secular variation curve for southern California covering the period A.D. 670 to 1910, with the best coverage in the range A.D. 1064 to 1505.

In Chapter 3 I apply this curve to a problem in southern California. Two paleoseismological sites in the Salton trough of southern California have sediments deposited by prehistoric Lake Cahuilla. At the Salt Creek site I sampled sediments from three different lakes, and at the Indio site I sampled sediments from four different lakes. Based upon the coinciding paleomagnetic directions I correlate the oldest lake sampled at Salt Creek with the oldest lake sampled at Indio. Furthermore, the penultimate lake at Indio does not appear to be present at Salt Creek. Using the secular variation curve I can assign the lakes at Salt Creek to broad age ranges of A.D. 800 to 1100, A.D. 1100 to 1300, and A.D. 1300 to 1500. This example demonstrates the large uncertainties in the secular variation curve and the need to construct curves from a limited geographical area.

Chapter 4 demonstrates that seismically induced liquefaction can cause resetting of detrital remanent magnetization and acquisition of a liquefaction remanent magnetization (LRM). I sampled three different liquefaction features, a sandbody formed in the Elsinore fault zone, diapirs from sediments of Mono Lake, and a sandblow in these same sediments. In every case the liquefaction features showed stable magnetization despite substantial physical disruption. In addition, in the case of the sandblow and the sandbody, the intensity of the natural remanent magnetization increased by up to an order of magnitude.

In Chapter 5 I apply paleomagnetic methods to measure the tectonic rotations in a 52-meter-long transect across the San Andreas fault zone at the Pallett Creek paleoseismological site. This site has presented a significant problem because the brittle long-term average slip rate across the fault is significantly less than the slip rate from other nearby sites. I find sections adjacent to the fault with tectonic rotations of up to 30°. If interpreted as block rotations, the non-brittle offset was 14.0 +2.8/-2.1 meters in the last three earthquakes and 8.5 +1.0/-0.9 meters in the last two. Combined with the brittle offset in these events, the last three events all had about 6 meters of total fault offset, even though the intervals between them were markedly different.

In Appendix 1 I present a detailed description of my standard sampling and demagnetization procedure.

In Appendix 2 I present a detailed discussion of the study at Panum Crater that yielded the well-constrained paleomagnetic direction for use in developing secular variation curve in Chapter 2. In addition, from sampling two distinctly different clast types in a block-and-ash flow deposit from Panum Crater, I find that this flow had a complex emplacement and cooling history. Angular, glassy "lithic" blocks were emplaced at temperatures above 600° C. Some of these had cooled nearly completely, whereas others had cooled only to 450° C, when settling in the flow rotated the blocks slightly. The partially cooled blocks then finished cooling without further settling. Highly vesicular, breadcrusted pumiceous clasts had not yet cooled to 600° C at the time of these rotations, because they show a stable, well clustered, unidirectional magnetic vector.

Relevance: 20.00%

Abstract:

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
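The restriction property can be checked in a toy instance over a prime field F_p (the thesis works over F_q^m; the field size and degrees below are illustrative): composing a total-degree-3 polynomial with a degree-2 curve gives a univariate polynomial of degree at most 3 · 2 = 6, so 7 evaluations along the curve determine it everywhere on the curve:

```python
import random

P = 101  # small prime field F_p (illustrative)

def curve(coeffs, t):
    """Evaluate c(t) = sum_i a_i t^i in F_p^m, with vector
    coefficients a_i; len(coeffs) - 1 is the curve degree."""
    m = len(coeffs[0])
    pt, tp = [0] * m, 1
    for a in coeffs:
        pt = [(x + ai * tp) % P for x, ai in zip(pt, a)]
        tp = (tp * t) % P
    return pt

def interp_eval(ts, vs, t):
    """Lagrange interpolation over F_p, evaluated at t."""
    total = 0
    for i, (ti, vi) in enumerate(zip(ts, vs)):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if j != i:
                num = num * ((t - tj) % P) % P
                den = den * ((ti - tj) % P) % P
        total = (total + vi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

random.seed(1)
coeffs = [[random.randrange(P) for _ in range(2)] for _ in range(3)]  # degree-2 curve in F_p^2
f = lambda x, y: (x**3 + 2 * x * y + 5 * y**2 + 7) % P               # total degree 3

# f restricted to the curve has degree <= 6: interpolating from
# 7 points must predict an 8th point exactly.
ts = list(range(7))
vs = [f(*curve(coeffs, t)) for t in ts]
pred, actual = interp_eval(ts, vs, 7), f(*curve(coeffs, 7))
print(pred == actual)  # → True
```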

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], which obtained curve samplers with near-optimal randomness complexity.

In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.