987 results for soft-commutation techniques


Relevance:

20.00%

Publisher:

Abstract:

This thesis reports on a method to improve in vitro diagnostic assays that detect immune response, with specific application to HIV-1. The inherent polyclonal diversity of the humoral immune response was addressed by using sequential in situ click chemistry to develop a cocktail of peptide-based capture agents, the components of which were raised against different, representative anti-HIV antibodies that bind to a conserved epitope of the HIV-1 envelope protein gp41. The cocktail was used to detect anti-HIV-1 antibodies from a panel of sera collected from HIV-positive patients, with improved signal-to-noise ratio relative to the gold standard commercial recombinant protein antigen. The capture agents were stable when stored as a powder for two months at temperatures close to 60°C.

Relevance:

20.00%

Publisher:

Abstract:

Brazil is one of the largest per capita consumers of sugar, and studies have shown a specific role for excessive sugar intake in weight gain. Given the rising weight gain observed in several countries, including Brazil, it is important to test which messages, strategies, and intervention proposals are effective in preventing this epidemic. The data reported here come from a cluster-randomized controlled trial conducted in 20 municipal schools in the metropolitan city of Niterói, Rio de Janeiro State, from March to December 2007, which tested the efficacy of guidance given to school cooks aimed at reducing the availability of sugar and of sugar-rich foods both in school meals and in the cooks' own diets. The intervention consisted of a nutrition education program in the schools using messages, activities, and educational materials that encouraged the cooks to reduce the sugar added to school meals and to their own diets. The reduction in the schools' per capita availability of sugar was analyzed using spreadsheets tracking the use of stock items. The cooks' individual intake was assessed with a food frequency questionnaire. Anthropometric and biochemical measurements were taken according to standardized techniques. The intervention schools showed a larger reduction in per capita sugar availability than the control schools (-6.0 kg vs. 3.4 kg), but the difference was not statistically significant. Consumption of sweets and sugar-sweetened beverages fell among the cooks of both groups, but sugar intake did not differ significantly between them. Total energy intake fell in both groups, with no difference between them and no change in the macronutrient shares of energy intake. At the end of the study only the cooks in the intervention group maintained their weight loss, but again without a statistically significant difference. The strategy of reducing the availability and consumption of sugar by public school cooks did not achieve its primary goal of reducing added sugar.

A secondary analysis of the data assessed the baseline association of self-perceived health and self-perceived diet quality with the cooks' excess weight and elevated serum cholesterol. The self-perception questions were administered by interview. Among the women who considered their diet healthy, 40% had elevated cholesterol and 61% were overweight, versus 68% and 74%, respectively, among those who considered their diet unhealthy. Among those who considered their health good, 41% had elevated cholesterol and 59% were overweight, versus 71% and 81%, respectively, among those who considered their health poor. Most of the women who reported eating a healthy diet consumed fruits, greens and vegetables, beans, and milk and dairy products more frequently and soft drinks less frequently. We conclude that single, simple questions such as those used for self-rated health can also be valuable in assessing diet.

Relevance:

20.00%

Publisher:

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic-ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss versus residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
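
As a sketch of how the mass follows from such measurements (the standard relations for an energy-loss versus residual-energy telescope; the notation here is assumed, not taken from the thesis): using a power-law range-energy relation

    R(E) ≈ k (M/Z²) (E/M)^a,   a ≈ 1.7,

a nucleus that deposits ΔE in a detector layer of effective thickness L (corrected for the measured trajectory angle) and stops with residual energy E′ satisfies

    L = R(E′ + ΔE) − R(E′) = k M^(1−a) Z^(−2) [ (E′ + ΔE)^a − (E′)^a ],

which can be solved for the mass M once ΔE, E′, and L are measured.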

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ~ 3 and ~0.2 AMU for Z ~ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ~ 3) and ~0.3 AMU (Z ~ 26).

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
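
As a hedged sketch of the steady-state balance from which such a formalism starts (notation assumed here):

    N_i/τ_esc + N_i/τ_int,i = Q_i + Σ_j N_j/τ_{j→i},

where N_i is the local interstellar density of isotope i, Q_i its source injection rate, τ_esc the mean escape time, τ_int,i the time scale for destruction by inelastic collisions, and τ_{j→i} the time scale for fragmentation of a heavier species j into i. Setting Q_s = 0 for the purely secondary isotope s makes its observed abundance a direct calibration of the secondary-production terms, after which the source ratios Q_i/Q_j of the remaining isotopes follow from the measured abundance ratios.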

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance:

20.00%

Publisher:

Abstract:

With continuing advances in CMOS technology, feature sizes of modern silicon chips have shrunk drastically over the past decade. In addition to desktop and laptop processors, a vast majority of these chips are deployed in mobile communication devices such as smartphones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to serve applications such as Wi-Fi, Bluetooth, NFC, and wireless charging. While a small feature size enables higher integration levels, allowing billions of transistors to co-exist on a single chip, it also makes these silicon ICs more susceptible to variations. Part of this variation can be attributed to the manufacturing process itself, particularly the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF and millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. A further bottleneck in the development of high-performance RF/mm-wave silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes, primarily because most cutting-edge processes are geared towards digital system implementation, leaving little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error silicon validations and thereby incur additional production costs. This dissertation proposes a low-overhead technique that counters the detrimental effects of these variations, improving both performance and yield of chips after fabrication in a systematic way. The key idea is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured under a variety of operating conditions.
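
A conceptual sketch of such a sense-identify-actuate loop is given below. The knob and sensor names are hypothetical, and the greedy one-step search merely stands in for whatever on-chip optimization algorithm a real implementation would use; this illustrates the idea, not the thesis's circuits.

    import random

    def self_heal(sense, actuate, knobs, steps=200):
        """Greedy gradient-free search over integer actuator codes.
        sense() returns the measured figure of merit (higher is better);
        actuate(state) drives the on-chip actuators; knobs maps each
        actuator name to its (lo, hi) code range."""
        state = {k: (lo + hi) // 2 for k, (lo, hi) in knobs.items()}
        actuate(state)
        best = sense()
        for _ in range(steps):
            k = random.choice(list(knobs))
            lo, hi = knobs[k]
            trial = dict(state, **{k: min(hi, max(lo, state[k] + random.choice((-1, 1))))})
            actuate(trial)
            fom = sense()
            if fom > best:              # keep the move only if performance improves
                state, best = trial, fom
            else:
                actuate(state)          # otherwise revert the actuators
        return state, best

    # Toy environment: the optimum bias code has drifted to 37 (a stand-in
    # for process variation); the loop recovers it with no model of the drift.
    env = {"bias": 0}
    def actuate(s): env.update(s)
    def sense(): return -abs(env["bias"] - 37)
    print(self_heal(sense, actuate, {"bias": (0, 63)}))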

We demonstrate a high-power mm-wave transmitter architecture based on a segmented power-mixer array that is capable of generating high-speed, non-constant-envelope modulations at higher efficiencies than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and even catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase-synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.

Relevance:

20.00%

Publisher:

Abstract:

Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects that employ optical channels have negligible frequency-dependent loss and offer a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on the generation and distribution of multi-phase, high-speed, high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques encounter design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique for addressing these challenges, but its small locking range has been a major obstacle to its widespread adoption.

In the first part of this dissertation we describe a wideband injection-locking scheme in an LC oscillator. Phase-locked loop (PLL) and injection-locking elements are combined symbiotically to achieve a wide locking range while retaining the simplicity of the latter. The method requires neither a phase-frequency detector nor a loop filter to achieve phase lock. A mathematical analysis of the system is presented and an expression for the new locking range is derived. A locking range of 13.4–17.2 GHz (25%) and an average jitter-tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded-clock receivers.
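
For context, the classical stand-alone locking range of an injection-locked LC oscillator (Adler's result for small injection, quoted here as standard background, not as the thesis's derived expression) is

    ω_L ≈ (ω₀ / 2Q) · (I_inj / I_osc),

so the high tank Q desirable for low jitter directly shrinks the locking range; the symbiotic PLL/injection-locking loop described above is aimed at breaking exactly this trade-off.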

To improve the locking range of an injection-locked ring oscillator, a quadrature locked loop (QLL) is introduced. The inherent dynamics of an injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7–7.4 GHz) to 90% (4–11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter rate. The QLL drives an injection-locked oscillator (ILO) at each channel, without any repeaters, for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps achieve a reliable data rate of 16–32 Gb/s, and adaptive body biasing helps maintain an ultra-low power consumption of 153 pJ/bit.

From the optical receiver we move on to a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, enabling low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modelling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimal for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. This technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.
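
A minimal sketch of the kind of non-linear time-domain model involved is the standard single-mode laser rate-equation pair, integrated here with forward Euler. All parameter values are illustrative assumptions, not the fitted model from the thesis; the point is that the step response rings non-linearly, which is why LTI FIR pre-emphasis falls short.

    # Standard single-mode VCSEL rate equations (carrier density N, photon
    # density S), with gain compression. Parameters are rough textbook-scale
    # assumptions, not measured device values.
    q = 1.602e-19                 # electron charge [C]
    V = 3e-19                     # active-region volume [m^3] (assumed)
    tau_n, tau_p = 1e-9, 2e-12    # carrier / photon lifetimes [s] (assumed)
    g0, N_tr = 1e-12, 1e24        # gain slope [m^3/s], transparency density [1/m^3]
    Gamma, beta, eps = 0.04, 1e-4, 1e-23  # confinement, spont. fraction, compression

    def simulate(I_of_t, T=3e-9, dt=1e-14):
        """Integrate dN/dt and dS/dt for a drive-current waveform I_of_t(t)."""
        N, S = 1.3e25, 1e18       # rough initial values near threshold
        out, t = [], 0.0
        while t < T:
            G = g0 * (N - N_tr) / (1.0 + eps * S)     # compressed optical gain
            dN = I_of_t(t) / (q * V) - N / tau_n - G * S
            dS = Gamma * G * S - S / tau_p + Gamma * beta * N / tau_n
            N += dt * dN
            S += dt * dS
            out.append(S)
            t += dt
        return out

    # A 2 mA -> 4 mA current step: the photon density responds with
    # relaxation-oscillation ringing, unlike an LTI electrical channel.
    S = simulate(lambda t: 2e-3 if t < 1e-9 else 4e-3)
    print(max(S), S[-1])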

Relevance:

20.00%

Publisher:

Abstract:

This study aimed to evaluate the reliability of a soft-tissue external reference as an aid to vertical positioning of the maxilla. Forty patients with dentofacial deformity who underwent total maxillary osteotomy were selected. The subjects were divided into two groups to compare two external reference techniques: a suture placed in soft tissue, and a Kirschner wire, the latter serving as the control technique. Data were collected in two phases. In the first, the vertical position of the maxilla was measured before the Le Fort I osteotomy and again after fixation of the maxilla, using the external reference; from these measurements, the vertical change in each case was obtained intraoperatively. In the second phase, vertical measurements of the maxilla were made on pre- and postoperative cephalometric radiographs, yielding the vertical change in each case based on the radiographic records. The difference between the vertical change observed during surgery and the vertical change obtained from the radiographs was then calculated, giving values that correspond to the imperfections in the vertical positioning of the maxilla for each patient, taking the position of the upper central incisor as the reference. The results were compared and analyzed statistically. The mean vertical positioning accuracy was 0.52 mm in the control group and 0.65 mm in the soft-tissue reference group. Student's t test at the 5% level revealed no statistically significant difference in accuracy between the two external reference techniques (P = 0.429). In conclusion, both techniques were effective aids to vertical positioning of the maxilla, and the soft-tissue external reference showed an accuracy similar to that obtained with the Kirschner wire technique.

Relevance:

20.00%

Publisher:

Abstract:

1. The effect of 2,2’-bis-[α-(trimethylammonium)methyl]azobenzene (2BQ), a photoisomerizable competitive antagonist, was studied at the nicotinic acetylcholine receptor of Electrophorus electroplaques using voltage-jump and light-flash techniques.

2. 2BQ, at concentrations below 3 μM, reduced the amplitude of voltage-jump relaxations but had little effect on the voltage-jump relaxation time constants under all experimental conditions. At higher concentrations and at voltages more negative than -150 mV, 2BQ caused significant open-channel blockade.

3. Dose-ratio studies showed that the cis and trans isomers of 2BQ have equilibrium binding constants (K) of 0.33 and 1.0 μM, respectively. The binding constants determined for both isomers are independent of temperature, voltage, agonist concentration, and the nature of the agonist.

4. In a solution of predominantly cis-2BQ, visible-light flashes led to a net cis→trans isomerization and caused an increase in the agonist-induced current. This increase had at least two exponential components; the larger amplitude component had the same time constant as a subsequent voltage-jump relaxation; the smaller amplitude component was investigated using ultraviolet light flashes.

5. In a solution of predominantly trans-2BQ, UV-light flashes led to a net trans→cis isomerization and caused a net decrease in the agonist-induced current. This effect had at least two exponential components. The smaller and faster component was an increase in agonist-induced current with a time constant similar to that of the voltage-jump relaxation. The larger component was a slow decrease in the agonist-induced current, with a rate constant roughly an order of magnitude smaller than that of the voltage-jump relaxation. This slow component provided a measure of the rate constant for dissociation of cis-2BQ (k₋ = 60 s⁻¹ at 20°C). Simple modelling of the slope of the dose-rate curves yields an association rate constant of 1.6 × 10⁸ M⁻¹s⁻¹. This agrees with the association rate constant of 1.8 × 10⁸ M⁻¹s⁻¹ estimated from the binding constant (Kᵢ). The Q₁₀ of the dissociation rate constant of cis-2BQ was 3.3 between 6°C and 20°C. The rate constants for association and dissociation of cis-2BQ at receptors are independent of voltage, agonist concentration, and the nature of the agonist.
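
The agreement quoted above is the standard consistency check k₊ = k₋/K for a bimolecular binding equilibrium:

    k₊ = k₋ / Kᵢ = (60 s⁻¹) / (0.33 × 10⁻⁶ M) ≈ 1.8 × 10⁸ M⁻¹s⁻¹,

matching the 1.6 × 10⁸ M⁻¹s⁻¹ obtained independently from the slope of the dose-rate curves.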

6. We have measured the molecular rate constants of a competitive antagonist which has roughly the same K as d-tubocurarine but interacts more slowly with the receptor. This leads to the conclusion that curare itself has an association rate constant of 4 × 10⁹ M⁻¹s⁻¹, roughly as fast as possible for an encounter-limited reaction.

Relevance:

20.00%

Publisher:

Abstract:

In fresh waters the planktonic Crustacea are represented mainly by two large groups, the Copepoda and the Cladocera. This study focuses on Holopedium gibberum and examines whether this planktonic species is an indicator of soft-water lakes. H. gibberum occurs throughout the northern half of the globe, but its distribution is scattered and irregular. The study is based on a literature review and on samples taken from water bodies in Norway.

Relevance:

20.00%

Publisher:

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and the distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all these contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region, the space of all possible vectors of joint entropies that can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups for designing codes for networks that could potentially outperform linear coding.
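
For reference, a common information-theoretic form of the Ingleton inequality for four discrete random variables A, B, C, D (necessarily satisfied by entropy vectors arising from linear codes, but violable in general) is

    I(A;B) ≤ I(A;B|C) + I(A;B|D) + I(C;D),

and exhibiting group-characterizable vectors that violate it is what opens the door to beating linear network codes.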

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
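
In the simplest abelian case the group Fourier matrix is just the DFT matrix, so "selecting a subset of rows" produces a classical harmonic frame whose coherence can be checked directly. The sketch below is an assumed illustrative example (rows indexed by a (7,3,1) difference set in Z₇, which happens to meet the Welch bound), not a construction from the thesis.

    import numpy as np

    def harmonic_frame(n, rows):
        """Keep len(rows) rows of the n x n DFT matrix; the n columns
        become unit-norm frame vectors in C^len(rows)."""
        F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n)
        return F / np.sqrt(len(rows))

    def coherence(F):
        """Largest |inner product| between distinct unit-norm frame vectors."""
        G = np.abs(F.conj().T @ F)
        np.fill_diagonal(G, 0.0)
        return G.max()

    F = harmonic_frame(7, [1, 2, 4])          # rows indexed by a difference set
    welch = np.sqrt((7 - 3) / (3 * (7 - 1)))  # lower bound: 7 vectors in C^3
    print(coherence(F), welch)                # both ~0.4714: optimal coherence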

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
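
As a toy illustration of the setting (an assumed example, not the thesis's bound or construction): a binary systematic code whose parity symbols are each constrained to depend on only a subset of the message symbols, with its minimum distance found by brute force.

    import numpy as np
    from itertools import product

    # Systematic [7,4] binary code with constrained parities:
    #   p0 = m0+m1+m2, p1 = m1+m2+m3, p2 = m0+m1+m3
    # (each parity sees only a subset of message symbols).
    G = np.array([[1, 0, 0, 0, 1, 0, 1],
                  [0, 1, 0, 0, 1, 1, 1],
                  [0, 0, 1, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1, 1]])

    def min_distance(G):
        """Exhaustive minimum Hamming weight over all nonzero messages."""
        k, n = G.shape
        best = n
        for m in product((0, 1), repeat=k):
            if any(m):
                best = min(best, int(((np.array(m) @ G) % 2).sum()))
        return best

    print(min_distance(G))   # 3: these constraints still permit d = 3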