21 results for Frames per second

in CaltechTHESIS


Relevance: 80.00%

Abstract:

Part I.

In recent years, backscattering spectrometry has become an important tool for the analysis of thin films. An inherent limitation, though, is the loss of depth resolution due to energy straggling of the beam. To investigate this, energy straggling of ^4He has been measured in thin films of Ni, Al, Au and Pt. Straggling is roughly proportional to the square root of thickness, appears to have a slight energy dependence, and generally decreases with decreasing atomic number of the absorber. The results are compared with predictions of theory and with previous measurements. While the Ni measurements are in fair agreement with Bohr's theory, the Al measurements are 30% above and the Au measurements are 40% below predicted values. The Au and Pt measurements give straggling values which are close to one another.
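For reference, the Bohr theory referred to here gives an energy-independent estimate of the straggling variance (quoted as standard background, not necessarily the thesis's exact working expression):

Omega_B^2 = 4 pi Z_1^2 e^4 N Z_2 t

where Z_1 is the projectile atomic number (2 for ^4He), Z_2 and N are the atomic number and atomic number density of the absorber, and t is the film thickness. The proportionality of Omega_B to the square root of thickness and the decrease of straggling with decreasing absorber atomic number noted above both follow directly from this form.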

Part II.

MeV backscattering spectrometry and X-ray diffraction are used to investigate the behavior of sputter-deposited Ti-W mixed films on Si substrates. During vacuum anneals at temperatures near 700°C for several hours, the metallization layer reacts with the substrate. Backscattering analysis shows that the resulting compound layer is uniform in composition and contains Ti, W and Si. The Ti:W ratio in the compound corresponds to that of the deposited metal film. X-ray analyses with Reed and Guinier cameras reveal the presence of the ternary Ti_xW_(1-x)Si_2 compound. Its composition is unaffected by oxygen contamination during annealing, but the reaction rate is affected. The rate measured on samples with about 15% oxygen contamination after annealing is linear, of the order of 0.5 Å per second at 725°C, and depends on the crystallographic orientation of the substrate and on the dc bias during sputter-deposition of the Ti-W film.
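As a rough worked example of the quoted linear kinetics (the anneal time here is assumed purely for illustration and is not taken from the thesis): at 0.5 Å per second, a 2-hour anneal at 725°C corresponds to a silicide thickness of about 0.5 Å/s × 7200 s ≈ 3600 Å, i.e., a few tenths of a micron, consistent with the reaction proceeding over anneals lasting several hours.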

Au layers of about 1000 Å thickness were deposited onto unreacted Ti-W films on Si. When annealed at 400°C these samples underwent a color change, and SEM micrographs of the samples showed that an intricate pattern of fissures, typically 3 µm wide, had evolved. Analysis by electron microprobe revealed that Au had segregated preferentially into the fissures. This result suggests that Ti-W is not a barrier to Au-Si intermixing at 400°C.

Relevance: 80.00%

Abstract:

The influence of (i) freestream turbulence level and (ii) the injection of small amounts of a drag-reducing polymer (Polyox WSR 301) into the test model boundary layer upon the basic viscous flow about two axisymmetric bodies was investigated by the schlieren flow visualization technique. The changes in the type and occurrence of cavitation inception caused by the subsequent modifications in the viscous flow were studied. A nuclei counter using the holographic technique was built to monitor freestream nuclei populations, and a few preliminary tests investigating the consequences of different populations on cavitation inception were carried out.

Both test models were observed to have a laminar separation over their respective test Reynolds number ranges. The separation on one test model was found to be insensitive to freestream turbulence levels of up to 3.75 percent. The second model was found to be very susceptible, with its critical velocity reduced from 30 feet per second at a 0.04 percent turbulence level to 10 feet per second at a 3.75 percent turbulence level. Cavitation tests on both models at the lowest turbulence level showed that the value of the incipient cavitation number and the type of cavitation were controlled by the presence of the laminar separation. Cavitation tests on the second model at a 0.65 percent turbulence level showed no change in the inception index, but the appearance of the developed cavitation was altered.
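For context, the incipient cavitation number referred to throughout is presumably the conventional dimensionless index (stated here in its standard form, not as a definition introduced by this thesis):

sigma_i = (p_inf - p_v) / (0.5 rho U^2)

where p_inf is the freestream static pressure, p_v the vapor pressure of the liquid, rho the liquid density, and U the freestream velocity at the conditions where cavitation first appears.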

The presence of Polyox in the boundary layer resulted in a cavitation suppression comparable to that found by other investigators. The elimination of the normally occurring laminar separation on these bodies by a polymer-induced instability in the laminar boundary layer was found to be responsible for the suppression of inception.

Freestream nuclei populations at test conditions were measured, and it was found that if there were many freestream gas bubbles the normally present laminar separation was eliminated and travelling bubble type cavitation occurred; the value of the inception index then depended upon the nuclei population. In cases where the laminar separation was present it was found that the value of the inception index was insensitive to the freestream nuclei populations.

Relevance: 80.00%

Abstract:

This report presents the results of an investigation of a method of underwater propulsion. The propelling system utilizes the energy of a small mass of expanding gas to accelerate the flow of a large mass of water through an open-ended duct of proper shape and dimensions to obtain a resultant thrust. The investigation was limited to making a large number of runs on a hydroduct of arbitrary design, varying the water flow and gas flow through the device between wide limits, and measuring the net thrust caused by the introduction and expansion of the gas.

In comparison with the effective exhaust velocity of about 6,000 feet per second observed in rocket motors, this hydroduct model attained a maximum effective exhaust velocity of more than 27,000 feet per second, using nitrogen gas. Using hydrogen gas, effective exhaust velocities of 146,000 feet per second were obtained. Further investigation should prove this method of propulsion to be not only practical but also very efficient.
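A minimal sketch of how such an effective exhaust velocity is presumably defined (the notation is illustrative, not the report's): c_eff = F / m_dot, the measured net thrust divided by the mass flow rate of the injected gas. Because the hydroduct's thrust is produced largely by the accelerated water rather than by the small gas flow itself, c_eff referred to the gas can greatly exceed the roughly 6,000 feet per second typical of rocket motors, which is how the nitrogen and hydrogen figures quoted above should be read.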

This investigation was conducted at Project No. 1, Guggenheim Aeronautical Laboratory, California Institute of Technology, Pasadena, California.

Relevance: 80.00%

Abstract:

In recent years, the discovery of bulk metallic glasses with exceptional properties has generated much interest. One of their most intriguing features is their capacity for viscous flow above the glass transition temperature. This characteristic allows metallic glasses to be formed like plastics at modest temperatures. However, crystallization of supercooled metallic liquids in the best bulk metallic glass-formers is much more rapid than in most polymers and silicate glass-forming liquids. The short times to crystallization impair experimentation on and processing of supercooled glass-forming metallic liquids. A technique to rapidly and uniformly heat metallic glasses at rates of 10^5 to 10^6 kelvin per second is presented. A capacitive discharge is used to ohmically heat metallic glasses to temperatures in the supercooled liquid region on millisecond time-scales. By heating samples rapidly, the most time-consuming step in experiments on supercooled metallic liquids is reduced in length by orders of magnitude. This allows for experimentation on and processing of metallic liquids in temperature ranges that were previously inaccessible because of crystallization.
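To make the quoted heating rates concrete, the following is a minimal back-of-the-envelope sketch in Python of ohmic heating by a capacitive discharge; every numerical value is an assumption chosen only to illustrate the scaling and is not a parameter taken from the thesis.

# Illustrative estimate of Joule heating by a capacitive discharge.
# All numbers are assumed for illustration only.
capacitance = 0.01        # F, assumed capacitor bank
voltage = 100.0           # V, assumed charging voltage
stored_energy = 0.5 * capacitance * voltage**2   # J, E = (1/2) C V^2

sample_mass = 1.0e-3      # kg, assumed ~1 g metallic glass sample
specific_heat = 400.0     # J/(kg K), assumed order-of-magnitude value
discharge_time = 1.0e-3   # s, millisecond-scale pulse as described above

# Temperature rise if the stored energy is dissipated uniformly in the sample
delta_T = stored_energy / (sample_mass * specific_heat)
heating_rate = delta_T / discharge_time   # K/s

print(f"stored energy: {stored_energy:.0f} J")
print(f"temperature rise: {delta_T:.0f} K")
print(f"average heating rate: {heating_rate:.1e} K/s")

With these assumed values the estimate comes out near 10^5 K/s, the lower end of the range quoted above.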

A variety of forming techniques, including injection molding and forging, were coupled with capacitive discharge heating to produce near net-shaped metallic glass parts. In addition, a new forming technique, which combines a magnetic field with the heating current to produce a forming force, was developed. Viscosities were measured in previously inaccessible temperature ranges using parallel plate rheometry combined with capacitive discharge heating. Lastly, a rapid pulse calorimeter was developed with this technique to investigate the thermophysical behavior of metallic glasses at these rapid heating rates.

Relevance: 30.00%

Abstract:

Part I

A study of the thermal reaction of water vapor and parts-per-million concentrations of nitrogen dioxide was carried out at ambient temperature and at atmospheric pressure. Nitric oxide and nitric acid vapor were the principal products. The initial rate of disappearance of nitrogen dioxide was first order with respect to water vapor and second order with respect to nitrogen dioxide. An initial third-order rate constant of 5.5 (± 0.29) × 10^4 liter^2 mole^-2 sec^-1 was found at 25˚C. The rate of reaction decreased with increasing temperature. In the temperature range of 25˚C to 50˚C, an activation energy of -978 (± 20) calories was found.
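Written out, the initial rate law implied by the stated reaction orders is -d[NO2]/dt = k [NO2]^2 [H2O], with k ≈ 5.5 × 10^4 liter^2 mole^-2 sec^-1 at 25˚C; the negative apparent activation energy means that k decreases as the temperature is raised from 25˚C to 50˚C.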

The reaction did not go to completion. From measurements as the reaction approached equilibrium, the free energy of nitric acid vapor was calculated. This value was -18.58 (± 0.04) kilocalories at 25˚C.

The initial rate of reaction was unaffected by the presence of oxygen and was retarded by the presence of nitric oxide. There were no appreciable effects due to the surface of the reactor. Nitric oxide and nitrogen dioxide were monitored by gas chromatography during the reaction.

Part II

The air oxidation of nitric oxide, and the oxidation of nitric oxide in the presence of water vapor, were studied in a glass reactor at ambient temperatures and at atmospheric pressure. The concentration of nitric oxide was less than 100 parts-per-million. The concentration of nitrogen dioxide was monitored by gas chromatography during the reaction.

For the dry oxidation, the third-order rate constant was 1.46 (± 0.03) × 10^4 liter^2 mole^-2 sec^-1 at 25˚C. The activation energy, obtained from measurements between 25˚C and 50˚C, was -1.197 (± 0.02) kilocalories.

The presence of water vapor during the oxidation caused the formation of nitrous acid vapor from the combination of nitric oxide, nitrogen dioxide and water vapor. By measuring the difference between the concentrations of nitrogen dioxide during the wet and dry oxidations, the rate of formation of nitrous acid vapor was found. The third-order rate constant for the formation of nitrous acid vapor was equal to 1.5 (± 0.5) × 10^5 liter^2 mole^-2 sec^-1 at 40˚C. The reaction rate did not change measurably when the temperature was increased to 50˚C. The formation of nitric acid vapor was prevented by keeping the concentration of nitrogen dioxide low.
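Assuming the formation of nitrous acid vapor is first order in each of nitric oxide, nitrogen dioxide and water vapor, which is consistent with the overall third order stated above, the corresponding rate law would read: rate of formation of nitrous acid ≈ k [NO][NO2][H2O], with k ≈ 1.5 × 10^5 liter^2 mole^-2 sec^-1 at 40˚C.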

Surface effects were appreciable for the wet tests. Below 35˚C, the rate of appearance of nitrogen dioxide increased with increasing surface. Above 40˚C, the effect of surface was small.

Relevance: 30.00%

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed data storage, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
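For reference, the Ingleton inequality mentioned here is the classical constraint that every entropy vector obtainable from a linear (vector-space) construction must satisfy; in mutual-information form, for four jointly distributed random variables X1, X2, X3, X4,

I(X1; X2) <= I(X1; X2 | X3) + I(X1; X2 | X4) + I(X3; X4).

Group-characterizable entropy vectors that violate this inequality therefore correspond to dependence structures that no linear network code can realize, which is what makes the small-group constructions described above interesting.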

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
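A minimal sketch of the coherence computation described here, in Python, using the Fourier matrix of a cyclic group as a simple special case of a group Fourier matrix; the group size and row subset below are assumed for illustration and are not the constructions studied in the thesis.

import numpy as np

n = 7                      # cyclic group Z_7, assumed example size
rows = [1, 2, 4]           # assumed subset of Fourier-matrix rows
F = np.exp(-2j * np.pi * np.outer(range(n), range(n)) / n)  # n x n DFT matrix

# Frame vectors = columns of the selected rows, normalized to unit length
frame = F[rows, :]
frame = frame / np.linalg.norm(frame, axis=0, keepdims=True)

# Coherence = largest magnitude inner product between distinct unit vectors
gram = frame.conj().T @ frame
np.fill_diagonal(gram, 0.0)
coherence = np.max(np.abs(gram))

print(f"{frame.shape[0]} x {frame.shape[1]} frame, coherence = {coherence:.4f}")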

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
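As general background for the minimum-distance bounds mentioned here (this is the classical statement, not the constraint-dependent bound derived in the thesis): any [n, k] code satisfies the Singleton bound d <= n - k + 1, and Reed-Solomon codes meet it with equality, which is one reason their subcodes are natural candidates for achieving distance bounds in the constrained setting described above.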

Relevance: 20.00%

Abstract:

Among the branches of astronomy, radio astronomy is unique in that it spans the largest portion of the electromagnetic spectrum, covering roughly 10 MHz to 300 GHz. On the other hand, due to scientific priorities as well as technological limitations, radio astronomy receivers have traditionally covered only about an octave bandwidth. This approach of "one specialized receiver for one primary science goal" is, however, not only becoming too expensive for next-generation radio telescopes comprising thousands of small antennas, but is also inadequate to answer some of the scientific questions of today which require simultaneous coverage of very large bandwidths.

This thesis presents significant improvements on the state of the art of two key receiver components in pursuit of decade-bandwidth radio astronomy: 1) reflector feed antennas; 2) low-noise amplifiers on compound-semiconductor technologies. The first part of this thesis introduces the quadruple-ridged flared horn, a flexible, dual linear-polarization reflector feed antenna that achieves 5:1-7:1 frequency bandwidths while maintaining near-constant beamwidth. The horn is unique in that it is the only wideband feed antenna suitable for radio astronomy that: 1) can be designed to have nominal 10 dB beamwidth between 30 and 150 degrees; 2) requires one single-ended 50 Ohm low-noise amplifier per polarization. Design, analysis, and measurements of several quad-ridged horns are presented to demonstrate its feasibility and flexibility.

The second part of the thesis focuses on modeling and measurements of discrete high-electron mobility transistors (HEMTs) and their applications in wideband, extremely low-noise amplifiers. The transistors and microwave monolithic integrated circuit low-noise amplifiers described herein have been fabricated on two state-of-the-art HEMT processes: 1) 35 nm indium phosphide; 2) 70 nm gallium arsenide. DC and microwave performance of transistors from both processes at room and cryogenic temperatures is included, as well as the first reported measurements of detailed noise characterization of the sub-micron HEMTs at both temperatures. Design and measurements of two low-noise amplifiers covering 1-20 and 8-50 GHz fabricated on both processes are also provided, which show that the 1-20 GHz amplifier improves the state of the art in cryogenic noise and bandwidth, while the 8-50 GHz amplifier achieves noise performance only slightly worse than the best published results but does so with nearly a decade of bandwidth.

Relevance: 20.00%

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative receiving detection algorithm for the specific receiver is also proposed. For the application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at the subchannels.
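For reference, the geometric mean decomposition on which GGMD builds factors a rank-K channel matrix H with nonzero singular values sigma_1, ..., sigma_K as H = Q R P^H, where Q and P have orthonormal columns and R is upper triangular with equal diagonal entries r_ii = (sigma_1 sigma_2 ... sigma_K)^(1/K). The equal diagonal is what equalizes the subchannel SINRs in a DFE transceiver and removes the need for bit allocation; as described above, GGMD reaches the same effect through an iterated product of semi-unitary and upper triangular factors at lower complexity.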

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and the maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both cases with known LTV channels and unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
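A minimal Python sketch of the difference co-array idea referenced above: from M physical pilot positions, all pairwise differences are formed, giving up to M^2 co-pilots (M^2 - M + 1 distinct values when the nonzero differences are unique). The pilot placement below is an assumed example, not the thesis's alternating design.

import numpy as np

pilots = np.array([0, 1, 4, 9, 11])   # M = 5 assumed pilot tone indices

# All pairwise differences p_i - p_j (the i == j terms all give 0)
diff = pilots[:, None] - pilots[None, :]
co_array = np.unique(diff)

print(f"M = {len(pilots)} pilots -> {diff.size} differences "
      f"({len(co_array)} distinct co-array positions)")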

Relevance: 20.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
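A minimal Monte Carlo sketch, in Python, of the recovery-probability objective described above, under one simple access model (each node is accessed independently with a fixed probability); the model and all parameter values are assumptions for illustration, since the thesis treats several allocation and access models.

import numpy as np

rng = np.random.default_rng(0)

def recovery_probability(allocation, p, trials=100_000):
    # Probability that the total coded data seen by the collector is >= 1
    allocation = np.asarray(allocation, dtype=float)
    accessed = (rng.random((trials, allocation.size)) < p).astype(float)
    return float(np.mean(accessed @ allocation >= 1.0))

n, budget, p = 10, 2.0, 0.6
spread_max = [budget / n] * n          # symmetric: spread the budget over all nodes
spread_min = [1.0, 1.0] + [0.0] * 8    # concentrate the budget on two nodes

print("maximal spreading:", recovery_probability(spread_max, p))
print("minimal spreading:", recovery_probability(spread_min, p))

Varying the budget and access probability in this sketch is one way to explore the heuristic stated above: wide spreading (and hence coding) for large budgets, concentration on a few nodes for small ones.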

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.

Relevance: 20.00%

Abstract:

The motion of a single Brownian particle of arbitrary size through a dilute colloidal dispersion of neutrally buoyant bath spheres of another characteristic size in a Newtonian solvent is examined in two contexts. First, the particle in question, the probe particle, is subject to a constant applied external force drawing it through the suspension as a simple model for active and nonlinear microrheology. The strength of the applied external force, normalized by the restoring forces of Brownian motion, is the Péclet number, Pe. This dimensionless quantity describes how strongly the probe is upsetting the equilibrium distribution of the bath particles. The mean motion and fluctuations in the probe position are related to interpreted quantities of an effective viscosity of the suspension. These interpreted quantities are calculated to first order in the volume fraction of bath particles and are intimately tied to the spatial distribution, or microstructure, of bath particles relative to the probe. For weak Pe, the disturbance to the equilibrium microstructure is dipolar in nature, with accumulation and depletion regions on the front and rear faces of the probe, respectively. With increasing applied force, the accumulation region compresses to form a thin boundary layer whose thickness scales with the inverse of Pe. The depletion region lengthens to form a trailing wake. The magnitude of the microstructural disturbance is found to grow with increasing bath particle size -- small bath particles in the solvent resemble a continuum with effective microviscosity given by Einstein's viscosity correction for a dilute dispersion of spheres. Large bath particles readily advect toward the minimum approach distance possible between the probe and bath particle, and the probe and bath particle pair rotating as a doublet is the primary mechanism by which the probe particle is able to move past; this is a process that slows the motion of the probe by a factor of the size ratio. The intrinsic microviscosity is found to force thin at low Péclet number due to decreasing contributions from Brownian motion, and force thicken at high Péclet number due to the increasing influence of the configuration-averaged reduction in the probe's hydrodynamic self mobility. Nonmonotonicity at finite sizes is evident in the limiting high-Pe intrinsic microviscosity plateau as a function of bath-to-probe particle size ratio. The intrinsic microviscosity is found to grow with the size ratio for very small probes even at large-but-finite Péclet numbers. However, even a small repulsive interparticle potential, that excludes lubrication interactions, can reduce this intrinsic microviscosity back to an order one quantity. The results of this active microrheology study are compared to previous theoretical studies of falling-ball and towed-ball rheometry and sedimentation and diffusion in polydisperse suspensions, and the singular limit of full hydrodynamic interactions is noted.
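For reference, the Péclet number described here (applied force normalized by the Brownian restoring force) is conventionally written, up to an O(1) geometric factor, as Pe = F a / (k_B T), where F is the external force, a a characteristic particle size, k_B Boltzmann's constant, and T the absolute temperature; this is the standard active-microrheology definition and is stated as background rather than as the exact normalization adopted in the thesis.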

Second, the probe particle in question is no longer subject to a constant applied external force. Rather, the particle is considered to be a catalytically-active motor, consuming the bath reactant particles on its reactive face while passively colliding with reactant particles on its inert face. By creating an asymmetric distribution of reactant about its surface, the motor is able to diffusiophoretically propel itself with some mean velocity. The effects of the finite size of the solute on the leading-order diffusive microstructure of reactant about the motor are examined. Brownian and interparticle contributions to the motor velocity are computed for several interparticle interaction potential lengths and finite reactant-to-motor particle size ratios, with the dimensionless motor velocity increasing with decreasing motor size. A discussion of Brownian rotation frames the context in which these results could be applicable, and future directions are proposed which properly incorporate reactant advection at high motor velocities.

Relevance: 20.00%

Abstract:

Because so little is known about the structure of membrane proteins, an attempt has been made in this work to develop techniques by which to model them in three dimensions. The procedures devised rely heavily upon the availability of several sequences of a given protein. The modelling procedure is composed of two parts. The first identifies transmembrane regions within the protein sequence on the basis of hydrophobicity, β-turn potential, and the presence of certain amino acid types, specifically, proline and basic residues. The second part of the procedure arranges these transmembrane helices within the bilayer based upon the evolutionary conservation of their residues. Conserved residues are oriented toward other helices and variable residues are positioned to face the surrounding lipids. Available structural information concerning the protein's helical arrangement, including the lengths of interhelical loops, is also taken into account. Rhodopsin, band 3, and the nicotinic acetylcholine receptor have all been modelled using this methodology, and mechanisms of action could be proposed based upon the resulting structures.
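A minimal Python sketch of the kind of sliding-window hydrophobicity scan underlying the first part of the procedure; the scale values approximate the common Kyte-Doolittle hydropathy index, and the window length and threshold are conventional choices assumed here for illustration rather than the exact parameters used in this work.

# Sliding-window hydropathy scan for flagging candidate transmembrane segments.
KD = {
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
    "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
    "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5,
}

def hydropathy_windows(sequence, window=19, threshold=1.6):
    # Return (start index, mean hydropathy) for windows above the threshold
    hits = []
    for i in range(len(sequence) - window + 1):
        mean = sum(KD[res] for res in sequence[i:i + window]) / window
        if mean >= threshold:
            hits.append((i, round(mean, 2)))
    return hits

# Toy sequence: a hydrophobic stretch flanked by polar residues
seq = "MKT" + "LIVALVAGFLLAVFLLAVG" + "DQERKH"
print(hydropathy_windows(seq))

In the actual procedure described above, such a scan would be combined with beta-turn potential and the positions of proline and basic residues, and applied across the several aligned sequences of the protein.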

Specific residues in the rhodopsin and iodopsin sequences were identified, which may regulate the proteins' wavelength selectivities. A hinge-like motion of helices M3, M4, and M5 with respect to the rest of the protein was proposed to result in the activation of transducin, the G-protein associated with rhodopsin. A similar mechanism is also proposed for signal transduction by the muscarinic acetylcholine and β-adrenergic receptors.

The nicotinic acetylcholine receptor was modelled with four trans-membrane helices per subunit and with the five homologous M2 helices forming the cation channel. Putative channel-lining residues were identified and a mechanism of channel-opening based upon the concerted, tangential rotation of the M2 helices was proposed.

Band 3, the anion exchange protein found in the erythrocyte membrane, was modelled with 14 transmembrane helices. In general the pathway of anion transport can be viewed as a channel composed of six helices that contains a single hydrophobic restriction. This hydrophobic region will not allow the passage of charged species, unless they are part of an ion-pair. An arginine residue located near this restriction is proposed to be responsible for anion transport. When ion-paired with a transportable anion it rotates across the barrier and releases the anion on the other side of the membrane. A similar process returns it to its original position. This proposed mechanism, based on the three-dimensional model, can account for the passive, electroneutral, anion exchange observed for band 3. Dianions can be transported through a similar mechanism with the additional participation of a histidine residue. Both residues are located on M10.

Relevance: 20.00%

Abstract:

This thesis consists of three parts. Chapter 2 deals with the dynamic buckling behavior of steel braces under cyclic axial end displacement. Braces under such a loading condition belong to a class of "acceleration magnifying" structural components, in which a small motion at the loading points can cause large internal acceleration and inertia. This member-level inertia is frequently ignored in current studies of braces and braced structures. This chapter shows that, under certain conditions, the inclusion of the member-level inertia can lead to brace behavior fundamentally different from that predicted by the quasi-static method. This result has significance for the correct use of the quasi-static, pseudo-dynamic and static condensation methods in the simulation of braces or braced structures under dynamic loading. The strain magnitude and distribution in the braces are also studied in this chapter.

Chapter 3 examines the effect of column uplift on the earthquake response of braced steel frames and explores the feasibility of flexible column-base anchoring. It is found that fully anchored braced-bay columns can induce extremely large internal forces in the braced-bay members and their connections, thus increasing the risk of failures observed in recent earthquakes. Flexible braced-bay column anchoring can significantly reduce the braced bay member force, but at the same time also introduces large story drift and column uplift. The pounding of an uplifting column with its support can result in very high compressive axial force.

Chapter 4 conducts a comparative study on the effectiveness of a proposed non-buckling bracing system and several conventional bracing systems. The non-buckling bracing system eliminates buckling and thus can be composed of small individual braces distributed widely in a structure to reduce bracing force concentration and increase redundancy. The elimination of buckling results in a significantly more effective bracing system compared with the conventional bracing systems. Among the conventional bracing systems, bracing configurations and end conditions for the bracing members affect the effectiveness.

The studies in Chapter 3 and Chapter 4 also indicate that code-designed conventionally braced steel frames can experience unacceptably severe response under the strong ground motions recorded during the recent Northridge and Kobe earthquakes.

Relevance: 20.00%

Abstract:

Chapter I

Theories for organic donor-acceptor (DA) complexes in solution and in the solid state are reviewed and compared with the available experimental data. As shown by McConnell et al. (Proc. Natl. Acad. Sci. U.S., 53, 46-50 (1965)), the DA crystals fall into two classes, the holoionic class with a fully or almost fully ionic ground state, and the nonionic class with little or no ionic character. If the total lattice binding energy 2ε_1 (per DA pair) gained in ionizing a DA lattice exceeds the cost 2ε_o of ionizing each DA pair, i.e., if ε_1 + ε_o < 0, then the lattice is holoionic. The charge-transfer (CT) band in crystals and in solution can be explained, following Mulliken, by a second-order mixing of states, or by any theory that makes the CT transition strongly allowed and yet due to a small change in the ground state of the non-interacting components D and A (or D+ and A-). The magnetic properties of the DA crystals are discussed.

Chapter II

A computer program, EWALD, was written to calculate, by the Ewald fast-convergence method, the crystal Coulomb binding energy E_C due to classical monopole-monopole interactions for crystals of any symmetry. The precision of the E_C values obtained is high: the uncertainties, estimated by the effect on E_C of changing the Ewald convergence parameter η, ranged from ± 0.00002 eV to ± 0.01 eV in the worst case. The charge distribution for organic ions was idealized as fractional point charges localized at the crystallographic atomic positions: these charges were chosen from available theoretical and experimental estimates. The uncertainty in E_C due to different charge distribution models is typically ± 0.1 eV (± 3%): thus, even the simple Hückel model can give decent results.
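For context, the Ewald method used by EWALD accelerates the conditionally convergent lattice sum by splitting each 1/r Coulomb interaction into a short-ranged and a smooth long-ranged part; in its standard form (stated as background, not as the program's exact implementation), 1/r = erfc(G r)/r + erf(G r)/r, where the first term is summed directly in real space, the second term is summed rapidly in reciprocal space after Fourier transformation, and G is a splitting parameter playing the role of the convergence parameter η whose variation was used above to estimate the uncertainty in E_C.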

E_C for Wurster's Blue Perchlorate is -4.1 eV/molecule: the crystal is stable under the binding provided by direct Coulomb interactions. E_C for N-Methylphenazinium Tetracyanoquinodimethanide is 0.1 eV: exchange Coulomb interactions, which cannot be estimated classically, must provide the necessary binding.

EWALD was also used to test the McConnell classification of DA crystals. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:7,7,8,8-Tetracyanoquinodimethan), E_C = -4.0 eV while 2ε_o = 4.65 eV: clearly, exchange forces must provide the balance. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:para-Chloranil), E_C = -4.4 eV, while 2ε_o = 5.0 eV: again E_C falls short of 2ε_1. As a Gedankenexperiment, two nonionic crystals were assumed to be ionized: for (1:1)-(Hexamethylbenzene:para-Chloranil), E_C = -4.5 eV, 2ε_o = 6.6 eV; for (1:1)-(Naphthalene:Tetracyanoethylene), E_C = -4.3 eV, 2ε_o = 6.5 eV. Thus, exchange energies in these nonionic crystals must not exceed 1 eV.

Chapter III

A rapid-convergence quantum-mechanical formalism is derived to calculate the electronic energy of an arbitrary molecular (or molecular-ion) crystal: this provides estimates of crystal binding energies which include the exchange Coulomb interactions. Previously obtained LCAO-MO wavefunctions for the isolated molecule(s) ("unit cell spin-orbitals") provide the starting point. Bloch's theorem is used to construct "crystal spin-orbitals". Overlap between the unit cell orbitals localized in different unit cells is neglected, or is eliminated by Löwdin orthogonalization. Then simple formulas for the total kinetic energy Q^(XT)_λ, nuclear attraction [λ/λ]^XT, direct Coulomb [λλ/λ'λ']^XT and exchange Coulomb [λλ'/λ'λ]^XT integrals are obtained, and direct-space brute-force expansions in atomic wavefunctions are given. Fourier series are obtained for [λ/λ]^XT, [λλ/λ'λ']^XT, and [λλ'/λ'λ]^XT with the help of the convolution theorem; the Fourier coefficients require the evaluation of Silverstone's two-center Fourier transform integrals. If the short-range interactions are calculated by brute-force integration in direct space, and the long-range effects are summed in Fourier space, then rapid convergence is possible for [λ/λ]^XT, [λλ/λ'λ']^XT and [λλ'/λ'λ]^XT. This is achieved, as in the Ewald method, by modifying each atomic wavefunction by a "Gaussian convergence acceleration factor", and evaluating separately in direct and in Fourier space appropriate portions of [λ/λ]^XT, etc., where some of the portions contain the Gaussian factor.

Relevance: 20.00%

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, growing from patient zero to an estimated 650,000 to 900,000 Americans infected today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets that were used for the analyses. The primary database, which I collected from an outpatient HIV clinic, covers 1984 to the present. The second database is the public dataset from the Multicenter AIDS Cohort Study (MACS). The data from the MACS cover the period between 1984 and October 1992. Comparisons are made between both datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attacked a different region of HIV than reverse transcriptase inhibitors, when used in combination with other antiretroviral agents were found to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. The direct lifetime cost of treating each HIV-infected patient with HAART is estimated to be between $353,000 and $598,000, depending on how long HAART prolongs life. If one looks at the incremental cost per year of life saved, it is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.

Policymakers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance: 20.00%

Abstract:

A description is given of experimental work on the damping of a second order electron plasma wave echo due to velocity space diffusion in a low temperature magnetoplasma. Sufficient precision was obtained to verify the theoretically predicted cubic rather than quadratic or quartic dependence of the damping on exciter separation. Compared to the damping predicted for Coulomb collisions in a thermal plasma in an infinite magnetic field, the magnitude of the damping was approximately as predicted, while the velocity dependence of the damping was weaker than predicted. The discrepancy is consistent with the actual non-Maxwellian electron distribution of the plasma.

In conjunction with the damping work, echo amplitude saturation was measured as a function of the velocity of the electrons contributing to the echo. Good agreement was obtained with the predicted J_1 Bessel function amplitude dependence, as well as a demonstration that saturation did not influence the damping results.