15 results for Requirements elicitation techniques in CaltechTHESIS
Abstract:
β-lactamases are a group of enzymes that confer resistance to penam and cephem antibiotics by hydrolysis of the β-lactam ring, thereby inactivating the antibiotic. Crystallographic and computer modeling studies of RTEM-1 β-lactamase have indicated that Asp 132, a strictly conserved residue among the class A β-lactamases, appears to be involved in substrate binding, catalysis, or both. To study the contribution of residue 132 to β-lactamase function, site saturation mutagenesis was used to generate mutants coding for all 20 amino acids at position 132. Phenotypic screening of all mutants indicated that position 132 is very sensitive to amino acid changes, with only N132C, N132D, N132E, and N132Q showing any appreciable activity. Kinetic analysis of three of these mutants showed increases in K_M, along with substantial decreases in k_(cat). Efforts to trap a stable acyl-enzyme intermediate were unsuccessful. These results indicate that residue 132 is involved in substrate binding as well as catalysis, and support the involvement of this residue in acylation as suggested by Strynadka et al.
Crystallographic and computer modeling studies of RTEM-1 β-lactamase have indicated that Lys 73 and Glu 166, two strictly conserved residues among the class A β-lactamases, appear to be involved in substrate binding, catalysis, or both. To study the contribution of these residues to β-lactamase function, site saturation mutagenesis was used to generate mutants coding for all 20 amino acids at positions 73 and 166. Then all 400 possible combinations of mutants were created by combinatorial mutagenesis. The colonies harboring the mutants were screened for growth in the presence of ampicillin. The DNA of the competent colonies was sequenced, and kinetic parameters investigated. It was found that lysine is essential at position 73, and that position 166 only tolerated fairly conservative changes (aspartic acid, histidine, and tyrosine). These functional mutants exhibited decreased k_(cat) values, but K_M was close to wild-type levels. The results of the combinatorial mutagenesis experiments indicate that Lys is absolutely required for activity at position 73; no mutation at residue 166 can compensate for loss of the long side chain amine. The active mutants found (K73K/E166D, K73K/E166H, and K73K/E166Y) were studied by kinetic analysis. These results reaffirmed the function of residue 166 as important in catalysis, specifically deacylation.
The identity of the residue responsible for activating the active site serine (Ser 70) in RTEM-1 β-lactamase has been disputed for some time. Recently, analysis of a crystal structure of RTEM-1 β-lactamase with a covalently bound intermediate was published, and it was suggested that Lys 73, a strictly conserved residue among the class A β-lactamases, was acting as a general base, activating Ser 70. For this to be possible, the pK_a of Lys 73 would have to be depressed significantly. In an attempt to assay the pK_a of Lys 73, the mutation K73C was made. This mutant protein can be reacted with 2-bromoethylamine, and activity is restored to near wild-type levels. ^(15)N-2-bromoethylamine hydrobromide and ^(13)C-2-bromoethylamine hydrobromide were synthesized. Reacting these compounds with the K73C mutant gives stable isotopic enrichment at residue 73 in the form of aminoethylcysteine, a lysine homologue. The pK_a of an amine can be determined by NMR titration, following the change in chemical shift of either the ^(15)N-amine nuclei or adjacent ^(13)C nuclei as pH is changed. Unfortunately, low protein solubility, along with probable label scrambling in the ^(13)C experiment, did not permit direct observation of either the ^(15)N or ^(13)C signals. Indirect detection experiments were used to observe the protons bonded directly to the ^(13)C atoms. Two NMR signals were seen, and their chemical shift change with pH variation was noted. The peak determined to correspond to the aminoethylcysteine residue shifted from 3.2 ppm down to 2.8 ppm over a pH range of 6.6 to 12.5. The pK_a of the amine at position 73 was determined to be ~10. This indicates that residue 73 does not function as a general base in the acylation step of the reaction. However, the experimental measurement takes place in the absence of substrate. Since the enzyme undergoes conformational changes upon substrate binding, the measured pK_a of the free enzyme may not correspond to the pK_a of the enzyme-substrate complex.
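For reference, a chemical-shift titration of this kind is conventionally fitted to a single-site Henderson-Hasselbalch form to extract the pK_a; the expression below is that standard model (notation assumed here, not quoted from the abstract):

```latex
\delta_{\mathrm{obs}}(\mathrm{pH}) \;=\; \frac{\delta_{\mathrm{HA}} + \delta_{\mathrm{A}}\,10^{\,\mathrm{pH}-\mathrm{p}K_a}}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}},
```

where δ_HA and δ_A are the limiting shifts of the protonated and deprotonated amine (here roughly 3.2 and 2.8 ppm), and the inflection point of the fitted curve gives the pK_a (~10 in this case).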
Abstract:
Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, on-chip links with communication ranges of a few millimeters are discussed.
Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. I/O data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth of electrical channels. A linear and low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which require high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
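As an illustration of the feedback-summation idea behind a DFE (independent of whether the summer is realized in the current or charge domain), here is a minimal behavioral sketch; the channel pulse response and tap weights are assumptions for the example, not values from the dissertation:

```python
import numpy as np

def dfe_receive(rx, taps):
    """Slice each sample after subtracting ISI estimated from past decisions."""
    decisions = []
    for y in rx:
        past = decisions[-len(taps):]                 # most recent decisions
        isi = sum(w * d for w, d in zip(taps, reversed(past)))
        z = y - isi                                   # feedback summer output
        decisions.append(1.0 if z >= 0 else -1.0)     # slicer
    return np.array(decisions)

# toy channel with post-cursor ISI; pulse response [1.0, 0.45, 0.2] is assumed
bits = np.random.default_rng(0).choice([-1.0, 1.0], size=1000)
rx = np.convolve(bits, [1.0, 0.45, 0.2])[:len(bits)]
out = dfe_receive(rx, taps=[0.45, 0.2])
print("bit errors:", int(np.sum(out != bits)))
```

With feedback taps matched to the post-cursor samples, the subtraction removes the inter-symbol interference before the slicer, which is the role the charge-domain summer plays in the prototype receiver.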
An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area savings.
As the technology scales, the number of transistors on the chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using different length minimum-pitch on-chip wires. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/µm) with better than 136fJ/b of power efficiency.
Abstract:
A series of eight related analogs of distamycin A has been synthesized. Footprinting and affinity cleaving reveal that only two of the analogs, pyridine-2-carboxamide-netropsin (2-PyN) and 1-methylimidazole-2-carboxamide-netropsin (2-ImN), bind to DNA with a specificity different from that of the parent compound. A new class of sites, represented by a TGACT sequence, is a strong site for 2-PyN binding, and the major recognition site for 2-ImN on DNA. Both compounds recognize the G•C bp specifically, although A's and T's in the site may be interchanged without penalty. Additional A•T bp outside the binding site increase the binding affinity. The compounds bind in the minor groove of the DNA sequence, but protect both grooves from dimethylsulfate. The binding evidence suggests that 2-PyN or 2-ImN binding induces a DNA conformational change.
In order to understand this sequence-specific complexation better, the Ackers quantitative footprinting method for measuring individual site affinity constants has been extended to small molecules. MPE•Fe(II) cleavage reactions over a 10^5 range of free ligand concentrations are analyzed by gel electrophoresis. The decrease in cleavage is calculated by densitometry of a gel autoradiogram. The apparent fraction of DNA bound is then calculated from the amount of cleavage protection. The data are fitted to a theoretical curve using non-linear least squares techniques. Affinity constants at four individual sites are determined simultaneously. The distamycin A analog binds solely at A•T rich sites. Affinities range from 10^(6)-10^(7) M^(-1). The data for parent compound D fit closely to a monomeric binding curve. 2-PyN binds both A•T sites and the TGTCA site with an apparent affinity constant of 10^(5) M^(-1). 2-ImN binds A•T sites with affinities less than 5 x 10^(4) M^(-1). The affinity of 2-ImN for the TGTCA site does not change significantly from the 2-PyN value. At the TGTCA site, the experimental data fit a dimeric binding curve better than a monomeric curve. Both 2-PyN and 2-ImN have substantially lower DNA affinities than closely related compounds.
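A minimal sketch of the curve-fitting step described above, assuming simple monomeric (1:1) and dimeric (2:1) occupancy models and synthetic data; the functional forms, concentrations, and constants are illustrative assumptions, not the thesis's actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def monomer(L, logK):
    # fraction of site occupied by a single bound ligand
    K = 10.0 ** logK
    return K * L / (1.0 + K * L)

def dimer(L, logK):
    # fraction occupied assuming two ligands bind the site cooperatively (2:1)
    K = 10.0 ** logK
    return K * L**2 / (1.0 + K * L**2)

# hypothetical free-ligand concentrations (M) and apparent fraction protected
L = np.logspace(-8, -3, 12)
rng = np.random.default_rng(1)
theta_obs = dimer(L, 10.0) + rng.normal(0.0, 0.02, L.size)

for name, model in [("monomeric", monomer), ("dimeric", dimer)]:
    popt, _ = curve_fit(model, L, theta_obs, p0=[7.0])
    ssr = np.sum((theta_obs - model(L, *popt)) ** 2)
    print(f"{name} fit: log10 K = {popt[0]:.2f}, SSR = {ssr:.4f}")
```

Comparing the residual sums of squares of the two fits is one way to decide, as in the TGTCA-site data, whether a dimeric curve describes the protection data better than a monomeric one.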
In order to probe the requirements of this new binding site, fourteen other derivatives have been synthesized and tested. All compounds that recognize the TGTCA site have a heterocyclic aromatic nitrogen ortho to the N- or C-terminal amide of the netropsin subunit. Specificity is strongly affected by the overall length of the small molecule. Only compounds that consist of at least three aromatic rings linked by amides exhibit TGTCA site binding. Specificity is only weakly altered by substitution on the pyridine ring, which correlates best with steric factors. A model is proposed for TGTCA site binding that has as its key feature hydrogen bonding to both G's by the small molecule. The specificity is determined by the sequence dependence of the distance between G's.
One derivative of 2-PyN exhibits pH-dependent sequence specificity. At low pH, 4-dimethylaminopyridine-2-carboxamide-netropsin binds tightly to A•T sites. At high pH, 4-Me_(2)NPyN binds most tightly to the TGTCA site. In aqueous solution, this compound protonates at the pyridine nitrogen at pH 6. Thus, the presence of the protonated form correlates with A•T specificity.
The binding site of a class of eukaryotic transcriptional activators typified by the yeast protein GCN4 and the mammalian oncogene Jun contains a strong 2-ImN binding site. Specificity requirements for the protein and small molecule are similar. GCN4 and 2-ImN bind simultaneously to the same binding site. GCN4 alters the cleavage pattern of the 2-ImN-EDTA derivative at only one of its binding sites. The details of the interaction suggest that GCN4 alters the conformation of an AAAAAAA sequence adjacent to its binding site. The presence of a yeast counterpart to Jun partially blocks 2-ImN binding. The differences do not appear to be caused by direct interactions between 2-ImN and the proteins, but by induced conformational changes in the DNA-protein complex. It is likely that the observed differences in complexation are involved in the varying sequence specificity of these proteins.
Abstract:
The study of the strength of a material is relevant to a variety of applications including automobile collisions, armor penetration, and inertial confinement fusion. Although the dynamic behavior of materials at high pressures and strain rates has been studied extensively using plate impact experiments, such experiments provide measurements in one direction only, and material behavior that depends on strength is unaccounted for. The research in this study proposes two novel configurations to mitigate this problem.
The first configuration introduced is the oblique wedge experiment, which consists of a driver material, an angled target of interest, and a backing material used to measure in-situ velocities. Upon impact, a shock wave is generated in the driver material. As the shock encounters the angled target, it is reflected back into the driver and transmitted into the target. Due to the angle of obliquity of the incident wave, a transverse wave is generated that allows the target to be subjected to shear while being compressed by the initial longitudinal shock, such that the material does not slip. Using numerical simulations, this study shows that a variety of oblique wedge configurations can be used to study the shear response of materials, and this can be extended to strength measurement as well. Experiments were performed on an oblique wedge setup with a copper impactor, polymethylmethacrylate driver, aluminum 6061-T6 target, and a lithium fluoride window. Particle velocities were measured using laser interferometry and the results agree well with the simulations.
The second novel configuration is the y-cut quartz sandwich design, which uses the anisotropic properties of y-cut quartz to generate a shear wave that is transmitted into a thin sample. By using an anvil material to back the thin sample, particle velocities measured at the rear surface of the backing plate can be used to calculate the shear stress in the material and subsequently the strength. Numerical simulations were conducted to show that this configuration has the ability to measure the strength for a variety of materials.
Abstract:
This thesis reports on a method to improve in vitro diagnostic assays that detect immune response, with specific application to HIV-1. The inherent polyclonal diversity of the humoral immune response was addressed by using sequential in situ click chemistry to develop a cocktail of peptide-based capture agents, the components of which were raised against different, representative anti-HIV antibodies that bind to a conserved epitope of the HIV-1 envelope protein gp41. The cocktail was used to detect anti-HIV-1 antibodies from a panel of sera collected from HIV-positive patients, with improved signal-to-noise ratio relative to the gold standard commercial recombinant protein antigen. The capture agents were stable when stored as a powder for two months at temperatures close to 60°C.
Abstract:
Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity with humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability. Correctly inferring a robot's progress toward a high-level goal can be challenging.
This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to have a finite abstraction as a Partially Observable Markov Decision Process (POMDP), which is a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.
This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria with respect to satisfaction of LTL formulas. First, the maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic computations of the gradients are derived, which allows the use of first-order optimization techniques.
The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through the novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
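For reference, a standard statement of the Poisson equation for a finite, ergodic Markov chain with transition matrix P, reward vector r, and stationary distribution π (notation assumed here; the thesis's constrained formulation is not reproduced):

```latex
(I - P)\,h \;=\; r - \eta\,\mathbf{1}, \qquad \eta = \pi^{\top} r,
```

where η is the long-run average reward and h is the bias (relative value) vector, determined up to an additive constant. Relations of this kind connect long-run averages to transient behavior, which is the role the Poisson equation plays in the constrained formulation above.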
The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.
Abstract:
The concept of a carbon nanotube microneedle array is explored in this thesis from multiple perspectives including microneedle fabrication, physical aspects of transdermal delivery, and in vivo transdermal drug delivery experiments. Starting with standard techniques in carbon nanotube (CNT) fabrication, including catalyst patterning and chemical vapor deposition, vertically-aligned carbon nanotubes are utilized as a scaffold to define the shape of the hollow microneedle. Passive, scalable techniques based on capillary action and unique photolithographic methods are utilized to produce a CNT-polymer composite microneedle. Specific examples of CNT-polyimide and CNT-epoxy microneedles are investigated. Further analysis of the transport properties of polymer resins reveals general requirements for applying arbitrary polymers to the fabrication process.
The bottom-up fabrication approach embodied by vertically-aligned carbon nanotubes allows for more direct construction of complex high-aspect ratio features than standard top-down fabrication approaches, making microneedles an ideal application for CNTs. However, current vertically-aligned CNT fabrication techniques only allow for the production of extruded geometries with a constant cross-sectional area, such as cylinders. To rectify this limitation, isotropic oxygen etching is introduced as a novel fabrication technique to create true 3D CNT geometry. Oxygen etching is utilized to create a conical geometry from a cylindrical CNT structure as well as create complex shape transformations in other CNT geometries.
CNT-polymer composite microneedles are anchored onto a common polymer base less than 50 µm thick, which allows for the microneedles to be incorporated into multiple drug delivery platforms, including modified hypodermic syringes and silicone skin patches. Cylindrical microneedles are fabricated with 100 µm outer diameter and height of 200-250 µm with a central cavity, or lumen, diameter of 30 µm to facilitate liquid drug flow. In vitro delivery experiments in swine skin demonstrate the ability of the microneedles to successfully penetrate the skin and deliver aqueous solutions.
An in vivo study was performed to assess the ability of the CNT-polymer microneedles to deliver drugs transdermally. CNT-polymer microneedles are attached to a hand-actuated silicone skin patch that holds a liquid reservoir of drugs. Fentanyl, a potent analgesic, was administered to New Zealand White rabbits through three routes of delivery: topical patch, CNT-polymer microneedles, and subcutaneous hypodermic injection. Results demonstrate that the CNT-polymer microneedles have a similar onset of action to the topical patch. The CNT-polymer microneedles were also found to be a painless delivery approach compared to hypodermic injection. Comparative analysis with contemporary microneedle designs demonstrates that the delivery achieved through CNT-polymer microneedles is akin to current hollow microneedle architectures. The inherent advantage of applying a bottom-up fabrication approach alongside similar delivery performance to contemporary microneedle designs demonstrates that the CNT-polymer composite microneedle is a viable architecture in the emerging field of painless transdermal delivery.
Abstract:
An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy loss -- residual energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ~ 3 and ~0.2 AMU for Z ~ 26. Contributions to the mass resolution due to uncertainties in measuring the path-length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ~ 3) and ~0.3 AMU (Z ~ 26).
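One common way to see how such independent error sources combine (a standard quadrature sum, assumed here; the thesis's detailed resolution analysis is not reproduced) is

```latex
\sigma_m \;=\; \sqrt{\sigma_{\mathrm{Landau}}^{2} + \sigma_{\mathrm{trajectory}}^{2} + \sigma_{\Delta E}^{2}},
```

so the Landau term alone sets the floor quoted above, and the path-length and energy-loss measurement uncertainties raise the total toward the larger figures.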
A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
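A standard steady-state leaky-box balance of the kind this formalism builds on (ignoring radioactive decay; notation assumed, not quoted from the thesis) equates escape and destruction losses to source and fragmentation gains for each isotope i:

```latex
\frac{N_i}{\Lambda_{\mathrm{esc}}} + \frac{N_i}{\Lambda_i} \;=\; Q_i + \sum_{j>i} \frac{N_j}{\Lambda_{j\to i}},
```

where N_i is the equilibrium interstellar abundance, Λ_esc the escape mean free path, Λ_i the destruction path length, Q_i the source term, and Λ_{j→i} the path length for fragmentation of species j into i. Setting Q_i = 0 for the purely secondary tracer isotope fixes the propagation terms, which can then be used to separate the secondary contribution from the other isotopes of the same element and so infer their source ratios.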
The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.
These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.
Abstract:
With continuing advances in CMOS technology, feature sizes of modern silicon chip-sets have gone down drastically over the past decade. In addition to desktop and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels leading to billions of transistors co-existing on a single chip, it also makes these silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly due to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF or millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high-performance RF/mm-wave silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting-edge processes are geared towards digital system implementation, and as such there is little model-to-hardware correlation at RF frequencies.
All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error based silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique which attempts to counter the detrimental effects of these variations, thereby improving both performance and yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate it back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To effectively demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.
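A behavioral sketch of the sense/actuate loop described above, written as a simple greedy search over a digital actuator code; all names and the toy performance function are assumptions for illustration, not the on-chip algorithm itself:

```python
def self_heal(sense, actuate, setting, lo, hi, max_iter=50):
    """Greedy hill-climb over an actuator code to maximize a sensed metric."""
    for _ in range(max_iter):
        actuate(setting)
        best, best_perf = setting, sense()
        for cand in (setting - 1, setting + 1):     # neighboring codes
            if lo <= cand <= hi:
                actuate(cand)
                perf = sense()
                if perf > best_perf:
                    best, best_perf = cand, perf
        if best == setting:                         # local optimum: stop
            break
        setting = best
    actuate(setting)
    return setting

# toy example: sensed performance peaks at actuator code 17
state = {"code": 0}
actuate = lambda c: state.update(code=c)
sense = lambda: -(state["code"] - 17) ** 2
print(self_heal(sense, actuate, setting=0, lo=0, hi=31))    # -> 17
```

The same sense, decide, actuate pattern applies whether the actuator is a bias current, a matching-network switch, or a phase code, which is why the approach generalizes across the transmitter, amplifier, and receiver examples below.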
We demonstrate a high-power mm-wave segmented power mixer array-based transmitter architecture that is capable of generating high-speed and non-constant envelope modulations at higher efficiencies compared to existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully-integrated self-healing in the context of another mm-wave power amplifier, where measurements were performed across several chips, showing significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.
Abstract:
Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects which employ optical channels have negligible frequency-dependent loss and provide a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on the generation and distribution of multi-phase, high-speed, and high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques have encountered several design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique to address these design challenges for gigahertz clocking. However, its small locking range has been a major obstacle preventing its widespread adoption.
In the first part of this dissertation we describe a wideband injection-locking scheme in an LC oscillator. Phase-locked loop (PLL) and injection-locking elements are combined symbiotically to achieve a wide locking range while retaining the simplicity of the latter. This method does not require a phase frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for the new locking range is derived. A locking range of 13.4 GHz–17.2 GHz (25%) and an average jitter tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded-clock receivers.
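For context, the classical locking range of an LC oscillator under weak injection (Adler's result, valid for I_inj ≪ I_osc; notation assumed here) is

```latex
\omega_L \;\approx\; \frac{\omega_0}{2Q}\cdot\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}},
```

where ω_0 is the tank resonance frequency, Q its quality factor, and I_inj/I_osc the ratio of injected to oscillator current. This range is narrow for a high-Q tank, which motivates the combined PLL/injection approach described above; the wider locking-range expression derived in the dissertation is not reproduced here.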
To improve the locking range of an injection-locked ring oscillator, a quadrature-locked loop (QLL) is introduced. The inherent dynamics of the injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7-7.4GHz) to 90% (4-11GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter-rate. The QLL drives an injection-locked oscillator (ILO) at each channel without any repeaters for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps to achieve a reliable data rate of 16-32Gb/s, and adaptive body biasing aids in maintaining an ultra-low power consumption of 153pJ/bit.
From the optical receiver we move on to discussing a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, to enable low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modelling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimum for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. The equalization technique is used to achieve a data rate of 20Gb/s with a power efficiency of 0.77pJ/bit.
Abstract:
1. The effect of 2,2’-bis-[α-(trimethylammonium)methyl]azobenzene (2BQ), a photoisomerizable competitive antagonist, was studied at the nicotinic acetylcholine receptor of Electrophorus electroplaques using voltage-jump and light-flash techniques.
2. 2BQ, at concentrations below 3 μM, reduced the amplitude of voltage-jump relaxations but had little effect on the voltage-jump relaxation time constants under all experimental conditions. At higher concentrations and voltages more negative than -150 mV, 2BQ caused significant open-channel blockade.
3. Dose-ratio studies showed that the cis and trans isomers of 2BQ have equilibrium binding constants (K_i) of 0.33 and 1.0 μM, respectively. The binding constants determined for both isomers are independent of temperature, voltage, agonist concentration, and the nature of the agonist.
4. In a solution of predominantly cis-2BQ, visible-light flashes led to a net cis→trans isomerization and caused an increase in the agonist-induced current. This increase had at least two exponential components; the larger amplitude component had the same time constant as a subsequent voltage-jump relaxation; the smaller amplitude component was investigated using ultraviolet light flashes.
5. In a solution of predominantly trans-2BQ, UV-light flashes led to a net trans→cis isomerization and caused a net decrease in the agonist-induced current. This effect had at least two exponential components. The smaller and faster component was an increase in agonist-induced current and had a similar time constant to the voltage-jump relaxation. The larger component was a slow decrease in the agonist-induced current with a rate constant approximately an order of magnitude less than that of the voltage-jump relaxation. This slow component provided a measure of the rate constant for dissociation of cis-2BQ (k_(off) = 60 s^(-1) at 20°C). Simple modelling of the slope of the dose-rate curves yields an association rate constant of 1.6 x 10^(8) M^(-1)s^(-1). This agrees with the association rate constant of 1.8 x 10^(8) M^(-1)s^(-1) estimated from the binding constant (K_i); a worked form of this estimate is given in the note after point 6. The Q_(10) of the dissociation rate constant of cis-2BQ was 3.3 between 6° and 20°C. The rate constants for association and dissociation of cis-2BQ at receptors are independent of voltage, agonist concentration, and the nature of the agonist.
6. We have measured the molecular rate constants of a competitive antagonist which has roughly the same K_i as d-tubocurarine but interacts more slowly with the receptor. This leads to the conclusion that curare itself has an association rate constant of 4 x 10^(9) M^(-1)s^(-1), or roughly as fast as possible for an encounter-limited reaction.
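A brief note on the numbers above (a worked consistency check, not part of the original abstract): the dose-ratio analysis of point 3 conventionally rests on the Gaddum/Schild relation DR = 1 + [B]/K_i for a competitive antagonist at concentration [B], and the association rate constant quoted in point 5 follows from simple bimolecular binding, K_i = k_(off)/k_(on):

```latex
k_{\mathrm{on}} \;=\; \frac{k_{\mathrm{off}}}{K_i} \;=\; \frac{60\ \mathrm{s^{-1}}}{0.33\times10^{-6}\ \mathrm{M}} \;\approx\; 1.8\times10^{8}\ \mathrm{M^{-1}\,s^{-1}},
```

in good agreement with the 1.6 x 10^(8) M^(-1)s^(-1) value obtained from the dose-rate slopes.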
Abstract:
Morphogenesis is a phenomenon of intricate balance and dynamic interplay between processes occurring at a wide range of scales (spatial, temporal and energetic). During development, a variety of physical mechanisms are employed by tissues to simultaneously pattern, move, and differentiate based on information exchange between constituent cells, perhaps more than at any other time during an organism's life. To fully understand such events, a combined theoretical and experimental framework is required to assist in deciphering the correlations at both structural and functional levels at scales that include the intracellular and tissue levels as well as organs and organ systems. Microscopy, especially diffraction-limited light microscopy, has emerged as a central tool to capture the spatio-temporal context of life processes. Imaging has the unique advantage of watching biological events as they unfold over time at single-cell resolution in the intact animal. In this work I present a range of problems in morphogenesis, each unique in its requirements for novel quantitative imaging, both in terms of the technique and the analysis. Understanding the molecular basis for a developmental process involves investigating how genes and their products (mRNA and proteins) function in the context of a cell. Structural information holds the key to insights into mechanisms, and imaging fixed specimens provides a first step towards deciphering gene function. The work presented in this thesis starts with the demonstration that the fluorescent signal from the challenging environment of whole-mount imaging, obtained by in situ hybridization chain reaction (HCR), scales linearly with the number of copies of target mRNA to provide quantitative sub-cellular mapping of mRNA expression within intact vertebrate embryos. The work then progresses to address aspects of imaging live embryonic development in a number of species. While processes such as avian cartilage growth require high spatial resolution and lower time resolution, dynamic events during zebrafish somitogenesis require higher time resolution to capture the protein localization as the somites mature. The requirements on imaging are even more stringent in the case of the embryonic zebrafish heart, which beats with a frequency of ~2-2.5 Hz, thereby requiring very fast imaging techniques based on a two-photon light-sheet microscope to capture its dynamics. In each of these cases, ranging from the level of molecules to organs, an imaging framework is developed, both in terms of technique and analysis, to allow quantitative assessment of the process in vivo. Overall, the work presented in this thesis combines new quantitative tools with novel microscopy for the precise understanding of processes in embryonic development.
Abstract:
This work deals with two related areas: processing of visual information in the central nervous system, and the application of computer systems to research in neurophysiology.
Certain classes of interneurons in the brain and optic lobes of the blowfly Calliphora phaenicia were previously shown to be sensitive to the direction of motion of visual stimuli. These units were identified by visual field, preferred direction of motion, and the anatomical location from which they were recorded. The present work is addressed to the questions: (1) is there interaction between pairs of these units, and (2) if such relationships can be found, what is their nature? To answer these questions, it is essential to record from two or more units simultaneously, and to use more than a single recording electrode if recording points are to be chosen independently. Accordingly, such techniques were developed and are described.
One must also have practical, convenient means for analyzing the large volumes of data so obtained. It is shown that use of an appropriately designed computer system is a profitable approach to this problem. Both hardware and software requirements for a suitable system are discussed and an approach to computer-aided data analysis is developed. A description is given of members of a collection of application programs developed for analysis of neurophysiological data and operated in the environment of and with support from an appropriate computer system. In particular, techniques developed for classification of multiple units recorded on the same electrode are illustrated, as are methods for convenient graphical manipulation of data via a computer-driven display.
By means of multiple-electrode techniques and the computer-aided data acquisition and analysis system, the path followed by one of the motion detection units was traced from one optic lobe through the brain and into the opposite lobe. It is further shown that this unit and its mirror image in the opposite lobe have a mutually inhibitory relationship. This relationship is investigated. The existence of interaction between other pairs of units is also shown. For pairs of units responding to motion in the same direction, the relationship is of an excitatory nature; for those responding to motion in opposed directions, it is inhibitory.
Experience gained from use of the computer system is discussed and a critical review of the current system is given. The most useful features of the system were found to be the fast response, the ability to go from one analysis technique to another rapidly and conveniently, and the interactive nature of the display system. The shortcomings of the system were problems in real-time use and the programming barrier: the fact that building new analysis techniques requires a high degree of programming knowledge and skill. It is concluded that computer systems of the kind discussed will play an increasingly important role in studies of the central nervous system.
Abstract:
The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.
The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
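For reference, the Ingleton inequality mentioned above can be written in entropy form for four jointly distributed random variables as

```latex
I(X_1;X_2) \;\le\; I(X_1;X_2\mid X_3) + I(X_1;X_2\mid X_4) + I(X_3;X_4),
```

which every entropy vector arising from the random variables of a linear network code must satisfy; exhibiting group-characterizable vectors that violate it is what opens the door to codes that could outperform linear coding.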
The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
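A minimal sketch of the coherence criterion discussed above: the mutual coherence of a frame is the largest magnitude inner product between distinct unit-norm frame vectors, and the Welch bound gives the smallest coherence any frame of that size can achieve. The random frame below is a stand-in for illustration, not a group code from the thesis:

```python
import numpy as np

def coherence(F):
    """F: d x n matrix whose columns are frame vectors."""
    F = F / np.linalg.norm(F, axis=0)      # normalize columns to unit norm
    G = np.abs(F.conj().T @ F)             # magnitudes of all inner products
    np.fill_diagonal(G, 0.0)               # ignore self inner products
    return G.max()

rng = np.random.default_rng(0)
F = rng.standard_normal((8, 20))           # 20 vectors in R^8 (illustrative)
print("coherence:", coherence(F))

# Welch lower bound for n unit vectors in dimension d
d, n = F.shape
print("Welch bound:", np.sqrt((n - d) / (d * (n - 1))))
```

Structured constructions such as the group-code frames described above aim to push the coherence toward this bound, which a random frame typically does not reach.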
The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
Abstract:
In this thesis we are concerned with finding representations of the algebra of SU(3) vector and axial-vector charge densities at infinite momentum (the "current algebra") to describe the mesons, idealizing the real continua of multiparticle states as a series of discrete resonances of zero width. Such representations would describe the masses and quantum numbers of the mesons, the shapes of their Regge trajectories, their electromagnetic and weak form factors, and (approximately, through the PCAC hypothesis) pion emission or absorption amplitudes.
We assume that the mesons have internal degrees of freedom equivalent to being made of two quarks (one an antiquark) and look for models in which the mass is SU(3)-independent and the current is a sum of contributions from the individual quarks. Requiring that the current algebra, as well as conditions of relativistic invariance, be satisfied turns out to be very restrictive, and, in fact, no model has been found which satisfies all requirements and gives a reasonable mass spectrum. We show that using more general mass and current operators but keeping the same internal degrees of freedom will not make the problem any more solvable. In particular, in order for any two-quark solution to exist it must be possible to solve the "factorized SU(2) problem," in which the currents are isospin currents and are carried by only one of the component quarks (as in the K meson and its excited states).
In the free-quark model the currents at infinite momentum are found using a manifestly covariant formalism and are shown to satisfy the current algebra, but the mass spectrum is unrealistic. We then consider a pair of quarks bound by a potential, finding the current as a power series in 1/m, where m is the quark mass. Here it is found impossible to satisfy the algebra and relativistic invariance with the type of potential tried, because the current contributions from the two quarks do not commute with each other to order 1/m^3. However, it may be possible to solve the factorized SU(2) problem with this model.
The factorized problem can be solved exactly in the case where all mesons have the same mass, using a covariant formulation in terms of an internal Lorentz group. For a more realistic, nondegenerate mass there is difficulty in covariantly solving even the factorized problem; one model is described which almost works but appears to require particles of spacelike 4-momentum, which seem unphysical.
Although the search for a completely satisfactory model has been unsuccessful, the techniques used here might eventually reveal a working model. There is also a possibility of satisfying a weaker form of the current algebra with existing models.