6 results for incremental EM
in CaltechTHESIS
Abstract:
This thesis presents methods for incrementally constructing controllers in the presence of uncertainty and nonlinear dynamics. The basic setting is motion planning subject to temporal logic specifications. Broadly, two categories of problems are treated. The first is reactive formal synthesis when so-called discrete abstractions are available. The fragment of linear-time temporal logic (LTL) known as GR(1) is used to express assumptions about an adversarial environment and requirements of the controller. Two problems of changes to a specification are posed that concern the two major aspects of GR(1): safety and liveness. Algorithms providing incremental updates to strategies are presented as solutions. In support of these, an annotation of strategies is developed that facilitates repeated modifications. A variety of properties are proven about it, including the necessity of its existence and its sufficiency for a strategy to be winning. The second category of problems considered is non-reactive (open-loop) synthesis in the absence of a discrete abstraction. Instead, the presented stochastic optimization methods directly construct a control input sequence that achieves low cost and satisfies an LTL formula. Several relaxations are considered as heuristics to address the rarity of sampling trajectories that satisfy an LTL formula and are demonstrated to improve convergence rates for a Dubins car and single integrators subject to a recurrence task.
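To make the second category concrete, the following is a minimal sketch of one possible stochastic search over open-loop steering sequences for a Dubins car with a recurrence task ("revisit the goal region"), approximated over a finite horizon. The cross-entropy-style elite selection, the cost weights, and all function names are illustrative assumptions; the abstract does not name the specific optimization method or relaxations used.

    import numpy as np

    def simulate_dubins(u_seq, x0=(0.0, 0.0, 0.0), v=1.0, dt=0.1):
        """Roll out a constant-speed Dubins car under a steering sequence."""
        x, y, th = x0
        traj = [(x, y)]
        for u in u_seq:
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            th += u * dt
            traj.append((x, y))
        return np.array(traj)

    def count_visits(traj, center, radius):
        """Number of separate passes the trajectory makes through a disc region."""
        inside = np.flatnonzero(np.linalg.norm(traj - np.asarray(center), axis=1) < radius)
        return 0 if inside.size == 0 else 1 + int(np.sum(np.diff(inside) > 1))

    def cost(u_seq, goal=(3.0, 0.0), radius=0.5, min_visits=2):
        """Control effort plus a large penalty when the finite-horizon proxy of
        the recurrence task (revisit the goal region) is not met."""
        visits = count_visits(simulate_dubins(u_seq), goal, radius)
        return float(np.sum(np.square(u_seq))) + 1e3 * max(0, min_visits - visits)

    def optimize(horizon=80, samples=200, elites=20, iters=30, seed=0):
        """Cross-entropy-style search over piecewise-constant steering inputs."""
        rng = np.random.default_rng(seed)
        mu, sigma = np.zeros(horizon), np.ones(horizon)
        for _ in range(iters):
            U = rng.normal(mu, sigma, size=(samples, horizon))
            elite = U[np.argsort([cost(u) for u in U])[:elites]]
            mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
        return mu, cost(mu)

    if __name__ == "__main__":
        u_best, c_best = optimize()
        print("best cost found:", c_best)

The heavy penalty term plays the role of the relaxations mentioned above: unshaped sampling almost never produces a trajectory satisfying the specification, which is exactly the rarity the heuristics are meant to address.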
Abstract:
DNA recognition is an essential biological process responsible for the regulation of cellular functions, including protein synthesis and cell division, and is implicated in the mechanism of action of some anticancer drugs. Studies directed toward defining the elements responsible for sequence-specific DNA recognition, based on the interactions of synthetic organic ligands with DNA, are described.
DNA recognition by poly-N-methylpyrrolecarboxamides was studied through the synthesis and characterization of a series of molecules in which the number of contiguous N-methylpyrrolecarboxamide units was increased from 2 to 9. The effect of this incremental change in structure on DNA recognition was investigated at base-pair resolution using affinity cleaving and MPE•Fe(II) footprinting techniques. These studies led to a quantitative relationship between the number of amides in the molecule and the DNA binding site size. This relationship, called the n + 1 rule, states that a poly-N-methylpyrrolecarboxamide molecule with n amides will bind n + 1 base pairs of DNA. This rule is consistent with a model in which the carboxamides of these compounds form three-center bridging hydrogen bonds between adjacent base pairs on opposite strands of the helix. The poly-N-methylpyrrolecarboxamide recognition element was found to bind preferentially to poly dA•poly dT stretches; however, both binding site selection and orientation were found to be affected by flanking sequences. Cleavage of large DNA is also described.
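Restated as a formula, the rule above reads

    \text{binding site size (bp)} = n + 1,

where n is the number of carboxamides; for example, a molecule with 9 amides would span 10 base pairs.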
One approach to the design of molecules that bind large sequences of double-helical DNA sequence-specifically is to couple DNA-binding subunits of similar or diverse base-pair specificity. Bis-EDTA-distamycin-fumaramide (BEDF) is an octaamide dimer of two tri-N-methylpyrrolecarboxamide subunits linked by fumaramide. DNA recognition by BEDF was compared to that by P7E, an octaamide molecule containing seven consecutive pyrroles. These two compounds were found to recognize the same sites on pBR322 with approximately the same affinities, demonstrating that fumaramide is an effective linking element for N-methylpyrrolecarboxamide recognition subunits. Further studies involved the synthesis and characterization of a trimer of tetra-N-methylpyrrolecarboxamide subunits linked by β-alanine, (P4)₃E. This trimerization produced a molecule capable of recognizing 16 base pairs of A•T DNA, more than a turn and a half of the DNA helix.
DNA footprinting is a powerful direct method for determining the binding sites of proteins and small molecules on heterogeneous DNA. It was found that attachment of EDTA•Fe(II) to spermine creates a molecule, SE•Fe(II), that binds and cleaves DNA sequence-neutrally. This lack of specificity provides evidence that, at the nucleotide level, polyamines recognize heterogeneous DNA independently of sequence, and it allows SE•Fe(II) to be used as a footprinting reagent. SE•Fe(II) was compared with two other small-molecule footprinting reagents, EDTA•Fe(II) and MPE•Fe(II).
Abstract:
The epidemic of HIV/AIDS in the United States is constantly changing and evolving, growing from patient zero to an estimated 650,000 to 900,000 Americans now infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, to the present era of highly active antiretroviral therapy (HAART). Using statistical analysis of clinical data, this paper examines where we were, where we are, and projections as to where treatment of HIV/AIDS is headed.
Chapter Two describes the datasets that were used for the analyses. The primary database was one I collected from an outpatient HIV clinic; its records span 1984 to the present. The second database was the public dataset of the Multicenter AIDS Cohort Study (MACS), which covers the period from 1984 to October 1992. Comparisons are made between the two datasets.
Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian; thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even patients with no outward symptoms of HIV infection exhibit high levels of immunosuppression.
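The kind of distributional check described here can be illustrated with standard statistical tools. The sketch below uses simulated CD4 counts and a Shapiro-Wilk test purely as an example; the data, stage parameters, and choice of tests are placeholders, not the analyses actually performed in the thesis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical CD4 T cell counts (cells/mm^3) by disease stage; real input
    # would come from the clinic and MACS datasets described in Chapter Two.
    cd4 = {
        "uninfected":   rng.normal(1000, 250, 300).clip(min=0),
        "asymptomatic": rng.lognormal(6.3, 0.5, 300),  # right-skewed, non-Gaussian
        "symptomatic":  rng.lognormal(5.7, 0.6, 300),
        "AIDS":         rng.lognormal(4.8, 0.7, 300),
    }

    for stage, counts in cd4.items():
        w, p = stats.shapiro(counts)  # Shapiro-Wilk test of normality
        verdict = "non-Gaussian" if p < 0.05 else "Gaussian not rejected"
        print(f"{stage:>12}: W = {w:.3f}, p = {p:.2e} -> {verdict}")

    # Is the asymptomatic distribution shifted below the uninfected one?
    u, p = stats.mannwhitneyu(cd4["asymptomatic"], cd4["uninfected"], alternative="less")
    print(f"asymptomatic < uninfected: U = {u:.0f}, p = {p:.2e}")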
Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono- or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test the efficacy of an antiretroviral regimen. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.
In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was the presence of an AIDS-defining illness. A high level of clinical failure, or progression to an endpoint, was found.
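This is a standard time-to-event analysis; a minimal sketch using a Kaplan-Meier estimator (via the lifelines package) is shown below. The cohort data are fabricated placeholders, and the abstract does not state which survival estimator was actually used.

    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(2)

    # Placeholder cohort: follow-up time in months on HAART and whether an
    # AIDS-defining illness (the primary endpoint) was observed.
    n = 213
    durations = rng.exponential(scale=36.0, size=n)  # months of observation
    events = rng.random(n) < 0.4                      # endpoint reached?

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=events, label="long-term HAART")

    # Estimated probability of remaining free of an AIDS-defining illness.
    print(kmf.survival_function_.tail())
    print("median time to endpoint (months):", kmf.median_survival_time_)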
Chapter Six yields insights into where we are going. New technology, such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, where the state of HIV is going. It first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.
The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. The direct lifetime cost of treating each HIV-infected patient with HAART is estimated to be between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved is only $101,000, comparable to the incremental cost per year of life saved by coronary artery bypass surgery.
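The "incremental cost per year of life saved" is an incremental cost-effectiveness ratio (ICER). The sketch below shows only the arithmetic, with clearly hypothetical inputs rather than the thesis's estimates.

    def icer(cost_new, cost_old, life_years_new, life_years_old):
        """Incremental cost-effectiveness ratio: extra cost per life-year gained."""
        return (cost_new - cost_old) / (life_years_new - life_years_old)

    # Purely hypothetical inputs (not the thesis estimates): HAART lifetime cost
    # of $450,000 versus $50,000 without HAART, and 4 additional years of life.
    print(f"${icer(450_000, 50_000, 4.0, 0.0):,.0f} per life-year saved")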
Policymakers need to be aware that although HAART can delay disease progression, it is not a cure, and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As the patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.
Abstract:
When studying physical systems, it is common to make approximations: that the contact interaction is linear, that the crystal is periodic, that variations occur slowly, that the mass of a particle is constant with velocity, or that the position of a particle is exactly known, to name just a few. These approximations help us simplify complex systems to make them more comprehensible while still demonstrating interesting physics. But what happens when these assumptions break down? This question becomes particularly interesting in the materials science community for designing new materials structures with exotic properties. In this thesis, we study the mechanical response and dynamics of granular crystals, in which the approximations of linearity and infinite size break down. The system is inherently finite, and the contact interaction can be tuned to access different nonlinear regimes. When the assumptions of linearity and perfect periodicity are no longer valid, a host of interesting physical phenomena presents itself. The advantage of using a granular crystal lies in its experimental feasibility and its similarity to many other materials systems. This allows us to leverage past experience from the condensed matter physics and materials science communities while also presenting results with implications beyond the narrower granular physics community. In addition, we bring tools from the nonlinear systems community to study the dynamics of finite lattices, where there are inherently more degrees of freedom. This approach leads to the major contributions of this thesis in broken periodic systems. We demonstrate the first defect mode whose spatial profile can be tuned from highly localized to completely delocalized simply by tuning an external parameter. Using the sensitive dynamics near bifurcation points, we present a completely new approach to modifying the incremental stiffness of a lattice to arbitrary values. We show how, using nonlinear defect modes, the incremental stiffness can be tuned to anywhere in the force-displacement relation. Other contributions include demonstrating the nonlinear breakdown of mechanical filters as a result of finite size, and the presence of frequency attenuation bands in essentially nonlinear materials. We finish by presenting two new energy harvesting systems based on our experience with instabilities in weakly nonlinear systems.
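For context on "incremental stiffness": particle contacts in a granular crystal are commonly modeled with a Hertzian law, and the incremental (tangent) stiffness is the local slope of the force-displacement relation. The relation below is the standard textbook form, given only for illustration; the abstract does not specify the contact model.

    F(\delta) = A\,\delta^{3/2}, \qquad
    k_{\mathrm{inc}} = \frac{\mathrm{d}F}{\mathrm{d}\delta} = \frac{3}{2}\,A\,\delta^{1/2}

Here δ is the contact overlap and A depends on particle geometry and material; shifting the operating point (for example, with static precompression or through a defect mode) therefore changes k_inc, which is the sense in which the stiffness can be tuned along the force-displacement relation.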
Abstract:
These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that will select an action from those available. When choosing among alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular regarding the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. Thus the open question of whether value is represented in the abstract, a key tenet of value-based decision-making, is answered in the affirmative. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.
More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether the areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
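As a pointer to what "model-free" means computationally in this context, the sketch below implements a generic Rescorla-Wagner-style prediction-error update, a simple member of the model-free class; it is illustrative only and is not one of the specific models compared in Chapter 4.

    def model_free_update(V, stimulus, reward, alpha=0.1):
        """Rescorla-Wagner / temporal-difference-style update: the value of the
        presented stimulus moves toward the obtained reward by a prediction error."""
        delta = reward - V[stimulus]  # prediction error
        V[stimulus] += alpha * delta
        return V, delta

    # Toy Pavlovian sequence: cue "A" is followed by reward on most trials.
    V = {"A": 0.0, "B": 0.0}
    for reward in [1, 1, 0, 1, 1, 1, 0, 1]:
        V, delta = model_free_update(V, "A", reward)
    print(V)  # V["A"] climbs toward the average reward rate for cue "A"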
Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fitting and comparison process pointed to the use of "belief thresholding": subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
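A minimal sketch of the "belief thresholding" idea: incremental Bayesian updating over a discrete set of hypotheses, with low-probability hypotheses pruned and no longer updated. The generative model, likelihoods, and threshold value below are placeholders, not those of the Chapter 5 task.

    import numpy as np

    def update_beliefs(prior, likelihoods, threshold=0.05):
        """One incremental Bayesian update followed by belief thresholding:
        hypotheses whose posterior falls below `threshold` are set to zero and
        are effectively dropped from further updating."""
        posterior = prior * likelihoods
        posterior /= posterior.sum()
        posterior[posterior < threshold] = 0.0  # prune low-probability hypotheses
        return posterior / posterior.sum()

    # Three hypotheses about a hidden state; each observation favors hypothesis 0.
    beliefs = np.ones(3) / 3
    obs_likelihood = np.array([0.6, 0.3, 0.1])  # P(observation | hypothesis)
    for _ in range(4):
        beliefs = update_beliefs(beliefs, obs_likelihood)
        print(beliefs)
    # Hypothesis 2 is eliminated after the second update and never reconsidered,
    # mirroring the serial hypothesis-testing behavior described above.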
Abstract:
The complex domain structure in ferroelectrics gives rise to electromechanical coupling, and its evolution (via domain switching) results in a time-dependent (i.e., viscoelastic) response. Although ferroelectrics are used in many technological applications, most of these do not attempt to exploit the viscoelastic response of ferroelectrics, mainly because of a lack of understanding and of accurate models for its description and prediction. Thus, the aim of this thesis research is to gain a better understanding of the influence of domain evolution in ferroelectrics on their dynamic mechanical response. There have been few studies of the viscoelastic properties of ferroelectrics, mainly due to a lack of experimental methods. Therefore, an apparatus and method called Broadband Electromechanical Spectroscopy (BES) was designed and built. BES allows the simultaneous application of dynamic mechanical and electrical loading in a vacuum environment. Using BES, the dynamic stiffness and loss tangent in bending and torsion of a particular ferroelectric, viz. lead zirconate titanate (PZT), were characterized for different combinations of electrical and mechanical loading frequencies throughout the entire electric displacement hysteresis. Experimental results showed significant increases in loss tangent (by nearly an order of magnitude) and in compliance during domain switching, which shows promise as a new approach to structural damping. A continuum model of the viscoelasticity of ferroelectrics was developed that incorporates microstructural evolution via internal variables and associated kinetic relations. For the first time, through a new linearization process, the incremental dynamic stiffness and loss tangent of the material were computed throughout the entire electric displacement hysteresis for different combinations of mechanical and electrical loading frequencies. The model accurately captured the experimental results. Using the understanding gained from the characterization and modeling of PZT, two applications of domain switching kinetics were explored using Micro Fiber Composites (MFCs). Proofs of concept of set-and-hold actuation and of structural damping using MFCs were demonstrated.
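For reference, the loss tangent reported throughout is the standard viscoelastic damping measure, the ratio of the loss (dissipative) to the storage (elastic) part of the complex dynamic modulus, equivalently the tangent of the phase lag between cyclic stress and strain:

    \tan\delta = \frac{E''}{E'}, \qquad E^{*} = E' + iE''

An order-of-magnitude increase in tan δ during domain switching therefore corresponds to a comparably large increase in energy dissipated per loading cycle, which is the basis of the structural damping application mentioned above.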