888 results for Combining ability
Abstract:
To better understand human diseases, much recent work has focused on proteins, either to identify disease targets through proteomics or to produce therapeutics via protein engineering. Noncanonical amino acids (ncAAs) are tools for altering the chemical and physical properties of proteins, providing a facile strategy not only to label proteins but also to engineer proteins with novel properties. My thesis research has focused on the development and applications of noncanonical amino acids in identifying, imaging, and engineering proteins for studying human diseases. Chapter 1 introduces the concept of ncAAs and offers insight into how I chose my thesis projects.
ncAAs have been incorporated to tag and enrich newly synthesized proteins for mass spectrometry through a method termed BONCAT, or bioorthogonal noncanonical amino acid tagging. Chapter 2 describes the investigation of the proteomic response of human breast cancer cells to induced expression of the tumor suppressor microRNA miR-126 by combining BONCAT with another proteomic method, SILAC, or stable isotope labeling by amino acids in cell culture. This proteomic analysis led to the discovery of a direct target of miR-126, shedding new light on its role in suppressing cancer metastasis.
In addition to mass spectrometry, ncAAs can also be utilized to fluorescently label proteins. Chapter 3 details the synthesis of a set of cell-permeant cyclooctyne probes and the demonstration of selective labeling of newly synthesized proteins in live mammalian cells using azidohomoalanine. As with live-cell imaging, the ability to selectively label a particular cell type within a mixed cell population is important for interrogating many biological systems, such as tumor microenvironments. By taking advantage of the metabolic differences between cancer and normal cells, Chapter 5 discusses efforts to develop selective labeling of cancer cells using a glutamine analogue.
Furthermore, Chapter 4 describes the first demonstration of global replacement at polar amino acid positions and its application in developing an alternative PEGylation strategy for therapeutic proteins. Polar amino acids typically occupy solvent-exposed positions on the protein surface, and incorporation of noncanonical amino acids at these positions should allow easier modification and cause less perturbation compared to replacements at the interior positions of proteins.
Abstract:
Uncovering the demographics of extrasolar planets is crucial to understanding the processes of their formation and evolution. In this thesis, we present four studies that contribute to this end, three of which relate to NASA's Kepler mission, which has revolutionized the field of exoplanets in the last few years.
In the pre-Kepler study, we investigate a sample of exoplanet spin-orbit measurements---measurements of the inclination of a planet's orbit relative to the spin axis of its host star---to determine whether a dominant planet migration channel can be identified, and at what confidence. Applying methods of Bayesian model comparison to distinguish between the predictions of several different migration models, we find that the data strongly favor a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. While we test only the predictions of particular Kozai and scattering migration models in this work, these methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
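The Bayesian model comparison described above can be sketched in miniature: evaluate the likelihood of a set of spin-orbit angles under two competing distributions and take the ratio. Both densities and the mock data below are invented stand-ins, not the migration models tested in the thesis.

```python
import math

# Model A: misalignments cluster near zero (e.g., disk migration).
# Model B: obliquities are uniform over [0, 180) degrees.
# Both densities and the angle data are purely illustrative.

def density_aligned(x, scale=20.0):
    # half-normal density on [0, 180): concentrated near alignment
    return 2.0 / (scale * math.sqrt(2 * math.pi)) * math.exp(-0.5 * (x / scale) ** 2)

def density_uniform(x):
    # every obliquity in [0, 180) equally likely
    return 1.0 / 180.0

def log_likelihood(data, density):
    return sum(math.log(density(x)) for x in data)

angles = [5.0, 12.0, 30.0, 8.0, 3.0]  # mock spin-orbit measurements, degrees
log_bayes_factor = (log_likelihood(angles, density_aligned)
                    - log_likelihood(angles, density_uniform))
print(f"log Bayes factor (aligned vs. uniform): {log_bayes_factor:.2f}")
```

A positive log Bayes factor favors the "aligned" model; the thesis applies the same comparison to full migration-model predictions rather than these toy densities.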
We then present two studies addressing astrophysical false positives in Kepler data. The Kepler mission has identified thousands of transiting planet candidates, but relatively few have yet been dynamically confirmed as bona fide planets, with only a handful more even conceivably amenable to future dynamical confirmation. As a result, the ability to draw detailed conclusions about the diversity of exoplanet systems from Kepler detections relies critically on understanding the probability that any individual candidate might be a false positive. We show that a typical a priori false positive probability for a well-vetted Kepler candidate is only about 5-10%, enabling confidence in demographic studies that treat candidates as true planets. We also present a detailed procedure that can be used to securely and efficiently validate any individual transit candidate using detailed information about the signal's shape as well as follow-up observations, if available.
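The false-positive-probability reasoning above amounts to Bayesian bookkeeping over competing scenarios. A minimal sketch follows; the scenario names mirror the text, but every prior and likelihood here is invented for illustration (the thesis derives them from population models and the observed signal shape).

```python
# Prior probability that a transit-like signal arises from each scenario,
# and the likelihood of the observed signal shape under each scenario.
# All numbers are placeholders, not values from the thesis.
prior = {"planet": 0.5, "eclipsing_binary": 0.2, "background_binary": 0.3}
likelihood = {"planet": 0.9, "eclipsing_binary": 0.1, "background_binary": 0.2}

# Posterior weight of each scenario is prior times likelihood (Bayes' rule,
# up to a common normalization).
posterior = {k: prior[k] * likelihood[k] for k in prior}
total = sum(posterior.values())
fpp = 1.0 - posterior["planet"] / total  # probability it is NOT a planet
print(f"false positive probability: {fpp:.3f}")
```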
Finally, we calculate an empirical, non-parametric estimate of the shape of the radius distribution of small planets with periods less than 90 days orbiting cool (less than 4000 K) dwarf stars in the Kepler catalog. This effort reveals several notable features of the distribution, in particular a maximum in the radius function around 1-1.25 Earth radii and a steep drop-off in the distribution at radii larger than 2 Earth radii. Even more importantly, the methods presented in this work can be applied to a broader subsample of Kepler targets to understand how the radius function of planets changes across different types of host stars.
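A kernel density estimate weighted by inverse detection efficiency is one simple way to form the kind of empirical, non-parametric radius function described above. The radii, completeness values, and bandwidth below are all invented for illustration, not the thesis's catalog or method in detail.

```python
import numpy as np

# Mock detected planet radii (Earth radii) and an invented detection
# completeness for each; weighting by 1/completeness corrects, crudely,
# for small planets being harder to detect.
radii = np.array([0.8, 1.0, 1.1, 1.2, 1.3, 1.6, 2.0, 2.4, 3.1])
completeness = np.array([0.3, 0.4, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9])
weights = 1.0 / completeness

def kde(grid, data, w, bandwidth=0.3):
    # Gaussian kernel density estimate, weighted by inverse completeness
    diffs = (grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels @ w / w.sum()

grid = np.linspace(0.5, 4.0, 200)
density = kde(grid, radii, weights)
peak_radius = grid[np.argmax(density)]
print(f"estimated peak of the radius function: {peak_radius:.2f} Earth radii")
```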
Abstract:
21 p.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that remains connected to a larger system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
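The low-rank structure exploited in the nuclear norm formulation above can be illustrated with singular value thresholding, the proximal step at the heart of most nuclear norm minimization algorithms. The matrices below are random stand-ins for the "global" (low-rank) and "local" (full-rank) parts, not an actual interconnected system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
# rank-3 stand-in for the high-order but low-rank "global" dynamics
low_rank = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))
# full-rank stand-in for the low-order "local" dynamics
full_rank = rng.standard_normal((n, n))

def nuclear_norm(M):
    # sum of singular values: the convex surrogate for rank
    return np.linalg.svd(M, compute_uv=False).sum()

def svt(M, tau):
    # singular value thresholding: the proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding at the top singular value of the full-rank part suppresses its
# spectrum while (mostly) retaining the few large singular values of the
# low-rank part -- the separation principle behind the formulation above.
tau = np.linalg.svd(full_rank, compute_uv=False)[0]
denoised = svt(low_rank + full_rank, tau)
print("numerical rank after thresholding:", np.linalg.matrix_rank(denoised))
```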
Abstract:
A scheme using a lens array and the technique of spectral dispersion is presented to improve target illumination uniformity in laser-produced plasmas. Detailed two-dimensional simulation shows that a quasi-near-field target pattern, with steeper edges and without side lobes, is achieved with a lens array, while interference stripes inside the pattern are smoothed out by the use of the spectral dispersion technique. Moving the target slightly away from the exact focal plane of the principal focusing lens can further reduce middle-scale-length intensity fluctuations. Numerical results indicate that a well-irradiated laser spot with small nonuniformity and high energy efficiency can be obtained with this scheme. (c) 2007 American Institute of Physics.
Abstract:
To meet the requirement in laser inertial confinement fusion that the nonuniformity of laser irradiation on the target surface be below 5%, a scheme combining smoothing by spectral dispersion with the currently used lens array is proposed; numerical calculations were performed and its smoothing effect and feasibility for application were analyzed. The results show that the focal-spot nonuniformity is reduced from 14% with the lens array alone to 3% when combined with smoothing by spectral dispersion. Analysis of the focal-spot power spectrum shows that smoothing by spectral dispersion achieves its effect by suppressing the mid- and high-frequency spectral intensity within the focal spot. This scheme can further improve focal-spot smoothing, and the calculated results provide an important reference for practical applications.
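Nonuniformity figures like those quoted above are typically RMS intensity deviations over the focal spot. A minimal sketch with synthetic one-dimensional profiles (a rippled versus a smoothed quasi-flat-top spot; all shapes are invented for illustration, not simulation output):

```python
import numpy as np

def nonuniformity(intensity):
    # RMS deviation of intensity relative to its mean over the spot
    return intensity.std() / intensity.mean()

x = np.linspace(-1, 1, 500)
flat_top = np.exp(-(x / 0.8) ** 10)                      # idealized smoothed spot
rippled = flat_top * (1 + 0.2 * np.cos(40 * np.pi * x))  # interference stripes
mask = flat_top > 0.5                                    # evaluate inside the spot

print(f"with stripes: {nonuniformity(rippled[mask]):.1%}")
print(f"smoothed:     {nonuniformity(flat_top[mask]):.1%}")
```

Suppressing the high-frequency ripple, which is what spectral dispersion does in the scheme above, directly lowers this figure of merit.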
Abstract:
Based on numerous pharmacological studies that have revealed an interaction between cannabinoid and opioid systems at the molecular, neurochemical, and behavioral levels, a new series of hybrid molecules has been prepared by coupling the molecular features of two well-known drugs, i.e., rimonabant and fentanyl. The new compounds have been tested for their affinity and functional activity at CB1 and CB2 cannabinoid and mu opioid receptors. In [35S]-GTPγS (guanosine 5′-O-[γ-thio]triphosphate) binding assays on post-mortem human frontal cortex, they proved to be CB1 cannabinoid antagonists and mu opioid antagonists. Interestingly, in vivo, the new compounds exhibited a significant dual antagonist action on the endocannabinoid and opioid systems.
Abstract:
Rockfishes (Sebastes spp.) are an important component of North Pacific marine ecosystems and commercial fisheries. Because the rocky, high-relief substrate that rockfishes often inhabit is inaccessible to standard survey trawls, population abundance assessments for many rockfish species are difficult. As part of a large study to classify substrate and compare complementary sampling tools, we investigated the feasibility of using an acoustic survey in conjunction with a lowered stereo-video camera, a remotely operated vehicle, and a modified bottom trawl to estimate rockfish biomass in untrawlable habitat. The Snakehead Bank south of Kodiak Island, Alaska, was surveyed repeatedly over 4 days and nights. Dusky rockfish (S. variabilis), northern rockfish (S. polyspinis), and harlequin rockfish (S. variegatus) were the most abundant species observed on the bank. Backscatter attributed to rockfish was recorded primarily near the seafloor, at a mean height off the bottom of 1.5 m. Total rockfish backscatter and the height of backscatter off the bottom did not differ among survey passes or between night and day. Biomass estimates for the 41-square-nautical-mile area surveyed on this small, predominantly untrawlable bank were 2350 metric tons (t) of dusky rockfish, 331 t of northern rockfish, and 137 t of harlequin rockfish. These biomass estimates are 5-60 times those estimated for these rockfish species by a regularly conducted bottom trawl survey covering the bank and the surrounding shelf. This finding shows that bottom trawl surveys can underestimate the abundance of rockfishes in untrawlable areas and, therefore, may underestimate overall population abundance for these species.
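The echo-integration arithmetic behind such acoustic biomass estimates can be sketched as follows. The survey area matches the text, but the NASC, target strength, and mean weight values are placeholders, not numbers from the survey.

```python
import math

# Placeholder acoustic inputs (not from the survey):
nasc = 120.0              # nautical area scattering coefficient, m^2 / nmi^2
target_strength = -35.0   # assumed mean target strength, dB re 1 m^2
mean_weight_kg = 0.8      # assumed mean fish weight
area_nmi2 = 41.0          # surveyed area, square nautical miles (from the text)

# Convert target strength (dB) to a backscattering cross-section (m^2),
# then areal backscatter to fish density, then density to biomass.
sigma_bs = 10 ** (target_strength / 10)
density = nasc / (4 * math.pi * sigma_bs)          # fish per nmi^2
biomass_t = density * mean_weight_kg * area_nmi2 / 1000.0  # metric tons

print(f"biomass estimate: {biomass_t:.0f} t")
```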
Abstract:
Ecosystem-based management is one of many indispensable components of objective, holistic management of human impacts on nonhuman systems. By itself, however, ecosystem-based management carries the same risks we face with other forms of current management; holism requires more. Combining single-species and ecosystem approaches represents progress. However, it is now recognized that management also needs to be evosystem-based; in other words, management needs to account for all coevolutionary and evolutionary interactions among all species, otherwise we fall far short of holism. Fully holistic practices are quite distinct from the approaches to fisheries management applied today. In this paper, we show how macroecological patterns can guide management consistently, objectively, and holistically. We present one particular macroecological pattern with two applications. The first application is a case study of Baltic Sea fisheries involving historical data for two species; the second involves a sample of 44 species of primarily marine fish worldwide. In both cases we evaluate historical fishing rates and determine holistic/systemic sustainable single-species fishing rates to illustrate that conventional fisheries management leads to much more extensive and pervasive overfishing than currently realized; harvests are, on average, over twenty-fold too large to be fully sustainable. In general, our approach addresses not only the sustainability of fisheries and related resources but also the sustainability of the ecosystems and evosystems in which they occur. Using macroecological patterns accomplishes four important goals: 1) macroecology becomes one of the interdisciplinary components of management; 2) sustainability becomes an option for harvests from populations of individual species, species groups, ecosystems, and the entire marine environment; 3) policies and goals are reality-based, holistic, or fully systemic, accounting for ecological as well as evolutionary factors and dynamics (including management itself); and 4) numerous management questions can be addressed.
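The overfishing factor quoted above is, at bottom, a ratio of historical fishing rates to macroecologically inferred sustainable rates. A toy sketch of that comparison, with all rates invented for illustration (the paper infers sustainable rates from observed cross-species patterns):

```python
# species: (historical fishing mortality, inferred sustainable rate), per year
# All numbers are placeholders, not values from the paper.
rates = {
    "species_a": (0.60, 0.02),
    "species_b": (0.45, 0.03),
}

# Overfishing factor: how many times larger the historical rate is than the
# holistically sustainable one.
factors = {name: hist / sust for name, (hist, sust) in rates.items()}
mean_factor = sum(factors.values()) / len(factors)
print(f"mean overfishing factor: {mean_factor:.1f}x")
```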
Abstract:
Understanding the phase and timing of ontogenetic habitat shifts underlies the study of a species’ life history and population dynamics. This information is especially critical to the conservation and management of threatened and endangered species, such as the loggerhead sea turtle Caretta caretta. The early life of loggerheads consists of a terrestrial egg and hatchling stage, a posthatchling and juvenile oceanic, pelagic feeding stage, and a juvenile neritic, primarily benthic feeding stage. In the present study, novel approaches were applied to explore the timing of the loggerhead ontogenetic shift from pelagic to benthic habitats. The most recent years of somatic growth are recorded as annual marks in humerus cross sections. A consistent growth mark pattern in benthic juvenile loggerheads was identified, with narrow growth marks in the interior of the bone transitioning to wider growth marks at the exterior, indicative of a sharp increase in growth rates at the transitional growth mark. This increase in annual growth is hypothesized to correlate with the ontogenetic shift from pelagic to benthic habitats. Stable isotopes of carbon and nitrogen just interior and exterior to the transitional growth mark, as well as stable isotopes from pelagic and benthic flora, fauna and loggerhead stomach contents, were analyzed to determine whether this transition related to a diet shift. The results clearly indicate that a dietary shift from oceanic/pelagic to neritic/benthic feeding corresponds to a transitional growth mark. The combination of stable isotope analysis with skeletochronology can elucidate the ecology of cryptic life history stages during loggerhead ontogeny.
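Locating the transitional growth mark described above amounts to finding the point where annual growth increments jump from narrow to wide. A minimal change-point sketch, with invented increment widths (real values come from measured humerus cross sections):

```python
# Invented annual growth-increment widths (mm), ordered from bone interior
# (older, narrow marks) to exterior (recent, wide marks).
increments = [0.4, 0.5, 0.4, 0.5, 1.6, 1.8, 1.7, 1.9]

def transition_index(widths):
    # Pick the split that maximizes the gap between the mean width after the
    # split and the mean width before it -- a crude change-point detector.
    best, best_gap = None, float("-inf")
    for i in range(1, len(widths)):
        gap = sum(widths[i:]) / (len(widths) - i) - sum(widths[:i]) / i
        if gap > best_gap:
            best, best_gap = i, gap
    return best

idx = transition_index(increments)
print(f"transitional growth mark after increment {idx}")
```

In the study, stable isotope values sampled just interior and exterior to this mark are what tie the growth-rate jump to the pelagic-to-benthic diet shift.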