969 results for Many-electron Problem
Abstract:
Organic charge-transfer systems exhibit a variety of competing interactions between charge, spin, and lattice degrees of freedom. This gives rise to interesting physical properties such as metallic conductivity, superconductivity, and magnetism. This dissertation investigates the electronic structure of organic charge-transfer salts from three material families, using a range of photoemission and X-ray spectroscopy techniques. The molecules studied were in part synthesized at the MPI for Polymer Research. They belong to the coronene family (donor hexamethoxycoronene HMC and acceptor coronene hexaone COHON) and the pyrene family (donors tetra- and hexamethoxypyrene, TMP and HMP), in complexes with the classic strong acceptor tetracyanoquinodimethane (TCNQ). As a third family, charge-transfer salts of the k-(BEDT-TTF)2X family (X being a monovalent anion) were studied. These materials lie close to a bandwidth-controlled Mott transition in the phase diagram.

For investigations by ultraviolet photoelectron spectroscopy (UPS), UHV-deposited thin films were produced using a new dual evaporator developed specifically for milligram quantities of material. This method revealed energetic shifts of valence states in the charge-transfer complex of a few 100 meV relative to the pure donor and acceptor species. An important aspect of the UPS measurements was the direct comparison with ab-initio calculations.

The problem of unavoidable surface contamination of solution-grown 3D crystals was overcome by hard X-ray photoelectron spectroscopy (HAXPES) at photon energies around 6 keV (at the PETRA III storage ring in Hamburg). The large mean free path of the photoelectrons, on the order of 15 nm, results in true bulk sensitivity.
The first HAXPES experiments worldwide on charge-transfer complexes revealed large chemical shifts (several eV). In the compound HMPx-TCNQy, the N 1s line is a fingerprint of the cyano group in TCNQ and shows a splitting and a shift toward higher binding energies of up to 6 eV with increasing HMP content. Conversely, the O 1s line is a fingerprint of the methoxy group in HMP and shows a pronounced splitting and a shift toward lower binding energies (a chemical shift of up to about 2.5 eV), i.e., an order of magnitude larger than the shifts in the valence region.

As a further synchrotron-radiation-based technique, near-edge X-ray absorption fine structure (NEXAFS) spectroscopy was used extensively at the ANKA storage ring in Karlsruhe. The mean free path of the low-energy secondary electrons (around 5 nm) defines the probing depth of this technique. Strong intensity variations of particular pre-edge resonances (as signatures of the unoccupied density of states) directly reflect changes in the occupation numbers of the orbitals involved in the immediate vicinity of the excited atom. This made it possible to determine precisely which orbitals participate in the charge-transfer mechanism. In the complex mentioned above, charge is transferred from the methoxy orbitals 2e(pi*) and 6a1(sigma*) to the cyano orbitals b3g and au(pi*) and, to a lesser extent, to the b1g and b2u(sigma*) orbitals of the cyano group. In addition, small energetic shifts of opposite sign occur for the donor and acceptor resonances, comparable to the shifts observed in UPS.
Abstract:
Cytochrome c oxidase (CcO), complex IV of the respiratory chain, is one of the heme-copper-containing oxidases and plays an important role in cell metabolism. The enzyme contains four prosthetic groups and is located in the inner membrane of mitochondria and in the cell membrane of some aerobic bacteria. CcO catalyzes electron transfer (ET) from cytochrome c to O2, with the actual reaction taking place at the binuclear center (CuB-heme a3). Four protons are consumed in the reduction of O2 to two H2O. In addition, four protons are transported across the membrane, creating an electrochemical potential difference of these ions between the matrix and the intermembrane space. Despite their importance, membrane proteins such as CcO remain poorly studied, which is why the mechanism of the respiratory chain has not yet been fully elucidated. The aim of this work is to contribute to the understanding of the function of CcO. To this end, CcO from Rhodobacter sphaeroides was bound in a defined orientation to a functionalized metal electrode via a His tag attached to the C-terminus of subunit II. The first electron acceptor, CuA, thereby lies closest to the metal surface. A lipid bilayer was then inserted in situ between the bound proteins, resulting in the so-called protein-tethered bilayer lipid membrane (ptBLM). The optimal surface concentration of bound proteins had to be determined. Electrochemical impedance spectroscopy (EIS), surface plasmon resonance spectroscopy (SPR), and cyclic voltammetry (CV) were applied to characterize the activity of CcO as a function of packing density. The main part of the work concerns the investigation of direct ET to CcO under anaerobic conditions.
The combination of time-resolved surface-enhanced infrared absorption spectroscopy (tr-SEIRAS) and electrochemistry proved particularly suitable for this purpose. In a first study, ET was investigated by fast-scan CV, recording CVs of non-activated as well as activated CcO at various scan rates. The activated form was obtained after catalytic turnover of the protein in the presence of O2. A four-ET model was developed to analyze the CVs. The method makes it possible to distinguish between sequential and independent ET to the four centers CuA, heme a, heme a3, and CuB, and to determine the standard redox potentials and kinetic coefficients of the ET. In a second study, tr-SEIRAS was applied in step-scan mode. Rectangular potential pulses were applied to the CcO, and SEIRAS in ATR mode was used to record spectra at defined time slices. From these spectra, individual bands were isolated that show changes in vibrational modes of amino acids and peptide groups as a function of the redox state of the centers. Based on assignments from the literature, obtained by potentiometric titration of CcO, the bands could be tentatively assigned to the redox centers. Plotting the band areas against time then yields the redox kinetics of the centers, which were in turn evaluated with the four-ET model. The results of both studies support the conclusion that ET to CcO in a ptBLM most likely follows the sequential mechanism, corresponding to the natural ET from cytochrome c to CcO.
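The sequential mechanism discussed above can be pictured as a linear kinetic chain in which each center is reduced by its predecessor. The following is a minimal numerical sketch of such a four-center chain; the rate constants and initial conditions are invented placeholders, not parameters fitted in this work:

```python
import numpy as np

# Sequential electron-transfer chain: electrode -> CuA -> heme a -> heme a3 -> CuB.
# Rate constants (s^-1) are hypothetical placeholders, not values from the thesis.
k = np.array([500.0, 200.0, 100.0, 50.0])

dt, n_steps = 1e-5, 10000          # integrate 0..0.1 s with explicit Euler
y = np.zeros(4)                    # reduced fraction of CuA, heme a, heme a3, CuB
trace = np.empty((n_steps, 4))

for i in range(n_steps):
    cuA, ha, ha3, cuB = y
    flux = np.array([
        k[0] * (1 - cuA),          # electrode -> CuA
        k[1] * cuA * (1 - ha),     # CuA -> heme a
        k[2] * ha * (1 - ha3),     # heme a -> heme a3
        k[3] * ha3 * (1 - cuB),    # heme a3 -> CuB
    ])
    # each center gains its inflow and loses its outflow to the next center
    y = y + dt * (flux - np.append(flux[1:], 0.0))
    trace[i] = y
```

Fitting curves of this shape to band areas versus time (or to fast-scan CV currents) is, schematically, how a sequential model is distinguished from four independent one-step reductions.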
Abstract:
During the last few decades, unprecedented technological growth has been at the center of embedded systems design, with Moore's Law as the leading factor of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space that must be explored to find the best design has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis, two research works on Virtual Platforms are presented: the first is intended for the hardware developer, to allow complex cycle-accurate simulations of many-core SoCs with ease. The second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques that aim to mitigate, and where possible overcome, some of the challenges introduced by the many-core design paradigm.
Abstract:
Combinatorial optimization is becoming ever more crucial these days. From the natural sciences to economics, via urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are increasingly in demand, in order to find good solutions to complex strategic problems quickly. Resource optimization is nowadays a fundamental ground on which to build successful projects. From the theoretical point of view, combinatorial optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development cannot keep up with that enjoyed by modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (such as Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing us to address instances of dimensions not possible before. We approached four notable combinatorial optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem, and a Network Design problem. For each of these we propose a collection of effective parallel solution algorithms, either solving the full problem (Guillotine Cuts and SSSPP) or enhancing a fundamental part of the solution method (VRP and ND). We support our claims with computational results for all problems, on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to 40x.
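The kind of parallelism referred to above can be made concrete with a small sketch: in a 0/1 knapsack dynamic program (a packing problem), each DP layer depends only on the previous layer, so every capacity cell of a layer can be updated in one data-parallel step. Here NumPy vectorization stands in for the one-thread-per-cell mapping a GPU kernel would use; the instance is a textbook toy, not one of the thesis benchmarks:

```python
import numpy as np

def knapsack(values, weights, capacity):
    """0/1 knapsack solved layer by layer.

    Each DP layer depends only on the previous one, so all capacity
    cells of a layer are independent and can be updated simultaneously,
    which is exactly the structure a GPU implementation exploits.
    """
    dp = np.zeros(capacity + 1, dtype=np.int64)
    for v, w in zip(values, weights):
        if w > capacity:
            continue                      # item can never fit
        new = dp.copy()
        # one data-parallel step over all feasible capacities
        new[w:] = np.maximum(dp[w:], dp[:capacity + 1 - w] + v)
        dp = new
    return int(dp[capacity])

best = knapsack([60, 100, 120], [10, 20, 30], 50)   # classic toy instance
```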
Abstract:
The first part of this work deals with solving the inverse problem in X-ray spectroscopy. An original strategy for solving the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT was built to apply this strategy semi-automatically. The application of UMESTRAT is demonstrated with a computational example. The second part of this work deals with improving the X-ray Boltzmann model by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows this contribution to be evaluated for the K, L, and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energies at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle, and energy, for all elements with Z = 1–92 in the energy range 1–150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that this bremsstrahlung contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of bremsstrahlung is created, and a new bremsstrahlung kernel is developed that allows this contribution to be introduced into the modified Boltzmann equation. An example application to the simulation of a synchrotron experiment is shown.
Abstract:
This thesis aims at connecting structural and functional changes of complex soft-matter systems under external stimuli with non-covalent molecular interaction profiles. It addresses the problem of elucidating non-covalent forces as the structuring principle of mainly polymer-based systems in solution. The structuring principles of a wide variety of complex soft-matter types are analyzed, in many cases by exploring conformational changes upon the exertion of external stimuli. The central question throughout this thesis is how a certain non-covalent interaction profile leads to solution-condition-dependent structuring of a polymeric system.

To answer this question, electron paramagnetic resonance (EPR) spectroscopy is chosen as the main experimental method for investigating the structuring principles of polymers. EPR detects only the local surroundings or environments of molecules that carry an unpaired electron. Non-covalent forces are normally effective on length scales of a few nanometers and below, so EPR is excellently suited for their investigation: it allows detection of interactions on length scales ranging from approximately 0.1 nm up to 10 nm. However, restriction to a single experimental technique would likely yield only an incomplete picture of complex systems. Therefore, the presented studies are frequently augmented with further experimental and computational methods in order to give more comprehensive descriptions of the systems chosen for investigation.

Electrostatic correlation effects in non-covalent interaction profiles, as structuring principles in colloid-like ionic clusters and in DNA condensation, are investigated first.
Building on this, it is shown how electrostatic structuring principles can be combined with hydrophobic ones, using the example of host-guest interactions in so-called dendronized polymers (denpols).

Subsequently, the focus shifts from electrostatics in dendronized polymers to thermoresponsive alkylene oxide-based materials, whose structuring principles rest on hydrogen bonds counteracted by hydrophobic interactions. The collapse mechanism is elucidated as a function of the hydrophilic-hydrophobic balance and topology of these polymers. Complementarily, the temperature-dependent phase behavior of elastin-like polypeptides (ELPs) is investigated. ELPs are the first (and so far only) class of compounds shown to feature a first-order inverse phase transition on nanoscopic length scales.

Finally, this thesis addresses complex biological systems, namely intrinsically disordered proteins (IDPs). It is shown that the conformational space of the IDPs osteopontin (OPN), a cytokine involved in the metastasis of several kinds of cancer, and BASP1 (brain acid soluble protein one), a protein associated with neurite outgrowth, is governed by a subtle interplay between electrostatic forces, hydrophobic interactions, system entropy, and hydrogen bonds. As such, IDPs can even sample cooperatively folded structures, which have so far only been associated with globular proteins.
Abstract:
Since 1990 the Institute for Nuclear Physics of the University of Mainz has operated a globally unique accelerator facility for nuclear and particle physics experiments: the Mainz Microtron (MAMI-B). This accelerator cascade consists of three racetrack microtrons (RTMs) with radio-frequency linear accelerators at 2.45 GHz, which deliver a quasi-continuous electron beam of up to 100 μA at 855 MeV.

In 1999, implementation of the final expansion stage began: a Harmonic Double-Sided Microtron (HDSM, MAMI-C) with a final energy of 1.5 GeV. Its design required some bold steps, e.g., bending magnets with a field gradient, whose resulting beam-optical properties have a large influence on the longitudinal dynamics of the accelerator. This necessitated the introduction of the "harmonic" mode of operation, with the two linear accelerators running at two different frequencies.

Many machine parameters (such as RF amplitudes or phases) act directly on the acceleration process, yet their physical values are not always easy to access by measurement. For an RTM, with its relatively simple and well-defined beam dynamics, this is unproblematic in routine operation; for the HDSM, however, knowledge of the physical parameters is of considerably greater importance, if only because of the larger number of parameters. Within this work, suitable beam-diagnostic methods were developed with which these machine parameters can be checked and compared against the design specifications.

Since fitting the machine model to a single phase measurement does not always yield unambiguous results, owing to unavoidable measurement errors, a form of tomography is used. The longitudinal phase space is then examined by means of an acceptance measurement.
An extended model can then be fitted to the resulting wealth of data, giving the model parameters greater significance.

The results of these investigations show that the accelerator as a whole behaves essentially as predicted and that a large number of different configurations are possible for beam operation. In routine operation, however, this variety is avoided, and one proven configuration is used for most situations. This leads to good reproducibility of, e.g., the final energy and the spin-polarization angle at the experimental stations.

Some of the findings from these investigations were automated, so that the operators now have additional, helpful diagnostics at their disposal with which the machine can be run even more reliably.
Abstract:
Although duodenopancreatectomy has been standardized for many years, the pathological examination of the specimen has been re-described in recent years. In methodical pathological studies, up to 85% of specimens had an R1 margin.1,2 These mainly involved the posterior and medial resection margins.3 As a consequence, we need to optimize and standardize the pathological workup of the specimen and to extend the surgical resection where this is possible without risk to the patient.
Abstract:
Transmission electron microscopy has provided most of what is known about the ultrastructural organization of tissues, cells, and organelles. Due to tremendous advances in crystallography and magnetic resonance imaging, almost any protein can now be modeled at atomic resolution. To fully understand the workings of biological "nanomachines" it is necessary to obtain images of intact macromolecular assemblies in situ. Although the resolving power of electron microscopes is on the atomic scale, in biological samples artifacts introduced by aldehyde fixation, dehydration, and staining, as well as section thickness, reduce it to some nanometers. Cryofixation by high-pressure freezing circumvents many of these artifacts, since it vitrifies biological samples up to about 200 μm in thickness and immobilizes complex macromolecular assemblies in their native state in situ. To exploit the perfect structural preservation of frozen hydrated sections, sophisticated instruments are needed, e.g., high-voltage electron microscopes equipped with precise goniometers that work at low temperature, and digital cameras of high sensitivity and pixel count. With them, it is possible to generate high-resolution tomograms, i.e., 3D views of subcellular structures. This review describes the theory and applications of the high-pressure cryofixation methodology and compares its results with those of conventional procedures. Moreover, recent findings are discussed showing that molecular models of proteins can be fitted into the organellar ultrastructure depicted in images of frozen hydrated sections. High-pressure freezing of tissue is the basis that may lead to precise models of macromolecular assemblies in situ, and thus to a better understanding of the function of complex cellular structures.
Abstract:
Undergraduate education has a historical tradition of preparing students to meet the problem-solving challenges they will encounter in work, civic, and personal contexts. This thesis research was conducted to study the role of rhetoric in engineering problem solving and decision making and to pose pedagogical strategies for preparing undergraduate students for workplace problem solving. Exploratory interviews with engineering managers as well as the heuristic analyses of engineering A3 project planning reports suggest that Aristotelian rhetorical principles are critical to the engineer's success: Engineers must ascertain the rhetorical situation surrounding engineering problems; apply and adapt invention heuristics to conduct inquiry; draw from their investigation to find innovative solutions; and influence decision making by navigating workplace decision-making systems and audiences using rhetorically constructed discourse. To prepare undergraduates for workplace problem solving, university educators are challenged to help undergraduates understand the exigence and realize the kairotic potential inherent in rhetorical problem solving. This thesis offers pedagogical strategies that focus on mentoring learning communities in problem-posing experiences that are situated in many disciplinary, work, and civic contexts. Undergraduates build a flexible rhetorical technê for problem solving as they navigate the nuances of relevant problem-solving systems through the lens of rhetorical practice.
Abstract:
Since the nineteenth-century invention of adolescence, young people have been consistently identified as social problems in western societies. Their contemporary status as a focus of fear and anxiety is, in that sense, nothing new. In this paper, I try to combine this sense of historical recurrence about the youth problem with some questions about what is different about the present, asking what is distinctive about the shape of the youth problem now. This is a difficult balance to strike, and what I have to say will probably lean more towards an emphasis on the historical conditions and routes of the youth problem. That balance reflects my own orientations and knowledge (I am not an expert on the contemporary conditions of being young). But it also arises from my belief that much contemporary social science is profoundly forgetful. An enthusiasm for stressing the newness, or novelty, of the present connects many varieties of contemporary scholarship. One result is the construction of what Janet Fink and I have referred to as ‘sociological time’ in which
Abstract:
In reverse logistics networks, products (e.g., bottles or containers) have to be transported from a depot to customer locations and, after use, from the customer locations back to the depot. In order to operate economically, companies prefer a simultaneous delivery and pick-up service. The resulting Vehicle Routing Problem with Simultaneous Delivery and Pick-up (VRPSDP) is an operational problem that has to be solved daily by many companies. We present two mixed-integer linear model formulations for the VRPSDP, namely a vehicle-flow and a commodity-flow model. To strengthen the models, domain-reducing preprocessing techniques and effective cutting planes are outlined. Symmetric benchmark instances known from the literature, as well as new asymmetric instances derived from real-world problems, are solved to optimality using CPLEX 12.1.
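To make the load-feasibility structure of the VRPSDP concrete, here is a minimal single-vehicle toy example solved by brute force; the instance data are invented, and real instances are of course solved with the MILP formulations and CPLEX as described above. Each customer receives a delivery from the depot and hands back a pick-up quantity in the same stop, and the on-board load must never exceed capacity:

```python
from itertools import permutations

# Invented toy instance: symmetric distances, depot = node 0.
dist = {(0, 1): 4, (0, 2): 6, (0, 3): 5,
        (1, 2): 3, (1, 3): 7, (2, 3): 2}
delivery = {1: 3, 2: 4, 3: 2}   # quantities brought from the depot
pickup = {1: 2, 2: 1, 3: 5}     # quantities returned to the depot
CAPACITY = 10

def d(a, b):
    return dist[(min(a, b), max(a, b))]

def route_cost(order):
    """Cost of depot -> order -> depot; None if the load ever exceeds capacity."""
    load = sum(delivery.values())          # vehicle leaves the depot fully loaded
    if load > CAPACITY:
        return None
    pos, cost = 0, 0
    for c in order:
        cost += d(pos, c)
        load += pickup[c] - delivery[c]    # simultaneous delivery and pick-up
        if load > CAPACITY:
            return None
        pos = c
    return cost + d(pos, 0)

feasible = [(route_cost(p), p) for p in permutations(delivery)
            if route_cost(p) is not None]
best_cost, best_route = min(feasible)
```

Note how pick-up-heavy customers (customer 3 here) can make a route infeasible mid-way even though total delivery and total pick-up each fit the vehicle, which is exactly what makes the simultaneous variant harder than pure delivery routing.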
Abstract:
We have recently developed a method to obtain distributed atomic polarizabilities by adopting a partitioning of the molecular electron density (for example, the Quantum Theory of Atoms in Molecules [1]), calculated with or without an applied electric field. The procedure [2] yields atomic polarizability tensors that are readily exportable, because they are quite representative of an atom in a given functional group. Among the many applications of this idea, the calculation of the crystal susceptibility is easily available, either as a rough estimate (using the polarizability of the isolated molecule) or as a more precise one (using the polarizability of a molecule embedded in a cluster representing its first coordination sphere). A Lorentz factor is applied to include the long-range effect of packing, which enhances the molecular polarizability. Simple properties such as the linear refractive index or the gyration tensor can be calculated at relatively low cost and with good precision. This approach is particularly useful in the crystal engineering of organic/organometallic materials, because it allows a relatively easy prediction of a property as a function of the packing, thus enabling "reverse crystal engineering". Examples from amino acid crystals and salts of amino acids [3] will be illustrated, together with other crystallographic and non-crystallographic applications. For example, the induction and dispersion energies of intermolecular interactions could be calculated with superior precision (allowing for anisotropic van der Waals interactions). This could prompt revision of some commonly misunderstood intermolecular interactions, such as halogen bonding (see, for example, the recent remarks by Stone or Gilli [4]).
Moreover, the chemical reactivity of coordination complexes could be reinvestigated by coupling the conventional analysis of the electrostatic potential (useful only in the case of hard nucleophilic/electrophilic interactions) with the distributed atomic polarizability. The enhanced reactivity of coordinated organic ligands would then be better appreciated. [1] R. F. W. Bader, Atoms in Molecules: A Quantum Theory. Oxford Univ. Press, 1990. [2] A. Krawczuk-Pantula, D. Pérez, K. Stadnicka, P. Macchi, Trans. Amer. Cryst. Ass. 2011, 1-25. [3] A. S. Chimpri, M. Gryl, L. H. R. Dos Santos, A. Krawczuk, P. Macchi, Crystal Growth & Design, in press. [4] a) A. J. Stone, J. Am. Chem. Soc. 2013, 135, 7005-7009; b) V. Bertolasi, P. Gilli, G. Gilli, Crystal Growth & Design, 2013, 12, 4758-4770.
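The step from a molecular polarizability to a linear refractive index mentioned above can be sketched with the Lorentz-Lorenz (Clausius-Mossotti) relation, which embodies the Lorentz local-field factor. The numbers below are merely illustrative (roughly amino-acid-like), not results from the cited work:

```python
import math

def refractive_index(alpha_A3, molecules_per_cell, cell_volume_A3):
    """Estimate an isotropic refractive index from a mean molecular
    polarizability via the Lorentz-Lorenz relation (CGS convention):
        (n^2 - 1) / (n^2 + 2) = (4*pi/3) * N * alpha
    with alpha in A^3 and N in molecules per A^3.
    """
    N = molecules_per_cell / cell_volume_A3
    x = 4.0 * math.pi / 3.0 * N * alpha_A3      # dimensionless
    if x >= 1.0:
        raise ValueError("polarizability catastrophe: relation breaks down")
    n_sq = (1.0 + 2.0 * x) / (1.0 - x)
    return math.sqrt(n_sq)

# Illustrative placeholder numbers, not taken from [3]:
n = refractive_index(alpha_A3=5.8, molecules_per_cell=4, cell_volume_A3=310.0)
```

A cluster-embedded polarizability would simply replace `alpha_A3` here; the anisotropic case uses the full atomic tensors summed over the cell instead of a scalar mean.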
Abstract:
Cytoplasmic polyhedrosis virus (CPV) is unique within the Reoviridae family in having a turreted single-layer capsid contained within polyhedrin inclusion bodies, yet being fully capable of cell entry and endogenous RNA transcription. Biochemical data have shown that the amino-terminal 79 residues of the CPV turret protein (TP) are sufficient to bring CPV or engineered proteins into the polyhedrin matrix for micro-encapsulation. Here we report the three-dimensional structure of CPV at 3.88 Å resolution obtained by single-particle cryo-electron microscopy. Our map clearly shows the turns and deep grooves of alpha-helices, the strand separation in beta-sheets, and densities for loops and many bulky side chains, thus permitting atomic model building from cryo-electron microscopy maps. We observed a helix-to-beta-hairpin conformational change between the two conformational states of the capsid shell protein in the region directly interacting with genomic RNA. We have also discovered a messenger RNA release hole coupled with the mRNA capping machinery that is unique to CPV. Furthermore, we have identified the polyhedrin-binding domain, a structure with potential applications in nanobiotechnology.
Abstract:
The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. The resulting calculated dose distributions were compared with measured dose distributions for several test phantoms. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The latter inaccuracy is believed to stem primarily from assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was then developed for the calculation of electron beam dose distributions in three dimensions. Its primary aim was to solve the dosimetry problem presented by deep inhomogeneities, the major remaining deficiency of the enhanced MDAH pencil-beam algorithm. The redefinition model is based on the theory of electron transport, redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. The calculated dose distributions were compared with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities.
From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.
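The pencil-beam concept underlying both algorithms can be sketched numerically: the dose from a broad beam is the superposition of narrow Gaussian pencil kernels, so a uniform field convolved with a Gaussian kernel yields the lateral profile in closed form. The field size and spread below are invented illustration values, not fitted MDAH or redefinition-model quantities:

```python
import numpy as np
from math import erf, sqrt

def broad_beam_profile(x, half_width, sigma):
    """Lateral dose profile of a broad beam modeled as the superposition
    (convolution) of Gaussian pencil-beam kernels of lateral spread sigma
    over a uniform field of the given half width. Closed form:
        D(x) = 0.5 * (erf((a - x)/(sqrt(2)*sigma)) + erf((a + x)/(sqrt(2)*sigma)))
    """
    a, s = half_width, sqrt(2.0) * sigma
    return np.array([0.5 * (erf((a - xi) / s) + erf((a + xi) / s)) for xi in x])

x = np.linspace(-5.0, 5.0, 201)                        # lateral position, cm
profile = broad_beam_profile(x, half_width=2.0, sigma=0.5)
```

In a depth-dependent calculation, sigma grows with depth (and differently behind inhomogeneities), which is exactly where the layer-by-layer redefinition of the pencil beams improves on a single downstream-projected kernel.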