983 results for Complete Characteristic Operator-Function
Abstract:
Background-The evidence on the importance of complete revascularization remains unclear and contradictory. The current investigation compares the effect of complete revascularization on 10-year survival of patients with stable multivessel coronary artery disease (CAD) who were randomly assigned to percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG). Methods and Results-This is a post hoc analysis of the Second Medicine, Angioplasty, or Surgery Study (MASS II), a randomized trial comparing treatments in patients with stable multivessel CAD and preserved systolic ventricular function. We analyzed patients who underwent surgery (CABG) or stent angioplasty (PCI). Survival free of overall mortality was compared between patients who underwent complete (CR) or incomplete revascularization (IR). Of the 408 patients randomly assigned to mechanical revascularization, 390 (95.6%) underwent the assigned treatment; complete revascularization was achieved in 224 patients (57.4%), 63.8% of those in the CABG group and 36.2% in the PCI group (P = 0.001). The IR group had more prior myocardial infarction than the CR group (56.2% vs 39.2%, P = 0.01). During a 10-year follow-up, survival free of cardiovascular mortality differed significantly between the 2 groups (CR, 90.6% versus IR, 84.4%; P = 0.04). This was mainly driven by an increased cardiovascular-specific mortality in individuals with incomplete revascularization submitted to PCI (P = 0.05). Conclusions-Our study suggests that over a 10-year follow-up, CR was associated with reduced cardiovascular mortality compared with IR, largely because of the increase in cardiovascular-specific mortality among individuals submitted to PCI.
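For readers who want to reproduce this kind of group comparison, a minimal sketch of a Kaplan-Meier / log-rank analysis with the lifelines package is shown below; the synthetic data frame and its column names are illustrative assumptions, not the MASS II dataset.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Synthetic stand-in for follow-up data: years to event/censoring, an event flag
# (1 = cardiovascular death) and the revascularization status ("CR" or "IR").
n = 200
df = pd.DataFrame({
    "revasc": np.repeat(["CR", "IR"], n // 2),
    "years": np.minimum(rng.exponential([20] * (n // 2) + [14] * (n // 2)), 10.0),
})
df["cv_death"] = (df["years"] < 10.0).astype(int)   # events observed within 10 years

cr, ir = df[df["revasc"] == "CR"], df[df["revasc"] == "IR"]

kmf = KaplanMeierFitter()
kmf.fit(cr["years"], event_observed=cr["cv_death"], label="complete revascularization")
print(kmf.survival_function_.iloc[-1])    # estimated 10-year survival, CR

kmf.fit(ir["years"], event_observed=ir["cv_death"], label="incomplete revascularization")
print(kmf.survival_function_.iloc[-1])    # estimated 10-year survival, IR

# Log-rank test for a difference between the two survival curves.
result = logrank_test(cr["years"], ir["years"],
                      event_observed_A=cr["cv_death"],
                      event_observed_B=ir["cv_death"])
print("log-rank p-value:", result.p_value)
```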
Abstract:
In Kantor and Trishin (1997) [3], the authors described the algebra of polynomial invariants of the adjoint representation of the Lie superalgebra gl(m|n) and a related algebra A_s of what they called pseudosymmetric polynomials over an algebraically closed field K of characteristic zero. The algebra A_s was investigated earlier by Stembridge (1985), who in [9] called the elements of A_s supersymmetric polynomials and determined generators of A_s. The case of positive characteristic p of the ground field K has recently been investigated by La Scala and Zubkov (in press) in [6]. We extend their work and give a complete description of generators of polynomial invariants of the adjoint action of the general linear supergroup GL(m|n) and generators of A_s.
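As a concrete illustration of what "supersymmetric" means in Stembridge's sense cited above (assuming the usual cancellation characterisation: substituting x_1 = y_1 = t must produce an expression independent of t), the following sympy sketch checks that the super power sums p_k = Σ_i x_i^k − Σ_j y_j^k satisfy that property.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.symbols('x1 x2 x3')
y = sp.symbols('y1 y2')

def power_sum(k):
    # p_k(x | y) = sum_i x_i^k - sum_j y_j^k
    return sum(xi**k for xi in x) - sum(yj**k for yj in y)

# Cancellation property: after setting x1 = y1 = t, the result must not depend on t.
for k in range(1, 5):
    g = power_sum(k).subs({x[0]: t, y[0]: t})
    assert sp.diff(sp.expand(g), t) == 0
print("p_1..p_4 satisfy the cancellation property")
```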
Abstract:
Although it is known that obesity, diabetes, and Kawasaki's disease play important roles in systemic inflammation and in the development of both endothelial dysfunction and cardiomyopathy, there is a lack of data regarding the endothelial function of pre-pubertal children suffering from cardiomyopathy. In this study, we performed a systematic review of the literature to assess the endothelial function of pre-pubertal children at risk of developing cardiomyopathy. We searched the published literature indexed in PubMed, Bireme and SciELO using the keywords 'endothelial', 'children', 'pediatric' and 'infant' and then compiled a systematic review. The end points were age, pubertal stage, sex differences, the method used for the endothelial evaluation and the endothelial values themselves. No studies on children with cardiomyopathy were found. Only 11 papers were selected for complete analysis; these included reports of the flow-mediated percentage dilatation, the values of which were 9.80±1.80, 5.90±1.29, 4.50±0.70, and 7.10±1.27 for healthy, obese, diabetic and pre-pubertal children with Kawasaki's disease, respectively. There was no significant difference in the endothelium-independent dilatation either among the groups or between the genders for either measurement in children; similar results have been found in adolescents and adults. The endothelial function of cardiomyopathic children remains unclear because of the lack of data; nevertheless, the known dysfunctions in children with obesity, type 1 diabetes and Kawasaki's disease may influence the severity of the cardiovascular symptoms, the prognosis, and the mortality rate. The results of this study encourage future research into the consequences of endothelial dysfunction in pre-pubertal children.
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still deserve an interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
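To make the notions of contact map, characteristic path length and clustering coefficient concrete, here is a minimal sketch that builds a residue contact network from C-alpha coordinates and computes the two small-world descriptors; the 8 Å cutoff, the random stand-in coordinates and the use of networkx are my assumptions for illustration, not the settings of the thesis.

```python
import numpy as np
import networkx as nx

def contact_map(ca_coords, cutoff=8.0):
    """Boolean contact map from an (N, 3) array of C-alpha coordinates."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    cmap = dist < cutoff
    np.fill_diagonal(cmap, False)
    return cmap

def small_world_descriptors(cmap):
    """Characteristic path length and clustering coefficient of the contact network."""
    graph = nx.from_numpy_array(cmap.astype(int))
    if not nx.is_connected(graph):
        # restrict to the largest connected component so path lengths are defined
        graph = graph.subgraph(max(nx.connected_components(graph), key=len))
    length = nx.average_shortest_path_length(graph)
    clustering = nx.average_clustering(graph)
    return length, clustering

if __name__ == "__main__":
    coords = np.random.rand(60, 3) * 30.0   # stand-in for real C-alpha coordinates
    L, C = small_world_descriptors(contact_map(coords))
    print(f"characteristic path length = {L:.2f}, clustering coefficient = {C:.2f}")
```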
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates on the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
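A minimal sketch of the cluster-building step described above is given here; the hit tuples, thresholds and use of connected components are illustrative assumptions rather than the exact constraints adopted in the thesis.

```python
import networkx as nx

# Hypothetical pairwise hits: (query, subject, percent_identity, alignment_coverage),
# e.g. derived from a cross-genome BLAST comparison.
hits = [
    ("P1", "P2", 92.0, 0.95),
    ("P2", "P3", 88.5, 0.91),
    ("P4", "P5", 35.0, 0.40),   # fails both constraints
]

MIN_IDENTITY = 80.0   # illustrative threshold on sequence identity (%)
MIN_COVERAGE = 0.90   # illustrative threshold on coverage of the alignment

graph = nx.Graph()
for query, subject, identity, coverage in hits:
    graph.add_node(query)
    graph.add_node(subject)
    if identity >= MIN_IDENTITY and coverage >= MIN_COVERAGE:
        graph.add_edge(query, subject)

# Clusters of functionally related sequences = connected components of the graph.
clusters = [sorted(c) for c in nx.connected_components(graph)]
print(clusters)   # e.g. [['P1', 'P2', 'P3'], ['P4'], ['P5']]
```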
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend towards more complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing machine industries are located in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is found in the Bologna area, which is therefore called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different production needs and operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, in support of machine maintenance operations. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to a deep confusion between the functional and technological views. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has only lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. During the last years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, in complex systems fault occurrences increase.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important contribution to the formal verification of logic control, fault diagnosis and fault-tolerant control comes from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader in understanding some crucial points of Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
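As a flavour of the Discrete Event Systems viewpoint invoked above, the toy sketch below (the plant model, state names and fault event are hypothetical, not taken from the thesis) represents a component as a finite automaton and answers a typical verification question, namely whether a fault state is reachable, by breadth-first exploration.

```python
from collections import deque

# Hypothetical toy model of an actuated station: states and labelled transitions.
# Events starting with "f_" stand for fault events.
transitions = {
    ("idle",   "start"): "moving",
    ("moving", "done"):  "idle",
    ("moving", "f_jam"): "jammed",
    ("jammed", "reset"): "idle",
}

def reachable_states(initial):
    """Breadth-first exploration of the automaton's reachable state set."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

if __name__ == "__main__":
    reach = reachable_states("idle")
    print("reachable:", sorted(reach))
    # A safety-style question formal verification would ask of the model:
    print("fault state reachable:", "jammed" in reach)
```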
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cut of thin single and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works with the more accurate calculation of optical absorption and shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper are found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
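As an indication of the kind of time-domain thermal calculation such models rest on, the sketch below integrates one-dimensional heat conduction with an absorbed laser flux at the front surface using an explicit finite-difference scheme; the material constants, flux value and simple boundary conditions are placeholders, not the parameters or the full multi-layer model of the thesis.

```python
import numpy as np

# Placeholder thermal properties (roughly aluminium-like) and beam parameters.
k, rho, cp = 235.0, 2700.0, 900.0          # W/(m K), kg/m^3, J/(kg K)
alpha = k / (rho * cp)                      # thermal diffusivity, m^2/s
absorbed_flux = 1.0e11                      # absorbed laser irradiance, W/m^2
depth, n = 50e-6, 200                       # 50 micron slab, 200 nodes
dx = depth / (n - 1)
dt = 0.4 * dx**2 / alpha                    # below the explicit stability limit of 0.5
pulse = 100e-9                              # 100 ns heating interval

T = np.full(n, 300.0)                       # initial temperature, K
t = 0.0
while t < pulse:
    Tn = T.copy()
    # Interior nodes: explicit update of dT/dt = alpha * d2T/dx2.
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Front surface: absorbed flux enters through a ghost-node approximation.
    Tn[0] = T[0] + alpha * dt / dx**2 * (2 * T[1] - 2 * T[0]) \
            + 2 * dt * absorbed_flux / (rho * cp * dx)
    # Back surface kept adiabatic.
    Tn[-1] = T[-1] + alpha * dt / dx**2 * (2 * T[-2] - 2 * T[-1])
    T, t = Tn, t + dt

print(f"surface temperature after {pulse*1e9:.0f} ns: {T[0]:.0f} K")
```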
Abstract:
Glial cells are found in all higher organisms and are essential both for correct development and for the functionality of the adult nervous system. One of the manifold functions of this cell type is the ensheathment of axons in the central and peripheral nervous system (CNS and PNS). To ensure complete ensheathment, glial cells migrate during neurogenesis, in some cases over enormous distances from their place of origin. This applies in particular to the glial cells whose membrane processes insulate the distal axonal regions of the peripheral nerves. In this work, glial cell migration was investigated using the model organism Drosophila. Particular attention was paid to the migration of a distinct population of glial cells, the so-called embryonic peripheral glial cells (ePG). The ePGs are predominantly born in the developing ventral nerve cord and subsequently migrate dorsally along the peripheral nerve tracts in order to ensheath them by the end of embryogenesis, thereby establishing the glial blood-nerve barrier. The main aim of this work was to uncover new factors and mechanisms that regulate ePG migration. To this end, the wild-type course of their migration was first analysed in detail. It turned out that in each abdominal hemisegment an invariant number of 12 ePGs is generated from distinct neural precursor cells; these cells possess individual identities and can be identified at the single-cell level by means of molecular markers. Based on the characteristic position of the cells, a new, consistent nomenclature was established for all ePGs. In addition, in vivo migration analyses revealed that the migration of individual ePGs follows a stereotyped course and is therefore largely predetermined. Precise knowledge of wild-type ePG migration at the single-cell level then served as the basis for detailed mutant analyses. These demonstrated that the transcription factor Castor, which was also used as a molecular marker, acts as a cell-specific determinant for the correct specification of ePG6 and ePG8, and that its loss results in a significant migration defect of these two ePGs. Furthermore, Netrin (NetB) was revealed as the first diffusible, directional guidance factor for ePG migration, which, in interaction with the receptor Uncoordinated5, specifically guides the migration of ePG6 and ePG8. The navigation of ePG6 and ePG8 independently of the remaining glial cells shows that at least the migration of groups of ePGs is controlled by different mechanisms, which is confirmed by the results of the ablation experiments performed. Furthermore, it could be shown that during early gliogenesis a previously unknown source of Netrin provided by neuroblasts is involved in the initial pathfinding of the longitudinal glial cells (a population of neuropil-associated glial cells in the CNS). In this context, the signal is already detected cell-autonomously in their precursor cell, the longitudinal glioblast, via the receptor Frazzled. The detailed description presented in this work provides an important basis for future mutant screens aimed at identifying further factors involved in ePG migration.
Especially in combination with the molecular markers presented, it provides the prerequisite for tracking individual ePGs in mutant backgrounds as well, so that even subtle phenotypes can be detected in the first place and analysed at the single-cell level. Given the demonstrated mutually independent pathfinding, mutant analyses without such tools appear unlikely to succeed, since most mutations presumably impair the migration of only single or few ePGs. Ultimately, this improves the prospect of deciphering further novel migration factors in the model organism Drosophila that may be conserved up to higher organisms and thus contribute to our understanding of glial cell migration in vertebrates.
Abstract:
Let $\pi:X\rightarrow S$ be a family of Calabi-Yau threefolds defined over $\Z$. Suppose there exists a rank-four submodule $M\subset H^3_{DR}(X/S)$, invariant under the Gauss-Manin connection, such that the Picard-Fuchs operator $P$ on $M$ is a so-called {\em Calabi-Yau} operator of order four. Let $k$ be a finite field of characteristic $p$, and let $\pi_0:X_0\rightarrow S_0$ be the reduction of $\pi$ over $k$. For the ordinary fibres $X_{t_0}$ of the family we derive an explicit formula for computing the characteristic polynomial of the Frobenius endomorphism, the {\em Frobenius polynomial}, on the corresponding submodule $M_{cris}\subset H^3_{cris}(X_{t_0})$. Now let $f_0(z)$ be the power series solution of the differential equation $Pf=0$ in a neighbourhood of zero. Since a reciprocal root of the Frobenius polynomial at a Teichmüller point $t$ is given by $f_0(z)/f_0(z^p)|_{z=t}$, a crucial step in the computation of the Frobenius polynomial is the construction of a $p$-adic analytic continuation of the quotient $f_0(z)/f_0(z^p)$ to the boundary of the $p$-adic unit disk. If the coefficients of $f_0$ can be expressed in terms of the constant terms of the powers of a Laurent polynomial whose Newton polyhedron contains the origin as its only interior lattice point, we prove certain congruence properties among the coefficients of $f_0$. These are crucial for the construction of the analytic continuation. If the fibre $X_{t_0}$ contains an ordinary double point, we expect that in the limit the Frobenius polynomial splits into two factors of degree one and one factor of degree two. The degree-two factor is uniquely determined by a coefficient $a_p$. As $p$ ranges over all primes, we expect, by the modularity theorem, that there exists a modular form of weight four whose coefficients are given by the coefficients $a_p$. This expectation has been confirmed by our extensive computations. Furthermore, we derive additional formulas for determining the Frobenius polynomial in which the non-holomorphic solutions of the equation $Pf=0$ in a neighbourhood of zero also play a role.
Investigations into the function of multiple DnaJ proteins in the cyanobacterium Synechocystis sp. PCC 6803
Abstract:
Multiple DnaJ proteins have been detected both in Synechocystis sp. PCC 6803 and in other cyanobacteria, yet their function is still largely not understood. In this work, the functions of the multiple DnaJ proteins of Synechocystis sp. were characterised. Based on its domain structure, the DnaJ protein Sll0897 belongs to the type I proteins, Slr0093 and Sll1933 to the type II proteins, and Sll0909, Sll1011, Sll1384 and Sll1666 to the type III DnaJ proteins. Complementation studies with the E. coli ΔdnaJ strain OD259 showed that the proteins Slr0093 and Sll0897 complement the growth defect at elevated temperatures. In Synechocystis, a complete disruption of sll1933 was not possible, indicating that the protein Sll1933 is essential under normal growth conditions. Double insertion mutations were possible only for the combination of the genes sll0909 and sll1384. Analyses of the growth behaviour of the dnaJ disruption strains under heat and cold stress conditions showed that the protein Sll0897 plays an important role in the stress response of Synechocystis and is essential under heat stress conditions. A complete deletion of the gene sll0897 was not possible in Synechocystis sp. even under normal growth conditions. The domains of Sll0897 minimally required for growth are the characteristic J domain and the glycine/phenylalanine-rich domain. Under heat stress conditions, the full-length protein Sll0897 is essential for growth. In addition to the in vivo growth experiments, a method was established for the heterologous expression of the seven DnaJ proteins in E. coli and for native purification of Slr0093, Sll0897, Sll0909 and Sll1666. Thermostability analyses of the purified proteins showed a reversible process for Slr0093 and Sll1666, which allows them to continue acting as folding helpers after heat stress. For the proteins Sll0897 and Sll0909, however, the process is not reversible, so that after exposure to heat stress they must be newly synthesised or correctly refolded by chaperone action. The affinity pull-down analyses did not provide clear evidence for the DnaK interaction partners of the proteins Slr0093, Sll0897, Sll0909 and Sll1666, so further studies are necessary. Gel filtration analyses confirmed the calculated molar masses of the proteins Slr0093 and Sll1666 and showed both proteins to be present in monomeric form. The DnaJ proteins Sll0897 and Sll0909 were detected in two oligomeric states. Analyses of the ATPase activity of the DnaK2 protein alone and of DnaK2 together with the DnaJ proteins Slr0093, Sll0897, Sll0909 and Sll1666 showed an increase in the ATP hydrolysis rate upon interaction of DnaK and DnaJ, with Sll0897 inducing the largest increase in the ATPase activity of DnaK2.
Abstract:
BACKGROUND One aspect of a multidimensional approach to understanding asthma as a complex dynamic disease is to study how lung function varies with time. Variability measures of lung function have been shown to predict response to beta(2)-agonist treatment. An investigation was conducted to determine whether the mean, coefficient of variation (CV) or autocorrelation, a measure of short-term memory, of peak expiratory flow (PEF) could predict loss of asthma control following withdrawal of regular inhaled corticosteroid (ICS) treatment, using data from a previous study. METHODS 87 adult patients with mild to moderate asthma who had been taking ICS at a constant dose for at least 6 months were monitored for 2-4 weeks. ICS was then withdrawn and monitoring continued until loss of control occurred as per predefined criteria. Twice-daily PEF was recorded during monitoring. Associations between loss of control and the mean, CV and autocorrelation of morning PEF within 2 weeks pre- and post-ICS withdrawal were assessed using Cox regression analysis. Predictive utility was assessed using receiver operating characteristic analysis. RESULTS 53 of the 87 patients had sufficient PEF data over the required analysis period. The mean (389 vs 370 l/min, p<0.0001) and CV (4.5% vs 5.6%, p=0.007), but not the autocorrelation, of PEF changed significantly from pre-withdrawal to post-withdrawal in subjects who subsequently lost control, and were unaltered in those who did not. These changes were related to time to loss of control. CV was the most consistent predictor, with sensitivity and specificity similar to those of exhaled nitric oxide. CONCLUSION A simple, easy-to-obtain variability measure of daily lung function such as the CV may predict loss of asthma control within the first 2 weeks of ICS withdrawal.
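The three candidate predictors are straightforward to compute from a morning PEF series; a minimal sketch is given below, with illustrative readings rather than study data.

```python
import pandas as pd

def pef_variability(pef):
    """Mean, coefficient of variation and lag-1 autocorrelation of a PEF series."""
    pef = pd.Series(pef).dropna()
    mean = pef.mean()
    cv = pef.std(ddof=1) / mean * 100.0          # coefficient of variation, %
    autocorr = pef.autocorr(lag=1)               # short-term memory of the series
    return mean, cv, autocorr

# Illustrative two weeks of morning PEF readings (l/min).
readings = [385, 392, 378, 390, 395, 401, 388, 376, 380, 393, 399, 387, 382, 391]
m, cv, r1 = pef_variability(readings)
print(f"mean = {m:.0f} l/min, CV = {cv:.1f} %, lag-1 autocorrelation = {r1:.2f}")
```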
Abstract:
The aim of this study was to compare craniofacial morphology and soft tissue profiles in patients with complete bilateral cleft lip and palate at 9 years of age, treated in two European cleft centres with delayed hard palate closure but different treatment protocols. The cephalometric data of 83 consecutively treated patients were compared (Gothenburg, N=44; Nijmegen, N=39). In total, 18 hard tissue and 10 soft tissue landmarks were digitized by one operator. To determine the intra-observer reliability, 20 cephalograms were digitized twice, one month apart. Paired t-tests, Pearson correlation coefficients and multiple regression models were applied for statistical analysis. Hard and soft tissue data were superimposed using Generalized Procrustes Analysis. In Nijmegen, the maxilla was more protrusive for hard and soft tissue values (P=0.001, P=0.030, respectively) and the maxillary incisors were retroclined (P<0.001), influencing the nasolabial angle, which was increased in comparison with Gothenburg (P=0.004). In conclusion, both centres showed a favourable craniofacial form at 9-10 years of age, although there were significant differences in maxillary prominence, incisor inclination and soft tissue cephalometric values. Follow-up of these patients until facial growth has ceased may elucidate components for outcome improvement.
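The superimposition step referred to above reduces, for a pair of landmark configurations, to centring, scaling and an optimal rotation; the numpy sketch below shows this ordinary Procrustes step for two hypothetical landmark sets and is an illustration only, not the centres' analysis pipeline (Generalized Procrustes Analysis iterates this alignment against an evolving mean shape).

```python
import numpy as np

def procrustes_align(reference, target):
    """Align `target` landmarks (N, 2) onto `reference` by translation, scaling and rotation."""
    ref = reference - reference.mean(axis=0)
    tgt = target - target.mean(axis=0)
    ref /= np.linalg.norm(ref)                     # unit centroid size
    tgt /= np.linalg.norm(tgt)
    u, _, vt = np.linalg.svd(tgt.T @ ref)          # optimal rotation (SVD solution)
    rotation = u @ vt                              # note: reflections are not excluded here
    aligned = tgt @ rotation
    distance = np.linalg.norm(ref - aligned)       # Procrustes distance between shapes
    return aligned, distance

# Two hypothetical 4-landmark configurations (e.g. digitized cephalometric points).
a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([[0.2, 0.1], [1.3, 0.3], [1.1, 1.4], [0.1, 1.2]])
aligned, d = procrustes_align(a, b)
print("Procrustes distance:", round(d, 4))
```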
Abstract:
AIM: To assess functional impairment in terms of visual acuity reduction and visual field defects in inactive ocular toxoplasmosis. METHODS: 61 patients with known ocular toxoplasmosis in a quiescent state were included in this prospective, cross-sectional study. A complete ophthalmic examination, retinal photodocumentation and standard automated perimetry (Octopus perimeter, program G2) were performed. Visual acuity was classified on the basis of the World Health Organization definition of visual impairment and blindness: normal (> or =20/25), mild (20/25 to 20/60), moderate (20/60 to 20/400) and severe (<20/400). Visual field damage was correspondingly graded as mild (mean defect <4 dB), moderate (mean defect 4-12 dB) or severe (mean defect >12 dB). RESULTS: 8 (13%) patients presented with bilateral ocular toxoplasmosis. Thus, a total of 69 eyes was evaluated. Visual field damage was encountered in 65 (94%) eyes, whereas only 28 (41%) eyes had reduced visual acuity, showing perimetric findings to be more sensitive in detecting chorioretinal damage (p<0.001). Correlation with the clinical localisation of chorioretinal scars was better for visual field (in 70% of the instances) than for visual acuity (33%). Moderate to severe functional impairment was registered in 65.2% for visual field, and in 27.5% for visual acuity. CONCLUSION: In its quiescent stage, ocular toxoplasmosis was associated with permanent visual field defects in >94% of the eyes studied. Hence, standard automated perimetry may better reflect the functional damage encountered by ocular toxoplasmosis than visual acuity.
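The grading scheme stated above maps directly onto a small helper; the following snippet simply restates the cut-offs from the abstract (the function names and example values are mine).

```python
def grade_visual_acuity(acuity):
    """WHO-based grading used in the study; `acuity` is a decimal fraction (20/20 = 1.0)."""
    if acuity >= 20 / 25:
        return "normal"
    if acuity >= 20 / 60:
        return "mild"
    if acuity >= 20 / 400:
        return "moderate"
    return "severe"

def grade_visual_field(mean_defect_db):
    """Grading of perimetric damage by mean defect (dB), as in the abstract."""
    if mean_defect_db < 4:
        return "mild"
    if mean_defect_db <= 12:
        return "moderate"
    return "severe"

print(grade_visual_acuity(20 / 30), grade_visual_field(6.5))   # -> mild moderate
```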
Abstract:
Teeth are brittle and highly susceptible to cracking. We propose that observations of such cracking can be used as a diagnostic tool for predicting bite force and inferring tooth function in living and fossil mammals. Laboratory tests on model tooth structures and extracted human teeth in simulated biting identify the principal fracture modes in enamel. Examination of museum specimens reveals the presence of similar fractures in a wide range of vertebrates, suggesting that cracks extended during ingestion or mastication. The use of ‘fracture mechanics’ from materials engineering provides elegant relations for quantifying critical bite forces in terms of characteristic tooth size and enamel thickness. The role of enamel microstructure in determining how cracks initiate and propagate within the enamel (and beyond) is discussed. The picture emerges of teeth as damage-tolerant structures, full of internal weaknesses and defects and yet able to contain the expansion of seemingly precarious cracks and fissures within the enamel shell. How the findings impact on dietary pressures forms an undercurrent of the study.
Abstract:
The purpose of this study was to evaluate observer performance in the detection of pneumothorax with cesium iodide and amorphous silicon flat-panel detector radiography (CsI/a-Si FDR) presented as 1K and 3K soft-copy images. Forty patients with and 40 patients without pneumothorax diagnosed on previous and subsequent digital storage phosphor radiography (SPR, gold standard) had follow-up chest radiographs with CsI/a-Si FDR. Four observers confirmed or excluded the diagnosis of pneumothorax according to a five-point scale, first on the 1K soft-copy image and then with the help of a 3K zoom function (on a 1K monitor). Receiver operating characteristic (ROC) analysis was performed for each modality (1K and 3K). The area under the curve (AUC) values for the four observers were 0.7815, 0.7779, 0.7946 and 0.7066 with 1K-matrix soft copies and 0.8123, 0.7997, 0.8078 and 0.7522 with 3K zoom. Overall detection of pneumothorax was better with 3K zoom. Differences between the two display methods were not statistically significant for 3 of 4 observers (p-values between 0.13 and 0.44; observer 4: p = 0.02). The detection of pneumothorax with 3K zoom is better than with 1K soft copy, but not at a statistically significant level. Differences between the two display methods may be subtle. Still, our results indicate that 3K zoom should be employed in clinical practice.
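The observer-performance figures above come from ROC analysis of the five-point confidence ratings; a minimal sketch of such an AUC computation with scikit-learn, on synthetic ratings rather than the study data, is:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic example: 40 cases with and 40 without pneumothorax, rated on a
# five-point confidence scale (1 = definitely absent ... 5 = definitely present).
truth = np.array([1] * 40 + [0] * 40)
ratings = np.clip(np.round(rng.normal(loc=truth * 1.2 + 2.5, scale=1.0)), 1, 5)

auc = roc_auc_score(truth, ratings)
fpr, tpr, thresholds = roc_curve(truth, ratings)
print(f"AUC = {auc:.4f}")
print("operating points (FPR, TPR):", list(zip(fpr.round(2), tpr.round(2))))
```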
Abstract:
BACKGROUND AND OBJECTIVES: Data suggest that atorvastatin may be nephroprotective. This subanalysis of the Treating to New Targets study investigated how intensive lipid lowering with 80 mg of atorvastatin affects renal function when compared with 10 mg in patients with coronary heart disease. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: A total of 10,001 patients with coronary heart disease and LDL cholesterol levels of <130 mg/dl were randomly assigned to double-blind therapy with 10 or 80 mg/d atorvastatin. Estimated GFR using the Modification of Diet in Renal Disease equation was compared at baseline and at the end of follow-up in 9656 participants with complete renal data. RESULTS: Mean estimated GFR at baseline was 65.6 +/- 11.4 ml/min per 1.73 m2 in the 10-mg group and 65.0 +/- 11.2 ml/min per 1.73 m2 in the 80-mg group. At the end of follow-up (median time to final creatinine measurement 59.5 months), mean estimated GFR had increased by 3.5 +/- 0.14 ml/min per 1.73 m2 with 10 mg and by 5.2 +/- 0.14 ml/min per 1.73 m2 with 80 mg (P < 0.0001 for treatment difference). In the 80-mg arm, estimated GFR improved to > or = 60 ml/min per 1.73 m2 in significantly more patients and declined to <60 ml/min per 1.73 m2 in significantly fewer patients than in the 10-mg arm. CONCLUSIONS: The expected 5-yr decline in renal function was not observed. Estimated GFR improved in both treatment groups but significantly more so with 80 mg than with 10 mg, suggesting this benefit may be dosage related.
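For reference, a commonly cited four-variable form of the MDRD study equation mentioned above is sketched below; the 175 coefficient (IDMS-traceable calibration) is an assumption, since the abstract does not state which calibration was used, and older analyses used 186.

```python
def egfr_mdrd(creatinine_mg_dl, age_years, female=False, black=False):
    """Estimated GFR (ml/min per 1.73 m^2) from the 4-variable MDRD study equation.

    Assumes the IDMS-traceable calibration (coefficient 175); older analyses used 186.
    """
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: 1.1 mg/dl serum creatinine in a 62-year-old woman.
print(round(egfr_mdrd(1.1, 62, female=True), 1), "ml/min per 1.73 m^2")
```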