971 results for ALS data-set


Relevance: 90.00%

Abstract:

Matrix metalloproteinases (MMPs) constitute a family of zinc-dependent proteases involved in extracellular matrix degradation. MMP-2 and MMP-9 are overexpressed in several human cancer types, including melanoma, so the development of new compounds to inhibit MMP activity is desirable. Molecular dynamics simulations and molecular property calculations were performed on a set of novel beta-N-biaryl ether sulfonamide-based hydroxamates, reported as MMP-2 and MMP-9 inhibitors, to provide data for an exploratory analysis. Thermodynamic, electronic, and steric descriptors significantly discriminated highly active from moderately and less active inhibitors of MMP-2, whereas the apparent partition coefficient at pH 1.5 was also significant for the MMP-9 data set. Compound 47 was an outlier in all analyses, indicating that a bulky substituent group at R3 is crucial for this set of inhibitors to establish molecular interactions with the S1 subsite of both enzymes, although there is a size limit.
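A minimal sketch of the kind of exploratory analysis this abstract describes, assuming autoscaled descriptors and an unsupervised PCA projection; the descriptor matrix, activity labels, and library choices below are illustrative, not the authors' data or pipeline:

```python
# Illustrative only: invented descriptor matrix and activity labels.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))          # rows: compounds; cols: descriptors
activity = rng.choice(["high", "moderate", "low"], size=40)

X_std = StandardScaler().fit_transform(X)   # autoscale the descriptors
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)

print("explained variance:", pca.explained_variance_ratio_)
for cls in ("high", "moderate", "low"):
    print(cls, scores[activity == cls].mean(axis=0))  # class centroid in PC space
```

An outlier such as the compound 47 described above would show up as a point far from its class centroid in the score plot.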

Relevance: 90.00%

Abstract:

In this paper, we propose nonlinear elliptical models for correlated data with heteroscedastic and/or autoregressive structures. Our aim is to extend the models proposed by Russo et al. [22] by considering a more sophisticated scale structure to deal with variations in data dispersion and/or possible autocorrelation among measurements taken on the same experimental unit. Moreover, to avoid the possible influence of outlying observations, or to take into account the non-normal symmetric tails of the data, we assume elliptical contours for the joint distribution of random effects and errors, which allows us to attribute different weights to the observations. We propose an iterative algorithm to obtain the maximum-likelihood estimates of the parameters and derive the local influence curvatures for some specific perturbation schemes. The motivation for this work comes from a pharmacokinetic indomethacin data set, previously analysed under normality by Bocheng and Xuping [1].
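As a rough illustration of the robustness idea (heavier-than-Gaussian elliptical tails downweight outliers), here is a sketch that fits a biexponential pharmacokinetic curve by maximum likelihood under Student-t errors; this is not the authors' iterative algorithm, and the data, starting values, and fixed degrees of freedom are all assumptions:

```python
# Sketch under stated assumptions: Student-t errors (df fixed at 4) as the
# elliptical distribution; biexponential mean curve; invented observations.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

time = np.array([0.25, 0.5, 0.75, 1.0, 1.25, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0])
conc = 2.8 * np.exp(-1.6 * time) + 0.6 * np.exp(-0.15 * time)
conc = conc + np.random.default_rng(1).normal(scale=0.05, size=time.size)

def negloglik(params):
    a, alpha, b, beta, log_sigma = params
    mu = a * np.exp(-alpha * time) + b * np.exp(-beta * time)
    # heavier-than-normal tails downweight outlying concentrations
    return -student_t.logpdf(conc, df=4, loc=mu, scale=np.exp(log_sigma)).sum()

fit = minimize(negloglik, x0=[2.0, 1.0, 0.5, 0.1, np.log(0.1)], method="Nelder-Mead")
print("a, alpha, b, beta, sigma:", fit.x[:4], np.exp(fit.x[4]))
```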

Relevance: 90.00%

Abstract:

In this work, 50 ceramic fragments from the Lago Grande archaeological site and 30 from the Osvaldo site were compared to assess elemental similarities. The aim is to perform a preliminary comparison between the sites, which are located in the central Amazon, Brazil. The elemental composition of the ceramics was obtained by instrumental neutron activation analysis (INAA). The resulting data set was explored with the multivariate statistical techniques of cluster, principal component and discriminant analysis. The analyzed elements were Na, Lu, U, Yb, La, Th, Cr, Cs, Sc, Fe, Eu, Ce and Hf. The results showed the existence of at least two compositional groups for Lago Grande and Osvaldo, and each compositional group from the Osvaldo site matches one group from Lago Grande. Combined with the archaeological background, the results suggest commercial or cultural exchange in the region, indicative of socio-cultural interactions between the sites.
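A hedged sketch of the multivariate workflow named in the abstract (standardization, PCA, hierarchical clustering); the sherd concentrations are simulated, and the log10 pretreatment is an assumption, not a detail taken from the paper:

```python
# Simulated sherd concentrations; log10 pretreatment assumed.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

elements = ["Na", "Lu", "U", "Yb", "La", "Th", "Cr",
            "Cs", "Sc", "Fe", "Eu", "Ce", "Hf"]
rng = np.random.default_rng(2)
X = rng.lognormal(mean=0.0, sigma=0.5, size=(80, len(elements)))  # 80 sherds

Z = StandardScaler().fit_transform(np.log10(X))
pca = PCA(n_components=2).fit(Z)
print("variance kept by PC1+PC2:", pca.explained_variance_ratio_.sum())

groups = fcluster(linkage(Z, method="ward"), t=2, criterion="maxclust")
print("candidate compositional group sizes:", np.bincount(groups)[1:])
```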

Relevance: 90.00%

Abstract:

Dimensionality reduction is employed in visual data analysis as a way to obtain reduced spaces for high-dimensional data or to map data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool for retrieving latent spaces that focus on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data are important for coding the user's knowledge into the loop; however, this work also devises a strategy for calculating PLS reduced spaces when no training data are available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and is capable of working with small and unbalanced training sets.
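A minimal sketch of the central mechanism described here: learn a PLS projection from a small labelled training set, then apply it to a much larger unlabelled set to obtain 2D coordinates for visualization. The data, one-hot label coding, and scikit-learn usage are assumptions, not the authors' implementation:

```python
# Synthetic stand-in data; labels one-hot coded for multi-class PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X_train = rng.normal(size=(60, 100))     # small labelled training set
y_train = rng.integers(0, 3, size=60)    # user-supplied class labels
X_all = rng.normal(size=(5000, 100))     # much larger unlabelled set

Y = np.eye(3)[y_train]                   # one-hot class indicator matrix
pls = PLSRegression(n_components=2).fit(X_train, Y)

coords = pls.transform(X_all)            # 2-D coordinates for any plotter
print(coords.shape)                      # (5000, 2)
```

Because PLS maximizes covariance with the class indicators rather than raw variance, the learned axes emphasize discriminability, which is what lets user-supplied labels steer the layout.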

Relevance: 90.00%

Abstract:

A procedure was proposed by Ciotti and Bricaud (2006) to retrieve spectral absorption coefficients of phytoplankton and colored detrital matter (CDM) from satellite radiance measurements. This was also the first procedure to estimate a size factor for phytoplankton, based on the shape of the retrieved algal absorption spectrum, and the spectral slope of CDM absorption. Applying this method to the global ocean color data set acquired by SeaWiFS over twelve years (1998-2009) allowed a comparison of the spatial variations of chlorophyll concentration ([Chl]), algal size factor (S-f), CDM absorption coefficient (a(cdm)) at 443 nm, and spectral slope of CDM absorption (S-cdm). As expected, correlations between the derived parameters were characterized by a large scatter at the global scale. We compared the temporal variability of the spatially averaged parameters over the twelve-year period for three oceanic areas of biogeochemical importance: the Eastern Equatorial Pacific, the North Atlantic and the Mediterranean Sea. In all areas, both S-f and a(cdm)(443) showed large seasonal and interannual variations, generally correlated with those of algal biomass. The CDM maxima appeared on some occasions to last longer than those of [Chl]. The spectral slope of CDM absorption showed very large seasonal cycles consistent with photobleaching, challenging the assumption of a constant slope commonly used in bio-optical models. In the Equatorial Pacific, the seasonal cycles of [Chl], S-f, a(cdm)(443) and S-cdm, as well as the relationships between these parameters, were strongly affected by the 1997-98 El Niño/La Niña event.
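For concreteness, CDM absorption is conventionally modelled as an exponential in wavelength, a_cdm(λ) = a_cdm(443)·exp(−S_cdm(λ − 443)); the sketch below fits that slope to a synthetic spectrum. The wavelengths, noise level, and fitting routine are illustrative, not the retrieval procedure of Ciotti and Bricaud:

```python
# Synthetic spectrum; fit a_cdm(443) and the slope S_cdm jointly.
import numpy as np
from scipy.optimize import curve_fit

wavelengths = np.arange(412.0, 555.0, 10.0)  # nm

def a_cdm(wl, a443, slope):
    return a443 * np.exp(-slope * (wl - 443.0))

truth = a_cdm(wavelengths, 0.02, 0.017)
noisy = truth * (1 + np.random.default_rng(4).normal(scale=0.03, size=wavelengths.size))

popt, _ = curve_fit(a_cdm, wavelengths, noisy, p0=[0.01, 0.015])
print("a_cdm(443) = %.4f 1/m, S_cdm = %.4f 1/nm" % tuple(popt))
```

A seasonally varying fitted S_cdm, as reported above, is exactly what challenges bio-optical models that hold this slope constant.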

Relevance: 90.00%

Abstract:

A common interest in gene expression data analysis is to identify, from a large pool of candidate genes, the genes that present significant changes in expression levels between a treatment and a control biological condition. Usually this is done with a test statistic and a cutoff value that separate differentially from nondifferentially expressed genes. In this paper, we propose a Bayesian approach that identifies differentially expressed genes by sequentially calculating credibility intervals from predictive densities; these densities are constructed using the sampled mean treatment effect from all genes under study, excluding the treatment effects of genes previously identified as showing statistical evidence of difference. We compare our Bayesian approach with the standard ones based on the t-test and modified t-tests via a simulation study, using small sample sizes, which are common in gene expression data analysis. The results provide evidence that the proposed approach performs better than the standard ones, especially for cases with mean differences and increases in treatment variance relative to control variance. We also apply the methodologies to a well-known publicly available data set on the bacterium Escherichia coli.
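A deliberately simplified sketch of the sequential idea (an empirical normal interval stands in for the paper's predictive densities, and the 99.9% level is an arbitrary choice): repeatedly build an interval from the effects of all not-yet-flagged genes and flag the most extreme gene falling outside it; the effects are simulated:

```python
# Simulated effects; the interval construction is a crude stand-in for the
# paper's predictive densities.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
effects = rng.normal(0.0, 1.0, size=500)   # treatment-minus-control effects
effects[:10] += 5.0                        # ten truly changed genes

flagged, remaining = [], list(range(effects.size))
while True:
    pool = effects[remaining]
    centre, sd = pool.mean(), pool.std(ddof=1)
    half_width = norm.ppf(0.9995) * sd     # symmetric 99.9% normal interval
    j = int(np.argmax(np.abs(pool - centre)))
    if abs(pool[j] - centre) <= half_width:
        break                              # nothing left outside the interval
    flagged.append(remaining.pop(j))       # flag gene and rebuild the interval

print("flagged genes:", sorted(flagged))
```

The key feature carried over from the abstract is that each newly flagged gene is excluded before the next interval is built, so strong signals do not inflate the reference density.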

Relevance: 90.00%

Abstract:

Statistical methods have been widely employed to assess the capabilities of credit scoring classification models, in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients, and information-theoretical measures such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression model and the logistic regression with state-dependent sample selection model. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression model always underestimating such probabilities, particularly in the presence of balanced samples.
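A sketch of the naive side of this comparison under invented portfolio data: fit a plain logistic regression and report sensitivity, specificity, and accuracy. The state-dependent sample-selection model of Cramer (2004) is not reproduced here:

```python
# Simulated portfolio with rare defaults; "naive" = ordinary logit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 4))                           # borrower covariates
p = 1.0 / (1.0 + np.exp(-(X @ np.array([1.2, -0.8, 0.5, 0.3]) - 2.0)))
y = rng.binomial(1, p)                                   # 1 = default

model = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:   ", (tp + tn) / len(y))
print("mean estimated default probability:", model.predict_proba(X)[:, 1].mean())
```

Comparing the distribution of `predict_proba` outputs across competing models is the kind of check that reveals the systematic underestimation reported above, even when headline classification measures look alike.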

Relevance: 90.00%

Abstract:

In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic model of radiation carcinogenesis: latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process of tumour cells after an initial treatment, or the capacity of an individual exposed to irradiation to repair altered cells that would result in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the individual's repair system. Markov chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Some discussion of model selection and an illustration with the cutaneous melanoma data set analysed by Rodrigues et al. are also presented.
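For orientation, the promotion time cure model that this formulation generalises (a standard result, not specific to this article) assumes N ~ Poisson(θ) latent altered cells with i.i.d. activation times of c.d.f. F(t), giving the population survival function

$$ S_{\mathrm{pop}}(t) = E\big[(1 - F(t))^{N}\big] = \exp\{-\theta F(t)\}, \qquad S_{\mathrm{pop}}(\infty) = e^{-\theta}, $$

so the cure fraction is e^{-θ}; the compound weighted Poisson law above replaces this Poisson assumption to gain the extra flexibility in dispersion that the abstract claims.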

Relevance: 90.00%

Abstract:

Background: A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to analyse gene regulatory interactions using the Boolean network model and time-series data. The Boolean network is restricted in the sense that only a subset of all possible Boolean functions is considered. We explore some mathematical properties of the restricted Boolean networks in order to avoid a full-search approach. The problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it. Results: We applied the proposed algorithm to two data sets. First, we used an artificial data set obtained from a model of the budding yeast cell cycle. The second data set is derived from experiments performed using HeLa cells. The results show that some interactions can be fully, or at least partially, determined under the Boolean model considered. Conclusions: The proposed algorithm can be used as a first step for the detection of gene/protein interactions. It is able to infer gene relationships from time-series data of gene expression, and this inference process can be aided by a priori knowledge.
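A toy sketch of the constraint-filtering idea: for each target gene, keep only the regulator pairs and restricted Boolean functions (here, AND/OR of possibly negated inputs, an assumed function class) consistent with every observed transition of a small invented time series:

```python
# Toy series and a deliberately small function class (an assumption).
from itertools import combinations, product

series = [(0, 0, 1), (1, 0, 1), (1, 1, 0), (0, 1, 0), (0, 0, 1)]
n = 3
FUNCS = {"AND": lambda a, b: a and b, "OR": lambda a, b: a or b}

for target in range(n):
    consistent = []
    for (r1, r2), (name, f), (s1, s2) in product(
            combinations(range(n), 2), FUNCS.items(), product((0, 1), repeat=2)):
        ok = all(
            f(series[t][r1] ^ s1, series[t][r2] ^ s2) == series[t + 1][target]
            for t in range(len(series) - 1)
        )
        if ok:  # (regulators, negation flags, function) survives all constraints
            consistent.append((r1, s1, r2, s2, name))
    print("gene", target, "candidate rules:", consistent)
```

When several rules survive, the interaction is only partially determined, matching the outcome described in the Results; a proper CSP solver replaces this brute-force enumeration at scale.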

Relevance: 90.00%

Abstract:

The information provided by the International Commission for the Conservation of Atlantic Tunas (ICCAT) on captures of skipjack tuna (Katsuwonus pelamis) in the central-east Atlantic has a number of limitations, such as gaps in the statistics for certain fleets and the level of spatiotemporal detail at which catches are reported. As a result, the quality of these data and their effectiveness for providing management advice are limited. In order to reconstruct missing spatiotemporal catch data, the present study uses Data INterpolating Empirical Orthogonal Functions (DINEOF), a technique for missing-data reconstruction applied here for the first time to fisheries data. DINEOF is based on an Empirical Orthogonal Functions decomposition performed with a Lanczos method. DINEOF was tested with different amounts of missing data, intentionally removing from 3.4% to 95.2% of the values, and the reconstructions were then compared with the complete data set. These validation analyses show that DINEOF is a reliable methodological approach to data reconstruction for the purposes of fishery management advice, even when the amount of missing data is very high.
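A sketch of the core DINEOF iteration, assuming a plain truncated SVD in place of the Lanczos solver and a fixed number of EOFs (the real method selects it by cross-validation); the catch matrix is synthetic:

```python
# Synthetic low-rank "catch" matrix with 30% of entries removed.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))
mask = rng.random(X.shape) < 0.3                 # True where data are missing
obs = np.where(mask, np.nan, X)

filled = np.where(mask, np.nanmean(obs), obs)    # initial guess: global mean
for _ in range(200):
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    recon = (U[:, :3] * s[:3]) @ Vt[:3]          # truncate to 3 EOF modes
    updated = np.where(mask, recon, obs)         # overwrite gaps only
    if np.max(np.abs(updated - filled)) < 1e-8:
        break
    filled = updated

print("RMS error on the gaps:", np.sqrt(np.mean((filled[mask] - X[mask]) ** 2)))
```

Because only the missing entries are updated at each pass, the observed catches are honoured exactly while the low-rank EOF structure fills the gaps.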

Relevance: 90.00%

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the groups of individuals. Even though the methods to analyse these data are today well developed and close to reaching a standard organization (through the efforts of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches that address open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality controls to the preprocessing steps used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing most of their problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes; however, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000 based on its recurrence as differentially expressed in the 1,000 lists. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score > 300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data (a sketch of the resampling scheme is given below). Chapter 4 describes a method to evaluate similarities in a three-class problem by means of the Relevance Vector Machine (RVM) [4]. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role; in some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second. In this work we show that the RVM [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class, G2, as a test set to obtain the probability for each G2 sample of being a member of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
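A hedged sketch of the MultiSAM resampling scheme described above, with an ordinary two-sample t-test standing in for SAM's moderated statistic and an arbitrary p < 0.01 cut for list membership; the expression matrices are simulated:

```python
# Simulated two-class expression data; 8-vs-60 unbalanced design.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(8)
lpc = rng.normal(size=(1000, 8))      # less populated class (LPC)
mpc = rng.normal(size=(1000, 60))     # more populated class (MPC)
lpc[:25] += 1.5                       # 25 truly changed probes

score = np.zeros(1000)
for _ in range(1000):                 # 1,000 random subsamplings of the MPC
    cols = rng.choice(60, size=8, replace=False)
    _, p = ttest_ind(lpc, mpc[:, cols], axis=1)
    score += p < 0.01                 # probe enters this run's gene list

print("probes with score > 300:", int((score > 300).sum()))
```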

Relevance: 90.00%

Abstract:

Since the end of the 19th century, geodesy has contributed greatly to the knowledge of regional tectonics and fault movement through its ability to measure, at sub-centimetre precision, the relative positions of points on the Earth's surface. Nowadays, the systematic analysis of geodetic measurements in actively deforming regions therefore represents one of the most important tools in the study of crustal deformation over different temporal scales [e.g., Dixon, 1991]. This dissertation focuses on motion that can be observed geodetically with classical terrestrial position measurements, particularly triangulation and leveling observations. The work is divided into two sections: an overview of the principal methods for estimating the long-term accumulation of elastic strain from terrestrial observations, and an overview of the principal methods for rigorously inverting surface coseismic deformation fields for source geometry, with tests on synthetic deformation data sets and applications in two different tectonically active regions of the Italian peninsula. For the analysis of long-term accumulation of elastic strain, triangulation data were available from a geodetic network across the Messina Straits area (southern Italy) for the period 1971-2004. From the resulting angle changes, the shear strain rates as well as the orientation of the principal axes of the strain rate tensor were estimated. The computed average annual shear strain rates for the period between 1971 and 2004 are γ̇1 = 113.89 ± 54.96 nanostrain/yr and γ̇2 = -23.38 ± 48.71 nanostrain/yr, with the orientation of the most extensional strain (θ) at N140.80° ± 19.55°E. These results suggest that the first-order strain field of the area is dominated by extension in the direction perpendicular to the trend of the Straits, sustaining the hypothesis that the Messina Straits could represent an area of active concentrated deformation. The orientation of θ agrees well with GPS deformation estimates calculated over a shorter time interval, is consistent with previous preliminary GPS estimates [D'Agostino and Selvaggi, 2004; Serpelloni et al., 2005], and is also similar to the direction of the 1908 (MW 7.1) earthquake slip vector [e.g., Boschi et al., 1989; Valensise and Pantosti, 1992; Pino et al., 2000; Amoruso et al., 2002]. Thus, the measured strain rate can be attributed to active extension across the Messina Straits, corresponding to a relative extension rate ranging from < 1 mm/yr up to ~2 mm/yr within the portion of the Straits covered by the triangulation network. These results are consistent with the hypothesis that the Messina Straits are an important active geological boundary between the Sicilian and Calabrian domains, and support previous preliminary GPS-based estimates of strain rates across the Straits, which show that the active deformation is distributed over a greater area. Finally, preliminary dislocation modelling has shown that, although the current geodetic measurements do not resolve the geometry of the dislocation models, they resolve well the rate of interseismic strain accumulation across the Messina Straits and give useful information about the locking depth of the shear zone. Geodetic data, triangulation and leveling measurements of the 1976 Friuli (NE Italy) earthquake, were available for the inversion of coseismic source parameters.
From the observed angle and elevation changes, the source parameters of the seismic sequence were estimated in a joint inversion using an algorithm called simulated annealing (a toy sketch of such an inversion is given below). The computed optimal uniform-slip elastic dislocation model consists of a 30° north-dipping, shallow (depth 1.30 ± 0.75 km) fault plane with an azimuth of 273°, accommodating reverse dextral slip of about 1.8 m. The hypocentral location and inferred fault plane of the main event are consistent with the activation of Periadriatic overthrusts or other related thrust faults such as the Gemona-Kobarid thrust. The geodetic data set thus excludes the source solutions of Aoudia et al. [2000], Peruzza et al. [2002] and Poli et al. [2002], which consider the Susans-Tricesimo thrust as the source of the May 6 event. The best-fit source model is more consistent with the solution of Pondrelli et al. [2001], which proposed the activation of other thrusts located farther north of the Susans-Tricesimo thrust, probably on Periadriatic-related thrust faults. The main characteristics of the leveling and triangulation data are fit by the optimal single-fault model; that is, these results are consistent with a first-order rupture process characterized by the progressive rupture of a single fault system. A single uniform-slip fault model, however, does not seem to reproduce some minor complexities of the observations, and some residual signals not modelled by the optimal single-fault-plane solution were observed. In particular, the single-fault-plane model does not reproduce some minor features of the leveling deformation field along route 36 south of the main uplift peak; a second fault seems to be necessary to reproduce these residual signals. By assuming movements along some mapped thrusts located southward of the inferred optimal single-plane solution, the residual signal has been successfully modelled. In summary, the inversion results presented in this Thesis are consistent with the activation of Periadriatic-related thrusts for the main events of the sequence, and with a minor role of the southward thrust systems of the middle Tagliamento plain.
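A toy sketch of such a simulated-annealing inversion, with an ad hoc arctangent-style uplift profile standing in for a proper elastic dislocation (Okada-type) forward model; the observations, bounds, and parameterization are all invented:

```python
# Invented forward model and data; three parameters: depth, dip, slip.
import numpy as np
from scipy.optimize import dual_annealing

x = np.linspace(-20.0, 20.0, 60)      # km, benchmarks across strike

def forward(depth, dip_deg, slip):
    # ad hoc uplift profile standing in for an elastic dislocation model
    return slip * np.cos(np.radians(dip_deg)) * depth / (depth**2 + x**2)

observed = forward(1.3, 30.0, 1.8)
observed = observed + np.random.default_rng(9).normal(scale=0.005, size=x.size)

def misfit(params):
    return np.sum((observed - forward(*params)) ** 2)

result = dual_annealing(misfit, bounds=[(0.5, 10.0), (5.0, 80.0), (0.1, 5.0)], seed=0)
print("recovered depth, dip, slip:", result.x)
```

Simulated annealing is chosen in such problems because the misfit surface of dislocation inversions is non-convex, with trade-offs (e.g., depth against slip) that trap purely local optimizers.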

Relevance: 90.00%

Abstract:

This dissertation addresses the determination of the chemical and physical properties of aerosol particles in the Amazon Basin, measured during periods of biomass burning and under background conditions. The measurements were carried out during two campaigns within the European contribution to the LBA-EUSTACH experiment in Amazonia. The data comprise measurements of the number concentrations, size distributions, optical properties, elemental compositions and carbon content of the collected aerosols. The composition of the aerosol pointed to three sources: natural biogenic aerosol, mineral dust, and pyrogenic aerosol. All three components contributed significantly to the extinction of sunlight. Overall, the measured values increased roughly tenfold during the dry season compared with the wet season, attributable to a massive injection of submicrometre smoke particles into the atmosphere during the dry season. Accordingly, the single scattering albedo dropped from about 0.97 to 0.91. The refractive index of the aerosol particles was calculated with a new iterative method based on Mie theory, yielding average values of 1.42 - 0.006i for the wet season and 1.41 - 0.013i for the dry period. Further climatically relevant parameters for background aerosols and for biomass burning aerosols, respectively, were: asymmetry parameters of 0.63 ± 0.02 and 0.70 ± 0.03, and backscatter ratios of 0.12 ± 0.01 and 0.08 ± 0.01. These changes have the potential to influence the regional and global climate by altering the extinction of solar radiation as well as cloud properties.
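For reference, the single scattering albedo quoted above is conventionally defined from the scattering and absorption coefficients σ_sp and σ_ap (a standard definition, not specific to this work):

$$ \omega_0 = \frac{\sigma_{\mathrm{sp}}}{\sigma_{\mathrm{sp}} + \sigma_{\mathrm{ap}}}, $$

so the drop from about 0.97 to 0.91 reflects the relatively stronger absorption of the dry-season smoke aerosol.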

Relevance: 90.00%

Abstract:

This work dealt with the purification, heterologous expression, characterization, molecular analysis, mutation and crystallization of the enzyme vinorine synthase. The enzyme plays an important role in ajmaline biosynthesis, as it catalyses, in an acetyl-CoA-dependent reaction, the conversion of the sarpagan alkaloid 16-epi-vellosimine to vinorine, forming the ajmalan skeleton. After purification from hybrid cell cultures of Rauvolfia serpentina/Rhazya stricta using five chromatographic separation methods (anion-exchange chromatography on SOURCE 30Q, hydrophobic interaction chromatography on SOURCE 15PHE, chromatography on MacroPrep Ceramic Hydroxyapatite, anion-exchange chromatography on Mono Q, and size-exclusion chromatography on Superdex 75), vinorine synthase was enriched 991-fold from 2 kg of cell culture tissue. The SDS gel prepared after the purification allowed a clear assignment of the protein band as vinorine synthase. Digestion of the enzyme band with the endoproteinase LysC and subsequent sequencing of the cleavage peptides yielded four peptide sequences. A database comparison (SwissProt) showed no homology to sequences of known plant enzymes. Using degenerate primers, derived from one of the obtained peptide fragments and from a conserved region of known acetyltransferases, a first cDNA fragment of vinorine synthase was amplified. The nucleotide sequence was completed by RACE-PCR, leading to a full-length cDNA clone of 1263 bp encoding a protein of 421 amino acids (46 kDa). The vinorine synthase gene was ligated into the pQE2 expression vector, which encodes an N-terminal 6x His-tag. The enzyme was then for the first time successfully expressed in E. coli on the mg scale and purified to homogeneity. The successful overexpression allowed vinorine synthase to be characterized in detail. The KM value for the substrate gardneral was determined as 20 µM and 41.2 µM, respectively, with Vmax of 1 pkat and 1.71 pkat. After cleavage of the His-tag, the kinetic parameters were determined again (KM 7.5 µM and 27.52 µM; Vmax 0.7 pkat and 1.21 pkat, respectively). The co-substrate shows a KM value of 60.5 µM (Vmax 0.6 pkat). Vinorine synthase has a temperature optimum of 35 °C and a pH optimum of 7.8. Homology comparisons with other enzymes showed that vinorine synthase belongs to a still small family of, so far, 10 acetyltransferases. All enzymes of this family have an HxxxD and a DFGWG motif that are 100% conserved. Based on these homology comparisons and on inhibitor studies, 11 amino acids conserved in this protein family were exchanged for alanine in order to identify the residues of a catalytic triad (Ser/Cys-His-Asp) postulated in the literature. Mutation of all conserved serines and cysteines yielded no mutant with a complete loss of enzyme activity; only the mutations H160A and D164A resulted in a complete loss of activity. This result refutes the theory of a catalytic triad and shows that the amino acids H160 and D164 are exclusively involved in the catalytic reaction. To verify these results and to fully elucidate the reaction mechanism, vinorine synthase was crystallized.
The crystals obtained so far (crystal size in µm: x 150, y 200, z 200) belong to space group P212121 (primitive orthorhombic) and diffract to 3.3 Å. Since there is as yet no crystal structure of a protein homologous to vinorine synthase, the structure could not be fully solved. To solve the phase problem, the method of multiple-wavelength anomalous dispersion (MAD) is now being used in an attempt to determine the first crystal structure in this enzyme family.
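A sketch of how KM and Vmax values like those reported above are typically obtained, via a nonlinear least-squares fit of the Michaelis-Menten equation v = Vmax[S]/(KM + [S]); the rate data below are simulated, not the gardneral measurements:

```python
# Simulated initial-rate data; units follow the abstract (uM, pkat).
import numpy as np
from scipy.optimize import curve_fit

S = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # substrate, uM

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

v = michaelis_menten(S, 1.0, 20.0)
v = v * (1 + np.random.default_rng(10).normal(scale=0.05, size=S.size))

(vmax, km), _ = curve_fit(michaelis_menten, S, v, p0=[1.0, 10.0])
print("Vmax = %.2f pkat, KM = %.1f uM" % (vmax, km))
```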

Relevance: 90.00%

Abstract:

The Standard Model of elementary particle physics is superbly confirmed by experiment, but has unsatisfactory aspects on the theoretical side: on the one hand, the Higgs sector of the theory is inserted by hand, and on the other hand, the descriptions of the observed particle spectrum and of gravitation differ fundamentally. These two drawbacks disappear when the Standard Model is formulated in the language of noncommutative geometry. The aim here is to capture the spacetime of a physical theory through algebraic data. For example, the full information about a Riemannian spin manifold M is contained in the data set (A, H, D), called a spectral triple. Here A is the commutative algebra of differentiable functions on M, H is the Hilbert space of square-integrable spinors over M, and D is the Dirac operator. With such a triple (over a noncommutative algebra), both gravitation and the Standard Model can be captured with mathematically one and the same tool. In the present work, zero-dimensional spectral triples (which correspond to discrete spacetimes) are first classified, and a quantization of such objects is carried out in examples. One problem of spectral triples is their restriction to genuinely Riemannian metrics; approaches to solving this problem are presented. In the final chapter, the so-called 'Feynman proof of the Maxwell equations' is generalized to noncommutative configuration spaces.
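For context, the metric content of a spectral triple (A, H, D) can be made explicit through Connes' distance formula (a standard result of noncommutative geometry, not a claim specific to this dissertation):

$$ d(p, q) = \sup\,\{\, |f(p) - f(q)| \;:\; f \in A,\; \| [D, f] \| \le 1 \,\}, $$

which reproduces the geodesic distance on a Riemannian spin manifold M and, unlike the usual definition via paths, carries over to noncommutative algebras.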