Abstract:
This in vitro study compared different ultrasonic vibration modes for intraradicular cast post removal. The crowns of 24 maxillary canines were removed, the roots were embedded in acrylic resin blocks, and the canals were treated endodontically. The post holes were prepared and root canal impressions were taken with self-cured acrylic resin. After casting, the posts were cemented with zinc phosphate cement. The samples were randomly distributed into 3 groups (n=8): G1: no ultrasonic vibration (control); G2: tip of the ultrasonic device positioned perpendicularly to the core surface, close to the incisal edge; and G3: tip of the ultrasonic device positioned perpendicularly to the core surface at the cervical region, close to the cementation line. An Enac OE-5 ultrasound unit with an ST-09 tip was used. All samples were submitted to the tensile test in a universal testing machine at a crosshead speed of 1 mm/min. Data were subjected to one-way ANOVA and Tukey's post-hoc test (α=0.05). Mean values of the load to dislodge the posts (MPa) were: G1 = 4.6 (± 1.4) A; G2 = 2.8 (± 0.9) B; and G3 = 0.9 (± 0.3) C. Therefore, ultrasonic vibration applied with the tip of the device close to the cervical area of the core showed the greatest ability to reduce the retention of cast posts in the root canal.
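As an illustration of the statistical workflow reported above (one-way ANOVA followed by Tukey's post-hoc test at α = 0.05), a minimal Python sketch is given below; the group arrays are hypothetical placeholders for the measured dislodgement loads, not the study's data, and tukey_hsd requires SciPy 1.11 or later.

    # One-way ANOVA + Tukey HSD on three groups (alpha = 0.05).
    # Values are hypothetical placeholders, not the study's measurements.
    import numpy as np
    from scipy import stats

    g1 = np.array([4.2, 4.9, 6.1, 3.0, 4.5, 5.2, 3.8, 5.1])  # control, no ultrasound
    g2 = np.array([2.5, 3.1, 2.0, 3.9, 2.7, 2.2, 3.4, 2.6])  # tip near incisal edge
    g3 = np.array([0.8, 1.1, 0.6, 1.3, 0.7, 0.9, 1.0, 0.8])  # tip near cervical region

    f_stat, p_value = stats.f_oneway(g1, g2, g3)  # omnibus test across groups
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
    print(stats.tukey_hsd(g1, g2, g3))            # pairwise comparisons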
Abstract:
The aim of this study was to evaluate the influence of the microstructure and composition of basic alloys on their microshear bond strength (µSBS) to a resin luting cement. The alloys used were: Supreme Cast-V (SC), Tilite Star (TS), Wiron 99 (W9), VeraBond II (VBII), VeraBond (VB), Remanium (RM) and IPS d.SIGN 30 (IPS). Five wax patterns (13 mm in diameter and 4 mm in height) were invested and cast in a centrifugal casting machine for each basic alloy. The specimens were embedded in resin, polished with SiC paper and sandblasted. After cleaning the metal surfaces, six Tygon tubes (0.5 mm in height and 0.75 mm in diameter) were placed on each alloy surface, the resin cement (Panavia F) was inserted, and the excess was removed before light-curing. After storage (24 h/37°C), the specimens were subjected to µSBS testing (0.5 mm/min). The data were subjected to one-way repeated measures analysis of variance and Tukey's test (α=0.05). After polishing, the alloy microstructures were revealed with specific conditioners. The highest µSBS values (mean/standard deviation in MPa) were observed in the alloys with a dendritic structure, eutectic formation or precipitation: VB (30.6/1.7), TS (29.8/0.9) and SC (30.6/1.7), with the exception of IPS (31.1/0.9), which showed high µSBS but no eutectic formation. W9 (28.1/1.5), VBII (25.9/2.0) and RM (25.9/0.9) showed the lowest µSBS and no eutectic formation. It seems that alloys with eutectic formation provide the highest µSBS values when bonded to a light-cured resin luting cement.
Abstract:
This study evaluated the effectiveness of different sealants applied to a nanofilled composite resin. Forty specimens of Filtek Z-350 were obtained by inserting the material into a 6 x 3 mm stainless steel mold followed by light activation for 20 s. The groups were divided (n=10) according to the surface treatment applied: control group (no surface treatment), Fortify, Fortify Plus and Biscover LV. The specimens were subjected to simulated toothbrushing using a 200 g load at 250 strokes/min to simulate 1 week, 1, 3 and 6 months, and 1 and 3 years in the mouth, with 10,000 cycles considered equivalent to 1 year of toothbrushing. Oral-B soft-bristle-tip toothbrush heads and Colgate Total dentifrice at a 1:2 water dilution were used. After each simulated time, surface roughness was assessed in random triplicate readings. The data were submitted to two-way ANOVA and Tukey's test at a 95% confidence level. The specimens were observed under scanning electron microscopy (SEM) after each toothbrushing cycle. The control group was not significantly different (p>0.05) from the other groups, except for Fortify Plus (p<0.05), which was rougher. No significant differences (p>0.05) were observed at the 1-month assessment between the experimental and control groups. Fortify and Fortify Plus presented a rougher surface over time, differing from the baseline (p<0.05). Biscover LV did not differ (p>0.05) from the baseline at any time. None of the experimental groups showed a significantly better performance (p>0.05) than the control group at any time. SEM confirmed the differences found during the roughness testing. Surface-penetrating sealants did not improve the roughness of the nanofilled composite resin.
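Using the stated equivalence of 10,000 brushing cycles per simulated year, the cycle count for each simulated interval follows by simple proportion; a short sketch (the interval-to-year conversions are ordinary calendar arithmetic, not figures from the study):

    # Cycle counts implied by the stated equivalence: 10,000 cycles ~ 1 year.
    CYCLES_PER_YEAR = 10_000
    intervals_in_years = {"1 week": 1 / 52, "1 month": 1 / 12, "3 months": 0.25,
                          "6 months": 0.5, "1 year": 1.0, "3 years": 3.0}
    cycles = {k: round(v * CYCLES_PER_YEAR) for k, v in intervals_in_years.items()}
    print(cycles)  # {'1 week': 192, '1 month': 833, ..., '3 years': 30000}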
Abstract:
OBJECTIVE: The frequent occurrence of inconclusive serology in blood banks and the absence of a gold standard test for Chagas' disease led us to examine the efficacy of the blood culture test and five commercial tests (ELISA, IIF, HAI, c-ELISA, rec-ELISA) used in screening blood donors for Chagas disease, as well as to investigate the prevalence of Trypanosoma cruzi infection among donors with inconclusive serology screening with respect to some epidemiological variables. METHODS: To obtain the estimates of interest we considered a Bayesian latent class model with covariates included through a logit link. RESULTS: A better performance was observed for some categories of the epidemiological variables. In addition, all pairs of tests (excluding the blood culture test) proved to be good alternatives both for screening (sensitivity > 99.96% in parallel testing) and for confirmation (specificity > 99.93% in serial testing) of Chagas disease. The prevalence of 13.30% observed in the stratum of donors with inconclusive serology means that most of these donors probably have non-reactive serology. In addition, depending on the level of specific epidemiological variables, the absence of infection can be predicted in this group with a probability of 100% from pairs of tests used in parallel. CONCLUSION: The epidemiological variables can lead to improved test results and thus assist in the clarification of inconclusive serology screening results. Moreover, all pairwise combinations of the five commercial tests are good alternatives for confirming results.
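As a sketch of the model class named in the Methods (a Bayesian latent class model with covariates entering through a logit link), one common formulation is shown below; the notation is illustrative and not necessarily the paper's:

    % Illustrative latent class model for diagnostic tests with covariates.
    % D_i: latent infection status of donor i; x_i: covariates;
    % S_j, C_j: sensitivity and specificity of test j (each given priors).
    D_i \sim \mathrm{Bernoulli}(\pi_i), \qquad
        \operatorname{logit}(\pi_i) = \mathbf{x}_i^{\top}\boldsymbol{\beta}
    T_{ij} \mid D_i = 1 \sim \mathrm{Bernoulli}(S_j), \qquad
        T_{ij} \mid D_i = 0 \sim \mathrm{Bernoulli}(1 - C_j)

Under this structure, sensitivities and specificities are estimated without a gold standard, because the true status D_i is treated as a latent variable informed jointly by all test results and the covariates.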
Abstract:
OBJECTIVE: Enterobacteriaceae harboring Klebsiella pneumoniae carbapenemase are a serious worldwide threat. The molecular identification of these pathogens is not routine in Brazilian hospitals, and a rapid phenotypic screening test is desirable. This study aimed to evaluate the modified Hodge test as a phenotypic screening test for Klebsiella pneumoniae carbapenemase. METHOD: From April 2009 to July 2011, all Enterobacteriaceae isolates that were not susceptible to ertapenem according to Vitek2 analysis were analyzed with the modified Hodge test. All positive isolates and a random subset of negative isolates were also assayed for the presence of blaKPC. Isolates that were positive in the modified Hodge test were sub-classified as true positives (growth of E. coli reached the ertapenem disk) or inconclusive (distortion of the inhibition zone of E. coli, but growth did not reach the ertapenem disk). Negative results were defined as samples with no distortion of the inhibition zone around the ertapenem disk. RESULTS: Among the 1521 Enterobacteriaceae isolates that were not susceptible to ertapenem, 30% were positive for blaKPC and 35% were positive according to the modified Hodge test (81% specificity). Under the proposed sub-classification, true positives showed 98% agreement with the blaKPC results. The negative predictive value of the modified Hodge test was 100%. KPC producers showed high antimicrobial resistance rates, but 90% and 77% of these isolates were susceptible to aminoglycosides and tigecycline, respectively. CONCLUSION: Standardizing the interpretation of the modified Hodge test may improve the specificity of KPC detection. In this study, negative test results ruled out 100% of the isolates harboring Klebsiella pneumoniae carbapenemase 2. The test may therefore be regarded as a good epidemiological tool.
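To make the reported screening metrics concrete, the sketch below shows how specificity and negative predictive value are derived from a 2x2 table against the blaKPC reference; the counts are hypothetical placeholders, not those of the study.

    # Diagnostic metrics from a 2x2 table; counts are hypothetical placeholders.
    tp, fp, fn, tn = 450, 90, 0, 980   # modified Hodge test vs. blaKPC reference

    sensitivity = tp / (tp + fn)       # share of blaKPC-positive isolates detected
    specificity = tn / (tn + fp)       # share of blaKPC-negative isolates correctly negative
    npv = tn / (tn + fn)               # probability a negative result rules out blaKPC
    print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, NPV={npv:.1%}")

With fn = 0, the NPV is 100% regardless of the other counts, which is the property the abstract exploits when using negative results to rule out carbapenemase producers.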
Abstract:
This paper is part of an extensive work on the technological development, experimental analysis and numerical modeling of steel fibre reinforced concrete pipes. The first part ("Steel fibre reinforced concrete pipes. Part 1: technological analysis of the mechanical behavior") dealt with the technological development of the experimental campaign, the test procedure and the discussion of the structural behavior obtained for each of the fibre dosages used. This second part deals with the numerical modeling. In this respect, a numerical model called MAP, which simulates the behavior of fibre reinforced concrete pipes of small-to-medium diameter, is introduced, and the bases of the numerical model are described. Subsequently, the experimental results are contrasted with those produced by the numerical model, obtaining excellent correlations. It can be concluded that the numerical model is a useful tool for the design of this type of pipe, which represents an important step toward establishing structural fibres as reinforcement for concrete pipes. Finally, the design of the optimal amount of fibres for a pipe with a diameter of 400 mm is presented as an illustrative example of strategic interest.
Abstract:
Despite favourable gravitational instability and ridge-push, elastic and frictional forces prevent subduction initiation from arising spontaneously at passive margins. Here, we argue that forces arising from large continental topographic gradients are required to initiate subduction at passive margins. In order to test this hypothesis, we use 2D numerical models to assess the influence of the Andean Plateau on stress magnitudes and deformation patterns at the Brazilian passive margin. The numerical results indicate that "plateau-push" in this region is a necessary additional force to initiate subduction. As the SE Brazilian margin currently shows no signs of self-sustained subduction, we examined geological and geophysical data to determine if the margin is in the preliminary stages of subduction initiation. The compiled data indicate that the margin is presently undergoing tectonic inversion, which we infer to be part of the continental–oceanic overthrusting stage of subduction initiation. We refer to this early subduction stage as the "Brazilian Stage", which is characterized by >10 km deep reverse fault seismicity at the margin, recent topographic uplift on the continental side, thick continental crust at the margin, and bulging on the oceanic side due to loading by the overthrusting continent. The combined results of the numerical simulations and passive margin analysis indicate that the SE Brazilian margin is a prototype candidate for subduction initiation.
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spread noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which a real closeness among the hypocenters, belonging to the same seismic structure, can be supposed. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times were removed or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of cross-correlation, it should be remarked that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly under bad SNR conditions). Another remarkable point of our procedure is that its application does not require long processing times, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing immediate optimization of the array geometry if so suggested by the results at an early stage.
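Since both techniques rest on cross-correlating digital waveforms to refine relative timing, a minimal sketch of that core step is given below; the synthetic pulse, noise level and sampling rate are assumptions for illustration, not parameters from the thesis.

    # Relative arrival-time estimation by waveform cross-correlation.
    # The synthetic signals and sampling rate are illustrative assumptions.
    import numpy as np
    from scipy import signal

    fs = 100.0                                   # sampling rate (Hz), assumed
    n = 1000
    tt = np.arange(0, 1, 1 / fs)
    pulse = np.sin(2 * np.pi * 5 * tt) * np.exp(-5 * tt)      # short damped pulse

    a = np.zeros(n); a[300:300 + pulse.size] += pulse         # arrival at 3.00 s
    b = np.zeros(n); b[327:327 + pulse.size] += pulse         # arrival at 3.27 s
    b += 0.05 * np.random.default_rng(0).standard_normal(n)   # measurement noise

    xcorr = signal.correlate(b, a, mode="full")
    lag = (np.argmax(xcorr) - (n - 1)) / fs      # delay of b relative to a
    print(f"estimated delay: {lag:.2f} s")       # ~0.27 s

The same idea underlies both applications: between event pairs at one station at the global scale, and between sensors of the tripartite array at the local scale, where interpolation around the correlation maximum further improves the time resolution.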
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collecting disease counts and calculating expected disease counts by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is asked to serve. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than by means of the single observation alone. This improves test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding posterior null probabilities. The estimated FDR provides an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still providing an estimate of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimate in sets formed by all areas whose posterior null probability is below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the estimated-FDR-based decision rules is generally low but the specificity is high; in this setting, selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and a decision rule based on an estimated FDR of 0.15 gains power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
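As a sketch of the selection rule described above (average the posterior null probabilities over the candidate set and keep the largest set whose average stays within a prefixed level), a minimal version with hypothetical posterior probabilities in place of MCMC output is:

    # Estimated-FDR selection rule. post_null[i] is the posterior probability
    # of the null (absence of risk) in area i; values here are hypothetical
    # placeholders for MCMC output.
    import numpy as np

    post_null = np.array([0.01, 0.03, 0.20, 0.04, 0.60, 0.02, 0.35, 0.08])
    alpha = 0.05                                  # prefixed estimated-FDR level

    order = np.argsort(post_null)                 # most suspicious areas first
    running_fdr = np.cumsum(post_null[order]) / np.arange(1, post_null.size + 1)
    k = int(np.sum(running_fdr <= alpha))         # size of largest admissible set

    print("high-risk areas:", order[:k])          # estimated FDR of this set <= alpha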
Abstract:
The Székesfehérvár Ruin Garden is a unique assemblage of monuments belonging to the cultural heritage of Hungary, owing to its important role in the Middle Ages as the coronation and burial church of the kings of the Hungarian Christian Kingdom. It has been nominated as a "National Monument" and, as a consequence, its protection now and in the future is required. Moreover, it was reconstructed and expanded several times throughout Hungarian history. A quick overview of the current state of the monument reveals the presence of several lithotypes among the remaining building and decorative stones. Therefore, research related to the materials is crucial not only for the conservation of this specific monument but also for other historic structures in Central Europe. The current research is divided into three main parts: i) description of the lithologies and their provenance; ii) testing of the physical properties of the historic material; and iii) durability tests of analogous stones obtained from active quarries. The survey of the National Monument of Székesfehérvár focuses on the historical importance and architecture of the monument, the different construction periods, and the identification of the different building stones and their distribution in the remaining parts of the monument; it also includes provenance analyses. The second part consisted of in situ and laboratory testing of the physical properties of the historic material. As a final phase, samples were taken from local quarries with physical and mineralogical characteristics similar to those of the stones used in the monument. The three studied lithologies are: a fine oolitic limestone, a coarse oolitic limestone and a red compact limestone. These stones were used for rock mechanical and durability tests under laboratory conditions. The following techniques were used: a) in situ: Schmidt hammer values, moisture content measurements, DRMS, mapping (construction ages, lithotypes, weathering forms); b) laboratory: petrographic analysis, XRD, determination of real density by means of a helium pycnometer and bulk density by means of a mercury pycnometer, pore size distribution by mercury intrusion porosimetry and by nitrogen adsorption, water absorption, determination of open porosity, DRMS, frost resistance, ultrasonic pulse velocity testing, uniaxial compressive strength testing and dynamic modulus of elasticity. The results show that the initial uniaxial compressive strength is not necessarily a clear indicator of stone durability. Bedding and other lithological heterogeneities can influence the strength and durability of individual specimens. In addition, long-term behaviour is influenced by the exposure conditions, the fabric and, especially, the pore size distribution of each sample. Therefore, a statistical evaluation of the results is highly recommended, and the results should be evaluated in combination with other investigations of the internal structure and micro-scale heterogeneities of the material, such as petrographic observation, ultrasonic pulse velocity and porosimetry. Laboratory tests used to estimate the durability of natural stone may give good guidance as to its short-term performance, but they should not be taken as an ultimate indication of the stone's long-term behaviour. The interdisciplinary study of the results confirms that the stones in the monument show deterioration in terms of mineralogy, fabric and physical properties in comparison with the quarried stones. Moreover, the stone testing proves compatibility between the quarried and the historical stones.
Good correlation is observed between the non-destructive techniques and the laboratory test results, which allows us to minimize sampling when assessing the condition of the materials. In conclusion, this research can contribute diagnostic knowledge for the further studies that are needed to evaluate the effect of recent and future protective measures.
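For reference, the relation connecting two of the listed laboratory quantities, ultrasonic (P-wave) pulse velocity and the dynamic modulus of elasticity, is the standard elastodynamic formula below; it is a textbook relation, not one quoted from the thesis:

    % E_d: dynamic modulus; \rho: bulk density; v_p: P-wave pulse velocity;
    % \nu_d: dynamic Poisson's ratio (standard relation, not from the thesis).
    E_d = \rho\, v_p^{2}\, \frac{(1 + \nu_d)(1 - 2\nu_d)}{1 - \nu_d}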
Abstract:
The work undertaken in this PhD thesis is aimed at the development and testing of an innovative methodology for assessing the vulnerability of coastal areas to catastrophic marine inundation (tsunami). Different approaches are used at different spatial scales and are applied to three different study areas: 1) the entire western coast of Thailand; 2) two selected coastal suburbs of Sydney, Australia; and 3) the Aeolian Islands, in the South Tyrrhenian Sea, Italy. I discuss each of these case studies in at least one scientific paper: one paper on the Thailand case study (Dall’Osso et al., in review-b), three papers on the Sydney applications (Dall’Osso et al., 2009a; Dall’Osso et al., 2009b; Dall’Osso and Dominey-Howes, in review) and one last paper on the work at the Aeolian Islands (Dall’Osso et al., in review-a). These publications represent the core of the present PhD thesis. The main topics dealt with are outlined and discussed in a general introduction, while the overall conclusions are outlined in the last section.
Abstract:
ABSTRACT: This work's aim was to test whether LTP-like features can also be measured in cell culture, using methods that allow a larger number of cells to be analysed. A suitable method for this purpose is calcium imaging. The rationale for this approach lies in the fact that LTP/LTD are dependent on changes in intracellular calcium concentrations. Calcium levels were measured using the calcium-sensitive dye fura-2, whose fluorescence spectrum changes upon formation of the [fura-2-Ca2+] complex. Our LTP-inducing protocol comprised two glutamate stimuli of identical size and duration (50 mM, 30 s), separated by 35 min. We could demonstrate that such a stimulation pattern gives rise to an approximately 25% larger calcium influx at the second stimulus, i.e. an average 25% augmentation (potentiation) of the second response, with 69% of cells potentiated. This experimental paradigm shows the pharmacological properties of LTP established by previous electrophysiological studies: blocking of NMDARs and mGluRs eliminates LTP induction, whereas blocking of AMPARs and L-type VGCCs does not. Having established a system for inducing and following LTP-like changes, a preliminary application example was performed. Its purpose was to investigate the possible influence of nicotine and galanthamine on our potentiation effect. Nicotine (100 mM) was shown both to increase and to eliminate glutamate-induced potentiation. Galanthamine co-application (0.5 mM) with nicotine and glutamate exerted no effect on this nicotinic modulation. However, galanthamine co-applied with glutamate alone seemed to augment glutamate-induced potentiation. The LTP model system presented here could be further refined by varying the glutamate application times and by testing for dependence on various forms of protein kinases. The galanthamine effect would probably be better addressed by cell-by-cell measurements instead of a statistical approach, with subsequent identification of the cell type. Alternatively, combined calcium imaging and electrophysiological experiments could be performed. The spatial and temporal properties of intracellular ion dynamics could be utilised as diagnostic tools for the physiological state of the cells, thereby finding application in functional proteomics.
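For context, the conversion of fura-2 fluorescence ratios to calcium concentrations conventionally follows the Grynkiewicz calibration; the relation below is the standard one for ratiometric dyes, not a formula quoted from the thesis:

    % Grynkiewicz et al. (1985) calibration for fura-2. R: 340/380 nm excitation
    % ratio; R_min, R_max: ratios at zero and saturating Ca2+; K_d: dissociation
    % constant of the fura-2-Ca2+ complex; S_f2/S_b2: free/bound dye signal at 380 nm.
    [\mathrm{Ca}^{2+}] = K_d \cdot \frac{R - R_{\min}}{R_{\max} - R} \cdot \frac{S_{f2}}{S_{b2}}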
Abstract:
In the present work, the development and testing of a novel interferometer with two spatially separated, phase-correlated X-ray sources for measuring the real part of the complex refractive index of thin, free-standing foils are described. The X-ray sources are two foils in which relativistic electrons with an energy of 855 MeV generate transition radiation. The interferometer, realized at the Mainz Microtron MAMI, consists of a beryllium foil 10 micrometers thick and a nickel sample foil 2.1 micrometers thick. The spatial interference structures are measured as a function of the foil separation in a position-resolving pn-CCD after Fourier analysis of the radiation pulse by means of a silicon single-crystal spectrometer. The phase of the intensity oscillations contains information on the dispersion that the wave generated in the upstream foil experiences in the downstream sample foil. As a case study, the dispersion of nickel was measured in the region around the K absorption edge at 8333 eV, as well as at photon energies around 9930 eV. At both energies clear interference structures were detected, although the coherence decreases with increasing foil separation and observation angle because of angular mixing. Fits of simulation calculations that take the coherence-reducing effects into account were performed on the measured data. From these fits, the dispersion of the nickel sample could be determined at both investigated energies with a relative accuracy of 1.5% or better, in good agreement with the literature.
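The statement that the phase of the intensity oscillations encodes the dispersion in the sample foil can be made explicit with the usual X-ray convention for the refractive index; the relation below is the standard one, not a formula quoted from the thesis:

    % Standard X-ray convention: n via the dispersion decrement \delta and the
    % absorption index \beta; a foil of thickness d shifts the phase of a
    % traversing wave of wavelength \lambda by \Delta\varphi.
    n = 1 - \delta + i\beta, \qquad \Delta\varphi = \frac{2\pi}{\lambda}\, \delta\, d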
Abstract:
Establishment of expression systems for genes of indole alkaloid biosynthesis, with particular attention to cytochrome P450 enzymes. In the present work, enzymes from the medicinal plant Rauvolfia serpentina were studied. An attempt was made to express heterologously and functionally the cytochrome P450 enzyme vinorine hydroxylase, which is involved in the biosynthesis of the alkaloid ajmaline. An initially incomplete, unknown cytochrome P450 clone was completed and unambiguously identified as a cinnamoyl hydroxylase by heterologous expression in Sf9 insect cells. The insect cell system is unsuitable for investigating vinorine hydroxylase because of the deacetylating effect of the insect cells on the substrate vinorine. Within the homology cloning project, several full-length clones and various partial sequences of new cytochrome P450 clones were identified. In addition, through nonspecific binding of a degenerate primer, an additional clone was found that could be assigned to the group of soluble reductases. This putative reductase was tested, without success, for the activity of several key enzymes of ajmaline biosynthesis by heterologous expression in E. coli and subsequent HPLC-based activity assays. Because the insect cell system was unsuitable for identifying vinorine hydroxylase, a novel module-based plant expression system was established in order to test the available full-length P450 clones for vinorine hydroxylase activity. The functionality of the system was confirmed by the heterologous expression of polyneuridine aldehyde esterase. Nevertheless, it has so far not been possible to express in active form either the cinnamoyl hydroxylase, as a control enzyme for the plant system, or the sought-after vinorine hydroxylase.