934 results for Real Root Isolation Methods


Relevance: 30.00%

Abstract:

Introduction Toxoplasmosis may be life-threatening in fetuses and in immune-deficient patients. Conventional laboratory diagnosis of toxoplasmosis is based on the presence of IgM and IgG anti-Toxoplasma gondii antibodies; however, molecular techniques have emerged as alternative tools due to their increased sensitivity. The aim of this study was to compare the performance of 4 PCR-based methods for the laboratory diagnosis of toxoplasmosis. One hundred pregnant women who seroconverted during pregnancy were included in the study. The definition of cases was based on a 12-month follow-up of the infants. Methods Amniotic fluid samples were submitted to DNA extraction and amplification by the following 4 Toxoplasma techniques performed with parasite B1 gene primers: conventional PCR, nested-PCR, multiplex-nested-PCR, and real-time PCR. Seven parameters were analyzed: sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), negative likelihood ratio (NLR) and efficiency (Ef). Results Fifty-nine of the 100 infants had toxoplasmosis; 42 (71.2%) had IgM antibodies at birth but were asymptomatic, and the remaining 17 cases had non-detectable IgM antibodies but high IgG antibody titers that were associated with retinochoroiditis in 8 (13.5%) cases, abnormal cranial ultrasound in 5 (8.5%) cases, and signs/symptoms suggestive of infection in 4 (6.8%) cases. The conventional PCR assay detected 50 cases (9 false-negatives), nested-PCR detected 58 cases (1 false-negative and 4 false-positives), multiplex-nested-PCR detected 57 cases (2 false-negatives), and real-time PCR detected 58 cases (1 false-negative). Conclusions The real-time PCR assay was the best-performing technique based on the parameters of Se (98.3%), Sp (100%), PPV (100%), NPV (97.6%), PLR (∞), NLR (0.017), and Ef (99%).
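The seven parameters follow directly from a 2x2 confusion matrix; as a check, a short sketch reproduces the reported real-time PCR figures (counts taken from the abstract: 59 infected infants with 1 false negative, 41 uninfected with no false positives):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy parameters from a 2x2 confusion matrix."""
    se = tp / (tp + fn)                                 # sensitivity
    sp = tn / (tn + fp)                                 # specificity
    ppv = tp / (tp + fp)                                # positive predictive value
    npv = tn / (tn + fn)                                # negative predictive value
    plr = se / (1 - sp) if sp < 1 else float("inf")     # positive likelihood ratio
    nlr = (1 - se) / sp                                 # negative likelihood ratio
    ef = (tp + tn) / (tp + fp + fn + tn)                # efficiency (overall accuracy)
    return se, sp, ppv, npv, plr, nlr, ef

# Real-time PCR in this study: 58 true positives, 1 false negative,
# 41 true negatives, no false positives.
se, sp, ppv, npv, plr, nlr, ef = diagnostic_metrics(tp=58, fp=0, fn=1, tn=41)
print(f"Se={se:.1%} Sp={sp:.1%} PPV={ppv:.1%} NPV={npv:.1%} NLR={nlr:.3f} Ef={ef:.0%}")
```

The printed values match the abstract's Se 98.3%, Sp 100%, PPV 100%, NPV 97.6%, NLR 0.017 and Ef 99%, with PLR infinite because no false positives occurred.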

Relevance: 30.00%

Abstract:

Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of tree kernels. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing its sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
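As a point of reference for the kernels discussed above, here is a minimal sketch of the plain subtree kernel, which counts pairs of identical complete subtrees (the subset tree kernel would additionally count partial fragments; the tuple encoding and node labels are illustrative, not the thesis's representation):

```python
from collections import Counter

def subtrees(t):
    """Yield a canonical string for every complete subtree of t,
    where t is a nested tuple (label, child1, child2, ...).
    The last string yielded is the representation of t itself."""
    label, *children = t
    reprs = []
    for c in children:
        rep = None
        for rep in subtrees(c):
            yield rep
        reprs.append(rep)          # final yield of subtrees(c) is c's own repr
    yield f"{label}({','.join(reprs)})" if reprs else str(label)

def subtree_kernel(t1, t2):
    """Count pairs of identical complete subtrees in the two trees."""
    c1, c2 = Counter(subtrees(t1)), Counter(subtrees(t2))
    return sum(n * c2[s] for s, n in c1.items())

t1 = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',)))
t2 = ('S', ('NP', ('D',), ('N',)), ('VP', ('V',), ('NP', ('D',), ('N',))))
print(subtree_kernel(t1, t2))  # shared: D, N, NP(D,N) twice each, plus V -> 7
```

With a large label domain, most subtree counters share no keys, which is exactly the sparsity problem the thesis describes.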

Relevance: 30.00%

Abstract:

During recent years a considerable number of central nervous system (CNS) drugs have been approved and introduced on the market for the treatment of many psychiatric and neurological disorders, including psychosis, depression, Parkinson's disease and epilepsy. Despite the great advancements obtained in the treatment of CNS diseases/disorders, partial response to therapy or treatment failure is frequent, at least in part due to poor compliance, but also to genetic variability in the metabolism of psychotropic agents or to polypharmacy, which may lead to sub-therapeutic or toxic plasma levels of the drugs, and finally to inefficacy of the treatment or adverse/toxic effects. With the aim of improving treatment and reducing toxic/side effects and patient hospitalisation, Therapeutic Drug Monitoring (TDM) is certainly useful, allowing for a personalisation of the therapy. Reliable analytical methods are required to determine the plasma levels of psychotropic drugs, which are often present at low concentrations (tens or hundreds of nanograms per millilitre). The present PhD thesis has focused on the development of analytical methods for the determination of CNS drugs in biological fluids, including antidepressants (sertraline and duloxetine), antipsychotics (aripiprazole), antiepileptics (vigabatrin and topiramate) and antiparkinsonian agents (pramipexole). Innovative methods based on liquid chromatography or capillary electrophoresis coupled to diode-array or laser-induced fluorescence detectors have been developed, together with suitable sample pre-treatment for interference removal and fluorescent labelling in the case of LIF detection. All methods have been validated according to official guidelines and applied to the analysis of real samples obtained from patients, proving suitable for the TDM of psychotropic drugs.

Relevance: 30.00%

Abstract:

The aim of this work was to establish the functional significance of the Drosophila melanogaster tumor suppressor gene lethal(2)tumorous imaginal discs (l(2)tid) by identifying molecular partners of the proteins encoded by the gene. By screening an expression library with the yeast two-hybrid system, the protein Patched (Ptc) was identified as a new Tid-binding protein. Ptc is a central regulator of the Hedgehog signalling pathway, which is conserved in development and implicated in several human cancers. The Tid/Ptc interaction was verified by independent biochemical methods such as the GST pull-down assay and immunoprecipitation. In addition, functional studies in tumorous imaginal discs revealed a possible inhibitory effect of Tid on Hh signal transduction. In the last part of this work, the interaction between Tid and the E-APC protein (Adenomatous polyposis coli) was demonstrated. Polakis and his group had shown, through yeast two-hybrid and in vitro studies, that hTid interacts with the APC protein. To verify this in Drosophila as well, immunoprecipitation studies were carried out with the Drosophila counterparts. These studies show for the first time a direct interaction of the two proteins in vivo.

Relevance: 30.00%

Abstract:

The subject of this PhD research thesis is the development and application of multiplexed analytical methods based on bioluminescent whole-cell biosensors. One of the main goals of analytical chemistry is multianalyte testing, in which two or more analytes are measured simultaneously in a single assay. The advantages of multianalyte testing are work simplification, high throughput, and reduction in the overall cost per test. The availability of multiplexed portable analytical systems is of particular interest for on-field analysis of clinical, environmental or food samples as well as for the drug discovery process. To allow highly sensitive and selective analysis, these devices should combine biospecific molecular recognition with ultrasensitive detection systems. To address the current need for rapid, highly sensitive and inexpensive devices for obtaining more data from each sample, genetically engineered whole-cell biosensors as biospecific recognition elements were combined with ultrasensitive bioluminescence detection techniques. Genetically engineered cell-based sensing systems were obtained by introducing into bacterial, yeast or mammalian cells a vector expressing a reporter protein whose expression is controlled by regulatory proteins and promoter sequences. The regulatory protein is able to recognize the presence of the analyte (e.g., compounds with hormone-like activity, heavy metals…) and to consequently activate the expression of the reporter protein, which can be readily measured and directly related to the bioavailable concentration of the analyte in the sample. Bioluminescence represents the ideal detection principle for miniaturized analytical devices and multiplexed assays thanks to its high detectability in small sample volumes, allowing accurate signal localization and quantification.
The first chapter of this dissertation discusses the development of improved bioluminescent proteins emitting at different wavelengths, in terms of increased thermostability, enhanced emission decay kinetics and spectral resolution. The second chapter is mainly focused on the use of these proteins in the development of whole-cell-based assays with improved analytical performance. In particular, since the main drawback of whole-cell biosensors is the high variability of their analyte-specific response, mainly caused by variations in cell viability due to nonspecific effects of the sample matrix, an additional bioluminescent reporter was introduced to correct the analytical response, thus increasing the robustness of the bioassays. The feasibility of using a combination of two or more bioluminescent proteins for obtaining biosensors with internal signal correction, or for the simultaneous detection of multiple analytes, has been demonstrated by developing a dual-reporter yeast-based biosensor for androgenic activity measurement and a triple-reporter mammalian cell-based biosensor for the simultaneous monitoring of the activation of two CYP450 enzymes involved in cholesterol degradation, with the use of two spectrally resolved intracellular luciferases and a secreted luciferase as a control for cell viability. The third chapter presents the development of a portable multianalyte detection system. In order to develop a portable system that can be used outside the laboratory environment, even by non-skilled personnel, cells were immobilized in a new biocompatible and transparent polymeric matrix within a modified clear-bottom black 384-well microtiter plate to obtain a bioluminescent cell array. The cell array was placed in contact with a portable charge-coupled device (CCD) light sensor able to localize and quantify the luminescent signal produced by the different bioluminescent whole-cell biosensors.
This multiplexed biosensing platform containing whole-cell biosensors was successfully used to measure the overall toxicity of a given sample, as well as to obtain dose-response curves for heavy metals and to detect hormonal activity in clinical samples (PCT/IB2010/050625: "Portable device based on immobilized cells for the detection of analytes." Michelini E, Roda A, Dolci LS, Mezzanotte L, Cevenini L, 2010). At the end of the dissertation some future development steps are also discussed, with the aim of obtaining a point-of-care testing (POCT) device that combines portability, minimal sample pre-treatment and highly sensitive multiplexed assays in a short assay time. In this POCT perspective, field-flow fractionation (FFF) techniques, in particular the gravitational variant (GrFFF), which exploits the Earth's gravitational field to achieve the separation, have been investigated for cell fractionation, characterization and isolation. Thanks to the simplicity of its equipment, amenable to miniaturization, the GrFFF technique appears to be particularly suited for implementation in POCT devices and may be used as an integrated pre-analytical module, applied directly to raw samples to drive the target analytes to the modules where biospecific recognition reactions based on ultrasensitive bioluminescence detection occur, providing an increase in the overall analytical output.
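The internal signal correction described above amounts to normalizing the analyte-specific signal by the constitutive viability reporter; a minimal sketch (the relative-light-unit values are illustrative, not the thesis's calibration data):

```python
def corrected_response(analyte_rlu, viability_rlu, blank_viability_rlu):
    """Scale the analyte-specific luminescence by the ratio of the sample's
    viability-reporter signal to that of a blank, so that matrix toxicity
    (fewer live cells) is not misread as a lower analyte response."""
    return analyte_rlu / (viability_rlu / blank_viability_rlu)

# A toxic matrix that halves cell viability halves both signals;
# the corrected response recovers the underlying value.
print(corrected_response(500.0, 1000.0, 2000.0))
```

Without the correction, the sample above would appear half as active purely because of matrix toxicity.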

Relevance: 30.00%

Abstract:

This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part motivated by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
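To make the underlying problem concrete, the core constraints (precedence plus a cumulative resource) can be sketched with a greedy list scheduler; this is only a heuristic illustration of the problem the thesis addresses, not its exact hybrid CP/OR algorithms, and the task data are invented:

```python
def list_schedule(tasks, preds, capacity):
    """Greedy earliest-start scheduling under a single cumulative resource.
    tasks: {name: (duration, demand)}, listed in a precedence-consistent order;
    preds: {name: [predecessor names]}. Returns {name: start_time}."""
    start, end, usage = {}, {}, {}          # usage[t] = demand active at time t
    for name, (dur, dem) in tasks.items():
        # earliest start allowed by precedence constraints
        t = max((end[p] for p in preds.get(name, [])), default=0)
        # shift right until the resource has room for the whole duration
        while any(usage.get(t + k, 0) + dem > capacity for k in range(dur)):
            t += 1
        start[name], end[name] = t, t + dur
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
    return start

# three tasks on a resource of capacity 2; 'c' must follow 'a'
s = list_schedule({'a': (2, 1), 'b': (2, 2), 'c': (1, 1)}, {'c': ['a']}, capacity=2)
print(s)
```

An exact approach would instead search over all orderings (with cuts, decomposition or conflict learning) to prove optimality of the resulting makespan.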

Relevance: 30.00%

Abstract:

Among the experimental methods commonly used to define the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process which defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of dynamics of structures, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to a forced-vibration test. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be accurately computed using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests where the force is not known, as in an ambient or impact test. In this analysis we decided to use the continuous wavelet transform (CWT), which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping ratios and vibration modes.
Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations must be made concerning the damping. The fourth chapter again addresses the problem of post-processing data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined, in order to establish which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input and, finally, the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
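As a toy version of the identification problem treated in the early chapters, a free-decay record of a single-degree-of-freedom system can be processed for its natural frequency (FFT peak) and damping ratio (logarithmic decrement); the system parameters below are invented for illustration:

```python
import numpy as np

# Simulated free decay of an SDOF system: fn = 2.0 Hz, damping ratio zeta = 2 %.
fs, fn, zeta = 200.0, 2.0, 0.02
t = np.arange(0, 40, 1 / fs)
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)               # damped circular frequency
x = np.exp(-zeta * wn * t) * np.cos(wd * t)  # free-decay response

# Natural frequency from the FFT peak of the free-decay record
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
fn_est = freqs[np.argmax(X)]

# Damping ratio from the logarithmic decrement between successive cycle peaks
p0, p1 = x[0], x[int(round(fs / fn))]        # approximately one damped period apart
delta = np.log(p0 / p1)
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(fn_est, zeta_est)
```

With ambient (unknown-input) excitation this simple picture breaks down, which is what motivates the wavelet-based procedures of the third and fourth chapters.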

Relevance: 30.00%

Abstract:

While numerous biosensors for the specific detection of DNA have been developed in recent years, the application of surface-sensitive methods to enzymatic reactions is a comparatively new field of research. Despite their high sensitivity and the possibility of observing molecular processes in real time, these methods are not yet established, since enzyme activity can be impaired by proximity to the surface. In this work, the enzymatic extension of immobilized DNA by a DNA polymerase was investigated by means of surface plasmon fluorescence spectroscopy (SPFS) and a quartz crystal microbalance (QCM). In the QCM, the synthesis of DNA was detected as a mass increase, expressed as a drop in the resonance frequency of the quartz oscillator and a rise in its dissipation energy. The viscoelastic properties of the DNA layers were determined by evaluating the data with a Voigt-based model. SPFS uses the evanescent electromagnetic field accompanying surface plasmons for the surface-sensitive excitation of chromophores. In this way, the incorporation of dye-labelled nucleotides into the growing DNA strand was exploited as an indicator of the progress of the reaction. Both measurement techniques could be successfully used to detect DNA synthesis, with the catalytic activity of the enzyme comparable to that measured in solution.
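The mass-frequency relation underlying the QCM measurement is the Sauerbrey equation; a sketch using the textbook sensitivity constant for a 5 MHz AT-cut crystal (the constant of the crystal actually used may differ, and the relation ignores the viscoelastic effects that the Voigt-based model accounts for):

```python
def sauerbrey_mass(delta_f_hz, c_ng_per_cm2_hz=17.7):
    """Areal mass uptake (ng/cm^2) from a QCM frequency shift via the
    Sauerbrey relation delta_m = -C * delta_f. C = 17.7 ng cm^-2 Hz^-1
    is the textbook value for a 5 MHz AT-cut crystal; it assumes a thin,
    rigidly coupled film (no viscoelastic losses)."""
    return -c_ng_per_cm2_hz * delta_f_hz

# a 20 Hz frequency drop corresponds to ~354 ng/cm^2 of rigidly coupled mass
print(sauerbrey_mass(-20.0))
```

A soft, hydrated DNA layer violates the rigid-film assumption, which is precisely why the dissipation data and a Voigt-based viscoelastic model are used instead of the bare Sauerbrey estimate.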

Relevance: 30.00%

Abstract:

1) Background: The most common method to evaluate clarithromycin resistance is the E-test, but it is time-consuming. Resistance of Hp to clarithromycin is due to point mutations in the 23S rRNA. Eight different point mutations have been related to clarithromycin resistance, but the large majority of clarithromycin resistance depends on three point mutations (A2142C, A2142G and A2143G). Novel PCR-based clarithromycin resistance assays, applicable even to paraffin-embedded biopsy specimens, have been proposed. Aims: to assess clarithromycin resistance by detecting these point mutations (with the E-test as the reference method) and, secondly, to investigate the relation with MIC values. Methods: Paraffin-embedded biopsies of Hp-positive patients were retrieved. The A2142C, A2142G and A2143G point mutations were detected by molecular analysis after DNA extraction, using a TaqMan real-time PCR. Results: The study enrolled 86 patients: 46 resistant and 40 susceptible to clarithromycin. The Hp status was evaluated at endoscopy by rapid urease test (RUT), histology and Hp culture. According to real-time PCR, 37 specimens were susceptible to clarithromycin (wild-type DNA), whilst the remaining 49 specimens (57%) were resistant. A2143G was the most frequent mutation. A2142C always expresses a resistant phenotype, and A2142G leads to a resistant phenotype only if homozygous. 2) Background: The colonoscopy workload for endoscopy services is increasing due to colorectal cancer prevention. We tested a combination of faecal tests to improve accuracy and prioritize access to colonoscopy. Methods: we tested a combination of faecal tests (FOBT, M2-PK and calprotectin) in a group of 280 patients requiring colonoscopy. Results: 47 patients had CRC and 85 had advanced adenoma(s) at colonoscopy/histology. Among the single tests for CRC detection, FOBT had the highest specificity and PPV, while M2-PK had the highest sensitivity and the highest NPV. Combinations were more interesting in terms of PPV, and the best combination of tests was i-FOBT + M2-PK.
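As background on why combining faecal tests can raise sensitivity at the cost of specificity, the textbook "either test positive" rule can be sketched as follows (it assumes conditional independence of the tests given disease status, a simplification, and the numbers are illustrative rather than the study's data):

```python
def parallel_combination(se1, sp1, se2, sp2):
    """Sensitivity/specificity of a rule that calls a patient positive if
    EITHER of two (conditionally independent) tests is positive."""
    se = 1 - (1 - se1) * (1 - se2)   # the combination misses only if both tests miss
    sp = sp1 * sp2                   # a false alarm occurs if either test false-alarms
    return se, sp

# illustrative values: a specific test combined with a sensitive one
se_comb, sp_comb = parallel_combination(0.60, 0.95, 0.80, 0.90)
print(se_comb, sp_comb)
```

Sensitivity rises above either single test while specificity drops below both, which is why the study's combined faecal tests were judged mainly on PPV.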

Relevance: 30.00%

Abstract:

The research performed during the PhD course aimed to assess innovative applications of near-infrared spectroscopy in reflectance (NIR) in the beer production chain. The purpose was to measure by NIR the "malting quality" (MQ) parameter of barley, to monitor the malting process and to determine whether a certain type of barley is suitable for the production of beer and spirits. Moreover, NIR was applied to monitor the brewing process. First of all, it was possible to check the quality of raw materials such as barley, maize and barley malt using a rapid, non-destructive and reliable method with a low error of prediction. The most interesting result obtained at this level was that the repeatability of the NIR calibration models developed was comparable with that of the reference method. Moreover, for malt, new kinds of validation were used in order to estimate the real predictive power of the proposed calibration models and to understand the long-term effects. Furthermore, the precision of all the calibration models developed for malt evaluation was estimated and statistically compared with the reference methods, with good results. Then, new calibration models were developed for monitoring the malting process, measuring the moisture content and other malt quality parameters during germination. Moreover, it was possible to obtain by NIR an estimate of the "malting quality" (MQ) of barley and to predict whether its germination will be rapid and uniform and whether a certain type of barley is suitable for the production of beer and spirits. Finally, the NIR technique was applied to monitor the brewing process, using correlations between NIR spectra of beer and analytical parameters, and to assess beer quality.
These innovative results are potentially very useful for the actors involved in the beer production chain, especially the calibration models suitable for the control of the malting process and for the assessment of the "malting quality" of barley, which need to be investigated further in future studies.
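A NIR calibration model is, at its core, a regression from spectra to a reference value; the sketch below fits an ordinary least-squares model on synthetic three-wavelength "spectra" (real chemometric calibrations such as those above typically use PLS on full spectra; all data here are simulated, not the thesis's):

```python
import numpy as np

# Toy NIR calibration: predict moisture (%) from absorbance at a few wavelengths.
rng = np.random.default_rng(0)
true_coef = np.array([12.0, -7.5, 3.0])
X = rng.uniform(0.1, 1.0, size=(40, 3))             # synthetic absorbance "spectra"
y = X @ true_coef + 4.0 + rng.normal(0, 0.05, 40)   # reference moisture values + noise

A = np.column_stack([X, np.ones(len(X))])           # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
sep = np.sqrt(np.mean((y - pred) ** 2))             # standard error of prediction
print(coef, sep)
```

The "error of prediction" quoted for NIR models corresponds to statistics like `sep`, ideally evaluated on an independent validation set rather than the calibration samples used here.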

Relevance: 30.00%

Abstract:

This research has focused on the study of the behaviour and collapse of masonry arch bridges. Recent decades have seen an increasing interest in this structural type, which is still present and in use despite the passage of time and the evolution of transport means. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard method for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, trying to highlight strengths and weaknesses. Three methods are mainly examined: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods derived from two of the fundamental theorems of Plastic Analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Each method is applied to the case studies through computer-based implementations that allow a user-friendly application of the principles described. A particular closed-form approach based on an elasto-plastic material model and developed by some Belgian researchers is also studied. To compare the three methods, two case studies have been analyzed: i) a generic masonry arch bridge with a single span; ii) a real masonry arch bridge, the Clemente Bridge, built over the Savio River in Cesena. In the analyses performed, all the models are two-dimensional in order to obtain results comparable across the different methods examined. The different methods have been compared with each other in terms of collapse load and hinge positions.
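A worked instance of the statics behind the thrust line method: for a parabolic arch under a uniformly distributed load the funicular polygon coincides with the arch axis, so the horizontal thrust has a closed form (the numbers below are illustrative, not from the case studies):

```python
def parabolic_arch_thrust(w, span, rise):
    """Horizontal thrust of a parabolic arch under a uniformly distributed
    load w per unit span. For this load the funicular shape is the parabola
    itself, so the thrust line lies on the arch axis and H = w*L^2 / (8*f)."""
    return w * span**2 / (8 * rise)

# e.g. w = 80 kN/m, L = 20 m, f = 4 m  ->  H = 1000 kN
print(parabolic_arch_thrust(80.0, 20.0, 4.0))
```

For a real masonry arch under non-funicular loads the thrust line departs from the axis, and collapse is governed by whether it can be kept within the arch thickness, which is what the graphical and mechanism methods check.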

Relevance: 30.00%

Abstract:

Plant resistance to phytopathogenic infections is the result of multiple defence reactions. One of these is the hypersensitive response (HR). It is the consequence of an infestation of Börner with grape phylloxera and manifests itself on the leaves and roots of this resistant rootstock in the form of local necroses. The generation of new, transgenic phylloxera-resistant rootstocks requires precise knowledge of the mechanisms of phylloxera resistance. To identify resistance genes, differential gene-expression analyses were employed in this work: microarray analysis with the Geniom one technology, and real-time (RT) PCR. They allowed the gene expression in treated root tissue to be compared with the expression in normal tissue of the rootstock Börner. Indole-3-acetic acid (IAA), a component of phylloxera saliva, served as the experimental inducer of the HR in Börner. Earlier studies on phylloxera resistance had shown that treatment with IAA produces necroses on the roots of Börner, but not on the roots of the phylloxera-tolerant rootstock SO4 or of the phylloxera-susceptible scion. This was the reason for choosing SO4 and Riesling as comparisons to Börner for this study, in order to clarify the role of IAA as a trigger of the resistance mechanisms in Börner. Overall, clear differences in the reactions of the three cultivars to the IAA treatment could be revealed. While in Börner a large number of genes responded, and responded strongly, to the IAA stimulus, in SO4 and Riesling the responding genes were few in number and the reactions of both cultivars to IAA were rather weak. In total, 27 genes could be responsible for the phylloxera resistance of Börner. An IAA-dependent activation was observed for genes that are important in the production of phytoalexins, such as phenylalanine ammonia-lyase, lipoxygenase and stilbene synthase. Furthermore, a regulation of general stress-associated genes and of cell-wall proteins could be demonstrated, as well as an induction of signalling components such as the transcription factor ethylene response factor. A clear up-regulation of auxin transporters in the IAA-treated Börner roots additionally pointed to cultivar-specific differences in the cellular uptake and export of IAA. By working out the interplay of the IAA-regulated genes, this work has provided valuable insights into the mechanisms of phylloxera resistance in Börner.

Relevance: 30.00%

Abstract:

The research activity characterizing the present thesis was mainly centered on the design, development and validation of methodologies for the estimation of stationary and time-varying connectivity between different regions of the human brain during specific complex cognitive tasks. This activity involved two main aspects: i) the development of a stable, consistent and reproducible procedure for functional connectivity estimation, with a high impact on the neuroscience field, and ii) its application to real data from healthy volunteers eliciting specific cognitive processes (attention and memory). In particular, the methodological issues addressed in the present thesis consisted in devising an approach, applicable in the neuroscience field, able to: i) include all the cerebral sources in the connectivity estimation process; ii) accurately describe the temporal evolution of connectivity networks; iii) assess the significance of connectivity patterns; iv) consistently describe the relevant properties of brain networks. The advancements provided in this thesis made it possible to identify quantifiable descriptors of cognitive processes during a high-resolution EEG experiment involving subjects performing complex cognitive tasks.
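The "signals to network" step of functional connectivity estimation can be illustrated in its simplest stationary form with a correlation matrix (the thesis relies on far richer source-level and time-varying estimators; the channel data below are simulated):

```python
import numpy as np

# Toy stationary connectivity estimate: pairwise correlation between channels.
rng = np.random.default_rng(1)
n, samples = 4, 2000
data = rng.normal(size=(n, samples))
data[1] += 0.8 * data[0]            # inject a dependency between channels 0 and 1

conn = np.corrcoef(data)            # n x n symmetric connectivity matrix
edges = (np.abs(conn) > 0.3) & ~np.eye(n, dtype=bool)   # thresholded network links
print(edges)
```

Assessing whether a link like the one recovered here is statistically significant, and tracking how it evolves over time, are exactly the issues (iii) and (ii) the thesis tackles with dedicated methodology.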

Relevance: 30.00%

Abstract:

In this work, new tools for atmospheric pollutant sampling and analysis were applied in order to go deeper into source apportionment studies. The project was developed mainly through the study of atmospheric emission sources in a suburban area influenced by a municipal solid waste incinerator (MSWI), a medium-sized coastal tourist town and a motorway. Two main research lines were followed. As concerns the first line, the potential of PM samplers coupled with a wind-select sensor was assessed. Results showed that they may be a valid support in source apportionment studies; however, meteorological and territorial conditions can strongly affect the results. Moreover, new markers were investigated, particularly focusing on biomass burning processes. OC proved to be a good indicator of biomass combustion, as did all the determined organic compounds. Among metals, lead and aluminium were well correlated with biomass combustion. Surprisingly, PM was not enriched in potassium during the bonfire event. The second research line consisted in the application of Positive Matrix Factorization (PMF), a new statistical tool in data analysis. This technique was applied to datasets with different time resolutions. PMF application to atmospheric deposition fluxes identified six main sources affecting the area; the incinerator's relative contribution appeared to be negligible. PMF analysis was then applied to PM2.5 collected with samplers coupled with a wind-select sensor. The higher number of determined environmental indicators made it possible to obtain more detailed results on the sources affecting the area. Vehicular traffic proved to be the source of greatest concern for the study area. Also in this case, the incinerator's relative contribution appeared to be negligible. Finally, the application of PMF analysis to hourly aerosol data demonstrated that the higher the temporal resolution of the data, the closer the source profiles were to the real ones.
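PMF factors the data matrix into non-negative source contributions and source profiles; its unweighted core is equivalent to non-negative matrix factorization, which can be sketched with multiplicative updates (receptor-modelling PMF additionally weights residuals by measurement uncertainties; the data below are synthetic):

```python
import numpy as np

# Factor X (samples x species) into non-negative contributions G and
# profiles F, X ~ G @ F, via Lee-Seung multiplicative updates.
rng = np.random.default_rng(2)
G_true = rng.uniform(0, 1, (30, 2))        # 30 samples, 2 emission sources
F_true = rng.uniform(0, 1, (2, 6))         # 2 source profiles, 6 chemical species
X = G_true @ F_true                        # synthetic, exactly rank-2 data

G = rng.uniform(0.1, 1, (30, 2))           # random non-negative initialization
F = rng.uniform(0.1, 1, (2, 6))
for _ in range(2000):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)   # update profiles
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)   # update contributions

residual = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(residual)
```

In a real source apportionment study the columns of F are then matched to known emission fingerprints (traffic, biomass burning, incinerator), and each source's relative contribution is read from G.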

Relevance: 30.00%

Abstract:

The work carried out during this doctoral thesis lays the foundations for the development of new biotechnologies for the mycorrhization of forest plants with prized truffles, and in particular with Tuber magnatum. During this thesis it was possible to isolate and maintain in pure culture the mycelium of T. magnatum, to obtain and describe its mycorrhizas and those of other "white" truffles (T. oligospermum, T. borchii), and to follow the development of the mycelium in soil using the real-time PCR technique. Species-specific primers capable of identifying T. oligospermum were designed, and the possibility of using these primers in multiplex PCR together with the T. magnatum- and T. borchii-specific primers already available in the literature was verified, in order to detect both fraud in the marketing of ascomata and possible contaminations in mycorrhized plants. To improve the mycelial development of truffles, we attempted to improve the nutrient medium for mycelial growth using different carbon sources, hazel root extracts and single fractions separated from them. Finally, cryopreservation protocols for truffle mycelia were developed. The root extracts are able to stimulate the mycelial growth of the model truffle T. borchii and to modify its hyphal morphology. These results were also confirmed by the increased expression of the genes CDC42 and Rho-GDI, two genes linked to the polarized apical growth of the hyphae of filamentous fungi. Moreover, it was shown that maintaining truffle mycelia in culture for many years causes a loss of their ability to infect plant roots, and therefore of their potential use both for experimental and for cultivation purposes. This highlights the importance of long-term preservation of the available biological material, and it was shown that cryopreservation can also be successfully applied to species of the genus Tuber.
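Gene-expression increases such as those reported for CDC42 and Rho-GDI are typically quantified in real-time PCR by the relative 2^-ΔΔCt method; a sketch (assuming roughly 100% amplification efficiency, with illustrative Ct values rather than the thesis's data):

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method: the target gene's
    Ct is normalized to a reference gene in both conditions, then the two
    normalized values are compared. Assumes ~100 % PCR efficiency."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2 ** (-ddct)

# the target Ct drops by 2 cycles relative to the reference gene -> 4-fold induction
print(fold_change(22.0, 18.0, 24.0, 18.0))
```

The same real-time PCR machinery, used quantitatively, is what allows the mycelium of T. magnatum to be tracked in soil over time.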