913 results for "B formal method"
Abstract:
To support topical delivery studies of glycoalkaloids, an HPLC-UV analytical method was developed and validated for the determination of solasonine (SN) and solamargine (SM) in different skin layers, as well as in a topical formulation. The method was linear within the ranges 0.86 to 990.00 µg/mL for SN and 1.74 to 1000.00 µg/mL for SM (r = 0.9996). Moreover, the recoveries for both glycoalkaloids were higher than 88.94% and 93.23% for skin samples and the topical formulation, respectively. The method developed is reliable and suitable for topical delivery skin studies and for determining the content of SN and SM in topical formulations.
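As a minimal illustration of the kind of calibration workflow such a validation rests on, the hedged Python sketch below fits a straight line to peak areas versus concentration, reports the correlation coefficient r, and back-calculates an unknown sample. All numbers are made up for illustration and are not data from the study.

```python
# Hedged sketch: the linearity check underlying an HPLC-UV calibration curve.
# Concentrations and peak areas are invented illustrative values, not study data;
# only the workflow (linear fit, r, back-calculation) is shown.
import numpy as np

conc = np.array([1.0, 10.0, 100.0, 500.0, 1000.0])   # µg/mL (illustrative)
area = np.array([0.9, 10.3, 99.1, 498.0, 1002.0])    # detector response (illustrative)

slope, intercept = np.polyfit(conc, area, 1)          # calibration line
r = np.corrcoef(conc, area)[0, 1]                     # correlation coefficient

unknown_area = 250.0
unknown_conc = (unknown_area - intercept) / slope     # back-calculated concentration

print(f"r = {r:.4f}, unknown ≈ {unknown_conc:.1f} µg/mL")
```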
Abstract:
Boiling points ($T_B$) of acyclic alkynes are predicted from their boiling point numbers ($Y_{BP}$) with the relationship $T_B(\mathrm{K}) = -16.802\,Y_{BP}^{2/3} + 337.377\,Y_{BP}^{1/3} - 437.883$. In turn, $Y_{BP}$ values are calculated from structure using the equation $Y_{BP} = 1.726 + A_i + 2.779C + 1.716M_3 + 1.564M + 4.204E_3 + 3.905E + 5.007P - 0.329D + 0.241G + 0.479V + 0.967T + 0.574S$. Here $A_i$ depends on the substitution pattern of the alkyne and the remainder of the equation is the same as that reported earlier for alkanes. For a data set consisting of 76 acyclic alkynes, the correlation of predicted and literature $T_B$ values had an average absolute deviation of 1.46 K, and the $R^2$ of the correlation was 0.999. In addition, the calculated $Y_{BP}$ values can be used to predict the flash points of alkynes.
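A minimal sketch, in Python, of how the first correlation would be evaluated once $Y_{BP}$ has been obtained from the structural descriptors; the coefficients are those quoted above, while the example $Y_{BP}$ value is purely illustrative and not taken from the paper.

```python
# Hedged sketch: evaluating the quoted T_B(Y_BP) correlation.
# The descriptor counts (A_i, C, M_3, ...) must be derived from the structure as
# described in the paper; here Y_BP is simply taken as a precomputed input.

def boiling_point_from_ybp(y_bp: float) -> float:
    """Predicted boiling point in kelvin from the boiling point number Y_BP."""
    return -16.802 * y_bp ** (2 / 3) + 337.377 * y_bp ** (1 / 3) - 437.883

# Example with an illustrative Y_BP value (not from the paper):
print(f"{boiling_point_from_ybp(12.0):.1f} K")
```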
Abstract:
We have carried out high contrast imaging of 70 young, nearby B and A stars to search for brown dwarf and planetary companions as part of the Gemini NICI Planet-Finding Campaign. Our survey represents the largest, deepest survey for planets around high-mass stars (≈1.5-2.5 M_☉) conducted to date and includes the planet hosts β Pic and Fomalhaut. We obtained follow-up astrometry of all candidate companions within 400 AU projected separation for stars in uncrowded fields and identified new low-mass companions to HD 1160 and HIP 79797. We have found that the previously known young brown dwarf companion to HIP 79797 is itself a tight (3 AU) binary, composed of brown dwarfs with masses 58$^{+21}_{-20}$ M_Jup and 55$^{+20}_{-19}$ M_Jup, making this system one of the rare substellar binaries in orbit around a star. Considering the contrast limits of our NICI data and the fact that we did not detect any planets, we use high-fidelity Monte Carlo simulations to show that fewer than 20% of 2 M_☉ stars can have giant planets more massive than 4 M_Jup between 59 and 460 AU at 95% confidence, and fewer than 10% of these stars can have a planet more massive than 10 M_Jup between 38 and 650 AU. Overall, we find that large-separation giant planets are not common around B and A stars: fewer than 10% of B and A stars can have an analog to the HR 8799 b (7 M_Jup, 68 AU) planet at 95% confidence. We also describe a new Bayesian technique for determining the ages of field B and A stars from photometry and theoretical isochrones. Our method produces more plausible ages for high-mass stars than previous age-dating techniques, which tend to underestimate stellar ages and their uncertainties.
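The statistical argument behind the quoted upper limits can be illustrated in a highly simplified form: given per-star detection probabilities (which in the survey come from the NICI contrast curves and Monte Carlo orbit simulations; here they are simply made up) and zero detections, one solves for the largest occurrence fraction f still compatible with seeing nothing at 95% confidence. The Python sketch below is only a hedged caricature of that idea, not the paper's Bayesian machinery.

```python
# Hedged sketch: "no detections -> upper limit on the planet fraction f".
# With per-star completeness values p_i (invented here), zero detections imply
# prod_i (1 - f * p_i) >= 0.05 at 95% confidence; we solve for the largest such f.
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.3, 0.9, size=70)        # illustrative per-star completeness values

def prob_zero_detections(f: float) -> float:
    return np.prod(1.0 - f * p)

lo, hi = 0.0, 1.0                          # bisection for the 95% upper limit on f
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if prob_zero_detections(mid) > 0.05:
        lo = mid
    else:
        hi = mid

print(f"95% upper limit on f ≈ {lo:.3f}")
```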
Abstract:
Synthesis and characterization of new functionalized mono- and bis-tetrahydropyrrolo[3,4-b]carbazoles as potential DNA ligands. In carbazole chemistry, new annulated compounds with potential DNA affinity, and hence antitumor activity, are to be developed. At the molecular level, DNA intercalation or DNA groove binding is expected. Building on this, and guided by cytostatic agents known from the literature, mono- and bis-tetrahydropyrrolo[3,4-b]carbazoles were synthesized, which may contribute to the development of new lead structures and lead compounds. In the present work, the indole-2,3-quinodimethane Diels-Alder reaction established in our group was used as the synthetic key reaction, with suitable cyclic mono- and bismaleimides as dienophiles. With a view to establishing future structure-activity relationships, variable linkers were introduced between the two pyrrolotetrahydrocarbazole units to be connected; these were of aliphatic and diamidic nature. Diamidic structural elements were introduced with regard to the development of new peptidomimetics. Their synthesis was achieved on the one hand via the mixed acid anhydride method and on the other via the azolide method. The structures of the tetrahydrocarbazoles obtained as cycloadducts were confirmed by standard techniques (1D and 2D NMR, IR spectroscopy and mass spectrometry). Enantiomers and diastereomers of chiral drugs can differ strongly in their pharmacological properties, so procedures must be developed to prepare these substances in enantiomerically pure form where necessary. The racemates of the monotetrahydrocarbazoles, as well as the racemates and the diastereomeric meso forms of the bistetrahydrocarbazoles formed in the reaction, were separated analytically for the first time by chiral HPLC. In a theoretical study complementing the synthesis, computer molecular modelling of the Diels-Alder reaction was carried out; in addition, force-field calculations were used for the conformational analysis of the 'simple' monotetrahydrocarbazoles, and, building on this, simple DNA-docking experiments were performed for a first assessment of the DNA-binding behaviour of the synthesized compounds.
Abstract:
In recent years an ever-increasing degree of automation has been observed in industrial processes. This increase is motivated by the demand for systems with higher performance in terms of quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend towards more complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy products packed in boxes, such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are located in Italy, notably the packaging machine industry; a great concentration of this kind of industry is found in the Bologna area, which is therefore called the "packaging valley". Usually the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological/implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is made. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control: in logic control, discrete-event dynamics replace time-driven dynamics, and hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to bring out the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly tied to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems could be observed. Concerning logic control design, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, alongside reliable mechanical elements, an increasing number of electronic devices are also present, which are more vulnerable by their own nature. The diagnosis and fault-isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and, eventually, reconfiguring the control system so that faults are tolerated. On this topic, important contributions to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey on the state of the software engineering paradigms applied to industrial automation is given. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results on Discrete Event Systems that should help the reader understand some crucial points of Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
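To give a flavour of what a discrete-event model of a logic control component looks like, the hedged Python sketch below encodes a toy actuator-like automaton and performs a trivial reachability and safety check. It is not the thesis's Generalized Actuator or Generalized Device model; the states, events and checked property are invented for illustration only.

```python
# Hedged sketch: a toy discrete-event model of an actuator-like logic component,
# in the spirit of component-level discrete-event modelling (NOT the thesis's
# Generalized Actuator). A finite automaton plus a simple exhaustive check.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "fault_detected"): "fault",
    ("fault", "reset"): "idle",
}

def step(state: str, event: str) -> str:
    """Return the next state; events undefined in a state leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

def reachable_states(initial: str = "idle") -> set:
    """Exhaustive exploration of the reachable state set (trivial here, but the
    same idea underlies formal verification of larger discrete-event models)."""
    events = {e for (_, e) in TRANSITIONS}
    seen, frontier = {initial}, [initial]
    while frontier:
        s = frontier.pop()
        for e in events:
            t = step(s, e)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy safety property: from the fault state, only "reset" may leave it.
assert all(step("fault", e) == "fault" for e in ["start", "stop", "fault_detected"])
print(reachable_states())
```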
Abstract:
In the present work, two physical flow experiments on nonwoven fabrics are investigated, which are intended to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measurement data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with a free boundary for the degenerate parabolic Richards equation in its saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output-least-squares functional and use iterative regularization methods for its minimization, such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem, as well as existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which we need for the numerical reconstruction procedure, to a linear degenerate parabolic boundary value problem. We describe the numerical realization and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
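The structure of such an output-least-squares reconstruction can be sketched as follows. The Python example below uses SciPy's Levenberg-Marquardt solver on a deliberately simple stand-in forward model; it does not solve the degenerate parabolic Richards problem, and a two-parameter model replaces the B-spline parametrization. Only the parametrize, simulate, compare-with-measured-free-boundary-data, update loop is illustrated.

```python
# Hedged sketch: output-least-squares reconstruction with a Levenberg-Marquardt
# solver.  The forward model is a toy stand-in (NOT the Richards free-boundary
# problem of the thesis); it merely maps coefficient parameters to predicted
# free-boundary positions so the fitting loop can be shown.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.1, 1.0, 20)                     # observation times

def forward(params, t):
    """Toy forward map: predicted free-boundary position for given parameters."""
    a, b = params
    return a * np.sqrt(t) + b * t                 # placeholder model

true_params = np.array([1.5, 0.3])
data = forward(true_params, t) + 0.01 * np.random.default_rng(1).normal(size=t.size)

def residual(params):
    return forward(params, t) - data              # output-least-squares residual

fit = least_squares(residual, x0=[1.0, 0.0], method="lm")   # Levenberg-Marquardt
print(fit.x)
```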
Abstract:
The central function of the main light-harvesting complex of photosystem II, LHCII, is the absorption of sunlight and the delivery of energy for photosynthetic charge separation in the reaction centre of the photosystem. LHCII also plays an important role in the regulation of photosynthesis, since the energy distribution between photosystem I and photosystem II is controlled, in the course of the so-called "state transition" process, via the distribution of the light-harvesting complexes between the two photosystems. The first part of this work focused on the conformational dynamics of the N-terminal domain of LHCII, which is probably involved in the regulation of light harvesting. Together with co-workers from the 3rd Institute of Physics of the University of Stuttgart, work was carried out on establishing a method for the single-molecule spectroscopic investigation of the dynamics of the N-terminus. The measured quantity was the energy transfer between a fluorescent dye coupled to the N-terminal domain and the chlorophylls of the complex. The function of LHCII as an efficient light antenna formed the basis for the second part of this work. Here it was investigated to what extent LHCII can be integrated as a light harvester into an electrochemical solar cell. In the envisaged solar cell, the excitation energy of LHCII was to be transferred to acceptor dyes, which in turn inject electrons into the conduction band of a porous semiconductor electrode made of titanium dioxide or tin dioxide, on which the complexes and dyes were immobilized.
Abstract:
The Standard Model (SM) of particle physics describes the fundamental constituents of matter and their interactions very precisely. Despite this success, there are still open questions that the SM cannot answer. One as yet unfinished test is the measurement of the strength of the weak coupling between quarks. Neutral B and $\bar{B}$ mesons can transform into their antiparticles within their lifetime via a weak-interaction process. By measuring the B_s oscillation, the coupling V_td between the top (t) and down (d) quarks can be determined. All experiments performed up to the end of 2005 provided only a lower limit on the oscillation frequency, Δm_s > 14.4 ps⁻¹. The present work describes the measurement of the B_s oscillation frequency Δm_s in the semileptonic channel B_s → D_s^(−) μ^+. The data used come from proton-antiproton collisions recorded between April 2002 and March 2006 with the DØ detector at the Tevatron collider of the Fermi National Accelerator Laboratory at a centre-of-mass energy of $\sqrt{s}$ = 1.96 TeV. The data sets used correspond to an integrated luminosity of 1.3 fb⁻¹ (620 million events). For this oscillation measurement, the quark content of the B_s meson at the time of production and at the time of decay was determined, and the decay time was measured. After reconstruction and selection of the signal events, the charge of the muon fixes the quark content of the B_s meson at the time of decay. In addition, the quark content of the B_s meson at the time of production was tagged: b quarks are produced in pairs in $p\bar{p}$ collisions, and the decay products of the second b hadron fix the quark content of the B_s meson at the time of production. With a sensitivity of Δm_s^sens = 14.5 ps⁻¹, a lower limit on the oscillation frequency of Δm_s > 15.5 ps⁻¹ was determined. The maximum-likelihood method yielded an oscillation frequency of Δm_s = (20 +2.5 −3.0 (stat+syst) ± 0.8 (syst, k)) ps⁻¹ at a confidence level of 90%. The undetected neutrino momentum leads to the systematic uncertainty (syst, k). Together with the corresponding oscillation of the B_d meson, this result yields a significant measurement of the coupling V_td, in agreement with other experiments on the weak quark couplings.
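For orientation, the textbook time-dependent mixing probabilities that such a measurement probes are sketched below in Python; the lifetime and oscillation frequency are illustrative round numbers close to the measured values, and the actual analysis of course uses a full likelihood with flavour tagging, decay-time resolution and the missing-neutrino correction.

```python
# Hedged sketch: textbook B_s mixing probabilities, P(unmixed/mixed) at proper
# time t.  Numbers are illustrative, not the thesis's fit inputs or results.
import numpy as np

tau_bs = 1.5e-12          # approximate B_s lifetime in s
dms = 17.8e12             # approximate oscillation frequency in 1/s

def p_unmixed(t):
    return np.exp(-t / tau_bs) * (1 + np.cos(dms * t)) / (2 * tau_bs)

def p_mixed(t):
    return np.exp(-t / tau_bs) * (1 - np.cos(dms * t)) / (2 * tau_bs)

t = 1.0e-12               # example proper time of 1 ps
print(p_unmixed(t), p_mixed(t))
```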
Abstract:
The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed. In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it can not be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows. An important range of applications for the lattice Boltzmann method is formed by microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations which are able to complement the achievements of theory and experiment. Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, where the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced symmetry model. In the case of a Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method. This can help to develop improved boundary conditions that lead to more accurate simulation results.
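The basic collide-and-stream structure that all of these extensions build on can be shown in a few lines. The hedged Python sketch below implements a plain D2Q9 BGK lattice Boltzmann step on a periodic box; it contains none of the thesis-specific ingredients (no thermal fluctuations, no tunable-slip or reduced-symmetry boundaries), and the grid size, relaxation time and initial condition are arbitrary illustrative choices.

```python
# Hedged sketch: a bare-bones D2Q9 BGK lattice Boltzmann update on a periodic box.
# Only the "collide locally, then stream along discrete links" structure is shown.
import numpy as np

# D2Q9 velocity set and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                          # relaxation time (sets the viscosity)
nx, ny = 32, 32

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Start from rest with a small density perturbation.
rho = np.ones((nx, ny)); rho[nx//2, ny//2] += 0.01
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(100):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau          # BGK collision
    for i in range(9):                                  # streaming along links
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print(rho.sum())   # total mass is conserved
```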
Abstract:
The proton-nucleus elastic scattering at intermediate energies is a well-established method for the investigation of the nuclear matter distribution in stable nuclei and was recently applied also for the investigation of radioactive nuclei using the method of inverse kinematics. In the current experiment, the differential cross sections for proton elastic scattering on the isotopes $^{7,9,10,11,12,14}$Be and $^8$B were measured. The experiment was performed using the fragment separator at GSI, Darmstadt to produce the radioactive beams. The main part of the experimental setup was the time projection ionization chamber IKAR which was simultaneously used as hydrogen target and a detector for the recoil protons. Auxiliary detectors for projectile tracking and isotope identification were also installed. As results from the experiment, the absolute differential cross sections d$\sigma$/d$t$ as a function of the four momentum transfer $t$ were obtained. In this work the differential cross sections for elastic p-$^{12}$Be, p-$^{14}$Be and p-$^{8}$B scattering at low $t$ ($t \leq$ 0.05 (GeV/c)$^2$) are presented. The measured cross sections were analyzed within the Glauber multiple-scattering theory using different density parameterizations, and the nuclear matter density distributions and radii of the investigated isotopes were determined. The analysis of the differential cross section for the isotope $^{14}$Be shows that a good description of the experimental data is obtained when density distributions consisting of separate core and halo components are used. The determined rms matter radius is $3.11 \pm 0.04 \pm 0.13$ fm. In the case of the $^{12}$Be nucleus the results showed an extended matter distribution as well. For this nucleus a matter radius of $2.82 \pm 0.03 \pm 0.12$ fm was determined. An interesting result is that the free $^{12}$Be nucleus behaves differently from the core of $^{14}$Be and is much more extended than it. The data were also compared with theoretical densities calculated within the FMD and the few-body models. In the case of $^{14}$Be, the calculated cross sections describe the experimental data well, while in the case of $^{12}$Be there are discrepancies in the region of high momentum transfer. Preliminary experimental results for the isotope $^8$B are also presented. An extended matter distribution was obtained (though much more compact as compared to the neutron halos). A proton halo structure was observed for the first time with the proton elastic scattering method. The deduced matter radius is $2.60 \pm 0.02 \pm 0.26$ fm. The data were compared with microscopic calculations in the frame of the FMD model and reasonable agreement was observed. The results obtained in the present analysis are in most cases consistent with the previous experimental studies of the same isotopes with different experimental methods (total interaction and reaction cross section measurements, momentum distribution measurements). For future investigation of the structure of exotic nuclei a universal detector system EXL is being developed. It will be installed at the NESR at the future FAIR facility where higher intensity beams of radioactive ions are expected. The usage of storage ring techniques provides high luminosity and low background experimental conditions. Results from the feasibility studies of the EXL detector setup, performed at the present ESR storage ring, are presented.
Abstract:
Reactive halogen compounds are known to play an important role in a wide variety of atmospheric processes, such as determining the atmospheric oxidation capacity and driving coastal new particle formation. In this work, novel analytical approaches combining diffusion denuder/impinger sampling techniques with gas chromatographic–mass spectrometric (GC–MS) determination are developed to measure activated chlorine compounds (HOCl and Cl2), activated bromine compounds (HOBr, Br2, BrCl, and BrI), activated iodine compounds (HOI and ICl), and molecular iodine (I2). The denuder/GC–MS methods have been applied to field measurements in the marine boundary layer (MBL). High mixing ratios (of the order of 100 ppt) of activated halogen compounds and I2 are observed in the coastal MBL in Ireland, which explains the ozone destruction observed. The emission of I2 is found to correlate inversely with tidal height and positively with the levels of O3 in the surrounding air. In addition, the release is found to be dominated by algae species composition and biomass density, which supports the "hot-spot" hypothesis of atmospheric iodine chemistry. The observations of elevated I2 concentrations substantially support the existence of higher concentrations of littoral iodine oxides and thus the connection to the strong ultra-fine particle formation events in the coastal MBL.
Abstract:
The development and growth of plants is strongly affected by the interactions between roots, root-associated organisms and rhizosphere communities. Methods to assess such interactions are hard to develop, particularly in perennial and woody plants, because of their complex root system structure and their temporally changing physiology. In this respect, grape root systems are not well investigated. The aim of the present work was the development of a method to assess and predict interactions at the root system of rootstocks (Vitis berlandieri x Vitis riparia) in the field. To achieve this aim, grape phylloxera (Daktulosphaira vitifoliae Fitch, Hemiptera, Aphidoidea) was used as a grape-root parasitizing model. To develop the methodical approach, a long-term trial (2006-2009) was set up on a commercially used vineyard in Geisenheim/Rheingau. Every 2 to 8 weeks the topmost 20 cm of soil under the foliage wall were investigated and root material was extracted (n=8-10). To include temporal, spatial and cultivar-specific root system dynamics, the extracted root material was analyzed digitally for its morphological properties. The grape phylloxera population was quantified and characterized visually on the basis of its larval stages (oviparous, non-oviparous and winged preliminary stages). Infection patches (nodosities) were characterized visually as well, partly supported by digital root colour analyses. Because of the known effects of fungal endophytes on the vitality of grape-phylloxera-infested grapevines, fungal endophytes were isolated from nodosity and root tissue and subsequently characterized (morphotypes). Further abiotic and biotic soil conditions of the vineyard were assessed. The temporal, spatial and cultivar-specific sensitivity of single parameters was analyzed by omnibus tests (ANOVAs) and adjacent post-hoc tests. The relations between different parameters were analyzed by multiple regression models. Quantitative parameters were developed to assess the degeneration of nodosities, the development of roots attached to nodosities, and to differentiate between nodosities and other root swellings in the field. Significant differences were shown between parameters that include root dynamics and parameters that ignore them. Regarding the description of grape phylloxera population and root system dynamics, the method showed a high temporal, spatial and cultivar-specific sensitivity. Furthermore, specific differences could be shown in the frequency of endophyte morphotypes between root and nodosity tissue as well as between cultivars. Degeneration of nodosities as well as nodosity occupation rates could be related to the calculated abundances of the grape phylloxera population. Further ecological questions concerning grape root development (e.g. the relation between moisture and root development) and grape phylloxera population development (e.g. the relation between temperature and population structure) could be answered for field conditions. In general, the presented work provides an approach to evaluate the vitality of grape root systems. This approach can be useful for the development of control strategies against soil-borne pests in viticulture (e.g. grape phylloxera, Sorosphaera viticola, Roesleria subterranea (Weinm.) Redhead) as well as for the evaluation of integrated management systems in viticulture.
Abstract:
Monoclonal antibodies have emerged as one of the most promising therapeutics in oncology over the last decades. The generation of fully human tumor-antigen-specific antibodies suitable for anti-tumor therapy is laborious and difficult to achieve. Autoreactive B cells expressing such antibodies are detectable in cancer patients and represent a suitable source of human antibodies. However, the isolation and cultivation of this cell type is challenging. A novel method was established to identify antigen-specific B cells. The method is based on the conversion of the antigen-independent CD40 signal into an antigen-specific one. For this purpose, the artificial fusion proteins ABCos1 and ABCos2 (Antigen-specific B cell co-stimulator) were generated, which consist of an extracellular association domain derived from the constant region of human immunoglobulin (Ig) G1, a transmembrane fragment and an intracellular signal transducer domain derived from the cytoplasmic domain of the human CD40 receptor. Through association with endogenous Ig molecules, the heterodimeric complex allows antigen-specific stimulation of both the BCR and CD40. In this work the ability of the ABCos constructs to associate with endogenous IgG molecules was shown. Moreover, crosslinking of ABCos stimulates the activation of NF-κB in HEK293-lucNifty cells and induces proliferation in B cells. The stimulation of ABCos in transfected B cells results in an activation pattern different from that induced by the conventional CD40 signal: ABCos-activated B cells show a mainly IgG-isotype-specific activation of memory B cells and are characterized by high proliferation and differentiation into plasma cells. To validate the approach, a model system was set up: B cells were transfected with IVT-RNA encoding an anti-Plac1 B cell receptor (antigen-specific BCR), ABCos, or both. Stimulation with the BCR-specific Plac1 peptide induces proliferation only in the co-transfected B cell population. Moreover, we tested the method in human IgG+ memory B cells from CMV-infected blood donors, in which the stimulation of ABCos-transfected B cells with a CMV peptide induces antigen-specific expansion. These findings show that challenging ABCos-transfected B cells with a specific antigen results in the activation and expansion of antigen-specific B cells and allows not only the identification but also the cultivation of these B cells. The described method will help to identify antigen-specific B cells, can be used to characterize (tumor) autoantigen-specific B cells, and allows the generation of fully human antibodies that can be used as a diagnostic tool as well as in cancer therapy.
Abstract:
The aim of this work was the construction and deployment of the atmospheric chemical ionization mass spectrometer AIMS for ground-based and airborne measurements of nitrous acid (HONO). For the mass spectrometer, a DC-operated gas-discharge ion source and a dedicated pressure-regulating valve were developed. During the instrument intercomparison campaign FIONA (Formal Intercomparisons of Observations of Nitrous Acid) at an atmospheric simulation chamber in Valencia (Spain), AIMS was calibrated for HONO and deployed for the first time. In various experiments HONO mixing ratios between 100 pmol/mol and 25 nmol/mol were generated and measured with AIMS free of interferences. Within the measurement uncertainty of ±20%, the mass spectrometric measurements agree well with differential optical absorption spectroscopy and long-path absorption photometry. Mass spectrometry can thus be used for fast and sensitive detection of HONO in polluted urban air and in exhaust plumes. First airborne measurements of HONO with AIMS were performed in 2011 during the CONCERT (Contrail and Cirrus Experiment) campaign on the DLR research aircraft Falcon. A detection limit of < 10 pmol/mol (3σ, 1 s) was achieved. During chase flights, molar HONO-to-nitrogen-oxide ratios (HONO/NO) of 2.0 to 2.5% were measured in the young exhaust plumes of passenger aircraft. HONO is formed in the engine by the reaction of NO with OH. A measured decreasing trend of the HONO/NO ratio with increasing nitrogen oxide emission index was confirmed and points to an OH limitation in the young exhaust plume. In addition to the mass spectrometric measurements, aircraft measurements of the Forward Scattering Spectrometer Probe FSSP-300 particle probe in young contrails were evaluated and analyzed. From the measured particle size distributions, extinction and optical depth distributions were derived and made available for the investigation of various scientific questions, e.g. concerning the particle shape in young contrails and their climate impact. Within the scope of this work, the influence of the aircraft and engine type on the microphysical and optical properties of contrails was investigated. Under similar meteorological conditions with respect to humidity, temperature and stable thermal stratification, 2-minute-old contrails of the passenger aircraft types A319-111, A340-311 and A380-841 were compared. Within the measurement uncertainty, no change of the effective diameter of the particle size distributions was found. In contrast, with increasing aircraft weight, the particle number density (162 to 235 cm⁻³), the extinction (2.1 to 3.2 km⁻¹), the descent depth of the contrail (120 to 290 m) and thus the optical depth of the contrail (0.25 to 0.94) increase. The measured trend was confirmed by comparison with two independent contrail models. With the measurements, a linear dependence of the total extinction (extinction times the cross-sectional area of the contrail) on the fuel consumption per flight distance was found and confirmed.
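As an illustration of how extinction is obtained from a measured size distribution, the hedged Python sketch below integrates the geometric cross sections over a made-up particle size distribution, assuming an extinction efficiency of Q_ext ≈ 2 (the large-particle limit); the FSSP-300 analysis in this work of course uses the measured size bins and appropriate scattering efficiencies.

```python
# Hedged sketch: extinction coefficient from a particle size distribution,
# beta_ext = sum_i Q_ext * pi * r_i^2 * n_i, with Q_ext ~ 2 (large-particle limit).
# The size bins and number densities are invented for illustration, not FSSP data.
import numpy as np

r = np.array([1.0, 2.0, 4.0, 8.0]) * 1e-6        # bin-centre radii in m
n = np.array([120.0, 80.0, 30.0, 5.0]) * 1e6     # number densities in m^-3 per bin
q_ext = 2.0                                      # assumed extinction efficiency

beta = np.sum(q_ext * np.pi * r**2 * n)          # extinction coefficient in 1/m
print(f"extinction ≈ {beta * 1e3:.2f} 1/km")
```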
Abstract:
Since its discovery, the top quark has been one of the most investigated subjects in particle physics. The aim of this thesis is the reconstruction of hadronic tops with high transverse momentum (boosted tops) with the Template Overlap Method (TOM). Because of the high energy, the decay products of boosted tops partially or totally overlap and are thus contained in a single large-radius jet (fat-jet). TOM compares the internal energy distribution of the candidate fat-jet to a sample of tops obtained from a MC simulation (templates). The algorithm is based on the definition of an overlap function, which quantifies the level of agreement between the fat-jet and the template, allowing an efficient discrimination of the signal from the background contributions. A working point has been chosen in order to obtain a signal efficiency close to 90% and a corresponding background rejection of about 70%. TOM performance has been tested on MC samples in the muon channel and compared with the previous methods present in the literature. All the methods will be merged in a multivariate analysis to give a global top tagging, which will be included in the ttbar production differential cross section measurement performed on the data acquired in 2012 at sqrt(s) = 8 TeV in a high phase-space region, where new physics processes could appear. Because it is particularly suited to increasing pT, the Template Overlap Method will play a crucial role in the next data taking at sqrt(s) = 13 TeV, where almost all tops will be produced at high energy, making the standard reconstruction methods inefficient.
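A schematic version of the overlap score can be written down in a few lines. The hedged Python sketch below sums Gaussian penalties for the mismatch between each template parton's pT and the fat-jet constituent pT collected in a small cone around it; the cone size, resolution parameter and all kinematic numbers are invented for illustration, and the exact functional, template generation and maximization over the template sample used in the analysis are not reproduced.

```python
# Hedged sketch: a schematic "overlap score" between a fat-jet and one template,
# in the spirit of the Template Overlap Method.  Cone size, resolution and all
# kinematic values are illustrative; this is not the analysis's exact functional.
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def overlap(constituents, template, cone=0.2, sigma_frac=0.33):
    """constituents/template: lists of (pt, eta, phi).  Returns a score in (0, 1]."""
    score = 0.0
    for pt_a, eta_a, phi_a in template:              # loop over template partons
        matched = sum(pt_i for pt_i, eta_i, phi_i in constituents
                      if delta_r(eta_i, phi_i, eta_a, phi_a) < cone)
        sigma = sigma_frac * pt_a                    # assumed energy resolution
        score += (matched - pt_a) ** 2 / (2 * sigma ** 2)
    return np.exp(-score)

# Illustrative inputs: a 3-prong template and a few fat-jet constituents (pt, eta, phi).
template = [(200.0, 0.0, 0.0), (150.0, 0.3, 0.4), (120.0, -0.3, -0.4)]
constituents = [(190.0, 0.02, 0.01), (140.0, 0.28, 0.42),
                (115.0, -0.31, -0.38), (10.0, 0.8, 1.0)]
print(f"overlap ≈ {overlap(constituents, template):.3f}")
```

In the full method this score would be maximized over a large sample of templates, and the peak value used as the discriminating variable.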