Abstract:
Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature will probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA configuration memory required the integration of a flash ISP (In-System Programming) memory and a robust architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
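The multi-gain sampling scheme described above can be sketched as follows; the gain values, the ADC width, and the selection logic are illustrative assumptions, not the LIRA06 design:

```python
# Hypothetical sketch of multi-gain channel selection: sample the same PMT
# pulse at several gain factors and keep the highest-gain channel that is
# not saturated, extending the linear dynamic range.
ADC_MAX = 4095  # 12-bit ADC full scale (assumed value)

def select_channel(samples_by_gain):
    """samples_by_gain: dict {gain: raw ADC code} for the same pulse.
    Return (gain, amplitude referred to unit gain) from the highest
    non-saturated gain, which offers the best resolution."""
    for gain in sorted(samples_by_gain, reverse=True):
        code = samples_by_gain[gain]
        if code < ADC_MAX:          # channel not saturated
            return gain, code / gain
    g = min(samples_by_gain)        # all saturated: fall back to lowest gain
    return g, samples_by_gain[g] / g

# small pulse: the high-gain channel is usable
print(select_channel({1: 40, 4: 160, 16: 640}))      # -> (16, 40.0)
# large pulse: the high-gain channel saturates, fall through to gain 4
print(select_channel({1: 1000, 4: 4000, 16: 4095}))  # -> (4, 1000.0)
```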
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic of the current DAQ board discussed in this thesis. As regards the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. I then worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed an estimate of the resolution of the pixel sensor, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
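The pitch/sqrt(12) expression quoted above is the standard resolution of a binary pixel readout, where the true hit position is uniformly distributed over one pitch. A quick check (the 50 um pitch is an illustrative value, not the APSEL-4D pitch):

```python
import math

def binary_resolution(pitch_um: float) -> float:
    """RMS resolution of a digital (hit/no-hit) pixel of a given pitch:
    the variance of a uniform distribution over the pitch is pitch^2/12."""
    return pitch_um / math.sqrt(12)

# e.g. a 50 um pitch (illustrative value only)
print(round(binary_resolution(50.0), 2))  # -> 14.43 (um)
```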
Abstract:
This doctoral thesis focuses on ground-based measurements of stratospheric nitric acid (HNO3) concentrations obtained by means of the Ground-Based Millimeter-wave Spectrometer (GBMS). Pressure-broadened HNO3 emission spectra are analyzed using a new inversion algorithm developed as part of this thesis work, and the retrieved vertical profiles are extensively compared to satellite-based data. This comparison effort plays a key role in establishing a long-term (1991-2010), global data record of stratospheric HNO3, with an expected impact on studies concerning ozone decline and recovery. The first part of this work is focused on the development of an ad hoc version of the Optimal Estimation Method (Rodgers, 2000) to retrieve HNO3 profiles from spectra observed by the GBMS. I also performed a comparison between HNO3 vertical profiles retrieved with the OEM and those obtained with the old iterative Matrix Inversion method. Results show no significant differences in retrieved profiles and error estimates, with the OEM however providing additional information needed to better characterize the retrievals. A final section of this first part of the work is dedicated to a brief review of the application of the OEM to other trace gases observed by the GBMS, namely O3 and N2O. The second part of this study deals with the validation of HNO3 profiles obtained with the new inversion method. The first step was the validation of GBMS measurements of tropospheric opacity, which is a necessary tool in the calibration of any GBMS spectrum. This was achieved by means of comparisons among correlative measurements of water vapor column content (or Precipitable Water Vapor, PWV) since, in the spectral region observed by the GBMS, the tropospheric opacity is almost entirely due to water vapor absorption.
In particular, I compared GBMS PWV measurements collected during the primary field campaign of the ECOWAR project (Bhawar et al., 2008) with simultaneous PWV observations obtained with Vaisala RS92k radiosondes, a Raman lidar, and an IR Fourier transform spectrometer. I found that GBMS PWV measurements are in good agreement with the other three data sets, exhibiting a mean difference between observations of ~9%. After this initial validation, GBMS HNO3 retrievals were compared to two sets of satellite data produced by the two NASA/JPL Microwave Limb Sounder (MLS) experiments (aboard the Upper Atmosphere Research Satellite (UARS) from 1991 to 1999, and on the Earth Observing System (EOS) Aura mission from 2004 to date). This part of my thesis falls within GOZCARDS (Global Ozone Chemistry and Related Trace gas Data Records for the Stratosphere), a multi-year project aimed at developing a long-term data record of stratospheric constituents relevant to the issues of ozone decline and expected recovery. This data record will be based mainly on satellite-derived measurements, but ground-based observations will be pivotal for assessing offsets between satellite data sets. Since the GBMS has been operated for more than 15 years, its nitric acid data record offers a unique opportunity for cross-calibrating HNO3 measurements from the two MLS experiments. I compare Aura MLS observations with GBMS HNO3 measurements obtained from the Italian Alpine station of Testa Grigia (45.9° N, 7.7° E, elev. 3500 m) during the period February 2004 - March 2007, and from Thule Air Base, Greenland (76.5° N, 68.8° W) during polar winter 2008/09. A similar intercomparison is made between UARS MLS HNO3 measurements and those carried out with the GBMS at South Pole, Antarctica (90° S), during most of 1993 and 1995. I assess systematic differences between GBMS and both UARS and Aura HNO3 data sets at seven potential temperature levels.
Results show that, except for measurements carried out at Thule, ground-based and satellite data sets are consistent within the errors at all potential temperature levels.
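The OEM retrieval step used in the first part can be illustrated, for the purely linear case, with a minimal sketch; the forward model and covariances below are toy assumptions, not the GBMS configuration:

```python
import numpy as np

def oem_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal estimation (Rodgers-style): combine a measurement
    y = K x + noise with an a priori state x_a, weighted by the
    measurement and a priori error covariances S_e and S_a."""
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)  # gain matrix
    x_hat = x_a + G @ (y - K @ x_a)                     # retrieved state
    A = G @ K                                           # averaging kernel
    return x_hat, A

# toy 2-level problem with assumed numbers
K = np.array([[1.0, 0.5], [0.2, 1.0]])   # forward model (assumption)
x_true = np.array([2.0, 1.0])
y = K @ x_true                           # noise-free synthetic measurement
x_a = np.zeros(2)                        # a priori state
S_a = np.eye(2) * 10.0                   # weak prior
S_e = np.eye(2) * 1e-6                   # very precise measurement
x_hat, A = oem_retrieval(y, K, x_a, S_a, S_e)
print(np.round(x_hat, 3))                # close to [2., 1.]
```

With a precise measurement and a weak prior the averaging kernel A approaches the identity, which is exactly the extra diagnostic information the OEM provides over a plain matrix inversion.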
Abstract:
CONCLUSIONS The focus of this work was the investigation of anomalies in Tg and dynamics at polymer surfaces. The thermally induced decay of hot-embossed polymer gratings is studied using laser diffraction and atomic force microscopy (AFM). Monodisperse PMMA and PS are selected in the Mw ranges of 4.2 to 65.0 kg/mol and 3.47 to 65.0 kg/mol, respectively. Two different modes of measurement were used: one mode uses temperature ramps to obtain an estimate of the near-surface glass temperature, Tdec,0; the other mode investigates the dynamics at a constant temperature above Tg. The temperature-ramp experiments reveal Tdec,0 values very close to the Tg,bulk values, as determined by differential scanning calorimetry (DSC). The PMMA of 65.0 kg/mol shows a decreased value of Tg, while the PS samples of 3.47 and 10.3 kg/mol (Mw
Abstract:
Traditional procedures for rainfall-runoff model calibration are generally based on fitting the individual values of simulated and observed hydrographs. Here, an alternative approach is used, in which a set of statistics of the river flow is matched in the optimisation process. Such an approach has the significant additional advantage of also allowing a straightforward regional calibration of the model parameters, based on the regionalisation of the selected statistics. The minimisation of the set of objective functions is carried out using the AMALGAM algorithm, leading to the identification of behavioural parameter sets. The procedure is applied to a set of river basins located in central Italy: the basins are treated alternately as gauged and ungauged and, as a term of comparison, the results obtained with a traditional time-domain calibration are also presented. The results show that a suitable choice of the statistics to be optimised leads to interesting results in real-world case studies as far as the reproduction of the different flow regimes is concerned.
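The statistics-matching idea can be sketched as follows; the particular statistics and the relative-error objectives are illustrative assumptions, not the set selected in the study:

```python
import numpy as np

def flow_statistics(q):
    """A few signature statistics of a flow series (illustrative choice;
    in a regional setting these would come from regionalised relations)."""
    return np.array([q.mean(),                # mean flow
                     q.std() / q.mean(),      # coefficient of variation
                     np.quantile(q, 0.05),    # low-flow index
                     np.quantile(q, 0.95)])   # high-flow index

def objectives(q_sim, q_obs_stats):
    """Vector of objective functions: relative error on each statistic,
    to be minimised jointly (e.g. by a multi-objective algorithm such
    as AMALGAM)."""
    s = flow_statistics(q_sim)
    return np.abs(s - q_obs_stats) / np.abs(q_obs_stats)

rng = np.random.default_rng(0)
q_obs = rng.lognormal(2.0, 0.8, 1000)        # synthetic "observed" flows
target = flow_statistics(q_obs)
print(objectives(q_obs, target))             # perfect match -> all zeros
```

Because only the target statistics enter the objectives, the same machinery works at ungauged sites once the statistics are regionalised, which is the advantage noted in the abstract.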
Abstract:
The aim of this thesis was to investigate novel techniques to create complex hierarchical chemical patterns on silica surfaces with micro- to nanometer-sized features. These surfaces were used for the site-selective assembly of colloidal particles and oligonucleotides. To do so, functionalised alkoxysilanes (commercial and synthesised ones) were deposited onto planar silica surfaces. The functional groups can form reversible attractive interactions with the complementary surface layers of the opposing objects that need to be assembled. These interactions determine the final location and density of the objects on the surface. Photolithographically patterned silica surfaces were modified with commercial silanes in order to create hydrophilic and hydrophobic regions on the surface. The assembly of hydrophobic silica particles onto these surfaces was investigated, and finally pH and charge effects on the colloidal assembly were analysed. In the second part of this thesis the concept of novel, "smart" alkoxysilanes is introduced, which allows parallel surface activation and patterning in a one-step irradiation process. These novel species bear a photoreactive head group in a protected form. Surface layers made from these molecules can be irradiated through a mask to remove the protecting group from selected regions and thus generate lateral chemical patterns of active and inert regions on the substrate. The synthesis of an azide-reactive alkoxysilane was successfully accomplished. Silanisation conditions were carefully optimised so as to guarantee a smooth surface layer without the formation of micellar clusters. NMR and DLS experiments corroborated the absence of clusters when neither water nor NaOH was used as a catalyst during hydrolysis, but only the organic solvent itself. Upon irradiation of the azide layer, the resulting nitrene may undergo a variety of reactions depending on the irradiation conditions.
Contact angle measurements demonstrated that the irradiated surfaces were more hydrophilic than the non-irradiated azide layer, and therefore the formation of an amine upon irradiation was postulated. Successful photoactivation could be demonstrated using condensation patterns, which showed a change in wettability on the wafer surface upon irradiation. Colloidal deposition with COOH-functionalised particles further underlined the formation of more hydrophilic species. Orthogonal photoreactive silanes are described in the third part of this thesis. The advantage of orthogonal photosensitive silanes is the possibility of having a coexistence of chemical functionalities homogeneously distributed in the same layer, achieved by using appropriate protecting groups. For this purpose, a 3',5'-dimethoxybenzoin protected carboxylic acid silane was successfully synthesised, and the kinetics of its hydrolysis and condensation in solution were analysed in order to optimise the silanisation conditions. This compound was used together with a nitroveratryl protected amino silane to obtain bicomponent surface layers. The optimum conditions for an orthogonal deprotection of surfaces modified with these two groups were determined. A two-step deprotection process through a mask generated a complex pattern on the substrate by activating two different chemistries at different sites. This was demonstrated by colloidal adsorption and fluorescence labelling of the resulting substrates. Moreover, two different single-stranded oligodeoxynucleotides were immobilised onto the two different activated areas and then captured by hybridisation with their respective complementary, fluorescently labelled strands. Selective hybridisation could be shown, making this technique attractive for possible DNA microarrays, although non-selective adsorption issues still need to be resolved.
Abstract:
The "sustainability" concept relates to prolonging human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship, and practices that conserve resources in a manner that allows growth and development to be sustained for the long term without degrading the environment, are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe, as well as in the U.S., are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post-construction properties of traditional HMA. Lowering the production temperature reduces fuel usage and emissions, improves conditions for workers, and supports sustainable development. Likewise, the crumb-rubber modifier (CRM), obtained from shredded automobile tires and used in the United States since the mid-1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is relevant not only from an environmental standpoint but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project aims to demonstrate the dual value of these asphalt mixes with regard to environmental and mechanical performance, and to suggest a low-environmental-impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design, but it cannot be the only step.
The eco-compatible approach should also be extended to the design method and material characterization, because only with these phases is it possible to exploit the maximum potential of the materials used. Appropriate asphalt concrete characterization is essential for realistic performance prediction of asphalt concrete pavements. Volumetric (mix design) and mechanical (permanent deformation and fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to use the material correctly. A Mechanistic-Empirical design method, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under different traffic and environmental conditions, was the application of choice. In particular, this study focuses on CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain, related to surface cracking and rutting respectively. It works in increments of time and, using the output from one increment recursively as input to the next, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation under defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as a surface layer of 60 mm thickness. The performance of the pavement was compared to that of the same pavement structure with different kinds of asphalt concrete as the surface layer. Three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed in comparison to a conventional asphalt concrete.
The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced, the low-environmental-impact materials such as Warm Mix Asphalt and Rubberized Asphalt Concrete are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the different design approaches it provides, focusing in particular on the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
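The incremental-recursive idea, in which the damaged state at the end of one time increment feeds the next, can be sketched schematically; the damage law and all constants below are toy assumptions, not CalME's calibrated models:

```python
# Schematic incremental-recursive (I-R) loop in the spirit of CalME:
# each increment computes the response from the current layer modulus,
# accumulates damage, and recursively reduces the modulus for the next
# increment. Units and the damage law are illustrative only.

def ir_simulation(E0, increments, load, k=1e-8):
    """E0: undamaged surface-layer modulus; load: traffic load proxy
    per increment; k: assumed damage-rate constant."""
    E, damage, history = E0, 0.0, []
    for _ in range(increments):
        strain = load / E                       # softer layer -> higher strain
        damage = min(0.99, damage + k * strain * load)
        E = E0 * (1.0 - damage)                 # recursive modulus update
        history.append(E)
    return history

h = ir_simulation(E0=6000.0, increments=5, load=1e5)
print([round(E) for E in h])   # modulus degrades increment by increment
```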
Abstract:
The light-harvesting antenna of PSI (LHCI) is less well characterized with respect to its protein and pigment composition than that of PSII. In this work, the isolation of native LHCI subcomplexes was therefore first optimized and their pigment composition investigated. In addition, pigment binding was analyzed and the pigment/protein ratio determined. The protein composition of LHCI was analyzed by a combination of one- or two-dimensional gel electrophoresis, western blot analysis with Lhca-protein-specific antibodies, and mass spectrometry. This revealed that LHCI contains more proteins or protein isoforms than previously assumed. Mass spectrometry led to the identification of two previously undetected Lhca proteins: an isoform of Lhca4 and an additional Lhca protein, the tomato homolog of Lhca5 from Arabidopsis thaliana. Furthermore, isoforms of Lhca proteins with different electrophoretic behavior were observed in 1D gels. In 2D gels, a large number of isoforms with different isoelectric points additionally appeared. It can be assumed that at least some of these isoforms are of physiological origin, caused e.g. by differential processing or post-translational modifications, although the multitude of spots in 2D gels is probably rather due to sample preparation. In vitro reconstitution followed by biochemical analyses and fluorescence measurements demonstrated that Lhca5 is a functional LHC with specific pigment-binding properties. In addition, in vitro dimerization experiments showed an interaction between Lhca1 and Lhca5, supporting its assignment to the PSI antenna. In vitro dimerization experiments with Lhca2 and Lhca3, by contrast, did not lead to the formation of dimers.
This shows that the interaction in potential homo- or heterodimers of Lhca2 and/or Lhca3 is weaker than that between Lhca1 and Lhca4 or Lhca5. The observed protein heterogeneity indicates that the PSI antenna has a more complex composition than previously assumed. Two models are proposed for the integration of "new" LHCs into the PSI-LHCI holocomplex: assuming a fixed number of LHCI monomers, integration can be achieved by exchanging individual LHC monomers; in a second scenario, the binding of additional LHCs is conceivable, interacting with PSI either indirectly via already bound LHCs or directly via PSI core subunits. With respect to the pigment binding of the native LHCI subfractions, it was shown that they bind pigments in a specific stoichiometry and number, and that they differ from LHCIIb mainly by increased binding of chlorophyll a, a smaller number of carotenoids, and the binding of β-carotene instead of neoxanthin. Comparison of native LHCI with reconstituted Lhca proteins revealed that Lhca proteins bind pigments in a specific stoichiometry and possess carotenoid binding sites with flexible binding properties. Little was also known about the conversion, in the xanthophyll cycle, of the violaxanthin (Vio) bound to the individual Lhca proteins. Therefore, using an in vitro de-epoxidation system, both native and reconstituted LHCI were examined with respect to their de-epoxidation properties, and the de-epoxidation state of pigment-protein complexes de-epoxidized in vivo was determined. From the de-epoxidation experiments it could be deduced that different carotenoid binding sites are occupied in the different Lhca proteins. These experiments also confirmed that the xanthophyll cycle operates in LHCI as well, although a lower de-epoxidation state is reached than in LHCII.
In vitro de-epoxidation experiments traced this to the lower de-epoxidizability of the Vio bound by Lhca1 and Lhca2. Vio therefore appears to play a rather structural role in these Lhca proteins. A photoprotective function of zeaxanthin in PSI would consequently be restricted to Lhca3 and Lhca4. Each LHCI subfraction thus contains one LHC monomer with long-wavelength fluorescence that may be involved in photoprotection. Overall, the studies of pigment binding, de-epoxidation, and fluorescence properties showed that the various Lhca proteins differ in one or more of these parameters. This suggests that even slight changes in the protein composition of LHCI can achieve adaptation to different light conditions.
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with preliminary in-silico studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed: to enlarge the convergence basin around the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
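The sequential alignment of the 6 degrees of freedom in order of sensitivity amounts to a coordinate-wise refinement of the pose. A generic sketch with a toy quadratic cost (the metric and the ordering below are assumptions, not the thesis' image-similarity measure):

```python
# Coordinate-wise pose refinement: adjust one degree of freedom at a
# time, most sensitive first, re-evaluating the cost after each move.

def refine_pose(cost, pose, order, step=0.1, iters=50):
    """Minimise `cost` over a 6-DOF pose by greedy single-parameter
    moves of size `step`, visiting parameters in `order`."""
    pose = list(pose)
    for _ in range(iters):
        for i in order:
            for delta in (+step, -step):
                trial = pose[:]
                trial[i] += delta
                if cost(trial) < cost(pose):
                    pose = trial
    return pose

# toy cost: quadratic bowl around a "true" pose (3 translations, 3 rotations)
target = [1.0, -0.5, 0.3, 0.0, 0.2, -0.1]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
est = refine_pose(cost, [0.0] * 6, order=[0, 1, 2, 3, 4, 5])
print(round(cost(est), 6))   # near zero: pose recovered
```

A real 2D/3D registration cost is non-convex, which is exactly why the thesis pairs this local scheme with feature-based initialization or a global memetic search.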
Abstract:
The main goal of the present work was the development of an experimental setup for the electrochemical deposition of transactinides with subsequent detection. For this purpose, experiments were carried out with the homologs of these elements. The electrodeposition of tracer amounts on foreign electrodes leads to an electrode coverage of less than one monolayer. The required deposition potentials are often more positive than expected from the Nernst equation; this phenomenon is called underpotential deposition. In numerous radiotracer experiments, the deposition yield was determined as a function of the electrode potential, varying the ion to be deposited, the electrode material, and the electrolyte. Critical potentials at which appreciable deposition just began were determined, as well as the potentials at which 50% of the atoms present in solution are deposited. These values were compared with theoretically predicted potentials and with values from the literature. The deposition of Pb, as a homolog of element 114, worked very well on palladium electrodes or palladium-plated nickel electrodes using 0.1 M HCl as the electrolyte. To characterize the underpotential deposition, cyclic voltammetry was employed in addition to the radiotracer method. Here, too, the deposition of the first monolayer on the electrode frequently takes place at more positive potentials than that of the bulk. The values obtained with the two methods were compared with each other. The electrodeposition of short-lived isotopes must proceed very quickly. It could be shown that a high temperature, and the correspondingly low viscosity of the electrolyte, accelerates the deposition. Good stirring of the solution is also important, in order to achieve a small Nernst diffusion layer thickness. The ratio of electrode area to electrolyte volume must be as large as possible.
Based on these results, an electrolysis cell optimized for fast electrolysis was developed. Using this cell, the deposition rates for various ion and electrode combinations were measured. Experiments on coupling a gas jet to the electrolysis cell were carried out, working with fission products produced at a reactor, with Pb isotopes from an emanating source, and with isotopes produced at an accelerator. With the knowledge gained there, an experimental setup for the continuous deposition and detection of short-lived isotopes was realized. At the accelerator, short-lived Hg and Pb isotopes, among others, were produced and transported with a gas jet from the target chamber to the ALOHA system. There they were transferred into the aqueous phase in a quasi-continuous process and transported to an electrolysis cell, in which electrodeposition took place onto a band-shaped electrode of nickel or palladium-plated nickel. After deposition, the band was drawn to a detector array, where the α decay of the neutron-deficient isotopes was registered. Characteristic quantities such as the deposition rate and the overall yield of the electrolysis were determined. The system was tested in continuous operation. It could be shown that the chosen setup is in principle suitable for the deposition of short-lived isotopes produced at an accelerator. This establishes an important prerequisite for the future use of the method to study the chemical properties of the superheavy elements.
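The underpotential shift discussed above is measured against the Nernst prediction; for tracer-level activities the Nernst equation already pushes the expected deposition potential strongly negative (the standard potential is the textbook Pb2+/Pb value, and the activity is an illustrative tracer-scale number, not a value from this work):

```python
import math

R, F = 8.314, 96485.0  # gas constant J/(mol K), Faraday constant C/mol

def nernst_potential(E0, z, activity, T=298.15):
    """Nernst equation for M^z+ + z e- -> M: E = E0 + (RT/zF) ln(a).
    Underpotential deposition means deposition is observed at
    potentials *positive* of this prediction."""
    return E0 + (R * T) / (z * F) * math.log(activity)

# Pb2+/Pb, standard potential -0.126 V vs SHE, tracer activity 1e-10
E = nernst_potential(-0.126, 2, 1e-10)
print(round(E, 3), "V")  # about -0.422 V
```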
Abstract:
This thesis presents the measurement of the neutrino velocity with the OPERA experiment in the CNGS beam, a muon neutrino beam produced at CERN. The OPERA detector observes muon neutrinos 730 km away from the source. Previous measurements of the neutrino velocity have been performed by other experiments. Since the OPERA experiment aims at the direct observation of muon neutrino oscillations into tau neutrinos, a higher-energy beam is employed. This characteristic, together with the higher number of interactions in the detector, allows for a measurement with a much smaller statistical uncertainty. Moreover, a much more sophisticated timing system (composed of cesium clocks and GPS receivers operating in "common view" mode) and a fast waveform digitizer (installed at CERN and able to measure the internal time structure of the proton pulses used for the CNGS beam) allow for a new measurement with a smaller systematic error. Theoretical models of Lorentz-violating effects can be investigated by neutrino velocity measurements with terrestrial beams. The analysis was carried out with a blind method in order to guarantee the internal consistency and the soundness of each calibration measurement. The resulting measurement is the most precise one performed with a terrestrial neutrino beam: the statistical accuracy achieved by the OPERA measurement is about 10 ns, and the systematic error is about 20 ns.
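As a quick order-of-magnitude check on the numbers above, the light time of flight over the 730 km baseline sets the scale against which the ~10 ns accuracy should be read (the exact geodetic distance is not quoted here):

```python
# Time of flight at the speed of light over the nominal CERN-LNGS baseline.
c = 299_792_458.0        # speed of light, m/s
baseline_m = 730e3       # ~730 km nominal source-detector distance
tof_s = baseline_m / c

print(round(tof_s * 1e3, 3), "ms")   # expected TOF, about 2.435 ms
# a 10 ns accuracy on a ~2.4 ms flight probes (v - c)/c at the few-1e-6 level
print(f"{10e-9 / tof_s:.1e}")
```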
Resumo:
In the present work, high-quality PMMA opals with different sphere sizes, silica opals made from large spheres, multilayer opals, and inverse opals were fabricated. Highly monodisperse PMMA spheres were synthesized by surfactant-free emulsion polymerization (polydispersity ~2%). Large-area, well-ordered PMMA crystalline films of homogeneous thickness were produced by the vertical deposition method using a drawing device. Optical experiments confirmed the high quality of these PMMA photonic crystals; for example, well-resolved high-energy bands were observed in the transmission and reflectance spectra of the opaline films. For the fabrication of high-quality opaline photonic crystals from large silica spheres (890 nm diameter) self-assembled on patterned Si substrates, a novel technique was developed in which crystallization was performed using a drawing apparatus combined with stirring. The achievements comprise spatial selectivity of opal crystallization without special treatment of the wafer surface, an opal lattice that matches the pattern precisely in both width and depth, an absence of cracks within the size of the trenches, and a good three-dimensional order of the opal lattice even in trenches with a complex confined geometry. Multilayer opals made of opaline films with different sphere sizes or different materials were produced by a sequential crystallization procedure. Transmission studies of a triple-layer hetero-opal revealed that its optical properties cannot be described simply as the linear superposition of two independent photonic band gaps; a remarkable interface effect is the narrowing of the transmission minima.
Large-area, high-quality, and robust photonic opal replicas made from silicate-based inorganic-organic hybrid polymers (ORMOCER®s) were prepared by a template-directed method, in which a high-quality PMMA opal template was infiltrated with a neat inorganic-organic ORMOCER® oligomer that can be photopolymerized within the opaline voids, leading to a fully developed replica structure with a filling factor of nearly 100%. The opal replica is structurally homogeneous as well as thermally and mechanically stable, and the large-scale (cm²-sized) replica films can be handled easily as free films with a pair of tweezers.
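For opals like these, the spectral position of the first stop band is commonly estimated with a modified Bragg law for the fcc (111) planes. A sketch under textbook assumptions (close-packed filling fraction 0.74, volume-averaged effective dielectric constant; this relation and the refractive-index value are standard optics, not numbers taken from the thesis):

```python
import math

def stopband_wavelength_nm(sphere_d_nm, n_sphere, n_void=1.0, fill=0.74):
    """First-order stop band at normal incidence on the fcc (111) planes:
    lambda = 2 * d111 * n_eff, with d111 = D * sqrt(2/3) and an effective
    index from volume-averaged dielectric constants."""
    d111 = sphere_d_nm * math.sqrt(2.0 / 3.0)
    n_eff = math.sqrt(fill * n_sphere**2 + (1.0 - fill) * n_void**2)
    return 2.0 * d111 * n_eff

# 890 nm silica spheres (n ~ 1.45 assumed), air-filled voids:
# the stop band falls in the near infrared
lam = stopband_wavelength_nm(890.0, 1.45)
```

This also shows why large spheres such as the 890 nm silica ones push the photonic band gap toward telecom-relevant near-infrared wavelengths.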
Resumo:
The work of the present thesis is focused on the implementation of microelectronic voltage-sensing devices for transmitting and extracting analog information between devices of different nature, at short distances or upon contact. Initially, chip-to-chip communication has been studied, and circuitry for 3D capacitive coupling has been implemented. Such circuits allow communication between dies fabricated in different technologies. Because of their novelty, they are not standardized and are currently not supported by standard CAD tools. To overcome this limitation, a novel approach for the characterization of such communication links has been proposed, resulting in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card has been extensively studied as well. Wafer probing is today a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling. For this reason, wireless wafer probing has been investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems have also been investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) have been implemented for the first time in a 0.35 um process. The number of wires has been minimized by sharing the analog outputs and the supply on a single wire, yielding electrodes that require only four wires for their operation. Minimizing the wiring reduces the cable weight and thus limits the patient's discomfort. The physical channel for communication between an IC and a biological medium is the electrode itself.
As this is a crucial point for biopotential acquisition, considerable effort has been devoted to investigating different electrode technologies and geometries, and an electromagnetic model is presented to characterize the properties of the electrode-skin interface.
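A common first-order equivalent circuit for the electrode-skin interface is a series resistance (gel and tissue) feeding a parallel RC pair that models the double layer. A minimal sketch of its frequency response, with purely hypothetical component values (the thesis's electromagnetic model is more detailed than this):

```python
import math

def electrode_skin_impedance(f_hz, r_series=1e3, r_parallel=100e3, c_parallel=30e-9):
    """Complex impedance of a series-R + parallel-RC electrode-skin model.
    Component values are hypothetical, for illustration only."""
    w = 2.0 * math.pi * f_hz
    z_rc = r_parallel / (1.0 + 1j * w * r_parallel * c_parallel)
    return r_series + z_rc

z_low = electrode_skin_impedance(1.0)    # near DC: ~ r_series + r_parallel
z_high = electrode_skin_impedance(1e6)   # high f: capacitor shorts, ~ r_series
```

The large low-frequency impedance is what makes active (buffered) electrodes attractive for EEG, while the capacitive roll-off matters for the higher-frequency EIT currents.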
Resumo:
Computer simulations have become an important tool in physics; solid-state systems in particular have been investigated extensively with modern computational methods. This thesis focuses on the simulation of hydrogen-bonded systems, using quantum chemical methods combined with molecular dynamics (MD) simulations. MD simulations are carried out to investigate the energetics and structure of a system under conditions that include physical parameters such as temperature and pressure. Ab initio quantum chemical methods have proven capable of predicting spectroscopic quantities. Combining these two features still represents a methodological challenge. Furthermore, conventional MD simulations treat the nuclei as classical particles, whereas not only motional effects but also the quantum nature of the nuclei are expected to influence the properties of a molecular system. This work aims at a more realistic description of properties that are accessible via NMR experiments. Using the path-integral formalism, the quantum nature of the nuclei has been incorporated and its influence on the NMR parameters explored. The effect on both the NMR chemical shift and the nuclear quadrupole coupling constants (NQCC) is presented for intra- and intermolecular hydrogen bonds. The second part of this thesis presents the computation of electric field gradients within the Gaussian and Augmented Plane Waves (GAPW) framework, which allows all-electron calculations in periodic systems. This recent development improves the accuracy of many calculations compared with the pseudopotential approximation, which treats the core electrons as part of an effective potential. In combination with MD simulations of water, the NMR longitudinal relaxation times for 17O and 2H have been obtained; the results show good agreement with experiment.
Finally, an implementation of the stress-tensor calculation in the quantum chemical program suite CP2K is presented. This enables MD simulations under constant-pressure conditions, which is demonstrated with a series of liquid-water simulations that shed light on the influence of the exchange-correlation functional on the density of the simulated liquid.
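The quantity at the heart of constant-pressure MD is the stress tensor, whose configurational part for pair interactions is the virial sum sigma = -(1/V) * sum over pairs of r_ij (outer) F_ij (kinetic term omitted here). A minimal classical sketch for a Lennard-Jones pair, illustrating the bookkeeping only (not the CP2K quantum-mechanical implementation):

```python
import numpy as np

def lj_force(r_vec, eps=1.0, sigma=1.0):
    """Lennard-Jones force on particle i from particle j (r_vec = r_i - r_j)."""
    r2 = np.dot(r_vec, r_vec)
    sr6 = (sigma**2 / r2) ** 3
    return 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * r_vec

def virial_stress(pairs, volume):
    """Configurational stress: sigma = -(1/V) * sum r_ij (outer) F_ij.
    With this sign convention, pressure is P = -(1/3) tr(sigma),
    positive for repulsive pair configurations."""
    s = np.zeros((3, 3))
    for r_ij, f_ij in pairs:
        s -= np.outer(r_ij, f_ij)
    return s / volume

# Two particles at the LJ minimum separation 2**(1/6): force (and stress) vanish
f_min = lj_force(np.array([2.0 ** (1.0 / 6.0), 0.0, 0.0]))
# A compressed pair at r = 1.0 produces a repulsive (positive-pressure) stress
r = np.array([1.0, 0.0, 0.0])
s = virial_stress([(r, lj_force(r))], volume=10.0)
```

A barostat then adjusts the cell volume until the trace of this tensor matches the target pressure, which is what makes the constant-pressure water simulations above possible.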
Resumo:
Assuming spatial invariance of the parameters of a rainfall-runoff model can prove a practical and valid solution when the goal is to estimate the water-resource availability of an area. Hydrological simulation is indeed a widely used tool, but it has some critical aspects, related above all to the need to calibrate the model parameters. If spatially distributed models are adopted, which are useful because they can account for the spatial variability of the processes contributing to runoff formation, the problem usually lies in the large number of parameters involved. By assuming that some of these are homogeneous in space, i.e. that they take the same value over the different catchments, the total number of parameters requiring calibration can be reduced. This assumption is verified on a statistical basis by estimating the parameter uncertainty with an MCMC algorithm. The parameter distributions turn out to be compatible, to varying degrees, across the catchments considered. When the goal is then to estimate the water-resource availability of ungauged catchments, the parameter-invariance hypothesis becomes even more important; this problem is usually tackled through lengthy parameter-regionalization analyses. Here, instead, a cross-calibration procedure is proposed that exploits the information coming from the gauged catchments most similar to the site of interest. The aim is to reach a fair compromise between the drawback of assuming the model parameters constant over the gauged catchments and the benefit of introducing, step by step, new and important information from the gauged catchments involved in the analysis.
The results demonstrate the usefulness of the proposed methodology: in validation on the catchment treated as ungauged, good agreement can be achieved between the simulated and observed discharge series.
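The parameter-uncertainty step can be illustrated with a minimal Metropolis sampler for a one-parameter toy runoff model. Everything below (the linear model, the noise level, the data) is synthetic and purely illustrative, not the thesis's actual model or MCMC setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed discharge" from a hypothetical one-parameter model q = k * rain
rain = rng.uniform(0.0, 10.0, size=200)
k_true, noise = 0.6, 0.5
q_obs = k_true * rain + rng.normal(0.0, noise, size=200)

def log_like(k):
    # Gaussian log-likelihood of the observed series given parameter k
    resid = q_obs - k * rain
    return -0.5 * np.sum((resid / noise) ** 2)

# Metropolis random walk over the single parameter
k = 1.0
ll = log_like(k)
samples = []
for _ in range(5000):
    k_prop = k + rng.normal(0.0, 0.05)        # symmetric proposal
    ll_prop = log_like(k_prop)
    if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject step
        k, ll = k_prop, ll_prop
    samples.append(k)

k_post = np.array(samples[1000:])             # discard burn-in
```

The posterior samples `k_post` play the role of the parameter distributions whose compatibility across catchments is checked in the analysis above: overlapping posteriors support the invariance assumption, disjoint ones contradict it.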