Abstract:
Modern elementary particle physics is based on quantum field theories. Our current understanding is that both the smallest structures of matter and the composition of the universe are described by quantum field theories, which account for the observable phenomena by representing particles as excitations of the fields. The Standard Model of particle physics is a quantum field theory describing the electromagnetic, weak, and strong interactions in terms of a gauge field theory. However, the Standard Model is believed to describe physics properly only up to a certain energy scale. This scale cannot be much larger than the so-called electroweak scale, i.e., the masses of the gauge bosons W^± and Z^0. Beyond this scale, the Standard Model has to be modified. In this dissertation, supersymmetric theories are used to tackle the problems of the Standard Model. For example, the quadratic divergences that plague the Higgs boson mass in the Standard Model cancel in supersymmetric theories. Experimental facts concerning the neutrino sector indicate that lepton number is violated in Nature. On the other hand, lepton-number-violating Majorana neutrino masses can induce sneutrino-antisneutrino oscillations in any supersymmetric model. In this dissertation, I present viable signals for detecting sneutrino-antisneutrino oscillation at colliders. At an e-gamma collider (an option at the International Linear Collider), the number of electron-sneutrino-antisneutrino oscillation signal events is high and the backgrounds are small. A similar study for the LHC shows that, even though there are several backgrounds, the sneutrino-antisneutrino oscillations can be detected. A useful asymmetry observable is introduced and studied. Usually, the oscillation probability formula for sneutrinos produced at rest is used; here, we study a general oscillation probability. The Lorentz factor and the distance at which the measurement is made inside the detector can have effects, especially when the sneutrino decay width is very small. These effects are demonstrated for a particular scenario at the LHC.
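For orientation, the rest-frame oscillation formula referred to above, and its time-integrated form, can be written in standard two-state notation; this is a textbook sketch (the symbols Δm for the sneutrino mass splitting, Γ for the decay width, and x = Δm/Γ are conventional and need not match the dissertation's notation):

```latex
% Oscillation probability as a function of proper time \tau, and its
% time-integrated form (standard two-state conventions, assumed here):
P_{\tilde\nu\to\bar{\tilde\nu}}(\tau)
  = e^{-\Gamma\tau}\,\sin^2\!\left(\tfrac{1}{2}\Delta m\,\tau\right),
\qquad
P_{\mathrm{int}}
  = \int_0^\infty \Gamma\, e^{-\Gamma\tau}\,
    \sin^2\!\left(\tfrac{1}{2}\Delta m\,\tau\right) d\tau
  = \frac{x^2}{2\,(1+x^2)},
\qquad x \equiv \frac{\Delta m}{\Gamma}.
```

For a sneutrino produced with Lorentz factor γ, a measurement restricted to a distance L inside the detector samples only proper times τ ≤ L/(γβc), so for a very small decay width the accessible part of the oscillation pattern, and hence the effective probability, differs from the rest-frame result.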
Abstract:
X-ray synchrotron radiation was used to study the nanostructure of cellulose in Norway spruce stem wood and powders of cobalt nanoparticles in a cellulose support. Furthermore, the growth of metallic clusters was modelled and simulated on the mesoscopic size scale. Norway spruce was characterized with x-ray microanalysis at beamline ID18F of the European Synchrotron Radiation Facility in Grenoble. The average dimensions and the orientation of cellulose crystallites were determined using x-ray microdiffraction. In addition, the nutrient element content was determined using x-ray fluorescence spectroscopy. Diffraction patterns and fluorescence spectra were acquired simultaneously. Cobalt nanoparticles in the cellulose support were characterized with x-ray absorption spectroscopy at beamline X1 of the Deutsches Elektronen-Synchrotron in Hamburg, complemented by home-laboratory experiments including x-ray diffraction, electron microscopy, and measurement of magnetic properties with a vibrating sample magnetometer. Extended x-ray absorption fine structure (EXAFS) spectroscopy and x-ray diffraction were used to solve the atomic arrangement of the cobalt nanoparticles. Scanning and transmission electron microscopy were used to image the surfaces of the cellulose fibrils, where the growth of nanoparticles takes place. The EXAFS experiment was complemented by computational coordination-number calculations on ideal spherical nanocrystals. The growth process of metallic nanoclusters on a cellulose matrix is assumed to be rather complicated, affected not only by the properties of the clusters themselves but depending essentially on the cluster-fiber interfaces as well as the morphology of the fiber surfaces. The final favored average size for nanoclusters, if such exists, is most probably a consequence of two competing tendencies towards size selection, one governed by pore sizes, the other by the cluster properties. In this thesis, a mesoscopic model for the growth of metallic nanoclusters on porous cellulose fiber (or inorganic) surfaces is developed. The first step in the modelling was to evaluate the special case of how the growth proceeds on flat or wedged surfaces.
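The coordination-number calculations on ideal spherical nanocrystals can be set up along the following lines. This is a minimal sketch for an fcc sphere; the lattice constant, cluster radius, and neighbour cutoff are illustrative assumptions, not values taken from the thesis:

```python
import numpy as np

def fcc_sphere(a, radius):
    """Positions of an ideal fcc nanocrystal: all lattice sites
    within `radius` of the origin (same length units as `a`)."""
    n = int(np.ceil(radius / a)) + 1
    basis = np.array([[0, 0, 0], [0.5, 0.5, 0],
                      [0.5, 0, 0.5], [0, 0.5, 0.5]])
    cells = np.arange(-n, n + 1)
    i, j, k = np.meshgrid(cells, cells, cells, indexing="ij")
    corners = np.stack([i, j, k], axis=-1).reshape(-1, 3)
    sites = (corners[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a
    return sites[np.linalg.norm(sites, axis=1) <= radius]

def mean_coordination(pos, cutoff):
    """Average number of neighbours within `cutoff` per atom."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return ((d > 0) & (d < cutoff)).sum(axis=1).mean()

a = 0.354                   # nm, fcc Co lattice constant (illustrative)
atoms = fcc_sphere(a, 1.5)  # ~3 nm diameter cluster
# nearest-neighbour distance in fcc is a/sqrt(2); pad the cutoff slightly
print(len(atoms), mean_coordination(atoms, 1.1 * a / np.sqrt(2)))
```

Because surface atoms have fewer neighbours than the bulk value of 12, the mean coordination number of such an ideal sphere decreases with decreasing radius; comparing these averages with coordination numbers fitted from EXAFS gives an independent estimate of the mean particle size.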
Abstract:
Many of the most significant weather-related damages are associated with floods and drought. In this thesis, changes in the precipitation climate during the present century were assessed over a region covering Europe and the North Atlantic, based on daily precipitation simulations from ten global climate models. Particular attention was paid, on the one hand, to large daily precipitation totals and, on the other, to the occurrence of dry spells. Previous studies of precipitation extremes of this kind had mainly been based on the results of regional climate models. Regional models can simulate small-scale weather phenomena better than global models, and they also give a more detailed picture of the geographical distribution of the changes. On the other hand, the results of regional models depend strongly on the boundary conditions obtained from the global model driving the runs. An advantage of global models is therefore the better opportunity to use the results of several models together, which also gives a better grasp of the uncertainty arising from differences between climate models. According to the model results, the precipitation climate appears to become more extreme as the climate warms. Globally, both heavy precipitation events and dry days thus become more common on average and dry spells lengthen, while the number of days with relatively light precipitation decreases. The changes are not, however, in the same direction everywhere. In Europe, the climate appears to become drier around the Mediterranean and, in summer, also in Central Europe, whereas in Northern Europe precipitation increases, especially in winter. The intensification of precipitation extremes then shows up in that, in regions becoming wetter on average, heavy precipitation becomes more frequent and intense by more than the increase in mean precipitation, while in many regions becoming drier, the very heaviest precipitation events nevertheless also intensify somewhat. Correspondingly, dry days become more frequent and the longest dry spells lengthen in places even in regions where the mean precipitation increases. The interannual variability of precipitation also appears, on the basis of the results, to increase to some extent. Averaged over the model results, the largest annual one-day precipitation total in Northern Europe increases by 17% on average by the end of the century, while the longest dry spell of the year shortens by only 2%. In Central Europe, the largest one-day precipitation totals increase by 15% and the longest dry spells lengthen by 22%; even in Southern Europe the largest one-day precipitation totals increase by 8%, although the longest dry spells lengthen by 35%. The present results are largely consistent with previous studies of changes in the precipitation climate. When comparing the model-simulated precipitation climate with observations, differences familiar from earlier studies are also encountered: climate models produce fewer rainless days than observed, and most models additionally underestimate both the occurrence and the intensity of heavy precipitation.
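The two indices quoted above, the largest annual one-day precipitation and the longest annual dry spell, can be computed from a daily series along the following lines; a minimal sketch, where the 1 mm dry-day threshold is a common convention assumed here rather than a value taken from the thesis:

```python
import numpy as np

def annual_indices(precip_mm, dry_threshold=1.0):
    """precip_mm: daily precipitation totals for one year (mm).
    Returns (largest one-day total, longest dry spell in days)."""
    rx1day = float(np.max(precip_mm))
    longest = current = 0
    for p in precip_mm:
        current = current + 1 if p < dry_threshold else 0
        longest = max(longest, current)
    return rx1day, longest

rng = np.random.default_rng(0)
year = rng.gamma(shape=0.4, scale=6.0, size=365)  # synthetic daily data
print(annual_indices(year))
```

Applying this to every model year and grid point, and averaging over the models, yields multi-model percentage changes of the kind quoted above.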
Abstract:
In meteorology, observations and forecasts of a wide range of phenomena, for example snow, clouds, hail, fog, and tornadoes, can be categorical, that is, they can take only discrete values (e.g., "snow" and "no snow"). Concentrating on satellite-based snow and cloud analyses, this thesis explores methods that have been developed for the evaluation of categorical products and analyses. Different algorithms for satellite products generate different results; sometimes the differences are subtle, sometimes all too visible. In addition to differences between algorithms, the satellite products are influenced by physical processes and conditions, such as diurnal and seasonal variation in solar radiation, topography, and land use. The analysis of satellite-based snow cover analyses from NOAA, NASA, and EUMETSAT, and of snow analyses for numerical weather prediction models from FMI and ECMWF, was complicated by the fact that we did not know the true snow extent and were forced simply to measure the agreement between the different products. Sammon mapping, a multidimensional scaling method, was then used to visualize the differences between the products. The trustworthiness of the results for cloud analyses [the EUMETSAT Meteorological Products Extraction Facility cloud mask (MPEF), together with the Nowcasting Satellite Application Facility (SAFNWC) cloud masks provided by Météo-France (SAFNWC/MSG) and the Swedish Meteorological and Hydrological Institute (SAFNWC/PPS)] compared with ceilometers of the Helsinki Testbed was estimated by constructing confidence intervals (CIs). Bootstrapping, a statistical resampling method, was used to construct the CIs, especially in the presence of spatial and temporal correlation. Reference data for validation are constantly in short supply. In general, the needs of a particular project drive the requirements for evaluation, for example, the accuracy and timeliness of the particular data and methods. In this vein, we discuss tentatively how data provided by the general public, e.g., photos shared on the Internet photo-sharing service Flickr, can be used as a new source for validation. Results show that such data are of reasonable quality, and their use for case studies can be warmly recommended. Last, the use of cluster analysis on meteorological in-situ measurements was explored. The Autoclass algorithm was used to construct compact representations of synoptic conditions of fog at Finnish airports.
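Because ordinary bootstrap resampling assumes independent data, serial correlation is typically handled by resampling contiguous blocks rather than single observations. The following is a minimal sketch of a moving-block bootstrap percentile CI for a mean agreement score; the block length, the score, and the synthetic data are illustrative assumptions:

```python
import numpy as np

def block_bootstrap_ci(x, block_len=24, n_boot=2000, alpha=0.05, seed=1):
    """Percentile CI for the mean of a serially correlated series `x`,
    using a moving-block bootstrap."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = np.arange(n - block_len + 1)   # admissible block starts
    stats = np.empty(n_boot)
    for b in range(n_boot):
        picks = rng.choice(starts, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in picks])[:n]
        stats[b] = sample.mean()
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# e.g. hourly cloud-mask vs. ceilometer agreement (1 = agree), synthetic
agreement = (np.random.default_rng(2).random(1000) < 0.8).astype(float)
print(block_bootstrap_ci(agreement))
```

Choosing the block length longer than the correlation time scale preserves the serial dependence within blocks, which is what makes the resulting CI approximately valid for correlated data.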
Abstract:
In remote-sensing studies, particles with sizes comparable to the wavelength exhibit characteristic features in electromagnetic scattering, especially in the degree of linear polarization. These features vary with the physical properties of the particles, such as shape, size, refractive index, and orientation. In the thesis, the direct problem of computing the unknown scattered quantities from the known properties of the particles and the incident radiation is solved, in a unique way, in both the optical and radar spectral regions. The internal electromagnetic fields of wavelength-scale particles are analyzed using both novel and established methods to show how the internal fields are related to the scattered fields in the far zone. This is achieved using tools and methods developed specifically to reveal the internal field structure of particles and to study the mechanisms that relate this structure to the scattering characteristics of the particles. It is shown that, for spherical particles, the internal field is a combination of a forward-propagating wave, with the apparent wavelength determined by the refractive index of the particle, and a standing-wave pattern, with the apparent wavelength the same as that of the incident wave. Owing to the surface curvature and the dielectric nature of the particle, the incident wave front undergoes a phase shift, and the resulting internal wave is focused mostly in the forward part of the particle, much as in an optical lens. This focusing is also seen for irregular particles. It is concluded that, for both spherical and nonspherical particles, the far-field interference between the partial waves originating from these concentrated areas in the particle interior is responsible for the specific polarization features common to wavelength-scale particles, such as negative values and local extrema in the degree of linear polarization, asymmetry of the phase function, and enhancement of intensity near the backscattering direction. The papers presented in this thesis solve the direct problem for particles with both simple and irregular shapes to demonstrate that these interference mechanisms are common to all dielectric wavelength-scale particles. Furthermore, it is shown that these mechanisms apply both to regolith particles at optical wavelengths and to hydrometeors at microwave frequencies. An advantage of this kind of study is that it does not matter whether the observation is active (e.g., polarimetric radar) or passive (e.g., optical telescope). In both cases, the internal field is computed for two mutually perpendicular incident polarizations, so that the polarization characteristics can be analyzed through the relation between these fields and the scattered far field.
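For reference, the degree of linear polarization discussed above is conventionally defined, for unpolarized incident light, through the elements of the 4x4 Mueller scattering matrix (a textbook convention, not a formula specific to the thesis):

```latex
% Degree of linear polarization for unpolarized incident light;
% S_{12} = S_{21} for randomly oriented particle ensembles.
P(\theta) \;=\; -\,\frac{S_{21}(\theta)}{S_{11}(\theta)}.
```

The negative-polarization branch near backscattering mentioned above corresponds to scattering angles where this quantity dips below zero.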
Abstract:
QCD factorization in the Bjorken limit allows one to separate the long-distance physics from the hard subprocess. At leading twist, only one parton in each hadron is coherent with the hard subprocess. Higher-twist effects increase as one of the active partons carries most of the longitudinal momentum of the hadron, x \to 1. In the Drell-Yan process \pi N \to \mu^-\mu^+ + X, the polarization of the virtual photon is observed to change to longitudinal when the photon carries x_F > 0.6 of the pion's momentum. I define and study the Berger-Brodsky limit of Q^2 \to \infty with Q^2(1-x) fixed. A new kind of factorization holds in the Drell-Yan process in this limit, in which both pion valence quarks are coherent with the hard subprocess, the virtual photon is longitudinal rather than transverse, and the cross section is proportional to a multiparton distribution. Generalized parton distributions contain information on the longitudinal momentum and transverse position densities of partons in a hadron. Transverse charge densities are Fourier transforms of the electromagnetic form factors. I discuss the application of these methods to the QED electron, studying the form factors, charge densities, and spin distributions of the leading-order |e\gamma> Fock state in impact parameter and longitudinal momentum space. I show how the transverse shape of any virtual-photon-induced process, \gamma^*(q) + i \to f, may be measured. Qualitative arguments concerning the size of such transitions have been made previously in the literature, but without a precise analysis. Properly defined, the amplitudes and the cross section in impact parameter space provide information on the transverse shape of the transition process.
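In the standard light-front convention (a frame with q^+ = 0 so that Q^2 = q_\perp^2; written here for a generic form factor F as a textbook statement, not a formula quoted from the thesis), the Fourier-transform relation above reads:

```latex
\rho(b) \;=\; \int \frac{d^2 q_\perp}{(2\pi)^2}\,
  e^{-i\,\mathbf{q}_\perp\cdot\mathbf{b}}\; F\!\left(Q^2 = \mathbf{q}_\perp^2\right)
\;=\; \frac{1}{2\pi}\int_0^\infty dQ\, Q\, J_0(bQ)\, F(Q^2),
```

so that ρ(b) is the charge density at transverse distance b = |b| from the transverse center of momentum.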
Abstract:
In order to evaluate the influence of ambient aerosol particles on cloud formation, climate, and human health, detailed information about the concentration and composition of ambient aerosol particles is needed. The durations of aerosol formation, growth, and removal processes in the atmosphere range from minutes to hours, which highlights the need for high-time-resolution data in order to understand the underlying processes. This thesis focuses on the characterization of ambient levels, size distributions, and sources of water-soluble organic carbon (WSOC) in ambient aerosols. The results show that, at the location of this study, typically 50-60% of the organic carbon in fine particles is water-soluble. The amount of WSOC was observed to increase as aerosols age, likely due to further oxidation of organic compounds. In the boreal region, the main sources of WSOC were biomass burning during the winter and secondary aerosol formation during the summer. WSOC was mainly attributed to a fine-particle mode between 0.1 and 1 μm, although different size distributions were measured for different sources. The WSOC concentrations and size distributions had a clear seasonal variation. Another main focus of this thesis was to test and further develop high-time-resolution methods for the chemical characterization of ambient aerosol particles. The concentrations of the main chemical components (ions, OC, EC) of ambient aerosol particles were measured online during a year-long intensive measurement campaign conducted at the SMEAR III station in Southern Finland. The results were compared with the results of traditional filter collections in order to study the sampling artifacts and limitations related to each method. To achieve a better time resolution for the WSOC and ion measurements, a particle-into-liquid sampler (PILS) was coupled with a total organic carbon analyzer (TOC) and two ion chromatographs (IC). The PILS-TOC-IC provided important data on diurnal variations and short-lived plumes, which cannot be resolved from filter samples. In summary, the measurements made for this thesis provide new information on the concentrations, size distributions, and sources of WSOC in ambient aerosol particles in the boreal region. The analytical and collection methods needed for the online characterization of aerosol chemical composition were further developed in order to provide more reliable high-time-resolution measurements.
Abstract:
A thunderstorm is a dangerous electrical phenomenon in the atmosphere. A thundercloud forms when thermal energy is transported rapidly upwards in convective updraughts. Electrification occurs in collisions of cloud particles in the strong updraught. When the amount of charge in the cloud is large enough, an electrical breakdown, better known as a flash, occurs. Lightning location is nowadays an essential tool for the detection of severe weather. Located flashes indicate in real time the movement of hazardous areas and the intensity of lightning activity. Also, an estimate of the flash peak current can be determined. The observations can be used in damage surveys. The simplest way to represent lightning data is to plot the locations on a map, but the data can also be processed into more complex end-products and exploited in data fusion. Lightning data also serve as an important tool in the research of lightning-related phenomena, such as Transient Luminous Events. Most of the global thunderstorms occur in areas with plenty of heat, moisture, and tropospheric instability, for example in tropical land areas. At higher latitudes, as in Finland, the thunderstorm season is practically restricted to the summer. A particular feature of high-latitude climatology is the large annual variation, which applies to thunderstorms as well. Knowing the performance of any measuring device is important because it affects the accuracy of the end-products. In lightning location systems, the detection efficiency is the ratio between located and actually occurred flashes. Because in practice it is impossible to know the true number of occurred flashes, the detection efficiency has to be estimated with theoretical methods.
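One common way to estimate detection efficiency theoretically is a Monte Carlo simulation over a model of the sensor network. The sketch below only illustrates the idea: the sensor geometry, the 1/d attenuation of the signal, the sensing threshold, and the lognormal peak-current distribution are all invented assumptions, not properties of any real network:

```python
import numpy as np

def detection_efficiency(min_sensors=2, n_flashes=100_000, seed=3):
    """Monte Carlo sketch: fraction of simulated flashes that are
    'located', i.e. sensed by at least `min_sensors` sensors."""
    rng = np.random.default_rng(seed)
    sensors = np.array([[0.0, 0.0], [200.0, 0.0],
                        [0.0, 200.0], [200.0, 200.0]])      # km
    flashes = rng.uniform(0.0, 200.0, size=(n_flashes, 2))  # km
    peak_ka = rng.lognormal(np.log(15.0), 0.7, n_flashes)   # kA
    d = np.linalg.norm(flashes[:, None, :] - sensors[None, :, :], axis=-1)
    signal = peak_ka[:, None] / np.maximum(d, 1.0)  # assumed ~1/d decay
    located = (signal > 0.05).sum(axis=1) >= min_sensors    # toy threshold
    return located.mean()

print(detection_efficiency())
```

Because weak flashes far from the sensors fall below the threshold, the estimated detection efficiency depends on the assumed peak-current distribution, which is why such estimates are necessarily model-based.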
Abstract:
Light scattering, or the scattering and absorption of electromagnetic waves, is an important tool in all remote-sensing observations. In astronomy, the light scattered or absorbed by a distant object can be the only source of information. In Solar-system studies, light-scattering methods are employed when interpreting observations of atmosphereless bodies such as asteroids, atmospheres of planets, and cometary or interplanetary dust. Our Earth is constantly monitored from artificial satellites at different wavelengths. In remote sensing of the Earth, light-scattering methods are not the only source of information: there is always the possibility to make in situ measurements. Satellite-based remote sensing is, however, superior in speed and coverage, provided that the scattered signal can be reliably interpreted. The optical properties of many industrial products play a key role in their quality. Especially for products such as paint and paper, the ability to obscure the background and to reflect light is of utmost importance. High-grade papers are evaluated based on their brightness, opacity, color, and gloss. In product development, there is a need for computer-based simulation methods that could predict the optical properties and, therefore, could be used to optimize the quality while reducing the material costs. With paper, for instance, pilot experiments with an actual paper machine can be very time- and resource-consuming. The light-scattering methods presented in this thesis rigorously solve the interaction of light with material that has wavelength-scale structures. These methods are computationally demanding, so the speed and accuracy of the methods play a key role. Different implementations of the discrete-dipole approximation are compared in the thesis, and the results provide practical guidelines for choosing a suitable code. In addition, a novel method is presented for the numerical computation of the orientation-averaged light-scattering properties of a particle, and the method is compared against existing techniques. Simulation of light scattering for various targets, and the possible problems arising from the finite size of the model target, are discussed in the thesis. Scattering by single particles and small clusters is considered, as well as scattering in particulate media and in continuous media with porosity or surface roughness. Various techniques for modeling the scattering media are presented, and the results are applied to optimizing the structure of paper. The same methods can, however, be applied in light-scattering studies of Solar-system regoliths or cometary dust, or in any remote-sensing problem involving light scattering in random media with wavelength-scale structures.
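For reference, the discrete-dipole approximation mentioned above represents the target as N polarizable points and solves, in its textbook form, the self-consistent linear system:

```latex
\mathbf{P}_i \;=\; \alpha_i \Big( \mathbf{E}^{\mathrm{inc}}_i
  \;-\; \sum_{j \neq i} \mathbf{A}_{ij}\,\mathbf{P}_j \Big),
\qquad i = 1,\dots,N,
```

where α_i is the dipole polarizability, A_ij P_j is the field at site i radiated by dipole j (the free-space dyadic Green-function term), and the scattered far field follows by summing the radiation of all N dipoles. The cost of solving this system for large N is what makes the speed and accuracy comparisons between implementations worthwhile.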
Abstract:
The eddy covariance (EC) flux measurement technique is based on measuring the turbulent motions of air with accurate and fast instruments. For instance, in order to measure the methane flux, a fast methane gas analyser is needed, which measures the methane concentration at least ten times per second, in addition to a sonic anemometer, which measures the three wind components at the same sampling interval. Previously, measuring the methane flux with the EC technique was almost impossible due to the lack of sufficiently fast gas analysers. During the last decade, however, new instruments have been developed, and methane EC-flux measurements have thus become more common. The performance of four methane gas analysers suitable for eddy covariance measurements is assessed in this thesis. The assessment and comparison were performed by analysing EC data obtained during summer 2010 (1 April-26 October) at the Siikaneva fen. The four participating methane gas analysers are the TGA-100A (Campbell Scientific Inc., USA), RMT-200 (Los Gatos Research, USA), G1301-f (Picarro Inc., USA), and Prototype-7700 (LI-COR Biosciences, USA). The RMT-200 functioned most reliably throughout the measurement campaign, and the corresponding methane flux data had the smallest random error. In addition, the methane fluxes calculated from the G1301-f and RMT-200 data agree remarkably well throughout the measurement campaign. The calculated cospectra and power spectra agree well with the corresponding temperature spectra. The Prototype-7700 functioned for only slightly over one month at the beginning of the measurement campaign, and thus its accuracy and long-term performance are difficult to assess.
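At its core, the EC flux is the covariance, over an averaging period, of the fluctuations of the vertical wind speed and of the gas concentration. A minimal sketch with simple block averaging is shown below; real processing also includes despiking, coordinate rotation, lag correction, spectral corrections, and density (WPL) corrections, none of which are applied here, and the 30-minute period and synthetic data are assumptions for illustration:

```python
import numpy as np

def ec_flux(w, c):
    """Eddy covariance flux over one averaging period from vertical
    wind w (m/s) and gas concentration c (e.g. mmol/m^3), sampled
    synchronously (e.g. at 10 Hz). Simple block averaging only."""
    w_prime = w - w.mean()              # turbulent fluctuations
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)   # covariance = turbulent flux

# one 30-min period at 10 Hz -> 18 000 samples (synthetic data)
rng = np.random.default_rng(4)
w = rng.normal(0.0, 0.3, 18_000)
c = 0.05 + 0.02 * w + rng.normal(0.0, 0.01, 18_000)  # correlated tracer
print(ec_flux(w, c))
```

The requirement that the analyser resolve concentration fluctuations at roughly 10 Hz exists precisely so that the product w'c' captures the contribution of the smallest flux-carrying eddies.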
Abstract:
The aim of this thesis was to investigate the occurrence and probability of periods of weather extremes of different lengths (temperature and precipitation), and of combinations of weather extremes, in the present climate of Finland, based on observations (1950-2009) and on the results of a climate model experiment describing the internal natural variability of the climate, the so-called Millennium0001 control run (1200 years). The thesis also examines the occurrence of combinations of weather extremes through the literature and synoptic analyses, and assesses possible changes in their occurrence as the climate warms. First, it was studied how the model data and the observations differ statistically with respect to temperatures and precipitation, after which the quantities produced by the control run were corrected to match the observed mean climate. The key results were derived for extremely cold or hot periods and extremely rainy periods of different lengths. The months studied were January and July. For example, a daily mean temperature of 25.5 °C is exceeded in July, over a grid cell (300 km x 300 km) covering a large part of Southern and Central Finland, with a probability of 0.2% on average, and a daily precipitation total of 29.7 mm likewise with a probability of 0.2%. In January, at the same grid point, a coldest seven-day spell of -30.2 °C and a rainiest seven-day spell of 50.1 mm each recur about once in 500 years. Correlation analyses of monthly mean temperatures and precipitation totals showed that in Finland a dry late spring (April-June), or a dry June alone, favours the occurrence of a warmer-than-average July. The small moisture content of the soil can be regarded as the explanation. The correlations between June precipitation and July mean temperatures varied between -0.26 and -0.36, with a statistical significance level above 99.9%. Climate change alters the occurrence of weather extremes and may increase the probability of combinations of weather extremes, or the vulnerability of society and nature to extremes or their combinations. Extreme events can cause considerable damage to both society and nature, so it is important to be able to prepare for such events in advance in order to minimise the possible losses. More research on this topic would therefore be valuable.
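The recurrence-time statements above amount to empirical exceedance probabilities estimated from the long control run: if an annual extreme exceeds a threshold in a fraction p of the simulated years, its return period is 1/p years. A minimal sketch with synthetic numbers (not the thesis data):

```python
import numpy as np

def return_period_years(annual_values, threshold, tail="high"):
    """Empirical return period (years) of an annual extreme exceeding
    (tail='high') or falling below (tail='low') `threshold`."""
    x = np.asarray(annual_values)
    p = np.mean(x >= threshold) if tail == "high" else np.mean(x <= threshold)
    return np.inf if p == 0 else 1.0 / p

# 1200 years of coldest 7-day January mean temperatures (synthetic, deg C)
rng = np.random.default_rng(5)
cold7 = rng.normal(-22.0, 4.0, 1200)
print(return_period_years(cold7, -30.2, tail="low"))
```

A 1200-year control run is long enough that events with return periods of a few hundred years still occur several times in the series, which is what makes such empirical estimates feasible.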
Abstract:
Thermonuclear fusion is a sustainable energy solution in which energy is produced using processes similar to those in the Sun. In this technology, hydrogen isotopes are fused to release energy and, ultimately, to produce electricity. In a fusion reactor, the hydrogen isotopes are confined by magnetic fields as an ionized gas, the plasma. Since the core plasma is millions of degrees hot, there are special requirements for the plasma-facing materials. Moreover, the fusion of hydrogen isotopes in the plasma produces highly energetic neutrons, which places demanding requirements on the structural materials of the reactor. This thesis investigates the irradiation response of materials to be used in future fusion reactors. Interactions of the plasma with the reactor wall lead to the removal and migration of surface atoms and to the formation of co-deposited layers such as tungsten carbide. The sputtering of tungsten carbide and deuterium trapping in tungsten carbide were investigated in this thesis. As the second topic, the primary interactions of the neutrons in steel, the structural material, were examined; iron-chromium and iron-nickel were used as model materials for steel. This study was performed theoretically by means of atomic-level computer simulations. In contrast to previous studies in the field, in which simulations were limited to pure elements, more complex, multi-elemental materials containing two or more atomic species were used in this work. The results of this thesis are on the microscale. One result is a catalogue of the atomic species removed from tungsten carbide by the plasma; another is the atomic-level distribution of defects in iron-chromium caused by the energetic neutrons. These microscopic results feed into databases for multiscale modelling of fusion reactor materials, which aims to explain the macroscopic degradation of the materials. This thesis is therefore a relevant contribution to the investigation of the connection between microscopic and macroscopic radiation effects, which is one objective of fusion reactor materials research.
Abstract:
Nanoclusters are objects made up of from several to thousands of atoms, forming a transitional state of matter between single atoms and bulk materials. Due to their large surface-to-volume ratio, nanoclusters exhibit exciting and as yet poorly studied size-dependent properties. When clusters are deposited directly on bare metal surfaces, the interaction with the substrate alters the cluster properties, making the cluster less functional or even non-functional. Surfaces modified with self-assembled monolayers (SAMs) were shown to form an interesting alternative platform, because of the possibility to control wettability by decreasing the surface reactivity and to add functionalities to pre-formed nanoclusters. In this thesis, the underlying size effects and the influence of the nanocluster environment are investigated. The emphasis is on the structural and magnetic properties of nanoclusters and their interaction with thiol SAMs. We report, for the first time, ferromagnetic-like spin-glass behaviour of uncapped nanosized Au islands tens of nanometres in size. The flattening kinetics of nanocluster deposition on thiol SAMs are shown to be mediated mainly by the thiol terminal group, as well as by the deposition energy and the particle size distribution. In addition, a new mechanism for the penetration of deposited nanoclusters through the monolayers is presented, which is fundamentally different from those reported for atom deposition on alkanethiols. The impinging cluster is shown to compress the thiol layer against the Au surface and subsequently intercalate at the thiol-Au interface. The compressed thiols then try to straighten and push the cluster away from the surface. Depending on the cluster size, this restoring force may or may not permit the formation of a covalent cluster-surface bond, giving rise to various cluster-surface binding patterns. The compression and straightening of the thiol molecules highlight the elastic nature of the SAMs, which has been investigated in this thesis using nanoindentation. The nanoindentation method has been applied to SAMs with varied tail groups, giving insight into the mechanical properties of thiol-modified metal surfaces.
Abstract:
A better understanding of vacuum arcs is desirable in many of today's 'big science' projects, including linear colliders, fusion devices, and satellite systems. For the Compact Linear Collider (CLIC) design, radio-frequency (RF) breakdowns occurring in the accelerating cavities influence efficiency optimisation and cost reduction. Studying vacuum arcs both theoretically and experimentally under well-defined and reproducible direct-current (DC) conditions is the first step towards understanding RF breakdowns. In this thesis, we have studied Cu DC vacuum arcs with a combination of experiments, a particle-in-cell (PIC) model of the arc plasma, and molecular dynamics (MD) simulations of the subsequent surface damaging mechanism. We have also developed the 2D Arc-PIC code and the physics model incorporated in it, especially for the purpose of modelling plasma initiation in vacuum arcs. Assuming the initial presence of a field emitter at the cathode, we have identified the conditions for plasma formation and studied the transition from the field emission stage to a fully developed arc. The 'footing' of the plasma is the cathode spot, which continuously supplies the arc with particles; the high-density core of the plasma is located above this cathode spot. Our results have shown that once an arc plasma is initiated, and as long as energy is available, the arc is self-maintaining due to the plasma sheath, which ensures enhanced field emission and sputtering. The plasma model can already give an estimate of how the time-to-breakdown changes with the neutral evaporation rate, which is yet to be determined by atomistic simulations. Due to the non-linearity of the problem, we have also performed a code-to-code comparison. The reproducibility of the plasma behaviour and of the time-to-breakdown with independent codes increased confidence in the results presented here. Our MD simulations identified high-flux, high-energy ion bombardment as a possible mechanism forming the early-stage surface damage in vacuum arcs. In this mechanism, sputtering occurs mostly in clusters, as a consequence of overlapping heat spikes. Different-sized experimental and simulated craters were found to be self-similar, with crater depth-to-width ratios of about 0.23 (simulation) and 0.26 (experiment). Experiments we carried out to investigate the energy dependence of the DC breakdown properties point to an intrinsic connection between the DC and RF scaling laws and suggest the possibility of accumulative effects influencing the field enhancement factor.