303 results for "praktisk fysik"


Relevance: 10.00%

Abstract:

Earth's ice shelves are mainly located in Antarctica, where they cover about 44% of the coastline and are a salient feature of the continent. Antarctic ice shelf melting (AISM) removes heat from, and inputs freshwater into, the adjacent Southern Ocean. Although it plays an important role in the global climate, AISM is currently absent from most climate models of the kind used in the IPCC assessments. In this study, AISM is introduced into the global sea ice-ocean climate model ORCA2-LIM, following the approach of Beckmann and Goosse (2003; BG03) for the thermodynamic interaction between the ice shelf and the ocean. This yields the model ORCA2-LIM-ISP (ISP: ice shelf parameterization), which includes not only all the major Antarctic ice shelves but also a number of minor ones. Using these two models, ORCA2-LIM and ORCA2-LIM-ISP, the impacts of adding AISM and of increasing AISM are investigated.

Using the ORCA2-LIM model, numerical experiments are performed to investigate the sensitivity of the polar sea ice cover and of the Antarctic Circumpolar Current (ACC) transport through Drake Passage (DP) to variations in three sea ice parameters: the thickness of newly formed ice in leads (h0), the compressive strength of ice (P*), and the turning angle in the oceanic boundary layer beneath sea ice (θ). It is found that the magnitudes of h0 and P* have little impact on the seasonal sea ice extent but lead to large changes in the seasonal sea ice volume. The turning angle has little impact on the sea ice extent and volume in the Arctic, but ignoring it tends to reduce them in the Antarctic. The magnitude of P* has the least impact on the DP transport, while the other two parameters have much larger influences.

Numerical results from ORCA2-LIM and ORCA2-LIM-ISP are analyzed to investigate how the inclusion of AISM affects the representation of the Southern Ocean hydrography. Comparisons with data from the World Ocean Circulation Experiment (WOCE) show that the addition of AISM significantly improves the simulated hydrography. It not only warms and freshens the originally too cold and too saline Antarctic Bottom Water (AABW), but also warms and increases the salinity of the originally too cold and too fresh Warm Deep Water (WDW). The addition of AISM also improves the simulated stratification. The close agreement between the simulation with AISM and the observations suggests that the applied parameterization is an adequate way to include the effect of AISM in a global sea ice-ocean climate model.

We also investigate the models' capability to represent the sea ice-ocean system in the North Atlantic Ocean and the Arctic. Both models (with and without AISM) successfully reproduce the main features of the sea ice-ocean system. However, both tend to overestimate the ice flux through Nares Strait, produce too low a temperature and salinity in Hudson Bay, Baffin Bay and Davis Strait, and miss the deep convection in the Labrador Sea. These deficiencies are mainly attributed to the artificial enlargement of Nares Strait in the model.

Finally, the impact of increasing AISM on the global sea ice-ocean system is thoroughly investigated, providing a first idea of the changes induced by increasing AISM. The impact is shown to be global and most significant in the Southern Ocean. There, increasing AISM tends to freshen the surface water, to warm the intermediate and deep waters, and to freshen and warm the bottom water. Increasing AISM also changes the mixed layer depth (MLD) at the deep convection sites in the Southern Ocean, deepening it on the Antarctic continental shelf while shoaling it in the ACC region. Furthermore, increasing AISM influences the current system in the Southern Ocean: it tends to weaken the ACC and to strengthen the Antarctic coastal current (ACoC) as well as the Weddell and Ross gyres. In addition to the ocean, increasing AISM has a notable impact on the Antarctic sea ice cover. Owing to the cooling of the seawater, sea ice concentration and thickness generally increase. In austral winter, noticeable increases in sea ice concentration take place mainly near the ice edge. As for sea ice thickness, large increases are found mainly along the coasts of the Weddell Sea, the Bellingshausen and Amundsen Seas, and the Ross Sea. The overall thickening of sea ice leads to a larger sea ice volume around Antarctica.

In the North Atlantic, increasing AISM leads to remarkable changes in temperature, salinity and density: the water generally becomes warmer, more saline and denser. The most significant warming occurs in the subsurface layer, whereas the maximum salinity increase is found at the surface. In addition, the MLD becomes larger along the Greenland-Iceland-Scotland ridge. Global teleconnections due to AISM are also studied. The AISM signal is transported with the surface currents: the additional freshwater from AISM tends to enhance the northward spreading of the surface water. As a result, more warm and saline water is transported from the tropics to the North Atlantic Ocean, resulting in warming and salt enrichment there. According to this study, it would take about 30-40 years to establish a systematic, noticeable change in temperature, salinity and MLD in the North Atlantic Ocean. The changes in hydrography due to increasing AISM are compared with observations; their consistency suggests that increasing AISM is very likely a major contributor to the recently observed changes in the Southern Ocean. In addition, AISM might contribute to the salinity contrast between the North Atlantic and the North Pacific, which is important for the global thermohaline circulation.
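The BG03 approach adopted here is, in essence, a bulk heat-flux parameterization at the ice shelf-ocean interface. A minimal sketch of this type of formulation (generic notation assumed here, not quoted from the thesis) is

  Q_{net} = \rho_w \, c_{pw} \, \gamma_T \, A_{eff} \, (T_o - T_f), \qquad M = \frac{Q_{net}}{\rho_i L_i}

where T_o is the ocean temperature adjacent to the ice-shelf base, T_f the freezing temperature at the depth and salinity of the base, \gamma_T an exchange velocity, and A_{eff} the effective melting area; the melt rate M, set by the latent heat of fusion L_i, enters the ocean model as an equivalent freshwater flux.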

Relevance: 10.00%

Abstract:

In this thesis I study various quantum coherence phenomena and lay some of the foundations for a systematic coherence theory. So far, the approach to quantum coherence in science has been largely phenomenological. In the thesis I try to answer the question of what quantum coherence is and how it should be approached within the framework of physics, the metatheory of physics, and the terminology related to them. Notably, quantum coherence is a conserved quantity that can be exactly defined, and I propose a way to define quantum coherence mathematically from the density matrix of the system. Degenerate quantum gases, i.e., Bose condensates and ultracold Fermi systems, form a good laboratory for studying coherence, since their entropy is small and their coherence is large, and thus they exhibit strong coherence phenomena. Concerning coherence phenomena in degenerate quantum gases, I concentrate mainly on the collective association of atoms into molecules, Rabi oscillations, and decoherence. It turns out that collective association and the oscillations do not depend on the spin-statistics of the particles. Moreover, I study the logical features of decoherence in closed systems via a simple spin model. I argue that decoherence is a valid concept even in systems that can experience recoherence, i.e., Poincaré recurrences. Metatheoretically this is a remarkable result, since it justifies quantum cosmology: studying the whole universe (i.e., physical reality) purely quantum physically is meaningful and valid science, in which decoherence explains why the quantum-physical universe appears very classical-like to cosmologists and other scientists. The study of the logical structure of closed systems also reveals that sufficiently complex closed (physical) systems obey a principle similar to Gödel's incompleteness theorem in logic: it is impossible to describe a closed system completely from within the system, and the inside and outside descriptions of the system can differ remarkably. Understanding this feature may make it possible to comprehend coarse-graining better and to define the mutual entanglement of quantum systems uniquely.
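The thesis's own density-matrix definition is not reproduced in the abstract. As an illustration of the kind of definition meant, one simple basis-dependent measure used in the literature (my example, not necessarily the author's formula) quantifies coherence as the total weight of the off-diagonal elements of the density matrix \rho:

  C_{\ell_1}(\rho) = \sum_{i \neq j} |\rho_{ij}|

which vanishes exactly for states that are diagonal, i.e., fully incoherent, in the chosen basis.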

Relevance: 10.00%

Abstract:

Superfluidity is perhaps one of the most remarkable macroscopic quantum effects observed. It appears when a macroscopic number of particles occupies a single quantum state. Using modern experimental techniques, one can create and study topological excitations of superfluids such as solitons (e.g., dark solitons) and vortices. There is a large literature of theoretical work studying the properties of such solitons using semiclassical methods. This thesis describes an alternative method for the study of superfluid solitons: a holographic duality between a class of quantum field theories and gravitational theories. The classical limit of the gravitational system maps into a strong-coupling limit of the quantum field theory. We use a holographic model of superfluidity to study solitons in these systems. One particularly appealing feature of this technique is that it allows finite-temperature effects to be taken into account over a large range of temperatures.

Relevance: 10.00%

Abstract:

Spectroscopy can provide valuable information on the structure of disordered matter beyond that available through, e.g., x-ray and neutron diffraction. X-ray Raman scattering is a non-resonant, element-sensitive process that allows bulk-sensitive measurements of core-excited spectra from light-element samples. In this thesis, x-ray Raman scattering is used to study the local structure of hydrogen-bonded liquids and solids, including liquid water, a series of linear and branched alcohols, and high-pressure ice phases. Connecting the spectral features to the local atomic-scale structure requires theoretical references, and in the case of hydrogen-bonded systems the interpretation of the spectra is currently under active debate. Systematic studies of the intra- and intermolecular effects in alcohols, of non-hydrogen-bonded neighbors in high-pressure ices, and of the effect of temperature in liquid water are used to demonstrate different aspects of the local structure that can influence the near-edge spectra. Additionally, the determination of the extended x-ray absorption fine structure is addressed in a momentum-transfer-dependent study. This work demonstrates the potential of x-ray Raman scattering for unique studies of the local structure of a variety of disordered light-element systems.

Relevance: 10.00%

Abstract:

In this thesis the current status and some open problems of noncommutative quantum field theory are reviewed. The introduction aims to put these theories in their proper context as part of the larger program to model the properties of quantized space-time. Throughout the thesis, special focus is put on the role of noncommutative time and how its nonlocal nature presents us with problems. Applications in scalar field theories as well as in gauge field theories are presented. The infinite nonlocality of space-time introduced by the noncommutative coordinate operators leads to interesting structure and new physics. High-energy and low-energy scales are mixed, causality and unitarity are threatened, and in gauge theory the tools for model building are drastically reduced. As a case study in noncommutative gauge theory, the Dirac quantization condition of magnetic monopoles is examined, with the conclusion that, at least in perturbation theory, it cannot be fulfilled in noncommutative space.
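The noncommutativity of the coordinate operators referred to above is conventionally encoded as (a sketch in the standard notation, which the thesis may or may not follow):

  [\hat{x}^\mu, \hat{x}^\nu] = i\,\theta^{\mu\nu}

with \theta^{\mu\nu} a constant antisymmetric matrix; a nonzero \theta^{0i} is what makes time noncommutative and is the source of the unitarity and causality issues mentioned above.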

Relevance: 10.00%

Abstract:

Modern elementary particle physics is based on quantum field theories. According to our current understanding, both the smallest structures of matter and the composition of the universe are described by quantum field theories, which present the observable phenomena by describing particles as vibrations of the fields. The Standard Model of particle physics is a quantum field theory describing the electromagnetic, weak, and strong interactions in terms of a gauge field theory. However, the Standard Model is believed to describe physics properly only up to a certain energy scale, which cannot be much larger than the so-called electroweak scale, i.e., the masses of the gauge bosons W^± and Z^0. Beyond this scale, the Standard Model has to be modified. In this dissertation, supersymmetric theories are used to tackle the problems of the Standard Model. For example, the quadratic divergences that plague the Higgs boson mass in the Standard Model cancel in supersymmetric theories. Experimental facts concerning the neutrino sector indicate that lepton number is violated in Nature. On the other hand, lepton-number-violating Majorana neutrino masses can induce sneutrino-antisneutrino oscillations in any supersymmetric model. In this dissertation, I present some viable signals for detecting sneutrino-antisneutrino oscillation at colliders. At an e-gamma collider (at the International Linear Collider), the number of electron-sneutrino-antisneutrino oscillation signal events is quite high and the backgrounds are quite small. A similar study for the LHC shows that, even though there are several backgrounds, the sneutrino-antisneutrino oscillations can be detected. A useful asymmetry observable is introduced and studied. Usually, an oscillation probability formula in which the sneutrinos are produced at rest is used; here, however, we study a general oscillation probability. The Lorentz factor and the distance at which the measurement is made inside the detector can have an effect, especially when the sneutrino decay width is very small. These effects are demonstrated for a particular scenario at the LHC.
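For orientation, a minimal two-state sketch (my illustration, not the dissertation's general formula): for a sneutrino with mass splitting \Delta m and decay width \Gamma, the probability of finding the oscillated antisneutrino state at proper time t is, in natural units,

  P(t) \simeq e^{-\Gamma t}\, \sin^2\!\left(\frac{\Delta m\, t}{2}\right)

and in the lab frame a decay at distance L corresponds to proper time t = L/(\gamma \beta c), which is how the Lorentz factor and the detector geometry enter when \Gamma is very small.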

Relevance: 10.00%

Abstract:

X-ray synchrotron radiation was used to study the nanostructure of cellulose in Norway spruce stem wood and of powders of cobalt nanoparticles on a cellulose support. Furthermore, the growth of metallic clusters was modelled and simulated on the mesoscopic size scale. Norway spruce was characterized with x-ray microanalysis at beamline ID18F of the European Synchrotron Radiation Facility in Grenoble. The average dimensions and the orientation of the cellulose crystallites were determined using x-ray microdiffraction, and the nutrient element content was determined using x-ray fluorescence spectroscopy; diffraction patterns and fluorescence spectra were acquired simultaneously. Cobalt nanoparticles on the cellulose support were characterized with x-ray absorption spectroscopy at beamline X1 of the Deutsches Elektronen-Synchrotron in Hamburg, complemented by home-laboratory experiments including x-ray diffraction, electron microscopy, and measurement of magnetic properties with a vibrating sample magnetometer. Extended x-ray absorption fine structure (EXAFS) spectroscopy and x-ray diffraction were used to solve the atomic arrangement of the cobalt nanoparticles. Scanning and transmission electron microscopy were used to image the surfaces of the cellulose fibrils, where the growth of the nanoparticles takes place. The EXAFS experiment was complemented by computational coordination-number calculations on ideal spherical nanocrystals. The growth process of metallic nanoclusters on a cellulose matrix is assumed to be rather complicated, affected not only by the properties of the clusters themselves but depending essentially on the cluster-fiber interfaces as well as on the morphology of the fiber surfaces. The final favored average size of the nanoclusters, if such exists, is most probably a consequence of two competing tendencies towards size selection, one governed by pore sizes, the other by the cluster properties. In this thesis, a mesoscopic model for the growth of metallic nanoclusters on porous cellulose fiber (or inorganic) surfaces is developed. The first step in the modelling was to evaluate the special case of how the growth proceeds on flat or wedged surfaces.
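The coordination-number calculations mentioned above can be illustrated with a short script (a sketch under assumed parameters: an ideal fcc cobalt lattice and arbitrary example radii, not the thesis's actual code):

import numpy as np

def fcc_sphere(a, radius):
    # All fcc lattice points of an ideal crystal that fall inside a sphere
    # of the given radius, centered on a lattice point.
    basis = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0],
                      [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
    n = int(np.ceil(radius / a)) + 1
    cells = np.array([[i, j, k]
                      for i in range(-n, n + 1)
                      for j in range(-n, n + 1)
                      for k in range(-n, n + 1)], dtype=float)
    pts = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a
    return pts[np.linalg.norm(pts, axis=1) <= radius]

def mean_coordination(points, cutoff):
    # Average number of neighbors within the cutoff distance.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return ((d > 1e-9) & (d < cutoff)).sum(axis=1).mean()

a = 0.354               # fcc lattice constant of cobalt, nm (assumed value)
nn = a / np.sqrt(2)     # nearest-neighbor distance in an fcc lattice
for r in (0.5, 1.0, 1.5):   # illustrative particle radii, nm
    pts = fcc_sphere(a, r)
    print(f"R = {r} nm: {len(pts)} atoms, "
          f"mean coordination = {mean_coordination(pts, 1.1 * nn):.2f}")

For bulk fcc the coordination number is 12; its reduction in small spheres is what makes the comparison with EXAFS-derived coordination numbers sensitive to particle size.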

Relevance: 10.00%

Abstract:

In this work, scientific knowledge of nature is examined from a philosophical perspective. The central topic is the practical aspects of experimentation in the natural sciences and their connections to theoretical scientific knowledge. The topic is treated with the help of Hans-Georg Gadamer's hermeneutical philosophy as presented in his book Wahrheit und Methode. Gadamer's starting point in this book is that hermeneutics describes what happens in textual interpretation when an interpreter understands a written text, but he also suggests that hermeneutics has wider fields of application, even that it universally describes all forms of human understanding. The aim of this work is to examine how Gadamer's hermeneutics can be applied to understanding in the natural sciences and what changes must then be incorporated into the hermeneutics, and to sketch the basic features of a hermeneutical model of scientific knowledge.

The work first presents the central elements of Gadamer's hermeneutical philosophy, especially the positive role of prejudices in interpretation and the effective-historical background of every interpretation. It then discusses the shortcomings of Gadamer's arguments for the universality of hermeneutics, especially his strong emphasis on language as the foundation of all understanding. If Gadamer's hermeneutics is to be applicable in the domain of the natural sciences, observations too must be possible objects of interpretation. The dominant trends in the philosophy of science at the time Gadamer wrote Wahrheit und Methode explain why he commits himself so strongly to textual understanding and usually discusses scientific knowledge of nature only as a negative counterpart to knowledge in the human sciences. The development of the philosophy of science after Wahrheit und Methode, especially the fact that complete objectivity is no longer considered a realistic ideal, raises the expectations for the value of hermeneutics for the philosophy of the natural sciences.

Next, three contributions to the philosophy of the natural sciences that are especially valuable for hermeneutics are discussed. The first is Sylvain Bromberger's analysis of uncertainty in scientific research. Bromberger maintains that scientific research is directed at questions whose answers the researcher cannot imagine in advance. His analysis suggests that there is a hermeneutical dialectic of questions and answers in scientific experiments, in which the researcher strives to make observations in order to answer his questions. The second contribution is Michael Polanyi's study of the importance of practical and personal knowledge for all scientific research; of particular interest here is the development of scientific language in connection with experimentation. The third contribution is Robert Crease's study of experimental observations, which shows that the interpretation of observations includes many elements that can be associated with hermeneutics.

The analysis of these three studies leads to the following conclusions. The hermeneutical element of research in the natural sciences is expected to be strong above all in experimental research, where the researcher investigates new questions in a continuous dialogue of technical testing, practical identification, and linguistic description of new phenomena. Especially noteworthy is the difference between verifying and discovering investigations: earlier results can be verified by means of measurements specified in advance, but no one can specify in advance the technical and linguistic steps required for a successful experiment of discovery.
Experimental discoveries therefore require a practical knowledge that cannot be reduced to pure theory or methodology, and this practical knowledge can be described with the help of Gadamer's hermeneutics. Finally, certain fundamental features characteristic of a hermeneutical model of scientific knowledge are discussed. In particular, the historical nature of scientific knowledge, the role of prejudices in scientific research, and the importance of the researcher's personal perspective in scientific observation are key points on which the hermeneutical model of natural science differs considerably from models that emphasize the objectivity of the natural sciences. The hermeneutical perspective does not, however, necessarily exclude other perspectives. Its main benefit is that it allows us to examine aspects of scientific knowledge that objectivity-oriented models leave too much in the shadows, in particular the practical work of experimentation in the natural sciences.

Relevance: 10.00%

Abstract:

Many of the most significant weather-related hazards are connected with floods and drought. In this thesis, changes in the precipitation climate during the present century were assessed over a domain covering Europe and the North Atlantic, on the basis of daily precipitation simulations from ten global climate models. Particular attention was paid to large daily precipitation totals on the one hand, and to the occurrence of dry spells on the other. Previous studies of extreme precipitation of this kind had mainly been based on results from regional climate models. Regional models can simulate small-scale weather phenomena better than global models and give a more detailed picture of the geographical distribution of the changes. On the other hand, the results of regional models depend strongly on the boundary conditions obtained from the driving global model. Global models thus have the advantage that it is easier to combine results from several models, which also gives a better idea of the uncertainty arising from differences between the climate models.

According to the model results, the precipitation climate appears to become more extreme as the climate warms. Globally averaged, both heavy precipitation events and dry days become more common and dry spells lengthen, while the number of days with relatively weak precipitation decreases. The changes are not in the same direction everywhere, however. In Europe, the climate appears to become drier in the Mediterranean region and, in summer, in central Europe as well, whereas in northern Europe precipitation increases, especially in winter. The shift towards a more extreme precipitation climate then shows up in two ways: in areas that on average become wetter, heavy precipitation events increase and intensify more than the mean precipitation grows, and even in many areas that become drier, the very heaviest precipitation events intensify somewhat. Correspondingly, dry days become more common and the longest dry spells lengthen in places even in areas where the mean precipitation increases. The year-to-year variability of precipitation also appears to increase to some extent.

Averaged over the models, the largest annual daily precipitation total in northern Europe increases by 17% by the end of the century, while the longest annual dry spell shortens by only 2%. In central Europe, the largest daily precipitation totals increase by 15% and the longest dry spells lengthen by 22%, and even in southern Europe the largest daily precipitation totals increase by 8%, although the longest dry spells lengthen by 35%. These results are mostly consistent with previous studies of changes in the precipitation climate. Comparison of the simulated precipitation climate with observations also reveals differences familiar from earlier studies: climate models produce fewer rainless days than observed, and most models additionally underestimate both the occurrence and the intensity of heavy precipitation.

Relevance: 10.00%

Abstract:

In meteorology, observations and forecasts of a wide range of phenomena, for example snow, clouds, hail, fog, and tornados, can be categorical, that is, they can take only discrete values (e.g., "snow" and "no snow"). Concentrating on satellite-based snow and cloud analyses, this thesis explores methods that have been developed for the evaluation of categorical products and analyses. Different algorithms for satellite products generate different results; sometimes the differences are subtle, sometimes all too visible. In addition to differences between algorithms, the satellite products are influenced by physical processes and conditions, such as diurnal and seasonal variation in solar radiation, topography, and land use. The analysis of satellite-based snow cover analyses from NOAA, NASA, and EUMETSAT, and of snow analyses for numerical weather prediction models from FMI and ECMWF, was complicated by the fact that the true snow extent was not known, so we were forced simply to measure the agreement between the different products. Sammon mapping, a multidimensional scaling method, was then used to visualize the differences between the products. The trustworthiness of the results for cloud analyses [the EUMETSAT Meteorological Products Extraction Facility cloud mask (MPEF), together with the Nowcasting Satellite Application Facility (SAFNWC) cloud masks provided by Météo-France (SAFNWC/MSG) and the Swedish Meteorological and Hydrological Institute (SAFNWC/PPS)], compared with ceilometers of the Helsinki Testbed, was estimated by constructing confidence intervals (CIs). Bootstrapping, a statistical resampling method, was used to construct the CIs, especially in the presence of spatial and temporal correlation. Reference data for validation are constantly in short supply. In general, the needs of a particular project drive the requirements for evaluation, for example the required accuracy and timeliness of the data and methods. In this vein, we discuss tentatively how data provided by the general public, e.g., photos shared on the Internet photo-sharing service Flickr, can be used as a new source of validation data. The results show that these data are of reasonable quality, and their use for case studies can be warmly recommended. Last, the use of cluster analysis on meteorological in-situ measurements was explored: the Autoclass algorithm was used to construct compact representations of the synoptic conditions of fog at Finnish airports.
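Bootstrapping of this kind can be sketched in a few lines (an illustration with synthetic data, not the thesis's code; where temporal or spatial correlation is strong, a block-bootstrap variant of the resampling step would be used instead):

import numpy as np

rng = np.random.default_rng(42)

# Synthetic paired categorical data: 1 = "cloud", 0 = "no cloud".
satellite = rng.integers(0, 2, size=500)
# A ceilometer series that agrees with the satellite mask ~85% of the time.
ceilometer = np.where(rng.random(500) < 0.85, satellite, 1 - satellite)

def agreement(a, b):
    # Fraction of cases in which the two categorical analyses agree.
    return np.mean(a == b)

# Percentile bootstrap: resample paired cases with replacement.
n = len(satellite)
stats = [agreement(satellite[idx], ceilometer[idx])
         for idx in (rng.integers(0, n, size=n) for _ in range(10000))]

lo, hi = np.percentile(stats, [2.5, 97.5])
print(f"agreement = {agreement(satellite, ceilometer):.3f}, "
      f"95% CI = [{lo:.3f}, {hi:.3f}]")

The same scheme works for any categorical score (hit rate, false-alarm ratio, etc.): only the agreement function changes.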

Relevance: 10.00%

Abstract:

In remote-sensing studies, particles that are comparable to the wavelength exhibit characteristic features in electromagnetic scattering, especially in the degree of linear polarization. These features vary with the physical properties of the particles, such as shape, size, refractive index, and orientation. In this thesis, the direct problem of computing the unknown scattered quantities from the known properties of the particles and the incident radiation is solved, in a unique way, in both the optical and radar spectral regions. The internal electromagnetic fields of wavelength-scale particles are analyzed using both novel and established methods to show how the internal fields are related to the scattered fields in the far zone. This is achieved with tools and methods developed specifically to reveal the internal field structure of particles and to study the mechanisms that relate this structure to the scattering characteristics of the particles. It is shown that, for spherical particles, the internal field is a combination of a forward-propagating wave, whose apparent wavelength is determined by the refractive index of the particle, and a standing-wave pattern with the same apparent wavelength as the incident wave. Due to the surface curvature and the dielectric nature of the particle, the incident wave front undergoes a phase shift, and the resulting internal wave is focused mostly in the forward part of the particle, much like by an optical lens. This focusing is also seen for irregular particles. It is concluded that, for both spherical and nonspherical particles, the interference in the far field between the partial waves originating from these concentrated areas in the particle interior is responsible for the specific polarization features common to wavelength-scale particles, such as negative values and local extrema in the degree of linear polarization, the asymmetry of the phase function, and the enhancement of intensity near the backscattering direction. The papers presented in this thesis solve the direct problem for particles with both simple and irregular shapes to demonstrate that these interference mechanisms are common to all dielectric wavelength-scale particles. Furthermore, it is shown that these mechanisms apply both to regolith particles at optical wavelengths and to hydrometeors at microwave frequencies. An advantage of this kind of study is that it does not matter whether the observation is active (e.g., polarimetric radar) or passive (e.g., optical telescope): in both cases, the internal field is computed for two mutually perpendicular incident polarizations, so that the polarization characteristics can be analyzed from the relation between these fields and the scattered far field.

Relevance: 10.00%

Abstract:

QCD factorization in the Bjorken limit allows one to separate the long-distance physics from the hard subprocess. At leading twist, only one parton in each hadron is coherent with the hard subprocess. Higher-twist effects increase as one of the active partons carries most of the longitudinal momentum of the hadron, x -> 1. In the Drell-Yan process \pi N -> \mu^- \mu^+ + X, the polarization of the virtual photon is observed to change to longitudinal when the photon carries x_F > 0.6 of the pion momentum. I define and study the Berger-Brodsky limit Q^2 -> \infty with Q^2(1-x) fixed. A new kind of factorization holds in the Drell-Yan process in this limit, in which both pion valence quarks are coherent with the hard subprocess, the virtual photon is longitudinal rather than transverse, and the cross section is proportional to a multiparton distribution. Generalized parton distributions contain information on the longitudinal-momentum and transverse-position densities of partons in a hadron. Transverse charge densities are Fourier transforms of the electromagnetic form factors. I discuss the application of these methods to the QED electron, studying the form factors, charge densities and spin distributions of the leading-order |e\gamma> Fock state in impact-parameter and longitudinal-momentum space. I show how the transverse shape of any virtual-photon-induced process, \gamma^*(q)+i -> f, may be measured. Qualitative arguments concerning the size of such transitions have been made in the literature before, but without a precise analysis. Properly defined, the amplitudes and the cross section in impact-parameter space provide information on the transverse shape of the transition process.
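For reference, the Fourier relation invoked here takes the following standard two-dimensional form (written for an unpolarized spin-1/2 system in terms of the Dirac form factor F_1; the notation is mine rather than the thesis's):

  \rho(b) = \int \frac{d^2 q_\perp}{(2\pi)^2}\; e^{-i\,\vec{q}_\perp \cdot \vec{b}}\; F_1(Q^2 = \vec{q}_\perp^{\,2}) = \int_0^\infty \frac{dQ}{2\pi}\, Q\, J_0(bQ)\, F_1(Q^2)

where b is the impact parameter measured from the transverse center of momentum and J_0 is a Bessel function of the first kind.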

Relevance: 10.00%

Abstract:

In order to evaluate the influence of ambient aerosol particles on cloud formation, climate, and human health, detailed information about the concentration and composition of ambient aerosol particles is needed. The timescales of aerosol formation, growth, and removal processes in the atmosphere range from minutes to hours, which highlights the need for high-time-resolution data in order to understand the underlying processes. This thesis focuses on characterizing the ambient levels, size distributions, and sources of water-soluble organic carbon (WSOC) in ambient aerosols. The results show that, at the location of this study, typically 50-60% of the organic carbon in fine particles is water-soluble. The amount of WSOC was observed to increase as aerosols age, likely due to further oxidation of organic compounds. In the boreal region, the main sources of WSOC were biomass burning during the winter and secondary aerosol formation during the summer. WSOC was mainly attributed to a fine-particle mode between 0.1 and 1 μm, although different size distributions were measured for different sources. The WSOC concentrations and size distributions showed a clear seasonal variation. Another main focus of this thesis was to test and further develop high-time-resolution methods for the chemical characterization of ambient aerosol particles. The concentrations of the main chemical components (ions, OC, EC) of ambient aerosol particles were measured online during a year-long intensive measurement campaign conducted at the SMEAR III station in Southern Finland. The results were compared with those of traditional filter collections in order to study the sampling artifacts and limitations of each method. To achieve a better time resolution for the WSOC and ion measurements, a particle-into-liquid sampler (PILS) was coupled with a total organic carbon analyzer (TOC) and two ion chromatographs (IC). The PILS-TOC-IC provided important data on diurnal variations and short-lived plumes, which cannot be resolved from filter samples. In summary, the measurements made for this thesis provide new information on the concentrations, size distributions, and sources of WSOC in ambient aerosol particles in the boreal region. The analytical and collection methods needed for the online characterization of the aerosol chemical composition were further developed in order to provide more reliable high-time-resolution measurements.

Relevance: 10.00%

Abstract:

A thunderstorm is a dangerous electrical phenomenon in the atmosphere. A thundercloud forms when thermal energy is transported rapidly upwards in convective updraughts. Electrification occurs in collisions between cloud particles in the strong updraught. When the amount of charge in the cloud is large enough, an electrical breakdown, better known as a flash, occurs. Lightning location is nowadays an essential tool for the detection of severe weather. Located flashes indicate in real time the movement of hazardous areas and the intensity of lightning activity, and an estimate of the flash peak current can also be determined. The observations can be used in damage surveys. The simplest way to present lightning data is to plot the locations on a map, but the data can also be processed into more complex end-products and exploited in data fusion. Lightning data also serve as an important tool in research on lightning-related phenomena, such as Transient Luminous Events. Most of the world's thunderstorms occur in areas with plenty of heat, moisture, and tropospheric instability, for example in tropical land areas. At higher latitudes, as in Finland, the thunderstorm season is practically restricted to the summer. A particular feature of high-latitude climatology is the large annual variation, which applies to thunderstorms as well. Knowing the performance of any measuring device is important because it affects the accuracy of the end-products. In lightning location systems, the detection efficiency means the ratio between located flashes and the flashes that actually occurred. Because in practice it is impossible to know the true number of flashes, the detection efficiency has to be estimated with theoretical methods.
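Written out (in generic notation of my own), the detection efficiency for flashes is simply

  \mathrm{DE} = \frac{N_{\mathrm{located}}}{N_{\mathrm{occurred}}}

so, for example, a network that locates 85 out of 100 flashes in a region has DE = 0.85 there. Since N_{\mathrm{occurred}} is not directly observable, DE must be estimated from a model of the sensor network rather than measured.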

Relevance: 10.00%

Abstract:

Light scattering, that is, the scattering and absorption of electromagnetic waves, is an important tool in all remote-sensing observations. In astronomy, the light scattered or absorbed by a distant object can be the only source of information. In Solar-system studies, light-scattering methods are employed when interpreting observations of atmosphereless bodies such as asteroids, of the atmospheres of planets, and of cometary or interplanetary dust. Our Earth is constantly monitored from artificial satellites at different wavelengths. In remote sensing of the Earth, light-scattering methods are not the only source of information, as there is always the possibility of making in-situ measurements; satellite-based remote sensing is, however, superior in speed and coverage, provided that the scattered signal can be reliably interpreted. The optical properties of many industrial products play a key role in their quality. Especially for products such as paint and paper, the ability to obscure the background and to reflect light is of utmost importance. High-grade papers are evaluated based on their brightness, opacity, color, and gloss. In product development, there is a need for computer-based simulation methods that could predict the optical properties and could therefore be used to optimize quality while reducing material costs. With paper, for instance, pilot experiments on an actual paper machine can be very time- and resource-consuming. The light-scattering methods presented in this thesis solve rigorously the interaction of light with material that has wavelength-scale structures. These methods are computationally demanding, so the speed and accuracy of the methods play a key role. Different implementations of the discrete-dipole approximation are compared in the thesis, and the results provide practical guidelines for choosing a suitable code. In addition, a novel method is presented for the numerical computation of the orientation-averaged light-scattering properties of a particle, and the method is compared against existing techniques. Simulation of light scattering for various targets, and the possible problems arising from the finite size of the model target, are discussed in the thesis. Scattering by single particles and small clusters is considered, as well as scattering in particulate media and in continuous media with porosity or surface roughness. Various techniques for modeling the scattering media are presented, and the results are applied to optimizing the structure of paper. The same methods can, however, be applied in light-scattering studies of Solar-system regoliths or cometary dust, or in any remote-sensing problem involving light scattering in random media with wavelength-scale structures.
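In the discrete-dipole approximation compared above, the target is represented by N polarizable point dipoles, and the self-consistent dipole polarizations solve the coupled system (a sketch in conventional notation, not tied to any particular code):

  \vec{P}_i = \alpha_i \Big( \vec{E}_{\mathrm{inc}}(\vec{r}_i) + \sum_{j \neq i} \mathbf{G}_{ij}\, \vec{P}_j \Big), \qquad i = 1, \dots, N

where \alpha_i is the polarizability assigned to dipole i and \mathbf{G}_{ij} is the free-space dipole-dipole interaction tensor, so that each dipole responds to the incident field plus the fields radiated by all the other dipoles; the scattered far field is then obtained by summing the radiation of all dipoles. The different implementations differ mainly in how this large linear system is solved, which is where the speed and accuracy trade-offs arise.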