Abstract:
Currently, we live in an era characterized by the completion and first runs of the LHC accelerator at CERN, which is hoped to provide the first experimental hints of what lies beyond the Standard Model of particle physics. In addition, the last decade has witnessed a new dawn of cosmology, which has truly emerged as a precision science. Largely due to the WMAP measurements of the cosmic microwave background, we now believe we have quantitative control of much of the history of our universe. These two experimental windows offer us not only an unprecedented view of the smallest and largest structures of the universe, but also a glimpse of the very first moments in its history. At the same time, they require theorists to focus on the fundamental challenges that await at the boundary of high-energy particle physics and cosmology. What were the contents and properties of matter in the early universe? How is one to describe its interactions? What implications do the various models of physics beyond the Standard Model have for the subsequent evolution of the universe? In this thesis, we explore the connection between supersymmetric theories in particular and the evolution of the early universe. First, we provide the reader with a general introduction to modern-day particle cosmology from two angles: on the one hand by reviewing our current knowledge of the history of the early universe, and on the other by introducing the basics of supersymmetry and its derivatives. Subsequently, with the help of the developed tools, we direct our attention to the specific questions addressed in the three original articles that form the main scientific content of the thesis. Each of these papers concerns a distinct cosmological problem, ranging from the generation of the matter-antimatter asymmetry to inflation, and finally to the origin, or very early stages, of the universe.
They nevertheless share a common factor in their use of the machinery of supersymmetric theories to address open questions in the corresponding cosmological models.
Abstract:
A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In this novel approach, the adaptive tree multigrid technique is used in conjunction with the simplified spherical harmonics approximation of the Boltzmann transport equation. Development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since its application to BNCT dose planning problems, testing and development of MultiTrans has continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been demonstrated by verifying and validating its performance for different types of neutral and charged particle transport problems, as reported in separate publications.
Abstract:
We have studied the nonlinear optical properties of a nanolayered Se/As2S3 film with a modulation period of 10 nm and a total thickness of 1.15 μm at two wavelengths [1064 nm (8 ns) and 800 nm (20 ps)] using the standard Z-scan technique. Three-photon absorption was observed at off-resonant excitation, and saturation of two-photon absorption at quasiresonant excitation. The saturation of two-photon absorption is observed because the pulse duration is shorter than the thermalization time of the photocreated carriers within their bands, while the three-photon absorption is due to the high excitation irradiance. (c) 2007 American Institute of Physics.
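The nonlinear absorption described above can be illustrated with a short numerical sketch. All coefficients and units here are invented for illustration and are not taken from the paper; the point is only that the normalized transmittance falls with increasing input irradiance when two- and three-photon absorption act on the beam, which is the dip an open-aperture Z-scan records.

```python
# Propagation of intensity through a thin film with two-photon (beta)
# and three-photon (gamma) absorption: dI/dz = -beta*I^2 - gamma*I^3,
# integrated with a simple Euler scheme. Coefficients are illustrative.
def transmit(i0, beta, gamma, thickness, steps=10000):
    dz = thickness / steps
    i = i0
    for _ in range(steps):
        i += -(beta * i**2 + gamma * i**3) * dz
    return i

# Nonlinear absorption grows with input irradiance, so the normalized
# transmittance T = I(L)/I(0) decreases as i0 increases.
for i0 in (1.0, 5.0, 10.0):
    print(round(transmit(i0, beta=0.05, gamma=0.01, thickness=1.0) / i0, 3))
```

In a real Z-scan the irradiance variation comes from translating the sample through a focused beam; here it is imposed directly through `i0`.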
Abstract:
A three-level space phasor generation scheme with common-mode elimination and a reduced power device count is proposed in this paper for an open-end winding induction motor. The open-end winding induction motor is fed by three-level inverters from both sides. Each three-level inverter is formed by cascading two two-level inverters. By sharing the bottom inverter between the two three-level inverters on either side, the power device count is reduced. Switching states with zero common-mode voltage variation are selected for PWM switching, so that there is no alternating common-mode voltage in the pole voltages or in the phase voltages. Only two isolated DC links, with half the voltage rating of a conventional three-level neutral point clamped inverter, are needed for the proposed scheme.
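The selection of zero common-mode switching states can be sketched in a few lines. This is an illustrative enumeration, not the authors' implementation: with the pole voltage of each phase expressed as a level -1, 0 or +1 (in units of Vdc/2 about the DC-link midpoint), the common-mode voltage is the average of the three pole voltages, and a zero-CMV PWM strategy restricts itself to states where that average is zero.

```python
from itertools import product

# Pole voltage levels of a three-level inverter, per phase, in units
# of Vdc/2 about the DC-link midpoint (illustrative convention).
LEVELS = (-1, 0, 1)

def common_mode(state):
    """Common-mode voltage = average of the three pole voltages."""
    return sum(state) / 3.0

# Keep only the switching states whose common-mode voltage is zero,
# i.e. the states a zero-CMV PWM strategy is allowed to use.
zero_cmv_states = [s for s in product(LEVELS, repeat=3)
                   if common_mode(s) == 0.0]

# 7 states: (0,0,0) plus the 6 permutations of (-1,0,1)
print(len(zero_cmv_states))
```

These seven states span a space vector structure equivalent to that of an ordinary two-level inverter, which is why restricting PWM to them eliminates the alternating common-mode voltage.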
Local numerical modelling of magnetoconvection and turbulence - implications for mean-field theories
Abstract:
During the last decades, mean-field models, in which large-scale magnetic fields and differential rotation arise due to the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict conditions on mean-field models. Moreover, most of the present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena. In many mean-field models these effects are prescribed in an ad hoc fashion due to the lack of this knowledge. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions. However, the problem is several orders of magnitude too large for present-day and any foreseeable computers. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than can be achieved in direct global calculations. In the present dissertation three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even if mean-field theory is unable to give the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that can be considered to describe the main physical effects in an adequately realistic manner.
In the present study, the Reynolds stresses and turbulent heat transport, responsible for the generation of differential rotation, were determined along with the mixing-length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid rotation regime were studied. The third aim of the present study is to apply the local results in mean-field models, a task we begin by applying the results concerning the alpha-effect and turbulent pumping to mean-field models of the solar dynamo.
Abstract:
New stars in galaxies form in the dense molecular clouds of the interstellar medium. Measuring how mass is distributed in these clouds is of crucial importance for current theories of star formation. This is because several open issues in them, such as the strength of the different mechanisms regulating star formation and the origin of stellar masses, can be addressed using detailed information on cloud structure. Unfortunately, quantifying the mass distribution in molecular clouds accurately over a wide spatial and dynamical range is a fundamental problem in modern astrophysics. This thesis presents studies examining the structure of dense molecular clouds and the distribution of mass in them, with emphasis on nearby clouds that are sites of low-mass star formation. In particular, the thesis concentrates on investigating mass distributions using the near-infrared dust extinction mapping technique. In this technique, the gas column densities towards molecular clouds are determined by examining radiation from stars that shine through the clouds. In addition, the thesis examines the feasibility of using a similar technique to derive the masses of molecular clouds in nearby external galaxies. The papers presented in this thesis demonstrate how the near-infrared dust extinction mapping technique can be used to extract detailed information on the mass distribution in nearby molecular clouds. Furthermore, such information is used to examine characteristics crucial for star formation in the clouds. Regarding the use of the extinction mapping technique in nearby galaxies, the papers of this thesis show that deriving the masses of molecular clouds with the technique suffers from strong biases. However, it is shown that some structural properties can still be examined with the technique.
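The principle of the extinction mapping technique can be sketched numerically. The coefficients and the intrinsic colour below are typical literature values chosen for illustration, not values from the thesis: the colour excess of a background star gives the visual extinction along its line of sight, which in turn scales to a gas column density.

```python
# Minimal sketch of near-infrared colour-excess extinction mapping.
# Assumed typical literature values (NOT from the thesis):
AV_PER_EHK = 15.9      # visual extinction per magnitude of E(H-K)
NH2_PER_AV = 9.4e20    # H2 column density per magnitude of A_V [cm^-2 mag^-1]

def column_density(observed_hk, intrinsic_hk=0.15):
    """Estimate the H2 column density from a star's observed H-K colour.

    intrinsic_hk is the assumed mean intrinsic colour of background
    stars -- a free parameter that real surveys calibrate from data.
    """
    e_hk = observed_hk - intrinsic_hk   # colour excess E(H-K) [mag]
    a_v = AV_PER_EHK * e_hk             # visual extinction [mag]
    return NH2_PER_AV * a_v             # N(H2) [cm^-2]

# A background star reddened by 0.65 mag in H-K:
print(f"{column_density(0.80):.2e}")
```

In practice many such pencil-beam measurements are averaged on a grid to build the column density map of a cloud.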
Abstract:
Time-dependent backgrounds in string theory provide a natural testing ground for physics concerning dynamical phenomena that cannot be reliably addressed in the usual quantum field theories and cosmology. A good, tractable example to study is the rolling tachyon background, which describes the decay of an unstable brane in bosonic and supersymmetric Type II string theories. In this thesis I use boundary conformal field theory, along with random matrix theory and Coulomb gas thermodynamics techniques, to study open and closed string scattering amplitudes off the decaying brane. The simplest example, the tree-level amplitude of n open strings, would give the emission rate of the open strings; however, even this amplitude has so far remained unknown. I organize the open string scattering computations in a more coherent manner and argue how further progress can be made.
Abstract:
Peptidyl-tRNA hydrolase cleaves the ester bond between the tRNA and the attached peptide in peptidyl-tRNA, in order to avoid the toxicity resulting from its accumulation and to free the tRNA for further rounds of protein synthesis. The structure of the enzyme from Mycobacterium tuberculosis has been determined in three crystal forms. This structure and the structure of the enzyme from Escherichia coli differ substantially, on account of the binding of the C terminus of the E. coli enzyme to the peptide-binding site of a neighboring molecule in the crystal. A detailed examination of this difference led to an elucidation of the plasticity of the binding site of the enzyme. The peptide-binding site of the enzyme is a cleft between the body of the molecule and a polypeptide stretch involving a loop and a helix. This stretch is in the open conformation when the enzyme is in the free state, as in the crystals of M. tuberculosis peptidyl-tRNA hydrolase. Furthermore, there is no physical continuity between the tRNA- and peptide-binding sites. The molecule in the E. coli crystal mimics the peptide-bound enzyme molecule: the peptide stretch referred to earlier closes on the bound peptide. Concurrently, a channel connecting the tRNA- and peptide-binding sites opens, primarily through the concerted movement of two residues. Thus, the crystal structure of M. tuberculosis peptidyl-tRNA hydrolase, when compared with that of the E. coli enzyme, leads to a model of the structural changes associated with enzyme action, based on the plasticity of the molecule. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
We report the synthesis of Cd-substituted ZnO nanostructures (Zn1-xCdxO with x up to ≈0.09) by the high-pressure solution growth method. The synthesized nanostructures comprise nanocrystals that are both particles (∼10-15 nm) and rods which grow along the [002] direction, as established by transmission electron microscopy (TEM) and x-ray diffraction (XRD) analysis. Rietveld analysis of the XRD data shows a monotonic increase of the unit cell volume with increasing Cd concentration. The optical absorption, as well as the photoluminescence (PL), shows a red shift on Cd substitution. The line width of the PL spectrum is related to the strain inhomogeneity, and it peaks in the region where the CdO phase separates from the Zn1-xCdxO nanostructures. Time-resolved photoemission showed a long-lived (∼10 ns) component. We propose that the PL behaviour of Zn1-xCdxO is dominated by strain in the sample, with the red shift of the PL linked to the expansion of the unit cell volume on Cd substitution.
Abstract:
This research was prompted by an interest in the atmospheric processes of hydrogen. The sources and sinks of hydrogen are important to know, particularly if hydrogen becomes more common as a replacement for fossil fuels in combustion. Hydrogen deposition velocities (vd) were estimated by applying chamber measurements, a radon tracer method and a two-dimensional model. These three approaches were compared with each other to discover the factors affecting the soil uptake rate. A static closed-chamber technique was introduced to determine hydrogen deposition velocity values in an urban park in Helsinki and at a rural site at Loppi. A three-day chamber campaign for soil uptake estimation was held at a remote site at Pallas in 2007 and 2008. The atmospheric mixing ratio of molecular hydrogen has also been measured by a continuous method in Helsinki in 2007-2008 and at Pallas from 2006 onwards. The mean vd values measured in the chamber experiments in Helsinki and Loppi were between 0.0 and 0.7 mm s-1. The ranges of the results with the radon tracer method and the two-dimensional model were 0.13-0.93 mm s-1 and 0.12-0.61 mm s-1, respectively, in Helsinki. The vd values in the three-day campaign at Pallas were 0.06-0.52 mm s-1 (chamber) and 0.18-0.52 mm s-1 (radon tracer method and two-dimensional model). At Kumpula, the radon tracer method and the chamber measurements produced higher vd values than the two-dimensional model. The results of all three methods were close to each other between November and April, except for the chamber results from January to March, while the soil was frozen. The hydrogen deposition velocity values of all three methods were compared with one-week cumulative rain sums. Precipitation increases the soil moisture, which decreases the soil uptake rate. Measurements made in snow seasons showed that a thick snow layer also hindered gas diffusion, lowering the vd values.
The H2 vd values were compared with snow depth, and a decaying exponential fit was obtained as a result. During a prolonged drought in summer 2006, soil moisture values were lower than in the other summer months between 2005 and 2008, and under these conditions high chamber vd values were measured. The mixing ratio of molecular hydrogen has a seasonal variation. The lowest atmospheric mixing ratios were found in late autumn, when high deposition velocity values were still being measured. The carbon monoxide (CO) mixing ratio was also measured. Hydrogen and carbon monoxide are highly correlated in an urban environment, due to emissions originating from traffic. After correction for the soil deposition of H2, the slope was 0.49±0.07 ppb (H2) / ppb (CO). Using the corrected hydrogen-to-carbon-monoxide ratio, the total hydrogen load emitted by Helsinki traffic in 2007 was estimated at 261 t (H2) a-1. Hydrogen, methane and carbon monoxide are connected with each other through the atmospheric methane oxidation process, in which formaldehyde is produced as an important intermediate. The photochemical degradation of formaldehyde produces hydrogen and carbon monoxide as end products. Examination of back-trajectories revealed long-range transport of carbon monoxide and methane. The trajectories can be grouped by applying cluster and source analysis methods; thus, natural and anthropogenic emission sources can be separated by analyzing trajectory clusters.
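A decaying exponential relation between deposition velocity and snow depth, of the kind described above, can be illustrated with a log-linear least-squares fit on synthetic data. The numbers below are invented for illustration and are not the thesis measurements.

```python
import numpy as np

# Synthetic deposition velocities following vd(d) = v0 * exp(-k * d),
# with illustrative (made-up) parameters and small multiplicative noise.
rng = np.random.default_rng(0)
v0_true, k_true = 0.5, 0.04               # mm/s and 1/cm (assumed)

snow_depth = np.linspace(0.0, 60.0, 20)   # cm
vd = v0_true * np.exp(-k_true * snow_depth)
vd *= np.exp(rng.normal(0.0, 0.02, vd.size))

# log vd = log v0 - k * d is a straight line in d, so an ordinary
# linear fit of log vd against depth recovers both parameters.
slope, intercept = np.polyfit(snow_depth, np.log(vd), 1)
k_fit, v0_fit = -slope, np.exp(intercept)

print(f"v0 = {v0_fit:.3f} mm/s, k = {k_fit:.4f} 1/cm")
```

A log-linear fit is the simplest choice here; with heteroscedastic measurement errors a weighted or direct nonlinear fit would be preferable.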
Abstract:
Knowledge of the physical properties of asteroids is crucial in many branches of solar-system research. Knowledge of spin states and shapes is needed, e.g., for accurate orbit determination and for studying the history and evolution of the asteroids. In my thesis, I present new methods for using photometric lightcurves of asteroids to determine their spin states and shapes. The convex inversion method makes use of a general polyhedron shape model and provides us, at best, with an unambiguous spin solution and a convex shape solution that reproduces the main features of the original shape. Deriving information about non-convex shape features is, in principle, also possible, but usually requires a priori information about the object. Alternatively, a distribution of non-convex solutions, describing the scale of the non-convexities, can also be obtained. Due to an insufficient number of absolute observations and inaccurately defined asteroid phase curves, the c/b ratio, i.e., the flatness of the shape model, is often somewhat ill-defined. However, especially in the case of elongated objects, the flatness seems to be quite well constrained, even when only relative lightcurves are available. The results prove that it is, contrary to earlier belief, possible to derive shape information from lightcurve data if a sufficiently wide range of observing geometries is covered by the observations. Along with the more accurate shape models, the rotational states, i.e., spin vectors and rotation periods, are also determined with improved accuracy. The shape solutions obtained so far reveal a population of irregular objects whose most descriptive shape characteristics can, however, be expressed with only a few parameters. Preliminary statistical analyses of the shapes suggest that there are correlations between shape and other physical properties, such as the size, rotation period and taxonomic type of the asteroids.
More shape data, especially on the smallest and largest asteroids as well as on the fast and slow rotators, are called for in order to study the statistics more thoroughly.
Abstract:
A large proportion of our knowledge about the surfaces of atmosphereless solar-system bodies is obtained through remote-sensing measurements. The measurements can be carried out either as ground-based telescopic observations or as space-based observations from orbiting spacecraft. In both cases, the measurement geometry normally varies during the observations due to the orbital motion of the target body, the spacecraft, etc. As a result, the data are acquired over a variety of viewing and illumination angles. Surfaces of planetary bodies are usually covered with a layer of loose, broken-up rock material called the regolith, whose physical properties affect the directional dependence of remote-sensed measurements. It is of utmost importance for the correct interpretation of remote-sensed data to understand the processes behind this alteration. In the thesis, the multi-angular effects that the physical properties of the regolith have on remote-sensing measurements are studied in two regimes of electromagnetic radiation, visible to near-infrared and soft X-rays. These effects are here termed generally the regolith effects in remote sensing. Although the physical mechanisms that are important in these regions are largely different, notable similarities arise in the methodology used to study the regolith effects, including the characterization of the regolith both in experimental studies and in numerical simulations. Several novel experimental setups have been constructed for the thesis. Alongside the experimental work, theoretical modelling has been carried out, and results from both approaches are presented. Modelling of the directional behaviour of light scattered from a regolith is utilized to obtain shape and spin-state information for several asteroids from telescopic observations, and to assess the surface roughness and single-scattering properties of lunar maria from spacecraft observations.
One of the main conclusions is that the azimuthal direction is an important factor in detailed studies of planetary surfaces. In addition, even a single parameter, such as porosity, can alter the light scattering properties of a regolith significantly. Surface roughness of the regolith is found to alter the elemental fluorescence line ratios of a surface obtained through planetary soft X-ray spectrometry. The results presented in the thesis are among the first to report this phenomenon. Regolith effects need to be taken into account in the analysis of remote-sensed data, providing opportunities for retrieving physical parameters of the surface through inverse methods.
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance tests and are theoretically sound in that they properly take into account the uncertainty caused by parameter estimation. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived from it.
Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of the quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds are obtained for histogram-type plots as well as for quantile-quantile and probability-probability plots of quantile residuals. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests, and also show how the tests and related graphical tools based on residuals are applied in practice.
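The defining property of quantile residuals mentioned above can be illustrated with a minimal sketch: each observation is pushed through the fitted model's distribution function and then through the standard normal quantile function, r_t = Phi^{-1}(F(y_t; theta_hat)). Here the "model" is deliberately trivial, a Gaussian fitted to Gaussian data, so the residuals reduce to standardized observations; for mixture models the same recipe yields residuals that Pearson's residuals cannot replace.

```python
import random
from statistics import NormalDist, fmean, stdev

# Simulated data from the "true" model (illustrative parameters).
random.seed(1)
y = [random.gauss(3.0, 2.0) for _ in range(5000)]

# Fit the model: here simply the sample mean and standard deviation.
mu_hat, sigma_hat = fmean(y), stdev(y)
fitted = NormalDist(mu_hat, sigma_hat)
std_normal = NormalDist()

# Quantile residuals: probability integral transform, then Phi^{-1}.
residuals = [std_normal.inv_cdf(fitted.cdf(v)) for v in y]

# With a correctly specified model the residuals are approximately
# i.i.d. standard normal.
print(round(fmean(residuals), 2), round(stdev(residuals), 2))
```

Misspecification diagnostics of the kind derived in the thesis then amount to testing these residuals for non-normality, autocorrelation, or conditional heteroscedasticity.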
Abstract:
This is a qualitative and multimethodological comparative study, which consists of two main parts: examining the development of new media, and analysing and comparing the new media strategies of the three companies studied (Alma Media, Sanoma and the Finnish Broadcasting Company Yleisradio). The study includes the first large-scale review in Finnish of the development of new media, paying attention to the birth of the Internet as well as to mobile media, web TV and other elements of new media. It also examines the function of electronic distribution channels before the age of the Internet, e.g. cable text and videotext. The study also answers how the three traditional Finnish media houses began spreading their content to the Internet and wireless applications in 1994–2004. In researching the new media strategies, the study pays special attention to the attitudes that the three media companies adopted towards the Internet and other forms of new media in their strategies during the years in question. By analysing and comparing, e.g., the companies' strategies and their investments, the study ascertains whether the companies had a joint functional model in adopting new media or acted entirely on their own, without taking much notice of the media field overall. The study makes extensive use of previously published material. The researcher has also interviewed almost twenty people who were involved in getting the companies' new media functions under way; the methods used for the interviews were dialogue and snowball sampling. The researcher has created a classification in which he divides the business strategies into four categories: active, careful, permissive, and passive. In comparing and analysing the companies, the researcher has used the classification devised by Allan Afuah & Christopher L. Tucci.
This seven-element classification consists of dominant managerial logic, competency trap, fear of cannibalisation and loss of revenue, channel conflict, political power, co-opetitor power and emotional attachment. In analysing the company strategies, the researcher has also drawn on the classifications of convergence made by Everette E. Dennis and Graham Murdock, as well as the aspects formulated by Sylvia Chan-Olmsted and Louisa Ha concerning the companies' success in adopting the Internet into their functions. Based on all these classifications, and by developing them further, the researcher analyses and compares the success of the new media strategies of the three Finnish companies. The outcome of the study is a conclusion as to the kinds of strategies with which the companies have carried out their new media functions and how well they have succeeded.
Abstract:
The aims of the thesis are (1) to present a systematic evaluation of generation and its relevance as a sociological concept, (2) to reflect on how generational consciousness, i.e. generation as an object of collective identification that has social significance, can emerge and take shape, and (3) to analyze empirically the generational experiences and consciousness of one specific generation, namely the Finnish baby boomers (b. 1945–1950). The thesis contributes to the discussion on the social (as distinct from the genealogical) meaning of the concept of generation, launched by Karl Mannheim's classic Das Problem der Generationen (1928), whose central idea is that a certain group of people is bonded together by a shared experience and that this bonding can result in a distinct self-consciousness. The thesis comprises six original articles and an extensive summarizing chapter. In the empirical articles, the baby boomers are studied on the basis of nationally representative survey data (N = 2628) and narrative life-story interviews (N = 38). In the article that discusses the connection between generations and social movements, the analysis is based on a member survey of Attac Finland (N = 1096). Three main themes were clarified in the thesis. (1) In the social sense, the concept of generation is a modern, problematic, and ultimately political concept. It served the interests of the intellectuals who developed it in the early 20th century and provided them, as an alternative to the concept of social class, with a new way of thinking about social change and progress. The concept of generation is always coupled with the concept of Zeitgeist or some other controversial way of defining what is essential, i.e. what creates generations, in a given culture. Thus generation is, as a product of definition and classification struggles, a contested concept.
The concept also carries clearly elitist connotations; the idea of some kind of vanguard (the elite) that represents an entire generation by proclaiming itself its spokesman automatically creates a counterpart, namely the others in the peer group who are thought to be represented (the masses). (2) Generational consciousness cannot emerge through any kind of automatic or endogenous process; it must be made. There has to be somebody who represents the generation in order for that generation to exist in people's minds and as an object of identification; generational experiences and their meanings must be articulated. Hence, social generations are, in a fundamental manner, discursively constructed. The articulations of generational experiences (speeches, writings, manifestos, labels, etc.) can be called the discursive dimension of social generations, and this notion reveals how public discourse shapes people's generational consciousness. Another important element in the process is collective memory, as generational consciousness often takes form only retrospectively. (3) The Finnish baby boomers are not a united or homogeneous generation but are divided into many smaller sections with specific generational experiences and consciousnesses. The content of the generational consciousness of the baby boomers is heavily politically charged. A salient dividing line inside the age group is formed by individual attitudes towards so-called 1960s radicalism. Identification with the 1960s generation functions today as a positive self-definition of a certain small leftist elite group, and the values and characteristics usually connected with the idea of the 1960s generation do not represent the whole age group. On the contrary, among some members of the baby boom generation, generational identification is still directed by the experience of how traditional values were disgraced in the 1960s.
As objects of identification, the neutral term baby boomers and the charged 1960s generation are entirely different things, and therefore they should not be used as synonyms. Although the significance of the 1960s generation group is often overestimated, its members are nevertheless special with respect to generational consciousness because they have presented themselves as the voice of the entire generation. Their generational interpretations have spread through the media with the help of certain iconic images of the generation, to such an extent that 1960s radicalism has become an indirect generational experience for other parts of the baby boom cohort as well.