26 results for third-order non-linearity
in Helda - Digital Repository of the University of Helsinki
Abstract:
Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations for the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems both in the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results, utilizing the full force of non-homogeneous analysis. The final article concerns sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.
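To fix ideas, a local Tb theorem of the kind discussed above can be stated schematically as follows (our paraphrase of the standard formulation, not a quotation from the thesis): the L²(μ) boundedness of T follows from the existence, for every cube Q, of a test function b_Q satisfying

```latex
\operatorname{supp} b_Q \subset Q, \qquad
\Bigl|\int_Q b_Q \, d\mu\Bigr| \gtrsim \mu(Q), \qquad
\int_Q |T b_Q|^2 \, d\mu \lesssim \mu(Q),
```

together with the symmetric conditions for the adjoint T*. The weak testing conditions mentioned above relax these requirements, for instance by weakening the integrability demanded in the last estimate.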
Abstract:
The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even when the forest characteristics remain the same. This variation has so far been diminished by various radiometric corrections. In this study, the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When examining the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs, the clearest differences were found between the front- and back-lighted image halves. Within an image half, the differences between blocks were clearly larger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands.
The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in the regression models. Five different trend surfaces based on converted image co-ordinates were used in the models in order to diminish the effect of bidirectional reflectance. The reference model of total volume, in which the location of the image point was ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using either the front-lighted image features, the back-lighted image features, or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation. The best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result gives motivation for further research on multi-image interpretation. The focus may be on improving regression estimation and feature selection, or on examining the stratification used in two-phase sampling inventory techniques. Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image coordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
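A complete third-order trend surface in two image co-ordinates is an ordinary least-squares fit over all monomials x^i·y^j with i + j ≤ 3. The sketch below illustrates that step on synthetic data (the data, coefficients and variable names are invented for illustration and are not from the study):

```python
import numpy as np

# Synthetic illustration: fitting a complete third-order trend surface
# in converted image co-ordinates (x, y) by ordinary least squares, one
# way to absorb bidirectional-reflectance variation across the image.
rng = np.random.default_rng(0)
n = 200
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)

def design_matrix(x, y):
    # All monomials x**i * y**j with i + j <= 3: 10 terms in total.
    cols = [x**i * y**j for i in range(4) for j in range(4 - i)]
    return np.column_stack(cols)

X = design_matrix(x, y)
true_coef = rng.normal(0, 0.5, X.shape[1])   # invented "true" surface
z = X @ true_coef + rng.normal(0, 0.05, n)   # e.g. log of volume per hectare

coef, *_ = np.linalg.lstsq(X, z, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - z) ** 2))
```

A second-order surface would simply restrict the monomials to i + j ≤ 2, giving 6 columns instead of 10.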
Abstract:
Inflation is a period of accelerated expansion in the very early universe, which has the appealing aspect that it can create primordial perturbations via quantum fluctuations. These primordial perturbations have been observed in the cosmic microwave background, and they also function as the seeds of all large-scale structure in the universe. Curvaton models are simple modifications of the standard inflationary paradigm: inflation is driven by the energy density of the inflaton, but another field, the curvaton, is responsible for producing the primordial perturbations. The curvaton decays after inflation has ended, whereupon the isocurvature perturbations of the curvaton are converted into adiabatic perturbations. Since the curvaton must decay, it must have some interactions. Additionally, realistic curvaton models typically have some self-interactions. In this work we consider self-interacting curvaton models, where the self-interaction is a monomial in the potential, suppressed by the Planck scale, and thus the self-interaction is very weak. Nevertheless, since the self-interaction makes the equations of motion non-linear, it can modify the behaviour of the model very drastically. The most intriguing aspect of this behaviour is that the final properties of the perturbations become highly dependent on the initial values. Departures from the Gaussian distribution are important observables of the primordial perturbations. Due to the non-linearity of the self-interacting curvaton model and its sensitivity to initial conditions, it can produce significant non-Gaussianity in the primordial perturbations. In this work we investigate the non-Gaussianity produced by the self-interacting curvaton, and demonstrate that the non-Gaussianity parameters do not obey the analytically derived approximate relations often cited in the literature. Furthermore, we also consider a self-interacting curvaton with a mass at the TeV scale.
Motivated by realistic particle physics models such as the Minimal Supersymmetric Standard Model, we demonstrate that a curvaton model within this mass range can be responsible for the observed perturbations if it decays late enough.
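The sensitivity to initial values that a monomial self-interaction introduces can be illustrated with a toy model (this is our own sketch, not the thesis's equations): a damped oscillator chi'' + 3H·chi' + m²·chi + lam·chi³ = 0 with invented parameter values. For lam = 0 the equation is linear, so doubling the initial field value exactly doubles the late-time value; the cubic term breaks that proportionality.

```python
import numpy as np

# Toy sketch (hypothetical parameters, arbitrary units): a field with a
# quartic self-interaction in its potential, evolving under strong Hubble
# friction. RK4 integration of chi'' + 3*H*chi' + m**2*chi + lam*chi**3 = 0.
H, m, lam = 3.0, 0.5, 1.0

def rhs(s):
    chi, dchi = s
    return np.array([dchi, -3 * H * dchi - m**2 * chi - lam * chi**3])

def final_value(chi0, t_end=10.0, dt=1e-3):
    """Classic fourth-order Runge-Kutta from chi(0) = chi0, chi'(0) = 0."""
    s = np.array([chi0, 0.0])
    for _ in range(int(t_end / dt)):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s[0]

f1, f2 = final_value(1.0), final_value(2.0)
# With lam = 0 we would have f2 == 2 * f1 exactly (linear equation);
# the self-interaction makes the map chi0 -> final value non-linear.
```

The deviation of f2/f1 from 2 is a minimal stand-in for the thesis's point that the final perturbation properties depend non-trivially on the initial field value.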
Abstract:
A better understanding of vacuum arcs is desirable in many of today's 'big science' projects, including linear colliders, fusion devices, and satellite systems. For the Compact Linear Collider (CLIC) design, radio-frequency (RF) breakdowns occurring in accelerating cavities influence efficiency optimisation and cost reduction issues. Studying vacuum arcs both theoretically and experimentally under well-defined and reproducible direct-current (DC) conditions is the first step towards exploring RF breakdowns. In this thesis, we have studied Cu DC vacuum arcs with a combination of experiments, a particle-in-cell (PIC) model of the arc plasma, and molecular dynamics (MD) simulations of the subsequent surface damage mechanism. We have also developed the 2D Arc-PIC code and the physics model incorporated in it, especially for the purpose of modelling plasma initiation in vacuum arcs. Assuming the presence of a field emitter at the cathode initially, we have identified the conditions for plasma formation and have studied the transition from the field emission stage to a fully developed arc. The 'footing' of the plasma is the cathode spot that continuously supplies the arc with particles; the high-density core of the plasma is located above this cathode spot. Our results have shown that once an arc plasma is initiated, and as long as energy is available, the arc is self-maintaining due to the plasma sheath that ensures enhanced field emission and sputtering. The plasma model can already give an estimate of how the time-to-breakdown changes with the neutral evaporation rate, which is yet to be determined by atomistic simulations. Due to the non-linearity of the problem, we have also performed a code-to-code comparison. The reproducibility of the plasma behaviour and time-to-breakdown with independent codes increased confidence in the results presented here.
Our MD simulations identified high-flux, high-energy ion bombardment as a possible mechanism forming the early-stage surface damage in vacuum arcs. In this mechanism, sputtering occurs mostly in clusters, as a consequence of overlapping heat spikes. Different-sized experimental and simulated craters were found to be self-similar, with a crater depth-to-width ratio of about 0.23 (simulated) to 0.26 (experimental). Experiments, which we carried out to investigate the energy dependence of DC breakdown properties, point to an intrinsic connection between DC and RF scaling laws and suggest the possibility of accumulative effects influencing the field enhancement factor.
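The field emitter assumed at the cathode is usually described by Fowler-Nordheim field emission, where a local enhancement factor beta multiplies the macroscopic field. As a hedged illustration of why beta matters so much, the elementary Fowler-Nordheim expression (approximate constants, image-charge corrections ignored; not the thesis's full emission model) can be evaluated directly:

```python
import math

# Illustrative sketch: elementary Fowler-Nordheim current density for a
# macroscopic field E enhanced locally by a factor beta. The constants are
# approximate textbook values; correction factors are deliberately omitted.
A = 1.54e-6   # A * eV / V^2 (approximate first FN constant)
B = 6.83e9    # eV^(-3/2) * V / m (approximate second FN constant)

def fn_current_density(E, beta, phi=4.5):
    """Current density (A/m^2) for field E (V/m), enhancement factor beta,
    and work function phi in eV (4.5 eV is roughly that of Cu)."""
    F = beta * E  # local surface field at the emitter tip
    return (A * F**2 / phi) * math.exp(-B * phi**1.5 / F)

j_low = fn_current_density(100e6, 30)   # 100 MV/m with beta = 30
j_high = fn_current_density(100e6, 60)  # same field, beta doubled
```

Because beta sits inside the exponential, a factor-of-two change in the enhancement factor changes the emitted current density by several orders of magnitude, which is why accumulative effects on beta are relevant to breakdown statistics.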
Abstract:
In this study, the potential allowable cut in the district of Pohjois-Savo, based on the non-industrial private forest (NIPF) landowners' choices of timber management strategies, was determined. Alternative timber management strategies were generated, and the choices and the factors affecting the choices of timber management strategies by NIPF landowners were studied. The choices of timber management strategies were determined by maximizing the utility functions of the NIPF landowners. The parameters of the utility functions were estimated using the Analytic Hierarchy Process (AHP). The level of the potential allowable cut was compared to the cutting budgets based on the 7th and 8th National Forest Inventories (NFI7 and NFI8), to the combining of private forestry plans, and to the realized drain from non-industrial private forests. The potential allowable cut was calculated using the same MELA system as has been used in the calculation of the national cutting budget. The data consisted of the NIPF holdings (from the TASO planning system) that had been inventoried compartmentwise and had forestry plans made during the years 1984-1992. The NIPF landowners' choices of timber management strategies were elicited by a two-phase mail inquiry. The most preferred strategy was "sustainability" (chosen by 62% of landowners). The second in order of preference was "finance" (17%) and the third was "saving" (11%). "No cuttings" and "maximum cuttings" were the least preferred (9% and 1%, respectively). The factors promoting the choice of strategies with intensive cuttings were a) "farmer as forest owner" and "owning fields", b) "increase in the size of the forest holding", c) agriculture and forestry orientation in production, d) "decreasing short-term stumpage earning expectations", e) "increasing intensity of future cuttings", and f) "choice of forest taxation system based on site productivity".
The potential allowable cut defined in the study was 20% higher than the average of the realized drain during the years 1988-1993, which, in turn, was at the same level as the cutting budget based on the combining of forestry plans in eastern Finland. Correspondingly, the potential allowable cut defined in the study was 12% lower than the NFI8-based greatest sustained allowable cut for the 1990s. Using the method presented in this study, timber management strategies can be determined for non-industrial private forest landowners in different parts of Finland. Based on the choices of timber management strategies, regular cutting budgets can be calculated more realistically than before.
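The AHP step mentioned above derives priority weights from pairwise comparisons by taking the principal eigenvector of the comparison matrix. A minimal sketch, with an invented comparison matrix for three hypothetical strategies (the matrix and its values are illustrative only, not the study's data):

```python
import numpy as np

# Hypothetical AHP illustration: entry a_ij states how strongly strategy i
# is preferred over strategy j on Saaty's 1-9 scale; the matrix is reciprocal.
A = np.array([
    [1.0, 3.0, 5.0],    # e.g. "sustainability" vs. "finance", "saving"
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()                    # normalised priority weights

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1);
# small values indicate nearly consistent judgements.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
```

The normalised priorities can then serve as the estimated parameters of a landowner's utility function over the strategies.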
Abstract:
Using audio-recorded data from cognitive-constructivist psychotherapy, the article shows a particular institutional context in which successful professional action does not adhere to the pattern of affective neutrality which Parsons saw as an inherent component of medicine and psychotherapy. In our data, the professional's non-neutrality functions as a tool for achieving institutional goals. The analysis focuses on the psychotherapist's actions that convey a critical stance towards a third party with whom the patient has experienced problems. The data analysis revealed two practices of this kind of critique: (1) the therapist can confirm the critique that the patient has expressed, or (2) return to a critique from which the patient has shifted away. These actions are shown to build grounds for the therapist's further actions that challenge the patient's dysfunctional beliefs. The article suggests that in the case of psychotherapy, actions that as such might be seen as apparent lapses from the neutral professional role can, in their specific context, perform the task of the institution at hand.
Abstract:
This dissertation studies the language of Latin letters that were written in Egypt and Vindolanda (in northern Britain) during the period from the 1st century BC to the 3rd century AD on papyri, ostraca, and wooden tablets. The majority of the texts are, in one way or another, connected with the Roman army. The focus of the study is on syntax and pragmatics. Besides traditional philological methods, modern syntactic theory is used as well, especially in the pragmatic analysis. The study begins with a critical survey of certain concepts that are current in research on the Latin language, most importantly the concept of 'vulgar Latin', which, it is argued, seems to be used as an abstract noun for variation and change in Latin. Further, it is necessary to treat even the non-literary material primarily as written texts and not as straightforward reflections of spoken language. An examination of letter phraseology shows that there is considerable variation between the two major geographical areas of provenance. Latin letter writing in Egypt was influenced by Greek. The study highlights the importance of seeing the letters as a text type, with recurring phraseological elements appearing in the body text as well. It is argued that recognising these elements is essential for the correct analysis of the syntax. Three areas of syntax are discussed in detail: sentence connection (mainly parataxis), syntactically incoherent structures, and word order (the order of the object and the verb). For certain types of sentence connection we may plausibly posit an origin in spoken Latin, but for many other linguistic phenomena attested in this material the issue of spoken Latin is anything but simple. Concerning the study of historical syntax, the letters offer information about the changing status of the accusative case.
Incoherent structures may reflect contaminations in spoken language, but usually the reason for them is the inability of the writer to put his thoughts into writing, especially when there is something more complicated to be expressed. Many incoherent expressions reflect the need to start the predication with a thematic constituent. Latin word order is seen as resulting from an interaction of syntactic and pragmatic factors. The preference for an order where the topic is placed sentence-initially can be seen in word order more generally as well. Furthermore, there appears to be a difference between Egypt and Vindolanda. The letters from Vindolanda show the order O(bject) V(erb) clearly more often than the letters from Egypt. Interestingly, this difference correlates with another, namely the use of the anaphoric pronoun 'is'. This is an interesting observation in view of the fact that both of these are traditional Latin features, as opposed to those that foreshadow the Romance development (VO order and use of the anaphoric 'ille'). However, it is difficult to say whether this is an indication of social or regional variation.
Abstract:
In this study I consider what kind of perspective on the mind-body problem is taken, and can be taken, by the philosophical position called non-reductive physicalism. Many positions fall under this label. The form of non-reductive physicalism which I discuss is in essential respects the position taken by Donald Davidson (1917-2003) and Georg Henrik von Wright (1916-2003). I defend their positions and discuss the unrecognized similarities between their views. Non-reductive physicalism combines two theses: (a) everything that exists is physical; (b) mental phenomena cannot be reduced to states of the brain. This means that according to non-reductive physicalism the mental aspect of humans (be it a soul, mind, or spirit) is an irreducible part of the human condition. Davidson and von Wright also claim that, in some important sense, the mental aspect of a human being does not reduce to the physical aspect, that there is a gap between these aspects that cannot be closed. I claim that their arguments for this conclusion are convincing. I also argue that whereas von Wright and Davidson give interesting arguments for the irreducibility of the mental, their physicalism is unwarranted. These philosophers do not give good reasons for believing that reality is thoroughly physical. Notwithstanding the materialistic consensus in contemporary philosophy of mind, the ontology of mind is still an uncharted territory where real breakthroughs are not to be expected until a radically new ontological position is developed. The third main claim of this work is that the problem of mental causation cannot be solved from the Davidsonian-von Wrightian perspective. The problem of mental causation is the problem of how mental phenomena like beliefs can cause physical movements of the body. As I see it, the essential point of non-reductive physicalism - the irreducibility of the mental - and the problem of mental causation are closely related.
If mental phenomena do not reduce to causally effective states of the brain, then what justifies the belief that mental phenomena have causal powers? If mental causes do not reduce to physical causes, then how can we tell when - or whether - the mental causes in terms of which human actions are explained are actually effective? I argue that this - how to decide when mental causes really are effective - is the real problem of mental causation. The motivation to explore and defend a non-reductive position stems from the belief that reductive physicalism leads to serious ethical problems. My claim is that Davidson's and von Wright's ultimate reason to defend a non-reductive view traces back to their belief that a reductive understanding of human nature would be a narrow and possibly harmful perspective. The final conclusion of my thesis is that von Wright's and Davidson's positions provide a starting point from which the current scientistic philosophy of mind can be critically explored further in the future.
Abstract:
The purpose of this study is to describe the development of the application of mass spectrometry to the structural analysis of non-coding ribonucleic acids during the past decade. Mass spectrometric methods are compared with traditional gel electrophoretic methods, the performance characteristics of mass spectrometric analyses are studied, and the future trends of mass spectrometry of ribonucleic acids are discussed. Non-coding ribonucleic acids are short polymeric biomolecules which are not translated into proteins, but which may affect gene expression in all organisms. Regulatory ribonucleic acids act through transient interactions with key molecules in signal transduction pathways. The interactions are mediated through specific secondary and tertiary structures. Posttranscriptional modifications in the structures of the molecules may introduce new properties to the organism, such as adaptation to environmental changes or the development of resistance to antibiotics. In the scope of this study, the structural studies include i) determination of the sequence of nucleobases in the polymer chain, ii) characterisation and localisation of posttranscriptional modifications in nucleobases and in the backbone structure, iii) identification of ribonucleic acid-binding molecules and iv) probing of higher-order structures in the ribonucleic acid molecule. Bacteria, archaea, viruses and HeLa cancer cells have been used as target organisms. Synthesised ribonucleic acids consisting of structural regions of interest have been used frequently. Electrospray ionisation (ESI) and matrix-assisted laser desorption ionisation (MALDI) have been used for ionisation of ribonucleic acid analytes. Ammonium acetate and 2-propanol are common solvents for ESI. Trihydroxyacetophenone is the optimal MALDI matrix for ionisation of ribonucleic acids and peptides. Ammonium salts are used in ESI buffers and MALDI matrices as additives to remove cation adducts.
Reverse-phase high-performance liquid chromatography has been used for desalting and fractionation of analytes either off-line or on-line, coupled with the ESI source. Triethylamine and triethylammonium bicarbonate are used as ion-pair reagents almost exclusively. A Fourier transform ion cyclotron resonance analyser using ESI coupled with liquid chromatography is the platform of choice for all forms of structural analyses. A time-of-flight (TOF) analyser using MALDI may offer a sensitive, easy-to-use and economical solution for simple sequencing of longer oligonucleotides and analyses of analyte mixtures without prior fractionation. Special analysis software is used for computer-aided interpretation of mass spectra. With mass spectrometry, sequences of 20-30 nucleotides in length may be determined unambiguously. Sequencing may be applied to quality control of short synthetic oligomers for analytical purposes. Sequencing in conjunction with other structural studies enables accurate localisation and characterisation of posttranscriptional modifications and identification of nucleobases and amino acids at the sites of interaction. High-throughput screening methods for RNA-binding ligands have been developed. Probing of the higher-order structures has provided supportive data for computer-generated three-dimensional models of viral pseudoknots. In conclusion, mass spectrometric methods are well suited for structural analyses of small species of ribonucleic acids, such as short non-coding ribonucleic acids in the molecular size region of 20-30 nucleotides. Structural information not attainable with other methods of analysis, such as nuclear magnetic resonance and X-ray crystallography, may be obtained with the use of mass spectrometry. Ligand screening may be used in the search for possible new therapeutic agents.
Demanding assay design and challenging interpretation of data require multidisciplinary knowledge. The implementation of mass spectrometry in structural studies of ribonucleic acids is probably most efficiently conducted in specialist groups consisting of researchers from various fields of science.
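The mass-ladder principle behind such sequencing can be sketched as follows: consecutive 5'-fragments of an RNA differ by exactly one residue mass, so the differences between successive ladder masses identify each base. The residue masses below are rough, illustrative values (the masses, function names and representation are our own, not from the study):

```python
# Illustrative mass-ladder sketch; masses are approximate and for
# demonstration only, not analytical-grade monoisotopic values.
RESIDUE_MASS = {"A": 329.2, "C": 305.2, "G": 345.2, "U": 306.2}  # approx. Da
H2O = 18.0  # approx. Da, terminal water

def fragment_ladder(seq):
    """Approximate masses of the 5'-fragments of increasing length."""
    masses, total = [], H2O
    for base in seq:
        total += RESIDUE_MASS[base]
        masses.append(total)
    return masses

def read_sequence(ladder):
    """Recover the base sequence from consecutive mass differences."""
    by_mass = {round(m, 1): b for b, m in RESIDUE_MASS.items()}
    prev, seq = H2O, []
    for m in ladder:
        seq.append(by_mass[round(m - prev, 1)])
        prev = m
    return "".join(seq)

ladder = fragment_ladder("GAUC")
```

In a real spectrum the ladder comes from controlled degradation or in-source fragmentation, and mass accuracy, adducts and modifications make the assignment far less trivial than in this toy version.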
Abstract:
This thesis is a study of a rather new logic called dependence logic and its closure under classical negation, team logic. In this thesis, dependence logic is investigated from several aspects. Some rules are presented for quantifier swapping in dependence logic and team logic. Such rules are among the basic tools one must be familiar with in order to gain the required intuition for using the logic for practical purposes. The thesis compares Ehrenfeucht-Fraïssé (EF) games of first order logic and dependence logic and defines a third EF game that characterises a mixed case where first order formulas are measured in the formula rank of dependence logic. The thesis contains detailed proofs of several translations between dependence logic, team logic, second order logic and its existential fragment. Translations are useful for showing relationships between the expressive powers of logics. Also, by inspecting the form of the translated formulas, one can see how an aspect of one logic can be expressed in the other logic. The thesis makes preliminary investigations into proof theory of dependence logic. Attempts focus on finding a complete proof system for a modest yet nontrivial fragment of dependence logic. A key problem is identified and addressed in adapting a known proof system of classical propositional logic to become a proof system for the fragment, namely that the rule of contraction is needed but is unsound in its unrestricted form. A proof system is suggested for the fragment and its completeness conjectured. Finally, the thesis investigates the very foundation of dependence logic. An alternative semantics called 1-semantics is suggested for the syntax of dependence logic. There are several key differences between 1-semantics and other semantics of dependence logic. 1-semantics is derived from first order semantics by a natural type shift. Therefore 1-semantics reflects an established semantics in a coherent manner. 
Negation in 1-semantics is a semantic operation and satisfies the law of excluded middle. A translation is provided from unrestricted formulas of existential second order logic into 1-semantics. Also, game-theoretic semantics are considered in the light of 1-semantics.
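The central semantic notion of dependence logic, satisfaction of a dependence atom by a team, is concrete enough to sketch directly: a team (a set of variable assignments) satisfies =(x, y) iff any two assignments that agree on x also agree on y. The encoding below (dicts for assignments) is our own illustration, not the thesis's formalism:

```python
# Team semantics sketch: a team is a collection of assignments, and the
# dependence atom =(xs, y) holds iff the values of xs functionally
# determine the value of y across the whole team.
def satisfies_dependence(team, xs, y):
    """team: iterable of dicts; xs: tuple of variable names; y: a variable."""
    seen = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        if key in seen and seen[key] != s[y]:
            return False  # two assignments agree on xs but differ on y
        seen[key] = s[y]
    return True

team = [
    {"x": 0, "y": 1},
    {"x": 0, "y": 1},
    {"x": 1, "y": 0},
]
```

Note that the empty team vacuously satisfies every dependence atom, which is one of the features distinguishing team semantics from the usual single-assignment Tarskian semantics.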
Abstract:
In order to fully understand the process of European integration, it is of paramount importance to consider developments at the sub-national and local level. EU integration scholars shifted their attention to the local level only at the beginning of the 1990s with the concept of multi-level governance (MLG). While MLG is the first concept to scrutinise the position of local levels of public administration and other actors within the EU polity, I perceive it as too optimistic in the degree of influence it ascribes to local levels. Thus, learning from and combining MLG with other concepts, such as structural constructivism, helps to reveal some of the hidden aspects of EU integration and paint a more realistic picture of multi-level interaction. This thesis also answers the call for more case studies in order to conceptualise MLG further. After a critical study of theories and concepts of European integration, above all MLG, I analyse sub-national and local government in Finland and Germany. I show how the sub-national level and local governments are embedded in the EU's multi-level structure of governance and how, through EU integration, those levels have been empowered, but also how their scope of action has partially decreased. After this theoretical and institutional contextualisation, I present the results of my empirical study of the EU's Community Initiative LEADER+. LEADER stands for 'Liaison Entre Actions de Développement de l'Économie Rurale', and aims at improving the economic conditions in Europe's rural areas. I was interested in how different actors construct and shape EU-financed rural development, especially in how local actors organised in so-called local action groups (LAGs) cooperate with other administrative units within the LEADER+ administrative chain. I also examined intra-institutional relations within those groups, in order to find out who are the most influential and powerful actors within them.
Empirical data on the Finnish and German LAGs were first gathered through a survey, which was then supplemented and completed by interviewing LAG members, LAG managers, several civil servants from Finnish and German decision-making and managing authorities, and a civil servant from the EU Commission. My main argument is that in both Germany and Finland, the Community Initiative LEADER+ offered a space for multi-level interaction and local-level involvement, a space that on the one hand consists of highly motivated people actively contributing to the improvement of the quality of life and economy in Europe's countryside, but which on the other hand is dependent on, and also restricted by, national administrative practices, implementation approaches and cultures. In Finland, the principle of tri-partition (kolmikantaperiaate) in organising the executive committees of LAGs is very noticeable. In comparison to Germany, for instance, the representation of public administration in those committees is much more limited due to this principle. Furthermore, the mobilisation of local residents and the bringing together of actors from the local area with different social and institutional backgrounds to become an active part of LEADER+ was more successful in Finland than in Germany. Tri-partition as applied in Finland should serve as a model for similar policies in other EU member states. EU integration has changed the formal and informal inter-institutional relations linking the different levels of government. The third sector, including non-governmental institutions and interest groups, has gained access to policy-making processes and increasingly interacts with government institutions at all levels of public administration. These developments do not necessarily result in the empowerment of the local level.
Abstract:
This thesis is a collection of three essays on Bangladeshi microcredit. One of the essays examines the effect of microcredit on the cost of crime. The other two analyze the functioning mechanisms of microcredit programs, i.e. credit allocation rules and credit recovery policy. In Essay 1, the demand for microcredit and its allocation rules are studied. Microcredit is claimed to be the most effective means of supplying credit to the poorest of the poor in rural Bangladesh. This claim has not yet been examined among households who demand microcredit. The results of this essay show that educated households are more likely to demand microcredit and that its demand does not differ by sex. The results also show that microcredit programs follow different credit allocation rules for male and female applicants. Education is an essential characteristic for both sexes that credit programs consider in allocating credit. In Essay 2, the focus is on establishing a link between microcredit and the incidence of rural crime in Bangladesh. The basic hypothesis is that microcredit programs hold borrower groups jointly responsible, which provides an incentive for group members to protect each other from criminal gangs in order to safeguard their own economic interests. The key finding of this essay is that the average cost of crime for non-borrowers is higher than that for borrowers. In particular, a 10% increase in credit reduces the cost of crime by 4.2%. The third essay analyzes the reasons for the high repayment rates of Bangladeshi microcredit programs. The existing literature argues that credit applicants are able to screen out high-risk applicants in the group formation stage using their superior local information. In addition, due to the joint liability mechanism of the programs, group members monitor each other's economic activities to ensure minimal misuse of credit.
The arguments in the literature are based on the assumption that once the credit is provided, credit programs have no further role in ensuring that repayments are honored by the group. In contrast, using survey data, this essay documents that credit programs additionally apply organizational pressure, such as humiliation and harassment of non-payers, to recover unpaid installments. The results also show that the group mechanisms do not have a significant effect on recovering defaulted dues.
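The elasticity reported in Essay 2 amounts to a simple constant-elasticity calculation. A minimal sketch, in which only the 0.42 elasticity comes from the essay and the baseline cost figure is invented for illustration:

```python
def cost_of_crime_after_credit_increase(baseline_cost, credit_increase_pct,
                                        elasticity=-0.42):
    """Constant-elasticity approximation: a credit increase of p percent
    changes the cost of crime by elasticity * p percent."""
    change_pct = elasticity * credit_increase_pct
    new_cost = baseline_cost * (1 + change_pct / 100)
    return new_cost, change_pct

# a 10% credit increase lowers a (hypothetical) baseline cost of 1000 by 4.2%
new_cost, change = cost_of_crime_after_credit_increase(1000.0, 10)
```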
Abstract:
Climate change contributes directly or indirectly to changes in species distributions, and there is very high confidence that recent climate warming is already affecting ecosystems. The Arctic has experienced the greatest regional warming in recent decades, and the trend is continuing. However, studies on northern ecosystems are scarce compared to those on more southerly regions. A better understanding of past and present environmental change is needed to be able to forecast the future. Multivariate methods were used to explore the distributional patterns of chironomids in 50 shallow (≤ 10 m) lakes in relation to 24 environmental variables determined in northern Fennoscandia, in the ecotonal area stretching from the boreal forest in the south to the orohemiarctic zone in the north. The highest taxon richness was noted at middle elevations, around 400 m a.s.l. Significantly lower values were observed in cold lakes situated in the tundra zone. Lake water alkalinity had the strongest positive correlation with taxon richness. Many taxa showed a preference for lakes in either the tundra or the forested area. The variation in the chironomid abundance data was best correlated with sediment organic content (LOI), lake water total organic carbon content, pH and air temperature, with LOI being the strongest variable. Three major lake groups were separated on the basis of their chironomid assemblages: (i) small and shallow organic-rich lakes, (ii) large and base-rich lakes, and (iii) cold and clear oligotrophic tundra lakes. The environmental variables that best discriminated the lake groups were LOI, taxon richness, and Mg. When repeated, this kind of approach could be a useful and efficient way of monitoring the effects of global change on species ranges. Many species of fast-spreading insects, including chironomids, show a remarkable ability to track environmental changes. Based on this ability, past environmental conditions have been reconstructed using their chitinous remains in lake sediment profiles.
In order to study the Holocene environmental history of subarctic aquatic systems, and to quantitatively reconstruct past temperatures at or near the treeline, long sediment cores covering the last 10,000 years (the Holocene) were collected from three lakes. Lower temperature values than expected, given the presence of pine in the catchment during the mid-Holocene, were reconstructed from a lake with great water volume and depth. The lake provided a thermal refuge for profundal, cold-adapted taxa during the warm period. In a shallow lake, the decrease in the reconstructed temperatures during the late Holocene may reflect an indirect response of the midges to climate change through, e.g., a pH change. The results from the three lakes indicated that the response of chironomids to climate has been more or less indirect. However, concurrent shifts in chironomid assemblages and vegetation in two lakes during the Holocene indicated that the midges, together with the terrestrial vegetation, had responded to the same ultimate cause, which was most likely Holocene climate change. This was also supported by the similarity in the long-term trends in faunal succession of the chironomid assemblages in several lakes in the area. In northern Finnish Lapland, the distribution of chironomids was significantly correlated with physical and limnological factors that are most likely to change as a result of future climate change. The indirect and individualistic response of aquatic systems to past climate change, as reconstructed using the chironomid assemblages, suggests that in the future, lake ecosystems in the north will not respond in one predictable way to global climate change. Lakes in the north may respond in various ways that depend on the initial characteristics of the catchment area and the lake.
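Quantitative reconstructions of this kind are commonly built with weighted-averaging (WA) transfer functions: each taxon is assigned a temperature optimum (the abundance-weighted mean temperature of the calibration lakes in which it occurs), and a fossil sample's temperature is then the abundance-weighted mean of the optima of the taxa it contains. A minimal sketch of that idea — the taxa, abundances and temperatures below are invented for illustration, not taken from the thesis:

```python
def taxon_optima(calibration):
    """Abundance-weighted mean temperature (WA optimum) for each taxon,
    from a modern calibration set of (temperature, {taxon: abundance})."""
    sums, weights = {}, {}
    for temp, sample in calibration:
        for taxon, ab in sample.items():
            sums[taxon] = sums.get(taxon, 0.0) + ab * temp
            weights[taxon] = weights.get(taxon, 0.0) + ab
    return {t: sums[t] / weights[t] for t in sums}

def reconstruct(sample, optima):
    """WA reconstruction: abundance-weighted mean of the taxon optima
    present in a fossil sample."""
    num = sum(ab * optima[t] for t, ab in sample.items() if t in optima)
    den = sum(ab for t, ab in sample.items() if t in optima)
    return num / den

# invented calibration set: July air temperature (deg C) and relative abundances
calibration = [
    (8.0,  {"Micropsectra": 0.7, "Tanytarsus": 0.3}),
    (12.0, {"Tanytarsus": 0.5, "Chironomus": 0.5}),
    (15.0, {"Chironomus": 0.9, "Procladius": 0.1}),
]
optima = taxon_optima(calibration)
fossil_temp = reconstruct({"Tanytarsus": 0.5, "Micropsectra": 0.5}, optima)
```

Applied down a dated sediment core, sample by sample, this yields the kind of Holocene temperature series described above (real applications add cross-validation and tolerance or partial-least-squares refinements).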
Abstract:
This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely, hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for understanding the research papers. We also review the standard linear cosmological perturbation theory. The second-order perturbation theory formalism for two scalar fields is developed. We explain what is meant by non-Gaussian perturbations, and discuss some difficulties in parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied.
We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance, as well as tachyonic preheating, can easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities, which differ from the perturbation theory method chosen in the thesis work.
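For reference, the nonlinearity parameter discussed above is conventionally defined through the local parametrisation, in which the (Bardeen) potential is written as a Gaussian field plus a quadratic correction:

```latex
\Phi(\mathbf{x}) \;=\; \phi(\mathbf{x})
  \;+\; f_{\mathrm{NL}}\!\left(\phi^{2}(\mathbf{x}) - \langle\phi^{2}\rangle\right),
```

where $\phi$ is the Gaussian linear perturbation and $f_{\mathrm{NL}}$ is the nonlinearity parameter. A nonzero $f_{\mathrm{NL}}$ generates a nonvanishing three-point function (bispectrum), which is why non-Gaussianity probes beyond the power spectrum.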
Abstract:
The first observations of solar X-rays date back to the late 1940s. In order to observe solar X-rays, the instruments have to be lifted above the Earth's atmosphere, since all high-energy radiation from space is almost totally attenuated by it. This is a good thing for all living creatures, but bad for X-ray astronomers. Detectors observing X-ray emission from space must be placed on board satellites, which makes this particular discipline of astronomy technologically and operationally demanding, as well as very expensive. In this thesis, I have focused on detectors dedicated to observing solar X-rays in the energy range 1-20 keV. The purpose of these detectors was to measure solar X-rays simultaneously with another X-ray spectrometer measuring fluorescence X-ray emission from the Moon's surface. The X-ray fluorescence emission is induced by the primary solar X-rays: if the elemental abundances on the Moon are to be determined with fluorescence analysis methods, the shape and intensity of the simultaneous solar X-ray spectrum must be known. The aim of this thesis is to describe the characterization and operation of our X-ray instruments on board two Moon missions, SMART-1 and Chandrayaan-1. The independent solar science performance of these two nearly identical X-ray spectrometers is also described. The detectors have two features in common: firstly, the primary detection element is made of a single-crystal silicon diode; secondly, the field of view is circular and very large. The data obtained from these detectors are spectra with a 16-second time resolution. Before an instrument is launched into space, its performance must be characterized by ground calibrations. The basic operation of these detectors and their ground calibrations are described in detail. Two C-flares are analyzed as examples to introduce the spectral fitting process. The first flare analysis shows the fit of a single spectrum of the C1 flare obtained during the peak phase.
The other analysis example shows how to derive the time evolution of fluxes, emission measures (EM) and temperatures through a whole single C4 flare with a time resolution of 16 s. The preparatory data analysis procedures, which are required for the spectral fitting of the data, are also introduced in detail. A new solar monitor design, equipped with concentrator optics and a moderately sized field of view, is also introduced.
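The isothermal fitting step described above can be caricatured in a few lines: for a single-temperature plasma the photon spectrum falls off roughly as EM · exp(−E/kT), so the temperature and the emission-measure normalisation can be recovered by a grid search over kT, with a closed-form least-squares normalisation at each trial temperature. This is a hypothetical sketch only, not the thesis pipeline: the Gaunt factor, line emission and the instrument response are all ignored.

```python
import math

def thermal_spectrum(energies_keV, kT_keV, norm):
    """Simplified isothermal bremsstrahlung shape (Gaunt factor ignored):
    F(E) ~ norm * exp(-E/kT) / (E * sqrt(kT))."""
    return [norm * math.exp(-E / kT_keV) / (E * math.sqrt(kT_keV))
            for E in energies_keV]

def fit_isothermal(energies_keV, observed, kT_grid):
    """Grid-search kT; the normalisation enters linearly, so its
    least-squares value is sum(obs*shape) / sum(shape^2)."""
    best = None
    for kT in kT_grid:
        shape = [math.exp(-E / kT) / (E * math.sqrt(kT)) for E in energies_keV]
        norm = (sum(o * s for o, s in zip(observed, shape))
                / sum(s * s for s in shape))
        resid = sum((o - norm * s) ** 2 for o, s in zip(observed, shape))
        if best is None or resid < best[0]:
            best = (resid, kT, norm)
    _, kT_fit, norm_fit = best
    return kT_fit, norm_fit

# synthetic noiseless spectrum over the 1-20 keV band, kT = 1.2 keV
energies = [1.0 + 0.5 * i for i in range(39)]
observed = thermal_spectrum(energies, 1.2, 100.0)
kT_grid = [round(0.5 + 0.05 * i, 2) for i in range(31)]  # 0.5-2.0 keV
kT_fit, norm_fit = fit_isothermal(energies, observed, kT_grid)
```

Applied to each 16 s spectrum in turn, a fit of this general kind yields the temperature and emission-measure time series through a flare.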