35 results for AM1 calculation
Abstract:
Atmospheric aerosol particles affect the global climate as well as human health. In this thesis, the formation of nanometer-sized atmospheric aerosol particles and their subsequent growth was observed to occur all around the world. Typical formation rates of 3 nm particles varied from 0.01 to 10 cm^-3 s^-1. Formation rates one order of magnitude higher were detected in urban environments, and the highest formation rates, up to 10^5 cm^-3 s^-1, were detected in coastal areas and in industrial pollution plumes. Subsequent growth rates varied from 0.01 to 20 nm h^-1. The smallest growth rates were observed in polar areas and the largest in polluted urban environments, probably due to competition between growth by condensation and loss by coagulation. Observed growth rates were used in the calculation of a proxy condensable vapour concentration and its source rate in vastly different environments, from pristine Antarctica to polluted India. Estimated concentrations varied by only two orders of magnitude, but the source rates for the vapours varied by up to four orders of magnitude. The highest source rates were found in New Delhi and the lowest in Antarctica. Indirect methods were applied to study the growth of freshly formed particles in the atmosphere. A newly developed water condensation particle counter, the TSI 3785, was also found to be a potential candidate for detecting the water solubility, and thus indirectly the composition, of atmospheric ultrafine particles. Based on indirect methods, the relative roles of sulphuric acid, non-volatile material and coagulation were investigated in rural Melpitz, Germany. Condensation of non-volatile material explained 20-40% of the growth, and sulphuric acid most of the remainder, up to the point when the nucleation mode reached 10 to 20 nm in diameter. Coagulation typically contributed less than 5%. Furthermore, hygroscopicity measurements were applied to detect the contribution of water-soluble and insoluble components in Athens.
During more polluted days, water-soluble components contributed more to the growth; during periods of less anthropogenic influence, non-soluble compounds explained a larger fraction of it. In addition, long-range transport of a relatively polluted air mass to a measurement station in Finland was found to affect the hygroscopicity of the particles. This aging could have implications for cloud formation far away from the pollution sources.
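The growth-rate-to-vapour-concentration calculation mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's exact parametrisation: the coefficient (roughly 1.37e7 molecules cm^-3 of a sulphuric-acid-like vapour per 1 nm h^-1 of growth) and the steady-state source-rate relation Q = C * CS are assumptions taken from the general aerosol literature.

```python
# Illustrative proxy calculation: condensable vapour concentration implied
# by an observed particle growth rate, and the source rate needed to
# sustain it against a condensation sink. The coefficient below is an
# assumed literature value for a sulphuric-acid-like vapour.

C_PER_UNIT_GR = 1.37e7  # vapour concentration (cm^-3) producing GR = 1 nm/h

def proxy_vapour_concentration(growth_rate_nm_per_h):
    """Condensable vapour concentration (cm^-3) implied by a growth rate."""
    return C_PER_UNIT_GR * growth_rate_nm_per_h

def vapour_source_rate(concentration_cm3, condensation_sink_s):
    """Steady-state source rate Q = C * CS (cm^-3 s^-1); CS (s^-1) is the
    rate at which pre-existing particles scavenge the vapour."""
    return concentration_cm3 * condensation_sink_s

# Polar air (GR ~ 0.01 nm/h) versus polluted urban air (GR ~ 20 nm/h):
c_polar = proxy_vapour_concentration(0.01)
c_urban = proxy_vapour_concentration(20.0)
```

Because the growth rates span roughly three orders of magnitude, so do the implied concentrations; differences in the condensation sink between clean and polluted air then stretch the spread of the source rates further.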
Abstract:
Time-dependent backgrounds in string theory provide a natural testing ground for dynamical phenomena that cannot be reliably addressed in usual quantum field theories and cosmology. A good, tractable example is the rolling tachyon background, which describes the decay of an unstable brane in bosonic and supersymmetric Type II string theories. In this thesis I use boundary conformal field theory, along with random matrix theory and Coulomb gas thermodynamics techniques, to study open and closed string scattering amplitudes off the decaying brane. The calculation of the simplest example, the tree-level amplitude of n open strings, would give the emission rate of the open strings; however, even this has so far remained unknown. I organize the open string scattering computations in a more coherent manner and argue how to make further progress.
Abstract:
This thesis deals with theoretical modeling of the electrodynamics of auroral ionospheres. In the five research articles forming the main part of the thesis we have concentrated on two main themes: the development of new data-analysis techniques and the study of inductive phenomena in ionospheric electrodynamics. The introductory part of the thesis provides a background for these new results and places them in the wider context of ionospheric research. In this thesis we have developed a new tool (called 1D SECS) for analysing ground-based magnetic measurements from a 1-dimensional magnetometer chain (usually aligned in the North-South direction) and a new method for obtaining the ionospheric electric field from combined ground-based magnetic measurements and estimated ionospheric conductance. Both methods are based on earlier work but contain important new features: 1D SECS respects the spherical geometry of large-scale ionospheric electrojet systems, and thanks to an innovative implementation of the boundary conditions, the new method for obtaining electric fields can also be applied in local-scale studies. These new calculation methods have been tested using both simulated and real data, and the tests indicate that the new methods are more reliable than the previous techniques. Inductive phenomena are intimately related to temporal changes in electric currents. As the large-scale ionospheric current systems change relatively slowly, on time scales of several minutes or hours, inductive effects are usually assumed to be negligible. However, during the past ten years it has been realised that induction can play an important part in some ionospheric phenomena. In this thesis we have studied the role of inductive electric fields and currents in ionospheric electrodynamics. We have formulated the induction problem so that only ionospheric electric parameters are used in the calculations.
This is in contrast to previous studies, which require knowledge of magnetosphere-ionosphere coupling. We have applied our technique to several realistic models of typical auroral phenomena. The results indicate that inductive electric fields and currents are locally important during the most dynamic phenomena (such as the westward travelling surge, WTS), where induction may locally contribute up to 20-30% of the total ionospheric electric field and currents. Inductive phenomena also change the field-aligned currents flowing between the ionosphere and the magnetosphere, thus modifying the coupling between the two regions.
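The inversion idea behind elementary-current-system methods such as 1D SECS can be sketched in a few lines: the ground magnetic disturbances are linear in the unknown amplitudes of the elementary current systems, so the amplitudes are recovered by regularised least squares. The transfer matrix below is a random toy stand-in; in the real method its entries follow from the Biot-Savart fields of elementary currents on a spherical shell, which is not reproduced here.

```python
# Toy sketch of an elementary-current-system inversion: magnetometer data
# b depend linearly on current amplitudes x (b = A @ x), and x is found by
# Tikhonov-regularised least squares. A is a random placeholder for the
# physical transfer matrix.
import numpy as np

rng = np.random.default_rng(0)

n_currents, n_stations = 8, 12
A = rng.normal(size=(n_stations, n_currents))        # toy transfer matrix
x_true = rng.normal(size=n_currents)                 # "true" amplitudes
b = A @ x_true + 0.01 * rng.normal(size=n_stations)  # noisy magnetometer data

lam = 1e-3  # regularisation damps amplitudes the data barely constrain
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_currents), A.T @ b)
```

The regularisation term is what lets such methods remain stable when the station chain only partially covers the current system, which is one reason boundary conditions matter for local-scale use.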
Abstract:
In recent decades, nation-states have become major stakeholders in nonhuman genetic resource networks as a result of several international treaties. The most important of these is the juridically binding international Convention on Biological Diversity (CBD), signed at the Rio Earth Summit in 1992 by some 150 nations. This convention was a watershed in the identification of global rights related to genetic resources, as it recognised the sovereign power of signatory nations over their natural resources. The contracting parties are legally obliged to identify their native genetic material and to take legislative, administrative, and/or policy measures to foster research on genetic resources. In this process of global bioprospecting in the name of biodiversity conservation, the world's nonhuman genetic material is to be indexed according to nation and nationality. This globally legitimated process of native genetic identification inscribes national identity into nature and flesh. As a consequence, this new form of potential national biowealth also forms what could be called novel nonhuman genetic nationhoods. These national corporealities are produced in tactical and strategic encounters of the political and the scientific, in new spaces crafted through technical and institutional innovation, and in the national reconfiguration of the natural and the cultural as framed by international political agreements. This work follows the creation of national genetic resources in one of the biodiversity-poor countries of the North, Finland. The thesis is an ethnographic work addressing the calculation of life: practices of identifying, evaluating, and collecting nonhuman life in national genetic programmes. The core of the thesis consists of observations made within the Finnish Genetic Resources Programmes in 2004-2008, gathered via multi-sited ethnography and related methods derived from the anthropology of science.
The thesis explores the problematic relations of the communal forms of human and nonhuman life in an increasingly technoscientific contemporaneity: the co-production and coexistence of human and nonhuman life in biopolitical formations called nations.
Abstract:
The thesis examines urban issues arising from the transformation from state socialism to a market economy. The main topics are residential differentiation, i.e., the uneven spatial distribution of social groups across urban residential areas, and the effects of housing policy and town planning on urban development. The case study is the development of Tallinn, the capital city of Estonia, in the context of the development of Central and Eastern European cities under and after socialism. The main body of the thesis consists of four separately published refereed articles. The research question that brings the articles together is how the residential (socio-spatial) pattern of cities developed during the state socialist period and how and why that pattern has changed since the transformation to a market economy began. The first article reviews the literature on residential differentiation in Budapest, Prague, Tallinn and Warsaw under state socialism from the viewpoint of the role of housing policy in the processes of residential differentiation at various stages of the socialist era. The paper shows how the socialist housing provision system produced socio-occupational residential differentiation both directly and indirectly, and it describes how the residential patterns of these cities developed. The second article is critical of oversimplified accounts of rapid reorganisation of the overall socio-spatial pattern of post-socialist cities and of claims that residential mobility has had a straightforward role in it. The Tallinn case study, consisting of an analysis of the distribution of socio-economic groups across eight city districts and over four housing types in 1999, as well as an examination of the role of residential mobility in differentiation during the 1990s, provides contrasting evidence. The third article analyses the role and effects of housing policies in Tallinn's residential differentiation.
The focus is on contemporary post-privatisation housing-policy measures and their effects. The article shows that Estonian housing policies do not even aim to reduce, prevent or slow down the harmful effects of the considerable income disparities that are manifest in housing inequality and residential differentiation. The fourth article examines the development of Tallinn's urban planning system in 1991-2004 from the viewpoint of what means it has given the city to intervene in urban development and how the city has used these tools. The paper finds that despite some recent progress in planning, its role in guiding where and how the city actually developed has so far been limited. Tallinn's urban development is instead initiated and driven by private agents seeking profit from their investment in land. The thesis includes original empirical research in the three articles that analyse development since socialism. The second article employs quantitative data and methods, primarily index calculation, whereas the third and fourth draw on a survey of policy documents combined with interviews with key informants. Keywords: residential differentiation, housing policy, urban planning, post-socialist transformation, Estonia, Tallinn
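The kind of index calculation used in residential-differentiation studies can be illustrated with the index of dissimilarity, a standard segregation measure; the specific indices and data used in the thesis are not given here, and the district counts below are hypothetical.

```python
# Index of dissimilarity D: half the sum of absolute differences between
# two groups' population shares across spatial units. D = 0 means the two
# groups are identically distributed; D = 1 means complete segregation.

def dissimilarity_index(group_a, group_b):
    """D in [0, 1] for two groups' counts over the same set of districts."""
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Hypothetical counts of two socio-economic groups in four districts:
white_collar = [400, 300, 200, 100]
blue_collar = [100, 200, 300, 400]
D = dissimilarity_index(white_collar, blue_collar)  # here D = 0.4
```

D is interpretable as the share of either group that would have to move to another district to equalise the two distributions, which is why it is a common summary of socio-spatial pattern change over time.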
Abstract:
The evacuation of Finnish children to Sweden during WW II has often been called a "small migration". Historical research on this subject is scarce, considering the great number of children involved. The present research has applied, apart from traditional archive research, the framework of history-culture developed by Rüsen in order to take an all-inclusive approach to the impact of this historical event. The framework has three dimensions: political, aesthetic and cognitive. The collective memory of the war children is also discussed. The research looks for political factors involved in the evacuations during the Winter War, the Continuation War and the post-war period. The approach is wider than a purely humanitarian one. Political factors had an impact in both Finland and Sweden, beginning with the decision-making process and ending with the discussion of the unexpected consequences of the evacuations in the Finnish Parliament in 1950. The Winter War (30.11.1939-13.3.1940) witnessed the first child transports, which also became the model for future decision making. The transports were begun on the initiative of the Swedes Maja Sandler, the wife of the resigned minister of foreign affairs Rickard Sandler, and Hanna Rydh-Munck af Rosenschöld, but this activity was soon accepted by the Swedish government, because humanitarian help in the form of child transports lightened the political burden of Prime Minister Hansson, who was not willing to help Finland militarily. It was help that Finland never asked for, and it was rejected at the beginning; the negative response of Minister Juho Koivisto was not taken very seriously. The political forces in Finland supporting child transports were stronger than those rejecting them. The major politicians in support belonged to Finland's Swedish minority. In addition, close to 1,000 Finnish children remained in Sweden after the Winter War. No analysis was made of the reasons why these children did not return home.
A committee set up to help Finland and Norway was established in Sweden in 1941. Its chairman was Torsten Nothin, an influential Swedish politician. In December 1941 he appealed to the Swedish government to provide help to Finnish children under the authority of the International Red Cross. This plea had no results. The delivery of large amounts of food to Finland, which was now at war with Great Britain, had automatically caused reactions among the Allies against Swedish imports through Gothenburg. These included the import of oil, which was essential for the Swedish navy and air force; oil was later used successfully to force a reduction in commerce between Sweden and Finland. The contradiction between Sweden's essential political interests and humanitarian help was solved in a way that did not harm the country's vital political interests. Instead of delivering help to Finland, Finnish children were transported to Sweden through the organisations that had already been created. At the beginning of the Continuation War (25.6.1941-27.4.1945), negative opinion regarding child transports re-emerged in Finland. Karl-August Fagerholm nevertheless implemented the transports in September 1941. In 1942, members of the conservative parties in the Finnish Parliament expressed their fear of losing the children to the Swedes. They suggested that Finland should withdraw from the inter-Nordic agreement according to which adoptions were approved by the court of the country where the child resided. This initiative failed. Paavo Virkkunen, an influential member of the conservative party Kokoomus in Finland, favoured the so-called good-father system, whereby help was delivered to Finland in the form of money and goods. Virkkunen was concerned about the consequences of a long stay with a Swedish family; the risk of losing the children was clear.
The extreme conservative party (IKL, the Patriotic Movement of the Finnish People) wanted to alienate Finland from Sweden and bring Finland closer to Germany. Von Blücher, the German ambassador to Finland, had mentioned the political consequences of the child transports in his reports to Berlin: among other things, they would bring Finland and Sweden closer to each other. He had also paid attention to the Nordic political orientation in Finland. He did not question or criticise the child transports; his main interest was to increase German political influence in Finland, and the Nordic political orientation was an obstacle. Fagerholm was politically ill-favoured by the Germans, because he had a strong Nordic political disposition and had criticised Germany's activities in Norway. Criticism of the child transports was at the same time criticism of Fagerholm. The official censorship organ of the Finnish government (VTL) forbade criticism of the child transports in January 1942. The reasons were political. Statements made by members of the Finnish Parliament were also censored, because it was thought that they would offend the Swedes. In addition, the censorship organ used the child transports as a means of active propaganda aimed at improving the relations between the two countries. The Finnish Parliament was informed in 1948 that about 15,000 Finnish children still remained in Sweden, and that these children would stay there permanently. In 1950 the members of the Agrarian Party in Finland stated that Finland should actively strive to get the children back. The party on the left (SKDL, the Democratic Movement of the Finnish People) also focused on the unexpected consequences of the child transports. The Social Democrats, and largely Fagerholm, had been the main force in Finland behind the child transports. Members of the SKDL, controlled by Finland's Communist Party, stated that the wartime authorities were responsible for this war loss.
Many of the Finnish parents could not get their children back despite repeated requests. The discussion of the problem became political: for example, von Born, a member of the Swedish minority party RKP, related the problem to foreign policy by stating that the request to repatriate the Finnish children would have negative political consequences for the relations between Finland and Sweden. He emphasised expressing feelings of gratitude to the Swedes. After the war a new foreign policy was established by Prime Minister (1944-1946) and later President (1946-1956) Juho Kusti Paasikivi. The main cornerstone of this policy was to establish good relations with the Soviet Union. The other, often forgotten, cornerstone was to simultaneously establish good relations with the other Nordic countries, especially Sweden, as a counterbalance. The unexpected results of the child evacuation, a Swedish initiative, had strained the good relations with Sweden. The motives of the Democratic Movement of the Finnish People were much the same as those of the Patriotic Movement of the Finnish People; only the ideology was different. The Nordic political orientation was an obstacle to both parties. The position of the Democratic Movement of the Finnish People was much better than that of the Patriotic Movement of the Finnish People, because by now one could clearly see the unexpected results, which included human tragedy for the many families who could not be reunited with their children despite their repeated requests. The Swedes questioned the figure given to the Finnish Parliament regarding the number of children permanently remaining in Sweden. This research agrees with the Swedes: in a calculation based on Swedish population registers, the number of these children is about 7,100. The reliability of this figure is increased by the fact that the child allowance programme began in Sweden in 1948, and the prerequisite for receiving this allowance was that the child be in the Swedish population register.
It was not necessary for the child to have Swedish nationality. The Finnish Parliament had false information about the number of Finnish children who remained in Sweden in 1942 and in 1950. There was no parliamentary control in Finland regarding the child transports, because the decision was made by one cabinet member, and speeches by MPs in the Finnish Parliament were censored, like all criticism regarding the child transports to Sweden. In Great Britain parliamentary control worked better throughout the whole war, because speeches regarding evacuation were not censored. At the beginning of the war certain members of the British Labour Party and the Welsh Nationalists were particularly outspoken about the scheme. Fagerholm does not discuss the child transports to any great extent in his memoirs, and he does not evaluate the process and results as a whole. This research provides some possibilities for such an evaluation. The Swedish medical reports give a clear picture of the physical condition of the Finnish children on arriving in Sweden. The transports actually revealed how bad the situation of the poorest children was; according to Titmuss, similar observations were made in Great Britain during the British evacuations. The child transports saved the lives of approximately 2,900 children. Most of these children were moved to Sweden to receive treatment for illnesses, but many among the healthy children were undernourished and some suffered from the effects of tuberculosis. The medical inspection in Finland was not thorough. If one compares the figure of 2,900 children saved and returned with the figure of about 7,100 children who remained permanently in Sweden, one may draw the conclusion that Finland as a country failed to benefit from the child transports, and that the whole operation was a political mistake with far-reaching consequences. The basic goal of the operation was to save lives and have all the children return to Finland after the war.
The difficulties with the repatriation of the children were mainly psychological. The level of child psychology in Finland at that time was low. One may question the report by Professor Martti Kaila regarding the adaptation of the children to their families back in Finland. Anna Freud's warnings concerning the difficulties that arise when child evacuees return are also valid for Finland. Freud viewed the emotional life of children in a way different from Kaila: the physical survival of a small child forces her to create strong emotional ties to the person who is looking after her. This, a characteristic of all small children, occurred with the Finnish children too, and it was something the political decision makers in Finland could not see during and after the war. Yet such experiences were already evident during the Winter War. The best possible solution would have been to limit the child transports to children in need of medical treatment. Children from large and poor families could have been helped by organising meals and by buying food from Denmark with Swedish money. Assisting Finland by all possible means should have been the basic goal of Fagerholm in September 1941, when the offer of child transports came from Sweden. Fagerholm felt gratitude towards the Swedes; the risks became clear to him only in 1943. The war children are today a rather scattered and diffuse group of people. Emotionally, part of these children remained in Sweden after the war. There is no clear collective memory, only individual memories; the collective memory of the war children has partly been shaped later through the activities of the war child associations. The main difference between the children evacuated within Finland (for example from Karelia to safer areas with their families) and the war children, who were sent abroad, is that the war children lack a shared story and experience with their families. They were "outsiders".
The whole matter is sensitive to many of these mothers, and discussing the subject has often been avoided in families. The wartime censorship has continued in families through silence and avoidance, and Finnish politicians and Finnish families had to face each other on this issue after the war. The lack of all-inclusive historical research has also prevented the formation of a collective awareness among war children, whether returned to Finland or remaining permanently abroad. Knowledge of the historical facts will help the war children by providing an opportunity to create an all-inclusive approach to the past. Personal experiences should be regarded as part of a larger historical entity shadowed by war, in which many political factors were at work in both Finland and Sweden. This means strengthening the cognitive dimension discussed in Rüsen's all-inclusive historical approach.
Abstract:
Lipid analysis is commonly performed by gas chromatography (GC) in laboratory conditions. Spectroscopic techniques, however, are non-destructive and can be implemented noninvasively in vivo. Excess fat (triglycerides) in visceral adipose tissue and liver is known to predispose to metabolic abnormalities, collectively known as the metabolic syndrome. Insulin resistance is the likely cause, with diets high in saturated fat known to impair insulin sensitivity. Tissue triglyceride composition has been used as a marker of dietary intake, but it can also be influenced by tissue-specific handling of fatty acids. Recent studies have shown that adipocyte insulin sensitivity correlates positively with saturated fat content, contradicting the common view of dietary effects. A better understanding of the factors affecting tissue triglyceride composition is needed to provide further insights into tissue function in lipid metabolism. In this thesis two spectroscopic techniques were developed for in vitro and in vivo analysis of tissue triglyceride composition. The in vitro studies (Study I) used infrared spectroscopy (FTIR), a fast and cost-effective analytical technique well suited for multivariate analysis. Infrared spectra are characterized by peak overlap, leading to poorly resolved absorbances and limited analytical performance. The in vivo studies (Studies II, III and IV) used proton magnetic resonance spectroscopy (1H-MRS), an established non-invasive clinical method for measuring metabolites in vivo. 1H-MRS has been limited in its ability to analyze triglyceride composition due to poorly resolved resonances. Using an attenuated total reflection accessory, we were able to obtain pure triglyceride infrared spectra from adipose tissue biopsies. Using multivariate curve resolution (MCR), we were able to resolve the overlapping double bond absorbances of monounsaturated and polyunsaturated fat.
MCR also resolved the isolated trans double bond and conjugated linoleic acids from an overlapping background absorbance. Using oil phantoms to study the effects of different fatty acid compositions on the echo time behaviour of triglycerides, it was concluded that the use of long echo times improved peak separation, with T2 weighting having a negligible impact. It was also discovered that the echo time behaviour of the methyl resonance of omega-3 fats differed from that of other fats due to characteristic J-coupling. This novel insight could be used to detect omega-3 fats in human adipose tissue in vivo at very long echo times (TE = 470 and 540 ms). A comparison of 1H-MRS of adipose tissue in vivo with GC of adipose tissue biopsies in humans showed that long-TE spectra resulted in improved peak fitting and better correlations with GC data. The study also showed that calculation of fatty acid fractions from 1H-MRS data is unreliable and should not be used. Omega-3 fatty acid content derived from long-TE in vivo spectra (TE = 540 ms) correlated with total omega-3 fatty acid concentration measured by GC. The long-TE protocol used for the adipose tissue studies was subsequently extended to the analysis of liver fat composition. Respiratory triggering and long TE resulted in spectra with the olefinic and tissue water resonances resolved. Conversion of the derived unsaturation to double bond content per fatty acid showed that the results were in accordance with previously published gas chromatography data on liver fat composition. In patients with the metabolic syndrome, liver fat was found to be more saturated than subcutaneous or visceral adipose tissue. The higher saturation observed in liver fat may be a result of a higher rate of de novo lipogenesis in the liver than in adipose tissue. This thesis has introduced the first non-invasive method for determining adipose tissue omega-3 fatty acid content in humans in vivo.
The methods introduced here have also shown that liver fat is more saturated than adipose tissue fat.
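The multivariate curve resolution step described above can be sketched with a minimal alternating-least-squares (MCR-ALS) loop: a matrix of measured spectra D (samples x wavenumbers) is factored as D ≈ C S into non-negative concentration profiles C and pure component spectra S. The two overlapping Gaussian peaks below are synthetic stand-ins for real FTIR absorbances, and the simple clipping used to enforce non-negativity is an illustrative choice, not necessarily the constraint scheme used in the thesis.

```python
# Toy MCR-ALS: factor a spectra matrix D into non-negative concentrations
# C and pure spectra S by alternating least squares, starting from a
# perturbed guess. Synthetic data with two overlapping peaks stand in for
# the monounsaturated/polyunsaturated double-bond absorbances.
import numpy as np

rng = np.random.default_rng(1)

grid = np.linspace(0.0, 1.0, 50)
S_true = np.vstack([np.exp(-((grid - 0.40) / 0.08) ** 2),   # component 1
                    np.exp(-((grid - 0.55) / 0.08) ** 2)])  # overlapping peak
C_true = rng.uniform(0.1, 1.0, size=(20, 2))                # mixture fractions
D = C_true @ S_true                                         # 20 mixed spectra

S = S_true + 0.1 * rng.normal(size=S_true.shape)  # perturbed initial guess
for _ in range(200):
    C = np.clip(D @ np.linalg.pinv(S), 0.0, None)  # update concentrations
    S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)  # update pure spectra

residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

On this noise-free toy data the relative residual drops essentially to zero; with real spectra, rotational ambiguity makes the choice of constraints and initial estimates the critical part of the analysis.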
Abstract:
We report the first measurement of the cross section for Z boson pair production at a hadron collider. This result is based on a data sample corresponding to 1.9 fb^-1 of integrated luminosity from ppbar collisions at sqrt(s) = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. In the llll channel, we observe three ZZ candidates with an expected background of 0.096 +0.092/-0.063 events. In the llnunu channel, we use a leading-order calculation of the relative ZZ and WW event probabilities to discriminate between signal and background. In the combination of the llll and llnunu channels, we observe an excess of events with a probability of 5.1 x 10^-6 to be due to the expected background. This corresponds to a significance of 4.4 standard deviations. The measured cross section is sigma(ppbar -> ZZ) = 1.4 +0.7/-0.6 (stat.+syst.) pb, consistent with the standard model expectation.
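The quoted background-only probability and the significance are related through the one-sided Gaussian tail, p = (1/2) erfc(z / sqrt(2)); a short bisection inverts this to recover the number of standard deviations from the reported p = 5.1 x 10^-6.

```python
# Convert a background-only p-value to a Gaussian significance z by
# inverting the one-sided tail probability p = 0.5 * erfc(z / sqrt(2)).
import math

def p_value(z):
    """One-sided Gaussian tail probability for z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def significance(p, lo=0.0, hi=10.0):
    """Invert p_value by bisection; p_value is monotonically decreasing."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if p_value(mid) > p:
            lo = mid  # tail still too fat: need a larger z
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = significance(5.1e-6)  # ~4.4, matching the quoted significance
```
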
Abstract:
Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through the management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond the agroecosystems. The questions related to farming for food production are thus manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production, "sustainabilizing" food production, calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits methods typical of industrial ecology. Research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through the application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture; in this thesis, the MFA approach was used in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture using two different approaches. In the first approach the MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios.
The two approaches were used in order to capture some of the impacts of dietary changes and of changes in production mode on the environment. The methods were applied at levels ranging from national to sector and local levels. Through the supply-demand approach, the viewpoint shifted between that of food production and that of food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated into an input-output model that was used to analyse the food flux in Finland and to determine its relationship to the economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at the regional and local levels to assess the feasibility and environmental impacts of relocalising food production. The approach was also used for the quantification and source allocation of the greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained with the two different approaches. MFA data as such, or expressed as eco-efficiency indicators, are useful in describing overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability. Eco-efficiency indicators should not be used bluntly in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economy analyses, and it shows the distribution of monetary and material flows among the various sectors.
Environmental impact can be captured only at a very general level in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method accounts for the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and the diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted to comply with the specific local circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone, nor the two together, provides sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area, comprising both the rural hinterlands of food production and the population centres of food consumption, is suggested as a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-offs. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned. 
These have to be agreed upon among the actors of the area.
Resumo:
We present a measurement of the top quark mass in the all-hadronic channel ($t\bar{t} \to b\bar{b}\,q_{1}\bar{q}_{2}q_{3}\bar{q}_{4}$) using 943 pb$^{-1}$ of $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV collected with the CDF II detector at Fermilab. We apply the standard model production and decay matrix element (ME) to $t\bar{t}$ candidate events. We calculate per-event probability densities according to the ME calculation and construct template models of signal and background. The jet energy scale is calibrated using additional templates formed from the invariant mass of pairs of jets. These templates form an overall likelihood function that depends on the top quark mass and on the jet energy scale (JES). We estimate both by maximizing this function. From 72 observed events, we measure a top quark mass of $171.1 \pm 3.7$ (stat.+JES) $\pm 2.1$ (syst.) GeV/$c^{2}$; the combined uncertainty on the top quark mass is 4.3 GeV/$c^{2}$.
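The template-likelihood idea described above, a likelihood in two parameters (top mass and JES) maximized jointly, can be illustrated with a toy sketch. This is not the CDF analysis code: the Gaussian signal template, its width, and all numerical values are invented assumptions, and background templates are omitted.

```python
import numpy as np

# Toy sketch of a two-parameter template-likelihood fit. Per-event mass
# estimates are compared to a Gaussian signal template whose mean shifts
# with the assumed top mass scaled by the jet energy scale (JES).
# All numbers here are illustrative assumptions, not CDF data.
rng = np.random.default_rng(0)
events = rng.normal(171.0, 12.0, size=72)    # 72 observed events, as in the text

def neg_log_likelihood(m_top, jes, data, width=12.0):
    """-log L for a pure-signal Gaussian template centred at m_top * jes."""
    mu, sigma = m_top * jes, width
    z = (data - mu) / sigma
    return np.sum(0.5 * z**2 + np.log(sigma * np.sqrt(2.0 * np.pi)))

# Scan a grid in (m_top, JES) and take the joint minimum of -log L.
masses = np.linspace(160.0, 180.0, 201)
jes_values = np.linspace(0.95, 1.05, 41)
grid = np.array([[neg_log_likelihood(m, j, events) for j in jes_values]
                 for m in masses])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print(f"best-fit mass ~ {masses[i]:.1f} GeV/c^2 at JES ~ {jes_values[j]:.3f}")
```

In the real measurement the JES direction is constrained by the dijet-mass templates, which breaks the degeneracy between the mass and JES that this toy scan exhibits along lines of constant product.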
Resumo:
Volatile organic compounds (VOCs) are emitted into the atmosphere from natural and anthropogenic sources, vegetation being the dominant source on a global scale. Some of these reactive compounds are deemed major contributors or inhibitors to aerosol particle formation and growth, thus making VOC measurements essential for current climate change research. This thesis discusses ecosystem scale VOC fluxes measured above a boreal Scots pine dominated forest in southern Finland. The flux measurements were performed using the micrometeorological disjunct eddy covariance (DEC) method combined with proton transfer reaction mass spectrometry (PTR-MS), which is an online technique for measuring VOC concentrations. The measurement, calibration, and calculation procedures developed in this work proved to be well suited to long-term VOC concentration and flux measurements with PTR-MS. A new averaging approach based on running averaged covariance functions improved the determination of the lag time between wind and concentration measurements, which is a common challenge in DEC when measuring fluxes near the detection limit. The ecosystem scale emissions of methanol, acetaldehyde, and acetone were substantial. These three oxygenated VOCs made up about half of the total emissions, with the rest comprised of monoterpenes. Contrary to the traditional assumption that monoterpene emissions from Scots pine originate mainly as evaporation from specialized storage pools, the DEC measurements indicated a significant contribution from de novo biosynthesis to the ecosystem scale monoterpene emissions. This thesis offers practical guidelines for long-term DEC measurements with PTR-MS. In particular, the new averaging approach to the lag time determination seems useful in the automation of DEC flux calculations. 
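The lag-time determination mentioned above can be illustrated with a minimal sketch: the lag between the wind and concentration time series is taken as the shift that maximizes their cross-covariance. This is a generic demonstration on synthetic data, not the running-averaged covariance implementation developed in the thesis.

```python
import numpy as np

# Sketch of lag-time determination by maximizing the wind-concentration
# cross-covariance. The signal, noise level, and true lag are synthetic
# illustrative assumptions.
rng = np.random.default_rng(1)
n, true_lag = 2000, 15                          # samples; lag in sample units
w = rng.normal(size=n)                          # toy vertical-wind fluctuations
c = np.roll(w, true_lag) + 0.5 * rng.normal(size=n)   # delayed, noisy tracer

def find_lag(w, c, max_lag=50):
    """Return the non-negative lag (in samples) maximizing cov(w[t], c[t+k])."""
    w = w - w.mean()
    c = c - c.mean()
    cov = [np.mean(w[:len(w) - k] * c[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(cov))

lag = find_lag(w, c)
print(lag)   # close to true_lag
```

Near the detection limit the covariance peak is buried in noise, which is exactly the situation where averaging the covariance functions over many runs, as proposed in the thesis, stabilizes the estimate.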
Seasonal variation in the monoterpene biosynthesis and the detailed structure of a revised hybrid algorithm, describing both de novo and pool emissions, should be determined in further studies to improve biological realism in the modelling of monoterpene emissions from Scots pine forests. The increasing number of DEC measurements of oxygenated VOCs will probably enable better estimates of the role of these compounds in plant physiology and tropospheric chemistry.
Keywords: disjunct eddy covariance, lag time determination, long-term flux measurements, proton transfer reaction mass spectrometry, Scots pine forests, volatile organic compounds
Resumo:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays a significant fraction of all atmospheric particles is considered to be produced by vapour-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. 
By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. Finally, we show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
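For reference, the liquid drop model underlying the Classical Nucleation Theory gives the barrier height and the critical cluster size in closed form. The sketch below evaluates these standard textbook formulas with rough illustrative property values for water near 273 K; the numbers are assumptions for demonstration, not results from the thesis.

```python
import math

# Classical Nucleation Theory (liquid drop model) sketch:
#   dG* = 16 pi sigma^3 v_l^2 / (3 (kT ln S)^2),  r* = 2 sigma v_l / (kT ln S)
# Property values are approximate illustrative assumptions for water ~273 K.
k_B   = 1.380649e-23       # Boltzmann constant, J/K
T     = 273.0              # temperature, K
sigma = 0.0756             # planar surface tension of water, N/m (approx.)
v_l   = 2.99e-29           # molecular volume in the liquid, m^3 (approx.)
S     = 5.0                # saturation ratio of the vapour (assumed)

lnS = math.log(S)
dG_star = 16.0 * math.pi * sigma**3 * v_l**2 / (3.0 * (k_B * T * lnS)**2)
r_star = 2.0 * sigma * v_l / (k_B * T * lnS)               # critical radius
n_star = (4.0 / 3.0) * math.pi * r_star**3 / v_l           # molecules in cluster

print(f"barrier ~ {dG_star / (k_B * T):.1f} kT, "
      f"critical cluster ~ {n_star:.0f} molecules")
```

A critical cluster of a few tens of molecules, as in this example, is precisely the regime where the thesis finds the liquid-drop description of the smallest clusters to break down.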
Resumo:
When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journals on very incomplete information about how well the journals serve the authors' purposes of informing about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals, providing more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate and the service provided by the journal during the review and publication process. The method uses data directly obtainable from the web, data that can be calculated from such data, data obtained from publishers and editors, and data obtained from author surveys; it has been tested on three different sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (e.g. journal circulation, web downloads, acceptance rates). The calculation of some important parameters (for instance, the average time from submission to publication or the regional spread of authorship) is possible but requires considerable work. It can be difficult to obtain reasonable response rates to author surveys. All in all, we believe that the proposed method, taking a "service to authors" perspective as the basis for benchmarking scientific journals, is useful and can provide information that is valuable to prospective authors in selected scientific disciplines.
Resumo:
The objective of this paper is to suggest a method that accounts for the impact of the volatility smile dynamics when performing scenario analysis for a portfolio of vanilla options. As the volatility smile is documented to change at least with the level of the implied at-the-money volatility, a suitable model for this dependence is included in the calculation of the simulated market scenarios. By constructing simple portfolios of index options and comparing the ex ante risk exposure, measured using different pricing methods, with the ex post realized market values, the improvement gained by incorporating the model is monitored. The examples analyzed in the study provide statistical support for the conclusion that the most accurate scenarios are those calculated using the model that accounts for the dynamics of the smile. Thus, we show that the differences emanating from the volatility smile are significant and should be accounted for, and that the methodology presented here is one suitable alternative for doing so.
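A minimal sketch of the idea can be given with a Black-Scholes revaluation under a joint spot/volatility scenario, once with the smile frozen and once with the smile moving with the at-the-money level. The linear-in-log-moneyness smile and all numbers below are illustrative assumptions, not the parameterization of the paper.

```python
from math import log, sqrt, exp, erf

# Scenario revaluation of a vanilla call with and without smile dynamics.
# The smile model and all market parameters are illustrative assumptions.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, vol):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def smile_vol(K, S, atm_vol, skew=-0.10):
    """Implied vol by strike; the slope scales with the ATM level (assumed)."""
    return atm_vol + skew * atm_vol * log(K / S)

# Scenario: spot drops 5% while ATM vol jumps from 20% to 25%.
S0, K, T, r = 100.0, 105.0, 0.5, 0.02
base    = bs_call(S0,  K, T, r, smile_vol(K, S0,  0.20))   # today's value
frozen  = bs_call(95.0, K, T, r, smile_vol(K, S0,  0.20))  # smile held fixed
dynamic = bs_call(95.0, K, T, r, smile_vol(K, 95.0, 0.25)) # smile moves
print(f"base {base:.2f}, frozen-smile {frozen:.2f}, dynamic-smile {dynamic:.2f}")
```

The gap between the frozen-smile and dynamic-smile revaluations is the scenario-analysis error that the paper's approach is designed to remove.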
Resumo:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible with respect to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than those of the general model, but with the methods that segmented the study area, the variation among the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could even be combined with kriging or with non-parametric methods such as most similar neighbours (MSN).
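One common LISA statistic, a local Moran's I on the residuals of a global model, can be sketched as follows. The synthetic data and the k-nearest-neighbour weight scheme are illustrative assumptions; the studies may have used different weight definitions.

```python
import numpy as np

# Sketch of a local Moran's I (a LISA statistic) computed for the residuals
# of a global model, using k-nearest-neighbour weights. Data are synthetic
# illustrative assumptions.
rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 100.0, size=(50, 2))    # point locations
resid = rng.normal(size=50)                   # residuals of a "global" model

def local_morans_i(xy, z, k=5):
    """Local Moran's I_i = z_i times the mean of z over the k nearest
    neighbours (z standardized); positive where similar values cluster."""
    z = (z - z.mean()) / z.std()
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)               # exclude each point itself
    nbrs = np.argsort(d, axis=1)[:, :k]       # k nearest neighbours per point
    return z * z[nbrs].mean(axis=1)

I = local_morans_i(xy, resid)
print(I.shape, float(I.mean()))
```

High positive values of I flag clusters of similar residuals, which is the information the segmentation step (CART or MS) uses, together with position and residual, to delineate homogeneous sub-areas.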