Abstract:
The aim of the study was to explore why the MuPSiNet project - a computer and network supported learning environment for the field of health care and social work - did not develop as expected. To grasp the problem, some hypotheses were formulated. The hypotheses concerned the teachers' skills in and attitudes towards computing and their attitudes towards constructivist study methods. An online survey containing 48 items was performed. The survey targeted all the teachers within the field of health care and social work in the country, and it produced 461 responses that were analysed against the hypotheses. The reliability of the variables was tested using the Cronbach alpha coefficient and t-tests. Poor basic computing skills among the teachers combined with a vulnerable technical solution, and inadequate project management combined with the lack of administrative models for transforming economic resources into manpower, were the factors that turned out to play a decisive role in the project. Other important findings were that the teachers had rather poor skills and knowledge in computing, computer safety and computer supported instruction, and that these skills were significantly poorer among female teachers, who constituted the majority of the sample. The fraction of teachers who were familiar with software for electronic patient records (EPR) was low. The attitudes towards constructivist teaching methods were positive, and further education seemed to clearly increase the teachers' readiness to use alternative teaching methods. The most important conclusions were the following: In order to integrate EPR software as a natural tool in teaching the planning and documentation of health care, it is crucial that the teachers have sufficient basic skills in computing and that more teachers have personal experience of using EPR software. In order for computer supported teaching to become accepted it is necessary to arrange extensive further education for the teachers presently working, and for that further education to succeed it should be backed up locally, among other things by sufficient support in matters concerning computer supported teaching. The attitudes towards computing showed significant gender differences. Based on the findings it is suggested that basic skills in computing should also include an awareness of data safety in relation to work in different kinds of computer networks, and that projects of this kind should be built up around a proper project organisation with sufficient resources. Suggestions concerning curricular development and further education are also presented. Conclusions concerning the research method were that reminders are more effective, and that respondents tend to answer open-ended questions more verbosely, in electronically distributed online surveys than in traditional surveys. A method of utilising randomized passwords to guarantee respondent anonymity while maintaining sample control is presented. Keywords: computer-assisted learning, computer-assisted instruction, health care, social work, vocational education, computerized patient record, online survey
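The abstract notes that the reliability of the survey variables was tested with the Cronbach alpha coefficient. As a rough illustration of that statistic only (not the authors' actual analysis), a minimal sketch of the standard Cronbach alpha computation for a block of Likert-type items could look as follows; the response matrix is hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard Cronbach alpha for an (n_respondents x k_items) matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 teachers answering 4 five-point Likert items
responses = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])
print(f"Cronbach alpha = {cronbach_alpha(responses):.2f}")
```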
Abstract:
This study aims to examine the operations and significance of the Klemetti Institute (Klemetti-Opisto) as a developer of Finnish music culture from 1953 to 1968 during the term of office of the Institute's founder and first director, Arvo Vainio. The Klemetti Institute was originally established as a choir institute, but soon expanded to offer a wide range of music courses. In addition to providing courses for choir leaders and singers, the Institute began its orchestral activities as early as the mid-1950s. Other courses included ear training seminars as well as courses for young people's music instructors and in playing the kantele (a Finnish string instrument) and solo singing. More than 20 types of courses were offered over the 16-year period. The Klemetti Institute's courses were incorporated into the folk high school courses offered by the Orivesi Institute (Oriveden Opisto) and were organised during the summer months of June and July. In addition to funding based on the Folk High School Act, financial assistance was obtained from various foundations and funds, such as the Wihuri Foundation. This study is linked to the context of historical research. I examine the Klemetti Institute's operations chronologically, classifying instruction into different course types, and analyse concert activities primarily in the section on the Institute's student union. The source material includes the Klemetti Institute archives, which consist of Arvo Vainio's correspondence, student applications, register books and cards, journals and student lists, course albums and nearly all issues of the Klemettiläinen bulletin. In addition, I have used focused interviews and essays to obtain extensive data from students and teachers. I concentrate on primary school teachers, who accounted for the majority of course participants. A total of more than 2,300 people participated in the courses, nearly half of whom took courses during at least two summers. Primary school teachers accounted for 50% to 70% of the participants in most courses and constituted an even larger share of participants in some courses, such as the music instructor course. The Klemetti Institute contributed to the expansion throughout Finland of a new ideal for choral tone. This involved delicate singing which strives for tonal purity and expressiveness. Chamber choirs had been virtually unheard of in Finland, but the Klemetti Institute Chamber Choir popularised them. Chamber choirs are characterised by an extensive singing repertoire ranging from the Middle Ages to the present. As the name suggests, chamber choirs were originally rather small mixed choirs. Delicate singing meant the avoidance of extensive vibrato techniques and strong, heavy forte sounds, which had previously been typical of Finnish choirs. Those opposing and shunning this new manner of singing called it "ghost singing". The Klemetti Institute's teachers included Finland's most prominent pedagogues and artists. As the focused essays, or reminiscences as I call them, show, their significance for the students was central. I examine extensively the Klemetti Institute's enthusiastic atmosphere, which during the early years was characterised by what some writers described as a "hunger for music". In addition to distributing a new tonal ideal and choir repertoire, the Klemetti Institute also distributed new methods of music education, thus affecting the music teaching of Finnish primary schools, in particular.
The Orff approach, which included various instruments, became well known, although some of Orff's ideas, such as improvisation and physical exercise, were initially unfamiliar. More important than the Orff approach was the in-depth teaching at the Klemetti Institute of the Hungarian ear training method known as the Kodály method. Many course participants were among those launching specialist music classes in schools, and the method became the foundation for music teaching in many such schools. The Klemetti Institute was also a pioneer in organising orchestra camps for young people. The Klemetti Institute promoted Finnish music culture and played an important role in the continuing music education of primary school teachers. Keywords: adult education, Grundtvigian philosophy, popular enlightenment, Klemetti Institute, Kodály method, choir singing, choir conducting, music history, music education, music culture, music camp, Orff approach, Orff-Schulwerk, Orivesi Institute, instrument teaching, free popular education, communality, solo singing, voice production
Abstract:
The basic goal of a proteomic microchip is to achieve efficient and sensitive high-throughput protein analyses, automatically carrying out several measurements in parallel. A protein microchip would either detect a single protein or a large set of proteins for diagnostic purposes, basic proteome or functional analysis. Such analyses would include e.g. interactomics, general protein expression studies, detecting structural alterations or secondary modifications. Visualization of the results may occur by simple immunoreactions, general or specific labelling, or mass spectrometry. For this purpose we have manufactured chip-based proteome analysis devices that utilize the classical polymer gel electrophoresis technology to run one- and two-dimensional gel electrophoresis separations of proteins, only at a smaller scale. In total, we manufactured three functional prototypes, of which one performed a miniaturized one-dimensional gel electrophoresis (1-DE) separation, while the second and third performed two-dimensional gel electrophoresis (2-DE) separations. These microchips were successfully used to separate and characterize a set of predefined standard proteins, cell and tissue samples. Also, the miniaturized 2-DE (ComPress-2DE) chip presents a novel way of combining the 1st and 2nd dimensional separations, thus avoiding manual handling of the gels, eliminating cross-contamination, and making analyses faster and more repeatable. They all showed the advantages of miniaturization over commercial devices, such as fast analysis, low sample and reagent consumption, high sensitivity, high repeatability and inexpensive operation. All these instruments have the potential to be fully automated due to their easy-to-use set-up.
Abstract:
End-stage renal disease is an increasingly common pathologic condition, with a current incidence of 87 per million inhabitants in Finland. It is the end point of various nephropathies, the most common of which is diabetic nephropathy. This thesis focuses on exploring the role of nephrin in the pathogenesis of diabetic nephropathy. Nephrin is a protein of the glomerular epithelial cell, or podocyte, and it appears to have a crucial function as a component of the filtration slit diaphragm in the kidney glomeruli. Mutations in the nephrin gene NPHS1 lead to massive proteinuria. Along with the originally described location in the podocyte, nephrin has now been found to be expressed in the brain, testis, placenta and pancreatic beta cells. In type 1 diabetes, the fundamental pathologic event is the autoimmune destruction of the beta cells. Autoantibodies against various beta cell antigens are generated during this process. Due to the location of nephrin in the beta cell, we hypothesized that patients with type 1 diabetes may present with nephrin autoantibodies. We also wanted to test whether such autoantibodies could be involved in the pathogenesis of diabetic nephropathy. The puromycin aminonucleoside nephrosis model in the rat, the streptozotocin model in the rat, and the non-obese diabetic mouse were studied by immunochemical techniques, in situ hybridization and polymerase chain reaction-based methods to resolve the expression of nephrin mRNA and protein in experimental nephropathies. To test the effect of antiproteinuric therapies, streptozotocin-treated rats were also treated with aminoguanidine or perindopril. To detect nephrin antibodies we developed a radioimmunoprecipitation assay and analyzed follow-up material of 66 patients with type 1 diabetes. In the puromycin aminonucleoside nephrosis model, the nephrin expression level was uniformly decreased together with the appearance of proteinuria. In the streptozotocin-treated rats and in non-obese diabetic mice, the nephrin mRNA and protein expression levels were seen to increase in the early stages of nephropathy. However, as observed in the streptozotocin rats, in prolonged diabetic nephropathy the expression level decreased. We also found that treatment with perindopril could prevent not only proteinuria but also the decrease in nephrin expression in streptozotocin-treated rats. Aminoguanidine did not have an effect on nephrin expression, although it could attenuate the proteinuria. Circulating antibodies to nephrin were found in patients with type 1 diabetes, although there was no correlation with the development of diabetic nephropathy. At diagnosis, 24% of the patients had these antibodies, while at 2, 5 and 10 years of disease duration the respective proportions were 23%, 14% and 18%. During the total follow-up of 16 to 19 years after diagnosis of diabetes, 14 patients had signs of nephropathy and 29% of them tested positive for nephrin autoantibodies in at least one sample. In conclusion, this thesis work demonstrated changes in nephrin expression along with the development of proteinuria. The autoantibodies against nephrin are likely generated in the autoimmune process leading to type 1 diabetes. However, according to the present work it is unlikely that these autoantibodies contribute significantly to the development of diabetic nephropathy.
Abstract:
There are several reasons for increasing the use of forest biomass for energy in Finland. Apart from the fact that forest biomass is a CO2-neutral energy source, it is also a domestic resource distributed throughout the country. Using forest biomass in the form of logging residues decreases Finland's dependence on energy imports and increases both income and employment. Wood chips are mainly made from logging residues, which constitute 64% of the raw material. Large-scale use of forest biomass, however, also requires attention to the potential negative aspects. Forest bioenergy is used extensively, but its impacts on forest soil nutrition and carbon balance have not been studied much. Nor have there been many studies on the heavy metal or chlorine content of logging residues. The goal of this study was to examine the content of carbon, macronutrients, heavy metals and other substances harmful to combustion in Scots pine and Norway spruce wood chips, and to estimate the effect of harvesting logging residues on the forest's carbon and nutrient balance. Another goal was to examine the energy content of the clear-cut residues. The wood chips for this study were gathered from pine- and spruce-dominated clear-cut sites in southern Finland, in the coastal forests between Hanko and Siuntio. The number of sample locations was 29, the average area was 3.15 ha, and the average timber volume was 212.6 m3 ha-1. The average logged timber volume was 70 m3 ha-1 for Scots pine, 124 m3 ha-1 for Norway spruce and 18.5 m3 ha-1 for deciduous timber (birch and alder). The proportion of spruce in the logging residues and the stand volume determined how much nutrient was removed from the forest ecosystem when harvesting logging residues. In this study it was noted that the nutrient content of the logging residues clearly increased when the percentage of spruce in the timber volume increased. The S, K, Na and Cl contents of the logging residues in this study increased with an increasing percentage of spruce, which is probably due to the fact that spruce is an effective collector of atmospheric dry deposition. The amounts of nutrients that were lost when harvesting logging residues were lower than those reported in the literature. Within a rotation period (100 years), the forest soil gains substantially more nutrients from atmospheric deposition, litter fall and weathering than is lost through harvesting of logging residues after a clear cut. Harvesting of the logging residues results in a relatively modest increase in the quantity of carbon that is removed from the forest compared to traditional forestry. Because the clear-cut residues in this study showed a high chlorine content, there is a risk of corrosion when the logging residues are incinerated in power plants, especially in coastal areas and forests. The risk of sulphur-related corrosion is probably rather small, because S concentrations in wood chips are relatively low. The clear-cut residues also showed rather high heavy metal contents. If the heavy metal contents in this study are representative of clear-cut residues in coastal forests generally, there may be reason to exert some caution when using the ash for forest fertilizing purposes.
Abstract:
Diet is a major player in the maintenance of health and in the onset of many diseases of public health importance. Food choice is known to be largely influenced by sensory preferences. However, in many cases it is unclear whether these preferences and dietary behaviors are innate or acquired. The aim of this thesis work was to study the extent to which individual differences in dietary responses, especially in liking for sweet taste, are influenced by genetic factors. Several traits measuring the responses to sweetness and other dietary variables were applied in four studies: in British (TwinsUK) and Finnish (FinnTwin12 and FinnTwin16) twin studies and in a Finnish migraine family study. All the subjects were adults; they participated in chemosensory measurements (taste and smell tests) and filled in food behavior questionnaires. Further, it was studied whether the correlations among the variables are mediated by genetic or environmental factors and where in the genome the genes influencing the heritable traits are located. A study of young adult Finnish twins (FinnTwin16, n=4388) revealed that around 40% of the variation in food use is attributable to genetic factors and that the common, childhood environment does not affect food use even shortly after moving out of the parents' home. Both the family study (n=146) and the twin studies (British twins, n=663) showed that around half of the variation in liking for sweetness is inherited. The same result was obtained both from the chemosensory measurements (heritability 41-49%) and from the questionnaire variables (heritability 31-54%). By contrast, the intensity perception of sweetness and the responses to saltiness were not influenced by genetic factors. Further, a locus influencing the use-frequency of sweet foods was identified on chromosome 16p. A closer examination of the relationships among the variables, based on 663 British twins, revealed that several genetic and environmental correlations exist among the different measures of liking for sweetness. However, these correlations were not very strong (range 0.06-0.55), implying that the instruments used measure slightly different aspects of the phenomenon. In addition, the assessment of the associations among responses to fatty foods, dieting behaviors, and body mass index in twin populations (TwinsUK n=1027 and FinnTwin12 n=299) showed that the dieting behaviors (cognitive restraint, uncontrolled eating, and emotional eating) mediate the relationship between obesity and diet. In conclusion, the work increased the understanding of the background variables of human eating behavior. Genetic effects were shown to underlie the variation of many dietary traits, such as liking for sweet taste, use of sweet foods, and dieting behaviors. However, the responses to salty taste were shown to be mainly determined by environmental factors and should thus be more easily modifiable by dietary education, exposure, and learning than sweet taste preferences. Although additional studies are needed to characterize the genetic element located on chromosome 16 that influences the use-frequency of sweet foods, the results underline the importance of inherited factors in human eating behavior.
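The heritability figures quoted above come from twin modelling. As a hedged reminder of the underlying logic only (the thesis itself presumably fitted full variance-component twin models rather than this shortcut), the classical Falconer approximation estimates heritability from the difference between monozygotic and dizygotic twin correlations; the numerical values below are hypothetical.

```latex
% Falconer's approximation from twin correlations (illustrative, not the thesis's model)
% r_{MZ}, r_{DZ}: trait correlations in monozygotic and dizygotic pairs
h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 \approx 2\,r_{DZ} - r_{MZ}, \qquad
e^2 = 1 - r_{MZ}
% e.g. hypothetical r_{MZ}=0.54, r_{DZ}=0.31 give h^2 \approx 0.46,
% i.e. within the 41-54% heritability range reported above.
```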
Abstract:
Achieving sustainable consumption patterns is a crucial step on the way towards sustainability. The scientific knowledge used to decide which priorities to set and how to enforce them has to converge with societal, political, and economic initiatives on various levels: from individual household decision-making to agreements and commitments in global policy processes. The aim of this thesis is to draw a comprehensive and systematic picture of sustainable consumption, and to do this it develops the concept of Strong Sustainable Consumption Governance. In this concept, consumption is understood as resource consumption. This includes consumption by industries, public consumption, and household consumption. In addition to the availability of resources (including the available sink capacity of the ecosystem) and their use and distribution among the Earth's population, the thesis also considers their contribution to human well-being. This implies giving specific attention to the levels and patterns of consumption. Methods: The thesis introduces the terminology and various concepts of Sustainable Consumption and of Governance. It briefly elaborates on the methodology of Critical Realism and its potential for analysing Sustainable Consumption. It describes the various methods on which the research is based and sets out the political implications a governance approach towards Strong Sustainable Consumption may have. Two models are developed: one for the assessment of the environmental relevance of consumption activities, another to identify the influences of globalisation on the determinants of consumption opportunities. Results: One of the major challenges for Strong Sustainable Consumption is that it is not in line with the current political mainstream: that is, the belief that economic growth can cure all our problems. The proponents therefore have to battle against a strong headwind. Their motivation, however, is the conviction that there is no alternative. Efforts have to be taken on multiple levels by multiple actors, and all of them are needed, as they constitute the individual strings that together make up the rope. However, everyone must ensure that they are pulling in the same direction. It might be useful to apply a carrot-and-stick strategy to stimulate public debate. The stick in this case is to create a sense of urgency. The carrot would be to articulate better the message to the public that a shrinking of the economy is not as much of a disaster as mainstream economics tends to suggest. In parallel, it is necessary to demand that governments take responsibility for governance. The dominant strategy is still information provision, but there is ample evidence that hard policies such as regulatory instruments and economic instruments are the most effective. As for civil society organizations, it is recommended that they overcome the habit of promoting Sustainable (in fact green) Consumption through marketing strategies and instead foster public debate on values and well-being. This includes appreciating the potential of social innovation. Countless such initiatives are under way, but their potential is still insufficiently explored. Beyond the question of how to multiply such approaches, it is also necessary to establish political macro-structures to foster them.
Abstract:
Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving the phosphorus runoff. The privately optimal level of phosphorus use is determined by the input and output prices and by the crop response to phosphorus. Socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities and increasing the probability of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in the soil is treated as a dynamic state variable, the control variable being the annual phosphorus fertilization. For crop response to phosphorus, the state variable is more important than the annual fertilization. The level of this state variable is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, the discount rate and soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, the economic incentives should be conditioned directly on soil phosphorus values rather than on annual phosphorus applications. The results also emphasize the substantial effects that differences between the discount rates of the farmer and the social planner have on the optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels. It also examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status for optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and the socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. The soil phosphorus processes are slow; therefore, depleting high-phosphorus soils may take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing the flow of discounted payoffs. When both the benefits and the damages are related to the same state variable, the steady-state solution may, under very general conditions, have an interesting property: the tail of the payoffs of the privately optimal path, as well as its steady state, may provide a higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the framework of optimal phosphorus use developed in the thesis.
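As a hedged sketch of the kind of dynamic problem described above (notation and functional forms are illustrative, not the dissertation's exact model), soil phosphorus can be written as a state variable updated by fertilization and depletion, with the planner's objective including a runoff damage term that the private optimum omits:

```latex
% Illustrative dynamic phosphorus problem (all symbols hypothetical)
% P_t : soil phosphorus status (state), x_t : annual fertilization (control)
\max_{\{x_t\}} \; \sum_{t=0}^{\infty} \beta^{t}
  \Big[ p\, y(P_t, x_t) \;-\; c\, x_t \;-\; D\big(r(P_t)\big) \Big]
\quad \text{s.t.} \quad
P_{t+1} = P_t + \alpha x_t - \delta(P_t, x_t)
% y(.) crop response, r(.) dissolved-P runoff as a function of soil P status,
% D(.) water-quality damage, \beta the discount factor;
% the privately optimal path solves the same problem without the D(r(P_t)) term.
```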
Abstract:
The purpose of this study was to evaluate intensity, productivity and efficiency in Finnish agriculture and to show the implications for N and P fertiliser management. Environmental concerns relating to agricultural production have been, and still are, the focus of arguments about policies that affect agriculture. These policies constrain production while demand for agricultural products such as food, fibre and energy continuously increases. The importance of increasing productivity is therefore a great challenge to agriculture. Over the last decades producers have experienced several large changes in the production environment, such as the policy reform when Finland joined the EU in 1995. Further market changes occurred with the EU enlargement to neighbouring countries in 2005 and with the decoupling of supports over the 2006-2007 period. Decreasing prices, a decreasing number of farmers and decreasing profitability in agricultural production have resulted from these changes and constraints and from technological development. It was known that the accession to the EU in 1995 would herald changes in agriculture. Of particular interest was how the sudden changes in commodity prices, especially cereal prices, which decreased by 60%, would influence agricultural production. Knowledge of the properties of the production function increased in importance as a consequence of the price changes. Research on economic instruments to regulate production was carried out and combined with earlier studies in Paper V. In Paper I the objective was to compare two different technologies, conventional farming and organic farming, and to determine differences in productivity and technical efficiency. In addition, input-specific or environmental efficiencies were analysed. The heterogeneity of agricultural soils and its implications were analysed in Paper II. In Paper III the determinants of technical inefficiency were analysed. The aspects and possible effects of the instability in policies due to a partial decoupling of production factors and products were studied in Paper IV; consequently, the connection between technical efficiency based on turnover and on sales returns was analysed in that study. Simple economic instruments such as fertiliser taxes have a direct effect on fertiliser consumption and indirectly increase the value of organic fertilisers. However, fertiliser taxes do not address the N and P management problems adequately and are therefore not suitable for nutrient management improvements in general. The productivity of organic farms is lower on average than that of conventional farms, and the difference increases when looking at selling returns only. The organic sector needs more research and development on productivity. Livestock density in organic farming increases productivity; however, there is an upper limit to livestock densities on organic farms, and therefore nutrients on organic farms are also limited. Soil factors affect phosphorus and nitrogen efficiency. Soils such as sand and silt have lower input-specific overall efficiency for the nutrients N and P, and special attention is needed for management on these soils. Clay soils and soils with moderate clay content have higher efficiency. Soil heterogeneity is a cause of unavoidable inefficiency in agriculture.
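The abstract repeatedly refers to technical and input-specific (nutrient) efficiency. As a hedged reminder of the standard definitions such analyses usually build on (the thesis's exact estimator, e.g. DEA or a stochastic frontier, is not specified here, and all symbols are illustrative), output-oriented technical efficiency compares observed output with the frontier output attainable from the same inputs, and an input-specific nutrient efficiency compares the minimal feasible use of that nutrient with its observed use:

```latex
% Standard efficiency definitions (illustrative notation, not the thesis's estimator)
TE_i \;=\; \frac{y_i}{\hat{f}(\mathbf{x}_i)} \;\in\; (0,1],
\qquad
NE_{i,P} \;=\; \frac{x_{i,P}^{\min}}{x_{i,P}} \;\in\; (0,1]
% y_i : observed output of farm i, \hat{f}(x_i) : frontier output for its input bundle,
% x_{i,P} : observed phosphorus use, x_{i,P}^{min} : minimal P use consistent with
% producing y_i given the other inputs.
```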
Abstract:
Climate change is the single biggest environmental problem in the world at the moment. Although the effects are still not fully understood and there is a considerable amount of uncertainty, many nations have decided to mitigate the change. On the societal level, a planner who tries to find an economically optimal solution to an environmental pollution problem seeks to reduce pollution from the sources where reductions are most cost-effective. This study aims to find out how effective the instruments of agricultural policy are in the case of climate change mitigation in Finland. The theoretical base of this study is neoclassical economic theory, which rests on the assumption of a rational economic agent who maximizes his own utility. This theoretical base has been widened in the direction clearly essential to the matter: the theory of environmental economics. Deeply relevant to this problem, and central in the theory of environmental economics, are the concepts of externalities and public goods. Also relevant are the problems of global pollution and non-point-source pollution. Econometric modelling was the method applied in this study. The Finnish part of the AGMEMOD model, covering the whole EU, was used to estimate the development of pollution. This model is a recursive, partially dynamic partial-equilibrium model that was constructed to predict the development of Finnish agricultural production of the most important products. For the study, I personally updated the model and also widened its scope in some relevant respects. I also devised a table that calculates the emissions of greenhouse gases according to the rules set by the IPCC. With the model I investigated five alternative scenarios in comparison to the baseline scenario of Agenda 2000 agricultural policy. The alternative scenarios were: 1) the CAP reform of 2003, 2) free trade in agricultural commodities, 3) technological change, 4) banning the cultivation of organic soils and 5) the combination of the last three scenarios as the maximal achievable reduction. The maximal achievement in alternative scenario 5 was one third of the level achieved in the baseline scenario. The CAP reform caused only a minor reduction when compared to the baseline scenario. Instead, the free trade scenario and the scenario of technological change each alone caused a significant reduction. The biggest single reduction was achieved by banning the cultivation of organic land. However, this was also the most questionable scenario to be realized; the reasons for this are elaborated further in the paper. The maximal reduction that can be achieved in the Finnish agricultural sector is about 11% of the emission reduction that is needed to comply with the Kyoto protocol.
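The abstract mentions a table for calculating greenhouse gas emissions according to IPCC rules. A minimal sketch of the final aggregation step only, converting CH4 and N2O to CO2 equivalents with 100-year global warming potentials, might look as follows; the GWP values are the IPCC Second Assessment Report figures used in Kyoto accounting, while the sectoral totals are made up and the activity-level emission factors are omitted.

```python
# Hypothetical aggregation of agricultural emissions into CO2 equivalents.
# GWP-100 values from the IPCC Second Assessment Report (used for Kyoto accounting).
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}

def co2_equivalent(emissions_by_gas: dict) -> float:
    """emissions_by_gas: tonnes of each gas per year; returns tonnes of CO2 equivalent."""
    return sum(GWP[gas] * tonnes for gas, tonnes in emissions_by_gas.items())

# Illustrative (made-up) sectoral totals in tonnes per year
scenario = {"CH4": 85_000.0, "N2O": 14_000.0, "CO2": 1_200_000.0}
print(f"{co2_equivalent(scenario) / 1e6:.2f} Mt CO2-eq")
```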
Abstract:
This study evaluates how the advection of precipitation, or wind drift, between the radar volume and the ground affects radar measurements of precipitation. Normally precipitation is assumed to fall vertically to the ground from the contributing volume, so that the radar measurement represents the geographical location immediately below. In this study radar measurements are corrected using hydrometeor trajectories calculated from measured and forecast winds, and the effect of the trajectory correction on the radar measurements is evaluated. Wind drift statistics for Finland are compiled using sounding data from two weather stations spanning two years. For each sounding, the hydrometeor phase at ground level is estimated and the drift distance is calculated using different originating-level heights. In this way the drift statistics are constructed as a function of range from the radar and elevation angle. On average, a wind drift of 1 km was exceeded at approximately 60 km distance, while a drift of 10 km was exceeded at 100 km distance. Trajectories were calculated using model winds in order to produce a trajectory-corrected ground field from radar PPI images. It was found that on the upwind side of the radar the effective measuring area was reduced, as some trajectories exited the radar volume scan. On the downwind side, areas near the edge of the radar measuring area experienced improved precipitation detection. The effect of the trajectory correction is most prominent in instantaneous measurements and diminishes when accumulating over longer time periods. Furthermore, measurements of intense and small-scale precipitation patterns benefit most from wind drift correction. The contribution of wind drift to the uncertainty of the estimated Ze(S) relationship was studied by simulating the effect of different error sources on the uncertainty in the relationship coefficients a and b. The overall uncertainty was assumed to consist of systematic errors of both the radar and the gauge, as well as errors caused by turbulence at the gauge orifice and by wind drift of precipitation. The focus of the analysis is the error associated with wind drift, which was determined by describing the spatial structure of the reflectivity field using the spatial autocovariance (or variogram). This spatial structure was then used together with the calculated drift distances to estimate the variance in the radar measurement produced by precipitation drift, relative to the other error sources. It was found that the error due to wind drift was of a similar magnitude to the error due to turbulence at the gauge orifice at all ranges from the radar, with the systematic errors of the instruments being a minor issue. The correction method presented in the study could be used in radar nowcasting products to improve the estimation of visibility and local precipitation intensities. The method, however, only considers pure snow, and for operational purposes some improvements are desirable, such as melting layer detection, VPR correction and taking the solid hydrometeor type into account, which would improve the estimation of the vertical velocities of the hydrometeors.
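A hedged sketch of the drift calculation described above (the layer data, fall speed and column structure are illustrative; the thesis used full sounding and model wind profiles): the horizontal displacement of a hydrometeor is accumulated layer by layer as the horizontal wind times the residence time in each layer, i.e. layer thickness divided by fall speed.

```python
# Illustrative hydrometeor wind-drift calculation (layer values are hypothetical).
# Each layer below the measuring volume: (thickness in m, wind u in m/s, wind v in m/s).
LAYERS = [
    (500.0, 12.0, 3.0),   # layer just below the radar measuring volume
    (500.0, 10.0, 2.0),
    (500.0,  8.0, 1.0),
    (500.0,  6.0, 0.5),   # layer just above the ground
]
FALL_SPEED = 1.0  # m/s, a typical terminal fall speed for snow

def drift_vector(layers, fall_speed):
    """Horizontal displacement (dx, dy) in metres from measuring volume to ground."""
    dx = dy = 0.0
    for thickness, u, v in layers:
        dt = thickness / fall_speed      # residence time in this layer
        dx += u * dt
        dy += v * dt
    return dx, dy

dx, dy = drift_vector(LAYERS, FALL_SPEED)
print(f"drift = {(dx**2 + dy**2) ** 0.5 / 1000:.1f} km")
```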
Abstract:
It has been known for decades that particles can cause adverse health effects as they are deposited within the respiratory system. Atmospheric aerosol particles influence climate by scattering solar radiation, but they also act as the nuclei around which cloud droplets form. The principal objectives of this thesis were to investigate the chemical composition and the sources of fine particles in different environments (traffic, urban background, remote) as well as during some specific air pollution situations. Quantifying the climate and health effects of atmospheric aerosols is not possible without detailed information on the aerosol chemical composition. Aerosol measurements were carried out at nine sites in six countries (Finland, Germany, the Czech Republic, the Netherlands, Greece and Italy). Several different instruments were used in order to measure both the particulate matter (PM) mass and its chemical composition. In the off-line measurements the samples were first collected on a substrate or filter, and gravimetric and chemical analyses were conducted in the laboratory. In the on-line measurements the sampling and analysis were either a combined procedure or performed successively within the same instrument. Results from the impactor samples were analyzed by statistical methods. This thesis also comprises work in which a method for determining the size distribution of carbonaceous matter using a multistage impactor was developed. It was found that the chemistry of PM usually has strong spatial, temporal and size-dependent variability. At the Finnish sites most of the fine PM consisted of organic matter. However, in Greece sulfate dominated the fine PM, and in Italy nitrate made the largest contribution to the fine PM. Regarding the size-dependent chemical composition, organic components were likely to be enriched in smaller particles than inorganic ions. Data analysis showed that organic carbon (OC) had four major sources in Helsinki. Secondary production was the major source in Helsinki during spring, summer and fall, whereas in winter biomass combustion dominated OC. A significant impact of biomass combustion on OC concentrations was also observed in the measurements performed in Central Europe. In this thesis aerosol samples were collected mainly by the conventional filter and impactor methods, which suffer from long integration times. However, with the filter and impactor measurements chemical mass closure was achieved accurately, and simple filter sampling was found to be useful for explaining the sources of PM on a seasonal basis. The on-line instruments gave additional information on the temporal variations of the sources and the atmospheric mixing conditions.
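Chemical mass closure, mentioned above, simply means that the independently analysed chemical components should add up to the gravimetrically weighed PM mass. A minimal sketch of that bookkeeping (component names are generic and the concentrations are hypothetical, not results from the thesis) is:

```python
# Hypothetical fine-PM chemical mass closure check (concentrations in µg/m³).
gravimetric_pm = 11.8  # weighed fine-PM mass

components = {
    "organic matter": 4.6,    # e.g. OC multiplied by an assumed OM/OC conversion factor
    "elemental carbon": 0.9,
    "sulfate": 2.4,
    "nitrate": 1.6,
    "ammonium": 1.3,
    "sea salt": 0.3,
    "mineral dust": 0.4,
}

reconstructed = sum(components.values())
closure = reconstructed / gravimetric_pm
print(f"reconstructed {reconstructed:.1f} of {gravimetric_pm:.1f} µg/m³ "
      f"({closure:.0%} mass closure)")
```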
Abstract:
There is a measurable entity in all living matter, namely the ratio between the light and heavy isotopes of an element. This entity is divided by a defined reference, yielding a value that is usually expressed in per mille (‰). The value is a measure of isotope fractionation. By studying ecology, taxonomy, element cycles, isotope effects, documented data and metabolic effects, one can outline what the isotope fractionation signifies. Migration behaviour, trophic networks, diets, and environmental and climate data can be reconstructed in time and space. In reconstructions of past time, museum collections hold a key position. The literature review covers theory, practice and research in order to point out the pitfalls and fields of application of the method.
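The per mille value referred to above is the standard delta notation for stable isotope ratios; the defining formula (written here for carbon, but the same form applies to nitrogen, oxygen and other elements) is:

```latex
% Delta notation for stable isotope ratios, expressed in per mille
\delta^{13}\mathrm{C} \;=\;
\left( \frac{R_{\text{sample}}}{R_{\text{standard}}} - 1 \right) \times 1000\ \text{\textperthousand},
\qquad
R = \frac{{}^{13}\mathrm{C}}{{}^{12}\mathrm{C}}
% R_standard is the isotope ratio of an agreed reference material (e.g. VPDB for carbon).
```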
Abstract:
Most countries of Europe, as well as many countries in other parts of the world, are experiencing an increased impact of natural hazards. It is often speculated, but not yet proven, that climate change might influence the frequency and magnitude of certain hydro-meteorological natural hazards. What has certainly been observed is a sharp increase in financial losses caused by natural hazards worldwide. Even though Europe appears to be a space that is not affected by natural hazards to such catastrophic extents as other parts of the world, the damages experienced here are certainly increasing too. Natural hazards, climate change and, in particular, risks have therefore recently been put high on the political agenda of the EU. In the search for appropriate instruments for mitigating the impacts of natural hazards and climate change, as well as risks, the integration of these factors into spatial planning practices is receiving ever more attention. The focus of most approaches lies on single hazards and climate change mitigation strategies. The current paradigm shift from climate change mitigation to adaptation is used as a basis to draw conclusions and recommendations on what concepts could be further incorporated into spatial planning practices. Multi-hazard approaches in particular are discussed as an important approach that should be developed further. One focal point is the definition and applicability of the terms natural hazard, vulnerability and risk in spatial planning practices. Vulnerability and risk concepts in particular are so manifold and complicated that their application in spatial planning has to be analysed most carefully. The PhD thesis is based on six published articles that describe the results of European research projects which have elaborated strategies and tools for integrated communication and assessment practices on natural hazards and climate change impacts. The papers describe approaches at the local, regional and European level, from both theoretical and practical perspectives. Based on these, past, current and potential future spatial planning applications are reviewed and discussed. In conclusion, it is recommended to shift from single-hazard assessments to multi-hazard approaches that integrate potential climate change impacts. Vulnerability concepts should play a stronger role than at present, and adaptation to natural hazards and climate change should be emphasized more in relation to mitigation. It is outlined that the integration of risk concepts into planning is rather complicated and would need very careful assessment to ensure applicability. Future spatial planning practices should also be more interdisciplinary, i.e. integrate as many stakeholders and experts as possible to ensure the sustainability of investments.
Abstract:
One way to improve results in information retrieval is query expansion. In query expansion, the user's original query is extended with terms related to the same topic. Queries that have a high similarity score with a document can be assumed to describe the document well and can therefore serve as a source of good expansion terms. If earlier queries are stored, terms found through them can be used as candidates for query expansion terms. The thesis presents and compares three methods for using earlier queries in query expansion. To evaluate the effectiveness of the methods, they are compared using the Lucene search engine and a small collection of documents on cancer research. As baselines, the unmodified queries and a simple pseudo-relevance feedback method that does not use earlier queries are used. None of the query expansion methods performed particularly well, which is because the document collection and the test queries constitute a difficult environment for this type of method.
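As a hedged illustration of the general idea described above (this is not one of the three methods evaluated in the thesis), expansion terms can be drawn from stored earlier queries that overlap the new query; the stored queries, the new query and the scoring below are hypothetical and deliberately simplified.

```python
# Minimal sketch of query expansion from stored earlier queries (all data hypothetical).
from collections import Counter

past_queries = [
    "lung cancer chemotherapy trial",
    "tumor growth inhibition chemotherapy",
    "breast cancer screening mammography",
]

def expansion_terms(query, stored, n_terms=2):
    """Pick the most frequent new terms from stored queries that share a term with the query."""
    query_terms = set(query.lower().split())
    counts = Counter()
    for old in stored:
        old_terms = set(old.lower().split())
        if query_terms & old_terms:              # old query shares at least one term
            counts.update(old_terms - query_terms)
    return [term for term, _ in counts.most_common(n_terms)]

query = "cancer chemotherapy"
print(query.split() + expansion_terms(query, past_queries))
# e.g. ['cancer', 'chemotherapy', 'trial', 'lung'] - the expanded query sent to the engine
```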