101 results for Applications for european funds
in Helda - Digital Repository of the University of Helsinki
Abstract:
The tackling of coastal eutrophication requires water protection measures based on status assessments of water quality. The main purpose of this thesis was to evaluate whether it is possible, both scientifically and within the terms of the European Union Water Framework Directive (WFD), to assess the status of coastal marine waters reliably by using phytoplankton biomass (ww) and chlorophyll a (Chl) as indicators of eutrophication in Finnish coastal waters. Empirical approaches were used to study whether the criteria established for determining an indicator are fulfilled. The first criterion (i) was that an indicator should respond to anthropogenic stresses in a predictable manner and have low variability in its response. Summertime Chl could be predicted accurately by nutrient concentrations, but not from the external annual loads alone, because of the rapid effect of primary production and sedimentation close to the loading sources in summer. The most accurate predictions were achieved in the Archipelago Sea, where total phosphorus (TP) and total nitrogen (TN) alone accounted for 87% and 78% of the variation in Chl, respectively. In river estuaries, the TP mass-balance regression model predicted Chl most accurately when nutrients originated from point sources, whereas land-use regression models were most accurate when nutrients originated mainly from diffuse sources. The inclusion of morphometry (e.g. mean depth) in the nutrient models improved the accuracy of the predictions. The second criterion (ii) was associated with the WFD. It requires that an indicator have type-specific reference conditions, which are defined as "conditions where the values of the biological quality elements are at high ecological status". In establishing reference conditions, the empirical approach could only be used in the outer coastal water types, where historical observations of Secchi depth from the early 1900s are available.
The most accurate prediction was achieved in the Quark. In the inner coastal water types, reference Chl values, estimated from present monitoring data, are imprecise - not only because of the less accurate estimation method but also because the intrinsic characteristics, described for instance by morphometry, vary considerably within these extensive inner coastal types. As for phytoplankton biomass, the reference values were less accurate than in the case of Chl, because it was possible to estimate reference conditions for biomass only by using the reconstructed Chl values, not the historical Secchi observations. A paleoecological approach was also applied to estimate annual average reference conditions for Chl. In Laajalahti, an urban embayment off Helsinki that was strongly loaded by municipal waste waters in the 1960s and 1970s, reference conditions prevailed in the mid- and late 1800s. The recovery of the bay from pollution has been delayed as a consequence of benthic release of nutrients. Laajalahti will probably not achieve the good quality objectives of the WFD on time. The third criterion (iii) was associated with coastal management, including the resources it has available. Analyses of Chl are cheap and fast to carry out compared to analyses of phytoplankton biomass and species composition, a fact that affects the number of samples that can be taken and thereby the reliability of assessments. However, analyses of phytoplankton biomass and species composition provide more metrics for ecological classification, metrics that reveal aspects of eutrophication that Chl alone cannot.
Abstract:
X-ray Raman scattering and x-ray emission spectroscopies were used to study the electronic properties and phase transitions in several condensed matter systems. The experimental work, carried out at the European Synchrotron Radiation Facility, was complemented by theoretical calculations of the x-ray spectra and of the electronic structure. The electronic structure of MgB2 at the Fermi level is dominated by the boron σ and π bands. The high density of states provided by these bands is the key feature of the electronic structure contributing to the high critical temperature of superconductivity in MgB2. The electronic structure of MgB2 can be modified by atomic substitutions, which introduce extra electrons or holes into the bands. X-ray Raman scattering was used to probe the interesting σ- and π-band hole states in pure and aluminum-substituted MgB2. A method for determining the final-state density of electron states from experimental x-ray Raman scattering spectra was examined and applied to the experimental data on both pure MgB2 and Mg(0.83)Al(0.17)B2. The extracted final-state density of electron states for the pure and aluminum-substituted samples revealed clear substitution-induced changes in the σ and π bands. The experimental work was supported by theoretical calculations of the electronic structure and x-ray Raman spectra. X-ray emission at the metal Kβ line was applied to studies of pressure- and temperature-induced spin state transitions in transition metal oxides. The experimental studies were complemented by cluster multiplet calculations of the electronic structure and emission spectra. In LaCoO3 evidence for the appearance of an intermediate spin state was found, and the presence of a pressure-induced spin transition was confirmed. Pressure-induced changes in the electronic structure of transition metal monoxides were studied experimentally and analyzed using the cluster multiplet approach.
The effects of hybridization, bandwidth and crystal field splitting in stabilizing the high-pressure spin state were discussed. Emission spectroscopy at the Kβ line was also applied to FeCO3, and a pressure-induced iron spin state transition was discovered.
Abstract:
This report has been written as part of the E-ruralnet project, which addresses e-learning as a means of enhancing lifelong learning opportunities in rural areas, with emphasis on SMEs, micro-enterprises, the self-employed and persons seeking employment. E-ruralnet is a European network project part-funded by the European Commission in the context of the Lifelong Learning Programme, Transversal projects-ICT. This report aims to address two issues identified as requiring attention in the previous Observatory study: firstly, access to e-learning for rural areas that do not have adequate ICT infrastructure; and secondly, new learning approaches introduced through new interactive ICT tools such as web 2.0, wikis, podcasts etc. The possibility of using alternative technology in addition to computers is examined (mobile telephones, DVDs), as well as new approaches to learning (simulation, serious games). The first part of the report examines the existing literature on e-learning and what e-learning is all about. Institutional users, learners and instructors/teachers are each considered separately. We then turn to the implementation of e-learning from the organizational point of view and focus on quality issues related to e-learning. The report includes a separate chapter on e-learning from the rural perspective, since most of Europe is, geographically speaking, rural, and the population of those areas is the one that could benefit most from the possibilities introduced by e-learning development. The section titled “Alternative media”, in accordance with the project terminology, looks at standalone technology that is of particular use to rural areas without a proper internet connection. It also evaluates the use of new tools and media in e-learning and takes a look at m-learning. Finally, the use of games, serious games and simulations in learning is considered. Practical examples and cases are displayed in boxes to facilitate pleasant reading.
Abstract:
Banks are important because they have a central role in the financial system, where funds are channelled either through financial intermediaries, such as banks, or through financial markets, thereby promoting growth in any economy. Recently, we have been reminded of the drawbacks of the central role of banks. The current financial crisis, which started out as a sub-prime mortgage crisis in the US, has become a global financial crisis with substantial impact on the real economy in many countries. Some of the roots of the current financial crisis can be traced to the changing role of banks and to bank corporate governance. Moreover, the substantial revitalising measures taken have been justified by the central role of banks. Not only are banks important, they are also very special. The fact that banks are regulated, in conjunction with their greater opacity, makes bank corporate governance different from corporate governance in non-bank companies. Surprisingly little is, however, known about bank corporate governance, particularly in a European setting. Hence, the objective of this doctoral thesis is to provide new insights into this research area by examining banks from 37 different European countries. Each of the three essays included in the doctoral thesis examines a particular aspect of bank corporate governance. In the first essay, the interaction between the regulatory environment a bank operates in and its ownership structure is explored. Indicators of the severity of the moral hazard problem induced by the deposit insurance system and the implicit too-big-to-fail government guarantee, particular features of deposit insurance systems, as well as legal protection of shareholders, the legal origin of a country and its level of integration into the European community are used in the analysis. The empirical findings confirm previous findings on the link between legal protection of shareholders and ownership structure.
Moreover, they show that differences in deposit insurance system features can explain some of the differences in ownership structure across European banks. In the second essay, the impact of management and board ownership on the profitability of banks with different strategies is examined. The empirical findings suggest that the efficiency of these two particular corporate governance mechanisms varies with the characteristics of the agency problem faced by the bank. More specifically, management ownership is important in opaque non-traditional banks, whereas board ownership is important in traditional banks, where deposit insurance reduces the monitoring incentives of outsiders. The higher profitability does, however, go together with higher risk. In the third essay, the profitability and risk of commercial, savings and cooperative banks are compared. The empirical findings suggest that distinct operational and ownership characteristics, rather than the mere fact that a bank is a commercial, savings or cooperative bank, explain the profitability and risk differences. The main insight from the three essays is that a number of different aspects should be addressed simultaneously in order to do justice to the complexity of bank corporate governance.
Abstract:
The integrated European debt capital market has undoubtedly broadened the possibilities for companies to access funding from the public and challenged investors to cope with the ever-increasing complexity of its market participants. Well into the Euro era, it is clear that the unified market has created potential for all involved parties, where investment opportunities are able to meet a supply of funds from a broad geographical area now united under a single currency. Europe’s traditionally heavy dependency on bank lending as a source of debt capital has thus been easing, as corporate residents are able to tap a deep and liquid capital market to satisfy their funding needs. As national barriers eroded with the inauguration of the Euro and interest rates for EMU members converged towards overall lower yields, a new source of debt capital emerged for the vast majority of corporate residents under the new currency and provided an alternative to the traditionally more maturity-restricted bank debt. With increased sophistication came an improved knowledge and understanding of the market and its participants. Further, investors became more willing to bear credit risk, which opened the market to firms of ever lower creditworthiness. In the process, the market as a whole saw a change in the profile of issuers, as non-financial firms increasingly sought their funding directly from the bond market. This thesis consists of three separate empirical studies of how corporates fund themselves on the European debt capital markets. The analysis focuses on a firm’s access to and behaviour on the capital market subsequent to the decision to raise capital through the issuance of arm’s-length debt on the bond market. The specific areas considered contribute to our knowledge in the fields of corporate finance and financial markets by explicitly considering firms’ primary market activities within the new market area.
The first essay explores how the reputation of an issuer affects its debt issuance. The second essay examines the choice of interest rate exposure on newly issued debt, and the third and final essay explores pricing anomalies in corporate debt issues.
Abstract:
In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of the process data enables better control of the processes to obtain a high-quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the cases of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical. Thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of the dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet.
With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects in either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
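The principal component analysis used above to flag dehydrate formation in spectra can be illustrated with a minimal sketch. The "spectra" below are hypothetical three-wavelength absorbance vectors, not NIR data from this work, and PC1 is extracted by plain power iteration rather than a chemometrics library:

```python
import math

# Tiny illustrative "spectra": 4 samples x 3 wavelengths (hypothetical
# absorbances). Samples 3-4 mimic a dehydrate-induced band shift.
X = [
    [0.10, 0.50, 0.20],
    [0.11, 0.52, 0.19],
    [0.30, 0.20, 0.45],
    [0.32, 0.18, 0.47],
]

# Mean-center each wavelength (standard preprocessing before PCA).
n, p = len(X), len(X[0])
means = [sum(row[j] for row in X) / n for j in range(p)]
Xc = [[row[j] - means[j] for j in range(p)] for row in X]

# First principal component by power iteration on X^T X.
v = [1.0] * p
for _ in range(200):
    t = [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]  # X v
    w = [sum(Xc[i][j] * t[i] for i in range(n)) for j in range(p)]  # X^T (X v)
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# Scores on PC1: the two solid-state forms fall into two clusters
# of opposite sign, which is what makes the transformation visible.
scores = [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]
print([round(s, 3) for s in scores])
```

In-line monitoring then amounts to projecting each new spectrum onto the stored component and watching the score drift past a threshold.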
Abstract:
This thesis discusses the ways in which concepts and methodology developed in evolutionary biology can be applied to the explanation and study of language change. The parallel nature of the mechanisms of biological evolution and language change is explored, along with the history of the exchange of ideas between these two disciplines. Against this background, computational methods developed in evolutionary biology are considered in terms of their applicability to the study of historical relationships between languages. Different phylogenetic methods are explained in common terminology, avoiding the technical language of statistics. The thesis is on the one hand a synthesis of earlier scientific discussion, and on the other an attempt to map out the problems of earlier approaches and to find new guidelines for the study of language change on that basis. The source material consists primarily of literature on the connections between evolutionary biology and language change, along with research articles describing applications of phylogenetic methods to language change. The thesis starts out by describing the initial development of the disciplines of evolutionary biology and historical linguistics, a process which from the very beginning involved an exchange of ideas concerning the mechanisms of language change and biological evolution. The historical discussion lays the foundation for the treatment of the generalised account of selection developed during recent decades. This account aims at creating a theoretical framework capable of explaining both biological evolution and cultural change as selection processes acting on self-replicating entities. This thesis focuses on the capacity of the generalised account of selection to describe language change as a process of this kind. In biology, the mechanisms of evolution are seen to form populations of genetically related organisms through time.
One of the central questions explored in this thesis is whether selection theory makes it possible to picture languages as forming populations of a similar kind, and what such a perspective can offer to the understanding of language in general. In historical linguistics, the comparative method and other, complementary methods have traditionally been used to study the development of languages from a common ancestral language. Computational, quantitative methods have not become a widely used part of the central methodology of historical linguistics. After the fading of the limited popularity enjoyed by the lexicostatistical method from the 1950s onward, the computational methods of phylogenetic inference used in evolutionary biology have been applied to the study of early language history only in recent years. In this thesis the possibilities offered by the traditional methodology of historical linguistics and by the new phylogenetic methods are compared. The methods are approached through the ways in which they have been applied to the Indo-European languages, the language family most thoroughly investigated with both the traditional and the phylogenetic methods. The problems of these applications, along with the optimal form of the linguistic data used in these methods, are explored in the thesis. The mechanisms of biological evolution are seen in the thesis as parallel to the mechanisms of language change only in a limited sense, yet sufficiently so that the development of a generalised account of selection is deemed potentially fruitful for understanding language change. These similarities are also seen to support the validity of using phylogenetic methods in the study of language history, although the use of linguistic data and the models of language change employed by these methods are seen to await further development.
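The distance-based starting point of many phylogenetic methods can be illustrated with a toy cognate matrix. The language names and binary cognate codings below are invented for illustration; real applications use large curated Indo-European datasets and full tree-inference algorithms:

```python
# Hypothetical binary cognate data: for each meaning slot, 1 or 0 codes
# which cognate class a language uses (illustrative, not real data).
cognates = {
    "LangA": [1, 1, 0, 1, 0, 1],
    "LangB": [1, 1, 0, 1, 1, 1],
    "LangC": [0, 0, 1, 0, 1, 0],
}

def distance(a, b):
    """Fraction of meaning slots with different cognate classes."""
    diffs = sum(x != y for x, y in zip(a, b))
    return diffs / len(a)

# Pairwise distance matrix, the input to distance-based tree building.
pairs = {}
langs = list(cognates)
for i, x in enumerate(langs):
    for y in langs[i + 1:]:
        pairs[(x, y)] = distance(cognates[x], cognates[y])

# A clustering method such as UPGMA or neighbor joining would join
# the closest pair first, nesting LangA and LangB under one node.
closest = min(pairs, key=pairs.get)
print(closest, pairs[closest])
```

Character-based methods (maximum likelihood, Bayesian inference) work on the cognate matrix directly rather than collapsing it to distances, which is one of the methodological choices the thesis compares.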
Abstract:
This study aims to examine the operations and significance of the Klemetti Institute (Klemetti-Opisto) as a developer of Finnish music culture from 1953 to 1968, during the term of office of the Institute's founder and first director, Arvo Vainio. The Klemetti Institute was originally established as a choir institute, but soon expanded to offer a wide range of music courses. In addition to providing courses for choir leaders and singers, the Institute began its orchestral activities as early as the mid-1950s. Other courses included ear training seminars as well as courses for young people's music instructors and in playing the kantele (a Finnish string instrument) and solo singing. More than 20 types of courses were offered over the 16-year period. The Klemetti Institute's courses were incorporated into the folk high school courses offered by the Orivesi Institute (Oriveden Opisto) and were organised during the summer months of June and July. In addition to funding based on the Folk High School Act, financial assistance was obtained from various foundations and funds, such as the Wihuri Foundation. This study is linked to the context of historical research. I examine the Klemetti Institute's operations chronologically, classifying instruction into different course types, and analyse concert activities primarily in the section on the Institute's student union. The source material includes the Klemetti Institute archives, which consist of Arvo Vainio's correspondence, student applications, register books and cards, journals and student lists, course albums and nearly all issues of the Klemettiläinen bulletin. In addition, I have used focused interviews and essays to obtain extensive data from students and teachers. I concentrate on primary school teachers, who accounted for the majority of course participants. A total of more than 2,300 people participated in the courses, nearly half of whom took courses during at least two summers.
Primary school teachers accounted for 50% to 70% of the participants in most courses and constituted an even larger share of participants in some courses, such as the music instructor course. The Klemetti Institute contributed to the expansion throughout Finland of a new ideal of choral tone. This involved delicate singing striving for tonal purity and expressiveness. Chamber choirs had been virtually unheard of in Finland, but the Klemetti Institute Chamber Choir popularised them. Chamber choirs are characterised by an extensive singing repertoire ranging from the Middle Ages to the present. As the name suggests, chamber choirs were originally rather small mixed choirs. Delicate singing meant the avoidance of extensive vibrato techniques and strong, heavy forte sounds, which had previously been typical of Finnish choirs. Those opposing and shunning this new manner of singing called it "ghost singing". The Klemetti Institute's teachers included Finland's most prominent pedagogues and artists. As the focused essays, or reminiscences as I call them, show, their significance for the students was central. I examine extensively the Klemetti Institute's enthusiastic atmosphere, which during the early years was characterised by what some writers described as "a hunger for music". In addition to disseminating a new tonal ideal and choir repertoire, the Klemetti Institute also disseminated new methods of music education, thus affecting the music teaching of Finnish primary schools in particular. The Orff approach, which included various instruments, became well known, although some of Orff's ideas, such as improvisation and physical exercise, were initially unfamiliar. More important than the Orff approach was the in-depth teaching at the Klemetti Institute of the Hungarian ear training method known as the Kodály method.
Many course participants were among those launching specialist music classes in schools, and the method became the foundation for music teaching in many such schools. The Klemetti Institute was also a pioneer in organising orchestra camps for young people. The Klemetti Institute promoted Finnish music culture and played an important role in the continuing music education of primary school teachers. Keywords: adult education, Grundtvigian philosophy, popular enlightenment, Klemetti Institute, Kodály method, choir singing, choir conducting, music history, music education, music culture, music camp, Orff approach, Orff-Schulwerk, Orivesi Institute, instrument teaching, free popular education, communality, solo singing, voice production
Abstract:
Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict blood collection volumes so as to provide cellular blood components in a timely manner as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) sets a specific challenge for balancing supply with requests. Labour has been shown to be a primary cost driver and should be managed efficiently. International comparisons of blood banking could identify inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the centres' blood component preparation departments. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The focus of the comparisons was on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 in the centres (I).
Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by apheresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and demonstrated similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources, and probably excess production equipment, to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) from all collections was low (III). The labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and an overall lower efficiency level compared to labour only as the input. In cost efficiency only, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas the labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of the technical efficiency of component preparation departments revealed remarkable variation.
A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and efficiency was not directly related to production volume. Evaluation of the reasons for discarding components may offer a novel approach to studying efficiency. DEA proved applicable to analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and to promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
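In the special case of one input and one output, a constant-returns-to-scale DEA score reduces to each unit's productivity ratio scaled by the best performer. The sketch below illustrates that degenerate case with invented department figures; the study's multi-input, multi-output models instead require solving a linear program per department:

```python
# Hypothetical component-preparation departments: annual working hours
# (input) and blood components produced (output). Illustrative only.
departments = {
    "D1": {"hours": 20_000, "units": 50_000},
    "D2": {"hours": 35_000, "units": 60_000},
    "D3": {"hours": 15_000, "units": 45_000},
}

# With a single input and output, the CCR DEA efficiency score is each
# department's output/input ratio divided by the best observed ratio;
# the frontier department scores 1.0 (100%), the rest score below it.
ratios = {d: v["units"] / v["hours"] for d, v in departments.items()}
best = max(ratios.values())
efficiency = {d: r / best for d, r in ratios.items()}

for d, e in sorted(efficiency.items()):
    print(f"{d}: {e:.0%}")
```

The "savings potential" discussed above corresponds to 1 minus this score: a department at 57% efficiency could, in principle, produce its output with 43% fewer input resources.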
Abstract:
In this thesis, two separate single nucleotide polymorphism (SNP) genotyping techniques were set up at the Finnish Genome Center, pooled genotyping was evaluated as a screening method for large-scale association studies, and finally, the former approaches were used to identify genetic factors predisposing to two distinct complex diseases by utilizing large epidemiological cohorts and also taking environmental factors into account. The first genotyping platform was based on traditional but improved restriction fragment length polymorphism (RFLP) analysis utilizing 384-well microtiter plates, multiplexing, small reaction volumes (5 µl), and automated genotype calling. We participated in the development of the second genotyping method, based on single nucleotide primer extension (SNuPe™ by Amersham Biosciences), by carrying out the alpha and beta tests for the chemistry and the allele-calling software. Both techniques proved to be accurate, reliable, and suitable for projects with thousands of samples and tens of markers. Pooled genotyping (genotyping of pooled instead of individual DNA samples) was evaluated with Sequenom's MassArray MALDI-TOF, in addition to the SNuPe™ and PCR-RFLP techniques. We used MassArray mainly as a point of comparison, because it is known to be well suited for pooled genotyping. All three methods were shown to be accurate, the standard deviations between measurements being 0.017 for the MassArray, 0.022 for the PCR-RFLP, and 0.026 for the SNuPe™. The largest source of error in the process of pooled genotyping was shown to be the volumetric error, i.e., the preparation of the pools. We also demonstrated that it would have been possible to narrow down the genetic locus underlying congenital chloride diarrhea (CLD), an autosomal recessive disorder, by using the pooling technique instead of genotyping individual samples.
Although the approach seems to be well suited for traditional case-control studies, it is difficult to apply if any kind of stratification based on environmental factors is needed. Therefore we chose to continue with individual genotyping in the following association studies. Samples in the two separate large epidemiological cohorts were genotyped with the PCR-RFLP and SNuPe™ techniques. The first of these association studies concerned various pregnancy complications among 100,000 consecutive pregnancies in Finland, of which we genotyped 2,292 patients and controls, in addition to a population sample of 644 blood donors, for seven polymorphisms in potentially thrombotic genes. In this thesis, the analysis of a sub-study of pregnancy-related venous thromboses is included. We showed that the impact of the factor V Leiden polymorphism, but not of the other tested polymorphisms, on pregnancy-related venous thrombosis was fairly large (odds ratio 11.6; 95% CI 3.6-33.6) and increased multiplicatively when combined with other risk factors such as obesity or advanced age. Owing to our study design, we were also able to estimate the risks at the population level. The second epidemiological cohort was the Helsinki Birth Cohort of men and women born during 1924-1933 in Helsinki. The aim was to identify genetic factors that might modify the well-known link between small birth size and adult metabolic diseases, such as type 2 diabetes and impaired glucose tolerance. Among ~500 individuals with detailed birth measurements and a current metabolic profile, we found that an insertion/deletion polymorphism of the angiotensin-converting enzyme (ACE) gene was associated with the duration of gestation and with weight and length at birth. Interestingly, the ACE insertion allele was also associated with higher indices of insulin secretion (p=0.0004) in adult life, but only among individuals who were born small (those in the lowest third of birth weight).
Likewise, low birth weight was associated with higher indices of insulin secretion (p=0.003), but only among carriers of the ACE insertion allele. An association with birth measurements was also found for a common haplotype of the glucocorticoid receptor (GR) gene. Furthermore, the association between short length at birth and adult impaired glucose tolerance was confined to carriers of this haplotype (p=0.007). These associations exemplify the interaction between environmental factors and genotype, which, possibly through altered gene expression, predisposes to complex metabolic diseases. Indeed, we showed that the common GR gene haplotype was associated with reduced mRNA expression in the thymus of three individuals (p=0.0002).
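The odds ratios and confidence intervals quoted above are standard epidemiological measures. As a minimal sketch, the Wald (Woolf) interval for an odds ratio can be computed from a 2×2 table as below; the counts used here are hypothetical and do not come from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald (Woolf) 95% confidence interval.

    2x2 table: a = exposed cases,   b = exposed controls,
               c = unexposed cases, d = unexposed controls
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 10 of 100 cases carry a variant vs 5 of 400 controls
or_, lo, hi = odds_ratio_ci(10, 5, 90, 395)
print(f"OR {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")  # OR 8.8, 95% CI 2.9-26.3
```

The interval is symmetric on the log scale, which is why published CIs such as 3.6–33.6 look skewed around the point estimate on the natural scale.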
Abstract:
Soils represent a remarkable stock of carbon, and forest soils are estimated to hold half of the global stock of soil carbon. Topical concern about the effects of climate change and forest management on soil carbon, as well as practical reporting requirements set by climate conventions, has created a need to assess soil carbon stock changes reliably and transparently. The large spatial variability of soil carbon, combined with the relatively slow changes in stocks, hinders the assessment of soil carbon stocks and their changes by direct measurements. Models are therefore widely used to estimate carbon stocks and stock changes in soils. This dissertation aimed to develop the soil carbon model YASSO for upland forest soils. The model was designed to take into account the most important processes controlling decomposition in soils, yet remain simple enough to ensure its applicability in practice. The model structure and assumptions were presented, and the model parameters were estimated from empirical measurements. The model was evaluated by studying the sensitivity of the results to parameter values, by estimating the precision of the results with an uncertainty analysis, and by assessing the accuracy of the model against measured data and the results of an alternative model. The model was applied to study the effects of intensified biomass extraction on the forest carbon balance and to estimate the effects of the soil carbon deficit on the net greenhouse gas emissions of using forest residues for energy. The model was also applied in an inventory-based method to assess the national-scale forest carbon balance of Finland's forests from 1922 to 2004. YASSO described adequately the effects of both variable litter and climatic conditions on decomposition.
When combined with stand models or other systems providing litter information, the dynamic approach of the model proved powerful for estimating changes in soil carbon stocks on different scales. The climate dependency of the model, the effects of nitrogen on decomposition and forest growth, and the effects of soil texture on soil carbon stock dynamics are areas for development when considering the applicability of the model to different research questions, land-use types, and wider geographic regions. Intensified biomass extraction affects soil carbon stocks, and these changes should be taken into account when considering the net effects of using forest residues for energy. On a national scale, soil carbon stocks play an important role in forest carbon balances.
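Dynamic soil-carbon models of this kind track litter through decomposition pools with first-order kinetics, a climate modifier on the rates, and transfers to slower pools. The sketch below is a generic multi-pool illustration under assumed pool names, rates, and transfer fractions — it is not YASSO's actual structure or parameter values.

```python
# Illustrative pools and parameters -- not YASSO's actual values.
K = {"extractives": 0.5, "cellulose": 0.3, "humus": 0.01}  # decay rates (1/yr)
TRANSFER = {"extractives": ("humus", 0.2),  # fraction passed on; rest -> CO2
            "cellulose": ("humus", 0.2),
            "humus": (None, 0.0)}
LITTER_SPLIT = {"extractives": 0.3, "cellulose": 0.7}

def step(stocks, litter, climate=1.0):
    """Advance the pools one year; returns (new_stocks, annual CO2 release).

    climate scales all decay rates (warmer/wetter -> faster decomposition).
    """
    out = dict(stocks)
    co2 = 0.0
    for pool, rate in K.items():
        d = min(stocks[pool], rate * climate * stocks[pool])  # first-order decay
        out[pool] -= d
        target, frac = TRANSFER[pool]
        if target is not None:
            out[target] += frac * d  # humification into a slower pool
            d *= 1 - frac
        co2 += d                     # remainder mineralised to CO2
    for pool, share in LITTER_SPLIT.items():
        out[pool] += share * litter  # fresh litter enters the labile pools
    return out, co2

# Constant litter input drives the stocks toward a steady state
stocks = {p: 0.0 for p in K}
for _ in range(2000):
    stocks, co2 = step(stocks, litter=5.0)
```

At steady state the annual CO2 release converges to the litter input and the slow humus pool dominates the stock, which illustrates why intensified residue extraction (a lower litter input) runs down soil carbon only gradually.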
Abstract:
The purpose of this study was to extend understanding of how large firms pursuing sustained and profitable growth manage organisational renewal. A multiple-case study was conducted in 27 North American and European wood-industry companies, of which 11 were chosen for closer study. The study combined the organisational-capabilities approach to strategic management with corporate-entrepreneurship thinking. It charted the further development of an identification and classification system for capabilities comprising three dimensions: (i) the dynamism between firm-specific and industry-significant capabilities, (ii) hierarchies of capabilities and capability portfolios, and (iii) their internal structure. Capability building was analysed in the context of organisational design, technological systems, and the type of resource-bundling process (creating new capabilities vs. entrenching existing ones). The thesis describes the current capability portfolios and the organisational changes in the case companies. It also clarifies the mechanisms through which companies can influence the balance between knowledge search and the efficiency of knowledge transfer and integration in their daily business activities, and consequently the diversity of their capability portfolios and the breadth and novelty of their product/service range. The largest wood-industry companies of today must develop a seemingly dual strategic focus: they have to combine leading-edge, innovative solutions with cost-efficient, large-scale production. The use of modern production technology was no longer a primary source of competitiveness in the case companies, but rather belonged to the portfolio of basic capabilities. Knowledge and information management had become an industry imperative, on a par with cost effectiveness. Yet, during the period of this research, the case companies were better at supporting growth in the volume of existing activities than growth through new economic activities.
Customer-driven, incremental innovation was preferred over firm-driven innovation through experimentation. The three main constraints on organisational renewal were the lack of slack resources, the aim for lean, centralised designs, and the inward-bound communication climate.