51 results for LEVEL SET METHODS
Abstract:
Corporate social responsibility (CSR) is no longer only an issue for companies but a concern shared by, e.g., the European Union, the International Labour Organization, labour market organizations and many others. This thesis examines what kind of voluntary corporate social responsibility, exceeding the minimum level set in legislation, can be expected from Finnish companies. The research was based on interviews with representatives of Finnish companies and of external stakeholders. Earlier Finnish empirical research on the topic has analysed only the stakeholder thinking and the ethics of the views of company representatives. The views of the external stakeholders brought up a much more versatile perspective on the voluntary corporate social responsibility of the companies; that is the particular added value of this research. This research, founded on stakeholder thinking, evaluated what kind of starting points and ideas on responsibility the views of the representatives of the companies and the external stakeholders were based on regarding voluntary social responsibility. Furthermore, the research also investigated how their views about corporate social responsibility reflected the benefits achieved through cooperative actions with different partners - for example companies, communities and public administration. To fulfil the aims of the research, the following questions were used as sub-tasks in mapping the basic foundations and starting points expressed by the representatives of the companies and the external stakeholders: 1) How do laws, guidelines concerning the social responsibility of companies, and the opinions and demands of stakeholders guide and affect voluntary corporate social responsibility? 2) How can companies assume voluntary corporate social responsibility in addition to their core functions and without compromising their profitability, and how does, for example, tightening competition affect the possibility of taking responsibility? 3) What kind of ethical and moral foundations is corporate social responsibility based on? 4) What kind of roles can companies have, as one subsystem of society, in securing and promoting the well-being of citizens in Finland and on the global market? The views on voluntary corporate social responsibility of nine big companies, one medium-sized company and one small company, all considered responsible pioneer companies, were studied with surveys and semi-structured theme interviews between 2003 and 2004. The research proceeded as a theory-bounded study. The empirical material and the previous stakeholder thinking theories (Takala 2000b, Vehkaperä 2003) guided the thesis and worked abductively in interplay with each other during the research process (Tuomi, Sarajärvi 2002). The aims and methods of the research and the themes of the interviews were defined on the basis of that information. The aims of the research were pursued qualitatively with a multiple case study strategy. Representatives from nine big peer companies and nine external stakeholders were interviewed with semi-structured themes between 2004 and 2005. The external stakeholders and the peer companies were chosen following Yin's logic of theoretical replication, according to which the views of the representatives of those groups would differ from those of the pioneer companies and also from each other.
The multiple case study supports analysing the internal cohesion of the views of different groups and comparing their differences, and it also supports theoretical evaluation and theory-building (Yin 2003). Another reason for choosing the external stakeholders was their known cooperation with companies. The spoken argumentation of the company and stakeholder representatives on the voluntary social responsibility of the companies was analysed and interpreted primarily with analytic discourse analysis, and the argumentation was classified indicatively into stakeholder discourses in three of the sub-tasks. In discourse analysis, the argumentation in speech is seen as interwoven with cultural meanings (Jokinen, Juhila 1999). The views of the representatives of the pioneer companies and the external stakeholders were more stakeholder-orientated than the views of the representatives of the peer companies. For the most part, voluntary corporate social responsibility was targeted at single, small cooperation projects between the companies and external stakeholders. The pioneer companies had more of those projects, and they participated in the projects more actively than the peer companies did. A significant result of this research was the notion that, in particular, the representatives of the pioneer companies and external stakeholders did not consider employing people and paying taxes to be a sufficient form of reciprocal corporate social responsibility. However, they still wanted to preserve the Finnish welfare model, and the interviewees did not wish for major changes in the present legislation or social agreements. According to this study, voluntary corporate social responsibility is motivated by ethical utilitarianism, which varied from very narrow to very wide in relation to the benefits achieved by companies and stakeholders (Velasquez 2002, Lagerspetz 2004). Compared with the peer companies, more of the representatives of the pioneer companies and of the external stakeholders estimated that companies in their decision-making and operations considered not only the advantages and benefits of the owners and other internal stakeholders, but also those of the external stakeholders and of society as a whole. However, all interviewees expressed more or less strongly that economic responsibility guides the voluntary responsible actions of the companies in the first place. This kind of utilitarian foundation of behaviour that emerged from this research was named business-orientated company morality. This thesis also presents a new voluntary corporate social responsibility model with four variables based on the stakeholder discourses and their distinctive characteristics. The utilitarian motivation of a company's behaviour in its operations has been criticized on the grounds that the end justifies the means. It has also been stated that it is impossible to evaluate the benefits of utilitarian actions to individuals and society. It is expected, however, that companies for their part promote the material and immaterial well-being of individuals on the global, national and local markets. The expectations are so strong that if companies do not take ethical and moral values into account, they can possibly suffer significant financial losses. All stakeholders, especially consumers, can with their own choices promote the responsible behaviour of companies.
Key words: voluntary corporate social responsibility, external stakeholders, corporate citizenship, ethics and morality, utilitarianism, stakeholder discourses, welfare society, globalisation
Abstract:
Leaf and needle biomasses are key factors in forest health. Insects that feed on needles cause growth losses and tree mortality. Insect outbreaks in Finnish forests have increased rapidly during the last decade and, due to climate change, the damage is expected to become more serious. There is a need for cost-efficient methods for inventorying these outbreaks. Remote sensing is a promising means of assessing forests and forest damage. The purpose of this study is to investigate the usability of airborne laser scanning in estimating Scots pine defoliation caused by the common pine sawfly (Diprion pini L.). The study area is situated in the Ilomantsi district, eastern Finland. The study materials included high-pulse airborne laser scannings from July and October 2008. Reference data consisted of 90 circular field plots measured in May-June 2009. The defoliation percentage on these field plots was estimated visually. The study was made at plot level, and the methods used were linear regression, unsupervised classification, the maximum likelihood method, and stepwise linear regression. Field plots were divided into defoliation classes in two different ways: with two classes, the defoliation percentages used were 0–20% and 20–100%; with four classes, 0–10%, 10–20%, 20–30% and 30–100%. The results varied depending on the method and the laser scanning. For the first laser scanning, the best results were obtained with stepwise linear regression; the kappa value was 0.47 with two classes and 0.37 with four classes. For the second laser scanning, the best results were obtained with the maximum likelihood method; the kappa values were 0.42 and 0.37, respectively. The feature that explained defoliation best was a vegetation index (pulses reflected from heights > 2 m / all pulses). There was no significant difference in the results between the two laser scannings, so the seasonal change in defoliation could not be detected in this study.
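The vegetation index quoted above is a simple pulse ratio per plot. A minimal sketch of computing such a plot-level index and a kappa value follows; the 2 m cutoff and the 0–20% / 20–100% class split come from the abstract, while the pulse data, classification threshold and everything else are invented for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def vegetation_index(heights, cutoff=2.0):
    """Share of laser pulses reflected from above `cutoff` metres (2 m in the study)."""
    return float(np.mean(np.asarray(heights) > cutoff))

# Synthetic plots: defoliation (%) and per-plot pulse heights (m); heavier defoliation
# lets more pulses reach below the 2 m cutoff. None of this is the study's data.
rng = np.random.default_rng(0)
defoliation = rng.uniform(0, 60, size=90)
plots = [rng.uniform(0, 20, size=500) * (1.0 - d / 100.0) for d in defoliation]

vi = np.array([vegetation_index(h) for h in plots])

# Two-class division used in the study: 0-20 % vs 20-100 % defoliation.
observed = (defoliation >= 20).astype(int)
predicted = (vi < 0.875).astype(int)   # threshold tuned for this synthetic setup only

print("kappa:", round(cohen_kappa_score(observed, predicted), 2))
```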
Abstract:
In this thesis, two separate single nucleotide polymorphism (SNP) genotyping techniques were set up at the Finnish Genome Center, pooled genotyping was evaluated as a screening method for large-scale association studies, and finally, the former approaches were used to identify genetic factors predisposing to two distinct complex diseases by utilizing large epidemiological cohorts and also taking environmental factors into account. The first genotyping platform was based on traditional but improved restriction fragment length polymorphism (RFLP) analysis utilizing 384-well microtiter plates, multiplexing, small reaction volumes (5 µl), and automated genotype calling. We participated in the development of the second genotyping method, based on single nucleotide primer extension (SNuPe™ by Amersham Biosciences), by carrying out the alpha and beta tests for the chemistry and the allele-calling software. Both techniques proved to be accurate, reliable, and suitable for projects with thousands of samples and tens of markers. Pooled genotyping (genotyping of pooled instead of individual DNA samples) was evaluated with Sequenom's MassArray MALDI-TOF, in addition to the SNuPe™ and PCR-RFLP techniques. We used MassArray mainly as a point of comparison, because it is known to be well suited for pooled genotyping. All three methods were shown to be accurate, the standard deviations between measurements being 0.017 for the MassArray, 0.022 for the PCR-RFLP, and 0.026 for the SNuPe™. The largest source of error in the process of pooled genotyping was shown to be the volumetric error, i.e., the preparation of pools. We also demonstrated that it would have been possible to narrow down the genetic locus underlying congenital chloride diarrhea (CLD), an autosomal recessive disorder, by using the pooling technique instead of genotyping individual samples. Although the approach seems to be well suited for traditional case-control studies, it is difficult to apply if any kind of stratification based on environmental factors is needed. Therefore we chose to continue with individual genotyping in the following association studies. Samples in the two separate large epidemiological cohorts were genotyped with the PCR-RFLP and SNuPe™ techniques. The first of these association studies concerned various pregnancy complications among 100,000 consecutive pregnancies in Finland, of which we genotyped 2292 patients and controls, in addition to a population sample of 644 blood donors, with seven polymorphisms in potentially thrombotic genes. In this thesis, the analysis of a sub-study of pregnancy-related venous thromboses was included. We showed that the impact of the factor V Leiden polymorphism, but not of the other tested polymorphisms, on pregnancy-related venous thrombosis was fairly large (odds ratio 11.6; 95% CI 3.6-33.6), and increased multiplicatively when combined with other risk factors such as obesity or advanced age. Owing to our study design, we were also able to estimate the risks at the population level. The second epidemiological cohort was the Helsinki Birth Cohort of men and women who were born during 1924-1933 in Helsinki. The aim was to identify genetic factors that might modify the well-known link between small birth size and adult metabolic diseases, such as type 2 diabetes and impaired glucose tolerance.
Among ~500 individuals with detailed birth measurements and a current metabolic profile, we found that an insertion/deletion polymorphism of the angiotensin converting enzyme (ACE) gene was associated with the duration of gestation, and with weight and length at birth. Interestingly, the ACE insertion allele was also associated with higher indices of insulin secretion (p=0.0004) in adult life, but only among individuals who were born small (those in the lowest third of birth weight). Likewise, low birth weight was associated with higher indices of insulin secretion (p=0.003), but only among carriers of the ACE insertion allele. An association with birth measurements was also found for a common haplotype of the glucocorticoid receptor (GR) gene. Furthermore, the association between short length at birth and adult impaired glucose tolerance was confined to carriers of this haplotype (p=0.007). These associations exemplify the interaction between environmental factors and genotype, which, possibly due to altered gene expression, predisposes to complex metabolic diseases. Indeed, we showed that the common GR gene haplotype was associated with reduced mRNA expression in the thymus of three individuals (p=0.0002).
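The odds ratio and confidence interval quoted for factor V Leiden come from a standard 2x2-table calculation. A minimal sketch of that calculation follows; only the formulas are standard, and the cell counts below are invented for illustration, not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: carriers / non-carriers of factor V Leiden among women
# with and without pregnancy-related venous thrombosis.
print(odds_ratio_ci(a=12, b=20, c=30, d=580))
```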
Abstract:
This thesis presents methods for locating and analyzing cis-regulatory DNA elements involved in the regulation of gene expression in multicellular organisms. The regulation of gene expression is carried out by the combined effort of several transcription factor proteins collectively binding the DNA on the cis-regulatory elements. Only sparse knowledge of the 'genetic code' of these elements exists today. An automatic tool for the discovery of putative cis-regulatory elements could help their experimental analysis, which would result in a more detailed view of cis-regulatory element structure and function. We have developed a computational model for the evolutionary conservation of cis-regulatory elements. The elements are modeled as evolutionarily conserved clusters of sequence-specific transcription factor binding sites. We give an efficient dynamic programming algorithm that locates the putative cis-regulatory elements and scores them according to the conservation model. A notable proportion of the high-scoring DNA sequences show transcriptional enhancer activity in transgenic mouse embryos. The conservation model includes four parameters whose optimal values are estimated with simulated annealing. With good parameter values the model discriminates well between DNA sequences with evolutionarily conserved cis-regulatory elements and DNA sequences that have evolved neutrally. In further inquiry, the set of highest-scoring putative cis-regulatory elements was found to be sensitive to small variations in the parameter values. The statistical significance of the putative cis-regulatory elements is estimated with the two-component extreme value distribution. The p-values grade the conservation of the cis-regulatory elements above the neutral expectation. The parameter values for the distribution are estimated by simulating neutral DNA evolution. The conservation of the transcription factor binding sites can be used in the upstream analysis of regulatory interactions. This approach may provide mechanistic insight into transcription-level data from, e.g., microarray experiments. Here we give a method to predict shared transcriptional regulators for a set of co-expressed genes. The EEL (Enhancer Element Locator) software implements the method for locating putative cis-regulatory elements. The software facilitates both interactive use and distributed batch processing. We have used it to analyze the non-coding regions around all human genes with respect to the orthologous regions in various other species, including mouse. The data from these genome-wide analyses are stored in a relational database which is used in the publicly available web services for upstream analysis and visualization of the putative cis-regulatory elements in the human genome.
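The dynamic programming idea, finding the highest-scoring colinear chain of binding sites shared between two species, can be illustrated in a much-simplified form. The sketch below is not the EEL scoring model; it is a toy chaining algorithm over predicted sites, and the factor names, positions, scores and gap penalty are invented.

```python
# Toy chaining of conserved transcription-factor binding sites: find the best
# colinear chain of same-factor site pairs in two orthologous sequences.
def chain_sites(sites_a, sites_b, gap_penalty=0.1):
    """sites_*: lists of (position, factor, score). Returns the best chain score."""
    pairs = [(pa, pb, sa + sb)
             for pa, fa, sa in sites_a
             for pb, fb, sb in sites_b if fa == fb]
    pairs.sort()
    best, dp = 0.0, []
    for i, (pa, pb, s) in enumerate(pairs):
        dp.append(s)
        for j in range(i):
            qa, qb, _ = pairs[j]
            if qa < pa and qb < pb:                       # colinearity constraint
                gap = gap_penalty * abs((pa - qa) - (pb - qb))
                dp[i] = max(dp[i], dp[j] + s - gap)
        best = max(best, dp[i])
    return best

human = [(10, "SOX2", 1.2), (40, "PAX6", 0.8), (90, "OTX2", 1.0)]
mouse = [(15, "SOX2", 1.1), (55, "PAX6", 0.9), (70, "OTX2", 0.7)]
print(chain_sites(human, mouse))
```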
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1–4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
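As an idea of the kind of simple empirical clear-sky shortwave scheme such an intercomparison covers, a generic bulk-transmissivity formula can be sketched. This is an illustrative textbook-style form, not one of the specific schemes validated in the thesis, and the transmissivity value is an assumption.

```python
import math

S0 = 1361.0  # solar constant, W m^-2

def clear_sky_shortwave(cos_zenith, transmissivity=0.75):
    """Very simple empirical clear-sky surface shortwave flux: the beam is attenuated
    by a bulk transmissivity raised to the optical air-mass power (1 / cos Z)."""
    if cos_zenith <= 0.0:
        return 0.0
    return S0 * cos_zenith * transmissivity ** (1.0 / cos_zenith)

for zen_deg in (0, 30, 60, 80):
    cz = math.cos(math.radians(zen_deg))
    print(f"zenith {zen_deg:2d} deg -> {clear_sky_shortwave(cz):6.1f} W/m2")
```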
Abstract:
This thesis presents ab initio studies of two kinds of physical systems, quantum dots and bosons, using two program packages, of which the bosonic one has mainly been developed by the author. The implemented models, i.e., configuration interaction (CI) and coupled cluster (CC), take the correlated motion of the particles into account and provide a hierarchy of computational schemes, at the top of which the exact solution, within the limit of the single-particle basis set, is obtained. The theory underlying the models is presented in some detail, in order to provide insight into the approximations made and the circumstances under which they hold. Some of the computational methods are also highlighted. In the final sections the results are summarized. The CI and CC calculations on multiexciton complexes in self-assembled semiconductor quantum dots are presented and compared, along with radiative and non-radiative transition rates. Full CI calculations on quantum rings and double quantum rings are also presented. In the latter case, experimental and theoretical results from the literature are re-examined and an alternative explanation for the reported photoluminescence spectra is found. The boson program is first applied to a fictitious model system consisting of bosonic electrons in a central Coulomb field, for which CI at the singles and doubles level is found to account for almost all of the correlation energy. Finally, the boson program is employed to study Bose-Einstein condensates confined in different anisotropic trap potentials. The effects of the anisotropy on the relative correlation energy are examined, as well as the effect of varying the interaction potential.
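The statement that full CI gives the exact solution within the limit of the basis set can be illustrated with a toy exact diagonalization. The two-mode bosonic model below is not one of the systems studied in the thesis; it is only a minimal example of building a many-body Hamiltonian in a finite Fock basis and diagonalizing it exactly.

```python
import numpy as np

def two_mode_boson_hamiltonian(n_bosons, hopping=1.0, interaction=0.5):
    """H = -J (a1^dag a2 + h.c.) + U/2 * sum_i n_i (n_i - 1) in the Fock basis |n, N-n>.
    Full diagonalization is exact within this finite basis."""
    dim = n_bosons + 1
    H = np.zeros((dim, dim))
    for n in range(dim):                       # n bosons in mode 1, N - n in mode 2
        m = n_bosons - n
        H[n, n] = 0.5 * interaction * (n * (n - 1) + m * (m - 1))
        if n < n_bosons:                       # a1^dag a2 moves one boson from mode 2 to mode 1
            H[n + 1, n] = -hopping * np.sqrt((n + 1) * m)
            H[n, n + 1] = H[n + 1, n]
    return H

H = two_mode_boson_hamiltonian(n_bosons=4)
energies = np.linalg.eigvalsh(H)
print("ground-state energy:", energies[0])
```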
Abstract:
In this dissertation, I present an overall methodological framework for studying linguistic alternations, focusing specifically on lexical variation in denoting a single meaning, that is, synonymy. As a practical example, I employ the synonymous set of the four most common Finnish verbs denoting THINK, namely ajatella, miettiä, pohtia and harkita ‘think, reflect, ponder, consider’. As a continuation of previous work, I describe in considerable detail the extension of statistical methods from dichotomous linguistic settings (e.g., Gries 2003; Bresnan et al. 2007) to polytomous ones, that is, settings concerning more than two possible alternative outcomes. The applied statistical methods are arranged into a succession of stages of increasing complexity, proceeding from univariate via bivariate to multivariate techniques. As the central multivariate method, I argue for the use of polytomous logistic regression and demonstrate its practical implementation for the studied phenomenon, thus extending the work by Bresnan et al. (2007), who applied simple (binary) logistic regression to a dichotomous structural alternation in English. The results of the various statistical analyses confirm that a wide range of contextual features across different categories are indeed associated with the use and selection of the studied THINK lexemes; however, a substantial part of these features are not exemplified in current Finnish lexicographical descriptions. The multivariate analysis results indicate that the semantic classifications of syntactic argument types are on average the most distinctive feature category, followed by overall semantic characterizations of the verb chains, and then syntactic argument types alone, with morphological features pertaining to the verb chain and extra-linguistic features relegated to the last position. In terms of the overall performance of the multivariate analysis and modeling, the prediction accuracy seems to reach a ceiling at a recall rate of roughly two-thirds of the sentences in the research corpus. The analysis of these results suggests a limit to what can be explained and determined within the immediate sentential context using the conventional descriptive and analytical apparatus based on currently available linguistic theories and models. The results also support Bresnan’s (2007) and others’ (e.g., Bod et al. 2003) probabilistic view of the relationship between linguistic usage and the underlying linguistic system, in which only a minority of linguistic choices are categorical, given the known context – represented as a feature cluster – that can be analytically grasped and identified. Instead, most contexts exhibit degrees of variation as to their outcomes, resulting in proportionate choices over longer stretches of usage in texts or speech.
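Polytomous (multinomial) logistic regression of the kind argued for here can be sketched directly. In the sketch below only the modelling setup follows the abstract; the contextual features and the randomly assigned verb outcomes are invented stand-ins for real corpus annotations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

verbs = ["ajatella", "miettiä", "pohtia", "harkita"]

# Invented binary contextual features per sentence, e.g.
# [agent is human, patient is an activity, verb chain contains a modal, register is formal].
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(400, 4))
y = rng.choice(verbs, size=400)   # stand-in outcomes; real data would come from an annotated corpus

# With a multi-class target, scikit-learn's default lbfgs solver fits a
# multinomial (polytomous) logistic regression model.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

new_context = [[1, 0, 1, 0]]
print(dict(zip(model.classes_, model.predict_proba(new_context)[0].round(3))))
```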
Abstract:
In visual search, one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, together with the contribution of different stimulus properties, such as luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability. It reflects the time used for visual processing, separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and the perceptual span, i.e., the number of items that can be processed during a single fixation, were measured. It was found that search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e., within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within an object type, especially at very low contrasts. In addition, the size and shape of the perceptual span explained the changes in search performance for different stimulus layouts in word search. The perceptual span was scale invariant over a 16-fold range of stimulus sizes, i.e., the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance. It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within the perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility and the resolution requirements for the identification of different object types, are proposed to constrain the size of the perceptual span and thus limit visual search performance. Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data on the effects of different stimulus properties on visual search in normal subjects were presented as simple practical guidelines, so that the limits of human visual perception can be taken into account in the design of user interfaces.
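An adaptive staircase controlling presentation duration can be sketched as a simple up-down rule. The simulated observer, step size and stopping rule below are invented; only the general staircase logic is shown, not the specific procedure used in the thesis.

```python
import random

def simulated_observer(duration_ms, threshold_ms=180.0):
    """Toy observer: probability of finding the target grows with presentation time."""
    p_correct = min(0.98, duration_ms / (duration_ms + threshold_ms))
    return random.random() < p_correct

def staircase(start_ms=500.0, step=0.8, n_trials=60):
    """Multiplicative 1-up/1-down staircase on stimulus duration:
    shorten after a correct response, lengthen after an error."""
    duration, reversals, last_correct = start_ms, [], None
    for _ in range(n_trials):
        correct = simulated_observer(duration)
        if last_correct is not None and correct != last_correct:
            reversals.append(duration)
        duration = duration * step if correct else duration / step
        last_correct = correct
    tail = reversals[-6:] or [duration]      # threshold estimate from the last reversals
    return sum(tail) / len(tail)

random.seed(2)
print("threshold search time estimate (ms):", round(staircase(), 1))
```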
Abstract:
This study concerns the implementation of steering by contracting in health care units and in the work of the doctors employed by them. The study analyses how contracting as a process is being implemented in hospital district units, health centres and the work of their doctors, as well as how these units carry out their operations and patient care within the restrictions set by the contracts. Based on interviews with doctors, the study analyses the realisation of operations within the units from the doctors' perspective and through their work. The key result of the study is that the steering impact of contracting was not felt at the level of practical work. The contracting was implemented by assigning the related tasks to management only. The management implemented the contract by managing their resources rather than by intervening in doctors' activities or the content of their tasks. The steering did not extend to improving practical care processes. This allowed core operations to continue unchanged in an autonomous manner, in part protected from the impacts of contracting. In health centres, the contract concluded was viewed as merely steering the operations of the hospital district, and its implementation did not receive the support of the centres. The fact that primary health care and specialised health care constitute separate contracting parties had adverse effects on the contract's implementation and on the integration of care. A theoretical review unveiled several reasons for the failure of steering by contracting to alter operations within units. These included the perception of steering by contracting as a weak change incentive. The doctors shunned the introduction of an economic logic and ideology into health care and viewed steering by contracting as a hindrance to delivering care to patients and a disturbance to their work and patient relationships. Contracting caused tensions between representatives of the financial administration and health care professionals. It also caused internal tensions, as it had varying impacts on different specialities, including varying potential to influence the contracts. Most factors preventing the realisation of the steering objective could have been ameliorated through positive leadership. There is a need to bridge the gap between financial steering and patient work. Key measures include encouraging the commitment of middle management, supporting leadership expertise and identifying the right methods of contributing to a mutual understanding between the cultures of financing, administration and health care. Criticism of the purchasers' expertise and the view that undersized orders are due to the purchasers' financial difficulties underline the importance of the purchaser's size. Overly detailed, product-based contracts seemed to place the focus on the quantities and costs of services rather than on health impacts and the efficiency of operations. Bundling contracts into larger service packages would encourage the enhancement of operations. Steering by contracting represents unexploited potential: it could function as a forum for integrated regional planning of services and for the prioritisation and integration of care, and offer an opportunity and an incentive for developing core operations.
Abstract:
Type 1 diabetes is a disease in which the insulin-producing beta cells of the pancreas are destroyed by an autoimmune mechanism. The incidence of type 1 diabetes, as well as the incidence of the diabetic kidney complication, diabetic nephropathy, is increasing worldwide. Nephrin is a crucial molecule for the filtration function of the kidney. It localises in the podocyte foot processes, partially forming the final interpodocyte sieve of the filtration barrier, the slit diaphragm. The expression of nephrin is altered in diabetic nephropathy. Recently, nephrin was found in the beta cells of the pancreas as well, which makes this molecule interesting in the context of type 1 diabetes and especially diabetic nephropathy. In this thesis work, the expression of podocyte molecules other than nephrin in the beta cells of the pancreas was examined. It was also hypothesised that patients with type 1 diabetes may develop autoantibodies against novel beta cell molecules, comparably to the formation of autoantibodies to GAD, IA-2 and insulin. The possible association of such novel autoantibodies with the pathogenesis of diabetic nephropathy was also assessed. Furthermore, expression of nephrin in lymphoid tissues has been suggested, and this issue was examined more thoroughly here. The expression of nephrin in human lymphoid tissues, and of a set of podocyte molecules in the human, mouse and rat pancreas at the gene and protein level, was studied by polymerase chain reaction (PCR) -based methods and immunochemical methods. To detect autoantibodies to novel beta cell molecules, specific radioimmunoprecipitation assays were developed. These assays were used to screen a follow-up material of 66 patients with type 1 diabetes and a patient material of 150 diabetic patients with signs of diabetic nephropathy. Nephrin expression was detected in the lymphoid follicle germinal centres, specifically in the follicular dendritic cells. In addition to the previously reported expression of nephrin in the pancreas, expression of the podocyte molecules densin, filtrin, FAT and alpha-actinin-4 was detected in the beta cells. Circulating antibodies to nephrin, densin and filtrin were discovered in a subset of patients with type 1 diabetes. However, no association of these autoantibodies with the pathogenesis of diabetic nephropathy was detected. In conclusion, the expression of five podocyte molecules in the beta cells of the pancreas suggests some molecular similarities between the two cell types. The novel autoantibodies against shared molecules of the kidney podocytes and the pancreatic beta cells appear to be part of the common autoimmune mechanism in patients with type 1 diabetes. No data suggested that the autoantibodies are active participants in the kidney injury detected in diabetic nephropathy.
Variation in tracheid cross-sectional dimensions and wood viscoelasticity: extent and control methods
Abstract:
Printing papers have been the main product of the Finnish paper industry. To improve the properties and economy of printing papers, the control of tracheid cross-sectional dimensions and wood viscoelasticity is examined in this study. Control is understood as any procedure which yields raw material classes with distinct properties and small internal variation. Tracheid cross-sectional dimensions, i.e., cell wall thickness and radial and tangential diameters, can be controlled with methods such as sorting wood into pulpwood and sawmill chips, sorting of logs according to tree social status, and fractionation of fibres. These control methods were analysed in this study with simulations based on measured tracheid cross-sectional dimensions. A SilviScan device was used to measure the data set from five Norway spruce (Picea abies) and five Scots pine (Pinus sylvestris) trunks. The simulation results indicate that the sawmill chip and top pulpwood assortments have quite similar cross-sectional dimensions. Norway spruce and Scots pine are, on average, also relatively similar in their cross-sectional dimensions. The distributions of these species are somewhat different, but from a practical point of view the differences are probably of minor importance. Control of tracheid cross-sectional dimensions can be achieved most efficiently with methods that can separate fibres into earlywood and latewood. Sorting of logs or partitioning of logs into juvenile and mature wood were markedly less efficient control methods than fractionation of fibres. Wood viscoelasticity affects energy consumption in mechanical pulping, and is thus an interesting control target when improving the energy efficiency of the process. A literature study was made to evaluate the possibility of using viscoelasticity in controlling. The study indicates that there is considerable variation in viscoelastic properties within tree species, but unfortunately the viscoelastic properties of important raw material lots, such as top pulpwood or sawmill chips, are not known. The viscoelastic properties of wood depend mainly on lignin, but also on the microfibrillar angle, the width of cellulose crystals and tracheid cross-sectional dimensions.
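The finding that fibre fractionation controls cross-sectional dimensions more efficiently than log sorting can be illustrated by comparing within-class variation. The wall-thickness values, class means and threshold below are synthetic, not SilviScan measurements from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cell wall thicknesses (um): a mix of thin-walled earlywood and thick-walled
# latewood tracheids, with a similar mix assumed in both raw material lots.
def synthetic_walls(n=2000, latewood_share=0.3):
    early = rng.normal(2.2, 0.4, size=int(n * (1 - latewood_share)))
    late = rng.normal(4.5, 0.6, size=int(n * latewood_share))
    return np.concatenate([early, late])

pulpwood, chips = synthetic_walls(), synthetic_walls()

# Control by sorting logs: each class still contains both earlywood and latewood fibres.
sorted_sd = np.mean([pulpwood.std(), chips.std()])

# Control by fibre fractionation: split all fibres at a wall-thickness threshold.
all_fibres = np.concatenate([pulpwood, chips])
threshold = 3.3
fractionated_sd = np.mean([all_fibres[all_fibres < threshold].std(),
                           all_fibres[all_fibres >= threshold].std()])

print(f"within-class SD, log sorting:   {sorted_sd:.2f} um")
print(f"within-class SD, fractionation: {fractionated_sd:.2f} um")
```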
Abstract:
There is an increasing need to compare the results obtained with different methods of estimating tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) stand (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)-Scots pine stand (Mature-Stand) located in southern Finland (61°50' N, 24°22' E). In particular, a comparison of the results of different estimation methods was conducted to assess the reliability and suitability of their applications. For the trees in Mature-Stand, the annual stem biomass increment varied following a sigmoid equation, and the fitted curves reached a maximum level (from about 1 kg/yr for understorey spruce to 7 kg/yr for dominant pine) when the trees were 100 years old. Tree biomass was estimated to be about 70 Mg/ha in Young-Stand and about 220 Mg/ha in Mature-Stand. In the region (58.00–62.13 °N, 14–34 °E, ≤ 300 m a.s.l.) surrounding the study stands, the tree biomass accumulation in Norway spruce and Scots pine stands followed a sigmoid equation with stand age, with a maximum of 230 Mg/ha at the age of 140 years. In Mature-Stand, the lichen biomass on the trees was 1.63 Mg/ha, with more than half of the biomass occurring on dead branches, and the standing crop of litter lichen on the ground was about 0.09 Mg/ha. There were substantial differences among the results estimated by the different methods in the stands. These results imply that a possible estimation error should be taken into account when calculating tree biomass in a stand with an indirect approach.
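The sigmoid accumulation of stand biomass with age can be sketched with an ordinary logistic fit. The data points below are invented; only the sigmoid functional form (one of several possible choices) and the rough plateau of ~230 Mg/ha near 140 years echo the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(age, asymptote, rate, midpoint):
    """Logistic curve for stand biomass (Mg/ha) as a function of stand age (years)."""
    return asymptote / (1.0 + np.exp(-rate * (age - midpoint)))

# Invented stand-level observations, roughly consistent with a ~230 Mg/ha plateau at ~140 years.
age = np.array([20, 30, 50, 70, 90, 110, 130, 140, 150])
biomass = np.array([30, 70, 130, 175, 205, 220, 228, 230, 231])

params, _ = curve_fit(sigmoid, age, biomass, p0=[230, 0.05, 60])
print("asymptote %.0f Mg/ha, rate %.3f /yr, midpoint %.0f yr" % tuple(params))
```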
Abstract:
The Department of Forest Resource Management at the University of Helsinki carried out the SIMO project in 2004–2007 to develop a new-generation planning system for forest management. The project parties are organisations that do most of Finnish forest planning in state, industry and privately owned forests. The aim of this study was to find out the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes in today's forest planning. Representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Because of variable data acquisition methods with different accuracies and sources, and the development of single-tree interpretation, more and more forest data are collected without field work. The benefits of using more specific forest data also call for the use of information units smaller than the tree stand. In Finland, the traditional way of arranging forest planning computation is divided into two elements. After updating the forest data to the present situation, every stand unit's growth is simulated under different alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management programme satisfies the owner's goals in the best possible way. This arrangement will be maintained in the future system. The parties' requirements to add multi-criteria problem solving, group decision support methods, and heuristic and spatial optimisation into the system make the programming work more challenging. Generally, the new system is expected to be adjustable and transparent. Strict documentation and free source code help to bring these expectations into effect. Variable growth models and treatment schedules with different source information, accuracy, methods and processing speed are expected to work easily in the system. Possibilities to calibrate models regionally and to set local parameters that change in time are also required. In the future, the forest planning system will be integrated into comprehensive data management systems together with geographic, economic and work supervision information. This requires a modular method of implementing the system and the use of a simple data transmission interface between modules and with other systems. No major differences in the parties' views of the system requirements were noticed in this study; rather, the interviews completed the full picture from slightly different angles. In the organisations, forest planning is considered quite inflexible and it only draws the strategic lines. It does not yet have a role in operative activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities of variable forest data, new planning goals and the development of information technology are recognised. The party organisations want to keep pace with development; one example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
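The two-stage arrangement described above, simulating alternative treatment schedules for each stand and then letting optimisation pick one schedule per stand to satisfy the owner's goals, can be sketched minimally. The stand data, schedule outcomes and the weighted-sum objective below are invented placeholders, not part of the SIMO system.

```python
# Minimal sketch of the simulate-then-optimise structure of stand-level forest planning:
# each stand has several simulated treatment schedules; optimisation picks one per stand.

stands = {
    "stand_1": [  # (schedule name, net present value €/ha, carbon stock change t/ha)
        ("no thinning",      1200.0,  8.0),
        ("thinning at 30 y", 1500.0,  3.0),
        ("clearcut at 60 y", 1700.0, -5.0),
    ],
    "stand_2": [
        ("no thinning",       800.0,  6.0),
        ("energy wood thin",  950.0,  4.0),
    ],
}

def owner_utility(npv, carbon, npv_weight=1.0, carbon_weight=20.0):
    """Toy multi-criteria goal: weighted sum of income and carbon benefit."""
    return npv_weight * npv + carbon_weight * carbon

plan = {
    stand: max(schedules, key=lambda s: owner_utility(s[1], s[2]))[0]
    for stand, schedules in stands.items()
}
print(plan)
```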
Abstract:
The aim of this study was to compare the differences between forest management incorporating energy wood thinning and forest management based on silvicultural recommendations (the baseline). Energy wood thinning was substituted for young stand thinning and the first commercial thinning of industrial wood. The study was based on forest stand data from southern Finland, which were simulated with the MOTTI simulator. The main interest was to find out the climatic benefits resulting from carbon sequestration and energy substitution. The value of energy wood was set so that it substitutes for coal as an alternative fuel (emissions trading). Other political instruments (Kemera subsidies) were also analysed. The largest carbon dioxide emission reductions were achieved as a combination of carbon sequestration and energy substitution (on average, a 26–90% increase in discounted present value at the beginning of the rotation) compared with the baseline. Energy substitution increased emission reductions more effectively than carbon sequestration when maintaining dense young stands. According to the study, energy wood thinning as part of forest management was more profitable than the baseline when the value of carbon dioxide averaged more than 15 €/CO2 and other political subsidies were unchanged. Alternatively, the price of energy wood should on average exceed 21 €/m3 at the roadside in order to be profitable in the absence of political instruments. The most cost-efficient use of energy wood thinning occurred when the dominant height was 12 metres, when energy substitution was taken into account. According to the alternative forest management, thinning of sapling stands could be done earlier or less intensively than thinning based on silvicultural recommendations and the present subsidy criteria. Consequently, the first commercial thinning could be profitable to carry out either as harvesting of industrial wood or energy wood, or as integrated harvesting, depending on the costs of the available harvesting methods and the price level of small-sized industrial wood compared with energy wood.
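The profitability comparison rests on discounting the cash flows of the two regimes. A minimal sketch follows, in which the cash flows, removal volumes and the 3% discount rate are invented; only the 15 €/CO2 carbon value and the 21 €/m3 roadside energy-wood price come from the abstract (the abstract does not state the CO2 unit, so a tonne is assumed here).

```python
def present_value(cash_flows, rate=0.03):
    """Discounted present value of (year, euros per hectare) cash flows."""
    return sum(euros / (1.0 + rate) ** year for year, euros in cash_flows)

carbon_price = 15.0       # €/CO2 from the abstract (assumed to be per tonne; unit not stated)
energy_wood_price = 21.0  # €/m3 at the roadside, from the abstract

# Invented per-hectare cash flows for the two regimes.
baseline = [(15, -400.0), (30, 1200.0), (60, 9000.0)]   # tending cost, first thinning, final felling
energy_thinning = [
    (15, 40.0 * energy_wood_price),                      # energy wood removal of 40 m3/ha (invented)
    (15, 30.0 * carbon_price),                           # credited emission reduction of 30 t CO2/ha (invented)
    (60, 9000.0),                                        # final felling (invented)
]

print("baseline DPV:        %.0f €/ha" % present_value(baseline))
print("energy thinning DPV: %.0f €/ha" % present_value(energy_thinning))
```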
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it needs experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data were divided into two parts, and 5,074 measurements from 37 cows were used to train the model. The model was evaluated for its ability to detect lameness in the validation dataset, which had 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
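A probabilistic neural network of the kind named above is essentially a Gaussian Parzen-window classifier. The sketch below shows that general idea with invented leg-load features (a load-imbalance measure and a kick count); it is not the thesis's measurement data, feature set or model parameters.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_new, sigma=0.5):
    """Probabilistic neural network: each class density is the mean of Gaussian kernels
    centred on that class's training samples; predict the class with the highest density."""
    classes = np.unique(y_train)
    preds = []
    for x in X_new:
        scores = []
        for c in classes:
            diffs = X_train[y_train == c] - x
            kernels = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2))
            scores.append(kernels.mean())
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Invented features per milking: [imbalance of the four leg loads (normalised), kicks during milking].
rng = np.random.default_rng(4)
sound = np.column_stack([rng.normal(0.1, 0.03, 200), rng.poisson(0.5, 200)])
lame  = np.column_stack([rng.normal(0.3, 0.05, 40),  rng.poisson(2.0, 40)])
X = np.vstack([sound, lame])
y = np.array(["sound"] * 200 + ["lame"] * 40)

test = np.array([[0.12, 0], [0.35, 3]])
print(pnn_predict(X, y, test))
```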