15 results for Process model
in Helda - Digital Repository of the University of Helsinki
Abstract:
The ProFacil model is a generic process model, defined as a framework model, that shows the links between the facilities management process and the building end user's business process. The purpose of the model is to support more detailed process modelling. The model has been developed using the IDEF0 modelling method. It describes business activities from a generalized point of view as management, support, and core processes and their relations, and it defines the basic activities in the provision of a facility. Examples of these activities are "operate facilities", "provide new facilities", "provide re-build facilities", "provide maintained facilities" and "perform dispose of facilities". These are all generic activities that provide a basis for further specialisation into company-specific FM activities and their tasks. A facilitator can establish a specialized process model by using the ProFacil model while interacting with company experts to describe their company's specific processes. These modelling seminars or interviews are conducted informally, supported by the high-level process model as a common reference.
Abstract:
A model of the information and material activities that comprise the overall construction process is presented, using the SADT activity modelling methodology. The basic model is further refined into a number of generic information-handling activities, such as creation of new information, information search and retrieval, information distribution, and person-to-person communication; the viewpoint could be described as information logistics. This model is then combined with a more traditional building process model consisting of phases such as design and construction. The resulting two-dimensional matrix can be used for positioning different types of generic IT tools or construction-specific applications. The model can thus provide a starting point for discussing the application of information and communication technology in construction and for measuring the impacts of IT on the overall process and its related costs.
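To make the matrix idea concrete, here is a minimal sketch (in Python) of positioning IT tools in a two-dimensional grid of generic information-handling activities crossed with building process phases. The activity and phase labels follow the abstract above, but the cell contents and the helper function are illustrative assumptions, not entries taken from the paper.

```python
# Sketch: the two-dimensional positioning matrix as a sparse mapping.
# Cell contents below are illustrative examples, not the paper's own entries.
ACTIVITIES = ["create information", "search/retrieve", "distribute", "communicate"]
PHASES = ["briefing", "design", "construction", "facilities management"]

# (activity, phase) -> example classes of IT tools positioned in that cell
matrix: dict[tuple[str, str], list[str]] = {
    ("create information", "design"): ["CAD", "product modelling"],
    ("search/retrieve", "design"): ["document management systems"],
    ("distribute", "construction"): ["project webs", "EDI"],
    ("communicate", "construction"): ["email", "mobile telephony"],
}

def position(tool: str) -> list[tuple[str, str]]:
    """Return the (activity, phase) cells in which a tool is positioned."""
    return [cell for cell, tools in matrix.items() if tool in tools]

print(position("CAD"))  # [('create information', 'design')]
```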
Abstract:
There has been a demand for uniform CAD standards in the construction industry ever since the large-scale introduction of computer-aided design systems in the late 1980s. While some standards have been widely adopted without much formal effort, others have failed to gain support even though considerable resources have been allocated for the purpose. Establishing a standard for building information modeling has been a particularly active area of industry development and scientific interest in recent years. In this paper, four different standards are discussed as cases: the IGES and DXF/DWG standards for representing the graphics in 2D drawings, the ISO 13567 standard for structuring building information on layers, and the IFC standard for building product models. Based on a literature study combined with two qualitative interview studies with domain experts, a process model is proposed to describe and interpret the contrasting histories of past CAD standardisation processes.
Abstract:
Aerosols impact the planet and our daily lives through various effects, perhaps most notably their climatic and health-related consequences. While there are several primary particle sources, secondary new particle formation from precursor vapors is also known to be a frequent, global phenomenon. Nevertheless, the formation mechanism of new particles, as well as the vapors participating in the process, remain a mystery. This thesis consists of studies on new particle formation, specifically from the point of view of numerical modeling. A dependence of the formation rate of 3 nm particles on the sulphuric acid concentration to the power of 1-2 has been observed. This suggests that the nucleation mechanism is of first or second order with respect to the sulphuric acid concentration, in other words a mechanism based on activation or kinetic collision of clusters. However, model studies have had difficulties in replicating the small exponents observed in nature. The work done in this thesis indicates that the exponents may be lowered by the participation of a co-condensing (and potentially nucleating) low-volatility organic vapor, or by increasing the assumed size of the critical clusters. On the other hand, the new and more accurate method presented here for determining the exponent indicates high diurnal variability. Additionally, these studies included several semi-empirical nucleation rate parameterizations as well as a detailed investigation of the analysis used to determine the apparent particle formation rate. Because they cover such a high proportion of the Earth's surface area, oceans could prove to be climatically significant sources of secondary particles. In the absence of marine observation data, new particle formation events in a coastal region were parameterized and studied. Since the formation mechanism is believed to be similar, the new parameterization was applied in a marine scenario. The work showed that marine CCN production is feasible in the presence of additional vapors contributing to particle growth. Finally, a new method to estimate concentrations of condensing organics was developed. The algorithm uses a Markov chain Monte Carlo method to determine the required combination of vapor concentrations by comparing a measured particle size distribution with one from an aerosol dynamics process model. The evaluation showed excellent agreement with model data, and initial results with field data appear sound as well.
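As a rough illustration of the power-law dependence described above: if the formation rate of 3 nm particles follows J3 = k·[H2SO4]^n, the apparent exponent n can be estimated by linear regression in log-log space. The sketch below uses synthetic data; the rate constant, true exponent and noise level are assumptions, and the thesis' own, more accurate method for determining the exponent is not reproduced here.

```python
# Sketch: estimating the nucleation exponent n from J3 = k * [H2SO4]**n.
import numpy as np

rng = np.random.default_rng(0)
h2so4 = np.logspace(6, 8, 50)       # sulphuric acid concentration, cm^-3 (assumed range)
n_true, k = 1.5, 1e-12              # assumed exponent and rate constant
j3 = k * h2so4**n_true * rng.lognormal(0.0, 0.3, h2so4.size)  # noisy formation rate

# The slope of log(J3) vs log([H2SO4]) is the apparent exponent
n_hat, _ = np.polyfit(np.log(h2so4), np.log(j3), 1)
print(f"estimated exponent n = {n_hat:.2f} (observed values fall between 1 and 2)")
```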
Abstract:
This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations in relation to traffic sources, and at different times of year and day; 2) the evolution of the number concentrations and sizes of traffic exhaust originated particulate matter at the local street scale, studied with a combination of a dispersion model and an aerosol process model; and 3) analysis of some situations of high particulate matter concentrations with regard to their meteorological origins, especially temperature inversions, in the HMA and three other European cities. The prediction of the occurrence of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and the season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate total number concentrations; condensation and coagulation had only a minimal effect on Aitken-mode number concentrations. The air pollution episodes included were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors for elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, stable atmospheric stratification and, in some cases, wind speed. Concerning weather prediction during particulate matter related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
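The dominance of dilution over the first tens of seconds can be illustrated with a deliberately simplified sketch: as the plume dilution ratio grows, the tailpipe number concentration collapses toward the urban background before condensation or coagulation can act appreciably. The concentrations and the linear-in-time dilution law below are assumptions for illustration only, not the thesis' coupled dispersion-aerosol model.

```python
# Sketch: exhaust particle number concentration under pure dilution.
import numpy as np

t = np.linspace(0.1, 60.0, 600)     # time after emission, s
n_tailpipe = 1e8                    # number concentration at the tailpipe, cm^-3 (assumed)
n_background = 1e4                  # urban background concentration, cm^-3 (assumed)
dilution_ratio = 1.0 + 1e3 * t      # illustrative dilution law

n_total = n_tailpipe / dilution_ratio + n_background
for t_probe in (1.0, 30.0):
    i = np.searchsorted(t, t_probe)
    print(f"N at {t_probe:>4.0f} s: {n_total[i]:.2e} cm^-3")
```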
Abstract:
Action, Power and Experience in Organizational Change - A Study of Three Major Corporations. This study explores change management and resistance to change as social activities and power displays through worker experiences in three major Finnish corporations. Two important sensitizing concepts were applied. Firstly, Richard Sennett's perspective on work in the new form of capitalism, and its shortcomings - the lack of commitment and freedom accompanied by the disruption to lifelong career planning and the feeling of job insecurity - offered a fruitful starting point for a critical study. Secondly, Michel Foucault's classical concept of power, treated as anecdotal, interactive and nonmeasurable, provided tools for analyzing change-enabling and resisting acts. The study bridges the gap between management and social sciences: the former have usually concentrated on leadership issues, best practices and goal attainment, while the latter have covered worker experiences, power relations and political conflicts. The study was motivated by three research questions. Firstly, why do people resist or support changes in their work, work environment or organization, and on what kind of analyses are these behavioural choices based? Secondly, what practical forms do support for, and resistance to, change take, and how do people choose between different ways of acting? Thirdly, how do the people involved experience and describe their own subject position and actions in changing environments? The examination focuses on practical interpretations and action descriptions given by the members of three major Finnish business organizations. The empirical data was collected over a two-year period in the Finnish Post Corporation, the Finnish branch of the Vattenfall Group, one of the leading European energy companies, and the Mehiläinen Group, the leading private medical service provider in Finland. It includes 154 non-structured thematic interviews and 309 biographies concentrating on personal experiences of change. All positions and organizational levels were represented. The analysis was conducted using the grounded theory method introduced by Strauss and Corbin, in three sequential phases comprising open, axial and selective coding. The result is a hierarchical structure of categories, summarized in a process model of change behaviour patterns. Its key ingredients are past experiences and future expectations, which lead to different change relations and behavioural roles; ultimately, these contribute to strategic and tactical choices realized as both public and hidden forms of action. The same forms of action can be used both in supporting and in resisting change, and there are no specific dividing lines either between employer and employee roles or between different hierarchical positions. In general, however, strategic choices lead more often to public forms of action, whereas tactical choices result in hidden forms. The primary goal of the study was to provide knowledge with practical applications in everyday business life, HR and change management. The results are therefore highly applicable to other organizations as well as to less change-dominated situations, wherever power relations and conflicting interests are present. A sociological thesis on classical business management issues can be of considerable value in revealing the crucial social processes behind behavioural patterns.
Keywords: change management, organizational development, organizational resistance, resistance to change, labor relations, organization, leadership
Abstract:
Marja Heinonen's dissertation Verkkomedian käyttö ja tutkiminen. Iltalehti Online 1995-2001 describes the usage of the new internet-based news service Iltalehti Online during its first years of existence, 1995-2001. The study focuses on the content of the service and users' attitudes towards the new medium and its contents. Heinonen has also analyzed and described the research methods that can be used to study any new media phenomenon when no historical perspective is available. She has created a process model for the research of a net medium, based on a multidimensional approach, and chose an iterative research method inspired by Sudweeks and Simoff's CEDA methodology, in which qualitative and quantitative methods take turns, each producing both results and new research questions. The dissertation discusses and describes the possibilities of combining several research methods in the study of online news media. On a general level it discusses the methodological possibilities of researching a completely new media form when there is no historical perspective; these discussions come out in favour of multidimensional methods. The empirical research was built around three cases of Iltalehti Online among its users: log analysis 1996-1999, interviews 1999 and clustering 2000-2001. Even though the results of the different cases were somewhat conflicting, the central results from the analysis of Iltalehti Online 1995-2001 are the following: reading was strongly determined by gender; the structure of Iltalehti Online strongly guided reading; people did not make a clear distinction between news content and entertainment content; users created new habits in their everyday life during the first years of using Iltalehti Online, categorized as a break between everyday routines, an established habit, or a new practice within the rhythm of the day; and in the clustering of users, sports, culture and celebrities were the most distinguishing contents, with users moving less across these borders than within them. The dissertation contributes to the development of multidimensional research methods for emerging phenomena in the media field. It is also a unique description of a phase of development in media history, based on unique research material: no comparable information (logs plus demographics) is available for any other Finnish online news medium, either from those first years or today.
Abstract:
In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of process data enables better control of the processes to obtain a high-quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the cases of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical; thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process, and multivariate data analysis (principal component analysis) enabled detection of the dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide insight into film coating; this process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity, as well as to localise defects in either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing, and additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
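As a hedged sketch of the calibration step described above, the snippet below fits a partial least squares (PLS) model mapping NIR spectra to film coating thickness. The spectra are synthetic, and the wavelength grid, thickness range and number of PLS components are assumptions rather than the settings used in the thesis.

```python
# Sketch: PLS calibration of coating thickness from (synthetic) NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 200)   # nm, a typical NIR range (assumed)
thickness = rng.uniform(10, 100, 150)        # reference coating thickness, µm (assumed)

# Synthetic spectra: a thickness-dependent absorption band plus noise
band = np.exp(-((wavelengths - 1700.0) / 80.0) ** 2)
spectra = thickness[:, None] * band + rng.normal(0.0, 0.5, (150, 200))

X_train, X_test, y_train, y_test = train_test_split(spectra, thickness, random_state=0)
pls = PLSRegression(n_components=3).fit(X_train, y_train)
print(f"R^2 on held-out spectra: {pls.score(X_test, y_test):.3f}")
```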
Abstract:
The aim of this dissertation is to provide the social scientist with conceptual tools for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the end equilibrium state from the multiple possible initial states; instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambivalence in the concept of mechanism used in many model-based explanations, and this difference corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions. Thus, evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.
Abstract:
The thesis studies the translation process for the laws of Finland as they are translated from Finnish into Swedish, focusing on revision practices, norms and workplace procedures. The translation process studied covers three institutions and four revisions. In three separate studies, the translation process is analyzed from the perspectives of the translations, the institutions and the actors. The general theoretical framework is Descriptive Translation Studies. For the analysis of revisions made in versions of the Swedish translation of Finnish laws, a model is developed covering five grammatical categories (textual, syntactic, lexical, morphological and content revisions) and four norms (legal adequacy, correct translation, correct language and readability). A separate questionnaire-based study was carried out with translators and revisers at the three institutions. The results show that the number of revisions does not decrease during the translation process, and no division of labour can be seen at the different stages. This is somewhat surprising if the revision process is regarded as one of quality control. Instead, all revisers make revisions at every level of the text. Further, the revisions do not necessarily imply errors in the translations but are often the result of revisers following different norms for legal translation. The informal structure of the institutions and its impact on communication, visibility and workplace practices was studied from the perspective of organization theory. The results show weaknesses in the communicative situation, which affect co-operation both between institutions and between individuals. Individual attitudes towards norms and their relative authority also vary, in the sense that revisers largely prioritize legal adequacy whereas translators give linguistic norms a higher value. Further, multi-professional teamwork in the institutions studied takes a form in which individuals and institutions perform specific tasks with only little contact with others. This shows that established definitions of teamwork, with people co-working in close contact with each other, cannot be applied directly to the workplace procedures in the translation process studied. Three new concepts are introduced: flerstegsrevidering (multi-stage revision), revideringskedja (revision chain) and normsyn (norm attitude). The study seeks to contribute to our knowledge of legal translation, translation processes, institutional translation, revision practices and translation norms for legal translation. Keywords: legal translation, translation of laws, institutional translation, revision, revision practices, norms, teamwork, organizational informal structure, translation process, translation sociology, multilingual.
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become common practice in dairy husbandry; in 2006, about 4,000 farms worldwide used over 6,000 milking robots, and there is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper spends monitoring animals often decreases, which has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals; these costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed; a probabilistic neural network (PNN) classifier was chosen for the task. The data was divided in two parts: 5,074 measurements from 37 cows were used to train the model, and its ability to detect lameness was evaluated on a validation dataset of 4,868 measurements from 36 cows. The model classified 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
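In the spirit of the PNN classifier described above, here is a minimal probabilistic neural network: one Gaussian kernel per training pattern, with class posteriors estimated from averaged kernel densities. The feature layout (four mean leg loads plus a kick count), the smoothing parameter and the toy data are assumptions, not the thesis' actual features or model settings.

```python
# Sketch: a probabilistic neural network (Parzen-window) classifier.
import numpy as np

def pnn_predict(X_train, y_train, X_new, sigma=10.0):
    """Classify rows of X_new by Gaussian kernel density estimates per class.

    sigma must be chosen to match the feature scale (here O(10-100) kg).
    """
    classes = np.unique(y_train)
    preds = []
    for x in X_new:
        kernel = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * sigma**2))
        # Average kernel activation per class = estimated class density
        densities = [kernel[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(densities))])
    return np.array(preds)

# Toy data: [mean load LF, RF, LR, RR (kg), kicks per milking]; 0 = sound, 1 = lame
rng = np.random.default_rng(2)
sound = rng.normal([120, 120, 110, 110, 1], 10.0, (40, 5))
lame = rng.normal([120, 120, 140, 70, 4], 10.0, (40, 5))  # unloads one hind leg
X = np.vstack([sound, lame])
y = np.repeat([0, 1], 40)
print(pnn_predict(X, y, np.array([[118, 122, 138, 72, 5]])))  # -> [1]
```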
Abstract:
Colorectal cancer is among the major cancers and one of the leading causes of cancer-related deaths in Western societies. Its occurrence is strongly affected by environmental factors such as diet. Thus, for preventative strategies it is vitally important to understand the mechanisms that stimulate adenoma growth and development towards accelerated malignancy or, in contrast, keep them quiescent for periods as long as decades. The main objective of this study was to investigate whether diet is able to modulate β-catenin signalling related to the promotion or prevention of intestinal tumourigenesis in an animal model of colon cancer, the Min/+ mouse. A series of dietary experiments with Min/+ mice were performed in which the fructo-oligosaccharide inulin was used for tumour promotion and four berries, bilberry (Vaccinium myrtillus), lingonberry (Vaccinium vitis-idaea), cloudberry (Rubus chamaemorus) and white currant (Ribes x pallidum), were used for tumour prevention. The adenomas (Apc-/-) and the surrounding normal-appearing mucosa (Apc+/-) were investigated separately due to their mutational and functional differences. Tumour-promotive and tumour-preventive diets had opposite effects on β-catenin signalling in the adenomas, which was related to their different effects on adenoma growth. The levels of nuclear β-catenin and cyclin D1, combined with the size of the adenomas in the treatment groups, suggest that the diets induced differences in the cancerous process. Adenomas progressing to malignant carcinomas are most likely found in the sub-groups with the highest levels of β-catenin, whereas adenomas staying quiescent for a long period of time are most probably found in the cloudberry or white currant diet groups. The levels of membranous E-cadherin and β-catenin increased as the adenomas in the inulin diet group grew, which could be a result of an overall increase in the protein levels of the cell. Therefore, the increasing levels of membranous β-catenin in Min/+ mouse adenomas would be undesirable, due to the simultaneous increase in oncogenic nuclear β-catenin. We propose that the decreased amount of membranous β-catenin in the benign adenomas of the berry groups also means a decrease in the nuclear pool of β-catenin. Tumour promotion, but not tumour prevention, influenced β-catenin signalling already in the normal-appearing mucosa: inulin-induced tumour promotion was related to β-catenin signalling in Min/+ mice, and changes were also visible in WT mice. The preventative effects of berries in the initiation phase were not mediated by β-catenin signalling. Our results suggest that, in addition to the number, size, and growth rate of adenomatous polyps, the signalling pattern of the adenomas should be considered when evaluating preventative dietary strategies.
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, due to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making processes based on interpretations and applications of terrain analysis, and may also have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, making analytical and simulation-based error propagation analyses and interpreting the results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, global characterisation of DEM error is a gross generalisation of reality, because the areas within which the assumption of stationarity is not violated are small in extent. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged, because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
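As a sketch of the simulation-based approach with process convolution mentioned above: a spatially autocorrelated DEM error realisation can be generated by convolving white noise with a Gaussian kernel, added to the DEM, and propagated to a surface derivative such as slope. The toy DEM, error standard deviation and correlation range below are assumptions for illustration, not the thesis' geostatistical error model.

```python
# Sketch: one Monte Carlo realisation of spatially correlated DEM error.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
dem = np.add.outer(np.linspace(0, 50, 200), np.linspace(0, 30, 200))  # toy ramp, m

sigma_z, corr_range_px = 1.5, 8.0   # assumed error std (m) and correlation range (pixels)
error = gaussian_filter(rng.normal(0.0, 1.0, dem.shape), corr_range_px)  # process convolution
error *= sigma_z / error.std()      # rescale to the target standard deviation

def slope_deg(z, cell=10.0):
    """Slope in degrees from a DEM with the given cell size (m)."""
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

d_slope = slope_deg(dem + error) - slope_deg(dem)
print(f"slope error std in this realisation: {d_slope.std():.3f} degrees")
```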
Abstract:
Wound healing is a complex process that requires an interplay between several cell types. Classically, fibroblasts have been viewed as producers of extracellular matrix, but more recently they have been recognized as orchestrators of the healing response, promoting and directing inflammation and neovascularization processes. Compared to those from healthy tissue, inflammation-associated fibroblasts display a dramatically altered phenotype and have been described as sentinel cells, able to switch to an immunoregulatory profile on cue; however, the activation mechanism remains largely uncharacterized. Nemosis is a model for stromal fibroblast activation. When normal human primary fibroblasts are deprived of growth support they cluster, forming multicellular spheroids. Clustering results in upregulation of proinflammatory markers such as cyclooxygenase-2 and secretion of prostaglandins, proteinases, cytokines, and growth factors. Fibroblasts in nemosis induce wound healing and tumorigenic responses in many cell types found in inflammatory and tumor microenvironments. This study investigated the effect of nemotic fibroblasts on two components of the vascular system, leukocytes and endothelium, and characterized the inflammation-promoting responses that arose in these cell types. Fibroblasts in nemosis were found to secrete an array of chemotactic cytokines and attract leukocytes, as well as promote their adhesion to the endothelium. Nuclear factor-κB, the master regulator of many inflammatory responses, is activated in nemotic fibroblasts. Nemotic fibroblasts are known to produce large amounts of hepatocyte growth factor, a motogenic and angiogenic factor, and, as shown in this study, they also produce vascular endothelial growth factor. These two factors induced migratory and sprouting responses in endothelial cells, both of which are required for neovascularization. Nemotic fibroblasts also caused a decrease in the expression of adherens and tight junction components on the surface of endothelial cells. The results support the conclusion that fibroblasts in nemosis share many similarities with inflammation-associated fibroblasts. Both inflammation and stromal fibroblasts are known to be involved in tumorigenesis and tumor progression. Nemosis may be viewed as a model for stromal fibroblast activation, or it may correlate with cell-cell interactions between adjacent fibroblasts in vivo. Nevertheless, due to the nemosis-derived production of proinflammatory cytokines and growth factors, fibroblast nemosis may have therapeutic potential as an inducer of controlled tissue repair. Knowledge of stromal fibroblast activation gained through studies of nemosis could provide new strategies to control unwanted inflammation and tumor progression.
Abstract:
Lignin is a hydrophobic polymer that is synthesised in the secondary cell walls of all vascular plants. It enables water conduction through the stem, supports the upright growth habit and protects against invading pathogens. In addition, lignin hinders the utilisation of the cellulosic cell walls of plants in the pulp and paper industry and as forage. Lignin precursors are synthesised in the cytoplasm through the phenylpropanoid pathway, transported into the cell wall and oxidised by peroxidases or laccases to phenoxy radicals that couple to form the lignin polymer. This study was conducted to characterise the lignin biosynthetic pathway in Norway spruce (Picea abies (L.) Karst.). We focused on the less well-known polymerisation stage, to identify the enzymes and the regulatory mechanisms that are involved. Available data on lignin biosynthesis in gymnosperms are scarce and, for example, the latest refinements of precursor biosynthesis have only been verified in herbaceous plants. Therefore, we also wanted to study in detail the roles of individual gene family members during developmental and stress-induced lignification, using EST sequencing and real-time RT-PCR. As a model, we used a Norway spruce tissue culture line that produces extracellular lignin into the culture medium, and showed that lignin polymerisation in the tissue culture depends on peroxidase activity. We identified in the culture medium a significant NADH oxidase activity that could generate H2O2 for the peroxidases. Two basic culture medium peroxidases were shown to have high affinity for coniferyl alcohol, and conservation of the putative substrate-binding amino acids was observed when the spruce peroxidase sequences were compared with other peroxidases with high affinity for coniferyl alcohol. We also used different peroxidase fractions to produce synthetic in vitro lignins from coniferyl alcohol; however, the linkage pattern of the suspension culture lignin could not be reproduced in vitro, either with the purified peroxidases or with the full complement of culture medium proteins. This emphasised the importance of the precursor radical concentration in the reaction zone, which the cells control through the secretion of both the lignin precursors and the oxidative enzymes into the apoplast. In addition, we identified basic peroxidases that were reversibly bound to the lignin precipitate. They could be involved, for example, in the oxidation of polymeric lignin, which is required for polymer growth. The dibenzodioxocin substructure was used as a marker for polymer oxidation in the in vitro polymerisation studies, as it is a typical substructure in wood lignin and in the suspension culture lignin. Using immunolocalisation, we found the structure mainly in the S2+S3 layers of the secondary cell walls of Norway spruce tracheids. The structure was primarily formed during the late phases of lignification and, contrary to earlier assumptions, it appears to be a terminal structure in the lignin macromolecule. Most lignin biosynthetic enzymes are encoded by several genes, all of which may not participate in lignin biosynthesis. In order to identify the gene family members responsible for developmental lignification, ESTs were sequenced from the lignin-forming tissue culture and from the developing xylem of spruce. Expression of the identified lignin biosynthetic genes was studied using real-time RT-PCR.
Candidate genes for developmental lignification were identified by the coordinated, high expression of certain genes within the gene families in all lignin-forming tissues. However, no such coordinated expression was found for the peroxidase genes. We also studied stress-induced lignification, either during compression wood formation induced by bending the stems or after Heterobasidion annosum infection. Based on the gene expression profiles, stress-induced monolignol biosynthesis appeared similar to the developmental process, and only single PAL and C3H genes were specifically up-regulated by stress. In contrast, the up-regulated peroxidase genes differed between developmental and stress-induced lignification, indicating specific responses.