16 results for process data


Relevance: 70.00%

Abstract:

In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of process data enables better control of the processes to obtain a high-quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, which revealed hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical, so the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects in either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
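
As a hedged illustration of the calibration step named above, the sketch below fits a partial least squares (PLS) regression mapping NIR spectra to coating thickness. The data, wavelength count and number of latent variables are invented for the sketch; the thesis's actual calibration set-up is not reproduced here.

```python
# Minimal PLS calibration sketch (hypothetical data, not the thesis's model):
# predict coating thickness from NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical calibration set: 50 NIR spectra (absorbance at 200 wavelengths),
# with reference coating thicknesses in micrometres.
X = rng.normal(size=(50, 200))
thickness = rng.uniform(10, 80, size=50)
X += 0.01 * thickness[:, None]            # embed a thickness-related signal

pls = PLSRegression(n_components=3)       # a few latent variables, as is typical
print(cross_val_score(pls, X, thickness, cv=5, scoring="r2"))

pls.fit(X, thickness)
new_spectrum = rng.normal(size=(1, 200)) + 0.01 * 40
print("predicted thickness:", pls.predict(new_spectrum).ravel())
```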

Relevance: 60.00%

Abstract:

When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journals on very incomplete information about how well the journals serve the authors' purposes of informing about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals, providing more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate and the service provided by the journal during the review and publication process. The method uses data directly obtainable from the web, data that can be calculated from such data, data obtained from publishers and editors, and data obtained through surveys of authors; it has been tested on three different sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (e.g. journal circulation, web downloads, acceptance rate). The calculation of some important parameters (for instance average time from submission to publication, or regional spread of authorship) is possible but requires quite a lot of work. It can be difficult to get reasonable response rates to surveys of authors. All in all, we believe that the method we propose, taking a "service to authors" perspective as a basis for benchmarking scientific journals, is useful and can provide information that is valuable to prospective authors in selected scientific disciplines.
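
As a small sketch of one such parameter, the snippet below computes the average submission-to-publication delay from per-article date pairs. The dates are hypothetical; in practice this information must be collected article by article from journal pages, which is the labour-intensive step noted above.

```python
# Hedged sketch: average submission-to-publication time from (hypothetical)
# per-article date pairs scraped from journal web pages.
from datetime import date
from statistics import mean

articles = [
    (date(2006, 1, 10), date(2006, 9, 2)),
    (date(2006, 3, 5),  date(2007, 1, 20)),
    (date(2006, 6, 18), date(2007, 2, 11)),
]

delays = [(published - submitted).days for submitted, published in articles]
print(f"mean submission-to-publication delay: {mean(delays):.0f} days")
```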

Relevance: 60.00%

Abstract:

The research question of this thesis was how knowledge can be managed with information systems. Information systems can support but not replace knowledge management. Systems can mainly store epistemic organisational knowledge embedded in content, and process data and information. Additional value can be achieved by adding communication technology to systems. Not all communication, however, can be managed. A new layer between communication and manageable information was named knowformation. Knowledge management literature was surveyed, together with conceptions of information from philosophy, physics, communication theory, and information systems science. Positivism, post-positivism, and critical theory were studied, but knowformation in extended organisational memory seemed to be socially constructed. A memory management model of an extended enterprise (M3.exe) and the knowformation concept were the findings from iterative case studies covering data, information and knowledge management systems. The cases ranged from groups to an extended organisation. Systems were investigated, and administrators, users (knowledge workers) and managers were interviewed. The model building required alternative definitions of data, information and knowledge instead of the traditional pyramid. The explicit-tacit dichotomy was also reconsidered. As human knowledge is the final aim of all data and information in the systems, the distinction between management of information and management of people was harmonised. Information systems were classified as the core of organisational memory. The content of the systems is in practice between communication and presentation. Firstly, the epistemic criterion of knowledge is required neither in the knowledge management literature nor of the content of the systems. Secondly, systems deal mostly with containers, and the knowledge management literature with applied knowledge. The construction of reality based on system content and communication also supports the knowformation concept. Knowformation belongs to the memory management model of an extended enterprise (M3.exe), which is divided into horizontal and vertical key dimensions. Vertically, processes deal with content that can be managed, whereas communication can be supported, mainly by infrastructure. Horizontally, the right-hand side of the model contains systems and the left-hand side content, which should be independent of each other. A strategy based on the model was defined.

Relevance: 30.00%

Abstract:

Fluid bed granulation is a key pharmaceutical process which improves many of the powder properties for tablet compression. The fluid bed granulation process includes dry mixing, wetting and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technology (PAT) encompasses physical process measurements and particle size data of a fluid bed granulator analysed in an integrated manner. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase the process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture. The aim was to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying. Monitoring the material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of a process and the end product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effects of inlet air humidity and granulation liquid feed on the temperature measurements at different locations of the fluid bed granulator system were determined. This revealed dynamic changes in the measurements and enabled finding the optimal sites for process control. The moisture originating from the granulation liquid and inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimise particle size. Various techniques for indicating the end-point of drying were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and resulted in the most precise estimation of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined. The feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches. A novel parameter that describes the behaviour of material in a fluid bed was developed. It was calculated from the flow rate of the process air and the turbine fan speed, and it was compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation were determined on the basis of the fluidisation parameters. With this design space it is possible to avoid excessive fluidisation as well as improper fluidisation and bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser. Both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to indicate particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
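
One plausible reading of the end-point logic is sketched below under explicit assumptions: that ∆T is the difference between inlet air and granule bed temperature, that evaporative cooling keeps the bed cool while water remains, and that the end-point is reached once ∆T falls below a threshold. The threshold and the temperature series are invented, and the thesis's thermodynamic derivation is not reproduced here.

```python
# Hedged sketch of a ∆T-style drying end-point check (assumed reading of the
# method; values are illustrative only).
def drying_endpoint(t_inlet, t_bed, threshold=3.0):
    """Return the index of the first sample where ∆T drops below threshold."""
    for i, (ti, tb) in enumerate(zip(t_inlet, t_bed)):
        if ti - tb < threshold:
            return i
    return None

t_inlet = [60.0] * 8                                        # °C, constant inlet air
t_bed = [32.0, 33.0, 35.0, 40.0, 48.0, 55.0, 58.0, 59.0]    # bed warms as water leaves
print("end-point at sample:", drying_endpoint(t_inlet, t_bed))
```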

Relevance: 30.00%

Abstract:

The thesis discusses the ways in which concepts and methodology developed in evolutionary biology can be applied to the explanation and study of language change. The parallel nature of the mechanisms of biological evolution and language change is explored, along with the history of the exchange of ideas between these two disciplines. Against this background, computational methods developed in evolutionary biology are considered in terms of their applicability to the study of historical relationships between languages. Different phylogenetic methods are explained in common terminology, avoiding the technical language of statistics. The thesis is on the one hand a synthesis of earlier scientific discussion and on the other an attempt to map out the problems of earlier approaches and, on that basis, to find new guidelines for the study of language change. The source material consists primarily of literature on the connections between evolutionary biology and language change, along with research articles describing applications of phylogenetic methods to language change. The thesis starts out by describing the initial development of the disciplines of evolutionary biology and historical linguistics, a process which right from the beginning can be seen to have involved an exchange of ideas concerning the mechanisms of language change and biological evolution. The historical discussion lays the foundation for the treatment of the generalised account of selection developed during recent decades. This account aims at creating a theoretical framework capable of explaining both biological evolution and cultural change as selection processes acting on self-replicating entities. This thesis focuses on the capacity of the generalised account of selection to describe language change as a process of this kind. In biology, the mechanisms of evolution are seen to form populations of genetically related organisms through time. One of the central questions explored in this thesis is whether selection theory makes it possible to picture languages as forming populations of a similar kind, and what such a perspective can offer to the understanding of language in general. In historical linguistics, the comparative method and other, complementary methods have traditionally been used to study the development of languages from a common ancestral language. Computational, quantitative methods have not become part of the central methodology of historical linguistics. After the limited popularity enjoyed by the lexicostatistical method since the 1950s faded, it is only in recent years that the computational methods of phylogenetic inference used in evolutionary biology have been applied to the study of early language history. In this thesis the possibilities offered by the traditional methodology of historical linguistics and by the new phylogenetic methods are compared. The methods are approached through the ways in which they have been applied to the Indo-European languages, the language family most thoroughly investigated with both the traditional and the phylogenetic methods. The problems of these applications, along with the optimal form of the linguistic data used in these methods, are explored in the thesis. The mechanisms of biological evolution are seen in the thesis as parallel, in a limited sense, to the mechanisms of language change, yet sufficiently so that the development of a generalised account of selection is deemed potentially fruitful for understanding language change. These similarities are also seen to support the validity of using phylogenetic methods in the study of language history, although the use of linguistic data and the models of language change employed by these methods are seen to await further development.
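
A toy sketch of the distance-based end of language phylogenetics is given below: languages encoded as cognate-class vectors, pairwise Hamming distances, and a tree from average-linkage clustering. The data are invented, and published studies typically use character-based Bayesian inference rather than this simple approach.

```python
# Toy sketch: lexical distances from cognate classes, then a crude tree.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

languages = ["English", "German", "Dutch", "French", "Spanish"]
# One column per meaning; equal integers mean the words are cognate.
cognates = np.array([
    [1, 1, 2, 3],   # English
    [1, 1, 2, 3],   # German
    [1, 1, 2, 3],   # Dutch
    [2, 2, 1, 1],   # French
    [2, 2, 1, 1],   # Spanish
])

dist = pdist(cognates, metric="hamming")   # share of non-cognate meanings
tree = linkage(dist, method="average")     # UPGMA-style clustering
print(languages)                           # row order of the distance matrix
print(tree)                                # merge history of the clusters
```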

Relevance: 30.00%

Abstract:

The dissertation examines Roman provincial administration and the phenomenon of territorial reorganisation of provinces during the Imperial period, with special emphasis on the provinces of Arabia and Palaestina during the Later Roman period, i.e. from Diocletian (r. 284-305) to the accession of Phocas (602), in the light of imperial decision-making. Provinces were the basic unit of Roman rule, for centuries the only level of administration that existed between the emperor and the cities of the Empire. The significance of the territorial reorganisations that the provinces were subjected to during the Imperial period is thus of special interest. The approach to the phenomenon is threefold: firstly, attention is paid to the nature and constraints of the Roman system of provincial administration; secondly, the phenomenon of territorial reorganisations is analysed on the macro scale; and thirdly, a case study concerning the reorganisations of the provinces of Arabia and Palaestina is conducted. The study of the mechanisms of decision-making provides a foundation through which the collected data on all known major territorial reorganisations is interpreted. The data concerning reorganisations is also subjected to qualitative comparative analysis, which provides a new perspective on the data in the form of statistical analysis that is sensitive to the complexities of individual cases. This analysis of imperial decision-making is based on a timeframe stretching from Augustus (r. 30 BC-AD 14) to the accession of Phocas (602). The study identifies five distinct phases in the use of territorial reorganisations of the provinces. From Diocletian's reign there is a clear normative change that made territorial reorganisations a regular administrative tool with which the decision-making elite addressed a wide variety of qualitatively different concerns. From the beginning of the fifth century the use of territorial reorganisations diminishes rapidly. The two primary reasons for the decline were the solidification of ecclesiastical power and interests connected to the extent of provinces, and the decline of the dioceses. The case study of Palaestina and Arabia identifies seven different territorial reorganisations from Diocletian to Phocas. Their existence not only testifies to wider imperial policies but also shows sensitivity to local conditions, and it corresponds with the general picture of provincial reorganisations. The territorial reorganisations of the provinces reflect the proactive control of the Roman decision-making elite. The importance of reorganisations should be recognised more clearly as part of the normal imperial administration of the provinces, and especially as reflecting the functioning of the dioceses.

Relevance: 30.00%

Abstract:

The aim of this thesis was to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in 2006 about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method and its results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data were divided into two parts: 5,074 measurements from 37 cows were used to train the model, and its ability to detect lameness was evaluated on a validation dataset of 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
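
A rough, hedged sketch of how a probabilistic neural network classifier of this kind works is given below: one Gaussian kernel density estimate per class over the training measurements, with a new measurement assigned to the class of highest estimated density. The feature layout (four leg loads plus a kick count), the smoothing parameter and the data are assumptions for illustration, not the thesis's configuration.

```python
# Minimal PNN (Parzen-window classifier) sketch on hypothetical leg-load data.
import numpy as np

def pnn_predict(X_train, y_train, X_new, sigma=1.0):
    """Classify each row of X_new by per-class Gaussian kernel densities."""
    classes = np.unique(y_train)
    preds = []
    for x in X_new:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
        preds.append(classes[np.argmax(scores)])
    return np.array(preds)

rng = np.random.default_rng(1)
# Columns: four leg loads (kg) and kicks per milking; 0 = sound, 1 = lame.
sound = np.c_[rng.normal(150, 10, (40, 4)), rng.poisson(1, 40)]
lame = np.c_[rng.normal(150, 10, (10, 4)), rng.poisson(4, 10)]
lame[:, 0] -= 60                     # a lame cow unloads the painful leg
X = np.vstack([sound, lame])
y = np.r_[np.zeros(40), np.ones(10)]
print(pnn_predict(X, y, X[[0, 45]], sigma=20.0))   # expect [0., 1.]
```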

Relevance: 30.00%

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. Compared with these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GH-GARCH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
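
To make the multiplicative error structure concrete, the sketch below simulates a plain CARR(1,1) process: the observed range is the product of a conditional mean range and a positive unit-mean error. The exponential error and the parameter values are stand-ins for illustration; the thesis works with inverse gamma errors and mixture extensions rather than this baseline.

```python
# CARR(1,1) simulation sketch: R_t = lambda_t * eps_t with E[eps_t] = 1 and
# lambda_t = omega + alpha * R_{t-1} + beta * lambda_{t-1}.
import numpy as np

rng = np.random.default_rng(2)
omega, alpha, beta = 0.05, 0.15, 0.80      # alpha + beta < 1 for stationarity
n = 1000
lam = np.empty(n)
R = np.empty(n)
lam[0] = omega / (1 - alpha - beta)        # unconditional mean range
R[0] = lam[0]
for t in range(1, n):
    lam[t] = omega + alpha * R[t - 1] + beta * lam[t - 1]
    R[t] = lam[t] * rng.exponential(1.0)   # unit-mean positive error
print("sample mean range:", R.mean(),
      "theoretical:", omega / (1 - alpha - beta))
```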

Relevance: 30.00%

Abstract:

Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices all around the network. The data are monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression for log data summarisation and Queryable Log Compression for semantic compression of log data. Finally, I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.
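
A hedged sketch of summarising log data with frequent sets, in the spirit of the log compression idea above: find field-value combinations that occur in many log entries, report each frequent pattern once, and keep only the fields that deviate from it. The log entries, fields and support threshold are made up; this naive pass is only meant to illustrate the principle, not the thesis's algorithms.

```python
# Naive frequent-set log summarisation sketch (hypothetical alarm log).
from collections import Counter
from itertools import combinations

logs = [
    {"host": "bsc01", "severity": "minor", "alarm": "link_degraded"},
    {"host": "bsc01", "severity": "minor", "alarm": "link_degraded"},
    {"host": "bsc01", "severity": "minor", "alarm": "link_degraded"},
    {"host": "bsc02", "severity": "major", "alarm": "link_down"},
]

# Count every combination of field-value pairs (a brute-force frequent-set pass).
counts = Counter()
for entry in logs:
    items = sorted(entry.items())
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            counts[combo] += 1

min_support = 3
frequent = [set(c) for c, n in counts.items() if n >= min_support]
pattern = max(frequent, key=len)           # most specific frequent pattern
print("pattern:", dict(pattern))
for entry in logs:                         # entries reported as deviations
    rest = set(entry.items()) - pattern
    print("covered" if not rest else f"exceptions: {dict(rest)}")
```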

Relevance: 30.00%

Abstract:

This thesis studies the human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurement of the expression of tens of thousands of genes simultaneously. In a single study, these data are traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, such data have been largely unavailable, and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a large new ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text mining and decision tree based method for the automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability, and the minimisation of the systematic measurement errors that are characteristic of each laboratory in this large cross-laboratory integrated dataset, were ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology. A preface and motivation for the construction and analysis of a global map of human gene expression are given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
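
A generic sketch of the exploratory step described above: principal component analysis followed by hierarchical clustering of a samples-by-genes expression matrix. The matrix here is random with two crude groups built in; the thesis applies this kind of analysis to thousands of annotated public microarray samples.

```python
# PCA + hierarchical clustering sketch on a synthetic expression matrix.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# 60 samples x 500 genes, with two crude "tissue" groups built in.
expr = rng.normal(size=(60, 500))
expr[30:, :50] += 2.0                      # group-specific expression shift

pcs = PCA(n_components=10).fit_transform(expr)   # reduce before clustering
tree = linkage(pcs, method="ward")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(clusters)                            # recovers the two built-in groups
```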

Relevance: 30.00%

Abstract:

Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam. The high velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics, and its performance and some of the 14C measurements made with it are presented. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process. This enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements. In particular, the results can be improved for samples with very low 14C concentrations or samples measured only a few times.
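
A toy sketch of the drift idea is given below: instrumental efficiency follows a first-order autoregressive process, standards measured at different times pin it down, and an unknown is normalised against the interpolated drift. This is a simple frequentist stand-in for intuition only; the thesis embeds the idea in a full Bayesian model with Poisson counting statistics, and all values here are invented.

```python
# AR(1) instrumental drift and normalization to standards (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
times = np.arange(20, dtype=float)
drift = np.empty(20)
drift[0] = 1.0
for t in range(1, 20):                    # AR(1) drift around efficiency 1.0
    drift[t] = 1.0 + 0.8 * (drift[t - 1] - 1.0) + rng.normal(0, 0.02)

true_conc = 0.5                           # unknown sample, in units of the standard
standard_idx = np.array([0, 5, 10, 15, 19])
measured_std = drift[standard_idx] * 1.0  # standards have known concentration 1
measured_unk = drift[7] * true_conc       # unknown measured at time 7

# Normalise the unknown against drift interpolated from nearby standards.
drift_at_7 = np.interp(7.0, times[standard_idx], measured_std)
print("estimated concentration:", measured_unk / drift_at_7)
```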

Relevance: 30.00%

Abstract:

The industry foundation classes (IFC) file format is one of the most complex and ambitious IT standardization projects currently being undertaken in any industry, focusing on the development of an open and neutral standard for exchanging building model data. Scientific literature related to the IFC standard has so far been predominantly technical; research looking at the IFC standard from an industry standardization perspective could offer valuable new knowledge for both theory and practice. This paper proposes the use of IT standardization and IT adoption theories, supported by studies done within construction IT, to lay a theoretical foundation for further empirical analysis of the standardization process of the IFC file format.

Relevance: 30.00%

Abstract:

The major changes witnessed in today's workplaces are challenging the mental well-being of employed people. Stress and burnout are considered modern epidemics, and their importance to physical health and work ability has been acknowledged worldwide. The aim of the thesis was to study the concept of burnout as a process proceeding from its antecedents, through the development of the syndrome, to its outcomes. Several work-related factors considered antecedents of burnout were studied in different occupational groups. The syndrome of burnout is seen as consisting of three dimensions (exhaustion, cynicism and lack of professional efficacy), and different alternatives for the sequential development of these dimensions were tested. Furthermore, several indicators of the severely detrimental health and work ability outcomes of burnout were investigated in a longitudinal study design. The research questions were as follows. 1) Is burnout, as measured with the Maslach Burnout Inventory - General Survey (MBI-GS), a three-dimensional construct, and how invariant is the factorial structure across occupations (Finnish) and national samples (Finnish, Swedish and Dutch)? How persistent is exhaustion over time? 2) What is the sequential process of burnout? Is it similar across occupations? How do work stressors relate to the process? 3) How does burnout relate to severe health consequences as well as to temporary and chronic work disability, as indicated by hospitalization periods, sick-leave episodes and disability pensions? The data were collected between 1986 and 2005. The population of the study consisted of respondents to a company-wide questionnaire survey carried out in 1996-1997 (N=9705, response rate 63%). The participants comprised 6025 blue-collar workers and 3680 white-collar workers. The majority were men (N=7494), and the average age was 43.7 years. In addition, a sample from the population had responded to a questionnaire survey in 1988, which was combined with the 1996 data to form panel data on 713 respondents. The register-based data were collected between 1986 and 2005 from 1) the company's occupational health services' records for a sample of respondents to the 1996 questionnaire survey (sick-leave data), 2) hospitalization records from the Hospital Discharge Register, and 3) disability pension records from the Finnish Centre for Pensions. These data were combined person by person with the 1996 questionnaire survey data with the help of personal identification numbers, which the researchers had saved together with the study numbers. The results showed that burnout consists of three separate but correlated symptoms: exhaustion, cynicism and lack of professional efficacy. As a syndrome, burnout was strongly related to job stressors at work, and it seemed to develop from exhaustion through cynicism to lack of professional efficacy in a similar manner among white-collar and blue-collar employees. The results also showed that exhaustion persisted even after eight years of follow-up but did not predict cynicism or lack of professional efficacy after that amount of time; nor were job stressors longitudinally related to burnout. Longitudinal results were obtained for the severe health-related consequences of burnout. The investigated outcomes represented different phases of health deterioration, ranging from sick-leaves and hospitalization periods to work disability pensions. The results showed that the burnout syndrome, and its elements of exhaustion and cynicism, were related to future mental and cardiovascular disorders as indicated by hospitalization periods. Burnout was also related to future sick-leave periods due to mental, cardiovascular and musculoskeletal disorders. Of the separate elements, exhaustion was related to the same three categories of disorder; cynicism to mental, musculoskeletal and digestive disorders; and lack of professional efficacy to mental and musculoskeletal disorders. Burnout also predicted the receipt of disability pensions due to mental and musculoskeletal disorders among initially healthy subjects. Exhaustion was related to receiving a disability pension even when self-reported chronic illness was taken into account. The results suggest that burnout is a multidimensional, chronic, work-related syndrome, which may have serious consequences for health and work ability.

Relevance: 30.00%

Abstract:

The accompanying collective research report is the result of the 1986-90 research project between the Finnish Academy and the former Soviet Academy of Sciences. The project was organized around common field work in Finland and in the former Soviet Union and theoretical analyses of the processes determining tree growth. Based on the theoretical analyses, dynamic stand growth models were made and their parameters were determined using the field results. The annual cycle affects tree growth. Our theoretical approach was based on adaptation to local climate conditions from Lapland to South Russia. The initiation of growth was described with a simple model driven by low- and high-temperature accumulation. Linking the theoretical model with long-term temperature data allowed us to analyze what type of temperature response produced a favorable outcome in different climates. Initiation of growth consumes the carbohydrate reserves in plants. We measured the dynamics of insoluble and soluble sugars in far northern and Karelian conditions. A clear cyclical pattern was observed, but the differences between locations were surprisingly small. Analysis of field measurements of CO2 exchange showed that irradiance is the dominant factor causing variation in photosynthetic rate in natural conditions during summer. The effect of other factors is so small that they can be omitted without any considerable loss of accuracy. A special experiment carried out in Hyytiälä showed that the needle living space, defined as the ratio between the shoot cylindrical volume and the needle surface area, correlates with shoot photosynthesis. The penetration of irradiance into a Scots pine canopy is a complicated phenomenon because of the movement of the sun across the sky and the complicated structure of branches and needles. A moderately simple but balanced forest radiation regime submodel was constructed. It consists of the tree crown and forest structure, the gap probability calculation, and the consideration of spatial and temporal variation of radiation inside the forest. The common field excursions in different geographical regions yielded a large amount of experimental data on regularities of woody structures. Water transport seems to be a good common factor for analysing these properties of tree structure. There are evident regressions between cross-sectional areas measured at different locations along the water pathway from fine roots to needles. The observed regressions have clear geographical trends. For example, the same cross-sectional area can support a three times higher needle mass in South Russia than in Lapland. Geographical trends can also be seen in shoot and needle structure. Analysis of data published by several Russian authors shows that one ton of needles transpires 42 tons of water a year. This annual amount of transpiration seems to be independent of geographical location, year and site conditions. The theoretical and experimental material produced is utilised in the development of a stand growth model that describes the growth and development of Scots pine stands in Finland and the former Soviet Union. The core of the model consists of carbon and nutrient balances: carbon obtained in photosynthesis is consumed for growth and maintenance, and nutrients are taken up according to metabolic needs. The annual photosynthetic production by trees in the stand is determined as a function of irradiance and shading during the active period. The allocation of the annual photosynthetic production to the growth of different components of trees is based on structural regularities. Since the fundamental metabolic processes are the same in all locations, the same growth model structure can be applied across the large range of Scots pine. The annual photosynthetic production and the structural regularities determining the allocation of resources have geographical features. The common field measurements enable the application of the model to the analysis of the growth and development of stands growing at the five experimental locations, and the model thus enables the analysis of geographical differences in the growth of Scots pine. For example, the annual photosynthetic production of a 100-year-old stand at Voronez is 3.5 times higher than in Lapland. The share consumed for needle growth (30%) and for the growth of branches (5%) seems to be the same in all locations. In contrast, the share of fine roots decreases when moving from north to south: it is 20% in Lapland, 15% in Hyytiälä (Central Finland) and Kentjärvi (Karelia), and 15% in Voronez (South Russia). The stem masses (113-115 ton/ha) are rather similar in Hyytiälä, Kentjärvi and Voronez, but rather low (50 ton/ha) in Lapland. In Voronez the trees reach a height of 29 m, compared with 22 m in Hyytiälä and Kentjärvi and only 14 m in Lapland. The present approach enables the utilization of structural and functional knowledge, gained in places of intensive research, in the analysis of the growth and development of any stand. This opens new possibilities for growth research and also for applications in forestry practice.
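
A minimal sketch of the temperature-accumulation idea behind the growth initiation model: daily mean temperatures above a base temperature are accumulated, and growth starts when the sum crosses a critical value. The base temperature, threshold and spring series below are illustrative assumptions; the report fits such responses to long-term temperature records from Lapland to South Russia.

```python
# Degree-day growth initiation sketch (illustrative parameters).
def growth_start_day(daily_temps, base=5.0, critical_sum=100.0):
    """Return the first day (0-based) on which the degree-day sum is reached."""
    total = 0.0
    for day, temp in enumerate(daily_temps):
        total += max(0.0, temp - base)
        if total >= critical_sum:
            return day
    return None

# Hypothetical spring: temperatures rise from 0 to 15 °C over 60 days.
spring = [0.25 * d for d in range(61)]
print("growth starts on day:", growth_start_day(spring))
```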

Relevance: 30.00%

Abstract:

Although previous research has recognised adaptation as a central aspect of relationships, the adaptation of the sales process to the buying process has not been studied. Furthermore, the linking of relationship orientation as a mindset with adaptation as a strategy and a means has not been elaborated upon in previous research. Adaptation in the context of relationships has mostly been studied in relationship marketing. In sales and sales management research, adaptation has been studied with reference to personal selling. This study focuses on adaptation of the sales process to strategically match it to the buyer's mindset and buying process. The purpose of this study is to develop a framework for the strategic adaptation of the seller's sales process to match the buyer's buying process in a business-to-business context, in order to make sales processes more relationship-oriented. In order to arrive at a holistic view of adaptation of the sales process during relationship initiation, both the seller and the buyer are included in an extensive case analysed in the study. However, the selected perspective is primarily that of the seller, and the level focused on is that of the sales process. The epistemological perspective adopted is constructivism. The study is a qualitative one applying a retrospective case study, where the main sources of information are in-depth semi-structured interviews with key informants representing the counterparts at the seller and the buyer in the software development and telecommunications industries. The main theoretical contributions of this research involve targeting a new area at the crossroads of relationship marketing, sales and sales management, and buying and purchasing by studying adaptation in a business-to-business context from a new perspective. Primarily, this study contributes to research in sales and sales management with reference to relationship orientation and strategic sales process adaptation. This research fills three research gaps: firstly, it links the relationship orientation mindset with adaptation as a strategy; secondly, it extends adaptation in sales from adaptation in selling to strategic adaptation of the sales process; and thirdly, it extends adaptation to include the facilitation of adaptation. The approach applied in the study, systematic combining, is characterised by continuously moving back and forth between theory and empirical data. The framework that emerges, in which the linking of mindset with strategy and means forms a central aspect, includes three layers: purchasing portfolio, seller-buyer relationship orientation, and strategic sales process adaptation. Linking the three layers enables an analysis of where sales process adaptation can make a contribution. Furthermore, implications for managerial use are demonstrated, for example how sellers can avoid the 'trap' of ad hoc adaptation. This includes involving the company, embracing the buyer's purchasing portfolio, understanding the current position that the seller has in this portfolio, and possibly educating the buyer about the advantages of adopting a relationship-oriented approach.