11 results for Business Value Two-Layer Model

in Helda - Digital Repository of the University of Helsinki


Relevance: 100.00%

Abstract:

ERP system implementations have evolved so rapidly that they now represent a must-have within industries. ERP systems are viewed as the cost of doing business. Yet, research adopting the resource-based view on the business value of ERP systems concludes that companies may gain competitive advantage when they successfully manage their ERP projects, carefully reengineer the organization, and use the system in line with their organizational strategies. This thesis contributes to the literature on ERP business value by examining key drivers of ERP business value in organizations. The first research paper investigates how ERP systems with different degrees of system functionality are correlated with the development of business performance after completion of the ERP projects. Companies with better perceived system functionality obtained efficiency benefits in the first two years of post-implementation. However, in the third year there is no significant difference in efficiency benefits between successfully and less successfully managed ERP projects. The second research paper examines what business process changes occur in companies implementing ERP for different motivations, and how these changes impact business performance. The findings show that companies reported process changes mainly in terms of workflow changes. In addition, companies with a business-led motivation focused more on the average costs of each additional input unit, whereas companies with a technology-led motivation focused more on the benefits coming from the fit of the system with the organizational processes. The third research paper considers the role of alignment between ERP and business strategies for the realization of business value from ERP use. The findings show that strategic alignment and business process changes are significantly correlated with the perceived benefits of ERP at three levels: internal efficiency, customers and finances. Overall, by combining quantitative and qualitative research methods, this thesis puts forward a model that illustrates how successfully managed ERP projects, aligned with the business strategy, have automating and informating effects on processes that ultimately improve customer service and reduce the companies' costs.

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis, we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex, such as simple and complex cells. We show that by representing images with phase-invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally, we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
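The two-layer image model summarized above pairs linear, simple-cell-like filters with a second layer that pools their squared outputs into phase-invariant, complex-cell-like units. A minimal sketch of that feed-forward computation (with random placeholder weights; in the thesis both layers are estimated from natural image data):

```python
import numpy as np

def complex_cell_responses(patches, W1, W2):
    """Two-layer energy model: layer 1 applies linear (simple-cell-like)
    filters; layer 2 pools squared outputs with non-negative weights,
    giving phase-invariant (complex-cell-like) responses.
    patches: (n, d); W1: (k, d); W2: (m, k), W2 >= 0."""
    s = patches @ W1.T          # simple-cell outputs
    return (s ** 2) @ W2.T      # pooled energies

rng = np.random.default_rng(0)
patches = rng.standard_normal((100, 64))       # toy 8x8 "image patches"
W1 = rng.standard_normal((16, 64))             # placeholder filters
W2 = np.abs(rng.standard_normal((4, 16)))      # placeholder pooling weights

E = complex_cell_responses(patches, W1, W2)
print(E.shape)  # (100, 4)
```

Because the pooling weights are non-negative and act on squared filter outputs, every response is non-negative and invariant to a sign flip (phase reversal) of the input patch.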

Abstract:

To a large extent, lakes can be described with a one-dimensional approach, as their main features can be characterized by the vertical temperature profile of the water. The development of the profiles during the year follows the seasonal climate variations. Depending on conditions, lakes become stratified during the warm summer. After cooling, overturn occurs, the water cools further and an ice cover forms. Typically, the water is inversely stratified under the ice, and another overturn occurs in spring after the ice has melted. Features of this circulation have been used in studies to distinguish between lakes in different areas, as a basis for observation systems and even as climate indicators. Numerical models can be used to calculate the temperature in the lake on the basis of the meteorological input at the surface. The simplest form is to solve for the surface temperature. The depth of the lake affects heat transfer, together with other morphological features, the shape and size of the lake. The surrounding landscape also affects the formation of the meteorological fields over the lake and the energy input. For small lakes, shading by the shores has effects both over the lake and inside the water body, which limits the one-dimensional approach. A two-layer model gives an approximation of the basic stratification in the lake. A turbulence model can simulate the vertical temperature profile in a more detailed way. If the shape of the temperature profile is very abrupt, vertical transfer is hindered, with many important consequences for lake biology. The one-dimensional modelling approach was studied by comparing a one-layer model, a two-layer model and a turbulence model. The turbulence model was applied to lakes with different sizes, shapes and locations. Lake models need data from the lakes for model adjustment. The use of meteorological input data on different scales was analysed, ranging from momentary turbulent changes over the lake to synoptic data at three-hour intervals. Data from about the past 100 years were used on the mesoscale, at a range of about 100 km, and climate-change scenarios were used for future changes. Increasing air temperature typically increases the water temperature in the epilimnion and decreases ice cover. Lake ice data were used for modelling different kinds of lakes; they were also analyzed statistically in a global context. The results were also compared with the results of a hydrological watershed model and with data from very small lakes on seasonal development.
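A two-layer model of the kind discussed above can be sketched as an epilimnion heated at the surface and exchanging heat with a hypolimnion below. The exchange velocity, layer depths and surface forcing in this sketch are illustrative assumptions, not values from the study:

```python
RHO_CP = 4.18e6  # volumetric heat capacity of water, J m^-3 K^-1

def step(t_epi, t_hypo, q_net, k_exch, h_epi, h_hypo, dt):
    """One explicit Euler step of a two-layer heat balance.
    q_net: net surface heat flux into the epilimnion (W m^-2);
    k_exch: interlayer exchange velocity (m s^-1, assumed);
    h_epi, h_hypo: layer thicknesses (m)."""
    exchange = k_exch * (t_epi - t_hypo)              # K m s^-1
    dt_epi = (q_net / RHO_CP - exchange) / h_epi
    dt_hypo = exchange / h_hypo
    return t_epi + dt * dt_epi, t_hypo + dt * dt_hypo

# 24 hourly steps of steady surface heating of a stratified lake.
te, th = 15.0, 8.0
for _ in range(24):
    te, th = step(te, th, q_net=200.0, k_exch=1e-6,
                  h_epi=3.0, h_hypo=7.0, dt=3600.0)
print(round(te, 2), round(th, 2))
```

With positive surface heating the epilimnion warms noticeably while the hypolimnion warms only through the weak interlayer exchange, so stratification is maintained.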

Abstract:

The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even though the forest characteristics remain the same. This variation has so far been diminished by different radiometric corrections. In this study, the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered as an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When examining the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs, the clearest differences were found between the front- and back-lighted image halves. Within an image half, the differences between blocks were clearly larger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands. The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in the regression models. Five different trend surfaces based on the converted image co-ordinates were used in the models in order to diminish the effect of the bidirectional reflectance. The reference model of total volume, in which the location of the image point had been ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using either the front-lighted image features, the back-lighted image features or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation. The best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result gives motivation for further research on multi-image interpretation. The focus may be on improving regression estimation and feature selection, or on examination of the stratification used in two-phase sampling inventory techniques. Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image co-ordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
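The trend-surface idea can be illustrated with ordinary least squares on a complete third-order polynomial in the converted image co-ordinates; the data below are synthetic, not the Leivonmäki material:

```python
import numpy as np

def trend_surface_design(x, y, order=3):
    """Complete polynomial surface: all terms x**i * y**j with i + j <= order."""
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)     # converted image co-ordinates (synthetic)
y = rng.uniform(-1, 1, 200)
# Synthetic image feature with a location-dependent trend plus noise.
feature = 2.0 + 1.5 * x - 0.8 * y + 0.6 * x * y + 0.1 * rng.standard_normal(200)

A = trend_surface_design(x, y)                      # 10 basis columns for order 3
beta, *_ = np.linalg.lstsq(A, feature, rcond=None)  # least-squares fit
detrended = feature - A @ beta                      # feature with trend removed
print(detrended.std())
```

After subtracting the fitted surface, only the noise component remains, which is the intended effect of the correction: features from different image blocks become comparable.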

Abstract:

This research has been prompted by an interest in the atmospheric processes of hydrogen. The sources and sinks of hydrogen are important to know, particularly if hydrogen becomes more common as a replacement for fossil fuel in combustion. Hydrogen deposition velocities (vd) were estimated by applying chamber measurements, a radon tracer method and a two-dimensional model. These three approaches were compared with each other to discover the factors affecting the soil uptake rate. A static closed-chamber technique was introduced to determine the hydrogen deposition velocity values in an urban park in Helsinki and at a rural site at Loppi. A three-day chamber campaign for soil uptake estimation was held at a remote site at Pallas in 2007 and 2008. The atmospheric mixing ratio of molecular hydrogen has also been measured by a continuous method in Helsinki in 2007-2008 and at Pallas from 2006 onwards. The mean vd values measured in the chamber experiments in Helsinki and Loppi were between 0.0 and 0.7 mm s-1. The ranges of the results with the radon tracer method and the two-dimensional model in Helsinki were 0.13-0.93 mm s-1 and 0.12-0.61 mm s-1, respectively. The vd values in the three-day campaign at Pallas were 0.06-0.52 mm s-1 (chamber) and 0.18-0.52 mm s-1 (radon tracer method and two-dimensional model). At Kumpula, the radon tracer method and the chamber measurements produced higher vd values than the two-dimensional model. The results of all three methods were close to each other between November and April, except for the chamber results from January to March, while the soil was frozen. The hydrogen deposition velocity values of all three methods were compared with one-week cumulative rain sums. Precipitation increases the soil moisture, which decreases the soil uptake rate. The measurements made in snow seasons showed that a thick snow layer also hindered gas diffusion, lowering the vd values. The H2 vd values were compared with the snow depth, yielding a decaying exponential fit. During a prolonged drought in summer 2006, soil moisture values were lower than in the other summer months between 2005 and 2008, and under these dry conditions high chamber vd values were measured. The mixing ratio of molecular hydrogen has a seasonal variation. The lowest atmospheric mixing ratios were found in late autumn, when high deposition velocity values were still being measured. The carbon monoxide (CO) mixing ratio was also measured. Hydrogen and carbon monoxide are highly correlated in an urban environment, due to the emissions originating from traffic. After correction for the soil deposition of H2, the slope was 0.49±0.07 ppb (H2) / ppb (CO). Using the corrected hydrogen-to-carbon-monoxide ratio, the total hydrogen load emitted by Helsinki traffic in 2007 was 261 t (H2) a-1. Hydrogen, methane and carbon monoxide are connected with each other through the atmospheric methane oxidation process, in which formaldehyde is produced as an important intermediate. The photochemical degradation of formaldehyde produces hydrogen and carbon monoxide as end products. Examination of back-trajectories revealed long-range transport of carbon monoxide and methane. The trajectories can be grouped by applying cluster and source analysis methods; thus natural and anthropogenic emission sources can be separated by analyzing trajectory clusters.
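In a static closed chamber over a soil sink, the mixing ratio decays as C(t) = C0 * exp(-vd * (A/V) * t), so vd can be recovered from a log-linear fit of the concentration time series. The chamber geometry and mixing ratios below are made-up illustration values, not measurements from the study:

```python
import numpy as np

A_over_V = 5.0          # chamber area-to-volume ratio, m^-1 (assumed geometry)
true_vd = 0.4e-3        # deposition velocity used to generate the data, m s^-1

t = np.array([0.0, 300.0, 600.0, 900.0, 1200.0])      # sampling times, s
C = 530.0 * np.exp(-true_vd * A_over_V * t)           # synthetic H2 decay, ppb

# Log-linear fit: the slope of ln(C) versus t equals -vd * A/V.
slope = np.polyfit(t, np.log(C), 1)[0]
vd = -slope / A_over_V          # m s^-1
print(vd * 1000)                # recovered vd in mm s^-1
```

The recovered value matches the one used to generate the synthetic decay, which is the core of the chamber-based estimation.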

Abstract:

A diffusion/replacement model for new consumer durables, designed to be used as a long-term forecasting tool, is developed. The model simulates new demand as well as replacement demand over time. The model, called DEMSIM, is built upon a counteractive adoption model specifying the basic forces affecting the adoption behaviour of individual consumers: the promoting forces and the resisting forces. The promoting forces are further divided into internal and external influences. These influences are operationalized within a multi-segmental diffusion model generating the adoption behaviour of the consumers in each segment as an expected value. This diffusion model is combined with a replacement model built upon the same segmental structure as the diffusion model; it generates, in turn, the expected replacement behaviour in each segment. To be able to use DEMSIM as a forecasting tool in the early stages of a diffusion process, estimates of the model parameters are needed as soon as possible after product launch. However, traditional statistical techniques are not very helpful in estimating such parameters in the early stages of a diffusion process. To enable early parameter calibration, an optimization algorithm is developed by which the main parameters of the diffusion model can be estimated on the basis of very few sales observations. The optimization is carried out in iterative simulation runs. Empirical validations using the optimization algorithm reveal that the diffusion model performs well in early long-term sales forecasts, especially when it comes to the timing of future sales peaks.
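DEMSIM itself is multi-segmental and counteractive, but the core mechanics it builds on (external and internal influences driving first purchases, plus replacement demand after a service life) can be sketched with a simple Bass-style simulation; all parameters here are illustrative, not from the thesis:

```python
def simulate_sales(m, p, q, lifetime, periods):
    """m: market potential; p: external (innovation) coefficient;
    q: internal (imitation) coefficient; lifetime: periods until a
    naive one-for-one replacement of each first purchase."""
    cum, first = 0.0, []
    for _ in range(periods):
        adopters = (p + q * cum / m) * (m - cum)   # Bass hazard * remaining market
        first.append(adopters)
        cum += adopters
    # Replacement demand: each first purchase recurs `lifetime` periods later.
    repl = [0.0] * lifetime + first[:max(periods - lifetime, 0)]
    total = [f + r for f, r in zip(first, repl)]
    return first, repl, total

first, repl, total = simulate_sales(m=1000, p=0.03, q=0.38, lifetime=5, periods=20)
print(round(first[0], 1))   # initial sales: p * m = 30.0
```

Total sales are the sum of first purchases and replacements, which is why long-term forecasts for durables need both components, as the abstract emphasizes.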

Abstract:

Reciprocal development of the object and subject of learning: the renewal of the learning practices of front-line communities in a telecommunications company as part of the techno-economic paradigm change. Current changes in production have been seen as an indication of a shift from the techno-economic paradigm of the mass-production era to a new paradigm of the information and communication technological era. The rise of knowledge management in the late 1990s can be seen as one aspect of this paradigm shift, as knowledge creation and customer responsiveness were recognized as the prime factors in business competition. However, paradoxical conceptions concerning learning and agency have been presented in the discussion of knowledge management. One prevalent notion in the literature is that learning is based on individuals' voluntary actions, which has now become incompatible with the growing interest in knowledge-management systems. Furthermore, the commonly held view of learning as a general process that is independent of the object of learning contradicts the observation that the current need for new knowledge and new competences is caused by ongoing techno-economic changes. Even though the current view acknowledges that individuals and communities have key roles in knowledge creation, this conception defies the idea of the individuals' and communities' agency in developing the practices through which they learn. This research therefore presents a new theoretical interpretation of learning and agency based on Cultural-Historical Activity Theory. This approach overcomes the paradoxes in knowledge-management theory and offers means for understanding and analyzing changes in the ways of learning within work communities. This research is also an evaluation of the Competence-Laboratory method, which was developed as part of the study as a special application of Developmental Work Research methodology. The research data comprises the videotaped competence-laboratory processes of four front-line work communities in a telecommunications company. The findings reported in the five articles included in this thesis are based on analyses of these data. The new theoretical interpretation offered here is based on the assessment that the findings reported in the articles represent one of the front lines of the ongoing historical transformation of work-related learning, since the research site represents one of the key industries of the new “knowledge society”. The research can be characterized as the elaboration of a hypothesis concerning the development of work-related learning. According to the new theoretical interpretation, the object of activity is also the object of distributed learning in work communities. The historical socialization of production has increased the number of actors involved in an activity, which has also increased the number of mutual interdependencies as well as the need for communication. Learning practices and organizational systems of learning are historically developed forms of distributed learning mediated by specific forms of division of labor, specific tools, and specific rules. However, the learning practices of the mass-production era have become increasingly inadequate for the conditions of the new economy. This was manifested in the front-line work communities at the research site as an aggravating contradiction between the new objects of learning and the prevailing learning practices. The constituent element of the new theoretical interpretation is the idea of a work community's learning as part of its collaborative mastery of the developing business activity. The development of the business activity is at the same time a practical and an epistemic object for the community. This kind of changing object cannot be mastered by using learning practices designed for the stable conditions of mass production, because learning has to change along with the changes in the business. According to the model introduced in this thesis, the transformation of learning proceeds through specific stages: predefined learning tasks are first transformed into learning through re-conceptualizing the object of the activity and of the joint learning; then, as the new object becomes stabilized, new kinds of learning practices are created to master the re-defined object of the activity. This transformation of the form of learning is realized through a stepwise expansion of the work community's agency. To summarize, the conceptual model developed in this study sets the tool-mediated co-development of the subject and the object of learning as the theoretical starting point for developing new, second-generation knowledge-management methods. Key words: knowledge management, learning practice, organizational system of learning, agency

Abstract:

The educational reform launched in Finland in 2008 concerns the implementation of the Special Education Strategy (Opetusministeriö 2007) under an improvement initiative called Kelpo. One of the main alterations proposed in the Strategy relates to the support system for comprehensive school pupils: the existing two-level model (general and special support) is to be replaced by a new three-level model (general, intensified and special support). There are 233 municipalities involved nationwide in the Kelpo initiative, each of which has a municipal coordinator as a national delegate. The Centre for Educational Assessment [the Centre] at the University of Helsinki, led by Professor Jarkko Hautamäki, carries out the developmental assessment of the initiative's developmental process. As part of that assessment, the Centre interviewed 151 municipal coordinators in November 2008. This thesis considers the Kelpo initiative from the perspective of Michael Fullan's change theory. The aim is to identify the change-theoretical factors in the speech of the municipal coordinators interviewed by the Centre, and to form a view of the crucial factors in the reform implementation process. The appearance of the change-theoretical factors in the coordinators' speech, and the meaning of these appearances, are considered from the point of view of the change process. The Centre collected the data by interviewing the municipal coordinators (n=151) in small groups of 4-11 people. The interview method was based on Vesala and Rantanen's (2007) qualitative attitude survey method, which was adapted and developed for the Centre's developmental assessment by Hilasvuori. The method of analysis was a qualitative theory-based content analysis, processed using the Atlas.ti software. The theoretical frame of reference was grounded on Fullan's change theory, and the analysis was based on three change-theoretical categories: implementation, cooperation and perspectives in the change process. The analysis of the interview data revealed spoken expressions in the coordinators' speech that were either positively or negatively related to the theoretical categories. On the grounds of these change-theoretical relations, the existence of a change process was observed. The crucial factors of reform implementation were identified, and the conclusion is that the encounter of the new reform-based strategies with the already existing strategies in schools produces interface challenges. These challenges are particularly confronted in the context of the implementation of the new three-level support model. The interface challenges are classified as conceptual, method-based, action-based and belief-based challenges. Keywords: reform, implementation, change process, Michael Fullan, Kelpo, intensified support, special support

Abstract:

Colorectal cancer (CRC) is one of the most frequent malignancies in Western countries. Inherited factors have been suggested to be involved in 35% of CRCs. The hereditary CRC syndromes explain only ~6% of all CRCs, indicating that a large proportion of the inherited susceptibility is still unexplained. Much of the remaining genetic predisposition to CRC is probably due to undiscovered low-penetrance variants. This study was conducted to identify germline and somatic changes that contribute to CRC predisposition and tumorigenesis. MLH1 and MSH2, which underlie hereditary non-polyposis colorectal cancer (HNPCC), are considered tumor suppressor genes: the first hit is inherited in the germline, and somatic inactivation of the wild-type allele is required for tumor initiation. In a recent study, frequent loss of the mutant allele in HNPCC tumors was detected, and a new model, arguing against the two-hit hypothesis, was proposed for somatic HNPCC tumorigenesis. We tested this hypothesis by conducting LOH analysis on 25 colorectal HNPCC tumors with a known germline mutation in the MLH1 or MSH2 genes. LOH was detected in 56% of the tumors. All the losses targeted the wild-type allele, supporting the classical two-hit model for HNPCC tumorigenesis. The variants 3020insC, R702W and G908R in NOD2 predispose to Crohn's disease. The contribution of NOD2 to CRC predisposition has been examined in several case-control series, with conflicting results. We have previously shown that 3020insC does not predispose to CRC in Finnish CRC patients. To expand our previous study, the variants R702W and G908R were genotyped in a population-based series of 1042 Finnish CRC patients and 508 healthy controls. Association analyses did not show significant evidence for association of the variants with CRC. Single nucleotide polymorphism (SNP) rs6983267 at chromosome 8q24 was the first CRC susceptibility variant identified through genome-wide association studies. To characterize the role of rs6983267 in CRC predisposition in the Finnish population, we genotyped the SNP in a case-control material of 1042 cases and 1012 controls and showed that the G allele of rs6983267 is associated with an increased risk of CRC (OR 1.22; P=0.0018). Examination of allelic imbalance in the tumors heterozygous for rs6983267 revealed that a copy number increase affected 22% of the tumors and, interestingly, favored the G allele. By utilizing a computer algorithm, Enhancer Element Locator (EEL), an evolutionarily conserved regulatory motif containing rs6983267 was identified. The SNP affects the binding site of TCF4, a transcription factor that mediates Wnt signaling in cells and has proven to be crucial in colorectal neoplasia. The preferential binding of TCF4 to the risk allele G was shown in vitro and in vivo. The element drove lacZ marker gene expression in mouse embryos in a pattern that is consistent with genes regulated by the Wnt signaling pathway. These results suggest that rs6983267 at 8q24 exerts its effect on CRC predisposition by regulating gene expression. The most obvious target gene for the enhancer element is MYC, residing ~335 kb downstream; however, further studies are required to establish the transcriptional target(s) of the predicted enhancer element.
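The reported association (OR 1.22 for the G allele) is a standard case-control allelic odds ratio. A generic computation looks like the following, using made-up allele counts rather than the thesis genotype data:

```python
from math import exp, log, sqrt

def allelic_odds_ratio(a, b, c, d):
    """2x2 allele-count table: a/b = risk/other allele counts in cases,
    c/d = risk/other allele counts in controls.
    Returns the odds ratio and a Woolf 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # SE of log(OR)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only.
or_, ci = allelic_odds_ratio(1150, 934, 1050, 974)
print(round(or_, 2), [round(v, 2) for v in ci])
```

An OR above 1 with a confidence interval excluding 1 is the pattern behind statements like "the G allele is associated with an increased risk of CRC".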

Abstract:

This licentiate thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, while analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims at filling this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy, exploiting the idea of modelling private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output response to a permanent rise in government spending. Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output. Undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side-effects.
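As a deliberately stripped-down illustration of the substitutability mechanism (not the NOEM model itself): if households value effective consumption C + alpha*G and hold it at a target level, each extra unit of government spending crowds out alpha units of private consumption.

```python
def private_consumption(c_eff_target, g, alpha):
    """Households keep effective consumption c + alpha * g at a target level,
    so private consumption is the residual. alpha measures how well
    government consumption substitutes for private consumption."""
    return c_eff_target - alpha * g

# Higher substitutability alpha -> stronger crowding out per unit of G.
for alpha in (0.0, 0.5, 1.0):
    dc = (private_consumption(10.0, 2.0, alpha)
          - private_consumption(10.0, 1.0, alpha))
    print(alpha, dc)
```

This is the intuition behind finding (i) above: the closer the substitutes, the more a rise in government spending displaces private consumption.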

Abstract:

The blood-brain barrier protects the brain from foreign substances in the blood circulation. In vivo and in vitro methods for studying the blood-brain barrier have been widely reported in the literature, but only a few computational models describing the pharmacokinetics of compounds in the brain have been presented. In this study, data on blood-brain barrier permeability coefficients determined with different in vitro and in vivo methods were collected from the literature. In addition, two pharmacokinetic computational models of the blood-brain barrier were built: a microdialysis model and an efflux model. The microdialysis model is a simple pharmacokinetic model consisting of two compartments (blood circulation and brain). Based on parameters determined in vivo, the microdialysis model was used to simulate the concentrations of five compounds in rat brain and blood circulation. The model did not produce concentration curves that corresponded exactly to the in vivo situation, owing to simplifications made in the structure of the model, such as the absence of a brain tissue compartment and of transporter protein kinetics. The efflux model has three compartments: the blood circulation, the endothelial cell compartment of the blood-brain barrier and the brain. The efflux model was used in theoretical simulations to study the significance of an active efflux protein located on the luminal membrane of the blood-brain barrier, and of passive permeation, for compound concentrations in the brain extracellular fluid. The parameter studied was the steady-state ratio of unbound compound concentrations between the brain and the blood circulation (Kp,uu). The results showed that the effect of the efflux protein on the concentrations followed Michaelis-Menten kinetics. The efflux model is well suited to theoretical simulations, and active transporters can be added to the model. With theoretical simulations, results from in vitro and in vivo studies can be combined, and the contributing factors can be studied in a single simulation.
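The steady-state logic of such an efflux model can be sketched with a single brain compartment, a passive exchange clearance PS and a Michaelis-Menten luminal efflux pump; all parameter values here are illustrative. Well below Km the pump is approximately linear (clearance about Vmax/Km), so Kp,uu approaches PS / (PS + Vmax/Km).

```python
def kp_uu(c_blood, ps, vmax, km, v_brain=1.0, dt=0.01, steps=20000):
    """Integrate the brain free concentration to steady state.
    ps: passive permeability clearance; vmax, km: Michaelis-Menten
    parameters of the luminal efflux pump (illustrative units)."""
    c_brain = 0.0
    for _ in range(steps):
        passive = ps * (c_blood - c_brain)          # passive exchange
        active = vmax * c_brain / (km + c_brain)    # active efflux to blood
        c_brain += dt * (passive - active) / v_brain
    return c_brain / c_blood

kp = kp_uu(c_blood=0.01, ps=1.0, vmax=10.0, km=10.0)
print(round(kp, 3))  # close to ps / (ps + vmax/km) = 0.5 in the linear regime
```

Raising Vmax or lowering Km pushes Kp,uu below 1 (net efflux), while setting Vmax to zero returns Kp,uu to 1, which is the kind of sensitivity the theoretical simulations explore.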