917 results for ARCH and GARCH Models
Abstract:
Aim. Several software packages (SWP) and models have been released for the quantification of myocardial perfusion (MP). Although they have all been validated against something, the question remains how well their values agree. The present analysis focused on a cross-comparison of three SWP for MP quantification of 13N-ammonia PET studies. Materials & Methods. 48 rest and stress MP 13N-ammonia PET studies of hypertrophic cardiomyopathy (HCM) patients (Sciagrà et al., 2009) were analysed with three SWP - Carimas, PMOD, and FlowQuant - by three observers blinded to each other's results. All SWP implement the one-tissue-compartment model (1TCM, DeGrado et al. 1996), and the first two also implement the two-tissue-compartment model (2TCM, Hutchins et al. 1990). A linear mixed model for repeated measures was fitted to the data. Where appropriate, Bland-Altman plots were used as well. Reproducibility was assessed at the global, regional, and segmental levels. Intraclass correlation coefficients (ICC) and differences between the SWP and between models were obtained. ICC≥0.75 indicated excellent reproducibility, 0.4≤ICC<0.75 fair to good reproducibility, and ICC<0.4 poor reproducibility (Rosner, 2010). Results. When 1TCM MP values were compared, the SWP agreement at the global and regional levels was excellent, except for Carimas vs. PMOD at RCA (ICC=0.715) and PMOD vs. FlowQuant at LCX (ICC=0.745), which were good. In the segmental analysis, the agreement between all SWP was excellent in five segments (7, 12, 13, 16, and 17); in the remaining 12 segments the agreement varied between the compared SWP. Carimas showed excellent agreement with FlowQuant in 13 segments and good agreement in four (1, 5, 6, 11: 0.687≤ICCs≤0.73); Carimas had excellent agreement with PMOD in 11 segments, good in five (4, 9, 10, 14, 15: 0.682≤ICCs≤0.737), and poor in segment 3 (ICC=0.341). PMOD had excellent agreement with FlowQuant in eight segments and substantial-to-good agreement in nine (1, 2, 3, 5, 6, 8-11: 0.585≤ICCs≤0.738). Agreement between Carimas and PMOD for the 2TCM was good at the global level (ICC=0.745), excellent at LCX (0.780) and RCA (0.774), and good at LAD (0.662); agreement was excellent for ten segments, fair-to-substantial for segments 2, 3, 8, 14, and 15 (0.431≤ICCs≤0.681), and poor for segments 4 (0.384) and 17 (0.278). Conclusions. The three SWP used by different operators to analyse 13N-ammonia PET MP studies provide results that agree well at the global and regional levels, and mostly well even at the segmental level. Agreement is better for the 1TCM. The poor agreement at segments 4 and 17 for the 2TCM needs further clarification.
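For reference, a minimal Python sketch of how the Rosner (2010) ICC cut-offs cited above translate into agreement categories; the function name is arbitrary and the example values, quoted from the abstract, are purely illustrative.

```python
def icc_category(icc: float) -> str:
    """Classify an intraclass correlation coefficient using the
    Rosner (2010) cut-offs cited in the abstract."""
    if icc >= 0.75:
        return "excellent"
    if icc >= 0.40:
        return "fair to good"
    return "poor"

# Illustrative 1TCM comparisons quoted in the abstract.
for label, icc in [("Carimas vs. PMOD, RCA", 0.715),
                   ("PMOD vs. FlowQuant, LCX", 0.745),
                   ("Carimas vs. PMOD, segment 3", 0.341)]:
    print(f"{label}: ICC={icc} -> {icc_category(icc)}")
```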
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample size (n < 30) and this should encourage highly conservative use of predictions based on small sample size and restrict their use to exploratory modelling.
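As a hedged illustration of the evaluation metric used (not the study's code or data), the sketch below computes the area under the ROC curve for presence-absence records against model suitability scores, assuming NumPy and scikit-learn; all values are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                              # synthetic presence (1) / absence (0) records
y_score = np.clip(0.6 * y_true + 0.5 * rng.random(200), 0.0, 1.0)  # synthetic model suitability scores

# AUC, as used in the study to compare algorithms across sample sizes.
print("AUC =", round(roc_auc_score(y_true, y_score), 3))
```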
Abstract:
Understanding the emplacement and growth of intrusive bodies in terms of mechanism, duration, thermal evolution and rates is a fundamental aspect of crustal evolution. Recent studies show that many plutons grow over several Ma by in situ accretion of discrete magma pulses, which constitute small-scale magmatic reservoirs. The residence time of magmas, and hence their capacities to interact and differentiate, are controlled by the local thermal environment. The latter is highly dependent on 1) the emplacement depth, 2) the magma and country rock compositions, 3) the country rock thermal conductivity, 4) the rate of magma injection and 5) the geometry of the intrusion. In shallow-level plutons, where magmas solidify quickly, evidence for magma mixing and/or differentiation processes is considered by many authors to be inherited from deeper levels. This work shows, however, that in-situ differentiation and magma interactions occurred within basaltic and felsic sills at shallow depth (0.3 GPa) in the St-Jean-du-Doigt (SJDD) bimodal intrusion, France. This intrusion was emplaced ca. 347 Ma ago (ID-TIMS U/Pb on zircon) in the Precambrian crust of the Armorican Massif and preserves remarkable sill-like emplacement processes of bimodal mafic-felsic magmas. Field evidence coupled with high-precision zircon U-Pb dating documents progressive thermal maturation within the incrementally built lopolith. Early m-thick mafic sills (eastern part) form the roof of the intrusion and are homogeneous and fine-grained, with planar contacts with neighboring felsic sills; within a minimal 0.8 Ma time span, the system gets warmer (western part). Sills are emplaced by under-accretion beneath the older eastern part, and they interact and mingle. A striking feature of this younger, warmer part is the in-situ differentiation of the mafic sills in the top 40 cm of the layer, which suggests liquid survival in the shallow crust. Rheological and thermal models were run in order to determine the parameters required to allow the observed in-situ differentiation-accumulation processes. Strong constraints such as the total emplacement duration (ca. 0.8 Ma, TIMS date) and the pluton thickness (1.5 km, gravity model) allow a quantitative estimation of the various parameters required (injection rates, incubation time, ...). The results show that in-situ differentiation may be achieved in less than 10 years at such shallow depth, provided that: (1) The differentiating sills are injected beneath consolidated, yet still warm, basalt sills, which act as low-conductivity insulating screens (eastern part formation in the SJDD intrusion). The latter are emplaced in a very short time (800 years) at a high injection rate (0.5 m/yr) in order to create a "hot zone" in the shallow crust (incubation time). This implies that nearly 1/3 of the pluton (400 m) is emplaced by sustained magmatic activity occurring on a short time scale at the very beginning of the system. (2) Once the incubation time is reached, the calculations show that a small hot zone is created at the base of the sill pile, where new injections stay above their solidus temperature and may interact and differentiate. Extraction of differentiated residual liquids might eventually take place, and these liquids may mix with newly injected magma, as documented in active syn-emplacement shear zones within the "warm" part of the pluton.
(3) Finally, the models show that in order to maintain a permanent hot zone at shallow level, the injection rate must be 0.03 m/yr, with injection of 5 m thick basaltic sills every 130 yr, implying the formation of a 15 km thick pluton. As this thickness contradicts the one calculated for SJDD (1.5 km) and greatly exceeds the average thickness observed for many shallow-level plutons, I infer that there is no permanent hot zone (or magma chamber) at such a shallow level. I rather propose the formation of small, ephemeral (10-15 yr) reservoirs, which represent only small portions of the final size of the pluton. Thermal calculations show that, in the case of SJDD, 5 m thick basaltic sills emplaced every 1500 yr allow the formation of such ephemeral reservoirs. The latter are formed by several sills, which are in a mushy state and may interact and differentiate over a short time. The mineralogical, chemical and isotopic data presented in this study suggest a signature intermediate between E-MORB-like and arc-like for the SJDD mafic sills and feeder dykes. The mantle source involved produced hydrated magmas and may be asthenosphere modified by "arc-type" components, probably related to a subducting slab. Combined fluid-mobile/immobile trace elements and Sr-Nd isotopes suggest that such subduction components are mainly fluids derived from altered oceanic crust, with a minor effect from the subducted sediments. The close match between the SJDD compositions and BABB may point to a continental back-arc setting with little crustal contamination. If so, the SJDD intrusion is a major witness of an extensional tectonic regime during the Early Carboniferous, linked to the subduction of the Rheno-Hercynian Ocean beneath the Variscan terranes. Also of interest is the unusual association of cogenetic (same isotopic compositions) K-feldspar A-type granite and albite-granite. A-type granites may form by magma mixing between the mafic magma and crustal melts. Alternatively, they might derive from the melting of a biotite-bearing quartzo-feldspathic crustal protolith triggered by early mafic injections at low crustal levels. Albite-granite may form by remelting of plagioclase cumulates issued from A-type magma differentiation.
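The emplacement rates quoted above can be cross-checked with simple arithmetic using only the figures given in the abstract (rounded):

\[
0.5\ \mathrm{m\,yr^{-1}} \times 800\ \mathrm{yr} = 400\ \mathrm{m},
\qquad
\frac{5\ \mathrm{m}}{130\ \mathrm{yr}} \approx 0.04\ \mathrm{m\,yr^{-1}},
\qquad
\frac{5\ \mathrm{m}}{1500\ \mathrm{yr}} \approx 0.003\ \mathrm{m\,yr^{-1}}.
\]

That is, the early eastern sill package accounts for roughly a quarter to a third of the 1.5 km thick pluton, the permanent hot-zone scenario requires a long-term accretion rate of the order of the quoted 0.03 m/yr, and the ephemeral-reservoir scenario requires about an order of magnitude less.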
Abstract:
The objective of this work was to develop uni- and multivariate models to predict maximum soil shear strength (τmax) under different normal stresses (σn), water contents (U), and soil managements. The study was carried out in a Rhodic Haplustox under Cerrado (control area) and under no-tillage and conventional tillage systems. Undisturbed soil samples were taken from the 0.00-0.05 m layer and subjected to increasing U and σn in shear strength tests. The uni- and multivariate models - respectively τmax = 10^(a+bU) and τmax = 10^(a+bU+cσn) - were significant in all three soil management systems evaluated and satisfactorily explain the relationship between U, σn, and τmax. The soil under Cerrado has the highest shear strength (τ) estimated with the univariate model, regardless of soil water content, whereas the soil under conventional tillage shows the highest values with the multivariate model, which were associated with the lowest water contents at the soil consistency limits in this management system.
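A minimal sketch of how the multivariate model form above could be evaluated; the coefficients a, b, c and the units are hypothetical placeholders, not the paper's fitted estimates.

```python
# Multivariate model form from the abstract: tau_max = 10**(a + b*U + c*sigma_n).
# The coefficients below are hypothetical placeholders, not the fitted values.
a, b, c = 2.0, -1.5, 0.002

def tau_max(U: float, sigma_n: float) -> float:
    """Predicted maximum shear strength from water content U and
    normal stress sigma_n (illustrative units only)."""
    return 10 ** (a + b * U + c * sigma_n)

print(tau_max(U=0.20, sigma_n=100.0))
```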
Abstract:
This paper is a study of the concept of priority and its use, together with the notion of hierarchy, in academic writing and theoretical models of translation. Hierarchies and priorities can be implicit or explicit, prescribed, suggested or described. The paper starts, chronologically, with Nida and Levý's hierarchical accounts of translation and follows their legacy in scholars as different as Newmark and Gutt. The concept of priorities is also hinted at in didactic models (Nord) as well as in norm-theoretical accounts of translation (Toury and Chesterman) within Descriptive Translation Studies. All of these authors are analyzed and commented on. The paper calls for a more systematic and straightforward account of translational priorities and proposes a few conceptual tools that stem from this research model, including the concepts of ambition and richness of a translation. Finally, the paper concludes with an adaptation of Lakoff and Johnson's view of prototypicality and its potential usefulness in research into, and the understanding of, translation.
Abstract:
A mathematical model of the voltage drop which arises in on-chip power distribution networks is used to compare the maximum voltage drop in the case of different geometric arrangements of the pads supplying power to the chip. These include the square or Manhattan power pad arrangement, which currently predominates, as well as equilateral triangular and hexagonal arrangements. In agreement with the findings in the literature and with physical and SPICE models, the equilateral triangular power pad arrangement is found to minimize the maximum voltage drop. This headline finding is a consequence of relatively simple formulas for the voltage drop, with explicit error bounds, which are established using complex analysis techniques, and elliptic functions in particular.
Abstract:
VALOSADE (Value Added Logistics in Supply and Demand Chains) is the research project of Anita Lukka's VALORE (Value Added Logistics Research) research team at Lappeenranta University of Technology. VALOSADE is included in the ELO (Ebusiness logistics) technology programme of Tekes (Finnish Technology Agency). SMILE (SME-sector, Internet applications and Logistical Efficiency) is one of the four subprojects of VALOSADE. The SMILE research focuses on a case network composed of small and medium-sized mechanical maintenance service providers and global wood processing customers. The basic principle of the SMILE study is communication and ebusiness in the supply and demand network. This first phase of the research concentrates on creating the background for the SMILE study and for the ebusiness solutions of the maintenance case network. The focus is on general trends of ebusiness in the supply chains and networks of different industries; the total ebusiness system architecture of company networks; the ebusiness strategy of a company network; the information value chain; the different factors that influence the ebusiness solution of a company network; and the correlation between ebusiness and competitive advantage. Literature, interviews and benchmarking were used as research methods in this qualitative case study. Networks and end-to-end supply chains are organizational structures that can add value for the end customer. Information is one of the key factors in these decentralized structures. Because of the decentralization of business, information is produced and used in different companies and in different information systems. Information refinement services are needed to manage information flows in company networks between different systems. Furthermore, some new solutions, such as network information systems, are utilised in optimising network performance and in standardizing network common processes. Some cases have, however, indicated that the utilization of ebusiness in a decentralized business model is not always a necessity; the added value of ICT must be defined case-specifically. In the theory part of the report, different ebusiness and architecture models are introduced. These models are compared to the empirical case data in the research results. The biggest difference between the theory and the empirical data is that the models are mainly developed for large-scale companies - not for SMEs. This is because implemented network ebusiness solutions are mainly large-company centred. Genuine SME-network-centred ebusiness models are quite rare, and studies in that area have been few in number. Business relationships between customers and their SME suppliers nowadays concentrate more on collaborative tactical and strategic initiatives, in addition to transaction-based operational initiatives. However, ebusiness systems are still mainly based on the exchange of operational transactional data. Collaborative ebusiness solutions are in the planning or pilot phase in most case companies. Furthermore, many ebusiness solutions are nowadays between two participants, while network and end-to-end supply chain transparency and information systems are quite rare. Transaction volumes, data formats, the types of exchanged information, information criticality, the type and duration of the business relationship, the internal information systems of partners, and processes and operation models (e.g. different ordering models) differ among the network companies, and furthermore the companies are at different stages of networking and ebusiness readiness.
Because of these factors, different customer-supplier combinations in the network must utilise totally different ebusiness architectures, technologies, systems and standards.
Abstract:
The main objective of this thesis was to generate better filtration technologies for the effective production of pure starch products, and thereby to optimise filtration sequences using the created models, as well as to synthesise the theories of the different filtration stages that are suitable for starches. At first, the structure and characteristics of the different starch grades are introduced, and each starch grade is shown to have special characteristics. These are taken as the basis for understanding the differences in the behaviour of the different native starch grades and their modifications in pressure filtration. Next, the pressure filtration process is divided into stages, which are filtration, cake washing, compression dewatering and displacement dewatering. Each stage is considered individually in its own chapter. The order of suitable combinations of the process stages is studied, as well as the proper durations and pressures of the stages. The principles of the theory of each stage are reviewed, the methods for monitoring the progress of each stage are presented, and finally their modelling is introduced. The experimental results obtained from the different stages of the starch filtration tests are given, and the suitability of the theories and models to starch filtration is shown. Finally, the theories and models are gathered together, and it is shown that the analysis of the whole starch pressure filtration process can be performed with the software developed.
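As background only (not necessarily the formulation adopted in the thesis), the classical constant-pressure cake filtration relation illustrates the kind of stage model referred to:

\[
\frac{t}{V} = \frac{\mu\,\alpha\,c}{2 A^{2} \Delta p}\,V + \frac{\mu\,R_m}{A\,\Delta p},
\]

where t is the filtration time, V the filtrate volume, μ the filtrate viscosity, α the specific cake resistance, c the mass of solids deposited per unit filtrate volume, A the filter area, R_m the filter medium resistance, and Δp the applied pressure difference.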
Abstract:
Direct torque control (DTC) has become an accepted vector control method alongside current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore, the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of the current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of an earlier presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements into a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC, the stator flux linkage reference is usually kept constant. Achieving the minimum current requires the control of this reference. An on-line method to perform the minimization of the current by controlling the stator flux linkage reference is presented. Also, the control of the reference above the base speed is considered. A new flux linkage estimation method is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
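A minimal sketch of the kind of nonlinear least-squares fit described for the initial rotor angle estimation, assuming SciPy and a generic saliency model L(θ) = L0 + L2·cos(2(θ − θr)); the model form and all numerical values are illustrative assumptions, not the thesis's exact formulation.

```python
# Fit direction-dependent inductance measurements to a generic saliency model
# L(theta) = L0 + L2*cos(2*(theta - theta_r)) with nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

theta_meas = np.linspace(0.0, np.pi, 12)                       # excitation directions (rad)
L_meas = 3.2e-3 + 0.4e-3 * np.cos(2 * (theta_meas - 0.7))      # synthetic inductances (H)

def residuals(p):
    L0, L2, theta_r = p
    return L0 + L2 * np.cos(2 * (theta_meas - theta_r)) - L_meas

fit = least_squares(residuals, x0=[3.0e-3, 0.5e-3, 0.0])
print("estimated rotor angle (rad):", fit.x[2])   # note: the cos(2*) term leaves a 180-degree ambiguity
```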
Abstract:
Poly(ADP-ribose) polymerase 1 (PARP-1) is a constitutive enzyme, the major isoform of the PARP family, which is involved in the regulation of DNA repair, cell death, metabolism, and inflammatory responses. Pharmacological inhibitors of PARP provide significant therapeutic benefits in various preclinical disease models associated with tissue injury and inflammation. However, our understanding of the role of PARP activation in the pathophysiology of liver inflammation and fibrosis is limited. In this study we investigated the role of PARP-1 in liver inflammation and fibrosis using acute and chronic models of carbon tetrachloride (CCl4)-induced liver injury and fibrosis, a model of bile duct ligation (BDL)-induced hepatic fibrosis in vivo, and isolated liver-derived cells ex vivo. Pharmacological inhibition of PARP with structurally distinct inhibitors or genetic deletion of PARP-1 markedly attenuated CCl4-induced hepatocyte death, inflammation, and fibrosis. Interestingly, the chronic CCl4-induced liver injury was also characterized by mitochondrial dysfunction and dysregulation of numerous genes involved in metabolism. Most of these pathological changes were attenuated by PARP inhibitors. PARP inhibition not only prevented CCl4-induced chronic liver inflammation and fibrosis but was also able to reverse these pathological processes. PARP inhibitors also attenuated the development of BDL-induced hepatic fibrosis in mice. In liver biopsies of subjects with alcoholic or hepatitis B-induced cirrhosis, increased nitrative stress and PARP activation were noted. CONCLUSION: The reactive oxygen/nitrogen species-PARP pathway plays a pathogenetic role in the development of liver inflammation, metabolic dysregulation, and fibrosis. PARP inhibitors are currently in clinical trials for oncological indications, and the current results indicate that liver inflammation and liver fibrosis may be additional clinical indications where PARP inhibition may be of translational potential.
Abstract:
This paper asks a simple question: if humans and their actions co-evolve with hydrological systems (Sivapalan et al., 2012), what is the role of hydrological scientists, who are also humans, within this system? To put it more directly, as traditionally there is a supposed separation of scientists and society, can we maintain this separation as socio-hydrologists studying a socio-hydrological world? This paper argues that we cannot, using four linked sections. The first section draws directly upon the concerns of science and technology studies to make a case to the (socio-hydrological) community that we need to be sensitive to constructivist accounts of science in general and socio-hydrology in particular. I review three positions taken by such accounts and apply them to hydrological science, supported with specific examples: (a) the ways in which scientific activities frame socio-hydrological research, such that at least some of the knowledge that we obtain is constructed by precisely what we do; (b) the need to attend to how socio-hydrological knowledge is used in decision-making, as evidence suggests that hydrological knowledge does not flow simply from science into policy; and (c) the observation that those who do not normally label themselves as socio-hydrologists may actually have a profound knowledge of socio-hydrology. The second section provides an empirical basis for considering these three issues by detailing the history of the practice of roughness parameterisation, using parameters like Manning's n, in hydrological and hydraulic models for flood inundation mapping. This history underpins the third section, which is a more general consideration of one type of socio-hydrological practice: predictive modelling. I show that as part of a socio-hydrological analysis, hydrological prediction needs to be thought through much more carefully: not only because hydrological prediction exists to help inform decisions that are made about water management; but also because those predictions contain assumptions, the predictions are only correct in so far as those assumptions hold, and for those assumptions to hold, the socio-hydrological system (i.e. the world) has to be shaped so as to include them. Here, I add to the "normal" view that ideally our models should represent the world around us, to argue that for our models (and hence our predictions) to be valid, we have to make the world look like our models. Decisions over how the world is modelled may transform the world as much as they represent the world. Thus, socio-hydrological modelling has to become a socially accountable process such that the world is transformed, through the implications of modelling, in a fair and just manner. This leads into the final section of the paper, where I consider how socio-hydrological research may be made more socially accountable, in a way that is both sensitive to the constructivist critique (Sect. 1) but which retains the contribution that hydrologists might make to socio-hydrological studies. This includes (1) working with conflict and controversy in hydrological science, rather than trying to eliminate them; (2) using hydrological events to avoid becoming locked into our own frames of explanation and prediction; (3) being empirical and experimental but in a socio-hydrological sense; and (4) co-producing socio-hydrological predictions.
I will show how this might be done through a project that specifically developed predictive models for making interventions in river catchments to increase high river flow attenuation. Therein, I found myself becoming detached from my normal disciplinary networks and attached to the co-production of a predictive hydrological model with communities normally excluded from the practice of hydrological science.
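For reference, the roughness parameter discussed in the second section enters the standard Manning formula (SI units):

\[
v = \frac{1}{n}\,R^{2/3}\,S^{1/2},
\]

where v is the cross-section mean velocity, n Manning's roughness coefficient, R the hydraulic radius, and S the friction (energy) slope.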
Abstract:
The need for high performance, high precision, and energy saving in rotating machinery demands an alternative solution to traditional bearings. Because of their contactless operating principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration insulation come at the expense of high cost and a complex technical solution. All these properties make the use of AMBs appropriate primarily for specific and highly demanding applications. High-performance and high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers with a considerable computational burden. Fortunately, advancements in signal processing devices in recent years have provided a new perspective on the real-time control of AMBs. The design and real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, the control design and implementation in field programmable gate array (FPGA) circuits are investigated. In the optimal design, the physical constraints of the system guide the selection of the weighting matrices. The plant model is complemented by augmenting it with appropriate disturbance models. The compensation of the force-field nonlinearities is proposed for decreasing the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for cancelling the magnetic force vibrations or the vibrations in the measured positions is presented. The theoretical studies are verified by practical experiments on a custom-built laboratory test rig. The test rig uses a prototyping control platform developed within the scope of this work. To sum up, the work takes a step towards an embedded single-chip FPGA-based controller for AMBs.
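A minimal LQ design sketch for a single, rigid AMB axis modelled as m·ẍ = ks·x + ki·i, assuming SciPy; the parameter values and weighting matrices are illustrative assumptions, not those of the thesis's flexible-rotor model.

```python
# Continuous-time LQ state-feedback design for one AMB axis, modelled as
# m*x'' = ks*x + ki*i (rigid-rotor toy model; values are illustrative only).
import numpy as np
from scipy.linalg import solve_continuous_are

m, ks, ki = 3.0, 5.0e5, 100.0                    # mass (kg), position and current stiffness
A = np.array([[0.0, 1.0], [ks / m, 0.0]])        # state: [position, velocity]
B = np.array([[0.0], [ki / m]])                  # input: coil current
Q = np.diag([1.0e6, 1.0])                        # state weights (in practice chosen from the
R = np.array([[1.0e-2]])                         # physical constraints of the actual rig)

P = solve_continuous_are(A, B, Q, R)             # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)                  # optimal state-feedback gain, u = -K x
print("LQ gain K =", K)
```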
Abstract:
PURPOSE: To preliminarily test the hypothesis that fluorine 19 ((19)F) magnetic resonance (MR) imaging enables the noninvasive in vivo identification of plaque inflammation in a mouse model of atherosclerosis, with histologic findings as the reference standard. MATERIALS AND METHODS: The animal studies were approved by the local animal ethics committee. Perfluorocarbon (PFC) emulsions were injected intravenously in a mouse model of atherosclerosis (n = 13), after which (19)F and anatomic MR imaging were performed at the level of the thoracic aorta and its branches at 9.4 T. Four of these animals were imaged repeatedly (at 2-14 days) to determine the optimal detection time. Repeated-measures analysis of variance with a Tukey test was applied to determine if there was a significant change in (19)F signal-to-noise ratio (SNR) of the plaques and liver between the time points. Six animals were injected with a PFC emulsion that also contained a fluorophore. As a control against false-positive results, wild-type mice (n = 3) were injected with a PFC emulsion, and atherosclerotic mice were injected with a saline solution (n = 2). The animals were sacrificed after the last MR imaging examination, after which high-spatial-resolution ex vivo MR imaging and bright-field and immunofluorescent histologic examination were performed. RESULTS: (19)F MR signal was detected in vivo in plaques in the aortic arch and its branches. The SNR was found to significantly increase up to day 6 (P < .001), and the SNR of all mice at this time point was 13.4 ± 3.3. The presence of PFC and plaque in the excised vessels was then confirmed both through ex vivo (19)F MR imaging and histologic examination, while no signal was detected in the control animals. Immunofluorescent histologic findings confirmed the presence of PFC in plaque macrophages. CONCLUSION: (19)F MR imaging allows the noninvasive in vivo detection of inflammation in atherosclerotic plaques in a mouse model of atherosclerosis and opens up new avenues for both the early detection of vulnerable atherosclerosis and the elucidation of inflammation mechanisms in atherosclerosis.
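A hedged sketch of the kind of time-point comparison described (repeated-measures ANOVA followed by a Tukey test), assuming pandas and statsmodels; the SNR values below are synthetic, not the study's data.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic (19)F SNR measurements for four mice imaged at three time points.
data = pd.DataFrame({
    "mouse": [1, 2, 3, 4] * 3,
    "day":   [2] * 4 + [6] * 4 + [14] * 4,
    "snr":   [5.1, 4.8, 6.0, 5.5, 13.0, 12.5, 14.1, 13.9, 9.0, 8.7, 10.2, 9.5],
})

print(AnovaRM(data, depvar="snr", subject="mouse", within=["day"]).fit())  # repeated-measures ANOVA
print(pairwise_tukeyhsd(data["snr"], data["day"]))                         # pairwise Tukey HSD comparisons
```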
Abstract:
A market entry strategy sets the objectives, goals, resources, and operational guidelines that steer a company's international business operations. This master's thesis addresses three elements of a company's market entry strategy: the choice of operation mode, pricing, and distribution. The thesis builds a theoretical framework for examining the decisions related to these elements, taking the specific characteristics of the pharmaceutical industry into account. Compared with other industries, the pharmaceutical industry has several special characteristics, which the thesis introduces. In addition, an essential part of the work is to examine country-specific regulations and operating models in the pharmaceutical industry. The thesis was carried out for Santen Oy, which develops, manufactures, and markets ophthalmic medicines and is planning to expand its operations to the Central and Southern European markets. The first target country in this expansion process is Germany, and the thesis examines the measures directed at the German market. Based on the theoretical framework, the characteristics of the market, and several different analyses, the aim of the thesis is to give recommendations concerning the operation mode, product pricing, and distribution.
Abstract:
The objective of this study was to examine the value network and business models of wireless Internet services. The study was qualitative in nature, and a constructive case study was used as the research strategy. The example service was the Treasure Hunters mobile phone game. The study consisted of a theoretical and an empirical part. In the theoretical part, innovation, business models, and the value network were conceptually linked to each other, and a foundation was laid for the development of business models. The empirical part first focused on creating business models based on the developed innovations. Finally, the aim was to define the value network for implementing the service. The research methods used were an innovation session, interviews, and a questionnaire survey. Based on the results, several business concepts were formed, as well as a description of a basic value network model for wireless games. The conclusion was that, in order to be realized, wireless services require a value network consisting of several actors.