17 results for models of computation

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 100.00%

Abstract:

The thesis examines the free zone concept as part of companies' international supply chains. The aim is to find ways in which the attractiveness of a free zone can be increased from the companies' perspective, and to determine what kind of business companies can conduct in a free zone. The thesis identifies factors that affect the success of a free zone and that could be applicable to the border area between South-East Finland and Russia, taking the prevailing conditions and legislation into account as limiting factors. Success factors and business models are sought by briefly studying and analysing a few existing and functioning free zones. The harmonisation of EU customs law and the liberalisation of international trade reduce the traditional significance of the free zone as a duty-free area. Instead, free zones increasingly operate as logistics centres in international trade and offer services with which companies can improve their logistical competitiveness. Networking, satellite solutions and cooperation are means by which the various logistics service providers in the South-East Finland region can improve their performance and flexibility in the international supply chain.

Relevance: 100.00%

Abstract:

The aim of the thesis was to examine the business models of telecommunications equipment manufacturers. The thesis is divided into a theoretical and an empirical part. The theoretical part focuses mainly on defining the concept of a business model. Based on existing definitions, as well as on terms closely related to the business model concept, a new model of the business model was created. The empirical part of the thesis focuses on defining the business model of the case company, Cisco Systems, and describing its development. The development of the business model was followed over a period of two years, mainly through the company's press releases, articles and other public material. In addition to Cisco, the empirical part examined the development of the business models of eight other equipment manufacturers. The main objective of the empirical part was to determine how the business models of telecommunications equipment manufacturers are developing now and in the future.

Relevance: 100.00%

Abstract:

The goal of the study was to find a proper frame for understanding business models and to study the business models of selected companies in packaging machinery manufacturing. Good practices and tips are sought from business models that have helped companies to succeed. The future of the packaging industry is also examined in the face of various changes and their influence on machinery manufacturers' business models. In the theory part, the history of business models and the frame best suited to this study are presented. The chosen case companies are discussed according to the frame and compared to each other to point out the differences. Based on the good practices observed in the companies and on information from other sources, a new business model has been constructed, including points that should be considered when constructing a business model. The information sources of this study were interviews, annual reports, company presentations and web pages. The study is an interpretative case study.

Relevance: 100.00%

Abstract:

The condensation rate has to be high in the safety pressure suppression pool systems of Boiling Water Reactors (BWR) in order for them to fulfill their safety function. The phenomena caused by such a high direct contact condensation (DCC) rate turn out to be very challenging to analyse, either with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was the first to be simulated. The POOLEX experiment STB-31 was operated near the conditions between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very high calculation grid resolution was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen, because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data of the bubble sizes and temporal data of the bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations.
In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. The compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which invoked self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. The comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
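The kind of comparative FFT analysis described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the thesis's actual code: it assumes a uniformly sampled pool-bottom pressure signal and identifies its dominant oscillation frequencies with a discrete Fourier transform; the 2 Hz test component and all parameters are invented for demonstration.

```python
import numpy as np

def dominant_frequencies(signal, sample_rate, n_peaks=3):
    """Return the n_peaks strongest frequency components of a real-valued signal."""
    # Remove the mean so the DC bin does not dominate the spectrum.
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    order = np.argsort(spectrum)[::-1]  # indices sorted by descending magnitude
    return freqs[order[:n_peaks]]

# Synthetic pressure trace: a 2 Hz oscillation plus measurement noise.
rate = 100.0                            # samples per second (assumed)
t = np.arange(0, 10, 1.0 / rate)        # 10 s of data
pressure = np.sin(2 * np.pi * 2.0 * t) + \
    0.1 * np.random.default_rng(0).normal(size=t.size)

print(dominant_frequencies(pressure, rate))  # strongest component is near 2.0 Hz
```

The same routine applied to bubble size time series and to pressure data would allow the characteristic frequencies of the two signals to be compared, which is the spirit of the eigenmode analysis mentioned in the abstract.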

Relevance: 100.00%

Abstract:

A comparison between two competing models of an all-mechanical power transmission system is carried out using the Dymola software as the simulation tool. This tool is compared with MATLAB/Simulink using functionality, user-friendliness and price as comparison criteria. In this research we assume that the torque is balanceable, and the transmission ratios are calculated. Using kinematic connection sketches of the two transmission models, simulation models are built in the Dymola simulation environment. The models of the transmission systems are modified according to the simulation results in order to achieve a continuously variable transmission ratio. The simulation results are compared between the two transmission systems, and the main features of Dymola and MATLAB/Simulink are compared. The advantages and disadvantages of the two software packages are analysed and compared.

Relevance: 100.00%

Abstract:

Coronary artery disease is an atherosclerotic disease which leads to narrowing of the coronary arteries, deteriorated myocardial blood flow and myocardial ischaemia. In acute myocardial infarction, a prolonged period of myocardial ischaemia leads to myocardial necrosis, and the necrotic myocardium is replaced with scar tissue. Myocardial infarction results in various changes in cardiac structure and function over time, known as 'adverse remodelling'. This remodelling may result in a progressive worsening of cardiac function and the development of chronic heart failure. In this thesis, we developed and validated three different large animal models of coronary artery disease, myocardial ischaemia and infarction for translational studies. In the first study, the coronary artery disease model combined induced diabetes and hypercholesterolemia. In the second study, myocardial ischaemia and infarction were caused by a surgical method, and in the third study by catheterisation. For model characterisation, we used non-invasive positron emission tomography (PET) methods for the measurement of myocardial perfusion, oxidative metabolism and glucose utilisation. Additionally, cardiac function was measured by echocardiography and computed tomography. To study the metabolic changes that occur during atherosclerosis, a hypercholesterolemic and diabetic model was used with [18F]fluorodeoxyglucose ([18F]FDG) PET imaging. Coronary occlusion models were used to evaluate metabolic and structural changes in the heart and the cardioprotective effects of levosimendan during post-infarction cardiac remodelling. The large animal models were also used in the testing of novel radiopharmaceuticals for myocardial perfusion imaging. In the coronary artery disease model, we observed atherosclerotic lesions that were associated with focally increased [18F]FDG uptake.
In the heart failure models, chronic myocardial infarction led to worsening of systolic function, cardiac remodelling and decreased efficiency of the cardiac pumping function. Levosimendan therapy reduced post-infarction myocardial infarct size and improved cardiac function. The novel 68Ga-labelled radiopharmaceuticals tested in this study were not successful for the determination of myocardial blood flow. In conclusion, diabetes and hypercholesterolemia lead to the development of early-phase atherosclerotic lesions. Coronary artery occlusion produced considerable myocardial ischaemia and later infarction, followed by myocardial remodelling. The experimental models evaluated in these studies will enable further studies concerning disease mechanisms, new radiopharmaceuticals and interventions in coronary artery disease and heart failure.

Relevance: 100.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is very natural within this field: digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
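The queue-and-firing-rule model described above can be illustrated with a minimal sketch. This is hypothetical Python, not RVC-CAL or the thesis's compiler infrastructure: a node fires only when its input queues hold enough tokens, and the queues are the only communication between nodes.

```python
from collections import deque

class Node:
    """A dataflow actor: fires only when every input queue has enough tokens."""

    def __init__(self, func, inputs, output, tokens_needed=1):
        self.func = func                  # computation performed on each firing
        self.inputs = inputs              # list of deques this node consumes from
        self.output = output              # deque this node produces into
        self.tokens_needed = tokens_needed

    def can_fire(self):
        # The firing rule: all input queues must hold enough tokens.
        return all(len(q) >= self.tokens_needed for q in self.inputs)

    def fire(self):
        # Consume one token from each input, produce one token to the output.
        args = [q.popleft() for q in self.inputs]
        self.output.append(self.func(*args))

# A tiny graph: two source queues feed an adder node. Scheduling here is just
# "fire any node whose firing rule is satisfied" until nothing can fire.
a, b, out = deque([1, 2, 3]), deque([10, 20, 30]), deque()
adder = Node(lambda x, y: x + y, [a, b], out)
while adder.can_fire():
    adder.fire()

print(list(out))  # -> [11, 22, 33]
```

In this trivial graph the firing order is fully static; the quasi-static scheduling problem discussed in the abstract arises when firing rules depend on token values or state, so that only some scheduling decisions can be pre-calculated.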

Relevance: 100.00%

Abstract:

It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning the optimisation of a welded structure have used the mass of the product as the basis for cost comparison. However, it can easily be shown with a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived using linear regression analysis. The models were tested using appropriate statistical methods and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
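A welding time model of this kind can be fitted with ordinary least squares. The sketch below is a hypothetical illustration, not the thesis's actual model: the predictors (total weld length and number of welds) and all the numbers are invented for demonstration only.

```python
import numpy as np

# Hypothetical data: per-assembly total weld length (m), number of welds,
# and observed welding time (min). Values are invented for illustration.
weld_length = np.array([2.0, 4.5, 1.2, 6.3, 3.1, 5.0])
n_welds = np.array([4, 9, 3, 12, 6, 10], dtype=float)
time_min = np.array([15.1, 33.4, 9.8, 46.9, 23.0, 37.5])

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones_like(weld_length), weld_length, n_welds])
coef, *_ = np.linalg.lstsq(X, time_min, rcond=None)

def predict_welding_time(length_m, welds):
    """Predict welding time (min) from the fitted linear model."""
    return coef[0] + coef[1] * length_m + coef[2] * welds

# Predicted time for a new assembly: 3 m of welds in 6 separate welds.
print(round(predict_welding_time(3.0, 6), 1))
```

This is the basic mechanism behind regression-based time models: once the coefficients are fitted from company data, a designer can plug in the planned weld geometry and get a time (and hence cost) estimate automatically.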

Relevance: 100.00%

Abstract:

Radiostereometric analysis (RSA) is a highly accurate method for the measurement of in vivo micromotion of orthopaedic implants. Validation of the RSA method is a prerequisite for performing clinical RSA studies. Only a limited number of studies have utilised the RSA method in the evaluation of migration and inducible micromotion during fracture healing. Volar plate fixation of distal radial fractures has increased in popularity, yet there is still very little prospective randomised evidence supporting the use of these implants over other treatments. The aim of this study was to investigate the precision, accuracy, and feasibility of using RSA in the evaluation of healing in distal radius fractures treated with a volar fixed-angle plate. A physical phantom model was used to validate the RSA method for simple distal radius fractures. A computer simulation model was then used to validate the RSA method for more complex interfragmentary motion in intra-articular fractures. A separate pre-clinical investigation was performed in order to evaluate the possibility of using novel resorbable markers for RSA. Based on the validation studies, a prospective RSA cohort study of fifteen patients with plated AO type-C distal radius fractures and a 1-year follow-up was performed. RSA was shown to be highly accurate and precise in the measurement of fracture micromotion using both physical and computer-simulated models of distal radius fractures. Resorbable RSA markers demonstrated potential for use in RSA. The RSA method was found to have high clinical precision. The fractures underwent significant translational and rotational migration during the first two weeks after surgery, but not thereafter. Maximal grip caused significant translational and rotational interfragmentary micromotion. This inducible micromotion was detectable up to eighteen weeks, even after the achievement of radiographic union.
The application of RSA in the measurement of fracture fragment migration and inducible interfragmentary micromotion in AO type-C distal radius fractures is feasible but technically demanding. RSA may be a unique tool in defining the progress of fracture union.

Relevance: 100.00%

Abstract:

Atherosclerosis is a life-long vascular inflammatory disease and the leading cause of death in Finland and other western societies. Atherosclerotic plaques develop progressively, forming when lipids begin to accumulate in the vessel wall. This accumulation triggers the migration of inflammatory cells, which is a hallmark of vascular inflammation. Often a plaque becomes unstable, forming a vulnerable plaque which may rupture, causing thrombosis and, in the worst case, myocardial infarction or stroke. Identification of these vulnerable plaques before they rupture could save lives. At present, there exists no appropriate, non-invasive clinical method for their identification. The aim of this thesis was to evaluate novel positron emission tomography (PET) probes for the detection of vulnerable atherosclerotic plaques and to characterize two mouse models of atherosclerosis. These studies were performed using ex vivo and in vivo imaging modalities. The vulnerability of atherosclerotic plaques was evaluated as the expression of active inflammatory cells, namely macrophages. Age and the duration of a high-fat diet had a drastic impact on the development of atherosclerotic plaques in mice. In imaging of atherosclerosis, 6-month-old mice kept on a high-fat diet for 4 months showed mature, metabolically active atherosclerotic plaques. [18F]FDG and 68Ga accumulated in the areas representative of vulnerable plaques. However, the slow clearance of 68Ga limits its use for plaque imaging. The novel synthesized [68Ga]DOTA-RGD and [18F]EF5 tracers demonstrated efficient uptake in plaques compared to the healthy vessel wall, but the pharmacokinetic properties of these tracers were not optimal in the models used. In conclusion, these studies resulted in the identification of new strategies for the assessment of plaque stability and of mouse models of atherosclerosis which could be used for plaque imaging.
In the probe panel used, [18F]FDG was the best tracer for plaque imaging. However, further studies are warranted to clarify the applicability of [18F]EF5 and [68Ga]DOTA-RGD for imaging of atherosclerosis with other experimental models.