44 results for continuous-time models


Relevance:

30.00%

Publisher:

Abstract:

The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is a basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up.
These are the critical areas for which alternative methods of modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, an actuator is driven against an end stop, or an external force makes the actuator switch its direction during operation. This means that in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur when the pressure drop comes close to zero, since the first derivative of flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline function are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed. This trade-off is investigated for the two-regime orifice flow model. Very small volumes exist inside many types of valves, as well as between them. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero.
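A minimal sketch of the two-regime orifice idea described above: turbulent flow Q = K·sqrt(Δp) above a transition pressure, and an odd cubic spline below it, matched in value and first derivative at the boundary so the slope stays finite through Δp = 0. The coefficient K and the transition pressure p_tr are illustrative assumptions, not the thesis's actual parameters.

```python
import math

def orifice_flow(dp, K=1e-7, p_tr=1e5):
    """Two-regime orifice flow: Q = sign(dp)*K*sqrt(|dp|) for |dp| > p_tr,
    and an odd cubic spline Q = a*dp + b*dp**3 below the transition
    pressure p_tr, matched in value and first derivative at the boundary.
    K [m^3/(s*Pa^0.5)] and p_tr [Pa] are invented illustrative values."""
    s = math.copysign(1.0, dp)
    p = abs(dp)
    if p > p_tr:
        return s * K * math.sqrt(p)
    # Matching Q(p_tr) = K*sqrt(p_tr) and Q'(p_tr) = K/(2*sqrt(p_tr)) gives:
    a = 5.0 * K / (4.0 * math.sqrt(p_tr))   # finite slope at dp = 0
    b = -K / (4.0 * p_tr ** 2.5)
    return a * dp + b * dp ** 3
```

Because the spline is odd in Δp, the model passes smoothly through zero pressure drop, removing the infinite derivative that a pure square-root law has there.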
If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Moreover, the method is freely applicable regardless of the integration routine used. A further advantage of both of the above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and presents several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
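The pseudo-dynamic idea can be sketched as follows: instead of integrating the stiff pressure build-up equation dp/dt = (Be/V)(q_in − q_out) for a tiny volume, the small-volume pressure is relaxed to the steady state where the flows balance inside each main integration step. The linear flow laws, conductances, and relaxation gain below are invented for illustration; they are not from the thesis.

```python
def pseudo_dynamic_pressure(p0, q_in, q_out, gain=1e8, tol=1e-12, max_iter=10000):
    """Cascade-loop steady-state solve for a small-volume pressure:
    relax p until the net flow q_in(p) - q_out(p) vanishes, instead of
    integrating the stiff ODE dp/dt = (Be/V)*(q_in - q_out).
    The gain acts as a pseudo time step; all values are illustrative."""
    p = p0
    for _ in range(max_iter):
        residual = q_in(p) - q_out(p)
        if abs(residual) < tol:
            break
        p += gain * residual  # pseudo-time relaxation toward flow balance
    return p

# Illustrative linear flow laws: inflow from a 10 MPa supply, outflow to tank.
g_in, g_out = 2e-9, 3e-9  # flow conductances [m^3/(s*Pa)], invented values
p_ss = pseudo_dynamic_pressure(5e6,
                               lambda p: g_in * (10e6 - p),
                               lambda p: g_out * p)
# Analytic steady state: g_in*10e6/(g_in + g_out) = 4 MPa
```

The outer integrator never sees the small volume's fast eigenvalue, so a fixed-step explicit method remains stable regardless of how small the volume is.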

Relevance:

30.00%

Publisher:

Abstract:

In any decision making under uncertainty, the goal is usually to minimize the expected cost. The minimization of cost under uncertainty is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many practical models contain complex and varying parameters that cannot easily be taken into account using the usual deterministic methods of optimization. Thus, it is very important to look for other methods that can be used to gain insight into such models. The Markov chain Monte Carlo (MCMC) method is one of the practical methods that can be used for the optimization of stochastic models under uncertainty. This method is based on simulation, which provides a general methodology applicable to nonlinear and non-Gaussian state models. The MCMC method is important for practical applications because it is a unified estimation procedure which simultaneously estimates both parameters and state variables. MCMC computes the distribution of the state variables and parameters given the data measurements, and it is fast in terms of computing time when compared to other optimization methods. This thesis discusses the use of MCMC methods for the optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC, and stochastic optimization methods. Then an example is given of how MCMC can be applied to maximizing production at minimum cost in a chemical reaction process. It is observed that this method performs well in optimizing the given cost function with a very high certainty.
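As a hedged illustration of the approach, the random-walk Metropolis sampler below draws parameter samples from a posterior and uses the sample average to estimate an expected cost. The Gaussian posterior and the quadratic cost are toy assumptions for demonstration only, not the thesis's chemical-reaction model.

```python
import math, random

def metropolis(log_post, x0, n_samples=20000, step=1.0, seed=1):
    """Random-walk Metropolis: propose x' = x + step*N(0,1) and accept
    with probability min(1, post(x')/post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + step * rng.gauss(0.0, 1.0)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior: parameter T ~ N(2, 0.5^2); toy cost: quadratic in T.
log_post = lambda t: -0.5 * ((t - 2.0) / 0.5) ** 2
cost = lambda t: (t - 1.0) ** 2
draws = metropolis(log_post, x0=0.0)
burned = draws[5000:]                      # discard burn-in
expected_cost = sum(cost(t) for t in burned) / len(burned)
# Analytically, E[(T-1)^2] = (2-1)^2 + 0.25 = 1.25 for T ~ N(2, 0.25)
```

The same sample set serves both purposes mentioned in the abstract: it characterizes the parameter distribution and simultaneously yields the expected cost needed for optimization.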

Relevance:

30.00%

Publisher:

Abstract:

Percarboxylic acids are commonly used as disinfection and bleaching agents in the textile, paper, and fine chemical industries. All of these applications are based on the oxidative potential of these compounds. Despite the high interest in these chemicals, they are unstable and explosive, which increases the risk in synthesis processes and transportation. Therefore, safety criteria in the production process should be considered. Microreactors represent a technology that efficiently utilizes the safety advantages resulting from small scale. Therefore, microreactor technology was used in the synthesis of peracetic acid and performic acid. These percarboxylic acids were produced at different temperatures, residence times, and catalyst (i.e. sulfuric acid) concentrations. Both synthesis reactions seemed to be rather fast: with performic acid, equilibrium was reached in 4 min at 313 K, and with peracetic acid in 10 min at 343 K. In addition, the experimental results were used to study the kinetics of the formation of performic acid and peracetic acid. The advantages of the microreactors in this study were efficient temperature control, even in a very exothermic reaction, and good mixing due to the short diffusion distances. Therefore, reaction rates were determined with high accuracy. Three different models were considered in order to estimate kinetic parameters such as reaction rate constants and activation energies. Of these three models, the laminar flow model with a radial velocity distribution gave the most precise parameters. However, sulfuric acid creates many drawbacks in this synthesis process. Therefore, a "greener" way of using a heterogeneous catalyst in the synthesis of performic acid in a microreactor was studied. The cation exchange resin Dowex 50 Wx8 showed very high activity and a long lifetime in this reaction. In the presence of this catalyst, the equilibrium was reached in 120 seconds at 313 K, which indicates a rather fast reaction.
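The equilibrium-limited formation kinetics can be illustrated with a simple reversible second-order rate law for HCOOH + H2O2 ⇌ PFA + H2O. The rate constant, equilibrium constant, concentrations, and the explicit-Euler scheme below are illustrative assumptions, not the fitted parameters or the laminar-flow model of the thesis.

```python
def simulate_pfa(c_fa, c_hp, kf=0.02, K_eq=1.0, dt=0.1, t_end=600.0):
    """Explicit-Euler integration of a reversible formation reaction:
    r = kf*(c_FA*c_HP - c_PFA*c_W/K_eq). All parameters are invented
    illustrative values (units: mol/L, s)."""
    c_pfa, c_w = 0.0, 40.0   # assume a water-rich aqueous phase
    t = 0.0
    while t < t_end:
        r = kf * (c_fa * c_hp - c_pfa * c_w / K_eq)
        c_fa -= r * dt
        c_hp -= r * dt
        c_pfa += r * dt
        c_w += r * dt
        t += dt
    return c_pfa

# Equilibrium from (2-x)^2 = x*(40+x) with K_eq = 1: x = 1/11 mol/L
c_eq = simulate_pfa(2.0, 2.0)
```

Because the reverse term contains the large water concentration, the conversion is strongly equilibrium-limited, which is the qualitative behavior reported for the percarboxylic acid syntheses above.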
In addition, the safety advantages of microreactors were investigated in this study. Four different conventional methods were used. Production of peracetic acid was used as a test case, and the safety of one conventional batch process was compared with an on-site continuous microprocess. It was found that the conventional methods for the analysis of process safety might not be reliable and adequate for radically novel technology, such as microreactors. This is understandable because the conventional methods are partly based on experience, which is very limited in connection with totally novel technology. Therefore, one checklist-based method was developed to study the safety of intensified and novel processes at the early stage of process development. The checklist was formulated using the concept of layers of protection for a chemical process. The traditional and three intensified processes of hydrogen peroxide synthesis were selected as test cases. With these real cases, it was shown that several positive and negative effects on safety can be detected in process intensification. The general claim that safety is always improved by process intensification was questioned.

Relevance:

30.00%

Publisher:

Abstract:

Glass is a unique material with a long history. Many glass products are used in our everyday life, often unnoticed. Glass can be found not only in obvious applications such as tableware, windows, and light bulbs, but also in tennis rackets, windmill turbine blades, optical devices, and medical implants. The glasses used at present as implants are inorganic silica-based melt-derived compositions, mainly for hard-tissue repair as bone graft substitutes in dentistry and orthopedics. The degree of glass reactivity desired varies according to the implantation situation, and it is vital that the ion release from any glasses used in medical applications is controlled. Understanding the in vitro dissolution rate of glasses provides a first approximation of their behavior in vivo. Specific studies concerning the dissolution properties of bioactive glasses have been relatively scarce and have mostly concentrated on static conditions. The motivation behind this work was to develop a simple and accurate method for quantifying the in vitro dissolution rate of highly different types of glass compositions of interest for future clinical applications. By combining information from various experimental conditions, better knowledge of glass dissolution and of the suitability of different glasses for different medical applications can be obtained. Thus, two traditional approaches and one novel approach were utilized in this thesis to study glass dissolution. The chemical durability of silicate glasses was tested in water and in TRIS-buffered solution under static and dynamic conditions. The traditional in vitro testing with a TRIS-buffered solution under static conditions works well with bioactive or readily dissolving glasses, and it is easy to follow the ion dissolution reactions. However, in the buffered solution no marked differences between the more durable glasses were observed. The hydrolytic resistance of the glasses was studied using the standard procedure ISO 719.
The relative scale given by the standard failed to provide any relevant information when bioactive glasses were studied. However, the clear differences in the hydrolytic resistance values imply that the method could be used as a rapid test to get an overall idea of the biodegradability of glasses. The standard method, combined with ion concentration and pH measurements, gives a better estimate of the hydrolytic resistance because of the high amount of silicon released from a glass. A sensitive on-line analysis method utilizing an inductively coupled plasma optical emission spectrometer and a flow-through micro-volume pH electrode was developed to study the initial dissolution of biocompatible glasses. This approach was found suitable for compositions within a large range of chemical durability. With this approach, the initial dissolution of all ions could be measured simultaneously and quantitatively, which gave a good overall idea of the initial dissolution rates of the individual ions and of the dissolution mechanism. These types of results on glass dissolution were presented for the first time during the course of writing this thesis. Based on the initial dissolution patterns obtained with the novel approach using TRIS, the experimental glasses could be divided into four distinct categories. The initial dissolution patterns of the glasses correlated well with the anticipated bioactivity. Moreover, the normalized surface-specific mass loss rates correlated well with the different in vivo models and with the actual in vivo data. The results suggest that this type of approach can be used for prescreening the suitability of novel glass compositions for future clinical applications. Furthermore, the results shed light on the possible bioactivity of glasses. An additional goal in this thesis was to gain insight into the phase changes occurring during various heat treatments of glasses with three selected compositions.
Engineering-type T-T-T curves for glasses 1-98 and 13-93 were established. The information gained is essential in manufacturing amorphous porous implants or for the drawing of continuous fibers of the glasses. Although both glasses can be hot worked into amorphous products under carefully controlled conditions, 1-98 showed an order of magnitude greater nucleation and crystal growth rate than 13-93. Thus, 13-93 is better suited than 1-98 for working processes which require long residence times at high temperatures. It was also shown that amorphous and partially crystalline porous implants can be sintered from bioactive glass S53P4. Surface crystallization of S53P4, forming Na2O∙CaO∙2SiO2, was observed to start at 650°C. The secondary crystals of Na2Ca4(PO4)2SiO4, reported for the first time in this thesis, were detected at higher temperatures, from 850°C to 1000°C. The crystal phases formed affected the dissolution behavior of the implants in simulated body fluid. This study opens up new possibilities for using S53P4 to manufacture various structures while tailoring their bioactivity by controlling the proportions of the different phases. The results obtained in this thesis give valuable additional information and tools for designing glasses for future clinical applications. With the knowledge gained, we can identify different dissolution patterns and use this information to improve the tuning of glass compositions. In addition, the novel on-line analysis approach provides an excellent opportunity to further enhance our knowledge of glass behavior in simulated body conditions.
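The surface-specific normalization mentioned above is commonly computed as NL_i = c_i·V/(f_i·SA), which puts dissolution data from samples of different size and composition on a common g/m² scale. The function and the numerical example below are an illustrative sketch of that standard normalization, not the thesis's measured data.

```python
def normalized_loss(c_i, V, f_i, SA):
    """Normalized elemental mass loss NL_i = c_i * V / (f_i * SA) [g/m^2]:
    c_i - concentration of element i in solution [g/m^3]
    V   - solution volume [m^3]
    f_i - mass fraction of element i in the glass [-]
    SA  - glass surface area exposed to the solution [m^2]."""
    return c_i * V / (f_i * SA)

# Invented example: 5 mg/L Si in 50 mL of solution, released from a glass
# containing 25 wt-% Si with 1 cm^2 of exposed surface.
nl_si = normalized_loss(c_i=5e-3 * 1e3, V=50e-6, f_i=0.25, SA=1e-4)
# -> 10 g/m^2 of glass dissolved, expressed per unit surface area
```

Dividing NL_i by the immersion time gives the surface-specific mass loss rate that the abstract correlates with the in vivo data.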

Relevance:

30.00%

Publisher:

Abstract:

Short delivery times give companies a competitive advantage in a rapidly changing industrial environment. The primary objective of this Master's thesis is to find, with the help of the literature, a method suitable for the company's use in systematically shortening lead times. It is also important to verify the suitability of the chosen method for the target company's environment. The second objective of the work is to understand what kind of effort is required to achieve a one-day lead time. A suitable operating model was selected on the basis of the literature review. The method was tested on one production line, and the results and feedback obtained indicate that it appears to be suitable for use in the target company. An action plan was made for the production line to achieve a one-day delivery time during 2013. The scale of the challenge across the whole target company was examined in a separate ideation session. Based on the results of the session, a priority list was drawn up that gives an idea of the requirements for significantly shortening the delivery time. In general, managing demand variability is the greatest challenge, but several alternative solutions for managing it have been identified.

Relevance:

30.00%

Publisher:

Abstract:

A comparison between two competing models of an all-mechanical power transmission system is carried out using the Dymola software as the simulation tool. This tool is compared with MATLAB/Simulink using functionality, user-friendliness, and price as comparison criteria. In this research we assume that the torque is balanceable, and the transmission ratios are calculated. Using kinematic connection sketches of the two transmission models, simulation models are built in the Dymola simulation environment. The models of the transmission systems are modified according to the simulation results to achieve a continuously variable transmission ratio. Simulation results are compared between the two transmission systems. The main features of Dymola and MATLAB/Simulink are compared, and the advantages and disadvantages of the two software packages are analyzed.

Relevance:

30.00%

Publisher:

Abstract:

In today's knowledge-intensive economy, human capital is a source of competitive advantage for organizations. Continuous learning and sharing knowledge within the organization are important for enhancing and utilizing this human capital in order to maximize productivity. A new generation, with different views and expectations of work, is entering working life, bringing its own characteristics to learning and sharing. Work should offer satisfaction so that new-generation employees commit to organizations. At the same time, organizations have to be able to focus on productivity to survive in the competitive market. The objective of this thesis is to construct a theory-based framework of productivity, continuous learning, and job satisfaction, and to further examine this framework and its applications in a global organization operating in the process industry. Suggestions for future actions are presented for this case organization. The research is a qualitative case study, and the empirical material was gathered through personal interviews comprising 15 employee interviews and one supervisor interview. The results showed that more face-to-face interaction between employees is needed for learning, because much of the knowledge of the process is tacit and thus difficult to share in other ways. Offering these sharing possibilities can also have a positive impact on job satisfaction, because they increase the sense of community among employees, which was found to be lacking. New employees need more feedback to improve their learning and confidence. According to the literature, continuous learning and job satisfaction have a relatively strong relationship with productivity. The employees' job descriptions in the case organization have moved towards knowledge work due to the continuous automation and expansion of the production process. This emphasizes the importance of continuous learning and means that productivity can also be seen from a quality perspective.
The normal productivity output in the case organization is stable; by focusing on the quality of work through improved continuous learning and job satisfaction, upsets in production can be handled and prevented more effectively. Continuous learning also increases the free human capital input and its utilization, and this can breed output-increasing innovations that raise productivity in the long term. Job satisfaction can also increase productivity output in the end, because employees work more efficiently instead of doing only the minimum tasks required. Satisfied employees were also found to participate more in learning activities.

Relevance:

30.00%

Publisher:

Abstract:

The theme of this thesis is context-specific independence in graphical models. Considering a system of stochastic variables, it is often the case that the variables are dependent on each other. This can, for instance, be seen by measuring the covariance between a pair of variables. Using graphical models, it is possible to visualize the dependence structure found in a set of stochastic variables. Using ordinary graphical models, such as Markov networks, Bayesian networks, and Gaussian graphical models, the type of dependencies that can be modeled is limited to marginal and conditional (in)dependencies. The models introduced in this thesis enable the graphical representation of context-specific independencies, i.e. conditional independencies that hold only in a subset of the outcome space of the conditioning variables. In the articles included in this thesis, we introduce several types of graphical models that can represent context-specific independencies. Models for both discrete and continuous variables are considered. A wide range of properties are examined for the introduced models, including identifiability, robustness, scoring, and optimization. In one article, a predictive classifier which utilizes context-specific independence models is introduced. This classifier clearly demonstrates the potential benefits of the introduced models. The purpose of the material included in the thesis prior to the articles is to provide the basic theory needed to understand the articles.
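A context-specific independence can be illustrated directly on a conditional probability table: below, Y is independent of X2 in the context X1 = 0 (the two entries coincide) but not in the context X1 = 1. The table values are invented purely for illustration and are not from the thesis's articles.

```python
# P(Y=1 | X1=x1, X2=x2) as a dict keyed by (x1, x2); invented numbers.
cpt = {
    (0, 0): 0.3, (0, 1): 0.3,   # identical -> Y ⊥ X2 in the context X1 = 0
    (1, 0): 0.2, (1, 1): 0.9,   # X2 matters when X1 = 1
}

def csi_holds(cpt, x1, tol=1e-12):
    """Check whether Y is independent of X2 in the context X1 = x1,
    i.e. whether P(Y=1 | x1, X2=0) == P(Y=1 | x1, X2=1)."""
    return abs(cpt[(x1, 0)] - cpt[(x1, 1)]) < tol
```

Ordinary conditional independence would require the equality to hold for every value of X1; context-specific independence, as modeled in the thesis, requires it only for a subset of the conditioning outcomes.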

Relevance:

30.00%

Publisher:

Abstract:

Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, is also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication: natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model, one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. It is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics, and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented, and the main guidelines for building linguistic models for real-life decision support, which form the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of the representation of the meaning of linguistic terms, computations with these representations, and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis.
We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence between the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support, particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background and the necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice, and the necessity of obtaining outputs that are easy to interpret (and available in a very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies: first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes, particularly the integration of peer review into the evaluation of R&D outputs. In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference.
Finally, the last case study is from the humanities: psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide closer insight into the issues of the practical applications considered in the second part of the thesis.
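The AHP case studies above rest on a standard computation: deriving a priority vector from a reciprocal pairwise comparison matrix, typically as its principal eigenvector, with a consistency index guarding the quality of the judgments. The sketch below shows that classical (non-fuzzy) computation; the 3x3 judgment matrix is an invented example, not data from the thesis.

```python
def ahp_priorities(A, iters=200):
    """Principal eigenvector of a reciprocal pairwise comparison matrix
    by power iteration, normalized to sum to 1, plus Saaty's consistency
    index CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    # Estimate lambda_max from A w = lambda_max * w, averaged over components.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return w, ci

# Invented judgments: alternative 1 is 2x better than 2 and 4x better than 3.
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w, ci = ahp_priorities(A)   # perfectly consistent matrix -> CI = 0
```

For large or inconsistent matrices this exact eigenvector step is where the weak consistency condition and the adaptations mentioned in the abstract come into play.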

Relevance:

30.00%

Publisher:

Abstract:

Coronary artery disease is an atherosclerotic disease which leads to narrowing of the coronary arteries, deteriorated myocardial blood flow, and myocardial ischaemia. In acute myocardial infarction, a prolonged period of myocardial ischaemia leads to myocardial necrosis. Necrotic myocardium is replaced with scar tissue. Myocardial infarction results in various changes in cardiac structure and function over time that result in "adverse remodelling". This remodelling may result in a progressive worsening of cardiac function and the development of chronic heart failure. In this thesis, we developed and validated three different large animal models of coronary artery disease, myocardial ischaemia, and infarction for translational studies. In the first study, the coronary artery disease model had both induced diabetes and hypercholesterolemia. In the second study, myocardial ischaemia and infarction were caused by a surgical method, and in the third study by catheterisation. For model characterisation, we used non-invasive positron emission tomography (PET) methods for the measurement of myocardial perfusion, oxidative metabolism, and glucose utilisation. Additionally, cardiac function was measured by echocardiography and computed tomography. To study the metabolic changes that occur during atherosclerosis, a hypercholesterolemic and diabetic model was used with [18F]fluorodeoxyglucose ([18F]FDG) PET imaging. Coronary occlusion models were used to evaluate metabolic and structural changes in the heart and the cardioprotective effects of levosimendan during post-infarction cardiac remodelling. The large animal models were also used in the testing of novel radiopharmaceuticals for myocardial perfusion imaging. In the coronary artery disease model, we observed atherosclerotic lesions that were associated with focally increased [18F]FDG uptake.
In the heart failure models, chronic myocardial infarction led to worsening of systolic function, cardiac remodelling, and decreased efficiency of the cardiac pumping function. Levosimendan therapy reduced post-infarction infarct size and improved cardiac function. The novel 68Ga-labeled radiopharmaceuticals tested in this study were not successful for the determination of myocardial blood flow. In conclusion, diabetes and hypercholesterolemia lead to the development of early-phase atherosclerotic lesions. Coronary artery occlusion produced considerable myocardial ischaemia and later infarction, followed by myocardial remodelling. The experimental models evaluated in these studies will enable further studies concerning disease mechanisms, new radiopharmaceuticals, and interventions in coronary artery disease and heart failure.

Relevance:

30.00%

Publisher:

Abstract:

The objective of the study was to create an operating model for the target company that engages workers, team leaders, and supervisors in the continuous improvement of production and improves production feedback at the team level. The study was limited to a pilot team and to the products routed through the team's workstations. Before the study began, the company already had an electronic initiative system, but its use had declined following organizational restructuring. The theoretical part of the study reviewed the culture and tools of continuous improvement, as well as the content, concepts, and quality control tools of quality management and production metrics. Based on the theory, a continuous improvement operating model was created that identifies and eliminates waste in the process by engaging the pilot team's workers through waste cards. In addition, an operating model was created for reporting and processing production improvement ideas. Production feedback was developed by creating a scorecard for the pilot team and by establishing a daily review practice in the company. Development projects at the supervisory level were also carried out using the models and tools presented in the theory. The result was an operating model that, relative to the number of employees, produces more improvement ideas and processes them more efficiently than the electronic initiative system. Through waste reporting with the waste cards, a total of 23.6 hours of waste time was identified and reported during a seven-week review period. With the team scorecard, the team's workers could follow their own team's performance against target values on a weekly basis. This was reflected, among other things, in an improved team performance level. The development projects improved the quality of the pilot team's operations and products. The daily review practice engaged the team leaders in problem solving and in ensuring production performance.

Relevance:

30.00%

Publisher:

Abstract:

The costs of health care are rising in many countries. In order to provide affordable and effective health care solutions, new technologies and approaches are constantly being developed. In this research, video games are presented as a possible solution to the problem. Video games are fun, and nowadays most people like to spend time playing them. In addition, recent studies have pointed out that video games can have notable health benefits. Health games have already been developed, used in practice, and researched. However, the bulk of health game studies have been concerned with the design or the effectiveness of the games; no actual business studies have been conducted on the subject, even though health games often lack commercial success despite their health benefits. This thesis seeks to fill that gap. The specific aim of this thesis is to develop a conceptual business model framework and apply it empirically in explorative research on medical game business models. In the first stage of this research, a literature review was conducted, and the existing literature was analyzed and synthesized into a conceptual business model framework consisting of six dimensions. The motivation behind the synthesis is the ongoing ambiguity around the business model concept. In the second stage, 22 semi-structured interviews were conducted with different professionals within the value network for medical games. The business model framework was present in all stages of the empirical research: in the data collection stage, the framework acted as a guiding instrument focusing the interview process; the interviews were then coded and analyzed using the framework as a structure; and the results were reported following the same structure. In the results, the interviewees highlighted several important considerations and issues for medical games concerning the six dimensions of the business model framework.
Based on the key findings of this research, several key components of business models for medical games were identified and illustrated in a single figure. Furthermore, five notable challenges for business models for medical games were presented, and possible solutions to these challenges were postulated. Theoretically, these findings provide pioneering information on the previously unexplored subject of business models for medical games. Moreover, the conceptual business model framework and its use in the novel context of medical games contribute to the business model literature. Regarding practice, this thesis further emphasizes that medical games can offer notable benefits to several stakeholder groups, and it offers advice to companies seeking to commercialize these games.

Relevância:

30.00%

Publicador:

Resumo:

Solvent extraction of calcium and magnesium impurities from a lithium-rich brine (Ca ~ 2,000 ppm, Mg ~ 50 ppm, Li ~ 30,000 ppm) was investigated using a continuous counter-current solvent extraction mixer-settler set-up. The literature review covers the resources, demand, and production methods of Li, followed by the basics of solvent extraction. The experimental section includes batch experiments investigating the pH isotherms of three extractants: D2EHPA, Versatic 10, and LIX 984, at concentrations of 0.52, 0.53, and 0.50 M in kerosene, respectively. Based on the pH isotherms, LIX 984 showed no affinity for the extraction of Mg and Ca at pH ≤ 8, while D2EHPA and Versatic 10 were effective in extracting Ca and Mg. Based on the constructed pH isotherms, the loading isotherms of D2EHPA (at pH 3.5 and 3.9) and Versatic 10 (at pH 7 and 8) were further investigated. Furthermore, based on the McCabe-Thiele method, two extraction stages and one stripping stage (using HCl at a concentration of 2 M for Versatic 10 and 3 M for D2EHPA) were used in the continuous runs. The merits of Versatic 10 in comparison to D2EHPA are its higher selectivity for Ca and Mg, faster phase disengagement, lack of detrimental viscosity change from the sheer amount of metal extracted, and lower acidity in stripping. On the other hand, D2EHPA has lower aqueous solubility and is capable of removing Mg and Ca simultaneously even at higher Ca loadings (A/O > 1 in continuous runs). In general, a shorter residence time (~2 min), lower temperature (~23 °C), lower pH values (6.5-7.0 for Versatic 10 and 3.5-3.7 for D2EHPA), and a moderately low A/O ratio (< 1:1) remove 100% of the Ca and nearly 100% of the Mg while keeping Li loss below 4%, much lower than in conventional precipitation, where 20% of the Li is lost.
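The McCabe-Thiele stage estimate mentioned above can be sketched numerically. The sketch below assumes a linear equilibrium isotherm y* = m·x and a barren incoming solvent, which is a simplification: the thesis works with measured loading isotherms, and the slope m and raffinate target used here are illustrative assumptions, not values from the work.

```python
def mccabe_thiele_stages(feed, raffinate, a_over_o, m, max_stages=25):
    """Step off counter-current extraction stages, McCabe-Thiele style.

    feed       : solute concentration in the aqueous feed
    raffinate  : target solute concentration in the exiting aqueous phase
    a_over_o   : aqueous-to-organic flow ratio A/O
    m          : slope of the assumed linear equilibrium isotherm y* = m * x

    With barren incoming solvent, the operating line from a steady-state
    mass balance is y = (A/O) * (x - raffinate). Starting from the
    raffinate end, each stage alternates between the equilibrium line
    and the operating line until the feed concentration is reached.
    """
    x = raffinate
    stages = 0
    while x < feed and stages < max_stages:
        stages += 1
        y = m * x                      # equilibrium line: organic leaving the stage
        x = raffinate + y / a_over_o   # operating line: aqueous entering the stage
    return stages
```

With an extraction factor E = m/(A/O) well above one, the required stage count grows only logarithmically with the degree of removal, which is consistent with achieving near-complete Ca and Mg removal in just two counter-current stages.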

Relevância:

30.00%

Publicador:

Resumo:

The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets in which particles represent market participants. A discontinuity is included in the model through an interaction kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modeling some commodity markets, which is confirmed by the model's ability to reproduce price spikes when their effects persist over a sufficiently long period of time.
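An N-trader particle system with a Heaviside interaction kernel can be illustrated with a short Euler-Maruyama simulation. This is a generic sketch of such a rank-based system, not the authors' exact scheme, and the parameter values (N, step size, volatility) are assumptions for illustration. Each particle's drift equals the fraction of particles below it, i.e. its empirical-CDF value; it is this mean-field coupling that yields a Burgers-type limit for large N.

```python
import numpy as np

def simulate_particles(n=500, steps=200, dt=0.01, sigma=0.2, seed=0):
    """Euler-Maruyama simulation of the interacting particle system

        dX_i = (1/N) * sum_j H(X_i - X_j) dt + sigma dW_i,

    where H is the Heaviside function (taken as 0 at 0). The drift of
    particle i is therefore the fraction of particles strictly below it,
    i.e. its empirical-CDF rank. In the large-N limit, the empirical CDF
    of the particles solves a viscous Burgers-type PDE.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n)  # initial "prices" of the n traders
    for _ in range(steps):
        # drift_i = (1/n) * #{j : X_j < X_i}, values in [0, 1)
        drift = (x[:, None] > x[None, :]).mean(axis=1)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    return x
```

Since the drift of particle i is simply rank(i)/N, the O(N²) pairwise comparison above can be replaced by an O(N log N) argsort when N is large.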