30 results for Andersen and Newman model
Abstract:
Calcium oxide looping is a carbon dioxide sequestration technique that utilizes the partially reversible reaction between limestone and carbon dioxide in two interconnected fluidised beds, a carbonator and a calciner. Flue gases from a combustor are fed into the carbonator, where calcium oxide reacts with the carbon dioxide in the gases at a temperature of 650 ºC. The calcium oxide is transformed into calcium carbonate, which is circulated into the regenerative calciner, where the calcium carbonate is converted back into calcium oxide and a stream of pure carbon dioxide at a higher temperature of 950 ºC. Calcium oxide looping has been shown to have a low impact on the overall process efficiency and could be easily retrofitted into existing power plants. This master's thesis was done within the EU-funded project CaOling, as part of the Lappeenranta University of Technology deliverable, reactor modelling and scale-up tools. The thesis concentrates on creating the first model frame and finding the physically relevant phenomena governing the process.
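For reference, the looping chemistry described above can be written out explicitly; the reaction enthalpies below are standard textbook values, not figures taken from the thesis itself:

```latex
% Carbonator (~650 ºC): exothermic CO2 capture
\mathrm{CaO(s) + CO_2(g) \longrightarrow CaCO_3(s)},
\qquad \Delta H^{\circ}_{298} \approx -178\ \mathrm{kJ\,mol^{-1}}
% Calciner (~950 ºC): endothermic sorbent regeneration
\mathrm{CaCO_3(s) \longrightarrow CaO(s) + CO_2(g)},
\qquad \Delta H^{\circ}_{298} \approx +178\ \mathrm{kJ\,mol^{-1}}
```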
Abstract:
The aim of this study is to analyse the content of the interdisciplinary conversations in Göttingen between 1949 and 1961. The task is to compare models for describing reality presented by quantum physicists and theologians. Descriptions of reality in different disciplines are conditioned by the development of the concept of reality in philosophy, physics and theology. Our basic problem is stated in the question: How is it possible for the intramental image to match the external object? Cartesian knowledge presupposes clear and distinct ideas in the mind prior to observation, resulting in a true correspondence between the observed object and the cogitative observing subject. The Kantian synthesis between rationalism and empiricism emphasises the extended character of representation. The human mind is not a passive receiver of external information but actively construes intramental representations of external reality in the epistemological process. Heidegger's aim was to reach a more primordial mode of understanding reality than is possible in the Cartesian subject-object distinction. In Heidegger's philosophy, ontology as being-in-the-world is prior to knowledge concerning being. Ontology can be grasped only in the totality of being (Dasein), not only as an object of reflection and perception. According to Bohr, quantum mechanics introduces an irreducible loss in representation, which, classically understood, is a deficiency in knowledge. The conflicting aspects (particle and wave pictures) in our comprehension of physical reality cannot be completely accommodated into an entire and coherent model of reality. What Bohr rejects is not realism, but the classical Einsteinian version of it. By the use of complementary descriptions, Bohr tries to save a fundamentally realistic position. The fundamental question in Barthian theology is the problem of God as an object of theological discourse. Dialectics is Barth's way to express knowledge of God while avoiding a speculative theology and a human-centred religious self-consciousness. In Barthian theology, the human capacity for knowledge, independently of revelation, is insufficient to comprehend the being of God. Our knowledge of God is real knowledge in revelation, and our words are made to correspond with the divine reality in an analogy of faith. The point of the Bultmannian demythologising programme was to claim the real existence of God beyond our faculties. We cannot simply define God as a human ideal of existence or a focus of values. The theological programme of Bultmann emphasised the notion that we can talk meaningfully of God only insofar as we have existential experience of his intervention. Common to all these twentieth-century philosophical, physical and theological positions is a form of anti-Cartesianism. Consequently, in regard to their epistemology, they can be labelled antirealist. This common insight also made it possible to find a common meeting point between the different disciplines. In this study, the different standpoints from all three areas and the conversations in Göttingen are analysed in the framework of realism/antirealism. One of the first tasks in the Göttingen conversations was to analyse the nature of the likeness between the complementary structures in quantum physics introduced by Niels Bohr and the dialectical forms in the Barthian doctrine of God.
The reaction against epistemological Cartesianism, metaphysics of substance and deterministic description of reality was the common point of departure for theologians and physicists in the Göttingen discussions. In his complementarity, Bohr anticipated the crossing of traditional epistemic boundaries and the generalisation of epistemological strategies by introducing interpretative procedures across various disciplines.
Abstract:
The purpose of this paper is to gather enough evidence to speculate on the future of Nokia, RIM and Apple. The thesis goes over the history, current events and business model of each company. This paper covers the differences between the companies as well as their co-operation and rivalry, such as patent infringement cases. The study is limited to smartphones and their future. The result of this study is that Apple will continue its steady increase in market share, while Nokia will first decline and then rise again after the launch of the Windows Phone. RIM's results have not been as good as in past years, and it has lost market share. The decrease in its share price may lead to an acquisition by a company interested in RIM's technology.
Abstract:
The purpose of this research is to draw up a clear construction of an anticipatory communicative decision-making process and a successful implementation of a Bayesian application that can be used as an anticipatory communicative decision-making support system. This study is a decision-oriented and constructive research project, and it includes examples of simulated situations. As a basis for further methodological discussion about different approaches to management research, a decision-oriented approach is used in this research; it is based on mathematics and logic, and it is intended to develop problem-solving methods. The approach is theoretical and characteristic of normative management science research. The approach of this study is also constructive. An essential part of the constructive approach is to tie the problem to its solution with theoretical knowledge. Firstly, the basic definitions and behaviours of anticipatory management and managerial communication are provided. These descriptions include discussions of the research environment and the management processes formed. These issues define and explain the background to the further research. Secondly, the discussion proceeds to managerial communication and anticipatory decision-making based on preparation, problem solution, and solution search, which are also related to risk management analysis. After that, a solution for the decision-making support application is formed, using four different Bayesian methods: the Bayesian network, the influence diagram, the qualitative probabilistic network, and the time-critical dynamic network. The purpose of the discussion is not to survey different theories but to explain the theories that are being implemented. Finally, an application of Bayesian networks to the research problem is presented, and the usefulness of the prepared model in examining the problem is demonstrated together with the results of the research. The theoretical contribution includes definitions and a model of anticipatory decision-making. The main theoretical contribution of this study has been to develop a process for anticipatory decision-making that includes management with communication, problem-solving, and the improvement of knowledge. The practical contribution includes a Bayesian decision support model, which is based on Bayesian influence diagrams. The main contributions of this research are two developed processes: one for anticipatory decision-making, and the other for producing a model of a Bayesian network for anticipatory decision-making. In summary, this research contributes to decision-making support by being one of the few publicly available academic descriptions of an anticipatory decision support system, by presenting a Bayesian model grounded in firm theoretical discussion, by publishing algorithms suitable for decision-making support, and by defining the idea of anticipatory decision-making for a parallel version. Finally, based on the results of the research, an analysis of anticipatory management for planned decision-making is presented, grounded in observation of the environment, analysis of weak signals, and alternatives for creative problem solving and communication.
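As context for the methods named above, the following minimal sketch shows the kind of computation a Bayesian network performs: propagating a weak environmental signal through conditional probabilities to support an anticipatory decision. The three-node structure and all probabilities are hypothetical illustrations, not the model developed in the research.

```python
# Minimal Bayesian-network inference by enumeration over the hypothetical
# chain Signal -> Problem -> Success. The thesis also uses influence
# diagrams and dynamic networks, which this sketch does not cover.

# P(Signal): a weak signal from the environment is observed or not.
P_signal = {True: 0.2, False: 0.8}

# P(Problem | Signal): a problem is more likely when a signal is present.
P_problem = {True: {True: 0.7, False: 0.3},
             False: {True: 0.1, False: 0.9}}   # keyed by [signal][problem]

# P(Success | Problem): anticipatory action succeeds more often when the
# problem has been recognised early.
P_success = {True: {True: 0.6, False: 0.4},
             False: {True: 0.9, False: 0.1}}   # keyed by [problem][success]

def p_success_given_signal(signal: bool) -> float:
    """P(Success=True | Signal=signal), marginalising over Problem."""
    total = 0.0
    for problem in (True, False):
        total += P_problem[signal][problem] * P_success[problem][True]
    return total

if __name__ == "__main__":
    for signal in (True, False):
        print(f"P(success | signal={signal}) = {p_success_given_signal(signal):.3f}")
```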
Abstract:
Prostate-specific antigen (PSA) is a marker that is commonly used in estimating prostate cancer risk. Prostate cancer is usually a slowly progressing disease, which might not cause any symptoms whatsoever. Nevertheless, some cancers are aggressive and need to be treated before they become life-threatening. However, the blood PSA concentration may also rise in benign prostate diseases, and using a single total PSA (tPSA) measurement to guide the decision on further examinations leads to many unnecessary biopsies, over-detection, and overtreatment of indolent cancers that would not require treatment. Therefore, there is a need for markers that would better separate cancer from benign disorders and would also predict cancer aggressiveness. The aim of this study was to evaluate whether intact and nicked forms of free PSA (fPSA-I and fPSA-N) or human kallikrein-related peptidase 2 (hK2) could serve as new tools in estimating prostate cancer risk. First, the immunoassays for fPSA-I and for free and total hK2 were optimized so that they would be less prone to assay interference caused by interfering factors present in some blood samples. The optimized assays were shown to work well and were used to study the marker concentrations in the clinical sample panels. The marker levels were measured from preoperative blood samples of prostate cancer patients scheduled for radical prostatectomy, and the association of the markers with cancer stage and grade was studied. It was found that, among all tested markers and their combinations, especially the ratio of fPSA-N to tPSA and the ratio of free PSA (fPSA) to tPSA were associated with both cancer stage and grade. They might be useful in predicting cancer aggressiveness, but further follow-up studies are necessary to fully evaluate the significance of the markers in this clinical setting. The markers tPSA, fPSA, fPSA-I and hK2 were combined in a statistical model which had previously been shown to reduce unnecessary biopsies when applied to large screening cohorts of men with elevated tPSA. The discriminative accuracy of this model was compared to models based on established clinical predictors in reference to biopsy outcome. The kallikrein model and the calculated fPSA-N concentrations (fPSA minus fPSA-I) correlated with prostate volume, and the model predicted prostate cancer in biopsy equally well as the clinical models. Hence, the measurement of kallikreins in a blood sample could be used to replace the volume measurement, which is time-consuming, needs instrumentation and skilled personnel, and is an uncomfortable procedure. Overall, the model could simplify the estimation of prostate cancer risk. Finally, as fPSA-N seems to be an interesting new marker, a direct immunoassay for measuring fPSA-N concentrations was developed. The analytical performance was acceptable, but the rather complicated assay protocol needs to be improved before it can be used for measuring large sample panels.
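The derived quantities mentioned above are simple to state; a brief sketch, with hypothetical concentrations, of how fPSA-N is calculated by difference and combined into the ratios discussed:

```python
# Derived kallikrein markers as described above: fPSA-N is calculated as
# fPSA minus fPSA-I, and the ratios to tPSA are used as predictors.
# The example concentrations (ng/mL) are hypothetical.

def derived_markers(tPSA: float, fPSA: float, fPSA_I: float) -> dict:
    fPSA_N = fPSA - fPSA_I          # nicked free PSA, by difference
    return {
        "fPSA-N": fPSA_N,
        "fPSA-N/tPSA": fPSA_N / tPSA,
        "fPSA/tPSA": fPSA / tPSA,   # the classic free-to-total ratio
    }

print(derived_markers(tPSA=6.0, fPSA=1.2, fPSA_I=0.8))
```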
Abstract:
In Finland, electricity distribution is a regulated monopoly. The Energy Market Authority (Energiamarkkinavirasto) provides the guidelines and the model for the companies' earning potential. Roughly speaking, the revenue model is the product of the invested capital and the weighted average cost of capital. The weighted average cost of capital consists of several parameters, such as the beta and the debt risk premium. The level of these parameters and the timing of their determination are based on subjective views, although an objective method for determining the parameters should be used. The current beta and debt risk premium are based on statements by the Energy Market Authority and by experts. The topic has been studied very little, mainly because there are no listed pure-play distribution network companies. The current level of the beta is 0.529 and the debt risk premium is 1.0%. This master's thesis determines the current levels of the beta and the debt risk premium on a market basis. The determination model presented in this work is based purely on market data, and no subjective opinions are used in its application. Using market-based data, the beta should be at the level of 0.525 and the debt risk premium at the level of 1.34%. If adopted, these figures would have a direct, positive effect on the allowed return of distribution network companies in Finland.
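As an illustration of the revenue mechanism described above, here is a minimal sketch combining the CAPM cost of equity with the debt premium in a standard WACC formula. Only the beta (0.525) and debt risk premium (1.34%) come from the abstract; the risk-free rate, market risk premium and capital structure are hypothetical placeholders, not the regulator's actual parameters.

```python
# Minimal WACC sketch under standard CAPM assumptions.

def capm_cost_of_equity(risk_free: float, beta: float, market_premium: float) -> float:
    return risk_free + beta * market_premium

def wacc(cost_equity: float, cost_debt: float, equity_weight: float) -> float:
    return equity_weight * cost_equity + (1 - equity_weight) * cost_debt

r_f = 0.03                 # hypothetical risk-free rate
r_e = capm_cost_of_equity(r_f, beta=0.525, market_premium=0.05)
r_d = r_f + 0.0134         # risk-free rate plus the 1.34 % debt premium
print(f"WACC = {wacc(r_e, r_d, equity_weight=0.4):.4f}")
# Allowed return ~ invested capital * WACC, as the abstract describes.
```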
Abstract:
In this thesis, a model called CFB3D is validated for oxygen combustion in a circulating fluidized bed boiler. The first part of the work consists of a literature review in which circulating fluidized bed and oxygen combustion technologies are studied. In addition, the modeling of circulating fluidized bed furnaces is discussed, and the currently available industrial-scale three-dimensional furnace models are presented. The main features of the CFB3D model are presented, along with the theories and equations related to the model parameters used in this work. The second part of this work consists of the actual research and modeling work, including measurements, model setup, and modeling results. The objective of this thesis is to study how well the CFB3D model works with oxygen combustion, compared to air combustion, in a circulating fluidized bed boiler, and which model parameters need to be adjusted when changing from air to oxygen combustion. The study is performed by modeling two air combustion cases and two oxygen combustion cases with comparable boiler loads. The cases were measured at the Ciuden 30 MWth Flexi-Burn demonstration plant in April 2012. The modeled furnace temperatures match the measurements equally well in the oxygen and air combustion cases, but the modeled gas concentrations deviate from the measurements clearly more in the oxygen combustion cases. However, the same model parameters are optimal for both air and oxygen combustion cases. When the boiler load is changed, some combustion and heat transfer related model parameters need to be adjusted. To improve the accuracy of the modeling results, a better flow dynamics model should be developed in CFB3D. Additionally, more measurements are needed from the lower furnace to find the best model parameters for each case. The validation work needs to be continued in order to improve the modeling results and model predictability.
Abstract:
This doctoral dissertation investigates the adult education policy of the European Union (EU) in the framework of the Lisbon agenda 2000–2010, with a particular focus on the changes of policy orientation that occurred during this reference decade. The year 2006 can in fact be considered a turning point for EU policy-making in the adult learning sector: a radical shift from a wide-ranging and comprehensive conception of educating adults towards a vocationally oriented understanding of this field and policy area was observed, in particular in the second half of the so-called 'Lisbon decade'. In this light, one of the principal objectives of the mainstream policy set by the Lisbon Strategy, that of fostering all forms of participation of adults in lifelong learning paths, appears to have muted its political background and vision in a very short period of time, reflecting an underlying polarisation and progressive transformation of European policy orientations. Hence, by means of content analysis and process tracing, it is shown that the target of EU adult education policy has, in this framework, shifted from citizens to workers, and that the competence development model, borrowed from the corporate sector, has been established as the reference for the new policy road maps. This study draws on the theory of governance architectures and applies a post-ontological perspective to discuss whether the above trends are intrinsically due to the nature of the Lisbon Strategy, which encompasses education policies, and to what extent supranational actors and phenomena such as globalisation influence European governance and decision-making. Moreover, it is shown that the way in which the EU is shaping the upgrading of skills and competences of adult learners is modeled around the needs of the 'knowledge economy', thus according a great deal of importance to the 'new skills for new jobs' and perhaps not enough to life skills in their broader sense, which include, for example, social and civic competences: these are often promoted but rarely implemented in depth in EU policy documents. In this framework, it is conveyed how different EU policy areas are intertwined and interrelated with global phenomena, and it is emphasised how far the building of EU education systems should play a crucial role in the formation of critical thinking, civic competences and skills for a sustainable democratic citizenship, on which a truly cohesive and inclusive society fundamentally depends; a model of environmental and cosmopolitan adult education is proposed in order to address the challenges of the new millennium. In conclusion, an appraisal of the EU's public policy, along with some personal thoughts on how progress might be pursued and actualised, is outlined.
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three sections: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part of this dissertation, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of this dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the treatment of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to model error covariance matrix parameters.
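For readers unfamiliar with the benchmark, here is a minimal sketch of the Lorenz 95 system in its standard unparameterized form; the dissertation works with a parameterized variant, which this sketch does not reproduce, and the integration settings below are illustrative.

```python
import numpy as np

# Lorenz 95: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F on a cyclic
# ring of N variables; N = 40 and F = 8 are the customary choices.

def lorenz95(x: np.ndarray, forcing: float = 8.0) -> np.ndarray:
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x: np.ndarray, dt: float) -> np.ndarray:
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz95(x)
    k2 = lorenz95(x + 0.5 * dt * k1)
    k3 = lorenz95(x + 0.5 * dt * k2)
    k4 = lorenz95(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)
x[0] += 0.01                      # a small perturbation triggers chaos
for _ in range(1000):             # integrate for 5 model time units
    x = rk4_step(x, dt=0.005)
print(x[:5])
```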
Abstract:
The development of carbon capture and storage (CCS) has raised interest towards novel fluidised bed (FB) energy applications. In these applications, limestone can be utilized for SO2 and/or CO2 capture. The conditions in the new applications differ from the traditional atmospheric and pressurised circulating fluidised bed (CFB) combustion conditions in which limestone is successfully used for SO2 capture. In this work, a detailed physical single-particle model for limestone, with a description of the mass and energy transfer inside the particle, was developed. The novelty of this model was to take into account the simultaneous reactions, changing conditions, and the effect of advection. In particular, the capability to study the cyclic behaviour of limestone on both sides of the calcination-carbonation equilibrium curve is important in the novel conditions. The significance of including advection or assuming diffusion control was studied in calcination; especially, the effect of advection on the calcination reaction in the novel combustion atmosphere was shown. The model was tested against experimental data: sulphur capture was studied in a laboratory reactor under different fluidised bed conditions. Different conversion levels and sulphation patterns were examined in different atmospheres for one limestone type. The conversion curves were well predicted with the model, and the mechanisms leading to the conversion patterns were explained with the model simulations. In this work, it was also evaluated whether the transient environment has an effect on the limestone behaviour compared to averaged conditions, and in which conditions the effect is largest. The difference between the averaged and transient conditions was notable only in conditions close to the calcination-carbonation equilibrium curve. The results of this study suggest that the development of a simplified particle model requires a proper understanding of the physical and chemical processes taking place in the particle during the reactions. The results of the study will be required when analysing complex limestone reaction phenomena or when developing the description of limestone behaviour in comprehensive 3D process models. In order to transfer the experimental observations to furnace conditions, the relevant mechanisms that take place need to be understood before the important ones can be selected for a 3D process model. This study revealed the sulphur capture behaviour under transient oxy-fuel conditions, which is important when the oxy-fuel CFB process and process model are developed.
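For orientation, the calcination-carbonation equilibrium curve mentioned above can be sketched with a commonly cited correlation for the equilibrium CO2 partial pressure over CaO/CaCO3 (often attributed to Baker, 1962); the constants below are indicative literature values, not those used in this work.

```python
import math

# Equilibrium CO2 partial pressure over CaO/CaCO3:
# p_eq [atm] ~ 4.137e7 * exp(-20474 / T). Above this pressure the particle
# carbonates; below it, it calcines.

def p_co2_equilibrium_atm(temperature_k: float) -> float:
    return 4.137e7 * math.exp(-20474.0 / temperature_k)

for t_c in (650, 850, 950):
    p_eq = p_co2_equilibrium_atm(t_c + 273.15)
    print(f"{t_c} °C: p_eq(CO2) ≈ {p_eq:.3f} atm")
# At ~650 °C the equilibrium pressure is far below flue-gas CO2 levels
# (carbonation is favoured); at ~950 °C it exceeds 1 atm (calcination).
```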
Abstract:
The objective of this Master's thesis is to develop a model that estimates net working capital (NWC) monthly over a one-year period. The study is conducted as constructive research using a case study. The estimation model is designed for the needs of one case company, which operates in project business. The net working capital components should be linked together by an automatic model and estimated individually, including advanced components of NWC such as percentage-of-completion (POC) receivables. The net working capital estimation model of this study contains three parts: an output template, an input template and a calculation model. The output template gets estimate values automatically from the input template and the calculation model. Estimate values of the more stable NWC components are entered manually into the input template. The calculation model gets estimate values for the major components automatically from the company's systems, using historical data and existing plans. A precondition for the functionality of the estimation calculation is that sales are estimated over a one-year period, because sales are linked to all NWC components.
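A minimal sketch of the sales-linked estimation idea described above, in which each NWC component is projected from the monthly sales estimate using a historical ratio; all component names, ratios and sales figures are hypothetical, not the case company's actual model.

```python
# Each NWC component is projected as a (hypothetical) historical share of
# the monthly sales estimate; liabilities carry negative ratios.

MONTHLY_SALES = [1.2, 1.0, 1.5, 1.1, 0.9, 1.4,
                 1.3, 1.0, 1.6, 1.2, 1.1, 1.5]   # EUR million, estimated

RATIOS = {
    "receivables":      0.25,
    "poc_receivables":  0.15,  # percentage-of-completion receivables
    "inventories":      0.10,
    "payables":        -0.20,  # liabilities reduce NWC
    "advances":        -0.05,
}

def nwc_estimate(sales: float) -> float:
    return sales * sum(RATIOS.values())

for month, sales in enumerate(MONTHLY_SALES, start=1):
    print(f"month {month:2d}: NWC ≈ {nwc_estimate(sales):.2f} MEUR")
```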
Abstract:
Coronary artery disease is an atherosclerotic disease which leads to narrowing of the coronary arteries, deteriorated myocardial blood flow and myocardial ischaemia. In acute myocardial infarction, a prolonged period of myocardial ischaemia leads to myocardial necrosis, and the necrotic myocardium is replaced with scar tissue. Myocardial infarction results in various changes in cardiac structure and function over time, a process known as "adverse remodelling". This remodelling may result in a progressive worsening of cardiac function and the development of chronic heart failure. In this thesis, we developed and validated three different large animal models of coronary artery disease, myocardial ischaemia and infarction for translational studies. In the first study, the coronary artery disease model had both induced diabetes and hypercholesterolemia. In the second study, myocardial ischaemia and infarction were caused by a surgical method, and in the third study by catheterisation. For model characterisation, we used non-invasive positron emission tomography (PET) methods for the measurement of myocardial perfusion, oxidative metabolism and glucose utilisation. Additionally, cardiac function was measured by echocardiography and computed tomography. To study the metabolic changes that occur during atherosclerosis, a hypercholesterolemic and diabetic model was used with [18F]fluorodeoxyglucose ([18F]FDG) PET imaging. Coronary occlusion models were used to evaluate metabolic and structural changes in the heart and the cardioprotective effects of levosimendan during post-infarction cardiac remodelling. The large animal models were also used in testing novel radiopharmaceuticals for myocardial perfusion imaging. In the coronary artery disease model, we observed atherosclerotic lesions that were associated with focally increased [18F]FDG uptake. In the heart failure models, chronic myocardial infarction led to worsening of systolic function, cardiac remodelling and decreased efficiency of cardiac pumping function. Levosimendan therapy reduced post-infarction myocardial infarct size and improved cardiac function. The novel 68Ga-labeled radiopharmaceuticals tested in this study were not successful for the determination of myocardial blood flow. In conclusion, diabetes and hypercholesterolemia lead to the development of early-phase atherosclerotic lesions. Coronary artery occlusion produced considerable myocardial ischaemia and later infarction, followed by myocardial remodelling. The experimental models evaluated in these studies will enable further studies concerning disease mechanisms, new radiopharmaceuticals and interventions in coronary artery disease and heart failure.
Abstract:
Water geochemistry is a very important tool for studying the water quality in a given area. Geology and climate are the major natural factors controlling the chemistry of most natural waters, while anthropogenic impacts are a secondary source of contamination. This study presents the first integrative approach to the geochemistry and water quality of surface waters and Lake Qarun in the Fayoum catchment, Egypt; geochemical modeling of Lake Qarun is also presented for the first time. The Nile River is the main source of water to the Fayoum watershed. To investigate the quality and geochemistry of this water, water samples from irrigation canals, drains and Lake Qarun were collected during the period 2010–2013 from the whole Fayoum drainage basin to address the major processes and factors governing the evolution of water chemistry in the investigated area. About 34 physicochemical quality parameters, including major ions, oxygen isotopes, trace elements, nutrients and microbiological parameters, were investigated in the water samples. Multivariate statistical analysis was used to interpret the interrelationships between the different studied parameters. Geochemical modeling of Lake Qarun was carried out using Hardie and Eugster's evolutionary model and a model simulated with the PHREEQC software. The crystallization sequence during evaporation of the Lake Qarun brine was also studied using a Jänecke phase diagram involving the system Na–K–Mg–Cl–SO4–H2O. The results show that the chemistry of surface water in the Fayoum catchment evolves from Ca–Mg–HCO3 at the headwaters to Ca–Mg–Cl–SO4 and eventually to Na–Cl downstream and at Lake Qarun. The main processes behind the high levels of Na, SO4 and Cl in downstream waters and in Lake Qarun are dissolution of evaporites from Fayoum soils followed by evapoconcentration. This was confirmed by binary plots between the different ions, a Piper plot, a Gibbs plot and δ18O results. The modeled data proved that the Lake Qarun brine evolves from drainage waters via an evaporation-crystallization process: through the precipitation of calcite and gypsum, the solution should reach the final composition Na–Mg–SO4–Cl. As simulated by PHREEQC, further evaporation of the lake brine can drive halite to precipitate in the final stages of evaporation. Significantly, the crystallization sequence during evaporation of the lake brine at the concentration ponds of the Egyptian Salts and Minerals Company (EMISAL) reflected the findings from both Hardie and Eugster's evolutionary model and the PHREEQC-simulated model. After crystallization of halite at the EMISAL ponds, the crystallization sequence during evaporation of the residual brine (bittern) was investigated using a Jänecke phase diagram at 35 °C. This diagram was more useful than PHREEQC for predicting the evaporation path, especially in the case of this highly concentrated brine. The predicted crystallization path using the Jänecke phase diagram at 35 °C showed that halite, hexahydrite, kainite and kieserite should appear during bittern evaporation, yet the mineral salts that actually crystallized were only halite and hexahydrite. The absence of kainite was due to its metastability, while the absence of kieserite was due to unfavourable relative humidity. The presence of a specific MgSO4·nH2O phase in ancient evaporite deposits can be used as a paleoclimatic indicator.
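The evaporation-crystallization logic that PHREEQC automates can be illustrated with a simple saturation-index check: a mineral is predicted to precipitate once SI = log10(IAP/Ksp) reaches zero. The sketch below uses indicative 25 °C log K values of the kind found in the PHREEQC database and a hypothetical brine; unlike a real simulator, it neither computes activity coefficients nor removes precipitated mass from solution.

```python
import math

# Indicative 25 °C solubility products (log K), PHREEQC-database style.
LOG_KSP = {"calcite": -8.48, "gypsum": -4.58, "halite": 1.57}

def saturation_index(mineral: str, ion_product: float) -> float:
    """SI = log10(IAP) - log10(Ksp); SI >= 0 means supersaturated."""
    return math.log10(ion_product) - LOG_KSP[mineral]

# Hypothetical brine (molal); evapoconcentration scales all concentrations.
brine = {"Ca": 1e-3, "CO3": 1e-6, "SO4": 0.01, "Na": 0.5, "Cl": 0.5}

for factor in (1, 5, 20):
    c = {ion: conc * factor for ion, conc in brine.items()}
    si = {
        "calcite": saturation_index("calcite", c["Ca"] * c["CO3"]),
        "gypsum":  saturation_index("gypsum",  c["Ca"] * c["SO4"]),
        "halite":  saturation_index("halite",  c["Na"] * c["Cl"]),
    }
    print(f"x{factor:2d}:", {m: round(v, 2) for m, v in si.items()})
# Output reproduces the sequence described above: calcite and gypsum
# saturate at modest concentration factors, halite only near the end.
```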
Evaluation of surface water quality for agricultural purposes shows that some irrigation waters and all drainage waters have high salinities and therefore cannot be used for irrigation. Waters from irrigation canals used as a drinking water supply show elevated concentrations of Al and suffer from high levels of total coliform (TC), fecal coliform (FC) and fecal streptococcus (FS). These waters cannot be used for drinking or agricultural purposes without treatment because of their high health risk. It is therefore crucial that environmental protection agencies and the media increase public awareness of this issue, especially in rural areas.
Abstract:
Climate change is one of the biggest challenges faced by this generation. Despite being the single most important environmental challenge facing the planet, and despite over two decades of international climate negotiations, global greenhouse gas (GHG) emissions continue to rise. By the middle of this century, GHG emissions must be reduced by as much as 40-70% if dangerous climate change is to be avoided. Under the Kyoto Protocol, no quantitative emission limitation and reduction commitments were placed on the developing countries. For the planning of future commitment periods and the possible participation of developing countries, information on the functioning of energy systems, the development of CO2 emissions in different sectors, energy use and technological development in developing countries is essential. In addition to per capita emissions, the efficiency of the energy system in relation to GHG emissions is crucial for decisions on future long-term burden sharing between countries. A country's future CO2 emissions can be estimated from its projected CO2 intensity and its projected GDP growth. The changes in CO2 intensity depend on several factors, but generally developed countries' intensity has increased during the industrialization phase and decreased as their economies shift towards systems dominated by the service sector. The level of CO2 intensity depends to a large extent on the production structure and the energy sources that are used. Currently, one of the most urgent issues regarding global climate change is to decide the future of the Kyoto Protocol. Negotiations on this topic have already been initiated, with the aim of being finalised by 2015. This thesis provides insights into the various approaches that can be used to characterise the concept of comparable efforts for developing countries in a future international climate agreement. The thesis examines the post-Kyoto burden sharing questions for developing countries using the contraction and convergence model, one approach that has been proposed for allocating commitments regarding future GHG emissions mitigation. This approach is a practical tool for the evaluation of the Kyoto climate policy process and global climate change negotiations from the perspective of the developing countries.
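As an illustration of the contraction and convergence mechanism examined in the thesis, the sketch below contracts a global emissions budget while converging each country's per-capita allowance linearly towards the global per-capita level by a chosen year; all figures (populations, budget, contraction rate) are hypothetical.

```python
# Contraction and convergence, minimally: the global total contracts along
# a chosen path, and each country's per-capita allowance converges linearly
# from its current level to the global per-capita level by the convergence
# year. Populations are held constant for simplicity.

def allocation(year: int, base_year: int, conv_year: int,
               pc_country: float, population: float,
               global_total: float, global_population: float) -> float:
    """Country allowance (same unit as global_total) in a given year."""
    pc_global = global_total / global_population
    w = min(1.0, (year - base_year) / (conv_year - base_year))
    per_capita = (1 - w) * pc_country + w * pc_global   # linear convergence
    return per_capita * population

# Hypothetical high-emitting country (10 tCO2/capita, 50 M people) under a
# global budget contracting ~1 %/yr from 40 GtCO2.
for year in (2015, 2030, 2050):
    total = 40e9 * 0.99 ** (year - 2015)
    a = allocation(year, 2015, 2050, pc_country=10.0, population=50e6,
                   global_total=total, global_population=7.5e9)
    print(f"{year}: allowance ≈ {a / 1e9:.2f} GtCO2")
```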
Abstract:
In this thesis, consumers' expected motives and barriers for engaging in collaborative consumption in Finland are studied. The phenomenon is observed through the lens of consumer theory and connected to its context using Hofstede's 6-D model. The phenomenon is new, and the background research found almost no recorded results; within the limitations of this study, there are none at all. Therefore, by combining different kinds of literature with consumer theory and Hofstede's model of cultural factors, it was possible to compile a comprehensive general view of the present state of the phenomenon. The actual study was conducted using qualitative methods, and data were collected from six in-depth interviews with interviewees who had experience of using resources, offering them, or both. According to the results, the primary motive in all modes of consumption was economic. Anti-materialism, anti-consumption and an expanding lifestyle were other, somewhat more general motives. The perceived barriers were, notably as a new result, the amount of trouble involved, and within single modes, a lack of trust, the platform used and overly expensive prices.