15 results for Digital elevation model
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached from these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive; more cost-effective techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and for the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved with a Radial Basis Function (RBF)-based ANN than with a Radial Basis Functional Link Net (RBFLN): the former narrowed down the most probable areas for a.s. soil occurrence more accurately and delineated the least probable areas more reliably. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset for targeting strategic areas more precisely in subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale; mapping at this scale would be extremely time-consuming through manual assessment.
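To make the modeling step concrete, here is a minimal sketch of an RBF-network classifier of the kind described above, built from k-means centres and a logistic readout. The three evidential features, the shared kernel width and all data are hypothetical placeholders, not the thesis datasets or its exact network.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Evidential layers sampled at known soil-profile sites (placeholder data):
# columns = [Quaternary geology class code, aerogeophysical response, slope]
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)   # 1 = a.s. soil, 0 = non-a.s. soil

# Hidden layer: Gaussian radial basis functions centred by k-means.
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
width = X.std()                    # one shared kernel width (a simplification)

def rbf_features(X):
    # Squared distance from every sample to every centre -> Gaussian activation.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Output layer: a logistic readout yields a probability of a.s. soil occurrence;
# applied cell by cell over the evidential raster, this gives the probability map.
readout = LogisticRegression(max_iter=1000).fit(rbf_features(X), y)
probability = readout.predict_proba(rbf_features(X))[:, 1]
```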
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c. 21,300 km2), which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil-related indicators measured in the river water (sulfate content and the sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by this independent water chemistry data, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
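As an illustration of this cross-check, the sketch below compares hypothetical per-catchment sulfate/chloride ratios with the mapped extent of probable a.s. soil areas; all numbers are invented placeholders, not the thesis measurements.

```python
import numpy as np

catchments = ["A", "B", "C", "D"]
so4 = np.array([120.0, 15.0, 80.0, 10.0])   # sulfate in river water (mg/l)
cl = np.array([20.0, 12.0, 18.0, 11.0])     # chloride in river water (mg/l)
extent = np.array([14.0, 1.5, 9.0, 0.8])    # % of catchment mapped as probable a.s. soil

ratio = so4 / cl
# A strong correlation between the independent chemistry indicator and the
# mapped extent supports the reliability of the probability maps.
r = np.corrcoef(ratio, extent)[0, 1]
print(dict(zip(catchments, ratio.round(1))), f"correlation = {r:.2f}")
```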
Abstract:
Most of the applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represents height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), so that only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by, for example, variations in tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparison with existing DTM extraction algorithms showed that the algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot preserve small features in the terrain (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their heights estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles in the airborne laser scanner data. However, being based on the idea of a moving voxel, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity.
This information can be used to create vertical fuel continuity maps which can provide more realistic information on the risk of crown fires compared to CBH.
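A minimal sketch of the normalization step described above: fit a trend surface to the classified ground returns and subtract it, so that each point's z becomes height above ground. The second-order polynomial surface and the synthetic points are simplifying assumptions, not the thesis algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(1000, 3))    # x, y, elevation (m)
is_ground = rng.random(1000) < 0.05          # sparse ground returns under thick canopy

def design(x, y):
    # Design matrix for the trend surface z ~ 1 + x + y + x^2 + xy + y^2
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

g = pts[is_ground]
coef, *_ = np.linalg.lstsq(design(g[:, 0], g[:, 1]), g[:, 2], rcond=None)

dtm_z = design(pts[:, 0], pts[:, 1]) @ coef  # interpolated ground elevation per point
height = pts[:, 2] - dtm_z                   # normalized point cloud (height above ground)
```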
Abstract:
Additive manufacturing (AM), more commonly known as 3D printing, covers a wide variety of modern manufacturing technologies. AM is based on printing a digital 3D model directly into a final product, which is fabricated by adding material layer by layer; this is where the term additive manufacturing originates. It is not only material that is added: value, properties and so on are added as well. AM enables the production of different and even better products compared to conventional manufacturing technologies. An estimate of the potential of additive manufacturing can be obtained by considering the potential of laser cutting, one of the most widely used modern manufacturing technologies. Laser cutting has been in use for over 40 years; the whole market around the technology is currently worth c. four billion euros, with yearly growth of around 10%. One factor behind this success is that laser cutting enables radical improvements to products made of flat sheet. AM and 3D printing will do the same for three-dimensional parts. The laser devices currently used in 3D printing represent only around 1% of all laser devices used in fabrication globally, so even a cautious estimate suggests growth of at least 100% in the next few years. The role of education is very important when this kind of modern technology is implemented industrially. Once both the generation entering working life and the generation already in it understand the new technology, its potential and its limitations, product design can be rethought. The potential of product design is the driving force for the wide use of additive manufacturing and 3D printing. The utilization of additive manufacturing and 3D printing is also an opportunity for Finland and Finnish industry: this technology can save the Finnish manufacturing industry. The technique has strong potential, as Finland traditionally has strong industrial know-how and good ICT knowledge.
Abstract:
This Master's thesis studies techniques for embedding a watermark into a spectral image, and methods for identifying and detecting watermarks in spectral images. The spectral dimensionality of the original images was reduced using the PCA (Principal Component Analysis) algorithm. Watermark embedding into the spectral image was performed in the transform space. According to the proposed model, a transform-space component was replaced with a linear combination of the watermark and another transform-space component. The set of parameters used in the embedding was studied. The quality of the watermarked images was measured and analyzed, and recommendations for watermark embedding were presented. Several methods were used for watermark identification, and the identification results were analyzed. The robustness of the watermarks against different attacks was verified. A set of detection experiments was carried out in the thesis, taking into account the parameters used in watermark embedding. The ICA (Independent Component Analysis) method is considered one possible alternative for watermark detection.
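A hedged sketch of the embedding scheme just described: project a spectral image onto its principal components and replace one component with a linear combination of the watermark and another component. The cube size, the component indices and the weight alpha are illustrative assumptions, not the thesis parameters.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
cube = rng.random((64, 64, 31))        # hypothetical spectral image (rows, cols, bands)
pixels = cube.reshape(-1, 31)

pca = PCA(n_components=8).fit(pixels)  # reduce the spectral dimensionality
scores = pca.transform(pixels)         # per-pixel transform-space components

watermark = rng.choice([-1.0, 1.0], size=scores.shape[0])  # one mark value per pixel
alpha, k, j = 0.1, 7, 0
# Replace component k with a linear combination of the watermark and component j.
scores[:, k] = alpha * watermark + (1 - alpha) * scores[:, j]

marked = pca.inverse_transform(scores).reshape(cube.shape)  # watermarked image
```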
Abstract:
As the development of integrated circuit technology continues to follow Moore’s law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on e.g. size, power consumption and price with embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture (TTA). The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
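As a toy illustration of the transport-triggered principle behind TACO (not the thesis's SystemC model): an instruction is just a data move, and writing to a functional unit's trigger port is what starts the operation.

```python
class Adder:
    """A functional unit with an operand port, a trigger port and a result port."""
    def __init__(self):
        self.operand = 0
        self.result = 0

    def trigger(self, value):
        # Writing to the trigger port fires the computation as a side effect.
        self.result = self.operand + value

adder = Adder()
regs = {"r1": 2, "r2": 3}

# A TTA "program" is only a sequence of moves between registers and ports.
adder.operand = regs["r1"]   # move r1 -> adder.operand
adder.trigger(regs["r2"])    # move r2 -> adder.trigger (the add happens here)
regs["r3"] = adder.result    # move adder.result -> r3
print(regs["r3"])            # 5
```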
Abstract:
Peer-to-Peer (P2P) technology has revolutionized file exchange activities, besides enhancing the distribution of processing power. As such, this technology, which is nowadays made freely available to all Internet users, also poses a threat, as it enables the illegal distribution of copyrighted digital work. P2P technology continuously evolves at a greater pace than copyright legislation, leading to gaps between the applicability of copyright law and illicit file sharing and downloading. Such issues give consumers strong incentives to practise piracy using P2P systems, with a low perceived risk of prosecution, leading to substantial losses for copyright owners. This study focuses on developing insights for content owners into consumer behaviour towards piracy in Finland; the quantitative analyses use a data set based on a survey conducted by the Helsinki Institute for IT. The research approach investigates the significance of three fundamental areas in evaluating consumer behaviour: environment-related factors, innovation-related factors and consumer-related factors. Each of these integrates concepts derived from previous theoretical models such as the technology acceptance model, the theory of reasoned action, the theory of planned behaviour, the issue-risk-judgement model and Hunt & Vitell’s model.
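A hedged sketch of the kind of quantitative analysis this implies: regressing reported downloading behaviour on factors from the three areas. The variable names and all data are hypothetical placeholders, not the Helsinki Institute for IT survey.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([
    rng.integers(1, 6, n),   # environment-related: perceived risk of prosecution (1-5)
    rng.integers(1, 6, n),   # innovation-related: ease of use of P2P systems (1-5)
    rng.integers(1, 6, n),   # consumer-related: moral judgement of file sharing (1-5)
])
y = rng.integers(0, 2, n)    # 1 = reports downloading copyrighted content

model = LogisticRegression().fit(X, y)
# Coefficient signs and sizes indicate which factor areas drive the behaviour.
print(dict(zip(["risk", "ease", "judgement"], model.coef_[0].round(2))))
```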
Abstract:
This study was conducted in order to learn how companies’ revenue models will be transformed due to the digitalisation of their products and processes. Because there is still only a limited number of studies focusing solely on revenue models, and particularly on revenue model change caused by changes in the business environment, the topic was initially approached through the business model concept, which organises a company’s different value-creating operations and resources in order to create profitable revenue streams. This was used as the base for constructing the theoretical framework of this study, used to collect and analyse the information. The empirical section is based on a qualitative study approach and a multiple-case analysis of companies operating in the learning materials publishing industry. Their operations are compared with companies operating in other industries that have undergone comparable transformations, in order to recognise similarities or contrasts between the cases. The sources of evidence are a literature review, to find the essential dimensions researched earlier, and interviews with 29 managers and executives at 17 organisations representing six industries. Based on the earlier literature and the empirical findings of this study, the change of the revenue model is linked with changes in the other dimensions of the business model: when one dimension is altered, the others should be adjusted accordingly. At the case companies, the transformation is observed as the simultaneous utilisation of several revenue models and as revenue creation processes becoming more complex.
Abstract:
The travel and tourism field is undergoing changes due to the rapid development of information technology and digital services. Online travel has profoundly changed the way travel and tourism organizations interact with their customers. Mobile technology, such as mobile services for pocket devices (e.g. mobile phones), has the potential to take this development even further. Nevertheless, many issues have been highlighted since the early days of mobile services development (e.g. the lack of relevance and ease of use of many services). However, the wide adoption of smartphones and the mobile Internet in many countries, as well as the formation of so-called ecosystems between vendors of mobile technology, indicate that many of these issues have been overcome. Also, judging from the numbers of downloaded travel-related applications in application stores like Google Play, mobile travel and tourism services are evidently adopted and used by many individuals. However, as business is expected to start booming in the mobile era, many issues tend to be overlooked. Travelers are generally on the go, and thus services that work effectively in mobile settings (e.g. during a trip) are essential. Hence, individuals’ perceived drivers of and barriers to using mobile travel and tourism services on-site or during a trip seem particularly valuable to understand; this is one primary aim of the thesis. We are, however, also interested in understanding different types of mobile travel service users, as individuals may be very different in their propensity to adopt and use technology-based innovations (services). Research is also shifting from investigating issues of mobile service development towards understanding individuals’ usage patterns of mobile services. But designing new mobile services may be a complex matter from a service provider perspective. Hence, our secondary aim is to provide insights into drivers and barriers of mobile travel and tourism service development from a holistic business model perspective. To accomplish the research objectives, seven different studies were conducted over the period 2002–2013. The studies are founded on and contribute to theories within diffusion of innovations, technology acceptance, value creation, user experience and business model development. Several different research methods are utilized: surveys, field and laboratory experiments, and action research. The findings suggest that a successful mobile travel and tourism service supports one or several mobile motives (needs) of individuals, such as spontaneous needs, time-critical arrangements, efficiency ambitions, mobility-related needs (location features) and entertainment needs. The service could be customized to support travelers’ style of traveling (e.g. organized or independent travel) and should be easy to use, especially easy to take into use (access, install and learn) during a trip, without causing security concerns and/or financial risks for the user. In fact, the findings suggest that the most prominent barrier to the use of mobile travel and tourism services during a trip is an individual’s perceived financial cost (entry costs and usage costs). It should, however, be noted that regulations on data roaming prices between European countries have been put in place in the EU, and national telecom operators are starting to see ‘international data subscriptions’ as a sales advantage (e.g. 
Finnish Sonera provides a data subscription in the Baltic and Nordic region at the same price as in Finland), which will enhance the adoption of mobile travel and tourism services also in international contexts. In order to speed up the adoption rate, travel service providers could consider e.g. more local initiatives of free Wi-Fi networks, development of services that can be used, at least to some extent, in an offline mode (i.e. that do not require costly network access during a trip) and cooperation with telecom operators (e.g. lower usage costs for travelers who use specific mobile services or travel with specific vendors). Furthermore, based on a developed framework for the user experience of mobile trip arrangements, the results show that a well-designed mobile site and/or native application, preferably supporting integration with other mobile services, is a must for true mobile presence. In fact, travel service providers who want to build a relationship with their customers need to consider a downloadable native application, but in order to be found through the mobile channel and make contact with potential new customers, a mobile website should also be available. Moreover, we have made a first attempt, using cluster analysis, to identify user categories of mobile services in a travel and tourism context. The following four categories were identified: info-seekers, checkers, bookers and all-rounders. For example, the “all-rounders”, represented primarily by individuals who use their pocket device for almost any of the investigated mobile travel services, consisted primarily of 23- to 50-year-old males with high travel frequency and great online experience. The results also indicate that travel service providers will increasingly become multi-channel providers. To manage multiple online channels, closely integrated and hybrid online platforms for different devices, supporting all steps in the traveler’s process, should be considered. It could be useful for travel service providers to focus more on developing browser-based mobile services (HTML5 solutions) than native applications that work only with specific operating systems and specific devices. Based on an action research study and utilizing a holistic business model framework called STOF, we found that HTML5 as an emerging platform, at least for now, has some limitations regarding the development of the user experience and the monetization of the application. In fact, a native application store (e.g. Google Play) may be a key mediator in the adoption of mobile travel and tourism services, both from a traveler and a service provider perspective. Moreover, it must be remembered that many device and mobile operating system developers want service providers to create services specifically for their platforms and see native applications as a strategic advantage to sell more devices of a certain kind. The mobile telecom industry has moved into a battle of ecosystems, where device makers, developers of operating systems and service developers are to some extent forced to choose their development platforms.
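A minimal sketch of the cluster analysis mentioned above: grouping respondents by how often they use different mobile travel services. The four usage features and all data are illustrative placeholders, not the thesis survey.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Columns: usage frequency of information search, checking, booking, other services
usage = rng.integers(0, 6, size=(500, 4)).astype(float)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(usage)
# The cluster centres suggest profiles analogous to the reported categories:
# info-seekers, checkers, bookers and all-rounders.
print(km.cluster_centers_.round(1))
```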
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field: digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language and in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. 
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
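A minimal sketch of the dataflow model described above: actors communicate only through FIFO queues, and an actor may fire once its firing rule (sufficient input tokens) is satisfied. This is a toy dynamic scheduler, not the RVC-CAL tooling of the thesis.

```python
from collections import deque

class Actor:
    def __init__(self, fn, inputs, output, needed=1):
        self.fn, self.inputs, self.output, self.needed = fn, inputs, output, needed

    def can_fire(self):
        # Firing rule: every input queue holds enough tokens.
        return all(len(q) >= self.needed for q in self.inputs)

    def fire(self):
        # Consume one token per input, produce one output token.
        args = [q.popleft() for q in self.inputs]
        self.output.append(self.fn(*args))

src_a, src_b, out = deque([1, 2, 3]), deque([10, 20, 30]), deque()
adder = Actor(lambda a, b: a + b, [src_a, src_b], out)

# Dynamic scheduling: repeatedly fire any actor whose firing rule holds.
while adder.can_fire():
    adder.fire()
print(list(out))   # [11, 22, 33]
```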
Abstract:
Digital business ecosystems (DBE) are becoming an increasingly popular concept for modelling and building distributed systems in heterogeneous, decentralized and open environments. Information and communication technology (ICT) enabled business solutions have created an opportunity for automated business relations and transactions. The deployment of ICT in business-to-business (B2B) integration seeks to improve competitiveness by establishing real-time information and offering better information visibility to business ecosystem actors. The products, components and raw material flows in supply chains are traditionally studied in logistics research. In this study, we expand the research to cover the processes parallel to the service and information flows as information logistics integration. In this thesis, we show how better integration and automation of information flows enhance the speed of processes and thus provide cost savings and other benefits for organizations. Investments in DBE are intended to add value through business automation and are key decisions in building up information logistics integration. Business solutions that build on automation are important sources of value in networks that promote and support business relations and transactions. Value is created through improved productivity and effectiveness when new, more efficient collaboration methods are discovered and integrated into the DBE. Organizations, business networks and collaborations, even with competitors, form DBEs in which information logistics integration has a significant role as a value driver. However, traditional economic and computing theories do not treat digital business ecosystems as a separate form of organization, and they do not provide conceptual frameworks that can be used to explore digital business ecosystems as value drivers; combined internal management and external coordination mechanisms for information logistics integration are not current practice in companies’ strategic processes. In this thesis, we have developed and tested a framework to explore the digital business ecosystems developed and a coordination model for digital business ecosystem integration; moreover, we have analysed the value of information logistics integration. The research is based on a case study and on mixed methods, in which we use the Delphi method and Internet-based tools for idea generation and development. We conducted many interviews with key experts, which we recorded, transcribed and coded to find success factors. Quantitative analyses were based on a Monte Carlo simulation, which sought cost savings, and on Real Option Valuation, which sought an optimal investment program at the ecosystem level. This study provides valuable knowledge regarding information logistics integration by utilizing a suitable business process information model for collaboration. The information model is based on business process scenarios and on detailed transactions for the mapping and automation of product, service and information flows. The research results illustrate the current gap in the understanding of information logistics integration in a digital business ecosystem. Based on the success factors, we were able to illustrate how specific coordination mechanisms related to network management and orchestration could be designed. We also pointed out the potential of information logistics integration in value creation. With the help of global standardization experts, we utilized the design of the core information model for B2B integration. 
We built the quantitative analysis using the Monte Carlo-based simulation model and the Real Option Value model. This research covers relevant new research disciplines, such as information logistics integration and digital business ecosystems, where the current literature needs to be improved. The research drew on high-level experts and managers responsible for global business network B2B integration. However, it was dominated by one industry domain, and therefore a more comprehensive exploration should be undertaken to cover a larger population of business sectors. Building on this research, a new quantitative survey could provide new possibilities to examine information logistics integration in digital business ecosystems. The value activities indicate that further studies should continue, especially with regard to collaboration issues in integration, focusing on a user-centric approach. We should better understand how real-time information supports customer value creation by embedding the information into the lifetime value of products and services. The aim of this research was to build competitive advantage through B2B integration to support a real-time economy. For practitioners, this research created several tools and concepts to improve value activities, information logistics integration design, and management and orchestration models. Based on the results, the companies were able to better understand the formulation of the digital business ecosystem and the importance of joint efforts in collaboration. However, the challenge of incorporating this new knowledge into strategic processes in a multi-stakeholder environment remains. This challenge has been noted, and new projects have been established in pursuit of a real-time economy.
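As a hedged sketch of the Monte Carlo part of such an analysis: simulate uncertain per-transaction savings and transaction volumes from B2B integration to obtain a distribution of annual cost savings. All distributions and parameters are illustrative assumptions, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
saving_per_tx = rng.triangular(0.5, 2.0, 5.0, n)  # EUR saved per automated transaction
tx_volume = rng.normal(200_000, 30_000, n)        # automated transactions per year
annual_saving = saving_per_tx * tx_volume

p5, p50, p95 = np.percentile(annual_saving, [5, 50, 95])
print(f"annual savings (EUR): P5={p5:,.0f}, median={p50:,.0f}, P95={p95:,.0f}")
```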
Abstract:
The purpose of this thesis is to explore different digital content management models and to propose a process for properly managing the content on an organization’s website. This process also briefly defines the roles and responsibilities of the different actors involved. To create this process, the thesis has been divided into two parts. First, the theoretical analysis identifies the two main content management models: content management standardization and content management adaptation (also called the content management localization model). Each of these models has been analyzed through a SWOT analysis in order to identify its particularities and which of them is the best option for particular organizational objectives. In the empirical part, the thesis measures organizational website performance by comparing two main data sets. On the one hand, the international website is analyzed to identify the results of content management standardization. On the other hand, content management adaptation, i.e. the localization model, is analyzed by examining the key measures of the same organization’s Dutch page. The resulting output is a process model for localization, as well as recommendations on how to proceed when creating a digital content management strategy. However, more research is recommended to provide more comprehensive managerial solutions.
Abstract:
The computer game industry has grown steadily for years, and in revenues it can be compared to the music and film industries. The game industry has been moving to digital distribution. Computer gaming and the concept of the business model are discussed among industrial practitioners and the scientific community. The significance of the business model concept has increased in the scientific literature recently, although there is still much ongoing discussion about the concept. This thesis studies the role of the business model in the computer game industry. Computer game developers, designers, project managers and organization leaders in 11 computer game companies were interviewed. The data was analyzed to identify the important elements of the computer game business model, how the business model concept is perceived and how the growth of the organization affects the business model. Human capital was identified as crucial to the business. As games are partly a product of creative thinking, innovation and the creative process are also highly valued, as are technical skills when performing various activities. Marketing and customer relationships are also considered key elements in the computer game business model. Financing and partners are important especially for startups, where the organization is dependent on external funding and third-party assets. The results of this study provide organizations with an improved understanding of how the organization is built and which business model elements are emphasized.