16 results for semi-Markov decision process

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

Optimal Punishment of Economic Crime: A Study on Bankruptcy Crime. This thesis examines whether the punishment practice for bankruptcy crimes is optimal in the light of Gary S. Becker’s theory of optimal punishment. According to Becker, a punishment is optimal if it eliminates the expected utility of the crime for the offender and, on the other hand, minimizes the cost of the crime to society. The offender’s decision process is observed through the expected utility of the crime, which is calculated from the offender’s probability of getting caught, the cost of getting caught, and the profit from the crime. All quantities, including the punishment, are measured in cash. The cost of crime to society is assessed by defining the disutility the crime causes to society, calculated from the costs of crime prevention, crime damages, and punishment execution, together with the probability of getting caught. If the goal is to minimize crime profits, the punishments for bankruptcy crimes are not optimal: if debtors decided whether or not to commit the crime solely on economic grounds, the crime rate would be many times higher than it currently is. The prospective offender relies heavily on non-economic considerations; most probably, social pressure and a personal commitment to obey the law are major factors in the prospective criminal’s decision-making. Becker’s function for measuring the cost to society proved unhelpful in assessing the optimality of a punishment: its premise that society’s costs correlate with the costs the punishment imposes on the offender turns out to be unrealistic when bankruptcy crimes are observed. However, the majority of the cost of crime to society was found to be caused by crime damages, a finding that supports preventive criminal policy.
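
Becker’s expected-utility condition lends itself to a toy calculation. The sketch below is illustrative only — the function names and every figure are hypothetical, not taken from the thesis: a punishment is deterrent in Becker’s sense when it drives the offender’s expected cash utility of the crime to zero or below.

```python
def expected_crime_utility(profit, punishment, p_caught):
    """Becker-style expected utility of an offence, all terms in cash:
    with probability p_caught the offender keeps the profit but pays the
    punishment; otherwise the profit is kept in full."""
    return (1 - p_caught) * profit + p_caught * (profit - punishment)

def deterrent_punishment(profit, p_caught):
    """Smallest cash punishment that eliminates the expected gain."""
    return profit / p_caught

# With a 20% probability of getting caught, a 100,000 profit needs a
# 500,000 punishment before the expected utility of the crime vanishes.
print(expected_crime_utility(100_000, 500_000, 0.2))  # -> 0.0
```

The study’s observation is visible even in this toy: with realistic detection probabilities well below one, a purely monetary deterrent must be a large multiple of the crime’s profit.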

Relevance:

100.00%

Publisher:

Abstract:

Mikael Juselius’ doctoral dissertation covers a range of significant issues in modern macroeconomics by empirically testing a number of important theoretical hypotheses. The first essay presents indirect evidence within the framework of the cointegrated VAR model on the elasticity of substitution between capital and labor by using Finnish manufacturing data. Instead of estimating the elasticity of substitution by using the first-order conditions, he develops a new approach that utilizes a CES production function in a model with a 3-stage decision process: investment in the long run, wage bargaining in the medium run and price and employment decisions in the short run. He estimates the elasticity of substitution to be below one. The second essay tests the restrictions implied by the core equations of the New Keynesian Model (NKM) in a vector autoregressive model (VAR) by using both Euro area and U.S. data. Both the new Keynesian Phillips curve and the aggregate demand curve are estimated and tested. The restrictions implied by the core equations of the NKM are rejected on both U.S. and Euro area data. These results are important for further research. The third essay is methodologically similar to essay 2, but it concentrates on Finnish macro data by adopting a theoretical framework of an open economy. Juselius’ results suggest that the open economy NKM framework is too stylized to provide an adequate explanation for Finnish inflation. The final essay provides a macroeconometric model of Finnish inflation and associated explanatory variables and it estimates the relative importance of different inflation theories. His main finding is that Finnish inflation is primarily determined by excess demand in the product market and by changes in the long-term interest rate. This study is part of the research agenda carried out by the Research Unit of Economic Structure and Growth (RUESG).
The aim of RUESG is to conduct theoretical and empirical research on important issues in industrial economics, real option theory, game theory, organization theory and the theory of financial systems, as well as to study problems in labor markets, macroeconomics, natural resources, taxation and time series econometrics. RUESG was established at the beginning of 1995 and is one of the National Centers of Excellence in research selected by the Academy of Finland. It is financed jointly by the Academy of Finland, the University of Helsinki, the Yrjö Jahnsson Foundation, the Bank of Finland and the Nokia Group. This support is gratefully acknowledged.

Relevance:

40.00%

Publisher:

Abstract:

The dissertation examines Roman provincial administration and the phenomenon of territorial reorganisations of provinces during the Imperial period, with special emphasis on the provinces of Arabia and Palaestina during the Later Roman period, i.e., from Diocletian (r. 284–305) to the accession of Phocas (602), in the light of imperial decision-making. Provinces were the basic unit of Roman rule, for centuries the only level of administration that existed between the emperor and the cities of the Empire. The significance of the territorial reorganisations that the provinces were subjected to during the Imperial period is thus of special interest. The approach to the phenomenon is threefold: firstly, attention is paid to the nature and constraints of the Roman system of provincial administration; secondly, the phenomenon of territorial reorganisations is analysed on the macro scale; and thirdly, a case study concerning the reorganisations of the provinces of Arabia and Palaestina is conducted. The study of the mechanisms of decision-making provides a foundation through which the collected data on all known major territorial reorganisations is interpreted. The data concerning reorganisations is also subjected to qualitative comparative analysis, which provides a new perspective on the data in the form of statistical analysis that is sensitive to the complexities of individual cases. This analysis of imperial decision-making is based on a timeframe stretching from Augustus (r. 30 BC–AD 14) to the accession of Phocas (602). The study identifies five distinct phases in the use of territorial reorganisations of the provinces. From Diocletian's reign there is a clear normative change that made territorial reorganisations a regular administrative tool with which the decision-making elite addressed a wide variety of qualitatively different concerns. From the beginning of the fifth century the use of territorial reorganisations rapidly diminishes.
The two primary reasons for the decline in the use of reorganisations were the solidification of ecclesiastical power and interests connected to the extent of provinces, and the decline of the dioceses. The case study of Palaestina and Arabia identifies seven different territorial reorganisations from Diocletian to Phocas. Their existence not only testifies to wider imperial policies, but also shows sensitivity to local conditions and corresponds with the general picture of provincial reorganisations. The territorial reorganisations of the provinces reflect the proactive control of the Roman decision-making elite. The importance of reorganisations should be recognised more clearly as part of the normal imperial administration of the provinces, and as especially reflecting the functioning of the dioceses.

Relevance:

30.00%

Publisher:

Abstract:

Socio-economic and demographic changes among family forest owners and demands for versatile forestry decision aid motivated this study, which sought grounds for owner-driven forest planning. Finnish family forest owners’ forest-related decision making was analyzed in two interview-based qualitative studies, the main findings of which were surveyed quantitatively. Thereafter, a scheme for adaptively mixing methods in individually tailored decision support processes was constructed. The first study assessed owners’ decision-making strategies by examining varying levels of the sharing of decision-making power and the desire to learn. Five decision-making modes – trusting, learning, managing, pondering, and decisive – were discerned and discussed against conformable decision-aid approaches. The second study conceptualized smooth communication and assessed emotional, practical, and institutional boosters of and barriers to such smoothness in communicative decision support. The results emphasize the roles of trust, comprehension, and contextual services in owners’ communicative decision making. In the third study, a questionnaire tool to measure owners’ attitudes towards communicative planning was constructed by using trusting, learning, and decisive dimensions. Through a multivariate analysis of survey data, three owner groups were identified as fusions of the original decision-making modes: trusting learners (53%), decisive learners (27%), and decisive managers (20%). Differently weighted communicative services are recommended for these compound wishes. The findings of the studies above were synthesized in the form of adaptive decision analysis (ADA), which allows and encourages the decision-maker (owner) to make deliberate choices concerning the phases of a decision aid (planning) process. The ADA model relies on adaptability and feedback management, which foster smooth communication with the owner and (inter-)organizational learning of the planning institution(s).
The summarized results indicate that recognizing the communication-related amenity values of family forest owners may be crucial in developing planning and extension services. It is therefore recommended that owners, root-level planners, consultation professionals, and pragmatic researchers collaboratively continue to seek stable change.

Relevance:

30.00%

Publisher:

Abstract:

Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of biological sciences such as evolution and cell functioning. Currently the field of genetics is developing rapidly because of recent advances in the technologies by which molecular data can be obtained from living organisms. To extract the most information from such data, the analyses need to be carried out using statistical models tailored to the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on the modeling of the unobserved recent ancestry of the sampled individuals (say, for tens of generations), which is carried out by using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force that shapes the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between the adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals.
In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely-used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together, and the allele frequencies of each pool are determined in a single genotyping.
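
The MCMC approach can be illustrated with a deliberately tiny stand-in: a Metropolis sampler for a single allele frequency under a flat prior. The thesis samples far richer objects (pedigree structures and gene flows at marker loci); everything below, including the function name and all parameter values, is illustrative only.

```python
import math
import random

def metropolis_allele_freq(successes, trials, steps=20_000, seed=1):
    """Toy Metropolis sampler: posterior mean of an allele frequency given
    `successes` carrier gametes out of `trials`, under a flat prior."""
    rng = random.Random(seed)

    def log_lik(p):
        if not 0.0 < p < 1.0:
            return float("-inf")  # outside the support: always reject
        return successes * math.log(p) + (trials - successes) * math.log(1.0 - p)

    p = 0.5
    draws = []
    for _ in range(steps):
        proposal = p + rng.gauss(0.0, 0.05)  # symmetric random-walk proposal
        if math.log(rng.random()) < log_lik(proposal) - log_lik(p):
            p = proposal
        draws.append(p)
    kept = draws[steps // 2:]  # discard the first half as burn-in
    return sum(kept) / len(kept)
```

For 30 copies of an allele among 100 observed gametes, the returned posterior mean should fall near the analytical value (30 + 1) / (100 + 2) ≈ 0.30.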

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurements of the expression of tens of thousands of genes simultaneously. In a single study, such data is traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, this data has been largely unavailable, and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a large new ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text-mining and decision-tree-based method for automatically converting human-readable free-text microarray data annotations into a categorised format. Data comparability in this large cross-laboratory integrated dataset, and the minimisation of the systematic measurement errors characteristic of each laboratory, were ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology.
A preface and motivation for the construction and analysis of a global map of human gene expression are given by an analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
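
The hierarchical-clustering step can be sketched with a minimal single-linkage agglomerative routine. This is pure-Python and illustrative only; the thesis pipeline additionally relies on principal component analysis, quality metrics, and purpose-built ontologies, none of which appear here.

```python
def single_linkage(points, n_clusters):
    """Agglomerative clustering with single linkage: repeatedly merge the
    two clusters whose closest members are nearest, until n_clusters
    remain. `points` is a list of equal-length numeric tuples."""
    clusters = [[p] for p in points]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def linkage(c1, c2):
        return min(dist(a, b) for a in c1 for b in c2)

    while len(clusters) > n_clusters:
        # find the pair of clusters with the smallest single-linkage distance
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters

# Two tight pairs of toy "expression profiles" separate into two clusters:
groups = single_linkage([(0, 0), (0, 1), (10, 10), (10, 11)], 2)
print(sorted(len(g) for g in groups))  # -> [2, 2]
```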

Relevance:

30.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability. Earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with. This allows the separation of the essential from the conventional. To determine if the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented.
For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask if accuracy versus effort trade-offs can be controlled after training. For another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask if problem-specific organization is necessary.
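
The delegation-and-confidence idea can be sketched as a two-stage cascade in which a cheap classifier keeps the inputs it is confident about and delegates the rest. This is a generic illustration of per-input effort control, not the framework actually proposed in the thesis; all names and numbers are hypothetical.

```python
def cascade_predict(x, cheap, expensive, threshold):
    """Run the cheap stage first; it returns (label, confidence). Delegate
    to the expensive stage only when confidence < threshold, so effort is
    spent on a per-input basis. Raising the threshold buys accuracy with
    more expensive-stage calls, and the trade-off is tunable after training."""
    label, conf = cheap(x)
    if conf >= threshold:
        return label, "cheap"
    return expensive(x), "expensive"

# Toy stages on scalars: the cheap stage is confident far from zero.
cheap = lambda x: (x > 0, min(1.0, abs(x)))
expensive = lambda x: x > 0

print(cascade_predict(2.0, cheap, expensive, 0.5))   # -> (True, 'cheap')
print(cascade_predict(0.1, cheap, expensive, 0.5))   # -> (True, 'expensive')
```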

Relevance:

30.00%

Publisher:

Abstract:

Information visualization is a process of constructing a visual presentation of abstract quantitative data. The characteristics of visual perception enable humans to recognize patterns, trends and anomalies inherent in the data with little effort in a visual display. Such properties of the data are likely to be missed in a purely text-based presentation. Visualizations are therefore widely used in contemporary business decision support systems. Visual user interfaces called dashboards are tools for reporting the status of a company and its business environment to facilitate business intelligence (BI) and performance management activities. In this study, we examine the research on the principles of human visual perception and information visualization as well as the application of visualization in a business decision support system. A review of current BI software products reveals that the visualizations included in them are often quite ineffective in communicating important information. Based on the principles of visual perception and information visualization, we summarize a set of design guidelines for creating effective visual reporting interfaces.

Relevance:

30.00%

Publisher:

Abstract:

Knowledge-sharing in teamwork. The study examines the link between knowledge-sharing that takes place in a team and the dimensions and objectives of the team’s activities. The question the study poses is: How does knowledge-sharing in a team relate to the team’s activities? The exchange of knowledge is examined using knowledge-sharing networks and the conversion model, which describes the process of knowledge formation. The answer to the question is sought through four empirical articles describing the activities of a team from the viewpoint of quality, fairness, power related to knowledge management, and performance. One of the articles used in the study describes the role of networks in work life more generally. It attempts to shed light on the manner in which team-related networks operate as part of a more extensive structure of organizational networks. Finland is one of the most eager users of teamwork, if numbers are used as a yardstick. About half of all Finnish wage earners worked in teams in 2009, and comparisons show that the use of teams in Finland is above the EU average. This study focuses on so-called semi-autonomous teams, which carry out permanent work tasks. In such teams, tasks are interdependent, and teams are jointly responsible for ensuring that the work is done. Team members may also, at least to some extent, agree among themselves on how the tasks are carried out and are able to take part in the decision-making process. Such teamwork makes knowledge-sharing an important element of the team’s activities. Knowledge and knowledge-sharing have become a major resource, allowing organizations to operate and even compete in today’s increasingly competitive markets. A single team or a single organization cannot, however, possess all the knowledge required for carrying out the tasks assigned to it. Although it is difficult to copy the knowledge generated in an organization, it is important to share the knowledge within and between organizations.
External links supply teams and organizations with important knowledge that allows them to keep their operations up-to-date and their structures well-functioning. In fact, knowledge provides teams and organizations with an intangible resource that improves their capacity to interact with their environment and to adjust to it. For this reason, it is important to examine both the internal and external knowledge-sharing taking place in a team. The findings of the study show that in terms of quality, fairness, performance and the knowledge management issues concerning a team, its social network structure is both internally and externally connected with its activities. A team structure that is internally coherent and at the same time open to external contacts, is, with certain restrictions, connected with the quality, fairness, and performance of the team. The restrictions concern differences between procedural and interactional justice, public and private sectors, and the team leaders and ordinary team members. The role of the team leader is closely connected with the management of networks that are considered valuable. The results of the study indicate that teamwork is supervisor-dominated. Thus, teamwork does not substantially strengthen the influence of individual employees as players in knowledge-transfer networks. However, ordinary team members possess important peer contacts inside the organization. Teamwork clearly allows employees to interact in a democratic manner, and here the transfer of tacit knowledge plays an important role. Keywords: teamwork, knowledge-sharing, social networks, organization

Relevance:

30.00%

Publisher:

Abstract:

Aerosols impact the planet and our daily lives through various effects, perhaps most notably those related to their climatic and health-related consequences. While there are several primary particle sources, secondary new particle formation from precursor vapors is also known to be a frequent, global phenomenon. Nevertheless, the formation mechanism of new particles, as well as the vapors participating in the process, remain a mystery. This thesis consists of studies on new particle formation specifically from the point of view of numerical modeling. The formation rate of 3 nm particles has been observed to depend on the sulphuric acid concentration raised to the power of 1–2. This suggests that the nucleation mechanism is of first or second order with respect to the sulphuric acid concentration, in other words a mechanism based on activation or kinetic collision of clusters. However, model studies have had difficulties in replicating the small exponents observed in nature. The work done in this thesis indicates that the exponents may be lowered by the participation of a co-condensing (and potentially nucleating) low-volatility organic vapor, or by increasing the assumed size of the critical clusters. On the other hand, the presented new and more accurate method for determining the exponent indicates high diurnal variability. Additionally, these studies included several semi-empirical nucleation rate parameterizations as well as a detailed investigation of the analysis used to determine the apparent particle formation rate. Because they cover a high proportion of the Earth's surface area, oceans could potentially prove to be climatically significant sources of secondary particles. In the absence of marine observation data, new particle formation events in a coastal region were parameterized and studied. Since the formation mechanism is believed to be similar, the new parameterization was applied in a marine scenario.
The work showed that marine CCN production is feasible in the presence of additional vapors contributing to particle growth. Finally, a new method to estimate concentrations of condensing organics was developed. The algorithm utilizes a Markov chain Monte Carlo method to determine the required combination of vapor concentrations by comparing a measured particle size distribution with one from an aerosol dynamics process model. The evaluation indicated excellent agreement against model data, and initial results with field data appear sound as well.
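
The comparison step of that algorithm can be sketched as a simple misfit score between a measured and a modeled size distribution. In the thesis's scheme, an MCMC sampler varies candidate vapour concentrations, reruns the aerosol dynamics process model, and scores each candidate by a comparison of this kind; the function below is only an illustrative stand-in, and all numbers are hypothetical.

```python
import math

def log_misfit(measured, modeled):
    """Sum of squared log-residuals between two particle number size
    distributions, one value per size bin. Lower means a better match."""
    return sum((math.log(m) - math.log(s)) ** 2
               for m, s in zip(measured, modeled))

measured = [120.0, 300.0, 90.0]   # hypothetical concentrations per size bin
close = [118.0, 310.0, 95.0]      # candidate model run near the data
far = [60.0, 500.0, 40.0]         # candidate model run far from the data
print(log_misfit(measured, close) < log_misfit(measured, far))  # -> True
```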

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this research is to draw up a clear construction of an anticipatory communicative decision-making process and a successful implementation of a Bayesian application that can be used as an anticipatory communicative decision-making support system. This study is a decision-oriented and constructive research project, and it includes examples of simulated situations. As a basis for further methodological discussion about different approaches to management research, a decision-oriented approach is used in this research; it is based on mathematics and logic and is intended to develop problem-solving methods. The approach is theoretical and characteristic of normative management science research. The approach of this study is also constructive. An essential part of the constructive approach is to tie the problem to its solution with theoretical knowledge. Firstly, the basic definitions and behaviours of anticipatory management and managerial communication are provided. These descriptions include discussions of the research environment and the management processes formed. These issues define and explain the background to further research. Secondly, the discussion proceeds to managerial communication and anticipatory decision-making based on preparation, problem solving, and solution search, which are also related to risk management analysis. After that, a solution for the decision-making support application is formed using four different Bayesian methods: the Bayesian network, the influence diagram, the qualitative probabilistic network, and the time-critical dynamic network. The purpose of the discussion is not to compare different theories but to explain the theories that are being implemented. Finally, an application of Bayesian networks to the research problem is presented. The usefulness of the prepared model in examining a problem is shown, and the results of the research are presented.
The theoretical contribution includes definitions and a model of anticipatory decision-making. The main theoretical contribution of this study has been to develop a process for anticipatory decision-making that includes management with communication, problem-solving, and the improvement of knowledge. The practical contribution includes a Bayesian Decision Support Model, which is based on Bayesian influence diagrams. The main contributions of this research are two developed processes: one for anticipatory decision-making, and the other for producing a model of a Bayesian network for anticipatory decision-making. In summary, this research contributes to decision-making support by being one of the few publicly available academic descriptions of an anticipatory decision support system, by presenting a Bayesian model that is grounded in firm theoretical discussion, by publishing algorithms suitable for decision-making support, and by defining the idea of anticipatory decision-making for a parallel version. Finally, based on the results of the research, an analysis of anticipatory management for planned decision-making is presented, founded on observation of the environment, analysis of weak signals, and alternatives for creative problem solving and communication.
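
A single node of such a Bayesian model reduces to an ordinary Bayes update, which can be sketched as follows. A full Bayesian network chains many such updates over a graph of variables; this is a minimal illustration with hypothetical numbers, not the thesis's model.

```python
def posterior_event_probability(prior, p_signal_given_event, p_signal_given_quiet):
    """Bayes' rule for one observed weak signal: posterior probability of
    an anticipated event given the signal, from the prior and the signal's
    likelihood under the event and under quiet conditions."""
    evidence = (p_signal_given_event * prior
                + p_signal_given_quiet * (1 - prior))
    return p_signal_given_event * prior / evidence

# A signal seen 80% of the time before the event but only 20% otherwise
# lifts a 10% prior to about 31%:
print(round(posterior_event_probability(0.10, 0.8, 0.2), 2))  # -> 0.31
```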

Relevance:

30.00%

Publisher:

Abstract:

In this thesis a manifold learning method is applied to the problem of WLAN positioning and automatic radio map creation. Due to the nature of WLAN signal strength measurements, a signal map created from raw measurements results in non-linear distance relations between measurement points. These signal strength vectors reside in a high-dimensional coordinate system. With the help of the so-called Isomap algorithm, the dimensionality of this map can be reduced, and the map thus more easily processed. By embedding position-labeled strategic key points, we can automatically adjust the mapping to match the surveyed environment. The environment is thus learned in a semi-supervised way: gathering training points and embedding them in a two-dimensional manifold gives us a rough mapping of the measured environment. After a calibration phase, where the labeled key points in the training data are used to associate coordinates in the manifold representation with geographical locations, we can perform positioning using the adjusted map. This can be achieved through a traditional supervised learning process, which in our case is a simple nearest-neighbors matching of a sampled signal strength vector. We deployed this system in two locations on the Kumpula campus in Helsinki, Finland. Results indicate that positioning based on the learned radio map can achieve good accuracy, especially in hallways or other areas in the environment where the WLAN signal is constrained by obstacles such as walls.
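
The final matching step can be sketched as a nearest-neighbour lookup against a calibrated radio map. Only this matching step is shown; the Isomap embedding and key-point calibration described above are omitted, and the coordinates and RSSI values below are hypothetical.

```python
def locate(sample, radio_map):
    """Nearest-neighbour positioning: radio_map maps (x, y) coordinates to
    signal-strength fingerprints; return the coordinate whose fingerprint
    is closest to `sample` in Euclidean distance."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(radio_map, key=lambda pos: dist(radio_map[pos], sample))

# Two hypothetical calibration points with dBm readings from two access points:
radio_map = {(0.0, 0.0): [-40, -75], (12.0, 0.0): [-75, -40]}
print(locate([-43, -70], radio_map))  # -> (0.0, 0.0)
```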

Relevance:

30.00%

Publisher:

Abstract:

This Master’s thesis is a qualitative study based on interviews of 15 Chinese immigrants to Finland, aiming to provide a sociological perspective on the migration experience through the eyes of Chinese immigrants in the Finnish social welfare context. The research focuses on four crucial aspects of life in the settlement process: housing, employment, access to health care, and child care. Inspired by Allardt’s theoretical framework ‘Having, Loving and Being’, social relationships and individual satisfaction are examined in the case of the Chinese interviewees dealing with the four life aspects. Finland was not perceived as an attractive migration destination by most Chinese interviewees in the beginning. However, with longer residence in Finland, the Finnish social welfare system gradually became a crucial appealing factor in their permanent settlement in Finland. Meanwhile, the social responsibility of caring for their elderly parents in China, strong feelings of isolation in Finland, and insufficient integration into Finnish society were influential factors in their decisions to return to China. Social relationships with personal friends, migration brokers, schools, employers, and family relatives greatly influenced the four life aspects of Chinese immigrants in Finland. The social relationship with the Finnish social welfare sector is supportive of Chinese immigrants, but Chinese immigrants do not rely heavily on Finnish social protection. Housing conditions improved greatly over time, while upward mobility in the Finnish labour market was not significant among Chinese immigrants. All Chinese immigrants were satisfied with their current housing by the time I interviewed them, while most of them had subjective feelings of being alienated in the Finnish labour market, which seriously hindered their integration into Finnish society.
In general, Chinese immigrants were satisfied with the low cost of accessing Finnish public health care services, affordable Finnish child day care services, and financial subsidies for children from the Finnish social welfare sector. The research also suggests that employment is the central basis of well-being. Support from the Finnish social welfare sector can improve satisfaction levels among immigrants, especially when it mitigates the effects of low-paid employment. Likewise, the empirical study of Chinese immigrants in Finland shows that Having (needs for materials), Loving (needs for social relations) and Being (needs for social integration) are all involved in the four concrete aspects (housing, employment, access to health care and child care).

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Although previous research has recognised adaptation as a central aspect of relationships, the adaptation of the sales process to the buying process has not been studied. Furthermore, the link between relationship orientation as a mindset and adaptation as a strategy forming the means has not been elaborated upon in previous research. Adaptation in the context of relationships has mostly been studied in relationship marketing; in sales and sales management research, adaptation has been studied with reference to personal selling. This study focuses on adapting the sales process to strategically match the buyer’s mindset and buying process. The purpose of this study is to develop a framework for the strategic adaptation of the seller’s sales process to match the buyer’s buying process in a business-to-business context, in order to make sales processes more relationship-oriented. To arrive at a holistic view of sales process adaptation during relationship initiation, both the seller and the buyer are included in the extensive case analysed in the study. However, the selected perspective is primarily that of the seller, and the level of analysis is the sales process. The epistemological perspective adopted is constructivism. The study is qualitative, applying a retrospective case study in which the main sources of information are in-depth semi-structured interviews with key informants representing the counterparts at the seller and the buyer in the software development and telecommunications industries. The main theoretical contribution of this research is to target a new area at the crossroads of relationship marketing, sales and sales management, and buying and purchasing by studying adaptation in a business-to-business context from a new perspective. Primarily, this study contributes to research in sales and sales management with reference to relationship orientation and strategic sales process adaptation.
This research fills three research gaps. Firstly, it links the relationship orientation mindset with adaptation as a strategy. Secondly, it extends adaptation in sales from adaptation in selling to strategic adaptation of the sales process. Thirdly, it extends adaptation to include the facilitation of adaptation. The approach applied in the study, systematic combining, is characterised by continuously moving back and forth between theory and empirical data. The framework that emerges, in which linking mindset with strategy and means forms a central aspect, includes three layers: the purchasing portfolio, seller-buyer relationship orientation, and strategic sales process adaptation. Linking the three layers enables an analysis of where sales process adaptation can make a contribution. Furthermore, implications for managerial use are demonstrated, for example how sellers can avoid the ‘trap’ of ad hoc adaptation. This involves engaging the company, embracing the buyer’s purchasing portfolio, understanding the seller’s current position in this portfolio, and possibly educating the buyer about the advantages of adopting a relationship-oriented approach.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This study investigates the process of producing interactivity in a converged media environment. The study asks whether more media convergence equals more interactivity. The research object is approached through semi-structured interviews with prominent decision makers within the Finnish media. The main focus of the study is on the three big traditional media, radio, television and the printing press, and their ability to adapt to the changing environment. The study develops theoretical models for the analysis of interactive features and convergence. Case studies are formed from the interview data and evaluated against the models. As a result, the cases are plotted and compared on a four-fold table. The cases are Radio Rock, NRJ, Big Brother, Television Chat, Olivia and Sanoma News. It is found that the theoretical models can accurately forecast the results of the case studies. The models are also able to distinguish different aspects of both interactivity and convergence, so that a case which at first glance seems not very interactive is in the end found to receive the second-highest scores in the analysis. The highest scores are received by Big Brother and Sanoma News. Through the theory and the analysis of the research data, it is found that the concepts of interactivity and convergence are intimately intertwined and in many cases very hard to separate from each other. Hence the answer to the main question of this study is yes: convergence does promote interactivity and audience participation. The main theoretical background for the analysis of interactivity follows the work of Carrie Heeter, Spiro Kiousis and Sally McMillan. Heeter's six-dimensional definition of interactivity is used as the basis for operationalizing interactivity. Actor-network theory is used as the main theoretical framework to analyze convergence.
The definition and operationalization of actor-network theory into a model of convergence follows the work of Michel Callon, Bruno Latour and especially John Law and Felix Stalder.