Abstract:
The objective of this Master’s Thesis is to identify best practices for IT service management (ITSM) integration. Integration in this context means process integration between an IT organization and an integration partner. The objective is examined from two perspectives: process and technology. The thesis consists of theory, framework, implementation, and analysis parts. The first part introduces common methodology of IT service management and enterprise integration. The second part presents an integration framework for ITSM integration. The third part illustrates how the framework is used, and the last part analyses the framework. The major results of this thesis were the framework architecture, the framework tools, and the implementation, testing, and deployment models for ITSM integration. As a fundamental best practice, the framework contained a four-division structure between architecture, process, data, and technology. This architecture provides a baseline for ITSM integration design, implementation, and testing.
Abstract:
This study discusses the importance of diasporas’ knowledge with regard to the national competitive advantage of Finland. The purpose of this study is to suggest an interaction framework that illustrates how diasporas can benefit the host country via intentional knowledge spillovers, with two sub-objectives: to identify which features are crucial for productive interaction between a host government and diasporas, and to scrutinize the modes of interaction currently effective in Finland. The theoretical background of the study consists of literature relating to the concepts of diaspora and knowledge. The empirical research conducted for this study is based on expert interviews. The interview data was collected between September and November 2013. Eight interviews were conducted: five with representatives of expert organizations and three with immigrants. Thematic analysis was used to categorize and interpret the interview data, and thematic networks were built to act as a basis of analysis. This study finds that knowledge, especially new combinations of knowledge, is a significant input to innovation, and that innovation is the basis of national competitive advantage. Thus the means through which knowledge is transferred are of key importance. Diasporas are found to be a good source of new knowledge, and thus may aid the innovative process. The host country’s stance and policy are found to have a major impact on its ability to benefit from diasporas’ knowledge. This study finds that Finland, as a host country, has a very fragmented strategy field and a prejudiced attitude, which currently make it difficult to utilize the potential of diasporas. The interaction framework based on these findings suggests ways in which Finland can improve its national competitive advantage by acquiring the innovative potential of diasporas. Strategy revision and increased promotion are discussed as means towards improved interaction.
In addition, the importance of learning is emphasized. The findings of this study enhance understanding of the relationship between the concepts of diaspora and knowledge. In addition, this study ties the relationship to economic benefit. Future research is, however, necessary in order to fully understand the meaning of the relationship, as well as to increase understanding of the generalizability of the interaction framework.
Abstract:
This study is qualitative action research by nature, with elements of personal design in the form of constructing a tangible model implementation framework. The empirical data was gathered via two questionnaires related to four workshop events with twelve individual participants: five represented maintenance customers, three maintenance service providers, and four equipment providers. There are two main research objectives, corresponding to the two complementary focus areas of this thesis. Firstly, the value-based life-cycle model, whose first version had already been developed prior to this thesis, requires updating in order to increase its real-life applicability as an inter-firm decision-making tool in industrial maintenance. This first research objective is fulfilled by improving the appearance, intelligibility, and usability of the model, and by adding certain new features. The workshop participants from the collaborating companies were reasonably pleased with the changes made, although further attention will be required in future on the model’s intelligibility in particular, as the main results, charts, and values were all reckoned slightly hard to understand. The upgraded model’s appearance and the newly added features satisfied them the most. Secondly, and more importantly, the premises of the model’s possible inter-firm implementation process need to be considered. This second research objective is delivered in two consecutive steps. First, a bipartite open-books-supported implementation framework is created and its characteristics discussed in theory. Then, the prerequisites and pitfalls of increasing inter-organizational information transparency are studied in an empirical context. One of the main findings was that the organizations are not yet prepared for network-wide information disclosure, as dyadic collaboration was favored instead.
However, they would be willing to share information bilaterally at least. Another major result was that the present state of the companies’ cost accounting systems will need enhancing before implementation, since accurate and sufficiently detailed maintenance data is not available. It will also be crucial to create a supporting and mutually agreed network infrastructure; hardly any collaborative models, methods, or tools are currently in use. Lastly, the essential questions of mutual trust and predominant purchasing strategies are important for cooperation. If inter-organizational activities are expanded, a more relational approach should be favored in this regard. Mutual trust was also recognized as a significant cooperation factor, but it is hard to measure in reality.
Abstract:
The term innovation is one of the most used in business practice. However, in order to derive value from it, companies need to define a systematic and structured way to manage innovation. This process can be difficult and very risky, since it involves developing a firm’s capabilities, with human and technical challenges that depend on the firm’s context. Additionally, there seems to be no magic formula for managing innovation: what works in one company may not work in another, even within the same industry. The purpose of this research is therefore to identify how oil and gas companies can manage innovation, and what main elements, interrelations, and structure are required for managing innovation effectively in this sector, which is critical for the world economy. The study follows a holistic single case study in a National Oil Company (NOC) of a developing country to explore how innovation performs in the industry and what the main elements of innovation management and their interactions are, given the nature of the industry. Contributory literature and qualitative data from the case study company (collected through non-standardized interviews) are analyzed. The research confirms the relevance and importance of defining and implementing an innovation framework in order to ensure the generation of value and to organize and guide a firm’s innovation efforts. On this basis, drawing on the theoretical background, the research findings, and the company’s innovation environment and conditions, a framework for managing innovation at the case study company is suggested. This study is one of the few, if not the only one, to have reviewed how oil and gas companies manage innovation and its practical implementation in a company from a developing country.
Both researchers and practitioners will gain a snapshot of innovation management in the oil and gas industry and of its growing necessity in the business world. Some issues have been highlighted so that future studies can focus in those directions. Indeed, even though research on innovation management has grown significantly, many issues still need to be addressed to gain insight into managing innovation in various contexts and industries. Studies are mostly performed in the context of large firms in developed countries, so research in the context of developing countries remains an almost untouched area, especially in the oil and gas industry. Finally, the research suggests it is crucial to explore the effect of several innovation-related variables: open innovation in third-world economies and in state-owned companies; the impact of mergers and acquisitions on innovation performance in oil and gas companies; value measurement in the first stages of the innovation process; and the development of innovation capabilities in companies from developing nations.
Abstract:
Outlier detection is an important form of data analysis because, in many cases, outliers contain the interesting and important pieces of information. In recent years, many different outlier detection algorithms have been devised for finding different kinds of outliers in varying contexts and environments. Some effort has also been devoted to studying how to combine different outlier detection methods effectively. This thesis studied the combination of outlier detection algorithms as an ensemble by designing a modular framework for outlier detection that combines arbitrary outlier detection techniques. This work resulted in an example implementation of the framework. The outlier detection capability of the ensemble method was validated using datasets and methods found in outlier detection research. The framework achieved better results than the individual outlier detection algorithms. Future research includes how to handle large datasets effectively and the possibilities of real-time outlier monitoring.
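The ensemble idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's implementation: the two detectors (a z-score detector and a 1-D nearest-neighbour distance detector) and the min-max normalization are assumptions chosen for brevity.

```python
import numpy as np

def zscore_scores(x):
    # Outlier score = absolute deviation from the mean, in standard deviations.
    return np.abs(x - x.mean()) / x.std()

def knn_distance_scores(x, k=2):
    # Outlier score = mean distance to the k nearest neighbours (1-D data).
    diffs = np.abs(x[:, None] - x[None, :])
    diffs.sort(axis=1)
    return diffs[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance (0)

def normalize(scores):
    # Min-max scaling so scores from different detectors are comparable.
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def ensemble_scores(x, detectors):
    # Combine arbitrary detectors by averaging their normalized scores.
    return np.mean([normalize(d(x)) for d in detectors], axis=0)

data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 8.0])  # last point is an outlier
scores = ensemble_scores(data, [zscore_scores, knn_distance_scores])
print(scores.argmax())  # → 5, the index of the outlying point
```

The modular part is the `detectors` list: any callable that maps a dataset to per-point scores can be plugged in without changing the combiner.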
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts that convey web content and system functionality, and because users interact with systems by means of interface signs. In light of the above, applying semiotics (i.e., the study of signs) to web interface signs can reveal new and important perspectives on web user interface design and evaluation. The thesis focuses mainly on web interface signs and uses the theory of semiotics as a background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of, from a semiotic perspective, when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies were conducted with a total of 74 participants in Finland. The steps of a design science research process were followed while the studies were designed and conducted: (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication.
The data was collected using observations in a usability testing lab, analytical (expert) inspection, questionnaires, and structured and semi-structured interviews. User behaviour analysis, qualitative analysis, and statistics were used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user should know in order to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs and a set of features related to ontology mapping in interpreting that meaning. Thirdly, the thesis explores the value of integrating semiotic concepts into usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation, SIDE) for interface sign design and evaluation, in order to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics for designing and evaluating interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability) and (b) the contributions of the SIDE framework from the evaluators’ perspective.
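The performance metrics named above (thoroughness, validity, effectiveness) have conventional definitions in usability-evaluation research (e.g., Hartson, Andre and Williges); the thesis's exact formulas are not stated in this abstract, so the Python sketch below assumes those conventional definitions, with invented counts:

```python
def thoroughness(real_found, real_existing):
    # Fraction of the real usability problems that the evaluation found.
    return real_found / real_existing

def validity(real_found, total_reported):
    # Fraction of reported issues that turned out to be real problems.
    return real_found / total_reported

def effectiveness(real_found, real_existing, total_reported):
    # Conventional combined measure: thoroughness * validity.
    return (thoroughness(real_found, real_existing)
            * validity(real_found, total_reported))

# Hypothetical evaluation: 16 issues reported, 8 real, 10 real problems exist.
print(thoroughness(8, 10))       # → 0.8
print(validity(8, 16))           # → 0.5
print(effectiveness(8, 10, 16))  # thoroughness * validity
```

Under these definitions, an evaluation method can score high on thoroughness yet low on effectiveness if many of its reported issues are false positives.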
Abstract:
The objective of the present study was to establish a method for quantitative analysis of von Willebrand factor (vWF) multimeric composition using a mathematical framework based on curve fitting. Plasma vWF multimers from 15 healthy subjects and 13 patients with advanced pulmonary vascular disease were analyzed by Western immunoblotting followed by luminography. Quantitative analysis of luminographs was carried out by calculating the relative densities of low, intermediate and high molecular weight fractions using laser densitometry. For each densitometric peak (representing a given fraction of vWF multimers) a mean area value was obtained using data from all group subjects (patients and normal individuals) and plotted against the distance between the peak and IgM (950 kDa). Curves were constructed for each group using nonlinear fitting. Results indicated that highly accurate curves could be obtained for healthy controls and patients, with respective coefficients of determination (r²) of 0.9898 and 0.9778. Differences were observed between patients and normal subjects regarding curve shape, coefficients and the region of highest protein concentration. We conclude that the method provides accurate quantitative information on the composition of vWF multimers and may be useful for comparisons between groups and possibly treatments.
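As an illustration of the curve-fitting approach, the sketch below fits a nonlinear model to synthetic densitometric data and computes the coefficient of determination r², as reported in the study. The exponential model, the parameter values, and the data are invented for the example; the study's actual fitting function is not specified in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(d, a, b, c):
    # Hypothetical peak-area model: exponential decay with an offset.
    return a * np.exp(-b * d) + c

# Synthetic data: mean densitometric peak area vs. distance from the
# 950-kDa IgM reference (arbitrary units, not the study's measurements).
distance = np.linspace(0.0, 10.0, 20)
rng = np.random.default_rng(0)
area = model(distance, 5.0, 0.4, 1.0) + rng.normal(0.0, 0.05, distance.size)

params, _ = curve_fit(model, distance, area, p0=(1.0, 0.1, 0.0))
fitted = model(distance, *params)

# Coefficient of determination (r^2) of the nonlinear fit.
ss_res = np.sum((area - fitted) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 3))  # close to 1 for a good fit
```

Comparing groups, as the study does, would then amount to fitting one such curve per group and comparing the fitted parameters and curve shapes.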
Abstract:
Enveloped viruses always gain entry into the cytoplasm by fusion of their lipid envelope with a cell membrane. Some enveloped viruses fuse directly with the host cell plasma membrane after virus binding to the cell receptor. Other enveloped viruses enter the cells by the endocytic pathway, and fusion depends on the acidification of the endosomal compartment. In both cases, virus-induced membrane fusion is triggered by conformational changes in viral envelope glycoproteins. Two different classes of viral fusion proteins have been described on the basis of their molecular architecture. Several structural studies have permitted the elucidation of the mechanisms of membrane fusion mediated by class I and class II fusion proteins. In this article, we review a number of results obtained by our laboratory and by others that suggest that the mechanisms involved in rhabdovirus fusion are different from those used by the two well-studied classes of viral glycoproteins. We focus our discussion on the electrostatic nature of virus binding and interaction with membranes, especially through phosphatidylserine, and on the reversibility of the conformational changes of the rhabdovirus glycoprotein involved in fusion. Taken together, these data suggest the existence of a third class of fusion proteins and support the idea that new insights should emerge from studies of membrane fusion mediated by the G protein of rhabdoviruses. In particular, the elucidation of the three-dimensional structure of the G protein, or even of the fusion peptide, at different pH values might provide valuable information for understanding the fusion mechanism of this new class of fusion proteins.
Abstract:
This master’s thesis was made in order to answer the question of how the integration of marketing communications, and the related decision making, could be improved in a geographically dispersed service organization that has gone through a merger. The main focus was on the effects of the organizational design dimensions on the integration of marketing communications and the related decision making. A case study as a research strategy offered a perfect frame for an exploratory study, and the data collection was conducted through semi-structured interviews and observation. The main finding was that within the chosen design dimensions of decentralization, coordination, and power, specific factors can be found that negatively affect the integration of marketing communications in a geographically dispersed organization. The effects are seen mostly in the decision-making processes, roles, and the division of responsibility, which affect the other dimensions and thereby the integration. In a post-merger situation, the coordination dimension, and especially information asymmetry and information flow, seem to have the largest effect on the integration of marketing communications. Asymmetric information distribution, combined with a lack of business and marketing education, resulted in low self-assurance and, in the end, in fragmented management and an inability to set targets and make independent decisions. In conclusion, the organizational design dimensions can be used to evaluate the effects of a merger on the integration process of marketing communications.
Abstract:
This research seeks to find out what benefits employees expect the organization of data governance to bring to an organization, and how it supports implementing automated marketing capabilities. The quality and usability of data are crucial for organizations to meet various business needs. Organizations have ever more data and technology available that can be utilized, for example, in automated marketing. Data governance addresses the organization of decision rights and accountabilities for the management of an organization’s data assets. Automated marketing means sending the right message, to the right person, at the right time, automatically. The research is a single case study conducted in a Finnish ICT company. The case company was starting to organize data governance and implement automated marketing capabilities at the time of the research. The empirical material consists of interviews with employees of the case company. Content analysis is used to interpret the interviews in order to answer the research questions. The theoretical framework of the research is derived from the morphology of data governance. The findings indicate that the employees expect the organization of data governance, among other things, to improve customer experience, improve sales, provide the ability to identify an individual customer’s life situation, ensure that data is handled according to regulations, and improve operational efficiency. The organization of data governance is expected to solve problems in customer data quality that currently hinder the implementation of automated marketing capabilities.
Abstract:
This thesis presents an overview of the Open Data research area and the quantity of evidence, and establishes the research evidence base through a Systematic Mapping Study (SMS). A total of 621 publications published between 2005 and 2014 were identified, of which 243 were selected in the review process. The thesis highlights the implications of the proliferation of Open Data principles in the emerging era of accessibility, reusability, and sustainability of data transparency. The findings of the mapping study are described through quantitative and qualitative measures based on organization affiliation, country, year of publication, research method, star rating, and the units of analysis identified. Furthermore, the units of analysis were categorized into development lifecycle, linked open data, type of data, technical platforms, organizations, ontology and semantics, adoption and awareness, intermediaries, security and privacy, and supply of data, which are important components in providing quality open data applications and services. The results of the mapping study help organizations (such as academia, government, and industry), researchers, and software developers to understand the existing trends in open data, the latest research developments, and the demands of future research. In addition, the proposed conceptual framework of Open Data research can be adopted and expanded to strengthen and improve current open data applications.
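The quantitative side of such a mapping study largely amounts to counting selected publications per facet (year, country, unit of analysis, and so on). A minimal Python sketch of that bookkeeping, with invented entries but category names taken from the study's own scheme:

```python
from collections import Counter

# Hypothetical subset of mapped publications: (year, unit-of-analysis
# category). The categories follow the mapping study's scheme; the
# individual entries are invented for illustration.
publications = [
    (2009, "linked open data"),
    (2011, "adoption and awareness"),
    (2011, "linked open data"),
    (2013, "security and privacy"),
    (2013, "linked open data"),
]

by_year = Counter(year for year, _ in publications)
by_category = Counter(cat for _, cat in publications)

print(by_year[2011])                     # publications mapped to 2011
print(by_category.most_common(1)[0][0])  # most frequent category in the sample
```

In a real mapping study each facet would be a column of the extraction spreadsheet; the same counting then yields the per-year and per-category distributions reported in the results.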
Abstract:
This thesis introduces heat demand forecasting models generated using data mining algorithms. The forecast spans one full day and can be used in regulating the heat consumption of buildings. For training the data mining models, two years of heat consumption data from a case building and weather measurement data from the Finnish Meteorological Institute are used. The thesis utilizes Microsoft SQL Server Analysis Services data mining tools to generate the models and the CRISP-DM process framework to implement the research. The results show that the built models can predict heat demand at best with mean absolute percentage errors of 3.8% for the 24-h profile and 5.9% for the full day. A deployment model for integrating the generated data mining models into an existing building energy management system is also discussed.
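The reported percentage-error figures can be computed as follows, assuming the standard mean absolute percentage error (MAPE) definition; the hourly demand and forecast values below are invented for illustration:

```python
import numpy as np

def mape(actual, forecast):
    # Mean absolute percentage error: average of |error| / |actual|, in %.
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical hourly heat demand (kW) vs. a model's forecast.
actual = [50.0, 48.0, 52.0, 60.0]
forecast = [51.0, 47.0, 53.0, 57.0]
print(round(mape(actual, forecast), 2))  # → 2.75
```

Note that MAPE is undefined when an actual value is zero, which is rarely an issue for heat demand but matters for loads that can drop to zero.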
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a great impact on researchers, as the data obtained from various experiments needs to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT came up with a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce results. REX provides an interactive application that lets biologists directly enter values and run the required analysis with a single click. The program processes the given data in the background and prints results rapidly. Due to the growth of data and the load on the server, the interface had gained problems concerning time consumption, a poor GUI, data storage issues, security, a minimal interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved, making REX a better application for the future. The old REX was developed using Python Django; the new version is implemented with Vaadin, a Java framework for developing web applications with rich components, which provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionalities, including IST bulk plotting and image segmentation, was selected and implemented using Vaadin. I programmed 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation, and plotting. The application is optimized to allow further functionalities to be migrated with ease from the old REX.
Future development is focused on including high-throughput screening functions along with gene expression database handling.