964 results for Data Warehousing Systems
Abstract:
The bioavailability of metals and their potential for environmental pollution depend not simply on total concentrations but are to a great extent determined by their chemical form. Consequently, knowledge of aqueous metal species is essential when investigating potential metal toxicity and mobility. The overall aim of this thesis is thus to determine the species of major and trace elements and the size distribution among the different forms (e.g. ions, molecules and mineral particles) in selected metal-enriched boreal river and estuarine systems by utilising filtration techniques and geochemical modelling. On the basis of the spatial physicochemical patterns found, the fractionation and complexation processes of elements (mainly related to input of humic matter and pH change) were examined. Dissolved (<1 kDa), colloidal (1 kDa-0.45 μm) and particulate (>0.45 μm) size fractions of sulfate, organic carbon (OC) and 44 metals/metalloids were investigated in the extremely acidic Vörå River system and its estuary in western Finland, and in four river systems in SW Finland (Sirppujoki, Laajoki, Mynäjoki and Paimionjoki) that are largely affected by soil erosion and acid sulfate (AS) soils. In addition, geochemical modelling was used to predict the formation of free ions and complexes in the investigated waters. One of the most important findings of this study is that the very large amounts of metals known to be released from AS soils (including Al, Ca, Cd, Co, Cu, Mg, Mn, Na, Ni, Si, U and the lanthanoids) occur and can prevail mainly in toxic forms throughout acidic river systems: as free ions and/or sulfate complexes. This has serious effects on the biota; dissolved Al in particular is expected to have acute effects on fish and other organisms, but other potentially toxic dissolved elements (e.g. Cd, Cu, Mn and Ni) can also have fatal effects on the biota in these environments. In upstream areas, which are generally relatively forested (higher pH and OC contents), fewer bioavailable elements (including Al, Cu, Ni and U) may be found due to complexation with the more abundantly occurring colloidal OC. In the rivers in SW Finland total metal concentrations were relatively high, but most of the elements occurred largely in colloidal or particulate form, and even elements expected to be very soluble (Ca, K, Mg, Na and Sr) occurred to a large extent in colloidal form. According to geochemical modelling, these patterns can only to a limited extent be explained by in-stream metal complexation/adsorption. Instead, there were strong indications that the high metal concentrations and dominant solid fractions were largely caused by erosion of metal-bearing phyllosilicates. A strong influence of AS soils, known to exist in the catchment, could be clearly distinguished in the Sirppujoki River, which had very high dissolved concentrations of a metal sequence typical of AS soils (Ba, Br, Ca, Cd, Co, K, Mg, Mn, Na, Ni, Rb and Sr). In the Paimionjoki River, metal concentrations (including Ba, Cs, Fe, Hf, Pb, Rb, Si, Th, Ti, Tl and V; not typical of AS soils in the area) were high, but the main cause of this was found to be erosion of metal-bearing phyllosilicates, and thus these metals occurred dominantly in the less toxic colloidal and particulate fractions. In the two nearby rivers (Laajoki and Mynäjoki) there was an influence of AS soils, but it was largely masked by eroded phyllosilicates.
Consequently, rivers draining clay plains sensitive to erosion, like those in SW Finland, generally have high background metal concentrations due to erosion. Thus, relying only on semi-dissolved (<0.45 μm) concentrations obtained in routine monitoring, or on geochemical modelling based on such data, can lead to a great overestimation of water toxicity in this environment. The potentially toxic elements that are of concern in AS soil areas will ultimately be precipitated in the recipient estuary or sea, where the acidic metal-rich river water is gradually diluted/neutralised with brackish seawater. Along such a rising pH gradient, Al, Cu and U precipitate first, together with organic matter, closest to the river mouth. Manganese is relatively persistent in solution and thus precipitates further down the estuary as Mn oxides, together with elements such as Ba, Cd, Co, Cu and Ni. Iron oxides, on the contrary, are not important scavengers of metals in the estuary; they are predicted to be associated only with As and PO4.
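As a worked illustration of the filtration-based size fractionation used above, the sketch below (Python, with invented concentrations) shows how the dissolved, colloidal and particulate fractions follow arithmetically from measurements on unfiltered water, a 0.45 μm filtrate and a 1 kDa ultrafiltrate; the element values are hypothetical, not the thesis's data.

```python
# Sketch of the size-fractionation arithmetic described above, assuming
# hypothetical concentrations (µg/L) measured on unfiltered water, a
# 0.45 µm filtrate and a 1 kDa ultrafiltrate for each element.

samples = {
    # element: (total, <0.45 µm filtrate, <1 kDa ultrafiltrate)
    "Al": (1200.0, 900.0, 150.0),
    "Mn": (300.0, 280.0, 250.0),
    "Fe": (2500.0, 600.0, 40.0),
}

for element, (total, below_045um, below_1kda) in samples.items():
    dissolved = below_1kda                  # <1 kDa: free ions and small complexes
    colloidal = below_045um - below_1kda    # 1 kDa - 0.45 µm
    particulate = total - below_045um       # >0.45 µm: mineral particles etc.
    print(f"{element}: dissolved {dissolved / total:5.1%}, "
          f"colloidal {colloidal / total:5.1%}, "
          f"particulate {particulate / total:5.1%}")
```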
Abstract:
A fast-changing environment puts pressure on firms to share large amounts of information with their customers and suppliers. The terms information integration and information sharing are essential for facilitating a smooth flow of information throughout the supply chain, and they are used interchangeably in the research literature. By integrating and sharing information, firms aim to improve their logistics performance. Firms share information with their suppliers and customers using traditional communication methods (telephone, fax, email, written and face-to-face contacts) and advanced or modern communication methods such as electronic data interchange (EDI), enterprise resource planning (ERP), web-based procurement systems, electronic trading systems and web portals. Adopting new ways of using IT is one important resource for staying competitive in a rapidly changing market (Saeed et al. 2005, 387), and an information system that provides people with the information they need for performing their work will support company performance (Boddy et al. 2005, 26). The purpose of this research has been to test and understand the relationship between information integration with key suppliers and/or customers and a firm's logistics performance, especially when information technology (IT) and information systems (IS) are used for integrating information. Both quantitative and qualitative research methods have been used. Special attention has been paid to the scope, level and direction of information integration (Van Donk & van der Vaart 2005a). In addition, the four elements of integration (Jahre & Fabbe-Costes 2008) are closely tied to the frame of reference: integration of flows, integration of processes and activities, integration of information technologies and systems, and integration of actors. The study found that information integration has a low positive relationship to operational performance and a medium positive relationship to strategic performance. The potential performance improvements found in this study range from efficiency, delivery and quality improvements (operational) to profit, profitability or customer satisfaction improvements (strategic). The results indicate that although information integration has an impact on a firm's logistics performance, not all performance improvements have been achieved. This study also found that the use of IT and IS has a moderate positive relationship to information integration. Almost all case companies agreed that the use of IT and IS could facilitate information integration and improve their logistics performance. The case companies felt that implementing a web portal or a data bank would benefit them by enhancing their performance and increasing information integration.
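To make the reported relationship strengths concrete, here is a minimal sketch of the kind of correlation testing the quantitative part of such a study might perform; the Likert-scale scores and variable names are invented for illustration and are not the study's instrument or data.

```python
# Minimal sketch of relationship testing between integration and
# performance, assuming hypothetical survey scores (1-5 Likert scale).
from statistics import correlation  # Python 3.10+

information_integration = [2, 3, 3, 4, 4, 5, 2, 3, 4, 5]
operational_performance = [2, 2, 3, 3, 4, 4, 3, 3, 3, 4]
strategic_performance   = [1, 3, 3, 4, 4, 5, 2, 4, 4, 5]

r_op = correlation(information_integration, operational_performance)
r_st = correlation(information_integration, strategic_performance)
print(f"integration vs operational performance: r = {r_op:.2f}")
print(f"integration vs strategic performance:   r = {r_st:.2f}")
```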
Abstract:
Modern society is becoming increasingly dependent on software applications. These run on processors, use memory and account for controlling functionalities that are often taken for granted. Typically, applications adjust their functionality in response to a certain context that is provided by, or derived from, an informal environment of varying quality. To rigorously model the dependence of an application on a context, the details of the context are abstracted and the environment is assumed to be stable and fixed. However, in a context-aware ubiquitous computing environment populated by autonomous agents, a context and its quality parameters may change at any time. This raises the need to derive the current context and its qualities at runtime. It also implies that a context is never certain and may be subjective, issues captured by the context's quality parameter of experience-based trustworthiness. Given this, the research question of this thesis is: in what logical topology and by what means may context provided by autonomous agents be derived and formally modelled to serve the context-awareness requirements of an application? This research question also stipulates that the context derivation needs to incorporate the quality of the context. In this thesis, we focus on the quality-of-context parameter of trustworthiness, based on experiences that have a level of certainty, and on referral experiences, thus making trustworthiness reputation based. Hence, we seek a basis on which to reason about and analyse the inherently inaccurate context derived by autonomous agents populating a ubiquitous computing environment, in order to formally model context-awareness. More specifically, the contribution of this thesis is threefold: (i) we propose a logical topology of context derivation and a method of calculating its trustworthiness, (ii) we provide a general model for storing experiences and (iii) we formalise the dependence between the logical topology of context derivation and its experience-based trustworthiness. These contributions enable the abstraction of a context and its quality parameters to a Boolean decision at runtime that may be formally reasoned with. We employ the Action Systems framework for this modelling. The thesis is a compendium of the author's scientific papers, which are republished in Part II. Part I introduces the field of research and provides the connecting elements that make the thesis a coherent treatment of the research question. In Part I we also review a significant body of related literature in order to better illustrate our contributions to the research field.
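A minimal sketch of how experience-based, reputation-based trustworthiness could be reduced to a Boolean decision at runtime is given below; the weighting scheme, discount factor and threshold are illustrative assumptions, not the thesis's formal model.

```python
# Illustrative sketch (not the thesis's formal model) of reducing
# reputation-based trustworthiness to a Boolean decision at runtime.
# Each experience carries an outcome in [0, 1] and a certainty weight;
# referral experiences are discounted relative to direct ones.

def trustworthiness(direct, referrals, referral_discount=0.5):
    """Certainty-weighted mean of direct and discounted referral experiences."""
    weighted = [(outcome, certainty) for outcome, certainty in direct]
    weighted += [(outcome, certainty * referral_discount)
                 for outcome, certainty in referrals]
    total_weight = sum(c for _, c in weighted)
    if total_weight == 0:
        return 0.0  # no evidence: treat the context source as untrusted
    return sum(o * c for o, c in weighted) / total_weight

direct = [(1.0, 0.9), (0.8, 0.7)]      # own experiences with a context provider
referrals = [(0.2, 0.8), (0.9, 0.4)]   # experiences reported by other agents

score = trustworthiness(direct, referrals)
trusted = score >= 0.6                 # Boolean abstraction for formal reasoning
print(f"trustworthiness {score:.2f} -> trusted: {trusted}")
```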
Abstract:
Data is the most important asset of a company in the information age. Other assets, such as technology, facilities or products, can be copied or reverse-engineered, and employees can be hired away, but data remains unique to every company. As data management topics slowly move from unknown unknowns to known unknowns, tools to evaluate and manage data properly are being developed and refined. Many projects are in progress today to develop various maturity models for evaluating information and data management practices. These maturity models come in many shapes and sizes: from short and concise ones meant for a quick assessment, to complex ones that call for an expert assessment by experienced consultants. In this paper, several of them, made not only by external inter-organizational groups and authors but also developed internally at a Major Energy Provider Company (MEPC), are juxtaposed and thoroughly analyzed. Apart from analyzing the available maturity models related to Data Management, this paper also selects the one with the most merit and describes and analyzes its use in performing a maturity assessment in MEPC. The utility of maturity models is two-fold: descriptive and prescriptive. Besides recording the current state of Data Management practices maturity through the assessments, the maturity model is also used to chart the way forward. Thus, after the current situation is presented, analysis and recommendations on how to improve it, based on the definitions of the higher levels of maturity, are given. Generally, the main trend observed was the widening of the Data Management field to include more business and "soft" areas (as opposed to technical ones) and a change of focus towards the business value of data, while assuming that the underlying IT systems for managing data are "ideal", that is, left to the purely technical disciplines to design and maintain. This trend is present not only in Data Management but in other technological areas as well, where more and more attention is given to the innovative use of technology, while acknowledging that the strategic importance of IT as such is diminishing.
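As a hedged illustration of the descriptive and prescriptive uses of a maturity model, the sketch below records hypothetical per-dimension scores and the gap to a target level; the dimension names and levels are invented, not MEPC's actual model.

```python
# Hypothetical sketch of the two-fold use of a maturity model: the
# descriptive part records current scores per Data Management dimension,
# the prescriptive part charts the gap to a target maturity level.

current = {"Data Governance": 2, "Data Quality": 3,
           "Metadata": 1, "Data Architecture": 2}
target_level = 4

print("dimension            current  gap to target")
for dimension, level in sorted(current.items(), key=lambda kv: kv[1]):
    print(f"{dimension:<20} {level:^7}  {target_level - level}")
```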
Abstract:
The goal of this study is to create a new inventory valuation process for The Switch Drive Systems and to improve its inventory management practices. Regarding inventories, the main problems in the case company are that it lacks consistent valuation methods throughout the company and that the information in its ERP system is not reliable. The research is a qualitative case study. The empirical data were gathered through observation and unstructured interviews. The research shows that the material flow process and the inventory valuation must be divided and handled separately, but they should interact with each other. The result is a new inventory valuation process that takes many factors of the material flow process into consideration in order to obtain a reliable value for inventories.
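As an illustration of what one consistent valuation method fed by the material flow process might look like, the sketch below computes a weighted average cost; the figures are invented, and the thesis's actual process is company-specific.

```python
# A minimal sketch of one consistent valuation method (weighted average
# cost), illustrating how receipts from the material flow feed valuation.

def weighted_average_cost(receipts):
    """receipts: list of (quantity, unit_cost) from the material flow."""
    total_qty = sum(q for q, _ in receipts)
    total_value = sum(q * c for q, c in receipts)
    return total_value / total_qty if total_qty else 0.0

receipts = [(100, 12.00), (50, 13.50), (25, 11.80)]  # illustrative figures
unit_cost = weighted_average_cost(receipts)
print(f"inventory value: {sum(q for q, _ in receipts) * unit_cost:.2f} "
      f"at {unit_cost:.2f} per unit")
```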
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication challenges, such as wiring complexity, communication latency and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes provide the possibility of designing a high-performance system in a limited chip area. The major advantages of 3D NoCs are considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support, and we address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption, so we propose three different approaches for alleviating congestion in the network. The first approach is based on measuring congestion information in different regions of the network, distributing this information over the network, and utilising it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique that makes better routing decisions when traffic information for different routes is available. Faults affect performance significantly, as packets must then take longer paths in order to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths, as long as such paths exist. The unique characteristic of these methods is that they tolerate faults while also maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches that bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be efficiently routed within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
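As a hedged sketch of the first, congestion-aware approach, the Python fragment below makes a minimal-path routing decision in a 2D mesh by comparing congestion values reported for the admissible neighbours; the coordinates, port names and congestion map are illustrative assumptions, not the thesis's algorithm.

```python
# Illustrative sketch of congestion-aware adaptive routing: among the
# shortest-path output ports in a 2D mesh, pick the one whose next-hop
# router reports the lowest congestion (e.g. buffer occupancy).

def route(current, destination, congestion):
    """Return the least congested minimal-path output port (or None at dest)."""
    x, y = current
    dx, dy = destination[0] - x, destination[1] - y
    candidates = []
    if dx:
        candidates.append(("EAST" if dx > 0 else "WEST",
                           (x + (1 if dx > 0 else -1), y)))
    if dy:
        candidates.append(("NORTH" if dy > 0 else "SOUTH",
                           (x, y + (1 if dy > 0 else -1))))
    if not candidates:
        return None  # packet has arrived
    port, _ = min(candidates, key=lambda c: congestion[c[1]])
    return port

congestion = {(1, 0): 0.7, (0, 1): 0.2}   # occupancy per neighbouring router
print(route((0, 0), (2, 3), congestion))  # -> NORTH (the less congested hop)
```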
Abstract:
The accuracy of the modelling of rotor systems composed of rotors, oil-film bearings and a flexible foundation is evaluated and discussed in this paper. Different models have been validated by comparing experimental results with numerical results. The experimental data were obtained with a fully instrumented test rig with two shafts and four oil-film bearings. The fault models are then used within a model-based malfunction identification procedure, based on a least-squares fitting approach applied in the frequency domain. The capability of distinguishing different malfunctions (such as unbalance, rotor bow, coupling misalignment and others) from shaft vibrations measured at the bearings has been investigated, even when the malfunctions can create similar effects.
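A minimal sketch of frequency-domain least-squares fault identification is shown below: given a matrix of unit fault responses at the measurement points and a measured vibration vector, the fault magnitudes are estimated by least squares. The matrices here are randomly invented placeholders, not the test-rig model.

```python
# Hedged sketch of model-based fault identification by least-squares
# fitting in the frequency domain: columns of H are unit fault responses
# at the bearing measurement points, v is the measured vibration vector,
# and the fault magnitudes f minimise ||H f - v||.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 2)) + 1j * rng.standard_normal((6, 2))
true_f = np.array([1.0 + 0.5j, 0.0 + 0.0j])      # e.g. unbalance, no bow
v = H @ true_f + 0.01 * rng.standard_normal(6)   # measured vibrations + noise

f_hat, residual, *_ = np.linalg.lstsq(H, v, rcond=None)
print("estimated fault parameters:", np.round(f_hat, 3))
# The residual indicates which assumed fault type best explains the data.
```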
Abstract:
The necessity of integrating EC (Electronic Commerce) and enterprise systems follows from the integrated nature of enterprise systems. The proven benefits of EC in providing competitive advantages force enterprises to adopt EC and integrate it with their enterprise systems. Integration is a complex task: it must facilitate a seamless flow of information and data between different systems within and across enterprises. Different systems have different platforms, so integrating systems with different platforms and infrastructures requires integration technologies such as middleware, SOA (Service-Oriented Architecture), ESB (Enterprise Service Bus), JCA (J2EE Connector Architecture), and B2B (Business-to-Business) integration standards. Major software vendors, such as Oracle, IBM, Microsoft, and SAP, suggest various solutions to EC and enterprise systems integration problems. There is little literature covering the integration of EC and enterprise systems in detail: most studies in this area have focused on the factors that influence the adoption of EC by enterprises, or provide only limited information about a specific platform or integration methodology in general. Therefore, this thesis covers the technical details of EC and enterprise systems integration, addressing both the adoption factors and the integration solutions. In this study, a large body of literature was reviewed and different solutions were investigated. Different enterprise integration approaches as well as the most popular integration technologies were examined, and various methodologies for integrating EC and enterprise systems were studied in detail. The factors influencing the adoption of EC in enterprises were derived from previous literature and categorized into technical, social, managerial, financial, and human resource factors. Moreover, integration technologies were categorized based on three levels of integration: data, application, and process. In addition, different integration approaches were identified and categorized based on their communication model and platform, and different EC integration solutions were investigated and categorized according to the identified integration approaches. By considering these different aspects of integration, this study is a valuable asset to architects, developers, and system integrators seeking to integrate EC with enterprise systems.
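As a small illustration of the data level of integration, the sketch below maps an order from a hypothetical EC front end into the flat records a hypothetical ERP system expects; all field names are assumptions, and real middleware or ESB products perform such mappings declaratively.

```python
# Illustrative sketch of data-level integration: a small adapter that
# translates a nested EC order document into flat ERP order lines.

def ec_order_to_erp(ec_order):
    """Map a hypothetical EC order format to a hypothetical ERP format."""
    return [
        {
            "order_no": ec_order["id"],
            "customer": ec_order["customer"]["code"],
            "sku": line["sku"],
            "qty": line["quantity"],
        }
        for line in ec_order["lines"]
    ]

ec_order = {"id": "EC-1001",
            "customer": {"code": "C042", "name": "Acme"},
            "lines": [{"sku": "A-1", "quantity": 3},
                      {"sku": "B-7", "quantity": 1}]}
for erp_line in ec_order_to_erp(ec_order):
    print(erp_line)
```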
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems and on the impact of cloud computing technology on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting healthcare IT infrastructure as well as improving healthcare services. This thesis therefore explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the use of likelihood calculations based on Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to model error covariance matrix parameters.
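The central idea of likelihood evaluation via Kalman filter outputs can be sketched in its simplest scalar linear-Gaussian setting: the filter's innovations and their variances yield the data log-likelihood, which an MCMC sampler can then evaluate for each proposed parameter. The model and numbers below are illustrative, not a weather model.

```python
# Sketch of likelihood calculation via Kalman filtering outputs for a
# scalar linear-Gaussian state-space model; each candidate parameter's
# log-likelihood could feed an MCMC acceptance step.
import math

def kf_log_likelihood(observations, a, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of obs under x_k = a*x_{k-1}+N(0,q), y_k = x_k+N(0,r)."""
    m, p, loglik = m0, p0, 0.0
    for y in observations:
        m, p = a * m, a * a * p + q          # prediction step
        s = p + r                            # innovation variance
        innovation = y - m
        loglik += -0.5 * (math.log(2 * math.pi * s) + innovation**2 / s)
        k = p / s                            # Kalman gain; update step
        m, p = m + k * innovation, (1 - k) * p
    return loglik

obs = [0.3, 0.5, 0.1, -0.2, 0.4]
for a in (0.5, 0.9):                         # candidate model parameters
    print(f"a = {a}: log-likelihood = {kf_log_likelihood(obs, a, 0.1, 0.2):.3f}")
```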
Abstract:
Pumping processes requiring a wide range of flow are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, the use of variable-speed control allows the required output for the process to be delivered with a varying number of operated pump units and selected rotational speed references. However, the optimization of parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system in varying operating conditions. The information available for system modelling in typical parallel pumping applications, such as wastewater treatment and various cooling and water delivery tasks, can be limited, and the lack of real-time operation point monitoring often limits accurate energy efficiency optimization. Hence, alternative, easily implementable control strategies that can be adopted with minimal system data are needed. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in systems where each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the suitable operating conditions for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and the sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased, without installing new, more efficient components, simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining a speed reference for each parallel pump unit that promotes the energy-efficient operation of the pumping system.
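A hedged sketch of characteristic-curve-based operation point estimation is given below: the pump's nominal QH curve is scaled to the current relative speed with the affinity laws and intersected with a static-plus-quadratic system curve; all coefficients are invented for illustration.

```python
# Illustrative sketch of operation point estimation from a pump's
# characteristic curve: H = a - b*Q^2 at nominal speed, scaled by the
# affinity laws (Q ~ n, H ~ n^2), intersected with a system curve.

def pump_head(q, n_ratio, a=40.0, b=0.002):
    """Nominal QH curve scaled to relative speed n_ratio via affinity laws."""
    return a * n_ratio**2 - b * q**2

def system_head(q, h_static=10.0, k=0.001):
    return h_static + k * q**2

def operating_flow(n_ratio, q_max=200.0, steps=20000):
    """Scan for the flow where the pump and system curves intersect."""
    best = min((abs(pump_head(q, n_ratio) - system_head(q)), q)
               for q in (i * q_max / steps for i in range(steps + 1)))
    return best[1]

for n_ratio in (1.0, 0.9, 0.8):  # relative speed from the frequency converter
    q = operating_flow(n_ratio)
    print(f"n/n0 = {n_ratio}: Q = {q:.1f}, H = {system_head(q):.1f}")
```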
Abstract:
Despite the fact that the literature on Business Intelligence and managerial decision-making is extensive, relatively little effort has been made to research the relationship between them. This field of study has become important since the amount of data in the world is growing every second. Companies require capabilities and resources in order to utilize structured and unstructured data from internal and external data sources. Today's Business Intelligence technologies enable managers to utilize data effectively in decision-making. Based on the prior literature, the empirical part of the thesis identifies the enablers and constraints in the computer-aided managerial decision-making process. The theoretical part provides a preliminary understanding of the research area through a literature review. The key concepts, such as Business Intelligence and managerial decision-making, are explored by reviewing the relevant literature; additionally, different data sources as well as data forms are analyzed in further detail. All key concepts are taken into account when the empirical part is carried out. The empirical part develops an understanding of the real-world situation with respect to the themes covered in the theoretical part.
Three selected case companies are analyzed through statements that are considered critical prerequisites for successful computer-aided managerial decision-making. The case study analysis, which forms part of the empirical section, enables the researcher to examine the relationship between Business Intelligence and managerial decision-making. Based on the findings of the case study analysis, the researcher identifies the enablers and constraints through the case study interviews. The findings indicate that the constraints have a highly negative influence on the decision-making process. In addition, the managers are aware of the positive implications that Business Intelligence has for decision-making, but not all possibilities are yet utilized. As the main result of this study, a data-driven framework for managerial decision-making is introduced. This framework can be used when managerial decision-making processes are evaluated and analyzed.
Abstract:
The Swedish public health care organisation could very well be undergoing its most significant change since its specialisation during the late 19th and early 20th century. At the heart of this change is a move from manual patient journals to electronic health records (EHR). EHR are complex, integrated, organisation-wide information systems (IS) that promise great benefits and value while also presenting great challenges to the organisation. The Swedish public health care is not the first organisation to implement integrated IS, and it is by no means alone in its quest to realise the potential benefits and value that such systems have to offer. As organisations invest in IS they embark on a journey of value creation and capture: a journey where a cost-based approach towards IS investments is replaced with a value-centric focus, and where the main challenges lie in the practical day-to-day task of finding ways to intertwine technology, people and business processes. This has, however, proven to be a problematic task. The problematic situation arises from a shift of perspective regarding how to manage IS in order to gain value: a shift from technology delivery to benefits delivery, and from an IS implementation plan to a change management plan. The shift gives rise to challenges related to the intangibility of IS and the elusiveness of value. As a response to these challenges, the field of IS benefits management has emerged, offering a framework and a process to better understand and formalise benefits realisation activities. In this thesis the benefits realisation efforts of three Swedish hospitals within the same county council are studied. The thesis focuses on the participants of benefits analysis projects: their perceptions, judgments, negotiations and descriptions of potential benefits. The purpose is to address the process in which organisations seek to identify which potential IS benefits to pursue and realise, in order to better understand what affects the process, so that the realisation of potential IS benefits can be supported. A qualitative case study research design is adopted and provides a framework for sample selection, data collection and data analysis, as well as for discussions of validity, reliability and generalisability. The findings displayed a benefits fluctuation: participants' perceptions of what constituted potential benefits and value changed throughout the formal benefits management process. Issues like structure, knowledge, expectation and experience affected perception differently, and this in the end changed the amount and composition of potential benefits and value. Five dimensions of benefits judgment were identified, which participants used when finding accommodations of potential benefits and value to pursue. The identified dimensions affected participants' perceptions, which in turn affected the amount and composition of potential benefits. During the formal benefits management process, participants shifted between judgment dimensions; these movements emerged through debates and interactions between participants. Judgments based on what was perceived as expected due to one's role, and on what was perceived as best for the organisation as a whole, were the two dominant benefits judgment dimensions. A benefits negotiation was also identified. Negotiations were divided into two main categories, rational and irrational, depending on participants' drive when initiating and participating in negotiations.
In each category, three different types of negotiation were identified, each having different characteristics and generating different outcomes. A benefits negotiation process was also identified that displayed management challenges corresponding to its five phases. Furthermore, a discrepancy was found between how IS benefits are spoken of and how actions of IS benefits realisation are understood: a discrepancy between an evaluation focus and a realisation focus towards IS value creation. An evaluation focus describes IS benefits as well-defined and measurable effects, whereas a realisation focus speaks of establishing and managing an ongoing place of value creation. The notion of valuescape was introduced in order to describe and support the understanding of IS value creation. Valuescape corresponds to a realisation focus and outlines a value configuration consisting of activities, logic, structure, drivers and the role of IS.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014