937 results for "integration of modalities"


Relevance: 90.00%

Publisher:

Abstract:

The importance of the regional level in research has risen in the last few decades, and a vast literature in fields such as evolutionary and institutional economics, network theories, innovation and learning systems, as well as sociology, has focused on regional-level questions. Recently, policy makers and regional actors have also begun to pay increasing attention to the knowledge economy and its needs in general, and to the connectivity and support structures of regional clusters in particular. Nowadays knowledge is generally considered the most important source of competitive advantage, but even the most specialised forms of knowledge are becoming a short-lived resource, for example due to the accelerating pace of technological change. This emphasizes the need for foresight activities at the national, regional and organizational levels and for the integration of foresight and innovation activities. In a regional setting this development poses great challenges, especially in regions that have no university and thus usually very limited resources for research activities. The research problem of this dissertation is accordingly related to the need to better incorporate the information produced by the foresight process so that it can facilitate, and be used in, regional practice-based innovation processes. This dissertation is a constructive case study, the case being the Lahti region and the network-facilitating innovation policy adopted in that region. The dissertation consists of a summary and five articles; during the research process a construct, i.e. a conceptual model for solving this real-life problem, has been developed. It is also being implemented as part of the network-facilitating innovation policy in the Lahti region.

Relevance: 90.00%

Publisher:

Abstract:

The expansion of Web 2.0 has opened up a world of possibilities in online learning. In spite of the integration of these tools in education, major changes are required in the educational design of instructional processes. This paper presents an educational experience conducted by the Open University of Catalonia using the social network Facebook for the purpose of testing a learning model that uses a participation and collaboration methodology among users based on the use of open educational resources.
- The aim of the experience is to test an Open Social Learning (OSL) model, understood to be a virtual learning environment open to the Internet community, based on the use of open resources and on a methodology focused on the participation and collaboration of users in the construction of knowledge.
- The topic chosen for this experience on Facebook was "2.0 Journeys: online tools and resources". The objective of this five-week course was to provide students with resources for managing the various textual, photographic, audiovisual and multimedia materials resulting from a journey.
- The most important changes in the design and development of a course based on OSL concern the role of the teacher, the role of the student, the type of content and the methodology:
  - The teacher mixes with the participants, guiding them and offering the benefit of his/her experience and knowledge.
  - Students learn through their participation and collaboration with a mixed group of users.
  - The content is open and editable under different types of license that specify the level of accessibility.
  - The methodology of the course was based on the creation of a learning community able to self-manage its learning process. For this a facilitator was needed, and a central activity was established for people to participate and contribute in the community.
- We used an ethnographic methodology, complemented by questionnaires to students, in order to obtain results regarding the quality of this type of learning experience.
- Some of the data obtained raised questions to consider for future designs of educational situations based on OSL:
  - Difficulties in breaking the facilitator-centred structure
  - The change in the time required to adapt to the system and to achieve the objectives
  - Lack of commitment to free courses
  - The tendency to return to traditional ways of learning
  - Accreditation
This experience has taught all of us that education can happen at any time and in any place, but not in any way.

Relevance: 90.00%

Publisher:

Abstract:

The last two decades have seen rapid change in the global economic and financial situation; economic conditions in many small and large underdeveloped countries started to improve and they became recognized as emerging markets. This led to growth in the amount of global investment in these countries, partly spurred by expectations of higher returns, favorable risk-return opportunities, and better diversification alternatives for global investors. This process, however, has not been without problems, and it has emphasized the need for more information on these markets. In particular, the liberalization of financial markets around the world, the globalization of trade and companies, the recent formation of economic and regional blocks, and the rapid development of underdeveloped countries during the last two decades have brought a major challenge to the financial world and researchers alike. This doctoral dissertation studies one of the largest emerging markets, namely Russia. The motivation for investigating the Russian equity market includes, among other factors, its sheer size, the rapid and robust economic growth since the turn of the millennium, its future prospects for international investors, and a number of important financial reforms implemented since the early 1990s. Another interesting feature of the Russian economy that motivates studying the Russian market is Russia's 1998 financial crisis, considered one of the worst crises in recent times and affecting both developed and developing economies. Therefore, special attention has been paid to Russia's 1998 financial crisis throughout this dissertation. The thesis covers the period from the birth of the modern Russian financial markets to the present day; special attention is given to international linkages and the 1998 financial crisis. The study first identifies the risks associated with the Russian market and then deals with their pricing; finally, some insights about portfolio construction within the Russian market are presented. The first research paper of this dissertation considers the linkage of the Russian equity market to the world equity market by examining the international transmission of Russia's 1998 financial crisis utilizing the GARCH-BEKK model proposed by Engle and Kroner. The empirical results show evidence of a direct linkage between the Russian equity market and the world market in terms of both returns and volatility. However, the weakness of the linkage suggests that the Russian equity market was only partially integrated into the world market, even though contagion can be clearly seen during the crisis period. The second and third papers, co-authored with Mika Vaihekoski, investigate whether global, local and currency risks are priced in the Russian stock market from a US investor's point of view. Furthermore, the dynamics of these sources of risk are studied, i.e., whether the prices of the global and local risk factors are constant or time-varying. We utilize the multivariate GARCH-M framework of De Santis and Gérard (1998). Similarly to them, we find the price of global market risk to be time-varying. Currency risk is also found to be priced and highly time-varying in the Russian market. Moreover, our results suggest that the Russian market is partially segmented and that local risk is also priced in the market. The model also implies that the biggest impact on the US market risk premium comes from the world risk component, whereas the Russian risk premium is on average driven mostly by the local and currency components. The purpose of the fourth paper is to look at the relationship between the Russian stock and bond markets. The objective is to examine whether the correlations between the two asset classes are time-varying, using multivariate conditional volatility models. The Constant Conditional Correlation model of Bollerslev (1990), the Dynamic Conditional Correlation model of Engle (2002), and an asymmetric version of the Dynamic Conditional Correlation model of Cappiello et al. (2006) are used in the analysis. The empirical results do not support the assumption of constant conditional correlation: there is clear evidence of time-varying correlations between the Russian stock and bond markets, and both asset markets exhibit positive asymmetries. The implications of the results in this dissertation are useful for both companies and international investors who are interested in investing in Russia. Our results give useful insights to those involved in minimising or managing financial risk exposures, such as portfolio managers, international investors, risk analysts and financial researchers. When portfolio managers aim to optimize the risk-return relationship, the results indicate that, at least in the case of Russia, one should account for local market risk as well as currency risk when calculating the key inputs for the optimization. In addition, the pricing of exchange rate risk implies that exchange rate exposure is partly non-diversifiable and investors are compensated for bearing the risk. Likewise, the international transmission of stock market volatility can profoundly influence corporate capital budgeting decisions, investors' investment decisions, and other business cycle variables. Finally, the weak integration of the Russian market and the low correlations between the Russian stock and bond markets offer good opportunities for international investors to diversify their portfolios.
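For reference, the two conditional covariance specifications referred to above are usually written as follows. This is a minimal sketch of the standard textbook forms (BEKK(1,1) of Engle and Kroner, and the DCC recursion of Engle, 2002), not necessarily the exact parameterisation estimated in the papers:

```latex
\[
H_t = C'C + A'\,\varepsilon_{t-1}\varepsilon_{t-1}'\,A + B'\,H_{t-1}\,B
\qquad \text{(BEKK(1,1))}
\]
\[
Q_t = (1-a-b)\,\bar{Q} + a\,z_{t-1}z_{t-1}' + b\,Q_{t-1},
\qquad
R_t = \operatorname{diag}(Q_t)^{-1/2}\,Q_t\,\operatorname{diag}(Q_t)^{-1/2}
\qquad \text{(DCC)}
\]
```

where \(H_t\) is the conditional covariance matrix of the return innovations \(\varepsilon_t\), \(z_t\) are the standardized residuals, and \(\bar{Q}\) is their unconditional correlation matrix; in the DCC case \(H_t = D_t R_t D_t\) with \(D_t\) the diagonal matrix of univariate GARCH volatilities.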

Relevance: 90.00%

Publisher:

Abstract:

The Grande Coupure represents a major terrestrial faunal turnover recorded in Eurasia, associated with the overall climate shift at the Eocene-Oligocene transition. During this event, a large number of endemic European Eocene mammals became extinct and new Asian immigrants appeared. The absolute age of the Grande Coupure, however, has remained controversial for decades. The Late Eocene-Oligocene continental record of the Eastern Ebro Basin (NE Spain) constitutes a unique opportunity to build a robust magnetostratigraphy-based chronostratigraphy that can provide independent age constraints for this important turnover. This study presents new magnetostratigraphic data from a 495-m-thick section (Moià-Santpedor) that ranges from 36.1 Ma to 33.3 Ma. The integration of the new results with previous litho-, bio- and magnetostratigraphic records of the Ebro Basin yields accurate ages for the immediately pre- and post-Grande Coupure mammal fossil assemblages found in the study area, bracketing the Grande Coupure to an age embracing the Eocene-Oligocene transition, with a maximum allowable lag of 0.5 Myr with respect to this boundary. The shift to drier conditions that accompanied the global cooling at the Eocene-Oligocene transition probably determined the sedimentary trends in the Eastern Ebro Basin. The occurrence and expansion of an amalgamated-channel sandstone unit is interpreted as the forced response of the fluvial fan system to the transient retraction of the central-basin lake systems. The new results from the Ebro Basin allow us to revisit correlations for the controversial Eocene-Oligocene record of the Hampshire Basin (Isle of Wight, UK), and to reassess their implications for the calibration of the Mammal Palaeogene reference levels MP18 to MP21.

Relevance: 90.00%

Publisher:

Abstract:

This paper presents a new numerical program able to model syntectonic sedimentation. The new model combines a discrete element model of the tectonic deformation of a sedimentary cover and a process-based model of sedimentation in a single framework. The integration of these two methods allows us to include the simulation of both sedimentation and deformation processes in a single, more effective model. The paper briefly describes the antecedents of the program, Simsafadim-Clastic and a discrete element model, in order to introduce the methodology used to merge both programs into the new code. To illustrate the operation and application of the program, an analysis of the evolution of syntectonic geometries in an extensional environment, and also in association with thrust fault propagation, is undertaken. Using the new code, much more complex and realistic depositional structures can be simulated, together with a more complex analysis of the evolution of deformation within the sedimentary cover, which is seen to be affected by the presence of the new syntectonic sediments.
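As a purely illustrative sketch of how such a coupled framework typically alternates a deformation step and a sedimentation step within one time loop (a toy 1-D example; the functions, parameters and the simple fault/base-level rules below are assumptions for this sketch, not the discrete element or Simsafadim-Clastic formulations of the paper):

```python
import numpy as np

# Toy 1-D illustration of syntectonic sedimentation: a "deformation" step drops
# the hanging wall of a normal fault, and a "sedimentation" step fills the
# accommodation space up to a base level. Hypothetical example only.

def deformation_step(elevation, fault_index, slip_per_step):
    """Lower the hanging-wall side of a normal fault by a fixed slip increment."""
    deformed = elevation.copy()
    deformed[fault_index:] -= slip_per_step
    return deformed

def sedimentation_step(elevation, base_level, max_fill_per_step):
    """Fill topographic lows towards base level, limited by sediment supply."""
    fill = np.clip(base_level - elevation, 0.0, max_fill_per_step)
    return elevation + fill, fill   # new surface, thickness of the new layer

def run_coupled(n_cells=100, n_steps=50):
    elevation = np.zeros(n_cells)   # initial flat surface
    layers = []                     # thickness of each syntectonic layer
    for _ in range(n_steps):
        # 1) deform the cover (and any syntectonic sediment already deposited)
        elevation = deformation_step(elevation, fault_index=60, slip_per_step=0.2)
        # 2) fill the evolving accommodation space with new sediment
        elevation, layer = sedimentation_step(elevation, base_level=0.0,
                                              max_fill_per_step=0.15)
        layers.append(layer)
    return elevation, np.array(layers)

surface, layers = run_coupled()
print("final relief:", surface.min(), "to", surface.max())
print("max total syntectonic thickness:", layers.sum(axis=0).max())
```

Because the new layers are deformed in the subsequent steps, the growth geometry recorded in `layers` reflects the feedback between deformation and deposition that the paper exploits at much higher fidelity.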

Relevance: 90.00%

Publisher:

Abstract:

The integration of ecological and evolutionary data is highly valuable for conservation planning. However, it has rarely been used in the marine realm, where the adequate design of marine protected areas (MPAs) is urgently needed. Here, we examined the interacting processes underlying the patterns of genetic and demographic structure of a highly vulnerable Mediterranean habitat-forming species, Paramuricea clavata (Risso, 1826), with particular emphasis on the processes of contemporary dispersal, genetic drift, and the colonization of a new population. Isolation by distance and genetic discontinuities were found, and three genetic clusters were detected, each subject to variations in the relative impact of drift and gene flow. No founder effect was found in the new population. The interplay of ecology and evolution revealed that drift is strongly affecting the smallest, most isolated populations, where the partial mortality of individuals was highest. Moreover, the eco-evolutionary analyses have important conservation implications for P. clavata. Our study supports the inclusion of habitat-forming organisms in the design of MPAs and highlights the need to account for genetic drift in the development of MPAs. Moreover, it reinforces the importance of integrating genetic and demographic data in marine conservation.

Relevance: 90.00%

Publisher:

Abstract:

Asian rust of soybean [Glycine max (L.) Merrill] is one of the most important fungal diseases of this crop worldwide. The recent introduction of Phakopsora pachyrhizi Syd. & P. Syd. in the Americas represents a major threat to soybean production in the main growing regions, and significant losses have already been reported. P. pachyrhizi is extremely aggressive under favorable weather conditions, causing rapid plant defoliation. Epidemiological studies, under both controlled and natural environmental conditions, have been conducted for several decades with the aim of elucidating the factors that affect the disease cycle, as a basis for disease modeling. The recent spread of Asian soybean rust to major production regions of the world has prompted new development, testing and application of mathematical models to assess the risk of and predict the disease. These efforts have included the integration of new data, epidemiological knowledge, statistical methods, and advances in computer simulation to develop models and systems with different spatial and temporal scales, objectives and audiences. In this review, we present a comprehensive discussion of the models and systems that have been tested to predict and assess the risk of Asian soybean rust. Limitations, uncertainties and challenges for modelers are also discussed.

Relevance: 90.00%

Publisher:

Abstract:

The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes and excavators. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits there are numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations, and efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems add noise to the results, which in many cases causes the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the stiffness of the model itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions for the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods of modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop approaches zero, since the first derivative of flow with respect to the pressure drop then approaches infinity. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed using a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this trade-off is investigated for the two-regime orifice flow model. Very small volumes exist especially inside many types of valves, as well as between them. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation, and particularly in real-time simulation these numerical problems are a serious weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by small volumes are completely avoided; moreover, the method is freely applicable regardless of the integration routine applied. A further advantage of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. The thesis introduces the methods and provides several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
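To make the two-regime orifice idea concrete, the sketch below smooths the turbulent square-root law with an odd cubic through the origin whose value and first derivative match the turbulent curve at a transition pressure drop. The particular cubic, the parameter values and the function name are assumptions for this illustration, not necessarily the exact spline parameterisation used in the thesis:

```python
import numpy as np

# Illustrative two-regime orifice model: purely turbulent flow above a
# transition pressure drop p_tr, and an odd cubic q(p) = a*p + b*p**3 below it,
# with value and first derivative matched to K*sqrt(p) at p = p_tr.

def orifice_flow(dp, Cd=0.6, A=1e-5, rho=860.0, p_tr=2e5):
    """Volume flow through an orifice as a function of pressure drop dp [Pa]."""
    K = Cd * A * np.sqrt(2.0 / rho)            # turbulent flow coefficient
    a = 5.0 * K / (4.0 * np.sqrt(p_tr))        # matched value and slope at p_tr
    b = -K / (4.0 * p_tr ** 2.5)
    dp = np.asarray(dp, dtype=float)
    p = np.abs(dp)
    q_turb = K * np.sqrt(p)
    q_cubic = a * p + b * p ** 3
    q = np.where(p > p_tr, q_turb, q_cubic)
    return np.sign(dp) * q

# The slope dq/d(dp) at dp = 0 equals a (finite), whereas the purely turbulent
# model K*sqrt(|dp|) has an infinite slope there; this is exactly the numerical
# problem that the smoothing removes for fixed-step explicit integration.
print(orifice_flow([0.0, 1e4, 2e5, 1e6]))
```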

Relevance: 90.00%

Publisher:

Abstract:

This study focuses on the integration of eco-innovation principles into strategy and policy at the regional level. The importance of regions as a level for integrating eco-innovative programs and activities served as the point of interest for this study. Eco-innovative activities and technologies are seen as means to meet the sustainable development objective of improving regions' quality of life. This study was conducted to gain an in-depth understanding of eco-innovation at the regional level and to identify the basic concepts that are important in integrating eco-innovation principles into regional policy. Other specific objectives of this study are to examine how eco-innovations are developed and practiced in the regions of the EU, and to analyze the main characteristic features of an eco-innovation model specifically developed in the Päijät-Häme Region in Finland. The Päijät-Häme Region is noted for its successful eco-innovation strategies and programs and is therefore taken as the case in this study. Both primary data (interviews) and secondary data (publicly available documents) are utilized. The study shows that eco-innovation plays an important role in regional strategy, as reviewed on the basis of the experience of other EU regions. This is because of its localized nature, which makes it easier to facilitate in a regional setting. Since regional authorities and policy-makers are normally focused on solving their localized environmental problems, eco-innovation principles can easily be integrated into regional strategy. The case study highlights the Päijät-Häme Region's eco-innovation strategies and projects, which are characterized by strong connections between knowledge-producing institutions. Policy instruments supporting eco-innovation (e.g. environmental technologies) are very much focused on clean technologies, thus justifying the formation of cleantech clusters and business parks in the Päijät-Häme Region. A newly conceptualized SAMPO model of eco-innovation has been developed in the Päijät-Häme Region to better capture the region's characteristics and to eventually replace the current model employed by the Päijät-Häme Regional Authority. The SAMPO model is still under construction; however, a review of its principles points to its three important spearheads: practice-based innovation, design (eco-design), and clean technology or environmental technology (environment).

Relevance: 90.00%

Publisher:

Abstract:

As technology geometries have shrunk into the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on the performance and power consumption of on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate for achieving better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of 3D ICs and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and the power dissipation on each layer with a small performance penalty.
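To illustrate the general idea of congestion-aware output selection in a 2D mesh, the sketch below picks, among the minimal-path output ports, the neighbour that reports the most free buffer space. The data structures, port names and selection rule are assumptions for this sketch, not the exact algorithms proposed in the thesis:

```python
# Illustrative congestion-aware minimal adaptive routing for a 2D mesh NoC.
# Hypothetical example: the real proposals also handle multicast traffic,
# deadlock avoidance and input selection, which are omitted here.

def candidate_ports(cur, dst):
    """Return the minimal-path output directions from router `cur` to `dst`."""
    (x, y), (dx, dy) = cur, dst
    ports = []
    if dx > x: ports.append("EAST")
    if dx < x: ports.append("WEST")
    if dy > y: ports.append("NORTH")
    if dy < y: ports.append("SOUTH")
    return ports or ["LOCAL"]          # already at the destination router

def select_output(cur, dst, free_slots):
    """free_slots maps a direction to the free buffer slots of that neighbour."""
    ports = candidate_ports(cur, dst)
    if ports == ["LOCAL"]:
        return "LOCAL"
    # adaptive output selection: prefer the least congested admissible neighbour
    return max(ports, key=lambda p: free_slots.get(p, 0))

# Two minimal directions are possible here; the router picks the neighbour
# with more free buffer slots.
print(select_output(cur=(2, 2), dst=(5, 4),
                    free_slots={"EAST": 1, "NORTH": 3, "WEST": 4, "SOUTH": 2}))
# -> "NORTH"
```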

Relevance: 90.00%

Publisher:

Abstract:

Software systems are expanding and becoming increasingly present in everyday activities. A constantly evolving society demands that they deliver more functionality, be easy to use and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or even fails to perform. The concept of being able to depend on the software is particularly significant when it comes to critical systems. At this point the quality of a system is regarded as an essential issue, since any deficiencies may lead to considerable financial loss or the endangerment of life. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied during system development starting at the early design stages, increase the likelihood of obtaining a system that works as required. However, formal methods are sometimes considered difficult to utilise in traditional developments. Therefore, it is important to make them more accessible and to reduce the gap between formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs with the use of graphical notation. The understandability of formal modelling is increased by a compact representation of the development and the related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This means that it is necessary to establish certain techniques for the evaluation of rigorous developments. Since we study various development settings and methods, specific measurement plans and a set of metrics need to be created for each setting. Our goal is to provide methods for collecting data and recording evidence of the applicability of rigorous approaches. This would support organisations in making decisions about the integration of formal methods into their development processes. It is important to control software development, especially in its initial stages. Therefore, we focus on the specification and modelling phases, as well as the related artefacts, e.g. models, since these have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may affect its maintainability, and thus its quality. Our goal is to leverage the quality of a system via metrics and measurements, as well as generic refinement patterns, which are applied to a model and a specification. We argue that they can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines. Moreover, we see them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, which are based on the structural, syntactical and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies, and the results of the investigation are juxtaposed with the perceptions of domain experts. It is our aspiration to promote measurements as an indispensable part of the quality control process and as a strategy towards quality improvement.
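Purely as an illustration of the kind of structural, early-stage measurement discussed above (not the actual metrics or tooling defined in the thesis), simple counts over a specification's elements could be collected as follows; the dictionary-based model representation and the chosen counts are hypothetical placeholders:

```python
# Hypothetical sketch: collecting simple structural metrics from an early-stage
# model/specification represented as a plain dictionary. Illustrative only.

def structural_metrics(model):
    """model: {'variables': [...], 'invariants': [...], 'events': {name: [guards]}}"""
    n_events = len(model["events"])
    n_guards = sum(len(guards) for guards in model["events"].values())
    return {
        "variables": len(model["variables"]),
        "invariants": len(model["invariants"]),
        "events": n_events,
        "avg_guards_per_event": n_guards / n_events if n_events else 0.0,
    }

example = {
    "variables": ["mode", "queue_len"],
    "invariants": ["queue_len <= MAX", "mode in MODES"],
    "events": {"enqueue": ["queue_len < MAX"], "dequeue": ["queue_len > 0"], "reset": []},
}
print(structural_metrics(example))
```

Trends in such counts across refinement steps are one way to track how complexity evolves during a rigorous development.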

Relevance: 90.00%

Publisher:

Abstract:

The integration of intensive fish farming and plant production, called "aquaponics", is practiced successfully in countries such as the USA and Australia, and in Europe. In Brazil, this integration has attracted the attention of researchers and producers. In this context, the aim of this study was to evaluate the effect of two substrates (crushed stone number 3, CS III, and flexible polyurethane foam, FPF) on the production of aquaponic lettuce and, moreover, to show that the residual water from intensive tilapia production provides sufficient qualitative characteristics for the competitive production of lettuce without the addition of commercial fertilizers. The treatment in which FPF was used provided higher concentrations of macro- and micronutrients in the shoots, higher production of shoot fresh matter (95.48 g plant⁻¹) and a larger number of leaves (14.90) relative to CS III. These results were attributed to the lower post-transplanting stress and the longer water retention time provided by the FPF. The residual water from intensive tilapia farming can provide sufficient nutrients for the production of lettuce, making supplementary fertilization with commercial products unnecessary. Thus, FPF presents the most suitable conditions to be used as a substrate in an aquaponics system with recirculation of the residual water from intensive tilapia farming.

Relevance: 90.00%

Publisher:

Abstract:

The environmental aspect of corporate social responsibility (CSR), expressed through the process of EMS implementation in oil and gas companies, is the main subject of this research. In the theoretical part, attention is paid primarily to the justification of a link between CSR and environmental management. The achievement of sustainable competitive advantage as a result of environmental capital growth and the inclusion of socially responsible activities in corporate strategy is another issue of special significance here. In addition, two basic forms of environmental management systems (environmental decision support systems and environmental information management systems) are explored, and their role in effective stakeholder interaction is addressed. The most crucial benefits of EMS are also analyzed to underline its importance as a source of sustainable development. The research further builds on a survey of 51 sampled oil and gas companies (both publicly owned and state-owned), originating from different countries all over the world and providing openly accessible reports on sustainability issues. To analyze their approach to sustainable development, a specifically designed evaluation matrix with 37 indicators, developed in accordance with the Global Reporting Initiative (GRI) guidelines for non-financial reporting, was prepared. Additionally, the quality of environmental information disclosure was measured on the basis of a quality-quantity matrix. According to the results of the research, oil and gas companies prefer implementing reactive measures over costly and knowledge-intensive proactive techniques for eliminating negative environmental impacts. It was also identified that environmental performance disclosure is mostly rather limited, so that the quality of non-financial reporting can be judged as quite insufficient. Despite the fact that most of the oil and gas companies in the sample claim that an EMS is currently embedded in their structure, they often do not provide any details of the implementation process. As potential directions for the further development of EMS, the author mentions the possible integration of their different forms into a single entity, the extension of the existing structure through the consolidation of structural and strategic precautions, and the development of a unified certification standard, instead of the several that exist today, in order to enhance control over EMS implementation.

Relevance: 90.00%

Publisher:

Abstract:

A fast-changing environment puts pressure on firms to share large amounts of information with their customers and suppliers. Information integration and information sharing are essential for facilitating a smooth flow of information throughout the supply chain, and the two terms are used interchangeably in the research literature. By integrating and sharing information, firms aim to improve their logistics performance. Firms share information with their suppliers and customers by using traditional communication methods (telephone, fax, e-mail, written and face-to-face contacts) and by using advanced or modern communication methods such as electronic data interchange (EDI), enterprise resource planning (ERP), web-based procurement systems, electronic trading systems and web portals. Adopting new ways of using IT is one important resource for staying competitive in a rapidly changing market (Saeed et al. 2005, 387), and an information system that provides people with the information they need to perform their work will support company performance (Boddy et al. 2005, 26). The purpose of this research has been to test and understand the relationship between information integration with key suppliers and/or customers and a firm's logistics performance, especially when information technology (IT) and information systems (IS) are used for integrating information. Quantitative and qualitative research methods have been used to perform the research. Special attention has been paid to the scope, level and direction of information integration (Van Donk & van der Vaart 2005a). In addition, the four elements of integration (Jahre & Fabbe-Costes 2008) are closely tied to the frame of reference: integration of flows, integration of processes and activities, integration of information technologies and systems, and integration of actors. The study found that information integration has a low positive relationship to operational performance and a medium positive relationship to strategic performance. The potential performance improvements found in this study range from efficiency, delivery and quality improvements (operational) to profit, profitability or customer satisfaction improvements (strategic). The results indicate that although information integration has an impact on a firm's logistics performance, not all performance improvements have been achieved. This study also found that the use of IT and IS has a medium positive relationship to information integration. Almost all case companies agreed that the use of IT and IS could facilitate information integration and improve their logistics performance. The case companies felt that the implementation of a web portal or a data bank would benefit them, enhancing their performance and increasing information integration.

Relevance: 90.00%

Publisher:

Abstract:

Unsuccessful mergers are unfortunately the rule rather than the exception. It is therefore necessary to gain an enhanced understanding of mergers and post-merger integration (PMI), and to learn more about how mergers and the PMI of information systems (IS) and people can be facilitated. Studies on the PMI of IS are scarce, and public sector mergers are even less studied. There is, however, nothing to indicate that public sector mergers are any more successful than those in the private sector. This thesis covers five studies carried out between 2008 and 2011 in two higher education organizations that merged in January 2010; the most recent study was carried out two years after the new university was established. The longitudinal case study focused on the administrators and their opinions of the IS, the work situation and the merger in general. These issues were investigated before, during and after the merger. Both surveys and interviews were used to collect data, to which were added documents that both describe and guide the merger process; in this way we aimed at a triangulation of findings. Administrators were chosen as the focus of the study since public organizations are highly dependent on this staff category, which forms the backbone of the organization and whose performance is a key success factor for the organization. Reliable and effective IS are also critical for maintaining a functional and effective organization, and this makes administrators highly dependent on their organizations' IS to be able to carry out their duties as intended. The case study has confirmed the administrators' dependency on IS that work well. A merger is likely to lead to changes in the IS and in the routines associated with the administrators' work. Hence it was especially interesting to study how the administrators viewed the merger and its consequences for IS and the work situation. The overall research objective is to find key issues for successful mergers and PMI. The first, explorative study in 2008 showed that the administrators were confident of their skills and knowledge of IS and had no fear of having to learn new IS because of the merger. Most administrators had an academic background and were not anxious about whether IS training would be given or not. Before the merger the administrators were positive and enthusiastic towards the merger and the changes they expected. The studies carried out before the merger showed that these administrators were very satisfied with the information provided about the merger. This information was disseminated through various channels, and even negative information and postponed decisions were quickly distributed. The study thus conflicts with theories which hold that resistance to change is inevitable in a merger. Shortly after the merger, the third study showed disappointment with the fact that fewer changes than expected had been implemented, even though the changes that actually were carried out sometimes led to a more problematic work situation. This was more prominent for changes in routines than for IS changes. Still, the administrators showed a clear willingness to change and to share their knowledge with new colleagues. This knowledge sharing (including tacit knowledge) worked well in the merger and the PMI. The majority reported that the most common way to learn to use new IS and to apply new routines was to ask colleagues for help. They also needed to take responsibility for their own training and development.
Five months after the merger, the fourth study found that the administrators had become worried about the changes in communication strategy that had been implemented in the new university, which was perceived as more anonymous. Furthermore, it was harder to find out what was happening and to contact the new decision makers. The administrators found that decisions, and the authority to make decisions, had been moved to a higher administrative level than they were accustomed to. A directive management style is recommended in mergers in order to achieve a quick transition without distracting from the core business. A merger process may be tiresome and require considerable effort from the participants. In addition, not everyone can make their voice heard during a merger, and consensus is not possible on every question. It is important to find out what is best for the new organization instead of simply claiming that tried and tested ways of doing things should be implemented. A major problem turned out to be the lack of management continuity during the merger process. Especially problematic was the situation in the IS department, with many substitute managers during the whole merger process (and even after the merger was carried out). This meant that no one was in charge of IS issues and the PMI of IS. Moreover, the top managers were appointed very late in the process, in some cases after the merger had been carried out. This led to missed opportunities for building trust, and management credibility was heavily affected. The administrators felt neglected and felt that their competences and knowledge no longer counted. This, together with a reduced and altered information flow, led to rumours and distrust. Before the merger the administrators were convinced that their achievements contributed value to their organizations and that they worked effectively. After the merger they were less sure of their value contribution and effectiveness, even if these factors were not totally discounted. The fifth study, in November 2011, found that the administrators were still satisfied with their IS, as they had been throughout the whole study. Furthermore, they believed that the IS department had done a good job despite challenging circumstances. Both of the former organizations lacked IS strategies, which badly affected IS strategizing during the merger and the PMI. IS strategies deal with issues such as system ownership, namely who should pay and who is responsible for maintenance and system development, for organizing training for new IS, and for running IS effectively even under changing circumstances (e.g. more users). A proactive approach is recommended for IS strategizing to work. This is particularly true during a merger and PMI for handling issues about which IS should be adopted and implemented in the new organization, as well as issues of integration and the reengineering of IS-related processes. In the new university an IT strategy had still not been decided on 26 months after the new university was established. The study shows the importance of decisive management of IS in a merger, requiring that IS issues be addressed in the merger process and that IS decisions be made early. Moreover, the new management needs to be appointed early in order to work actively on IS strategizing. It is also necessary to build trust and to plan and make decisions about the integration of IS and people.