Abstract:
This thesis studies the properties and usability of operators called t-norms, t-conorms, uninorms, as well as many-valued implications and equivalences. Weights and a generalized mean are embedded into these operators for aggregation, and the resulting operators are used for comparison tasks; for this reason they are referred to as comparison measures. The thesis illustrates how these operators can be weighted with differential evolution and aggregated with a generalized mean, and what kinds of comparison measures can be achieved with this procedure. New operators suitable for comparison measures are suggested. These operators are combination measures based on the use of t-norms and t-conorms, the generalized 3Π-uninorm, and pseudo-equivalence measures based on S-type implications. The empirical part of this thesis demonstrates how these new comparison measures work in the field of classification, for example in the classification of medical data. The second application area is from the field of sports medicine: an expert system for defining an athlete's aerobic and anaerobic thresholds. The core of this thesis offers definitions for comparison measures and illustrates that there is no actual difference between the results achieved in comparison tasks with distance-based comparison measures and those achieved with comparison measures based on many-valued logical structures. The approach of this thesis is highly practical, and the use of the measures has been validated mainly by practical testing. In general, many different types of operators suitable for comparison tasks have been presented in the fuzzy logic literature, but there has been little or no experimental work with these operators.
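To make the aggregation step concrete, the following minimal sketch (an illustration under assumptions, not the exact operators of the thesis) applies a Łukasiewicz-style many-valued equivalence feature-wise and combines the results with a weighted generalized mean; the weight vector is the kind of quantity that could be tuned with differential evolution:

```python
import numpy as np

def comparison_measure(x, y, w, m=2.0):
    """Illustrative comparison measure: weighted generalized mean of a
    feature-wise many-valued equivalence between vectors x and y in [0, 1].
    w: non-negative weights summing to one (e.g. tuned by differential evolution)
    m: exponent of the generalized (power) mean; m = 1 gives the arithmetic mean
    """
    x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
    eq = 1.0 - np.abs(x - y)            # Lukasiewicz-style equivalence per feature
    return np.sum(w * eq ** m) ** (1.0 / m)

# Example: similarity of a sample to a class "ideal vector"
print(comparison_measure([0.2, 0.8, 0.5], [0.3, 0.7, 0.9], w=[0.5, 0.3, 0.2]))
```

In a classification setting, a sample would be assigned to the class whose ideal vector yields the highest measure value.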
Abstract:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce process quality. Vibrations may be harmful even when the amplitudes are seemingly low, as is usually the case in superharmonic vibration, which takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system. In such a situation, a part of the machine's rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. The superharmonic vibration phenomena can be studied by analysing the coupled rotor-bearing system using a multibody simulation approach. This research is focused on the modelling of hydrodynamic journal bearings and of rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved so that the waviness of the shaft journal is taken into account. The improved model is implemented and analysed in a multibody simulation code. A rotor-bearing system consisting of a flexible tube roll, two journal bearings and a supporting structure is analysed using the multibody simulation technique. The modelled non-idealities are the shell thickness variation of the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system. In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not excite vibrations itself but affects the response of the rotor-bearing system, namely the waviness of the bearing bushing, which is the non-rotating part of the bearing system. The modelled system is verified with measurements performed on a test rig. The waviness of the bearing bushing was not measured, and therefore its effect on the response was not verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of a rotor-bearing system. When the simulated results are compared to the measured ones, the overall agreement between the results is concluded to be good.
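For reference, the excitation condition described above can be written as a standard textbook relation (not taken verbatim from the thesis): with rotational speed \(\Omega\) and system natural frequency \(\omega_n\), superharmonic response below the first critical speed occurs near

\[ \Omega \approx \frac{\omega_n}{k}, \qquad k = 2, 3, \dots \]

so that the k-th harmonic of the rotation-synchronous excitation (here produced by shell thickness variation or journal waviness) satisfies \(k\,\Omega \approx \omega_n\).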
Abstract:
The strategic development of information systems has long been a subject of interest for both business and academic research. The topic is still current, and its relevance shows no signs of fading. The majority of research has focused on large enterprises and, to some extent, on small and medium-sized enterprises as a single group. Research has approached the topic by examining whether there is a link between information systems and business strategy and, if so, whether it matters for the success of the company. The aim of this study was to dig deeper into the way in which a company links its business strategy to an information system acquisition. Enterprise resource planning (ERP) systems were chosen as the target information system and medium-sized enterprises as the company size. The choice of ERP systems can be justified by their coverage of the company's operations when viewed from the direction of different stakeholders. Medium-sized enterprises were chosen partly because there is little research devoted specifically to them, and partly because the aim was to highlight their nature at a discontinuity point of their development path. The empirical part of the study is based on three cases: the ERP projects of three medium-sized enterprises. The business and IT management of the case companies were interviewed, and in addition all available written archive material was used. The main research method was Grounded Theory (GT). Unlike many other qualitative research approaches, the method has no predetermined framework; instead, the research starts from a clean slate without prior assumptions, creating new theory in the course of the research process. An advantage of the approach is its open-mindedness. The role of earlier research in this study was, firstly, to map the research area and serve as an introduction to the study; secondly, the results were compared with earlier research. The research data were analysed in detail. First, the data were coded under concepts that emerged from the data. Next, the result was classified into categories that emerged from the data, and the relationships between the concepts and categories were established. Finally, the emerging theory was made concrete through 18 hypotheses and a model connecting the hypotheses. The study produced new knowledge on how the requirements process proceeds and how the characteristics of a medium-sized enterprise affect it. In addition, new knowledge was obtained on the critical success factors of an ERP project in a medium-sized enterprise. Based on the data of this study, medium-sized enterprises appear to be able to set requirements for an ERP system from the directions of business strategy, process descriptions and the possibilities offered by ERP systems. The evaluation of how well the requirements are fulfilled is unsystematic in medium-sized enterprises, as earlier research has found it to be in large enterprises as well. A lack of resources hinders the fulfilment of the business-strategic requirements of an ERP system. Based on the research data it can be concluded that medium-sized enterprises, despite having process descriptions, do not have actual process thinking or process management. According to the data of this study, medium-sized enterprises do not appear to be an exception: the use of performance measures is immature, and the measurement system cannot be utilised in support of an information system project.
Based on the results it can be concluded that the components of strategic management are disconnected from each other in a medium-sized enterprise. Although business strategy is strongly present when the requirements of an ERP project are set, the requirements remain at a coarse level and are not made concrete in processes and measurement. As a consequence, the ERP system does not provide all the benefit and support for the business strategy that would be possible. The study also revealed adverse effects of resource and competence shortages on the requirements process of ERP projects in medium-sized enterprises. These shortages, combined with the scattered components of strategic management, together form a difficult starting point for a strategic ERP project.
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required for the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, and a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done remotely, as the electronics are not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is extensively used, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques needed to be used there too in order to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and now we are waiting to see it in action when the LHC starts running in the autumn of 2008.
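As a purely illustrative sketch of the zero-suppression idea mentioned above (not the actual link-board firmware or data format), only channels that actually contain hits are transmitted, each tagged with its channel address, so that mostly empty detector frames shrink to a short list of address-data pairs:

```python
def zero_suppress(frame):
    """Keep only non-empty channels, tagged with their channel address,
    so a mostly empty frame compresses to a few (address, data) pairs."""
    return [(addr, bits) for addr, bits in enumerate(frame) if bits != 0]

frame = [0, 0, 0b1, 0, 0, 0, 0b101, 0]   # 8-channel example frame with 2 hits
print(zero_suppress(frame))              # -> [(2, 1), (6, 5)]
```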
Abstract:
This dissertation analyses the growing pool of copyrighted works that are offered to the public under Creative Commons licensing. The study consists of an analysis of the novel licensing system, the licensors, and the changes to the "all rights reserved" paradigm of copyright law. Copyright law reserves all rights to the creator until seventy years have passed since her demise. Many claim that this endangers communal interests. Quite often the creators are willing to release some rights. This, however, is very difficult to do and requires the help of specialized lawyers. The study finds that the innovative Creative Commons licensing scheme is well suited for low-value, high-volume licensing. It helps to reduce transaction costs on several levels. However, CC licensing is not a "silver bullet". Privacy, moral rights, the problems of license interpretation, and license compatibility with other open licenses and with collecting societies remain unsolved. The study consists of seven chapters. The first chapter introduces the research topic and research questions. The second and third chapters inspect the technical, economic and legal aspects of the Creative Commons licensing scheme. The fourth and fifth chapters examine the incentives of the licensors who use open licenses and describe certain open business models. The sixth chapter studies the role of collecting societies and whether the two institutions, Creative Commons and collecting societies, can coexist. The final chapter summarizes the findings. The dissertation contributes to the existing literature in several ways. There is a wide range of prior research on open source licensing. However, there is an urgent need for an extensive study of Creative Commons licensing and its actual and potential impact on the creative ecosystem.
Abstract:
Trust in inter-organizational collaborative relationships has attracted substantial research interest among academics and practitioners. Previous studies have concentrated on the benefits of trust to business outcomes and economic performance, as it is considered to be a source of competitive advantage. Despite this increased level of interest, there is no consensus, much less overall agreement, about how trust should be conceptualized or about the number of dimensions it incorporates. On the inter-organizational level there is an obvious challenge in defining both the trusting party and the objects of trust. Thus, the notion of trust as an under-theorized and poorly understood phenomenon still holds. Hence, the motivation of this study was fuelled by the need to increase our knowledge and understanding of the role and nature of trust in inter-organizational collaborative relationships. It is posited that there is a call for more understanding of its antecedents and consequences, as well as of the very concept, in inter-organizational collaborative relationships. The study is divided into two parts. The first part gives a general overview, and the second part comprises four research publications. Both qualitative and quantitative research methodologies are utilized. A multi-method research design was used because it provides different levels of data and different perspectives on the phenomenon. The results of this study reveal that trust incorporates three dimensions on both the individual and the organizational level: capability, goodwill, and self-reference. Trust develops from the reputation and behavior of the trusted party. It appears from this study that trust is clearly directed towards both individual boundary spanners and the counterpart company itself, i.e. not only to one or the other. The trusting party, on the other hand, is always an individual, not the organization per se. Trust increases collaboration benefits and lowers collaboration drawbacks, thus having a positive effect on relationship performance. The major contribution of this study lies in uncovering the critical points and drawbacks in prior research and thereby in responding to the highlighted challenges. The way in which these challenges were addressed offers contributions to three major issues in the emerging theory of trust in the inter-organizational context: firstly, this study clarifies the trustor-trustee discussion; secondly, it conceptualizes trust as existing on both the individual and the organizational level; and thirdly, it provides more information about the antecedents of trust and the ways in which it affects relationship performance.
Abstract:
This thesis is focused on process intensification. Several significant problems and applications of this theme are covered. Process intensification is nowadays one of the most popular trends in chemical engineering, and attempts have been made to develop a general, systematic methodology for intensification. This seems, however, to be very difficult, because intensified processes are often based on creativity and novel ideas. Monolith reactors and microreactors are successful examples of process intensification. They are usually multichannel devices in which a proper feed technique is important for creating an even fluid distribution into the channels. Two different feed techniques were tested for monoliths. In the first technique, a shower method was implemented by means of perforated plates. The second technique was a dispersion method using static mixers. Both techniques offered stable operation and uniform fluid distribution. The dispersion method enabled a wider operational range in terms of liquid superficial velocity. Using the dispersion method, a volumetric gas-liquid mass transfer coefficient of 2 s⁻¹ was reached. Flow patterns play a significant role in the mixing performance of micromixers. Although the geometry of a T-mixer is simple, the channel configurations and dimensions had a clear effect on mixing efficiency. The flow in the microchannel was laminar, but the formation of vortices promoted mixing in micro T-mixers. The generation of vortices depended on the channel dimensions, configurations and flow rate. Microreactors offer a high ratio of surface area to volume. Surface forces and interactions between fluids and surfaces are, therefore, often dominant factors. In certain cases, these interactions can be effectively utilised. Different wetting properties of solid materials (PTFE and stainless steel) were applied in the separation of immiscible liquid phases. A micro-scale plate coalescer with hydrophilic and hydrophobic surfaces was used for the continuous separation of organic and aqueous phases. Complete phase separation occurred in less than 20 seconds, whereas the separation time by settling exceeded 30 minutes. Fluid flows can also be intensified in suitable conditions. By adding certain additives into a turbulent fluid flow, it was possible to reduce friction (drag) by 40%. Drag reduction decreases the frictional pressure drop in pipelines, which leads to remarkable energy savings and decreases the size or number of pumping facilities required, e.g. in oil transport pipes. Process intensification often enables operation under more optimal conditions. The consequent cost savings from reduced use of raw materials and reduced waste lead to greater economic benefits in processing.
Abstract:
Value chain collaboration has been a prevailing topic for research, and there is a constantly growing interest in developing collaborative models for improved efficiency in logistics. One area of collaboration is demand information management, which enables improved visibility and decreased inventories in the value chain. Outsourcing of non-core competencies has changed the nature of collaboration from an intra-enterprise to a cross-enterprise activity, and this, together with increasing competition in globalizing markets, has created a need for methods and tools for collaborative work. The retailer part of the value chain of consumer packaged goods (CPG) has been studied relatively widely, proven models have been defined, and several best-practice collaboration cases exist. Information and communications technology has developed rapidly, offering efficient solutions and applications for exchanging information between value chain partners. However, the majority of the CPG industry still works with traditional business models and practices. This concerns especially companies operating in the upstream part of the CPG value chain. Demand information for consumer packaged goods originates at retailers' counters, based on consumers' buying decisions. As this information does not get transferred along the value chain towards the upstream parties, each player needs to optimize its own part, causing safety margins for inventories and speculation in purchasing decisions. The safety margins increase with each player, resulting in a phenomenon known as the bullwhip effect. The further the company is from the original demand information source, the more distorted the information is. This thesis concentrates on the upstream parts of the value chain of consumer packaged goods, and more precisely on the packaging value chain. Packaging is becoming a part of the product, with informative and interactive features, and is therefore not just a cost item needed to protect the product. The upstream part of the CPG value chain is distinctive, as the product changes after each involved party, and therefore the original demand information from the retailers cannot be utilized as such, even if it were transferred seamlessly. The objective of this thesis is to examine the main drivers for collaboration and the barriers causing the moderate adoption level of collaborative models. Another objective is to define a collaborative demand information management model and test it in a pilot business situation in order to see whether the barriers can be eliminated. The empirical part of this thesis contains three parts, all related to the research objective but involving different target groups, viewpoints and research approaches. The study shows evidence that the main barriers for collaboration are very similar to the barriers in the lower part of the same value chain: lack of trust, lack of a business case and lack of senior management commitment. Eliminating one of them, the lack of a business case, is not enough to eliminate the two other barriers, as the operational model in this thesis shows. The uncertainty of the future, fear of losing an independent position in purchasing decision making, and lack of commitment remain strong enough barriers to prevent the implementation of the proposed collaborative business model.
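The amplification mechanism behind the bullwhip effect described above can be illustrated with a toy simulation (a generic order-up-to model with forecast updating, not a model from the thesis): each echelon forecasts the orders it receives from downstream and adds the forecast change, scaled by the lead time, to its own order, so order variability grows at every step upstream.

```python
import random
import statistics

def bullwhip_demo(weeks=200, echelons=4, lead_time=2, alpha=0.3, seed=1):
    """Toy bullwhip illustration: each echelon runs an order-up-to policy with
    an exponentially smoothed forecast of downstream orders, which amplifies
    order variability towards the upstream end of the chain."""
    random.seed(seed)
    demand = [100 + random.gauss(0, 5) for _ in range(weeks)]   # consumer demand
    chain = [demand]
    for _ in range(echelons - 1):
        downstream, forecast, orders = chain[-1], chain[-1][0], []
        for d in downstream:
            previous = forecast
            forecast = alpha * d + (1 - alpha) * forecast       # update forecast
            # order = observed demand + adjustment of the order-up-to level
            orders.append(max(0.0, d + (lead_time + 1) * (forecast - previous)))
        chain.append(orders)
    return [round(statistics.stdev(level), 1) for level in chain]

print(bullwhip_demo())   # order variability grows from the consumer end upstream
```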
The study proposes a new way of defining the value chain processes: it divides the contracting and planning process into two processes, one managing the commercial parts and the other managing the quantity- and specification-related issues. This model can reduce resistance to collaboration, as the commercial part of the contracting process would remain the same as in the traditional model. The quantity- and specification-related issues would be managed by the parties with the best capabilities and resources, as well as access to the original demand information. The parties in between would be involved in the planning process as well, as their impact on the next party upstream is significant. The study also highlights the future challenges for companies operating in the CPG value chain. The markets are becoming global, with toughening competition. The development of technology will also most likely continue at a speed exceeding the adaptation capabilities of the industry. Value chains are also becoming increasingly dynamic, which means shorter and more agile business relationships, and at the same time predicting consumer demand is becoming more difficult due to shorter product life cycles and trends. These changes will certainly have an effect on companies' operational models, but it is very difficult to estimate when and how the proven methods will gain wide enough adoption to become standards.
Abstract:
The purpose of this dissertation is to analyse older consumers' adoption of information and communication technology innovations, assess the effect of aging-related characteristics, and evaluate older consumers' willingness to apply these technologies in health care services. This topic is considered important because the population in Finland (as in other welfare states) is aging, which offers an opportunity for marketers but, on the other hand, threatens society with increasing healthcare costs. Innovation adoption has been researched from several aspects in both organizational and consumer research. In consumer behaviour research, several theories have been developed to predict consumer responses to innovation. The present dissertation carefully reviews previous research and takes a closer look at the theory of planned behaviour, the technology acceptance model and the diffusion of innovations perspective. It is suggested here that these theories may be combined and complemented to predict the adoption of ICT innovations among aging consumers, taking aging-related personal characteristics into account. In fact, very few studies in innovation research have concentrated on aging consumers, and thus there was a clear need for the present research. ICT in the health care context has been studied mainly from the organizational point of view. If the technology is applied to communication between the individual end-user and the service provider, however, the end-user cannot be ignored. The present dissertation uses empirical evidence from a survey targeted at 55-79-year-old people in one city in South Karelia. The empirical analysis of the research model was mainly based on structural equation modelling, which has been found very useful in estimating causal relationships. The tested models were aimed at predicting the adoption stage of personal computers and mobile phones, and the adoption intention of future health services that apply these devices for communication. The present dissertation succeeded in modelling the adoption behaviour of mobile phones and PCs as well as the adoption intentions of future services. Perceived health status and three components behind it (depression, functional ability, and cognitive ability) were found to influence the perception of technology anxiety: better health leads to less anxiety. The effect of age was assessed as a control variable in order to evaluate its effect compared to the health characteristics. Age influenced technology perceptions, but to a lesser extent than health. The analyses suggest that the major determinant of current technology adoption is perceived behavioural control, together with technology anxiety, which indirectly inhibits adoption through perceived control. When focusing on future service intentions, the key issue is perceived usefulness, which needs to be highlighted when new services are launched. Besides usefulness, the perception of online service reliability is important and affects the intentions indirectly. In conclusion, older consumers' adoption behaviour is influenced by health status and age, but also by perceptions of anxiety and behavioural control. On the other hand, launching new types of health services for aging consumers is possible once the service is perceived as reliable and useful.
Abstract:
The study examines international cooperation in product development in software development organisations. The software industry is known for its global nature and knowledge intensity, which makes it an interesting setting in which to examine international cooperation. Software development processes are increasingly distributed worldwide, but for the small or even medium-sized enterprises typical of the software industry, such distribution of operations is often possible only by crossing the company's boundaries. The strategic decision-making of companies is likely to be affected by the characteristics of the industry, and this includes decisions about cooperation or sourcing. The objective of this thesis is to provide a holistic view of the factors affecting decisions about offshore sourcing in software development. Offshore sourcing refers to a cooperative mode of offshoring in which a firm does not establish its own presence in a foreign country but utilises a local supplier. The study examines product development activities that are distributed across organisational and geographical boundaries. The objective can be divided into two subtopics: general reasons for international cooperation in product development, and particular reasons for cooperation between Finnish and Russian companies. The focus is on the strategic rationale at the company level, in particular in small and medium-sized enterprises. The theoretical discourse of the study builds upon the literature on international cooperation and networking, with particular focus on cooperation with foreign suppliers and within product development activities. The resource-based view is also discussed, as the heterogeneity and interdependency of the resources possessed by different firms are seen as factors motivating international cooperation. Strategically, sourcing can be used to access resources possessed by an industrial network, to enhance the product development of a firm, or to optimise its cost structure. In order to investigate the issues raised by the theoretical review, two empirical studies on international cooperation in software product development were conducted. The emphasis of the empirical part of the study is on cooperation between Finnish and Russian companies. The data were gathered through four case studies on Finnish software development organisations and four case studies on Russian offshore suppliers. Based on the material from the case studies, a framework clarifying and grouping the factors that influence offshore sourcing decisions was built. The findings indicate that decisions regarding offshore sourcing in software development are far more complex than generally assumed. The framework provides a holistic view of the factors affecting decisions about offshore sourcing in software development, capturing the multidimensionality of the motives for entering offshore cooperation. Four groups of factors emerged from the data: A) strategy-related aspects, B) aspects related to resources and capabilities, C) organisation-related aspects, and D) aspects related to the entrepreneur or management. By developing a holistic framework of decision factors, the research offers in-depth theoretical understanding of the offshore sourcing rationale in product development. From the managerial point of view, the proposed framework sums up the issues that a firm should pay attention to when contemplating product development cooperation with foreign suppliers.
Understanding the different components of sourcing decisions can improve the preconditions for strategising and engaging in offshore cooperation. A thorough decision-making process should carefully consider all the possible benefits and risks of product development cooperation.
Abstract:
The objective of the thesis is to enhance understanding of the management of the front end phases of the innovation process in a networked environment. The thesis approaches the front end of innovation from three perspectives: the strategy, processes and systems of innovation. The purpose of using different perspectives is to provide an extensive systemic view of the front end and to uncover the complex nature of innovation management. The context of the research is the networked operating environment of firms. The unit of analysis is the firm itself or its innovation processes, which means that this research approaches innovation networks from the point of view of a single firm. The strategy perspective of the thesis emphasises the importance of purposeful innovation management, the innovation strategy of firms. The role of innovation processes is critical in carrying out innovation strategies in practice, supporting the development of organizational routines for innovation, and driving the strategic renewal of companies. From the systems perspective, the primary focus of the thesis is on idea management systems, which are defined as a part of innovation management systems and, for this thesis, as any working combination of methodology and tools (manual or IT-supported) that enhances the management of innovations within their early phases. The main contribution of the thesis is the set of managerial frameworks developed for managing the front end of innovation, which purposefully “wire” the front end of innovation into the strategy and business processes of a firm. The thesis contributes to modern innovation management by connecting the internal and external collaboration networks as foundational elements for the successful management of the early phases of innovation processes in a dynamic environment. The innovation capability of a firm is largely defined by its ability to rely on and make use of internal and external collaboration already during the front end activities, which by definition include opportunity identification and analysis, idea generation, proliferation and selection, and concept definition. More specifically, the coordination of the interfaces between these activities, and between the internal and external innovation environments of a firm, is emphasised. The role of information systems, in particular idea management systems, is to support and delineate the innovation-oriented behaviour and interaction of individuals and organizations during the front end activities. The findings and frameworks developed in the thesis can be used by companies for the purposeful promotion of their front end processes. The thesis provides a systemic strategy framework for managing the front end of innovation, not as a separate process but as an elemental bundle of activities that is closely linked to the overall innovation process and strategy of a firm in a distributed environment. The theoretical contribution of the thesis relies on the advancement of the open innovation paradigm in the strategic context of a firm within its internal and external innovation environments. The thesis applies the constructive research approach and case study methodology to provide theoretically significant results that are also practically beneficial.
Abstract:
The Internet has transformed the scope, boundaries and dynamics of social and economic interactions. It is argued to have broadened the notion of the community from physical, co-located groups towards collectives that are able to transcend time and space, i.e. virtual communities. Even though virtual communities have been on the academic agenda for a couple of decades, there is still surprisingly little research on knowledge sharing within them. In addition, prior research has largely neglected the complex dynamics between Internet-based communication channels and the surrounding communities in which they are embedded. This thesis aims at building a better understanding of knowledge sharing supported by conversational technologies in intra-organisational virtual communities and in external virtual communities supporting relationships with customers. The focus is thus on knowledge sharing in the types of virtual communities that seem to be of relevance to business organisations. The study consists of two parts. The first part introduces the research topic and discusses the overall results. The second part comprises seven research publications. Qualitative research methods are used throughout the study. The results of the study indicate that investigating the processes of knowledge sharing in virtual communities requires a socio-technical perspective, combining the individual, social and technological levels and understanding the interplay between them. It is claimed that collective knowledge in virtual communities creates the enabling structure for knowledge sharing and forms the invisible structure of the community on the basis of which it operates. It consists of a shared context, social capital and a unique community culture. The Internet does not inevitably erode social interaction: it seems that supporting social relationships by means of communication technology is a matter of quantity rather than quality. In order to provide access to external knowledge and expertise, firms need to open themselves up to an array of Internet-based conversations and to consider the relevance of virtual communities to their businesses.
Abstract:
This dissertation is based on four articles dealing with the modeling of ozonation. The literature part considers some models for hydrodynamics in bubble column simulation. A literature review of methods for obtaining mass transfer coefficients is presented. The methods presented for obtaining mass transfer coefficients are general and can be applied to any gas-liquid system. Ozonation reaction models and methods for obtaining stoichiometric coefficients and reaction rate coefficients for ozonation reactions are discussed in the final section of the literature part. In the first article, ozone gas-liquid mass transfer into water in a bubble column was investigated at different pH values. A more general method for the estimation of the mass transfer coefficient and Henry's coefficient was developed from the Beltrán method. The ozone volumetric mass transfer coefficient and the Henry's coefficient were determined simultaneously by parameter estimation using a nonlinear optimization method. A minor dependence of the Henry's law constant on pH was detected in the pH range 4-9. In the second article, a new method using the axial dispersion model for the estimation of ozone self-decomposition kinetics in a semi-batch bubble column reactor was developed. The reaction rate coefficients for literature equations of ozone decomposition and the gas phase dispersion coefficient were estimated and compared with literature data. In the pH range 7-10, reaction orders of 1.12 with respect to ozone and 0.51 with respect to the hydroxyl ion were obtained, which is in good agreement with the literature. The model parameters were determined by parameter estimation using a nonlinear optimization method. A sensitivity analysis was conducted using the objective function method to obtain information about the reliability and identifiability of the estimated parameters. In the third article, the reaction rate coefficients and the stoichiometric coefficients in the reaction of ozone with the model component p-nitrophenol were estimated at low pH using nonlinear optimization. A novel method for the estimation of multireaction model parameters in ozonation was developed. In this method, the concentration of unknown intermediate compounds is represented as a residual COD (chemical oxygen demand) calculated from the measured COD and the theoretical COD of the known species. The decomposition rate of p-nitrophenol on the pathway producing hydroquinone was found to be about two times faster than the p-nitrophenol decomposition rate on the pathway producing 4-nitrocatechol. In the fourth article, the reaction kinetics of p-nitrophenol ozonation was studied in a bubble column at pH 2. Using the new reaction kinetic model presented in the previous article, the reaction kinetic parameters, rate coefficients and stoichiometric coefficients, as well as the mass transfer coefficient, were estimated with nonlinear estimation. The decomposition rate of p-nitrophenol was found to be equal on the pathway producing hydroquinone and on the pathway producing 4-nitrocatechol. Comparison of the rate coefficients with the case at initial pH 5 indicates that the p-nitrophenol degradation producing 4-nitrocatechol is more selective towards molecular ozone than the reaction producing hydroquinone. The identifiability and reliability of the estimated parameters were analyzed with the Markov chain Monte Carlo (MCMC) method.
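As a rough illustration of the simultaneous parameter estimation described above (a minimal sketch under assumed values; the actual thesis models include axial dispersion, decomposition kinetics and the Beltrán-based formulation), the volumetric mass transfer coefficient kLa and Henry's coefficient H can be fitted to measured dissolved-ozone concentrations with nonlinear least squares:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

P_O3 = 2.0e3   # ozone partial pressure in the feed gas [Pa] (assumed value)
k_d = 1.0e-4   # ozone self-decomposition rate constant [1/s] (assumed value)

def model(t, kla, H):
    """Dissolved ozone in a well-mixed semi-batch liquid: dc/dt = kLa*(c* - c) - kd*c."""
    c_star = P_O3 / H                                   # equilibrium concentration [mol/m3]
    sol = solve_ivp(lambda _t, c: kla * (c_star - c) - k_d * c,
                    (t[0], t[-1]), [0.0], t_eval=t)
    return sol.y[0]

def residuals(theta, t, c_meas):
    kla, H = theta
    return model(t, kla, H) - c_meas

t = np.linspace(0.0, 1800.0, 30)                        # 30 samples over 30 minutes
c_meas = model(t, 0.01, 8.0e3) + np.random.normal(0, 0.002, t.size)   # synthetic "data"
fit = least_squares(residuals, x0=[0.005, 5.0e3], args=(t, c_meas), bounds=(0.0, np.inf))
print(fit.x)   # simultaneously estimated [kLa, H]
```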
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in the manufacturing process, non-optimal operational efficiency of the product, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from the surface topography. A fast and non-contact way to obtain the surface topography is to apply photometric stereo to the estimation of surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies. The results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, the skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of the higher-frequency variation of surfaces. In this thesis, the reconstruction of planar, high-frequency varying surfaces is studied in the presence of imaging noise and blur. Two Wiener filter-based methods are proposed, of which one is optimal in the sense of the surface power spectral density, given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
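For context, the kind of standard industry roughness measures used as the reference above can be computed from a measured height profile as in the following minimal sketch (textbook definitions of Ra, Rq and height-distribution skewness, not code from the thesis):

```python
import numpy as np

def roughness_measures(z):
    """Standard profile roughness measures computed from heights z."""
    z = np.asarray(z, dtype=float)
    dev = z - z.mean()                    # deviations from the mean line
    Ra = np.mean(np.abs(dev))             # arithmetic mean deviation
    Rq = np.sqrt(np.mean(dev ** 2))       # root-mean-square deviation
    Rsk = np.mean(dev ** 3) / Rq ** 3     # skewness of the height distribution
    return Ra, Rq, Rsk

# Example: a wavy profile with added fine-scale roughness
x = np.linspace(0.0, 20.0 * np.pi, 2000)
z = 0.5 * np.sin(x) + np.random.normal(0.0, 0.05, x.size)
print(roughness_measures(z))
```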
Abstract:
There is an increasing reliance on computers to solve complex engineering problems. This is because computers, in addition to supporting the development and implementation of adequate and clear models, can substantially reduce the financial resources required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the governing equations for an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process. In this case the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), and also for fluid flow through porous media. The models have merit as a scientific tool and also have practical applications in industry. Most of the numerical simulations were carried out with the commercial software Fluent, and user-defined functions were added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole to a venule via a capillary showed that the model based on VOF can successfully predict the deformation and flow of RBCs in an arteriole. Furthermore, the result corresponds to the experimental observation that the RBC is deformed during the movement. The concluding remarks provide a methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, the statistical evaluation (Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the largest and smallest contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The obtained results agreed well with the correlation of Macdonald et al. (1979) for the range of flow Reynolds numbers studied. Furthermore, the calculated results for the dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
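For orientation, correlations of the Macdonald (modified Ergun) type referred to above express the dimensionless pressure drop of packed-bed flow in the general form below; this is a standard textbook relation rather than the exact formulation of the thesis, and the constants (about A = 180 and B = 1.8 for smooth particles in Macdonald et al.) depend on the packing:

\[ \frac{\Delta p}{L}\,\frac{d_p\,\varepsilon^{3}}{\rho\,u^{2}\,(1-\varepsilon)} \;=\; \frac{A\,(1-\varepsilon)}{Re_p} + B, \qquad Re_p = \frac{\rho\,u\,d_p}{\mu}, \]

where \(d_p\) is the particle diameter, \(\varepsilon\) the porosity, \(u\) the superficial velocity, and \(\rho\), \(\mu\) the fluid density and viscosity.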