100 results for Network Analysis Methods
Abstract:
Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category is that of cardinal preference relations, i.e. relations that also quantify the degree of preference. Preference relations play a pivotal role in most multi-criteria decision-making methods and in operational research. This thesis presents some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue concerns the estimation of a weight vector from a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains a proof that twenty methods already proposed in the literature lead to unsatisfactory results because they employ a conflicting constraint in their optimization models. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. The thesis proves that some indices are equivalent, so that seemingly different formulae end up yielding the very same result; some numerical simulations are also presented. The section ends with considerations on a new method for evaluating consistency fairly. The third matter regards incomplete relations and how to estimate missing comparisons; this section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth, and last, topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
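The new estimation algorithm itself is not given in the abstract; as a hedged illustration of the underlying task, the classic geometric-mean (logarithmic least squares) method estimates a priority vector from a reciprocal pairwise comparison matrix:

```python
import numpy as np

def priority_vector(A):
    """Estimate a priority (weight) vector from a reciprocal pairwise
    comparison matrix A (A[j][i] == 1 / A[i][j]) using the classic
    geometric-mean (logarithmic least squares) method."""
    A = np.asarray(A, dtype=float)
    w = np.exp(np.log(A).mean(axis=1))  # row-wise geometric means
    return w / w.sum()                  # normalize to sum to 1

# A consistent 3x3 reciprocal matrix: alternative 1 is twice as
# preferred as alternative 2 and four times as preferred as 3.
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
print(priority_vector(A))  # -> approximately [4/7, 2/7, 1/7]
```

For a perfectly consistent matrix, as here, the geometric-mean and eigenvector methods coincide; they diverge as inconsistency grows.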
Abstract:
The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that may eventually lead to a systemic financial crisis. Effective tools are crucial, as they allow early policy actions to decrease or prevent the further build-up of risks, or to otherwise enhance the shock-absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: i) build-up of widespread imbalances, ii) exogenous aggregate shocks, and iii) contagion. Accordingly, these systemic risks are matched by three categories of analytical methods for decision support: i) early-warning, ii) macro stress-testing, and iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: i) to function as a display for individual data concerning entities and their time series, and ii) to serve as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods.
The following five questions comprise subsequent steps addressed in the process of this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. The thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: i) fuzzifications, ii) transition probabilities, and iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
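The SOM underlying the SOFSM can be illustrated with a minimal training loop. This is a didactic sketch of the standard algorithm only (the grid size, decay schedules, and synthetic data are illustrative), not the SOFSM implementation:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal Self-Organizing Map: units laid out on a 2-D grid
    learn codebook vectors that approximate the data distribution while
    preserving neighborhood relations on the grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    codebook = rng.normal(size=(h * w, data.shape[1]))
    # Grid coordinates of each unit, used for neighborhood distances.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)               # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5   # shrinking neighborhood radius
            bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))
            codebook += lr * nbh[:, None] * (x - codebook)
            step += 1
    return codebook.reshape(h, w, -1)

# Two well-separated clusters should map to different grid regions.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-3, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
som = train_som(data)
```

On top of such a trained map, the extensions the thesis names (fuzzy memberships, transition probabilities between units, network links) are computed as additional layers over the codebook.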
Abstract:
The ageing of the population forces society and public health care to change. To enable elderly people to continue living at home, the service system must adapt to the changing situation. The purpose of this Master's thesis is to identify customer-oriented service bundles offered close to the customer. The theoretical framework of the study is built on customer value creation and service offerings. The target group is 60- to 90-year-olds in the South Karelia region, and the data were collected from respondents with a postal questionnaire. The study is exploratory, and the results are interpreted using the methods of quantitative research and network analysis. The main results of the work are the identified customer segments and the service packages formed on the basis of their needs. The results indicate customers' needs and are also analyzed from the provider's point of view. In addition to the empirical results, the theoretical framework is developed further so that service-centric theories can be understood not only from the firm's perspective but also from the customer's.
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Thanks to much improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study focuses on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and the forming of a fiber suspension that is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process, it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows the computation of fiber length and curl index, correlating well with the ground-truth values. The bubble detection method, the second contribution, was developed to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas the accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied, and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images.
Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground-truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. Together, these four contributions assist in developing integrated, factory-level, vision-based process control.
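Of the fiber properties mentioned, the curl index has a compact common definition: contour length over end-to-end distance, minus one (zero for a perfectly straight fiber). A minimal sketch on a traced centerline, with an illustrative fiber (the thesis's actual image-processing pipeline is not reproduced here):

```python
import math

def curl_index(points):
    """Curl index of a traced fiber centerline: contour length divided
    by the end-to-end distance, minus one (0 for a straight fiber)."""
    contour = sum(math.dist(points[i], points[i + 1])
                  for i in range(len(points) - 1))
    straight = math.dist(points[0], points[-1])
    return contour / straight - 1.0

# A right-angle "bent" fiber: contour length 2.0, end-to-end sqrt(2).
fiber = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(round(curl_index(fiber), 3))  # -> 0.414
```

In practice the centerline points would come from the microscopic-image segmentation step the abstract describes.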
Abstract:
This research is an analysis of the value and content of local service offerings that enable longer periods of living at home for elderly people. Mobile health care and new distribution services have provided an interesting solution in this context. The research aims to shed light on the research question, 'How do we bundle services based on different customer needs?' A research process consisting of three main phases was applied for this purpose. During this process, elderly customers were segmented, the importance of services was rated, and service offerings were defined. Value creation and service offerings provide the theoretical framework for the research. The target group is South Karelia's 60- to 90-year-old individuals, and the data were acquired via a postal questionnaire. The research was conducted as exploratory research utilizing the methods of quantitative and social network analysis. The main results of the report are the identified customer segments and service packages that fit the segments' needs. The results indicate the needs of customers and are additionally analysed from the producer's point of view. In addition to the empirical results, the theoretical framework used has been developed further so that the service-related theories can be seen from the customer's point of view and not just from the producer's.
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make original, incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as the Bayesian Principal Component Algorithm (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as those using gene microarray techniques.
Such networks are typically very large and highly connected, so fast algorithms are needed for producing visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed: the algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
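The k-NN imputation that the comparison favoured on most data sets can be sketched as follows. This is a simplified version under stated assumptions: a neighbour must fully observe the entries being filled, and distance is averaged over jointly observed features; the data are illustrative:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaN entries of a samples-by-features matrix: for each row
    with gaps, average the corresponding entries of the k rows most
    similar on the jointly observed features (root-mean-square distance)."""
    X = np.array(X, dtype=float)
    out = X.copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        dists = []
        for j, other in enumerate(X):
            if j == i:
                continue
            shared = ~np.isnan(row) & ~np.isnan(other)
            # A usable neighbour shares features and observes the gaps.
            if shared.any() and not np.isnan(other[miss]).any():
                d = np.sqrt(((row[shared] - other[shared]) ** 2).mean())
                dists.append((d, j))
        dists.sort()
        neighbors = [j for _, j in dists[:k]]
        out[i, miss] = X[neighbors][:, miss].mean(axis=0)
    return out

X = [[1.0, 2.0, 3.0],
     [1.1, 2.1, np.nan],
     [0.9, 1.9, 2.9],
     [5.0, 6.0, 7.0]]
filled = knn_impute(X, k=2)  # the gap becomes (3.0 + 2.9) / 2 = 2.95
```

Weighting neighbours by inverse distance, as many published variants do, is a straightforward refinement of the plain average used here.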
Abstract:
In the design of electrical machines, efficiency improvements have become very important. However, there are at least two significant cases in which the compactness of electrical machines is critical and the tolerance of extremely high losses is valued: vehicle traction, where very high torque density is desired at least temporarily, and direct-drive wind turbine generators, whose mass should be acceptably low. As ever higher torque density and ever more compact electrical machines are developed for these purposes, thermal issues, i.e. the avoidance of over-temperatures and damage under high heat losses, are becoming of utmost importance. Excessive temperatures of critical machine components, such as insulation and permanent magnets, easily cause failures of the whole electrical equipment. In electrical machines with excitation systems based on permanent magnets, special attention must be paid to the rotor temperature because of the temperature-sensitive properties of permanent magnets. The allowable temperature of NdFeB magnets is usually significantly less than 150 °C. The practical problem is that the part of the machine where the permanent magnets are located should stay cooler than the copper windings, which can easily tolerate temperatures of 155 °C or 180 °C. Therefore, new cooling solutions should be developed in order to cool permanent magnet electrical machines with high torque density and, consequently, highly concentrated losses in their stators. In this doctoral dissertation, direct and indirect liquid cooling techniques for permanent magnet synchronous machines (PMSMs) with high torque density are presented and discussed. The aim of this research is to analyse the thermal behaviour of the machines using the most applicable and accurate thermal analysis methods and to propose new, practical machine designs based on these analyses.
Computational Fluid Dynamics (CFD) thermal simulations of the heat transfer inside the machines and lumped-parameter thermal network (LPTN) simulations, both presented herein, are used for the analyses. Detailed descriptions of the simulated thermal models are also presented. Most of the theoretical considerations and simulations have been verified by experimental measurements on a copper tooth-coil (motorette) and on various prototypes of electrical machines. The indirect liquid cooling systems of a 100 kW axial flux (AF) PMSM and a 110 kW radial flux (RF) PMSM are analysed here by means of simplified 3D CFD conjugate thermal models of parts of both machines. In terms of results, a significant temperature drop of 40 °C in the stator winding and 28 °C in the rotor of the AF PMSM was achieved by adding highly thermally conductive materials to the machine: copper bars inserted in the teeth, and potting material around the end windings. In the RF PMSM, the potting material resulted in a temperature decrease of 6 °C in the stator winding and of 10 °C in the rotor's embedded permanent magnets. Two types of unique direct liquid cooling systems for low-power machines are analysed herein to demonstrate the effectiveness of the cooling systems under highly concentrated heat losses. LPTN analysis and CFD thermal analysis (the latter being particularly useful for unique designs) were applied to simulate the temperature distribution within the machine models. Oil-immersion cooling provided good cooling capability for a 26.6 kW PMSM of a hybrid vehicle. A direct liquid cooling system for the copper winding with inner stainless steel tubes was designed for an 8 MW direct-drive PM synchronous generator. The design principles of this cooling solution are described in detail in this thesis.
The thermal analyses demonstrate that the stator winding and the rotor magnet temperatures are kept significantly below their critical temperatures with demineralized water flow. A comparison study of the coolant agents indicates that propylene glycol is more effective than ethylene glycol in arctic conditions.
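The LPTN approach reduces the machine to a network of thermal conductances between lumped nodes and solves a linear system for the steady-state temperatures. A hedged two-node sketch; all conductances, losses, and the coolant temperature are illustrative values, not the dissertation's:

```python
import numpy as np

def lptn_temperatures(G, P, T_coolant=40.0):
    """Steady-state lumped-parameter thermal network.
    G[i][j] (i != j) is the thermal conductance between nodes i and j
    in W/K (symmetric); G[i][i] is node i's conductance to the coolant.
    P[i] is the heat loss injected at node i in W. Returns absolute node
    temperatures: coolant temperature plus the solved temperature rises."""
    G = np.asarray(G, float)
    n = len(G)
    A = -G * (1 - np.eye(n))               # off-diagonal: -G_ij
    A[np.diag_indices(n)] = G.sum(axis=1)  # diagonal: sum of all links
    return T_coolant + np.linalg.solve(A, np.asarray(P, float))

# Two nodes: stator winding (0) and rotor magnets (1).
G = [[5.0, 0.5],    # winding: 5 W/K to coolant, 0.5 W/K to the rotor
     [0.5, 0.2]]    # rotor: poorly coupled to the coolant (0.2 W/K)
P = [500.0, 30.0]   # copper losses vs. magnet eddy-current losses
T = lptn_temperatures(G, P)
```

Even in this toy network the rotor ends up hotter than the winding despite far lower losses, which is exactly the design concern the abstract raises about temperature-sensitive magnets.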
Abstract:
The purpose of this study was to identify factors affecting an organization's internal knowledge networks and employees' network roles. The research problem was examined within the theoretical framework of the knowledge-based view of knowledge management, social capital, and network research. The study was carried out as a case study in a Finnish industrial organization. Both quantitative and qualitative research methods were used in the empirical part. The quantitative data were collected with a structured questionnaire and analyzed with social network analysis. The qualitative data were collected through interviews and analyzed abductively using content analysis. According to the results, knowledge networks and network roles are affected by both external and internal factors. External factors are those related to the environment and circumstances, while internal factors comprise a person's character traits, competence, motivation, and knowledge. According to the results of this study, internal factors explain the differences between employees. Employees' behaviour, motivation, attitudes, and competence can be influenced through human resource management practices, whereas a person's personality remains relatively stable. Previous research on knowledge management, knowledge networks, and network roles has not, however, examined the effect of personality traits on knowledge transfer.
Abstract:
This thesis uses an automatic pattern recognition algorithm and common dual moving-average crossover rules to explain the sell-buy imbalance of retail investors trading on the Stuttgart stock exchange, and thereby to answer the question: do retail investors base their trading decisions on technical analysis methods? Based on prior research on investor behaviour and the profitability of technical analysis, the baseline assumption was that retail investors would use technical analysis methods. The empirical study, based on data on DAX30 companies from 2009 to 2013, did not produce a sufficiently clear answer to the research question. Weak evidence, however, suggests that retail investors shift their trading behaviour in the direction indicated by certain patterns and crossover rules.
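The dual moving-average crossover rules referred to in this abstract have a standard form: signal "buy" when the faster average crosses above the slower one, and "sell" on the opposite cross. A minimal sketch (the window lengths and price series are illustrative, not the thesis's configuration):

```python
def sma(prices, n):
    """Simple moving average; None until n observations are available."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def crossover_signals(prices, fast=3, slow=5):
    """Dual moving-average rule: 'buy' when the fast SMA crosses above
    the slow SMA, 'sell' when it crosses below."""
    f, s = sma(prices, fast), sma(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if None in (f[i - 1], s[i - 1]):
            continue  # slow average not yet warmed up
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append((i, "buy"))
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append((i, "sell"))
    return signals

prices = [10, 9, 8, 7, 8, 9, 11, 12, 11, 9, 8, 7]
print(crossover_signals(prices))  # -> [(6, 'buy'), (10, 'sell')]
```

The empirical question in the thesis is then whether retail order imbalances cluster around such signal dates.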
Abstract:
This study presents an understanding of how a U.S.-based, international MBA school has been able to achieve competitive advantage within a relatively short period of time. A framework is built to comprehend how the dynamic capability and value co-creation theories are connected and to understand how dynamic capabilities have enabled value co-creation to happen between the school and its students, leading to such competitive advantage for the school. The data collection method followed a qualitative single-case study with a process perspective. Seven semi-structured interviews were conducted in September and October of 2015: one current employee of the MBA school was interviewed, with the other six being graduates and/or former employees of the school. In addition, the researcher has worked as a recruiter at the MBA school, making it possible to build bridges between the empirical findings and form them into a coherent whole. Data analysis was conducted by first identifying themes from the interviews, after which a narrative was written and a causal network model was built; thus, a combination of thematic analysis, narrative, and grounded theory was used as the data analysis method. This study finds that value co-creation is enabled by the dynamic capabilities of the MBA school; conversely, the capabilities would not be dynamic if value co-creation did not take place. Thus, this study shows that even though the two theories represent different levels of analysis, they are intertwined, and together they can help to explain competitive advantage. The case school's dynamic capabilities are identified as its sales & marketing capabilities and international market creation capabilities; thus, the study finds that the school does not only co-create value with existing students (customers) in the school setting; instead, most of the value co-creation happens between the school and the student cohorts (network) already in the recruiting phase.
Therefore, as a theoretical implication, the network should be considered part of the context. The main value created seems to lie in the MBA case school's international setting and networks. MBA schools around the world can learn from this study: schools should try to find their own niche and specialize, based on their own values and capabilities. With a differentiating focus and unique, practical content, schools can and should be well marketed and proactively sold in order to receive more student applications and enhance competitive advantage. Even though an MBA school can effectively be treated as a business, as the study shows, the main emphasis should still be on providing quality education. Good content with efficient marketing can be the winning combination for an MBA school.
Abstract:
The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches exist for the criticality classification of spare parts. The practical problem in this thesis is the lack of a generic analysis method for classifying spare parts of the case company's proprietary equipment. In order to find a classification method, a literature review of various analysis methods is required; the requirements of the case company also have to be recognized, which is achieved by consulting professionals in the company. The literature review shows that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective. This is a relevant perspective also for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision tree model is developed for classifying spare parts. The decision tree classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure, and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be utilized as a generic model for recognizing critical spare parts of other similar equipment, from which spare part recommendations can be created. The purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
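The abstract names the five criteria but not the tree itself; a hypothetical tree over the same criteria might look like the sketch below. The branch order and thresholds are illustrative assumptions, not the thesis's actual model:

```python
def classify_spare_part(safety_risk, availability_risk,
                        functional_criticality,
                        failure_predictable, failure_probability):
    """Hypothetical decision tree over the five criteria named in the
    abstract. Returns a criticality class from 1 (non-critical) to
    5 (highly critical). Thresholds and ordering are illustrative."""
    if safety_risk == "high":
        return 5                      # safety concerns dominate everything
    if availability_risk == "high":
        return 5 if functional_criticality == "high" else 4
    if functional_criticality == "high":
        # Unpredictable, likely failures of critical items rank higher.
        if not failure_predictable and failure_probability == "high":
            return 4
        return 3
    if failure_probability == "high":
        return 2
    return 1                          # non-critical

print(classify_spare_part("low", "low", "high", False, "high"))  # -> 4
```

In an AHP-combined variant, the criteria weights guiding such branch ordering would come from pairwise comparisons by the company's professionals.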
Abstract:
IT outsourcing (ITO) refers to the shift of IT/IS activities from inside an organization to external providers. In prior research, the governance of ITO is recognized as being of persistent strategic importance for practice, because it is tightly related to ITO success. Under the rapid transformation of the global market, the evolving practice of ITO requires updated knowledge on effective governance. However, research on ITO governance is still underdeveloped, owing to the lack of integrated theoretical frameworks and of empirical settings beyond dyadic client-vendor relationships. In particular, as multi-sourcing has become an increasingly common practice in ITO, its new governance challenges must be attended to by both ITO researchers and practitioners. To address this research gap, this study aims to understand multi-sourcing governance with an integrated theoretical framework incorporating both governance structure and governance mechanisms. The focus is on the emerging deviations among formal, perceived and practiced governance. With an interpretive perspective, a single case study is conducted with the mixed methods of Social Network Analysis (SNA) and qualitative inquiry. The empirical setting embraces one client firm and its two IT suppliers for IT infrastructure services. The empirical material is analyzed at three levels: within one supplier firm, between the client and one supplier, and among all three firms. Empirical evidence at all levels illustrates various deviations in governance mechanisms, through which emerging governance structures are shaped. This dissertation contributes to the understanding of ITO governance in three domains: the governance of ITO in general, the governance of multi-sourcing in particular, and research methodology. For ITO governance in general, this study identifies two research strands, governance structure and governance mechanisms, and integrates both concepts under a unified framework.
The composition of four research papers contributes to multi-sourcing research by illustrating the benefits of zooming in and out across the multilateral relationships with different aspects and scopes. Methodologically, the viability and benefits of mixed methods are illustrated and confirmed for both researchers and practitioners.
Abstract:
Abstract: Inverse decision analysis methods in environmental decision-making - inverse methods
Abstract:
The aim of this work is to harmonize uniform structures for the significant environmental aspects of UPM's paper and pulp mills and for their environmental risk management systems. This yields consistent objectives and analysis methods for the company's units. The harmonization process is part of the development of the company's environmental management system (EMS) as a whole, and the group-level EMS process in turn supports the development of the group's integrated management system. In addition, the case study of this work examined the integration potential of the risk management systems; such integration would better realize the synergy benefits of a large company and the interaction between its actors, and would improve the adaptability and usability of the risk management system. The work considers three examples, on the basis of which a proposal is made for harmonized significant environmental aspects and for the parameters of the risk management systems. The research problem is approached through interviews, the literature, a study the company commissioned from PWC, and the author's own conclusions. The work also presents the verification of the effectiveness of the environmental management system in relation to environmental performance variables. The foundation for the goal of continuous improvement is organizational learning, among individual employees, teams, and different units alike; it provides the impetus to exploit intangible assets, such as environmental know-how, in the best possible way. The most important outcomes of the work are the proposals for harmonized significant environmental aspects and for the defined components of the environmental risk management system, namely definitions and scales for risk probability, consequences, and risk classes. As the final part of the work, a case study lays the groundwork for integrating the two different risk management systems of the Rauma mill's wastewater treatment plant.
Abstract:
The purpose of this work was to study pitch control at a kraft pulp mill and methods for analyzing extractives in pulp and filtrate. The goal was to develop an on-line measurement method for extractives at a kraft pulp mill. The literature part presents the kraft pulping process and the behaviour of extractives in the process; it also covers the most important pitch control methods and the analysis of extractives. In the experimental part, the extractives contents of 29 pulp and wash filtrate samples were determined. In addition, the turbidity and chemical oxygen demand of the filtrate samples were determined in the laboratory. On-line measurements were made on the return filtrate of the DD washer, from which conductivity, temperature, refractive index, and pH were measured. An attempt was made to measure turbidity with an on-line nephelometer, but the available instrument was not suitable for measuring the dark filtrate. Based on the results obtained, measuring the properties of the wash filtrate cannot reliably estimate the extractives content of the pulp: the coefficient of determination between the extractives contents of the filtrate and the pulp was 54%. When the content of unsaponifiable extractives is particularly high, the extractives content of the filtrate does not correlate with the amount of extractives in the pulp. The measurement method is therefore unsuitable, as it fails precisely when knowing the true extractives content would be most important for dosing pitch control chemicals. The turbidity of the filtrate depended on its extractives content with a coefficient of determination of 80%. Turbidity measurement is a fast and relatively reliable method for estimating the extractives content of filtrates, but it requires an instrument that uses multi-detector technology. For on-line measurements, the equipment should be fitted with an automatic washing device, since the instrument's lenses foul easily in the process stream.
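The coefficients of determination reported in this abstract (54% and 80%) are the standard R² of a simple linear fit between two measured series. A minimal sketch; the data below are illustrative only, since the thesis's 29 samples are not reproduced here:

```python
def r_squared(x, y):
    """Coefficient of determination of the ordinary least-squares line
    y ~ a*x + b, i.e. the 'degree of explanation' between two series.
    For simple linear regression this equals the squared Pearson
    correlation between x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy * sxy / (sxx * syy)

# Illustrative turbidity (NTU) vs. extractives content (%) pairs.
turbidity = [10, 20, 30, 40, 50]
extractives = [0.11, 0.22, 0.28, 0.43, 0.50]
print(round(r_squared(turbidity, extractives), 3))  # -> 0.985
```

An R² of 0.80 between filtrate turbidity and extractives content, as the thesis reports, would correspond to a noticeably noisier scatter than this toy example.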