909 results for Business Intelligence, Data Warehouse, Sistemi Informativi
Abstract:
The newsworthiness of an event is partly determined by how unusual it is, and this paper investigates the business cycle implications of this fact. In particular, we analyze the consequences of information structures in which some types of signals are more likely to be observed after unusual events. Such signals may increase both uncertainty and disagreement among agents and, when embedded in a simple business cycle model, can help us understand why we observe (i) occasional large changes in macroeconomic aggregate variables without a correspondingly large change in underlying fundamentals, (ii) persistent periods of high macroeconomic volatility, and (iii) a positive correlation between absolute changes in macro variables and the cross-sectional dispersion of expectations as measured by survey data. These results are consequences of optimal updating by agents when the availability of some signals is positively correlated with tail events. The model is estimated by likelihood-based methods using individual survey responses and a quarterly time series of total factor productivity along with standard aggregate time series. The estimated model suggests that there have been episodes in recent US history when the impact on output of innovations to productivity of a given magnitude was more than eight times as large compared to other times.
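The core mechanism (signals whose availability is correlated with tail events raising disagreement after unusual realisations) can be illustrated with a minimal Monte Carlo sketch. All distributional choices below — the Gaussian prior, the tail threshold, the signal precisions and availability probabilities — are illustrative assumptions, not the paper's estimated specification:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SIMS, N_AGENTS = 4000, 400

disp_tail, disp_normal = [], []
for _ in range(N_SIMS):
    theta = rng.normal()                      # underlying fundamental, prior N(0, 1)
    x = theta + rng.normal(size=N_AGENTS)     # private signals, noise variance 1
    y = theta + rng.normal(scale=0.22)        # precise extra signal (variance ~0.05)
    # the extra signal is available mainly after unusual (tail) realisations
    p_extra = 0.6 if abs(theta) > 1.5 else 0.05
    has_extra = rng.random(N_AGENTS) < p_extra
    # Bayesian posterior means via Gaussian conjugate updating (precision weighting)
    post = x / 2.0                                        # prior + private signal only
    post[has_extra] = (x[has_extra] + 20.0 * y) / 22.0    # add the precision-20 signal
    disagreement = post.std()                 # cross-sectional dispersion of beliefs
    (disp_tail if abs(theta) > 1.5 else disp_normal).append(disagreement)

print(f"mean disagreement after tail events: {np.mean(disp_tail):.3f} "
      f"vs normal times: {np.mean(disp_normal):.3f}")
```

Because only some agents receive the extra signal after a tail event, their posteriors separate from those of uninformed agents, so the cross-sectional dispersion of expectations is higher after unusual events, matching stylised fact (iii).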
Abstract:
Résumé (translated): This thesis is devoted to the analysis, modeling and visualisation of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences.
The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organising maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and through machine learning principles. Experimental variography, which studies the relationships between pairs of points, is a basic tool of the geostatistical analysis of anisotropic spatial correlations, detecting the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the application of the k-nearest neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used in numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals; the classification of soil types and hydrogeological units; uncertainty mapping for decision support; and natural hazard (landslide, avalanche) assessment. Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a user-friendly and easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to detect the presence of spatial patterns, at least those described by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, soil type and hydrogeological unit classification, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
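The GRNN mentioned above is, at its core, Nadaraya-Watson kernel regression over the training locations: a prediction is a kernel-weighted average of observed values. The following minimal sketch uses synthetic data and an illustrative Gaussian bandwidth; it shows the idea for 2-D spatial interpolation and is not the thesis's Machine Learning Office implementation:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """GRNN / Nadaraya-Watson kernel regression: each prediction is a
    Gaussian-kernel-weighted average of the training targets."""
    # pairwise squared distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # kernel weights
    return (w @ y_train) / w.sum(axis=1)        # weighted average of targets

# synthetic spatial field: a smooth function of 2-D coordinates
rng = np.random.default_rng(1)
X = rng.random((500, 2))                        # training locations in [0, 1]^2
y = np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1])   # observed values (noise-free)

X_new = rng.random((50, 2))
y_hat = grnn_predict(X, y, X_new, sigma=0.05)
y_true = np.sin(3 * X_new[:, 0]) + np.cos(3 * X_new[:, 1])
print(float(np.abs(y_hat - y_true).max()))      # small interpolation error
```

The single bandwidth parameter `sigma` is what makes the GRNN attractive for automatic mapping: it can be tuned by cross-validation without any iterative training.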
Abstract:
The U.S. Environmental Protection Agency (EPA), the Alcoa – Davenport Works Facility (Alcoa), and concerned citizens and community leaders of Riverdale, Iowa requested that the Iowa Department of Public Health (IDPH) Hazardous Waste Site Health Assessment Program evaluate the health impacts of exposure to volatile organic vapors detected within residences located immediately to the west of the Alcoa property. This health consultation addresses inhalation exposure for individuals who may have occupied the currently vacant residences in which the air sampling was completed.
Abstract:
Medicine counterfeiting is a crime that has increased in recent years and now involves the whole world. Health and economic repercussions have led pharmaceutical industries and agencies to develop many measures to protect genuine medicines and differentiate them from counterfeits. Detecting a counterfeit is chemically relatively simple for specialists, but much more information can be gained from the analyses in a forensic intelligence perspective. Analytical data can feed criminal investigation and law enforcement by detecting and helping to understand the criminal phenomenon. Profiling seizures using chemical and packaging data constitutes a strong way to detect organised production and industrialised forms of criminality, and is the focus of this paper. Thirty-three seizures of a commonly counterfeited type of capsule were studied. The results of the packaging and chemical analyses were gathered in an organised database. Strong links were found between the seizures at the different production steps, indicating the presence of a main counterfeit network dominating the market. The interpretation of the links together with circumstantial data provided information about the production and distribution of counterfeits coming from this network. This forensic intelligence perspective has the potential to be generalised to other types of products. It may be the only reliable approach to help understand the organised crime phenomenon behind counterfeiting and to enable efficient strategic and operational decision making in an attempt to dismantle counterfeit networks.
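Seizure profiling of the kind described is often sketched as a pairwise similarity computation over chemical and packaging features, followed by thresholding to declare links. The features, similarity measure and threshold below are purely illustrative assumptions, not the paper's method:

```python
import numpy as np
from itertools import combinations

# illustrative seizure records: chemical profile (relative peak areas) + packaging code
seizures = {
    "S1": (np.array([0.50, 0.30, 0.15, 0.05]), "blister-A"),
    "S2": (np.array([0.52, 0.28, 0.15, 0.05]), "blister-A"),
    "S3": (np.array([0.10, 0.20, 0.30, 0.40]), "blister-B"),
}

def cosine(a, b):
    # cosine similarity between two chemical profiles
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# link two seizures when chemical profiles are near-identical AND packaging matches
links = [
    (i, j)
    for (i, (pi, ki)), (j, (pj, kj)) in combinations(seizures.items(), 2)
    if cosine(pi, pj) > 0.99 and ki == kj
]
print(links)  # S1 and S2 come out linked; S3 stays separate
```

Links accumulated this way across many seizures form the graph in which dominant production networks show up as large connected groups.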
Abstract:
This paper proposes the use of an autonomous assistant mobile robot to monitor the environmental conditions of a large indoor area and to develop an ambient intelligence application. The mobile robot uses single high-performance embedded sensors to collect and geo-reference environmental information such as ambient temperature, air velocity and orientation, and gas concentration. The data collected with the assistant mobile robot are analyzed in order to detect unusual measurements or discrepancies and to develop focused corrective ambient actions. The paper shows an example of measurements performed in a research facility, which enabled the detection and location of an uncomfortable temperature profile inside an office of the facility. The ambient intelligence application was developed by performing localized ambient measurements that were analyzed in order to propose ambient actuations correcting the uncomfortable temperature profile.
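Detecting "unusual measurements" in a geo-referenced sensor log of this kind can be done with a simple robust z-score over the readings. The sample data, the median/MAD statistic and the 3.5 cutoff below are illustrative assumptions, not the paper's algorithm:

```python
import statistics

# illustrative geo-referenced temperature log: (x, y, temperature in °C)
readings = [
    (0.0, 0.0, 21.4), (1.0, 0.0, 21.6), (2.0, 0.0, 21.5),
    (0.0, 1.0, 21.7), (1.0, 1.0, 27.9), (2.0, 1.0, 21.3),  # hot spot at (1, 1)
    (0.0, 2.0, 21.5), (1.0, 2.0, 21.6), (2.0, 2.0, 21.4),
]

temps = [t for _, _, t in readings]
med = statistics.median(temps)
mad = statistics.median(abs(t - med) for t in temps)  # robust spread estimate

# flag readings far from the median (robust z-score above 3.5, a common convention)
anomalies = [(x, y, t) for x, y, t in readings
             if mad and abs(t - med) / (1.4826 * mad) > 3.5]
print(anomalies)  # the uncomfortable temperature reading, with its location
```

Because each reading carries its coordinates, the flagged anomaly directly locates the zone where a corrective ambient action should be focused.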
Abstract:
The aim of the study is to examine the software delivery process of a target company operating in the telecommunications sector. The study focuses on modelling the delivery process, defining roles and areas of responsibility, identifying problem areas, and proposing improvements to the process. These objectives are examined through theoretical process modelling techniques and the SECI process framework of knowledge management. The most important source of data was an interview study in which all units involved in the target process participated. The modelled delivery process gave the target company a better understanding of the process under examination and of the roles and responsibilities of the units working within it. Suggested improvements included defining the channels for knowledge sharing, improving trust and social networks, and implementing knowledge management on a broad scale.
Abstract:
The production and use of false identity and travel documents in organized crime represent a serious and evolving threat. The present-day fight against this criminal problem is, however, essentially driven by a case-by-case perspective, and thus suffers from linkage blindness and a limited analysis capacity. To assist in overcoming these limitations, a process model was developed using a forensic perspective. It guides the systematic analysis and management of seized false documents to generate forensic intelligence that supports strategic and tactical decision making in an intelligence-led policing approach. The model is articulated on a three-level architecture that aims to assist in detecting and following up on general trends, production methods, and links between cases or series. Analyses of a large dataset of counterfeit and forged identity and travel documents make it possible to illustrate the model, its three levels and their contribution. Examples point out how the proposed approach assists in detecting emerging trends, in evaluating the black market's degree of structure, in uncovering criminal networks, in monitoring the quality of false documents, and in identifying their weaknesses to orient the conception of more secure travel and identity documents. The process model proposed is thought to have general application in forensic science and can readily be transposed to other fields of study.
Abstract:
The aim of this Master's thesis was to find out what kinds of changes a new SAP-based system causes in the procurement processes of a forest industry company and in the work of its buyers. The situation was also examined as a business process re-engineering project. The study was a qualitative case study whose sources were interviews and process descriptions. The procurement processes have been standardised and described in detail, because the system is to be rolled out gradually at all of the company's sites. The theory part covered global sourcing, different types of orders, electronic business, and the implementation of re-engineered business processes together with the related challenges. The company aims to develop its procurement and to exploit the economies of scale brought by its size, and the new system is a significant part of this development work. Based on the interviews, the new system is welcomed and much is expected of it. Implementing the system will be a challenging task, because there are many users and the end users will enter more transactions into the system than before, in the form of order initiatives and call-offs. The role of the mill buyers will change: as routine ordering decreases, they will act as links between the centralised procurement organisation and the mill, sharing information in both directions.
Abstract:
Forensic intelligence is a distinct dimension of forensic science. Forensic intelligence processes have mostly been developed to address either a specific type of trace or a specific problem. Even though these empirical developments have led to successes, they are trace-specific in nature and contribute to the generation of silos which hamper the establishment of a more general and transversal model. Forensic intelligence has shown some important perspectives, but more general developments are required to address persistent challenges; this will ensure the progress of the discipline as well as its widespread implementation in the future. This paper demonstrates that the description of forensic intelligence processes, their architectures, and the methods for building them can, at a certain level, be abstracted from the type of trace considered. A comparative analysis is made between two forensic intelligence approaches developed independently in Australia and in Europe for monitoring apparently very different kinds of problems: illicit drugs and false identity documents. An inductive effort is pursued to identify similarities and to outline a general model. Besides breaking barriers between apparently separate fields of study in forensic science and intelligence, this transversal model would assist in defining forensic intelligence, its role and place in policing, and in identifying its contributions and limitations. The model will facilitate the paradigm shift from the current case-by-case reactive attitude towards a proactive approach, by serving as a guideline for the use of forensic case data in an intelligence-led perspective. A follow-up article (part II) will specifically address issues related to comparison processes, decision points and organisational matters in forensic intelligence.
Abstract:
The aim of the study was to identify the state of the key customer relationships of the company under study (Starlike Oy) and to create strategies for developing these relationships. The state of the customer relationships was analysed by examining three components of a relationship: atmosphere, transaction history and actors. Strategy creation was in turn supported by a model of the relative weights of the relationship components and a problems/needs model of the customer relationship. The research method was a survey whose data were collected by an e-mail questionnaire during autumn 2006 from both the customers and the company. The customer relationships were found to fall into five levels, with the distribution following a normal distribution. No systematic difference was found between the customer's and the seller's views of the current state of a relationship. On the other hand, the seller was found to set the objectives of a relationship systematically higher than the customer did. The objectives were further found to split in two at each relationship level: about half of the customers wanted to develop their relationship, and about half were satisfied with its current state.
Abstract:
The aim of the study was to find out how the service strategy of a company's Competitive Intelligence (CI) unit should be built, and which factors should be taken into account in building it, so that it would best support the integration of the CI unit with the rest of the organisation. The study was conducted as a qualitative case study. The data were collected from the literature and articles and by conducting theme interviews. The theoretical basis of the study drew on views of market, competitor and business intelligence, services marketing, and knowledge management. The empirical data were collected by interviewing ten customers of the CI unit in the target company. On the basis of both the theoretical sources and the material collected in the target company, the essential elements to consider when designing a service strategy that improves the integration of CI with the rest of the organisation are the service/product, distribution, and communication and interaction. Interaction plays a special and central role in building the service strategy; its importance is emphasised because it affects the other elements of the strategy, such as the quality of information and proactive information delivery.
Abstract:
The study examines alternative forward hedging strategies in a business unit in the forest industry. The purpose of the back-testing is to evaluate how successfully the alternative strategies hedge the case company's cash flows, using three evaluation criteria: the variability of individual foreign currency cash flows; the variability of the total foreign currency cash flow; and hedging gains and losses. The theoretical framework of the study examines corporate decision making and the currency risk hedging process, and presents the company's alternative hedging strategies. The empirical data are based on the case company's historical sales figures and were collected from the company's information system; the other data used in the study were collected from various databases. The results show that hedging reduces the variability of cash flows. The financial results of hedging are, however, highly dependent on the chosen hedging strategy, which can lead to significant hedging gains but equally to significant losses. Management's views and risk tolerance ultimately determine which strategy the company will follow.
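The back-testing idea — comparing the variability of hedged versus unhedged foreign currency cash flows, plus the hedging gains and losses — can be sketched as below. The random-walk exchange rate, the zero interest rate differential (forwards priced at spot), and the full forward strip are simplifying assumptions, not the case company's data or strategy:

```python
import numpy as np

rng = np.random.default_rng(7)
MONTHS, SALES_USD = 36, 100.0                 # fixed monthly USD-denominated sales

# spot EUR/USD as a random walk; forwards priced at spot (zero rate differential assumed)
spot = 0.90 + np.cumsum(rng.normal(0.0, 0.02, MONTHS))

unhedged = SALES_USD * spot                        # convert each month at the spot rate
hedged = SALES_USD * spot[0] * np.ones(MONTHS)     # full strip locked at the initial forward
pnl = hedged - unhedged                            # hedging gains (+) and losses (-)

print(f"cash flow std: unhedged {unhedged.std():.2f}, hedged {hedged.std():.2f}")
```

Under these assumptions the fully hedged strategy eliminates cash flow variability entirely, while `pnl` shows that the same strategy produces gains or losses month by month depending on how the spot rate moves — exactly the trade-off the three evaluation criteria are designed to measure.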
Abstract:
The aim of this Master's thesis is to find out what role trust plays in a B2B customer relationship: what the characteristics of a B2B relationship are, what the role and nature of trust is, and what the dynamics of trust are in a B2B customer relationship. The objectives were pursued through qualitative research. The data were collected by interviews and analysed manually by theme. The results show that a B2B customer relationship is a demanding form of cooperation that offers both parties benefits as well as opportunities to develop and succeed. Trust is a basic prerequisite of the relationship and of successful cooperation. It is based on a good reputation, shared history and experiences, and it is needed especially in communication, learning and problem solving. In addition to personal contacts and the personal chemistry between partners, the most effective ways to build trust are keeping promises and excellent day-to-day business with the customer.
Abstract:
VALOSADE is a research project of Professor Anita Lukka's VALORE research team at Lappeenranta University of Technology. VALOSADE is included in the ELO technology programme of Tekes. SMILE is one of the four subprojects of VALOSADE. The SMILE study focuses on the case of a company network composed of small and micro-sized mechanical maintenance service providers and forest industry companies as large-scale customers. The basic theme of the SMILE study is communication and e-business in supply and demand networks. The aim of the study is to develop an e-business strategy, e-business model and e-processes among the local SME service providers and, on the other hand, between the local service provider network and the forest industry customers in a maintenance and operations service business. A literature review, interviews and benchmarking are used as research methods in this qualitative case study. The first SMILE report, 'Ebusiness between Global Company and Its Local SME Supplier Network', concentrated on creating background for the SMILE study by examining general trends of e-business in the supply chains and networks of different industries. This second phase of the study concentrates on the case network background, such as business relationships, information systems and business objectives; the core processes in the maintenance and operations service network; development needs in communication among the network participants; and ICT solutions that respond to needs in a changing environment. In the theory part of the report, different e-business models and frameworks are introduced, and these models and frameworks are compared with the empirical case data. From that analysis of the empirical data, the recommendations for the development of the network information system are derived. In a process industry such as the forest industry, it is crucial to achieve a high level of operational efficiency and reliability, which sets great requirements for maintenance and operations.
Therefore, partnerships or strategic alliances are needed between the network participants. In partnerships and alliances, deep communication is important, and therefore the information systems in the network are also critical. Communication, coordination and collaboration will increase in the case network in the future, because network resources must be optimised to improve the competitive capability of the forest industry customers and the efficiency of their service providers. At present, e-business systems are not common in this maintenance network. A network information system between the forest industry customers and their local service providers is actually the only genuine network information system in the whole network; however, its utilisation has been quite insignificant, and the current system does not add enough value either for the customers or for the local service providers. At present, the network information system is an infomediary that shares static information with the network partners. The network information system should be a transaction intermediary that integrates the internal processes of the network companies; a network information system that provides common standardised processes for the local service providers; and an infomediary that shares static and dynamic information at the right time, with the right partner, at the right cost, in the right format and at the right quality. This study provides recommendations on how to develop the system in the future so that it adds value to the network companies. E-business scenarios, vision, objectives, strategies, application architecture, e-business model, core processes and development strategy must be considered when the network information system is developed in the next step. The core processes in the case network are demand/capacity management, customer/supplier relationship management, service delivery management, knowledge management and cash flow management.
Most of the benefits of e-business solutions come from making operational-level processes electronic, such as service delivery management and cash flow management.
Abstract:
The use by police services and investigating agencies of forensic data in an intelligence perspective is still fragmentary and to some extent ignored. In order to increase the efficiency of criminal investigation in targeting illegal drug trafficking organisations and to provide valuable information about their methods, it is necessary to include and interpret objective drug analysis results already during the investigation phase. The value of visual, physical and chemical data from seized ecstasy tablets as a support for criminal investigation at strategic and tactical levels has been investigated. In a first phase, different characteristics of ecstasy tablets were studied in order to define their relevance, variation, correlation and discriminating power in an intelligence perspective. Over 5 years, more than 1200 cases of ecstasy seizures (concerning about 150,000 seized tablets) coming from different regions of Switzerland (the City and Canton of Zurich and the Cantons of Ticino, Neuchâtel and Geneva) were systematically recorded. This turned out to be a statistically representative database including both large and small cases. During the second phase, various comparison and clustering methods were tested and evaluated with respect to the types and relevance of tablet characteristics, thus increasing knowledge about synthetic drugs, their manufacture and their trafficking. Finally, analytical methodologies were investigated and formalised, applying traditional intelligence methods. In this context, classical tools used in criminal analysis (such as the I2 Analyst Notebook, I2 Ibase, etc.) were tested and adapted to address the specific needs of forensic drug intelligence. The interpretation of the resulting links provides valuable information about criminal organisations and their trafficking methods. In the final part of the thesis, practical examples illustrate the use and value of such information.
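Comparison and clustering of tablet characteristics, as described, can be sketched as linking seizures whose physical/chemical profiles fall within per-feature tolerances and then reading off connected components as candidate production batches. The profiles, features and tolerances below are illustrative assumptions, not the thesis's data or comparison metric:

```python
import numpy as np

# illustrative tablet profiles: (diameter mm, weight mg, MDMA content %)
profiles = {
    "A": np.array([8.1, 250.0, 32.0]),
    "B": np.array([8.0, 252.0, 31.5]),   # close to A -> same candidate cluster
    "C": np.array([9.2, 310.0, 18.0]),
}

tolerance = np.array([0.2, 5.0, 1.0])    # per-feature linkage tolerances (assumed)

def similar(p, q):
    # link two seizures when every feature differs by less than its tolerance
    return bool(np.all(np.abs(p - q) < tolerance))

# connected components of the similarity graph = candidate production clusters
clusters, seen = [], set()
for name in profiles:
    if name in seen:
        continue
    component, stack = set(), [name]
    while stack:
        cur = stack.pop()
        if cur in component:
            continue
        component.add(cur)
        stack.extend(o for o in profiles
                     if o not in component and similar(profiles[cur], profiles[o]))
    seen |= component
    clusters.append(sorted(component))
print(clusters)  # seizures grouped into candidate clusters
```

Using connected components rather than pairwise links alone matters because tolerance-based similarity is not transitive: a chain of pairwise-similar seizures is still grouped into one candidate batch.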