96 results for Abstraction Hierarchy
Abstract:
This Master's thesis examined the effect of organisational culture on knowledge sharing and how knowledge sharing can be promoted in an expert organisation. The aim was, based on a commission, to develop the knowledge-sharing practices of Fingrid Oyj and to define for the company the characteristics of a target culture that management can commit to and towards which the organisation can develop. The study was carried out mainly as qualitative research, with action research as the research approach. Earlier studies on the connection between knowledge sharing and organisational culture were reviewed. The views of the case company's personnel on the current organisational culture were surveyed using Cameron & Quinn's Competing Values Framework. Desk research covered the organisation's documents, strategy, values and guidelines. In addition, 10 thematic interviews were conducted on the company's organisational culture and on means of promoting knowledge sharing. According to the results, organisational culture and knowledge sharing are connected. This was confirmed not only by earlier studies but also by the practices of the case company. Cultures in which interaction is open, power relations are flat and collective effort is encouraged over individual performance favour knowledge sharing over knowledge hoarding. The dominant features of the case company's organisational culture turned out to be hierarchy and clan culture: hierarchy is visible in the steering of the company's operations as abundant guidelines and rules, yet the company's atmosphere is informal, the organisational structure is flat and decision-making power has been delegated downwards. The company's culture was found to support knowledge-sharing practices. In the action research, the characteristics of the target culture were defined for Fingrid Oyj together with top management, it was outlined how the culture should be reflected in supervisory work, and proposals were drawn up for developing knowledge sharing in the company.
Abstract:
This thesis applies the customer value hierarchy model to forestry in order to determine strategic options for enhancing the value of LiDAR technology in Russian forestry. The study is conducted as a qualitative case study with semi-structured interviews as the main source of primary data. The customer value hierarchy model constitutes the theoretical basis for the research. Secondary data incorporates information on forest resource management, LiDAR technology and Russian forestry. The model is operationalised using forestry literature and forms the basis for the analysis of the primary data. The analysis of primary data, coupled with an understanding of the Russian forest inventory system and of global forest inventory practice, leads to conclusions on the criteria for selecting forest inventory methods and on the organizations that would benefit the most from the use of LiDAR technology. The thesis recommends strategic options for enhancing the value of LiDAR technology in Russian forestry.
Abstract:
The EU waste hierarchy places material recovery ahead of energy recovery in waste treatment. The EU has set ambitious recycling targets: 50% of household waste by weight must be directed to recycling by 2020. In Finland, efforts have been made to move away from landfills by increasing waste incineration capacity. In terms of waste recovery the situation in Finland is good, but meeting the recycling targets with current measures appears unlikely. This thesis surveys which mechanical waste separation methods are in use around the world and how effective they are. The aim of the work is to examine whether recycling in Finland could be improved by mechanical treatment of municipal solid waste. In addition to a literature review, mechanical separation chains were simulated and the results compared with the level of source separation in Finland. Based on this study, no single mechanical separation method is effective enough to separate recyclable materials from municipal solid waste. Mechanical separation methods must be combined into sorting lines, whose optimisation depends on many factors. The design of a sorting line is affected, among other things, by the quality of the input material and the intended uses of the end products. The biowaste contained in municipal solid waste easily contaminates other waste and hampers the reuse of mechanically separated fractions. Removing biowaste from the rest of the waste would therefore be of primary importance for the efficiency of mechanical separation. Mechanical separation chains remove biowaste and metals effectively, but for glass and fibres the efficiency of the separation chains remains low. For plastics, mechanical separation can at best be very effective, although the requirements for input material quality are high. For plastics, simultaneously improving source separation and mechanical separation could offer a way to raise the recycling rate.
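As a rough illustration of the kind of separation-chain simulation referred to above, the following Python sketch propagates a municipal waste stream through a few mechanical separation stages and reports the recovered and residual masses. The stage names, composition shares and recovery efficiencies are hypothetical placeholders, not values from the thesis.

```python
# Illustrative mass-balance sketch of a mechanical separation chain.
# All compositions and recovery rates below are hypothetical.

# Input municipal solid waste composition (mass fractions)
composition = {"biowaste": 0.35, "plastics": 0.15, "fibre": 0.20,
               "metal": 0.05, "glass": 0.05, "other": 0.20}

# Each stage targets one fraction with a given recovery efficiency.
stages = [
    ("drum_screen",   "biowaste", 0.80),
    ("magnetic_sep",  "metal",    0.90),
    ("near_infrared", "plastics", 0.70),
]

def run_chain(feed_kg, composition, stages):
    """Propagate a feed mass through the stages; return recovered masses
    per stage and the residual stream."""
    stream = {k: feed_kg * v for k, v in composition.items()}
    recovered = {}
    for name, target, efficiency in stages:
        captured = stream[target] * efficiency
        stream[target] -= captured
        recovered[name] = captured
    return recovered, stream

recovered, reject = run_chain(1000.0, composition, stages)
print("Recovered per stage [kg]:", recovered)
print("Residual stream to energy recovery [kg]:", round(sum(reject.values()), 1))
```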
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking had already been applied at a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that model building was required to account for the systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour so as to reflect system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology produces a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling thus bridges modern biology and computer science, providing a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques and perturbation analysis. Computational biomodels have grown considerably in size in recent years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology. The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative: reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeler aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
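To make the idea of stepwise quantitative model refinement concrete, here is a minimal Python sketch of one data refinement step on a toy reaction-network model: a species is replaced by two refined variants while its initial amount and production flux are partitioned between them. The model, species names and shares are hypothetical; the thesis develops the general framework and its fit-preservation conditions.

```python
# Toy refinement step: split species "P" into "P_a" and "P_b" while keeping
# the total initial amount and the total production flux unchanged.
from copy import deepcopy

model = {
    "species":   {"S": 10.0, "P": 0.0},
    "reactions": [{"rate": 0.4, "reactants": ["S"], "products": ["P"]}],
}

def refine_species(model, species, variants, shares):
    """Replace `species` by `variants`, splitting its initial amount by
    `shares` and duplicating every reaction that involves it."""
    refined = deepcopy(model)
    amount = refined["species"].pop(species)
    for v, share in zip(variants, shares):
        refined["species"][v] = amount * share
    new_reactions = []
    for rxn in refined["reactions"]:
        if species not in rxn["reactants"] + rxn["products"]:
            new_reactions.append(rxn)
            continue
        producing = species in rxn["products"]
        for v, share in zip(variants, shares):
            copy_ = deepcopy(rxn)
            copy_["reactants"] = [v if s == species else s for s in copy_["reactants"]]
            copy_["products"]  = [v if s == species else s for s in copy_["products"]]
            # Production is split across variants; consumption keeps the same
            # rate constant because the concentration itself is now split.
            copy_["rate"] = rxn["rate"] * share if producing else rxn["rate"]
            new_reactions.append(copy_)
    refined["reactions"] = new_reactions
    return refined

refined = refine_species(model, "P", ["P_a", "P_b"], [0.5, 0.5])
print(refined["reactions"])
```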
Abstract:
The research problem was to increase knowledge of how customer value is formed in maintenance services for industrial gearboxes. The aim of the work was to construct an evaluation model for identifying the components of customer value and for ranking them in order of importance. The research problem was approached from three theoretical perspectives: (1) the definition of the value concept and of the performance components closely linked to value, (2) the significance and role of maintenance in a company's business, and (3) the significance of the customer-supplier relationship in the formation of customer value. Based on these theoretical perspectives, a value hierarchy suited to the context of industrial gearbox maintenance was constructed; the customer value of the maintenance service arises from the combined effect of its components, the value elements. In the empirical part of the work the model was tested in practice. To rank the value elements of the hierarchy and to find value elements common to all customers, the Analytic Hierarchy Process (AHP) was used as part of customer group interviews. The interviews were carried out as focus group interviews, and the interviewed groups consisted of the maintenance managers of the selected customers. The results showed that the model reveals customer-specific value elements and their order of importance, but the customer-specific differences in value formation are large. Background factors affecting customer value, at least the level of the cooperation relationship and the level of the services being exchanged, were found to partly explain the differences between customers. In addition, the dynamic nature of the industrial maintenance operating environment, in which the interdependencies between the various components change constantly, should also be taken into account when assessing the formation of customer value.
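The following is a minimal Python sketch of the AHP computation that typically underlies such rankings: priority weights are derived from a pairwise comparison matrix via its principal eigenvector and a consistency ratio is checked. The comparison values and element names are hypothetical and do not come from the study.

```python
# Minimal AHP sketch: priorities from a Saaty-scale pairwise comparison matrix.
import numpy as np

# Hypothetical comparisons for three value elements, e.g.
# (availability, response time, cost); A[i, j] = importance of i over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.10 is commonly accepted); random index for n = 3 is 0.58.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("priorities:", np.round(weights, 3), "CR:", round(cr, 3))
```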
Abstract:
ICT contributes about 0.83 GtCO2 of emissions, of which 37% comes from telecoms infrastructure. At the same time, the increasing cost of energy has been hindering the industry in providing more affordable services for users. One source of these problems is said to be the rigidity of current network infrastructures, which limits innovation in the network. SDN (Software Defined Network) has emerged as one of the prominent solutions with its ideas of abstraction, visibility, and programmability in the network. Nevertheless, significant effort is still needed to actually utilize it to create a more energy- and environmentally friendly network. In this paper, we propose and develop a platform for developing ecology-related SDN applications. The main approach we take in realizing this goal is to maximize the abstractions provided by OpenFlow and to expose RESTful interfaces to modules that enable energy saving in the network. While OpenFlow is intended to be the standard SDN protocol, some mechanisms are still not defined in its specification, such as settings related to Quality of Service (QoS). To address this, we created REST interfaces for setting QoS on the switches, which can maximize network utilization. We also created a module for minimizing the network resources required to deliver packets across the network. This is achieved by utilizing redundant links when they are needed, but disabling them when the load in the network decreases. The use of multiple paths in a network is also evaluated for its benefit in terms of transfer rate improvement and energy savings. We hope the developed framework will be beneficial for developers creating applications that support environmentally friendly network infrastructures.
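As a hedged illustration of what a RESTful QoS interface of this kind might look like from a client's perspective, the following Python sketch posts a queue configuration for one switch port to an SDN controller. The controller address, endpoint path and JSON schema are hypothetical placeholders, not the interface defined by the platform described above.

```python
# Hedged sketch of pushing a QoS queue configuration to an SDN controller
# through a REST interface. Endpoint URL and payload schema are hypothetical.
import requests

CONTROLLER = "http://127.0.0.1:8080"          # assumed controller address
SWITCH_ID = "0000000000000001"                # assumed datapath id

queue_config = {
    "port_name": "s1-eth1",
    "max_rate": "1000000",                    # cap the port at 1 Mbit/s
    "queues": [
        {"min_rate": "800000"},               # queue 0: guaranteed 800 kbit/s
        {"max_rate": "200000"},               # queue 1: best effort, capped
    ],
}

resp = requests.post(f"{CONTROLLER}/qos/queue/{SWITCH_ID}",
                     json=queue_config, timeout=5)
resp.raise_for_status()
print("queue configuration accepted:", resp.json())
```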
Abstract:
This research report applies the customer value hierarchy model to forestry in order to determine strategic options for enhancing the value of LiDAR technology in Russian forestry. The study is conducted as a qualitative case study with semi-structured interviews as the main source of primary data. The customer value hierarchy model constitutes the theoretical basis for the research. Secondary data incorporates information on forest resource management, LiDAR technology and Russian forestry. The model is operationalised using forestry literature and forms the basis for the analysis of the primary data. The analysis of primary data, coupled with an understanding of the Russian forest inventory system and of global forest inventory practice, leads to conclusions on the criteria for selecting forest inventory methods and on the organizations that would benefit the most from the use of LiDAR technology. The report recommends strategic options for enhancing the value of LiDAR technology in Russian forestry. This work has been conducted as a part of the project ‘Finnish-Russian Forest Academy 2 - Exploiting and Piloting’, which has been supported financially by the South-East Finland-Russia ENPI CBC 2007-2014 Programme.
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature industrial-strength tool support: the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integrating techniques such as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect overall system resilience. The approach proposed in this thesis is validated by a number of case studies from areas such as robotics, space, healthcare and the cloud domain.
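As a small illustration of the kind of discrete-event simulation mentioned above, the following SimPy sketch estimates the availability of a system that reconfigures onto a spare component after each failure. The failure and reconfiguration parameters are hypothetical, and the model is far simpler than the case studies in the thesis.

```python
# Illustrative SimPy sketch: downtime of a system that reconfigures onto a
# spare after each failure. Parameters are hypothetical.
import random
import simpy

MTBF, RECONF_TIME, MISSION = 100.0, 2.0, 10_000.0
downtime = 0.0

def component_lifecycle(env):
    """Fail at random intervals; each failure triggers a reconfiguration."""
    global downtime
    while True:
        yield env.timeout(random.expovariate(1.0 / MTBF))   # time to failure
        start = env.now
        yield env.timeout(RECONF_TIME)                       # switch to spare
        downtime += env.now - start

random.seed(1)
env = simpy.Environment()
env.process(component_lifecycle(env))
env.run(until=MISSION)
print(f"estimated availability: {1 - downtime / MISSION:.4f}")
```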
Abstract:
Phenomena in the cyber domain, especially threats to security and privacy, have proven an increasingly heated topic addressed by different writers and scholars at an increasing pace, both nationally and internationally. However, little public research has been done on the subject of cyber intelligence. The main research question of the thesis was: to what extent is the applicability of cyber intelligence acquisition methods circumstantial? The study was conducted in a sequential manner, starting with defining the concept of intelligence in the cyber domain and identifying its key attributes, followed by identifying the range of intelligence methods in the cyber domain, the criteria influencing their applicability, and the types of operatives utilizing cyber intelligence. The methods and criteria were refined into a hierarchical model. The existing conceptions of cyber intelligence were mapped through an extensive literature study on a wide variety of sources. The established understanding was further developed through 15 semi-structured interviews with experts of different backgrounds, whose wide range of points of view proved to substantially enhance the perspective on the subject. Four of the interviewed experts participated in a relatively extensive survey based on the constructed hierarchical model of cyber intelligence, which was formulated into an AHP hierarchy and executed in the Expert Choice Comparion online application. It was concluded that intelligence in the cyber domain is an endorsing, cross-cutting intelligence discipline that adds value to all aspects of conventional intelligence, that it bears a substantial number of characteristic traits, both advantageous and disadvantageous, and that the applicability of cyber intelligence methods is partly circumstantially limited.
Abstract:
Building a computational model for a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach that ensures the model fit is preserved without the need to refit the model after each refinement iteration; this approach is known as quantitative model refinement. The aim of this thesis is to develop and implement a tool called ModelRef, which performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool reduces the time and resources needed, as well as the errors introduced, compared with the traditional approach of reiterating the fitting procedure over the whole model.
Abstract:
LiDAR is an advanced remote sensing technology with many applications, including forest inventory. The most common type is ALS (airborne laser scanning). The method is successfully utilized in many developed markets, where it is replacing traditional forest inventory methods. However, it is innovative for the Russian market, where traditional field inventory dominates. ArboLiDAR is a forest inventory solution developed by Arbonaut Ltd that combines LiDAR, color infrared imagery, GPS ground control plots and field sample plots. This study is industrial market research on LiDAR technology in Russia focused on customer needs. The Russian forestry market is very attractive because of its large growing stock volumes. It underwent drastic changes in 2006 but is still in a transitional stage. There are several types of forest inventory, with both public and private funding. Private forestry enterprises basically need forest inventory in two cases: when demarcating coupes before timber harvesting, and as part of forest management planning, which is supposed to be carried out every ten years over the whole leased territory. The study covered 14 companies in total, including private forestry companies with timber harvesting activities, private forest inventory providers, state subordinate companies and a forestry software developer. The research strategy is a multiple case study with semi-structured interviews as the main data collection technique. The study focuses on North-West Russia, as it is the most developed Russian region in forestry. The research applies the Voice of the Customer (VOC) concept to elicit the customer needs of Russian forestry actors and discovers how these needs are met. It studies the forest inventory methods currently applied in Russia and proposes a model for comparing methods, based on the multi-criteria decision making (MCDM) approach, mainly the Analytic Hierarchy Process (AHP). Required product attributes are classified in accordance with the Kano model. The answer regarding the suitability of LiDAR technology is ambiguous, since many details must be taken into account.
Abstract:
The Mexican Dream is the equivalent of the American Dream for Mexico. This thesis explores what the equivalent of the American Dream is for young Mexican adults (Mexicans aged 25 to 35). The aim of the study is to develop an understanding of the core values of young Mexican adults. The study is made for a case company, Expertos Patrimoniales Wealth Management Advisors, which intends to sell financial management services to these young Mexican adults within the next 5 to 10 years. The study applies a cross-cultural consumer behavior framework by David Luna in order to consider factors such as culture and value systems to uncover the Mexican Dream for young Mexican adults. To gather data, key informants were interviewed in specific areas such as culture, financial consumer behavior and Mexican culture, among others. The results suggest that independence is a strong driver for young Mexican adults: independence from their family, from the corporate hierarchy, and from men. These core drivers differ from traditional cultural values, in which hierarchy, a secure job, the family (including the extended family) and women's economic dependence on men have been strong. Images of the future are created in order to understand young Mexican adults' Mexican Dream over the next 5 to 10 years and to provide useful information for the case company for the development of products and services that this segment of the Mexican market might find interesting in the near future.
Abstract:
As society becomes increasingly knowledge-intensive, knowledge sharing is seen as the most significant knowledge process for organisational development. This Master's thesis examined which factors affect knowledge sharing in expert work. The study was carried out as a case study and the material was analysed using theory-driven content analysis. Based on the results, the factors affecting knowledge sharing among experts are intrinsic motivation, trust between individuals, and the structure and culture of the organisation. According to the study, the work itself rewards and motivates knowledge sharing, but there must be trust between individuals with regard to benevolence and competence. Organisational hierarchy and a lack of available time, sense of community and appreciation weaken knowledge sharing. By contrast, an open organisational culture supports knowledge sharing. Monetary rewards were not seen to affect knowledge sharing.
Abstract:
Our goal is to gain a better understanding of the different kinds of dependencies behind high-level capability areas. The models are suitable for investigating present-state capabilities or future developments of capabilities in the context of technology forecasting. Three levels are necessary for a model describing the effects of technologies on military capabilities: capability areas, systems and technologies. The contribution of this paper is to present one possible model for the interdependencies between technologies. Modelling interdependencies between technologies is the last building block in constructing a quantitative model for technological forecasting that includes the necessary levels of abstraction. This study supplements our previous research, and as a result we present a model for the whole process of capability modelling. As in our earlier studies, capability is defined as the probability of a successful task or operation, or of the proper functioning of a system. In order to obtain numerical data to demonstrate our model, we conducted a questionnaire among a group of defence technology researchers in which the interdependencies between seven representative technologies were assessed. Because of the small number of participants in the questionnaire and the general uncertainties of subjective evaluations, only rough conclusions can be drawn from the numerical results.
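The following Python sketch illustrates, under strong simplifying assumptions, a three-level structure of the kind described above: technology-level probabilities, adjusted by an interdependency matrix, feed system success probabilities, which are then combined into a capability-area probability. The structure, weights and numbers are hypothetical illustrations, not the model or data of the paper.

```python
# Hypothetical three-level capability sketch: technologies -> systems -> capability area.
import numpy as np

# Technology "levels" interpreted as probabilities of adequate performance.
tech = np.array([0.9, 0.7, 0.8])          # e.g. sensors, datalink, processing

# Interdependencies: technology i gets a boost from technology j via dep[i, j].
dep = np.array([
    [0.0, 0.2, 0.0],
    [0.0, 0.0, 0.1],
    [0.3, 0.0, 0.0],
])
tech_eff = np.clip(tech + dep @ tech * (1 - tech), 0.0, 1.0)

# Each system needs a subset of technologies (assumed independent here).
systems = {"recon_uav": [0, 1], "c2_node": [1, 2]}
sys_p = {name: float(np.prod(tech_eff[idx])) for name, idx in systems.items()}

# Capability area: weighted combination of system success probabilities.
weights = {"recon_uav": 0.6, "c2_node": 0.4}
capability = sum(weights[s] * p for s, p in sys_p.items())
print(sys_p, round(capability, 3))
```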
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of use, i.e. how easy it is to apply the method correctly
o Time required to apply the method reliably
o No need for special equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof, and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A multi-criteria analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired. Since most current lie detection studies concentrate on a scenario in which roughly half of the assessed people are totally truthful and the other half are liars presenting a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for modern ones that are still under development.
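As an illustration of the synthesis step of such a multi-criteria analysis, the following Python sketch combines criterion weights with per-method scores into an overall applicability ranking. The criteria follow the list above, but the weights and scores are hypothetical placeholders, not results of the study.

```python
# Hypothetical multi-criteria synthesis: criterion weights x method scores -> ranking.
criteria_weights = {          # e.g. derived with AHP from pairwise comparisons
    "accuracy": 0.35, "ease_of_use": 0.20, "time_required": 0.15,
    "no_special_equipment": 0.15, "unobtrusiveness": 0.15,
}

method_scores = {             # normalised 0..1 scores per criterion (placeholders)
    "features_of_discourse": {"accuracy": 0.6, "ease_of_use": 0.8,
                              "time_required": 0.8, "no_special_equipment": 1.0,
                              "unobtrusiveness": 0.9},
    "polygraph":             {"accuracy": 0.7, "ease_of_use": 0.3,
                              "time_required": 0.3, "no_special_equipment": 0.0,
                              "unobtrusiveness": 0.1},
}

def overall(scores, weights):
    """Weighted-sum score across all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(((overall(s, criteria_weights), m) for m, s in method_scores.items()),
                 reverse=True)
for score, method in ranking:
    print(f"{method}: {score:.2f}")
```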