749 results for Consortial Implementations
Abstract:
Aim: Modelling species at the assemblage level is required to make effective forecasts of global change impacts on diversity and ecosystem functioning. Community predictions may be achieved using macroecological models of community properties (MEM), or by stacking individual species distribution models (S-SDMs). To obtain more realistic predictions of species assemblages, the SESAM framework suggests applying successive filters to the initial species source pool by combining different modelling approaches and rules. Here we provide a first test of this framework in mountain grassland communities. Location: The western Swiss Alps. Methods: Two implementations of the SESAM framework were tested: a "Probability ranking" rule based on species richness predictions and raw probabilities from SDMs, and a "Trait range" rule that uses the predicted upper and lower bounds of the community-level distribution of three functional traits (vegetative height, specific leaf area and seed mass) to constrain a pool of environmentally filtered species from binary SDM predictions. Results: We showed that all independent constraints contributed, as expected, to reducing the overprediction of species richness. Only the "Probability ranking" rule slightly but significantly improved predictions of community composition. Main conclusion: We tested various ways to implement the SESAM framework by integrating macroecological constraints into S-SDM predictions, and report one that is able to improve compositional predictions. We discuss possible improvements, such as further improving the causality and precision of environmental predictors, using other assembly rules and testing other types of ecological or functional constraints.
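The "Probability ranking" rule described above can be sketched in a few lines: predicted richness caps the number of species kept at a site, and species are selected in decreasing order of their SDM probability. The species names, probabilities and the richness-by-summed-probabilities choice below are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch of a SESAM-style "probability ranking" rule:
# keep only the top-k species at a site, where k is the predicted richness.

def probability_ranking(sdm_probs, richness):
    """sdm_probs: species -> SDM probability at one site.
    Returns the round(richness) species with the highest probabilities."""
    k = round(richness)
    ranked = sorted(sdm_probs, key=sdm_probs.get, reverse=True)
    return set(ranked[:k])

# Invented probabilities for one site:
site_probs = {"A": 0.9, "B": 0.7, "C": 0.4, "D": 0.2}
# One common option: predict richness by summing SDM probabilities (here 2.2),
# so the two top-ranked species are retained.
print(probability_ranking(site_probs, sum(site_probs.values())))
```

The same skeleton accommodates the "Trait range" rule by replacing the top-k selection with a filter that drops species whose trait values fall outside the predicted community-level bounds.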
Abstract:
WDM (Wavelength-Division Multiplexing) optical networks are currently the most popular way to transfer large amounts of data. Each connection is assigned a route and a wavelength for every link, and finding the required route and wavelength is known as the RWA problem. This thesis describes possible cost-model solutions to the RWA problem. Many different optimization objectives exist, and the cost models discussed are based on these objectives; they yield efficient solutions and algorithms. The multicommodity flow model is treated in this work as the basis for the RWA cost model. Heuristic methods for solving the RWA problem are also discussed. The final part of the thesis covers implementations of a few of the models and various possibilities for improving the cost models.
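A common heuristic of the kind the abstract refers to is first-fit wavelength assignment: route each demand on a fixed path and assign the lowest-indexed wavelength free on every link of that path. The toy network and demand list below are invented for illustration; real RWA heuristics also choose among alternative routes.

```python
# Minimal first-fit RWA sketch (illustrative, not the thesis's model):
# each demand gets the lowest wavelength free on all links of its path.

def first_fit_rwa(paths, num_wavelengths):
    """paths: one list of link names per demand.
    Returns the assigned wavelength index per demand, or None if blocked."""
    used = set()          # (link, wavelength) pairs already occupied
    assignment = []
    for path in paths:
        for w in range(num_wavelengths):
            if all((link, w) not in used for link in path):
                used.update((link, w) for link in path)
                assignment.append(w)
                break
        else:
            assignment.append(None)   # no wavelength free on every link
    return assignment

# Three demands over links "ab", "bc", "cd" with 2 wavelengths available:
print(first_fit_rwa([["ab", "bc"], ["bc", "cd"], ["ab"]], 2))  # [0, 1, 1]
```

Cost models turn this into an optimization problem by attaching a cost to each (route, wavelength) choice and minimizing the total, which is where the multicommodity flow formulation enters.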
Abstract:
Major trends in telecommunications include growing transmission speeds, the spread of IP technology, and the enormous increase in services accessed over Internet connections. For home network access, one remaining challenge is to provide high traffic capacity at a reasonable cost, which can be done in several different ways. Several paths thus lead to a broadband future for homes, and this Master's thesis maps them. The thesis presents various fast wired and wireless access technologies suitable for households. The focus is on the current mainstream, the copper-based xDSL technology family and the networks built on it, including ATM-based backbone networks. The quality and performance of practical connections were investigated by measuring a traditional ADSL connection and an SHDSL connection representing newer technology. The connections were measured with both an xDSL tester and a spectrum analyzer. The measurements focused on the behaviour of the copper pairs, the effects of load and interference, and the operation of the connection from the user's point of view. They showed that copper cable tolerates interference well even with the newer technologies. The future was charted through an interview study, from which the thesis compiles the views of operators, consultants and users on the development of broadband connections and the services delivered over them. The study shows that ADSL technology makes good use of the frequency bands allocated to it, and that SHDSL technology can transmit and receive large amounts of data in both directions.
Abstract:
The aim of this thesis was to study the deployment of a new production management system and work process in a geographically distributed organization, and to develop a model for how large-scale system deployments should be carried out in a controlled manner. The theoretical part of the study is based on a literature review and expert interviews. It examines which factors are critical to the success of deploying a new production management system and how a deployment project should be carried out. The empirical part analyzes how the system was deployed in the target company, examining the deployment itself and assessing its success through a user survey. System deployments fail in many cases. These failures are often caused by the deploying organization's inexperience in carrying out complex projects, a lack of planning, poor commitment to the change, or insufficient allocation of resources. Planning in advance, checking that the plans hold, and revising them as the deployment proceeds are keys to success. The target company takes various systems into use from time to time, but these deployments have not always gone as desired. The study aims to create a step-by-step model covering preparation for the change, planning, training, integrating the system with other systems, carrying the change through, and measuring the success of the deployment. In addition to the problems of deploying a new project management system, the study addresses the introduction of a new work process.
Abstract:
This thesis examines application servers and middleware at a general level, along with the requirements placed on them. Particular attention is paid to the CORBA middleware technology used as the basis of the practical work. The main focus, however, is on the dynamic DII and DSI interfaces implemented in the practical part. The end of the theoretical part introduces the CVOPS tool used and the application server to which the dynamic interface is added. Dynamic invocation support is added to the application server's CVOPS-ORB system component, whose operation and architecture are described. The practical part covers the implementation phases of the dynamic interface and plans for further development. The dynamic invocation and skeleton interface implemented in this work makes it possible to send and receive requests dynamically. It adds flexibility to client and server implementations, but it is more complex to implement and has lower performance than a static interface.
Abstract:
UPM-Kymmene Oyj's IT Services (Tietohallintopalvelut) is the corporation's internal service unit, which improves and supports business operations by producing high-quality IT services for its customers. This thesis studied the value chain of UPM-Kymmene Oyj's IT Services, the management of this value chain, and the issues to consider when defining it, as well as the added value that IT Services brings to the service value chain. The study was limited to standard services, because these are delivered to every customer in identical form. The value chain was studied by interviewing three key suppliers and by sending a questionnaire to the IT managers of the largest customer units. The aim was to find out which issues hinder the operation of the value chain at its interfaces and which work particularly well. In addition to the supplier and customer interfaces, the company's internal processes, practices and capabilities affecting the value chain were examined. When defining a value chain, attention must be paid to the delivery of the services, the functioning of the systems involved in the service process, and the internal processes. The supplier interviews revealed that the strengths of the supplier interface are well-functioning personal relationships, open information exchange, and trust. In the future, suppliers should also become better acquainted with IT Services' customers and their business processes. The customer questionnaire served as an indicative feedback and customer satisfaction survey. In the customer interface, personal relationships played an important role, but information exchange, announcements and communication in general were seen as the biggest problems. The study found that the greatest obstacles to a seamless value chain lie in IT Services' own internal processes and practices; for example, information does not flow within or through the department efficiently enough.
Abstract:
In this research we examine the status of logistics and operations management in Finnish and Swedish companies. The empirical data are based on a web questionnaire completed in late 2007 and early 2008. Our examination consists of roughly 30 answers from the largest manufacturing (the highest representation in our sample), trade, and logistics/distribution companies. Generally, these companies operate in a complex environment, where the number of products, raw materials/components and suppliers is high. However, companies usually rely on a small number of suppliers per raw material/component (the most frequent number is 2); this was especially the case among Swedish companies, and among companies that favoured overseas sourcing. The sample consisted of companies that mostly operate in an international environment and are quite often multinationals. Our survey findings reveal that companies have generally made logistics and information technology part of their strategy process; the use of performance measures as well as system implementations have followed the strategy decisions. On the transportation mode side, we find that road transport dominates all transport flow classes (inbound, internal and outbound), followed by sea and air. A surprisingly small number of companies use railways, though in general Swedish companies prefer this mode more than their Finnish counterparts. With respect to operations outsourcing, we found that the more traditional areas of logistics outsourcing are the driving factors in companies' performance measurement priorities. Contrary to previous research, our results indicate that the scope of outsourcing in the logistics/operations management area is not that wide, and companies are not planning to outsource more in the near future. Some support is found for more international operations and increased outsourcing activity.
Regarding the increased time pressure on companies, we find evidence that local as well as overseas customers expect deliveries within days or weeks, while suppliers usually deliver within weeks or months; this basically leads to considerable inventory holding. Interestingly, the choice between local and overseas sourcing strategies does not greatly influence the lead-time performance of these particular sourcing areas, although a local strategy is still considerably better at responding to market changes owing to shorter supply lead times. At the end of our research we completed a correlation analysis of the items asked on a Likert scale. Our analysis shows that seeing logistics as a process rather than a function, applying time-based management, favouring partnerships, and measuring logistics along different performance dimensions results in the preferred features and performance found in the logistics literature.
Abstract:
Internationalization and the rapid growth that follows it have created the need to consolidate the IT systems of many small-to-medium-sized production companies. Enterprise Resource Planning (ERP) systems are a common solution for such companies. Deployment of these ERP systems consists of many steps, one of which is the implementation of the same shared system at all international subsidiaries. From the IT point of view, this is also one of the most important steps in the company's internationalization strategy. The mechanical process of creating the required connections for the off-shore sites is the easiest and best-documented step along the way, but the actual value of the system, once operational, lies in its operational reliability. The operational reliability of an ERP system is a combination of many factors, ranging from hardware- and connectivity-related issues to administrative tasks and communication between decentralized administrative units and sites. To accurately analyze the operational reliability of such a system, one must take into consideration the full functionality of the system: not only the mechanical and systematic processes but also the users and their administration. Operational reliability in an international environment relies heavily on the adequacy of hardware and telecommunications, so it is imperative to dimension resources with regard to planned usage. Still, with a poorly maintained communication and administration scheme, no amount of bandwidth or memory will be enough to maintain a productive level of reliability. This thesis analyzes the implementation of a shared ERP system at an international subsidiary of a Finnish production company. The system is Microsoft Dynamics Ax, currently being introduced at a Slovakian facility, a subsidiary of Peikko Finland Oy. The primary task is to create a feasible basis of analysis against which the operational reliability of the system can be evaluated precisely.
With a solid analysis the aim is to give recommendations on how future implementations are to be managed.
Abstract:
Unified Threat Management (UTM) devices have created a new way to implement security solutions for different customer needs and segments. Customer and business traffic is increasingly Web and application based, so security is needed at that level as well. This thesis explores what opportunities UTM devices offer an operator acting as a managed security service provider, and how such an operator can succeed better in the market. The market is explored both from the customer interface, i.e. what customers expect from managed security providers, and from the technology provider interface, i.e. what kinds of products and services vendors offer for different implementations. The theoretical background is drawn from product strategy, networking and product development, and is applied when developing and exploring the opportunities an operator has in the managed security business with UTM devices. The thesis compares the products and services of four recognized technology vendors against the needs of operator managed security services. Based on this exploration of theory, customer needs and technology, a product strategy is proposed for an operator acting as a managed security provider.
Abstract:
Aim The aim of this study was to test different modelling approaches, including a new framework, for predicting the spatial distribution of richness and composition of two insect groups. Location The western Swiss Alps. Methods We compared two community modelling approaches: the classical method of stacking binary predictions obtained from individual species distribution models (binary stacked species distribution models, bS-SDMs), and various implementations of a recent framework (spatially explicit species assemblage modelling, SESAM) based on four steps that integrate the different drivers of the assembly process in a single modelling procedure. We used: (1) five methods to create bS-SDM predictions; (2) two approaches for predicting species richness, by summing individual SDM probabilities or by modelling the number of species (i.e. richness) directly; and (3) five different biotic rules based either on ranking probabilities from SDMs or on community co-occurrence patterns. Combining these various options resulted in 47 implementations for each taxon. Results Species richness of the two taxonomic groups was predicted with good accuracy overall, and in most cases bS-SDM did not produce a biased prediction exceeding the actual number of species in each unit. In predicting community composition, bS-SDM also often yielded the best evaluation score. Where bS-SDM performed poorly (i.e. where it overestimated richness), the SESAM framework improved predictions of species composition. Main conclusions Our results differ from previous findings using community-level models. First, we show that overprediction of richness by bS-SDM is not a general rule, highlighting the relevance of producing good individual SDMs to capture the ecological filters that matter for the assembly process. Second, we confirm the potential of SESAM when richness is overpredicted by bS-SDM; limiting the number of species in each unit and applying biotic rules (here, the ranking of SDM probabilities) can improve predictions of species composition.
Abstract:
Tampere University of Technology is undergoing a degree reform that started in 2013. One of the major changes in the reform was the integration of compulsory Finnish, Swedish and English language courses into substance courses at the bachelor level. The integration of content and language courses aims at higher-quality language learning, more fluency in studies, and increased motivation toward language studies. In addition, integration is an opportunity to optimize the use of resources and to offer courses that are more tailored to the students' field of study and to the skills needed in working life. The reform also aims to increase and develop co-operation between different departments at the university and to develop scientific follow-up. This paper gives an overview of the integration process conducted at TUT and gives examples of adjunct CLIL implementations in three different languages.
Abstract:
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR, the European infrastructure for biological information, that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools.
Abstract:
This Master's thesis addresses the design and implementation of an optical character recognition (OCR) system for a mobile device running the Symbian operating system. The developed OCR system, named OCRCapriccio, emphasizes modularity, effective extensibility and reuse. The system consists of two parts: the graphical user interface and the OCR engine, which was implemented as a plug-in. The plug-in in fact includes two implementations of the OCR engine, enabling two types of recognition: recognition based on bitmap comparison, and statistical recognition. The implementation results showed that the bitmap comparison approach is, by its nature, better suited to the Symbian environment. Although the current implementation of bitmap comparison lacks accuracy, further development should focus on it. The biggest challenges of this work were related to developing an OCR scheme suitable for Symbian OS smartphones, which have limited computational power and restricted resources.
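The core of bitmap-comparison recognition can be sketched very simply: an input glyph is matched against stored templates by counting differing pixels, and the closest template wins. The tiny 3x3 glyph patterns and template names below are invented for illustration; a real engine would first segment and normalize the glyphs.

```python
# Hedged sketch of OCR by bitmap comparison (illustrative data, not from
# OCRCapriccio): nearest-template matching under pixel (Hamming) distance.

def recognize(glyph, templates):
    """glyph: tuple of 0/1 rows; templates: dict name -> same-shaped bitmap.
    Returns the name of the template with the fewest differing pixels."""
    def distance(a, b):
        return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return min(templates, key=lambda name: distance(glyph, templates[name]))

templates = {
    "I": ((0, 1, 0), (0, 1, 0), (0, 1, 0)),
    "L": ((1, 0, 0), (1, 0, 0), (1, 1, 1)),
}
noisy_i = ((0, 1, 0), (1, 1, 0), (0, 1, 0))  # "I" with one flipped pixel
print(recognize(noisy_i, templates))  # I
```

This kind of matching needs only integer comparisons and small lookup tables, which is plausibly why the comparison-based approach suits a resource-constrained Symbian device better than a statistical recognizer.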
Abstract:
In this thesis, programmatic, application-layer means for better energy efficiency in the VoIP application domain are studied. The work concentrates on optimizations suitable for VoIP implementations using SIP and IEEE 802.11 technologies. Energy-saving optimizations can affect perceived call quality, so energy-saving means are studied together with the factors affecting perceived call quality. The thesis first gives a general view of the topic. Based on theory, adaptive optimization schemes for dynamically controlling the application's operation are proposed. A runtime quality model, capable of being integrated into the optimization schemes, is developed for estimating VoIP call quality. Power consumption measurements based on the proposed optimization schemes are performed to determine the achievable gains. The measurement results show that a reduction in power consumption can be achieved with the help of adaptive optimization schemes.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The decrease in reliability is a consequence of, among other things, physical limitations, the relative increase of variations, and decreasing noise margins. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. At this abstraction level, error control coding is an efficient fault tolerance method, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions are shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated.
At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms. Both approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
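The data-link-layer error control coding described above can be illustrated with a classic single-error-correcting code. The specific choice of a Hamming(7,4) code here is mine, for illustration; the thesis surveys coding methods tailored to on-chip links.

```python
# Illustrative Hamming(7,4) single-error-correcting code, the kind of error
# control coding used to protect an on-chip link against transient bit flips.

def encode(d):
    """d: 4 data bits -> 7-bit codeword [p1, p2, d0, p3, d1, d2, d3]."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    """c: 7 received bits. Corrects any single flipped bit, returns 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flip, 0 if none
    if syndrome:
        c = c[:]                      # correct a copy of the received word
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # a transient fault flips one bit in flight
print(decode(word))                   # [1, 0, 1, 1]
```

Against intermittent or permanent faults on a given wire, a code like this corrects the same bit position in every transfer, wasting its entire correction capability; this is why the thesis pairs coding with spare wires and split transmissions.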