39 results for Computer systems organization: general-emerging technologies
Abstract:
Software security has recently assumed an increasingly important role. Software must be designed from the very beginning so that security is taken into account. Ease of use should not take precedence over security, nor should failing to read the instructions lead to a loss of security. An important part of software security is also the legal use of the software; how illegal use is prevented, on the other hand, is very difficult to implement in current systems. The purpose of this work was to examine the messaging gateway of Intellitel Communications Oy, the Intellitel Messaging Gateway, from the perspective of product security, to find any flaws in it, and also to fix them.
Abstract:
A virtual community can be defined as a whole formed by people, relationships of varying depth, a shared purpose, and information technology systems. The system solutions form the playing field of the community's activity, the virtual arena. The virtual arena is thus an inseparable part of the virtual community. The purpose of this thesis is to examine what the development process and life cycle of a virtual community look like. In addition, the thesis describes methods by which the needs of community members can be surveyed at different stages of activity. These methods can be exploited in the development of different types of virtual environments. Practical development work is examined through the Vaikuttamo case. The goal was to find out how the virtual arena serving Vaikuttamo was designed and which factors have accelerated Vaikuttamo's development into a functioning community. Semi-structured thematic interviews were used as the research method, supplemented by material from a web survey conducted in May 2003. Based on the results, it was found that the development process described in the literature would not have suited Vaikuttamo's needs, because it was an entirely new type of community. The key factors in Vaikuttamo's success can be considered to be its local character, guidance from outside the community proper, and strong ties to schools and teachers in the Hämeenlinna region.
Abstract:
This thesis deals with improving international airport baggage supply chain management (SCM) by means of information technology and a new baggage handling system. The study focuses on supply chain visibility in practice and suggests different ways to improve supply chain performance through information sharing. The objective is also to define how radio frequency identification (RFID) and enterprise resource planning (ERP) can make processes more transparent. In order to get the full benefit from the processes, effective business process management and monitoring, as well as the key performance indicators, must be defined, implemented, and visualized, e.g. through dashboard views for different roles. As an outcome of the research, the need for information technology systems and more advanced technologies, e.g. RFID, in supply chain management is evident. A sophisticated ERP is crucial in boosting SCM business processes and profitability, and would also benefit dynamic decision making in airport and airline supply chain management. In the long term, economic aspects support the actions I have suggested to make production more flexible in reacting to quick changes.
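To make the visibility idea concrete, the sketch below derives one plausible baggage-handling KPI from raw RFID read events. This is a hedged illustration only: the event fields, checkpoint names, and the on-time-loading metric are assumptions made for the example, not definitions from the thesis.

```python
# Sketch: derive one supply chain visibility KPI from raw RFID reads, namely
# the share of bags reaching loading within a deadline after check-in.
# Field names, checkpoint names, and the deadline are assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RfidRead:
    bag_id: str        # tag identifier (hypothetical field)
    checkpoint: str    # e.g. "check-in", "sorter", "loading" (assumed names)
    timestamp: float   # seconds since start of day

def on_time_loading_rate(reads: list[RfidRead], deadline_s: float = 1800.0) -> float:
    # Keep the first read of each bag at each checkpoint.
    first_seen: dict[str, dict[str, float]] = defaultdict(dict)
    for r in reads:
        first_seen[r.bag_id].setdefault(r.checkpoint, r.timestamp)
    done = [cps for cps in first_seen.values()
            if "check-in" in cps and "loading" in cps]
    if not done:
        return 0.0
    on_time = sum(cps["loading"] - cps["check-in"] <= deadline_s for cps in done)
    return on_time / len(done)

if __name__ == "__main__":
    reads = [RfidRead("BAG1", "check-in", 0.0), RfidRead("BAG1", "loading", 1500.0),
             RfidRead("BAG2", "check-in", 60.0), RfidRead("BAG2", "loading", 2400.0)]
    print(f"on-time loading rate: {on_time_loading_rate(reads):.0%}")  # 50%
```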
Abstract:
The goal of this Master's thesis is to describe a design in which a service originally developed for the WWW environment is modified so that it scales as well as possible to future extensions. In addition to browser-based services, mobile services and various desktop application integrations, for example, have grown in popularity. Likewise, interoperation between services has become a significant part of the overall offering that Internet services provide to their end users. Examples of integrating WWW services onto terminal devices are the mobile versions of search engines and instant messengers; examples of service interoperation are the community links between news services and social services such as Facebook. This thesis first reviews the development of Internet-based services and takes a closer look at multi-channel service delivery. It then surveys the terminal devices and network connections available to end users, as well as design patterns for WWW services. The starting point of the design was that attaching new service platforms, formed by different terminal devices, their software, and the network connections in use, to the service should be as simple as possible, and that adapting the users and content of new service platforms to the existing service should be supported. The end result of the work is a design based on building an intermediate layer between the new service platforms and the old service. Through the intermediate layer, the service offers a personalized interface to trusted clients and a public interface open to everyone. The intermediate layer was designed in a simple REST architectural style, which makes it possible to offer the service securely and efficiently. User and content management components are added to this intermediate layer to safeguard the integrity of the service. This thesis demonstrates that a multi-layered middleware designed with the right architecture offers an efficient way to integrate and manage new platforms.
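A minimal sketch of the intermediate-layer idea described above, assuming a Python/Flask implementation (the thesis does not prescribe a technology): the layer exposes an open public interface and a personalized interface gated by a client token. The paths, the token registry, and the payloads are illustrative assumptions.

```python
# Minimal sketch of the intermediate layer: one public interface open to
# everyone and one personalized interface for trusted clients identified by
# a token. Flask is an assumption, not the thesis's technology choice.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical registry of trusted clients; a real deployment would use the
# user and content management components mentioned in the abstract.
TRUSTED_TOKENS = {"example-token": "trusted-client-1"}

@app.route("/api/public/items")
def public_items():
    # Open interface: identical response for every caller.
    return jsonify([{"id": 1, "title": "sample item"}])

@app.route("/api/trusted/items")
def trusted_items():
    # Personalized interface: require a known token, tailor the response.
    client = TRUSTED_TOKENS.get(request.headers.get("X-Api-Token", ""))
    if client is None:
        abort(401)
    return jsonify({"client": client,
                    "items": [{"id": 1, "title": "sample item", "draft": True}]})

if __name__ == "__main__":
    app.run(port=8080)
```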
Abstract:
The thesis concentrates on two central terms: the technology park and the resource-based view of the organization. Particular attention is devoted to the competencies and capabilities of organizations that operate in a foreign environment. It is difficult to go abroad without any experience or support from the local government, especially for a small or medium-sized company. Technology and science parks are the main sources of competitive advantage for this kind of organization. They provide a wide range of services, as well as business consultations and financial support, at different stages of companies' development. The thesis was made with the assistance of Technopolis Oy in Lappeenranta. During the research, companies in Finland and Russia were interviewed. Based on the empirical findings, important capabilities for entering a foreign market were identified and some recommendations for the technology park were given.
Abstract:
The future of privacy in the information age is a highly debated topic. In particular, new and emerging technologies such as ICTs and cognitive technologies are seen as threats to privacy. This thesis explores images of the future of privacy among non-experts within the time frame from the present until the year 2050. The aims of the study are to conceptualise privacy as a social and dynamic phenomenon, to understand how privacy is conceptualised among citizens and to analyse ideal-typical images of the future of privacy using the causal layered analysis method. The theoretical background of the thesis combines critical futures studies and critical realism, and the empirical material is drawn from three focus group sessions held in spring 2012 as part of the PRACTIS project. From a critical realist perspective, privacy is conceptualised as a social institution which creates and maintains boundaries between normative circles and preserves the social freedom of individuals. Privacy changes when actors with particular interests engage in technology-enabled practices which challenge current privacy norms. The thesis adopts a position of technological realism as opposed to determinism or neutralism. In the empirical part, the focus group participants are divided into four clusters based on differences in privacy conceptions and perceived threats and solutions. The clusters are fundamentalists, pragmatists, individualists and collectivists. Correspondingly, four ideal-typical images of the future are composed: ‘drift to low privacy’, ‘continuity and benign evolution’, ‘privatised privacy and an uncertain future’, and ‘responsible future or moral decline’. The images are analysed using the four layers of causal layered analysis: litany, system, worldview and myth. Each image has its strengths and weaknesses. The individualistic images tend to be fatalistic in character while the collectivistic images are somewhat utopian. In addition, the images have two common weaknesses: lack of recognition of ongoing developments and simplistic conceptions of privacy based on a dichotomy between the individual and society. The thesis argues for a dialectical understanding of futures as present images of the future and as outcomes of real processes and mechanisms. The first steps in promoting desirable futures are the awareness of privacy as a social institution, the awareness of current images of the future, including their assumptions and weaknesses, and an attitude of responsibility where futures are seen as the consequences of present choices.
Abstract:
The goal of this work is to find out what kinds of electric and hybrid buses are in use around the world. The work reviews the technology of electric and hybrid buses at a general level, examines the technologies of buses from different manufacturers in more detail, and briefly compares the buses' performance. As a result, the most important characteristics of the buses under review were collected into tables and compared. It was found that the range of electric and hybrid buses is very wide. No single optimal electric or hybrid system has yet been developed; instead, buses use different solutions for, for example, the motor, the transmission, and the energy storage.
Abstract:
Presentation by Jussi-Pekka Hakkarainen, held at the Emtacl15 conference on 20 April 2015 in Trondheim, Norway.
Abstract:
The emerging technologies have recently challenged libraries to reconsider their role as a mere mediator between collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, have not always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners, enabling them to interact and work for shared goals and objectives. In this paper, I will be drawing a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages, to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, the genesis and consolidation period of the literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The ‘deluge’ of popular literature in the 1920s and 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced into the language. This was the beginning of a renaissance and period of enlightenment (Rueter, 2013). Linguistically oriented readers can also find writings to their delight, especially lexical items specific to a given publication and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in Uralic languages are made available for the open and interactive use of the academic community as well as the language communities. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists as raw data for linguists. Our data is used for creating morphological analyzers and online dictionaries at the Universities of Helsinki and Tromsø, for instance. In order to reach these targets, we will produce not only the digitized materials but also development tools to support linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research in language technology.
The mission is to improve the usage and usability of the digitized content. During the project, we have advanced methods that refine the raw data for further use, especially in linguistic research. How does the library meet objectives which appear to lie beyond its traditional playground? The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages out of a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded text (OCR output) often contains too many mistakes to be used as such in research, so the mistakes in the OCRed texts must be corrected. To enhance the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. It was necessary to implement this tool, since these rare and peripheral prints often include characters that have since perished; these are sadly neglected by modern OCR software developers, but they belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing application is essentially an editor of the ALTO XML format. It consists of a back-end for managing users, permissions, and files, communicating through a REST API with a front-end interface: the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge of crowdsourcing lies in its nature. The targets in traditional crowdsourcing have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the research point of view there is a danger that the needs of linguists are not met. A notable downside is also the lack of a shared goal or social affinity: there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012). There has also been criticism that digital humanities makes the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more apparent when you leave the Anglophone world. Our potential crowd is geographically scattered across Russia. This crowd is linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or longing for revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfactory results for linguists. Thus, one has to carefully identify the potential niches that can complete the needed tasks. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools from which to draw resources, their specific richness in skill suits them for the complex tasks with high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation.
Such communities can respond to research needs more precisely (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we try to utilize the knowledge and skills of citizen scientists to produce qualitative results. In nichesourcing, we hand out assignments that precisely fill gaps in linguistic research. A typical task would be editing and collecting words in those fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words and terminology in anatomy. We have digitized books on medicine, and we could try to track down the words related to human organs by assigning citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism plays a central role when the language communities are involved. In nichesourcing, our goal is to reach a level of interplay where the language communities themselves benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary that is made freely available to the public, so society can benefit too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as serving ‘two masters’: research and society.
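To illustrate the kind of correction the OCR editor enables, the sketch below rewrites the CONTENT attributes of <String> elements in an ALTO XML page, which is where ALTO stores the recognized word forms. The file names and the correction dictionary are hypothetical; the project's actual editor is an interactive web application, so this batch script is an analogy only.

```python
# Sketch: batch-correct recognized words in an ALTO XML page by rewriting the
# CONTENT attribute of <String> elements. File names and the corrections
# dictionary are hypothetical; the tag match covers any ALTO namespace.
import xml.etree.ElementTree as ET

def correct_alto(path_in: str, path_out: str, corrections: dict[str, str]) -> int:
    tree = ET.parse(path_in)
    fixed = 0
    for elem in tree.iter():
        # Match <String> regardless of the ALTO namespace URI in the file.
        if elem.tag == "String" or elem.tag.endswith("}String"):
            word = elem.get("CONTENT", "")
            if word in corrections:
                elem.set("CONTENT", corrections[word])
                fixed += 1
    tree.write(path_out, encoding="utf-8", xml_declaration=True)
    return fixed

if __name__ == "__main__":
    # Hypothetical OCR confusions in a digitized page.
    n = correct_alto("page.xml", "page_fixed.xml", {"sorne": "some", "Iand": "land"})
    print(f"corrected {n} words")
```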
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems: they need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system, but if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values, which necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and thermal sensors located on the cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is therefore also proposed to calibrate thermal sensors across a range of voltage levels.
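The dynamic load balancing idea can be sketched independently of the SCC hardware: rather than statically splitting the fault list across cores, each worker pulls the next chunk from a shared queue when it becomes free, so faster cores automatically process more chunks. The sketch below uses Python's multiprocessing as a stand-in; the thesis implementation targets the SCC's on-chip message passing, and the chunk size and fault-simulation stub are assumptions.

```python
# Sketch of dynamic load balancing for fault simulation: workers pull fault
# chunks from a shared queue as they finish, so faster cores take more work.
from multiprocessing import Process, Queue

def simulate_fault(fault_id: int) -> bool:
    # Placeholder for simulating one fault; returns whether it was detected.
    return fault_id % 3 == 0

def worker(tasks, results) -> None:
    while True:
        chunk = tasks.get()
        if chunk is None:                      # sentinel: no work left
            return
        results.put(sum(simulate_fault(f) for f in chunk))

if __name__ == "__main__":
    n_workers, chunk_size, n_faults = 4, 50, 10_000
    tasks, results = Queue(), Queue()
    chunks = [range(s, min(s + chunk_size, n_faults))
              for s in range(0, n_faults, chunk_size)]
    for c in chunks:
        tasks.put(c)
    for _ in range(n_workers):                 # one sentinel per worker
        tasks.put(None)
    procs = [Process(target=worker, args=(tasks, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    detected = sum(results.get() for _ in range(len(chunks)))
    for p in procs:
        p.join()
    print(f"{detected} of {n_faults} faults detected")
```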
Abstract:
Open innovation and the efficient exploitation of innovations are becoming important parts of companies' R&D processes. The purpose of this Master's thesis is to create a framework for managing, within a research organization, technologies that do not belong to the company's core business more effectively. The constructive framework is built on theories of intellectual capital management and portfolio management. In addition, the work defines tools and techniques for evaluating surplus technologies. The new surplus technology portfolio can be exploited as a search engine, an idea bank, a communication tool, or a marketplace for technologies. Its management consists of documenting the information in the system, evaluating the technologies, and updating and maintaining the portfolio.
Abstract:
This thesis seeks to answer whether communication challenges in virtual teams can be overcome with the help of computer-mediated communication (CMC). Virtual teams are becoming a more common way of working in many global companies. In order for virtual teams to reach their maximum potential, effective asynchronous and synchronous methods for communication are needed. The thesis covers communication in virtual teams, as well as leadership and trust building in virtual environments with the help of CMC. First, the communication challenges in virtual teams are identified by using the framework of knowledge sharing barriers in virtual teams by Rosen et al. (2007). Secondly, leadership and trust in virtual teams are defined in the context of CMC. The performance of virtual teams is evaluated in the case study along these three dimensions. With the help of a case study of two virtual teams, the practical issues related to selecting and implementing communication technologies, as well as overcoming knowledge sharing barriers, are discussed. The case study involves a complex inter-organisational setting, where four companies work together to maintain a new IT system. The communication difficulties are related to inadequate communication technologies, lack of trust, and the undefined relationships of the stakeholders and the team members. As a result, it is suggested that communication technologies are needed to improve virtual team performance, but they alone cannot solve the communication challenges in virtual teams. In addition, suitable leadership and trust between team members are required to improve knowledge sharing and communication in virtual teams.
Abstract:
The RPC Detector Control System (RCS) is the main subject of this PhD work. The project, involving Lappeenranta University of Technology, Warsaw University, and the INFN of Naples, aims to integrate the different subsystems for the RPC detector and its trigger chain in order to develop a common framework for controlling and monitoring the different parts. During the last three years I have been strongly involved in this project, in the hardware and software development, construction, and commissioning, as the main responsible person and coordinator. The CMS Resistive Plate Chamber (RPC) system consists of 912 double-gap chambers at its start-up in mid-2008. Continuous control and monitoring of the detector, the trigger, and all the ancillary subsystems (high voltages, low voltages, environment, gas, and cooling) is required to achieve the operational stability and reliability of such a large and complex detector and trigger system. The role of the RPC Detector Control System is to monitor the detector conditions and performance, to control and monitor all subsystems related to the RPC and their electronics, and to store all the information in a dedicated database, called the Condition DB. The RPC DCS therefore has to ensure the safe and correct operation of the sub-detectors during the entire CMS lifetime (more than 10 years), detect abnormal and harmful situations, and take protective and automatic actions to minimize consequential damage. The analysis of the requirements and project challenges, the architecture design and its development, as well as the calibration and commissioning phases, represent the main tasks of the work developed for this PhD thesis. Different technologies, middleware, and solutions have been studied and adopted in the design and development of the different components, and a major challenge was the integration of these different parts with each other and into the general CMS control system and data acquisition framework. The RCS installation and commissioning phase, as well as its performance and the first results obtained during the CMS cosmic runs of the last three years, will be presented.
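As a hedged illustration of the monitor-and-protect duty described above, the sketch below runs a minimal polling loop: read each channel, archive the value, and trigger a protective action when a reading leaves its safe band. Channel names, limits, and the stub functions are invented for the example; the actual RCS is built on dedicated control-system middleware, so this is an analogy only.

```python
# Minimal sketch of a detector control loop: read each channel, archive the
# reading, and take a protective action when a value leaves its safe band.
# All names, limits, and stubs below are illustrative assumptions.
import random
import time

SAFE_BANDS = {"hv_gap_kv": (8.5, 9.8), "gas_flow_lph": (20.0, 60.0)}
CONDITION_DB: list[tuple[float, str, float]] = []  # stand-in for the Condition DB

def read_channel(name: str) -> float:
    # Hypothetical stand-in for querying the hardware front-end.
    lo, hi = SAFE_BANDS[name]
    return random.uniform(lo * 0.95, hi * 1.05)   # occasionally out of band

def ramp_down(name: str) -> None:
    print(f"protective action: ramping down {name}")  # hypothetical action

def monitor_once() -> None:
    for name, (lo, hi) in SAFE_BANDS.items():
        value = read_channel(name)
        CONDITION_DB.append((time.time(), name, value))  # archive the reading
        if not lo <= value <= hi:
            ramp_down(name)

if __name__ == "__main__":
    for _ in range(5):     # a real DCS loops for the detector's lifetime
        monitor_once()
        time.sleep(1)      # polling period; production systems are event-driven
```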
Abstract:
Efficient design and operation of water and wastewater treatment systems are largely based on mathematical calculations; this even applies to training in the treatment systems. It is therefore necessary that calculation procedures be developed and computerised in advance for such applications to ensure effectiveness. This work aimed at developing calculation procedures for the gas stripping, depth filtration, ion exchange, chemical precipitation, and ozonation wastewater treatment technologies, in order to include them in ED-WAVE, a portable computer-based tool used in design, operations, and training in wastewater treatment. The work involved a comprehensive online and offline study of research work and literature, and the application of practical case studies, to generate ED-WAVE-compatible representations of the treatment technologies, which were then uploaded into the tool.
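As one example of the kind of calculation procedure that was computerised, the sketch below evaluates the standard minimum air-to-water ratio for a countercurrent gas-stripping column, (G/L)min = (C_in - C_out) / (H * C_in) for clean stripping air. The formula is textbook material; the influent concentrations, the Henry's constant, and the design safety multiplier are illustrative assumptions, not values from the work.

```python
# Worked example for one computerised procedure: the minimum air-to-water
# ratio of a countercurrent gas-stripping column,
#   (G/L)min = (C_in - C_out) / (H * C_in)   for clean stripping air.
# The formula is standard; all numeric inputs below are assumptions.

def min_air_to_water_ratio(c_in: float, c_out: float, henry_dimensionless: float) -> float:
    """Return (G/L)min given influent/effluent concentrations and a
    dimensionless Henry's constant."""
    return (c_in - c_out) / (henry_dimensionless * c_in)

if __name__ == "__main__":
    c_in, c_out = 0.50, 0.005   # mg/L; 99 % removal target (assumed)
    henry = 0.40                # dimensionless Henry's constant (assumed VOC)
    gl_min = min_air_to_water_ratio(c_in, c_out, henry)
    gl_design = 3.0 * gl_min    # design margin over the minimum (assumed)
    print(f"(G/L)min = {gl_min:.2f}, design G/L = {gl_design:.1f}")
```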