43 results for Optics in computing
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This thesis comprises seven peer-reviewed articles and examines systems and applications suitable for increasing Future Force Warrior performance, minimizing collateral damage, and improving situational awareness and the Common Operational Picture. Based on a literature study, missing functionalities of the Future Force Warrior were identified, and new ideas, concepts and solutions were created as part of the early stages of Systems of Systems creation. These ideas have not yet been implemented or tested in combat, and for this reason benefit analyses are excluded. The main results of this thesis include the following: a new networking concept, the Wireless Polling Sensor Network, in which a swarm of a few Unmanned Aerial Vehicles forms an ad-hoc network and polls a large number of fixed sensor nodes; the system is more robust in a military environment than traditional Wireless Sensor Networks. A Business Process approach to Service Oriented Architecture in a tactical setting is a concept for scheduling and sharing limited resources. New components for a military Service Oriented Architecture are introduced in the thesis. Other results include an investigation of the use of Free Space Optics in tactical communications, a proposal for tracking neutral forces, a system for upgrading simple collaboration tools for command, control and collaboration purposes, a three-level hierarchy of the Future Force Warrior, and methods for reducing incidents of fratricide.
Abstract:
Cloud computing is a practically relevant paradigm in computing today, and testing is one of the distinct areas where it can be applied. This study addressed the applicability of cloud computing for testing within organizational and strategic contexts, focusing on issues related to the adoption, use and effects of cloud-based testing. The study applied empirical research methods: the data was collected through interviews with practitioners from 30 organizations and was analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing. The second phase observed cloud-based testing in real-life practice. The third phase analysed quality in the context of cloud application development. The fourth phase studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable for testing and application development, as well as other areas, e.g., game development. The research identified the benefits, challenges, requirements and effects of cloud-based testing, and formulated a roadmap and strategy for adopting it. The study also explored quality issues in cloud application development and, as a special case, the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, evaluating practical cloud-based testing work, and assessing the appropriateness of cloud-based testing for specific testing needs.
Abstract:
In metallurgical plants, high-quality metal production is always required. Nowadays, soft computing applications are increasingly used instead of mechanical techniques for automating manufacturing processes and quality control. This thesis presents an overview of soft computing methods. As an example of a soft computing application, an effective fuzzy expert system model for the automatic quality control of a steel degassing process was developed. The purpose of this work is to describe the fuzzy relations as quality hypersurfaces by varying the number of linguistic variables and fuzzy sets.
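The kind of fuzzy rule evaluation such an expert system relies on can be sketched as follows; this is a minimal illustration only, and the variable names, membership breakpoints and the single rule are illustrative assumptions, not taken from the thesis.

```python
# Minimal fuzzy-rule sketch, assuming triangular membership functions.
# Variable names and numeric ranges are hypothetical, for illustration only.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def degassing_quality(oxygen_ppm, temperature_c):
    """Fire one fuzzy rule: IF oxygen is low AND temperature is optimal
    THEN quality is high. Returns the rule's activation degree in [0, 1]."""
    oxygen_low = triangular(oxygen_ppm, 0, 5, 15)            # hypothetical range
    temp_optimal = triangular(temperature_c, 1580, 1620, 1660)
    return min(oxygen_low, temp_optimal)                     # fuzzy AND = minimum

print(degassing_quality(5, 1620))   # -> 1.0 (both memberships at their peak)
```

Varying the number of fuzzy sets per linguistic variable changes the shape of the resulting quality surface over the input space, which is the "quality hypersurface" idea mentioned in the abstract.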
Abstract:
This master’s thesis aims to survey the literature on how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the natural evolution process. An artificial evolution process evaluates the fitness of each individual, which is a candidate solution. The next population of candidate solutions is formed from the good properties of the current population by applying mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were sought in the literature, then classified and presented, together with the necessary basics of evolutionary algorithms. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling for parallel computing, allocating modules to subsystems, N-version programming, test data generation and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
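The evaluate/select/crossover/mutate loop described above can be sketched with a minimal genetic algorithm on the toy OneMax problem (maximize the number of 1-bits in a bit string); all parameter values here are illustrative, not taken from the thesis.

```python
# Minimal genetic algorithm sketch: tournament selection, one-point crossover,
# bit-flip mutation, applied to the OneMax toy problem.
import random

def evolve(bits=20, pop_size=30, generations=60, mutation_rate=0.02):
    random.seed(42)                                    # deterministic demo run
    fitness = lambda ind: sum(ind)                     # OneMax: count the 1-bits
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                   # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)

print(evolve())   # typically reaches or nears the optimum of 20
```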
Abstract:
The study examined how scenario analysis can be used in researching new technology. It was observed that the suitability of scenario analysis is affected most by the level of technological change and the nature of the available information. The scenario method is well suited to the study of new technologies, particularly in the case of radical innovations. The reason is the great uncertainty, complexity and shift of the prevailing paradigm associated with them, which render many other futures-research methods unusable in such a situation. The empirical part of the work studied the future of grid computing technology by means of scenario analysis. Grids were seen as a potentially disruptive technology which, as a radical innovation, may shift computing from the current product-based purchasing of computing capacity to a service-based model. This would have a major impact on the entire current ICT industry, especially owing to the exploitation of on-demand computing. The study examined developments up to 2010. Based on theory and existing knowledge, and drawing on strong expert knowledge, four possible environmental scenarios were formed for grid computing. The scenarios showed that the commercial success of the technology still faces many challenges. In particular, trust and the creation of added value emerged as the most important factors steering the future of grid computing.
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems, and the impact of cloud computing technology on the healthcare industry. The review shows that if problems related to data security are solved, cloud computing can positively transform healthcare institutions by benefiting both healthcare IT infrastructure and healthcare services. Therefore, this thesis explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding whether to adopt cloud technology in their information systems.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation; therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus, IaaS clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them.
This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The proposed strategies are evaluated using a Message Passing Interface (MPI) based video transcoder, which uses a coarse-grain parallel processing approach in which the video is segmented at the group-of-pictures level.
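The computation-vs-storage trade-off described above can be sketched as a simple decision rule: keep a transcoded video stored only while the expected cost of re-transcoding it on demand exceeds the cost of storing it. The prices and the access-rate model below are illustrative assumptions, not figures from the thesis.

```python
# Sketch of a computation/storage trade-off decision for transcoded videos.
# All prices are hypothetical, for illustration only.

def keep_in_storage(size_gb, transcode_cost, accesses_per_month,
                    storage_price_gb_month=0.023):
    """Return True if storing the transcoded video for the next month is
    expected to be cheaper than re-transcoding it on each access."""
    storage_cost = size_gb * storage_price_gb_month
    expected_retranscode_cost = accesses_per_month * transcode_cost
    return storage_cost < expected_retranscode_cost

# A popular video is worth keeping in the repository; a dormant one is not.
print(keep_in_storage(2.0, transcode_cost=0.10, accesses_per_month=50))  # -> True
print(keep_in_storage(2.0, transcode_cost=0.10, accesses_per_month=0))   # -> False
```

A production strategy would also account for changing popularity over time, but the break-even comparison above captures the core of the idea.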
Abstract:
Power is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, so that energy-efficient wireless sensors for context recognition in wearable computing applications can be designed in the future. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios: display, speaker, camera and microphone, transfer by Wi-Fi, monitoring outdoor physical activity, and pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices, each representative of its form factor. Power consumption is measured using PowerTutor, an Android energy-profiler application with a logging option; because some of its parameters are unknown, it is adjusted against a USB power meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies between the wearable devices, meaning that other components, parameters or processes may affect power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (the touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) affect energy consumption in different ways. The paper gives recommendations for reducing energy consumption in healthcare wearable computing applications using the energy model.
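The essence of a scenario-based energy model like the one mentioned above is that a scenario's total energy is the sum, over the active components, of average power multiplied by active time. The component power figures below are illustrative assumptions, not measured values from the paper.

```python
# Minimal scenario-based energy model sketch: E_total = sum_i(P_i * t_i).
# Power figures are hypothetical, for illustration only.

def scenario_energy(component_power_mw, active_seconds):
    """Energy in millijoules for one scenario: sum of P_i (mW) * t_i (s)."""
    return sum(component_power_mw[c] * active_seconds[c]
               for c in active_seconds)

# Hypothetical "transfer by Wi-Fi" scenario on a smartphone.
power = {"screen": 400, "cpu": 250, "wifi": 700}   # average power in mW
times = {"screen": 60, "cpu": 60, "wifi": 30}      # seconds each component is active
print(scenario_energy(power, times), "mJ")         # -> 60000 mJ
```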
Abstract:
Performance measurements of computer-system components and software yield information that can be used to improve performance and to support hardware-procurement decisions. This thesis introduces performance measurement and measurement programs, i.e. so-called benchmark software. Freely available benchmark programs of various types, suitable for analysing the performance of a Linux computing cluster, were sought and evaluated. The benchmarks were grouped and assessed by testing their features on a Linux cluster. The thesis also discusses the challenges of performing measurements and of parallel computing. Benchmarks were found for many purposes, and they turned out to vary in quality and scope. They have also been collected into software suites, so that a broader picture of hardware performance can be obtained than is possible with a single program. It is essential to understand the rate at which data can be transferred to the processor from main memory, disk systems and other compute nodes. A typical benchmark program contains a computationally intensive mathematical algorithm of the kind used in scientific software. Depending on the benchmark, understanding and exploiting the results can be challenging.
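A micro-benchmark in the spirit of the suites discussed above can be sketched in a few lines; this one estimates memory copy bandwidth, one of the data-transfer rates the abstract highlights. The array size and repeat count are illustrative choices.

```python
# Minimal memory-copy bandwidth micro-benchmark sketch.
import array
import time

def copy_bandwidth(n_doubles=2_000_000, repeats=5):
    """Return the best observed copy rate in MB/s for an array of doubles."""
    src = array.array("d", [0.0]) * n_doubles      # ~16 MB of zeros by default
    best = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        dst = src[:]                               # memcpy-like slice copy
        elapsed = time.perf_counter() - start
        mb = len(dst) * src.itemsize / 1e6
        best = max(best, mb / elapsed)
    return best

print(f"copy bandwidth: {copy_bandwidth():.0f} MB/s")
```

Real suites measure the same quantity far more carefully (multiple access patterns, cache-defeating sizes, compiled kernels), which is why a single number like this should be read only as an order-of-magnitude indication.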
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of these environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purposes of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced: the application-specific information is data about the workload, extracted from an application and provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize this information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications.
Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm, which is executed by the communication thread; thus, scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
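The overlap idea behind MPIT can be sketched with plain threads and queues standing in for MPI calls; this illustrates only the structure (a dedicated communication thread draining results while the main thread computes), not real MPI usage or the MPIT implementation itself.

```python
# Sketch of communication/computation overlap via a dedicated thread.
# queue.Queue stands in for MPI send/receive; the workload is a toy sum.
import queue
import threading

results = queue.Queue()

def communication_thread(outbox):
    """Drain the outbox, simulating sends that overlap with computation."""
    while True:
        item = outbox.get()
        if item is None:                     # sentinel: shut down cleanly
            break
        results.put(item)                    # stand-in for an MPI send

outbox = queue.Queue()
comm = threading.Thread(target=communication_thread, args=(outbox,))
comm.start()

total = 0
for chunk in range(4):                       # main thread keeps computing...
    total += sum(range(chunk * 1000, (chunk + 1) * 1000))
    outbox.put(("chunk done", chunk))        # ...while the comm thread ships results
outbox.put(None)
comm.join()

print(total, results.qsize())                # -> 7998000 4
```

Because Python's GIL serializes bytecode execution, true overlap here would require I/O-bound communication or native code; the point is the structural pattern, in which the main loop never blocks on a send.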
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for a company's competitive edge. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation has its problems, however. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, communication protocols, partitioning of the problem, distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. Using this computing power for distributed simulation requires the simulation to adapt to a changing load situation: all or part of the simulation work must be removed from a workstation when its owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied; it is shown to perform better than no load balancing, and different approaches to load balancing are discussed.
Abstract:
This study examines how firms interpret new, potentially disruptive technologies in their own strategic context. The work presents a cross-case analysis of four potentially disruptive technologies or technical operating models: Bluetooth, WLAN, Grid computing and the Mobile Peer-to-peer paradigm. The technologies were investigated from the perspectives of three mobile operators, a device manufacturer and a software company in the ICT industry. The theoretical background for the study consists of the resource-based view of the firm with a dynamic perspective, theories on the nature of technology and innovations, and the concept of the business model. The literature review builds up a propositional framework for estimating the amount of radical change in the companies' business models with two intermediate variables: the disruptiveness potential of a new technology, and the strategic importance of a new technology to a firm. The data was gathered in group discussion sessions in each company. The results of the case analyses were brought together to evaluate how firms interpret the potential disruptiveness in terms of changes in product characteristics and added value, technology and market uncertainty, changes in product-market positions, possible competence disruption, and changes in value network positions. The results indicate that perceived disruptiveness in terms of product characteristics does not necessarily translate into strategic importance. In addition, the firms did not see the new technologies as threats in terms of potential competence disruption.
Abstract:
1. Introduction. "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49. These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases, the EDD, into Finnish copyright legislation in 1998. Now, in the year 2005, after more than half a decade of domestic implementation, the proper meaning and construction of the convoluted qualitative criteria the current legislation employs as a prerequisite for database protection remain uncertain, both in Finland and within the European Union. Further, this opaque pan-European instrument has the potential of bringing about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular, and currently peculiarly European, new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD, and second, to realise the potential and risks inherent in the new legislation in economic, cultural and societal dimensions.
2. Subject-matter of the study: basic issues. The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches, either qualitatively or quantitatively, the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is increasing rapidly. The accelerating transformation of information into digital form is an existing fact, not merely a reflection of the shape of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form, or even free where public lending libraries provide access to the information online. This also makes it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while production and distribution costs can be kept at a minimum thanks to the new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed.
The fear of illegal copying can lead to stark technical protection that in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, thus weakening access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy.
3. Particular issues in the digital economy and information networks. All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the mobile Internet, peer-to-peer networks, and local and wide local area networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables, previously protected partially by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, importantly, numerous databases are collected in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant, but not instantaneously obvious, example of this is a database consisting of the physical coordinates of a certain selected group of customers, used for marketing purposes through cellular phones, laptops and various handheld or vehicle-based devices connected online. These practical needs call for answers to the plethora of questions already outlined above: has the collection and securing of the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment? According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are the materials regarded as arranged in a systematic or methodical way? Only when the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making available to the public of digital materials seem to fit ill, or lead to interpretations that are at variance with the analogous domain as regards the lawful and illegal uses of information. This may well interfere with or rework the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services.
4. International sphere. After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, disputes arising on account of the implementation and interpretation of the Directive at the European level attract significance domestically. Consequently, guidelines on the correct interpretation of the Directive that import practical, business-oriented solutions may well have application at the European level. This underlines the exigency of a thorough analysis of the implications of the meaning and potential scope of database protection in Finland and the European Union. This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union stance, directly having a negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not yet have a Sui Generis database regime or its kin, while both the political and academic discourse on the matter abounds.
5. The objectives of the study. The above-mentioned background, with its several open issues, calls for a detailed study of the following questions: What is a database-at-law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation? How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context? What opportunities and threats does the current protection present to creators, users and society as a whole, including the commercial and cultural implications? And the difficult question of the relation between database protection and the protection of factual information as such.
6. Disposition. The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part introduces the political and rational background and the subsequent legislative evolution of European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of database protection, reviewing both its copyright and Sui Generis right facets in detail, together with the emergent application of the machinery in real-life societal and particularly commercial contexts. Furthermore, a general outline of copyright, relevant in the context of copyright-protected databases, is provided. For purposes of further comparison, a chapter on the precursor of the Sui Generis database right, the Nordic catalogue rule, also ensues. The third and final part analyses the positive and negative impacts of the database protection system and attempts to scrutinize its implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue concerning the IPR protection of information per se, a new tenet in the domain of copyright and related rights.
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The similar fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how or why a boundary is advancing the way it is; it simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions, which gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power; the basic level set method in particular carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with a programmable hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in a fairly broad range of image applications, such as computer vision and graphics and scientific visualization, and also to solve problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
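The core idea above, representing an N-dimensional boundary implicitly in N+1 dimensions, can be shown with the standard textbook example of a circle represented as the zero level set of a signed distance function; this is a generic illustration, not code from the thesis.

```python
# A circle (a 1-D boundary) represented implicitly as the zero level set of a
# 2-D function: the signed distance to the circle, negative inside, positive
# outside. Evolving the boundary means evolving this function.
import math

def phi(x, y, radius=1.0):
    """Signed distance to a circle of the given radius centred at the origin."""
    return math.hypot(x, y) - radius

# The boundary is wherever phi == 0.
print(phi(0.0, 0.0))   # -> -1.0  (inside the circle)
print(phi(1.0, 0.0))   # ->  0.0  (on the boundary)
print(phi(2.0, 0.0))   # ->  1.0  (outside the circle)
```

Because the boundary is just the zero crossing of phi, topology changes (curves merging or splitting) come for free, which is the generality the abstract refers to.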
Abstract:
User authentication in information systems has been one of the cornerstones of information security for decades. The idea of a username and password is the most cost-effective and most widely used way to maintain trust between an information system and its users. In the early days of information systems, when companies had only a few systems used by a small group of users, this model proved workable. Over the years the number of systems grew, and with it the number and diversity of passwords. No one could predict how many password-related problems users would encounter, how much these would congest corporate help desks, and what kinds of security risks passwords would pose in large companies. In this master's thesis we examine the problems caused by passwords in a large, global company. The problems are examined from four perspectives: people, technology, security and business. The problems are demonstrated by presenting the results of a survey of the company's employees, conducted as part of this thesis. A solution to these problems is presented in the form of a centralised password management system. Various features of the system are evaluated, and a pilot-type implementation is built to demonstrate the functionality of such a system.