42 results for Computing and software systems
Abstract:
Real-time, predictive condition monitoring is a vital part of the operation of a modern factory or production line. The commissioner of this master's thesis wishes to further develop its acoustic emission based condition monitoring system so that it provides more benefit to the customer. The thesis includes an introduction to acoustic emission and acoustic emission sensors. The goal of the work was to develop a decision-making system for the automated analysis of the data produced by the sensors manufactured by the commissioner. The work compares three software vendors and their products, and makes a recommendation on which software to acquire. In addition, guidelines are developed for programming the software to provide real-time information and maintenance instructions to its users. Finally, suggestions are given for the further development of the condition monitoring and decision-making system.
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing, healthcare information systems, and the impact of cloud computing on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting the healthcare IT infrastructure as well as improving healthcare services. The thesis therefore explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each environment, including its hardware and software, has unique strengths and weaknesses. No single parallel environment can be identified as the best for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purposes of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm combines the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
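As an illustration of the overlap idea behind MPIT, the sketch below dedicates one thread to communication so that the compute loop never blocks on message passing. This is a minimal, hypothetical example in Python with mpi4py, not the thesis's MPIT implementation; it assumes an MPI library initialized with full thread support (mpi4py's default), and all names are illustrative.

```python
# A minimal sketch of the overlap idea behind MPIT, not the thesis's
# implementation. Assumes mpi4py with an MPI build providing full
# thread support (mpi4py initializes MPI this way by default).
import queue
import threading

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

STEPS = 10

if rank == 0:
    # Collector: one result message per worker per step.
    for step in range(STEPS):
        for src in range(1, size):
            _ = comm.recv(source=src, tag=step)
else:
    outbox: queue.Queue = queue.Queue()

    def communicator() -> None:
        # Dedicated communication thread: drains the outbox so the
        # compute loop below never blocks on an MPI call.
        while True:
            item = outbox.get()
            if item is None:          # sentinel: no more results
                return
            tag, payload = item
            comm.send(payload, dest=0, tag=tag)

    t = threading.Thread(target=communicator)
    t.start()

    for step in range(STEPS):
        result = step * step          # stand-in for real computation
        outbox.put((step, result))    # hand off; computation continues

    outbox.put(None)
    t.join()
```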
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Nomadic workers travel frequently between different work sites and work mainly outside their regular workplace, but often require access to information stored electronically in corporate information systems. While working in field conditions, communication with an information system can be achieved by using mobile technology, i.e. mobile devices and wireless communication. This master's thesis studies the use of mobile technology to assist nomadic field workers in their tasks. First, different mobile technologies are compared and the constraints that characterize mobile computing are explained. In the practical part of the thesis, client software is developed for a mobile device. The software allows a nomadic construction worker to identify concrete elements and to acquire and update information concerning them. The characteristics of mobile computing and their effect on usability are taken into account in implementing the client software, and the software is designed to be as easy to use as possible.
Abstract:
The software development industry is constantly evolving. The rise of agile methodologies in the late 1990s, and new development tools and technologies, require growing attention from everybody working within this industry. Organizations have, however, used a mixture of various processes and different process languages, since a standard software development process language has not been available. A promising process meta-model called the Software & Systems Process Engineering Meta-Model (SPEM) 2.0 has recently been released. It is applied by tools such as Eclipse Process Framework Composer, which is designed for implementing and maintaining processes and method content, and aims to support a broad variety of project types and development styles. This thesis presents the concepts of software processes, models, traditional and agile approaches, method engineering, and software process improvement. Some of the most well-known methodologies (RUP, OpenUP, OpenMethod, XP and Scrum) are also introduced, with a comparison provided between them. The main focus is on the Eclipse Process Framework and SPEM 2.0, their capabilities, usage and modeling. As a proof of concept, I present a case study of modeling OpenMethod with EPF Composer and SPEM 2.0. The results show that the new meta-model and tool make it possible to easily manage method content, publish versions with customized content, and connect project tools (such as MS Project) with the process content. The software process modeling also acts as a process improvement activity.
Abstract:
This thesis surveys temporal and stochastic software reliability models and studies a few of the models in practice. The theoretical part covers the key definitions and metrics used in describing and assessing software reliability, as well as the descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of risk-based models. The second group comprises models based on fault "seeding" and significance. The empirical part contains the descriptions and results of the experiments. The experiments were carried out using three models from the first group: the Jelinski-Moranda model, the first geometric model, and the simple exponential model. The purpose of the experiments was to study how the distribution of the input data affects the performance of the models, and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved the most sensitive to the distribution due to convergence problems, and the first geometric model the most sensitive to changes in the amount of data.
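For orientation, the standard textbook forms of two of the models used in the experiments are shown below; the thesis's own notation may differ. In the Jelinski-Moranda model, N is the initial number of faults and φ the hazard contribution of a single remaining fault; in the geometric model, D is the initial hazard and k its decay factor.

```latex
% Textbook formulations (the thesis's exact notation may differ).
% \lambda_i is the hazard rate during the i-th failure interval.
\begin{align}
  \lambda_i &= \phi\,\bigl(N - (i - 1)\bigr),   && \text{Jelinski-Moranda,}\\
  \lambda_i &= D\,k^{\,i-1}, \quad 0 < k < 1,   && \text{first geometric model.}
\end{align}
```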
Abstract:
The general striving to bring down the number of municipal landfills and to increase the reuse and recycling of waste-derived materials across the EU fuels the debate on the feasibility and rationality of waste management systems. A substantial decrease in the volume and mass of landfill-disposed waste flows can be achieved by directing suitable waste fractions to energy recovery. Global fossil energy supplies are becoming ever more valuable and expensive, and efforts are being made to save fossil fuels. Waste-derived fuels offer a potential partial solution to two different problems. First, waste that cannot feasibly be reused or recycled is utilized in the energy conversion process, in accordance with the EU's Waste Hierarchy. Second, fossil fuels can be saved for purposes other than energy production, mainly as transport fuels. This thesis presents the principles of assessing the most sustainable system solution for an integrated municipal waste management and energy system. The assessment process includes:
· formation of a SISMan (Simple Integrated System Management) model of an integrated system, including mass, energy and financial flows; and
· formation of a MEFLO (Mass, Energy, Financial, Legislational, Other decision-support data) decision matrix according to the selected decision criteria, including essential and optional criteria.
The methods are described, and theoretical examples of their utilization are presented in the thesis. The assessment process involves the selection of different system alternatives (process alternatives for the treatment of different waste fractions) and comparison between the alternatives. The first novelty value of the presented methods is the perspective selected for the formation of the SISMan model. Normally, waste management and energy systems are operated separately according to the targets and principles set for each system. In this thesis, the waste management and energy supply systems are considered as one larger integrated system with the primary target of serving the customers, i.e. citizens, as efficiently as possible in the spirit of sustainable development, including the following requirements:
· reasonable overall costs, including waste management costs and energy costs;
· minimum environmental burdens caused by the integrated waste management and energy system, taking the requirement above into account; and
· social acceptance of the selected waste treatment and energy production methods.
The integrated waste management and energy system is described by forming a SISMan model comprising the three flows of the system: energy, mass and financial flows. By defining these three types of flows for an integrated system, the factor results needed in deciding which waste treatment processes to select for different waste fractions can be calculated. The model and its results form a transparent description of the integrated system under discussion. The MEFLO decision matrix is formed from the results of the SISMan model, combined with additional data including, e.g., environmental restrictions and regional aspects. System alternatives that do not meet the requirements set by legislation can be removed from the comparison before any closer numerical consideration. The second novelty value of this thesis is the three-level ranking method for combining the factor results of the MEFLO decision matrix.
As a result of the MEFLO decision matrix, a transparent ranking of the different system alternatives, including the selection of treatment processes for different waste fractions, is achieved. SISMan and MEFLO are meant to be utilized in municipal decision-making processes concerning waste management and energy supply as simple, transparent and easy-to-understand tools. The methods can be utilized in the assessment of existing systems, and particularly in the planning of future regional integrated systems. The principles of SISMan and MEFLO can also be utilized in other settings where synergies can be obtained by integrating two (or more) systems. The SISMan flow model and the MEFLO decision matrix can be formed with or without any applicable commercial or free-of-charge tool/software. SISMan and MEFLO are not bound to any libraries or databases containing process information, such as the emission data libraries utilized in life cycle assessments.
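To make the filter-then-rank idea concrete, here is a small illustrative sketch in which legislative criteria act as a hard filter before the remaining alternatives are ranked on numeric factor results. The alternatives, field names, figures and the simple sort key are hypothetical, not the thesis's data or its actual three-level ranking method.

```python
# Illustrative sketch only: data and ranking rule are invented, not the
# thesis's MEFLO matrix. It shows the general two-step idea: legislation
# acts as a hard filter, numeric factor results rank what remains.
alternatives = [
    {"name": "incineration",  "cost": 62.0, "energy_recovery": 8.1, "legal_ok": True},
    {"name": "landfill",      "cost": 40.0, "energy_recovery": 0.0, "legal_ok": False},
    {"name": "co-combustion", "cost": 55.0, "energy_recovery": 7.4, "legal_ok": True},
]

# Step 1: delete alternatives that fail legislative requirements
# before any numerical comparison.
feasible = [a for a in alternatives if a["legal_ok"]]

# Step 2: rank the remaining alternatives on the factor results,
# here by lowest cost, with higher energy recovery breaking ties.
ranked = sorted(feasible, key=lambda a: (a["cost"], -a["energy_recovery"]))

for position, alt in enumerate(ranked, start=1):
    print(position, alt["name"])
```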
Abstract:
This thesis introduces a real-time simulation environment based on the multibody simulation approach. The environment consists of components that are used in conventional product development, including computer-aided drawing, visualization, dynamic simulation and finite element software architecture, data transfer and haptics. These components are combined to perform as a coupled system on one platform. The environment is used to simulate mobile and industrial machines at different stages of the product lifetime; consequently, the demands of the simulated scenarios vary. In this thesis, the real-time simulation environment is used to study a reel mechanism of a paper machine and a gantry crane. These case systems demonstrate the usability of the real-time simulation environment for fault detection purposes and in the context of a training simulator. In order to describe the dynamic performance of a mobile or industrial machine, the nonlinear equations of motion must be defined. In this thesis, the dynamic behaviour of machines is modelled using the multibody simulation approach. A multibody system may consist of rigid and flexible bodies that are joined using kinematic joint constraints, while force components are used to describe the actuators. The strength of multibody dynamics lies in its ability to describe nonlinearities arising from wear of components, friction, large rotations or contact forces in a systematic manner. For this reason, the interfaces between subsystems such as the mechanics, hydraulics and control systems of a mechatronic machine can be defined and analyzed in a straightforward manner.
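For reference, the constrained equations of motion that a multibody formulation of this kind typically solves can be written in the standard textbook form below; the thesis's exact formulation may differ. Here M is the mass matrix, q the generalized coordinates, C the kinematic constraint equations with Jacobian C_q, λ the Lagrange multipliers, and Q the applied forces (actuators, friction, contacts).

```latex
% Standard textbook form of the constrained multibody equations of motion
% (shown for orientation; the thesis's exact formulation may differ).
\begin{align}
  \mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}}
    + \mathbf{C}_{\mathbf{q}}^{\mathsf{T}}\,\boldsymbol{\lambda}
    &= \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}},t),\\
  \mathbf{C}(\mathbf{q},t) &= \mathbf{0}.
\end{align}
```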
Abstract:
The RPC Detector Control System (RCS) is the main subject of this PhD work. The project, involving Lappeenranta University of Technology, Warsaw University and INFN of Naples, aims to integrate the different subsystems of the RPC detector and its trigger chain in order to develop a common framework to control and monitor the different parts. During the last three years I have been strongly involved in the hardware and software development, construction and commissioning of this project, as its main responsible person and coordinator. The CMS Resistive Plate Chamber (RPC) system consists of 912 double-gap chambers at its start-up in mid-2008. Continuous control and monitoring of the detector, the trigger and all the ancillary subsystems (high voltages, low voltages, environmental, gas, and cooling) is required to achieve the operational stability and reliability of such a large and complex detector and trigger system. The role of the RPC Detector Control System is to monitor the detector conditions and performance, to control and monitor all subsystems related to the RPC and their electronics, and to store all the information in a dedicated database, called the Condition DB. The RPC DCS therefore has to assure the safe and correct operation of the sub-detectors during the entire CMS lifetime (more than 10 years), detect abnormal and harmful situations, and take protective and automatic actions to minimize consequential damage. The analysis of the requirements and project challenges, the architecture design and its development, as well as the calibration and commissioning phases, represent the main tasks of the work carried out for this PhD thesis. Different technologies, middleware and solutions have been studied and adopted in the design and development of the different components, and a major challenge consisted in integrating these different parts with each other and into the general CMS control system and data acquisition framework. The RCS installation and commissioning phases, as well as its performance and the first results obtained during the last three years of CMS cosmic runs, are also presented.
Abstract:
The front end of innovation is regarded as one of the most important steps in building new software products or services, and the most significant benefits in software development can be achieved through improvements in the front end activities. Problems in the front end phase have an impact on customer dissatisfaction with delivered software and on the effectiveness of the entire software development process. When these processes are improved, the likelihood of delivering high-quality software and achieving business success increases. This thesis highlights the challenges and problems related to the early phases of software development, and provides new methods and tools for improving performance in the front end activities of software development. The theoretical framework of this study comprises two fields of research. The first belongs to the field of innovation management, and especially to the management of the early phases of the innovation process, i.e. the front end of innovation. The second is closely linked to the processes of software engineering, especially to the early phases of the software development process, i.e. the practice of requirements engineering. Thus, this study extends the theoretical knowledge and discloses the differences and similarities in these two fields of research. In addition, this study opens up a new strand for academic discussion by connecting these research directions. Several qualitative business research methodologies have been utilized in the individual publications to answer the research questions. The theoretical and managerial contribution of the study can be divided into three areas: 1) processes and concepts, 2) challenges and development needs, and 3) means and methods for the front end activities of software development. First, the study discloses the differences and similarities between the concepts of the front end of innovation and requirements engineering, and proposes a new framework for managing the front end of the software innovation process, bringing business and innovation perspectives into software development. Furthermore, the study discloses managerial perceptions of the similarities and differences in the concept of the front end of innovation between the software industry and the traditional industrial sector. Second, the study highlights the challenges and development needs in the front end phase of software development, especially challenges in communication, such as linguistic problems, ineffective communication channels, a communication gap between users/customers and software developers, and the participation of multiple persons in software development. Third, the study proposes new group methods for improving the front end activities of software development, especially customer need assessment and the elicitation of software requirements.
Abstract:
Cloud computing enables on-demand network access to shared resources (e.g., computation, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the data centers. Software as a service (SaaS) is part of cloud computing and one of its service models. SaaS is software deployed as a hosted service and accessed over the Internet; the consumer uses the provider's applications running in the cloud. SaaS separates the possession and ownership of software from its use. The applications can be accessed from any device through a thin client interface. A typical SaaS application is used with a web browser and priced monthly. In this thesis, the characteristics of cloud computing and SaaS are presented, and a few implementation platforms for SaaS are discussed. Then, four different SaaS implementation cases and one transformation case are deliberated. The pros and cons of SaaS are studied on the basis of literature references and an analysis of the SaaS implementations and the transformation case, both from the customer's and the service provider's point of view. In addition, the pros and cons of on-premises software are listed. The purpose of this thesis is to find out when SaaS should be utilized and when it is better to choose traditional on-premises software. The qualities of SaaS bring many benefits both for the customer and the provider. A customer should utilize SaaS when it provides cost savings, ease of use, and scalability over on-premises software. SaaS is reasonable when the customer does not need tailoring but only a simple, general-purpose service, and when the application supports the customer's core business. A provider should utilize SaaS when it offers cost savings, scalability, faster development, and a wider customer base over on-premises software. It is wise to choose SaaS when the application is cheap, aimed at the mass market, needs frequent updating, requires high-performance computing or the storage of large amounts of data, or when there is some other direct value from the cloud infrastructure.
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike “traditional” biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, and concurrency, among many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, as well as the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies, and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
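As a minimal illustration of the ODE-based, mass-action style of modelling referred to above, the sketch below integrates a hypothetical reversible reaction A ⇌ B with invented rate constants; it is not the thesis's heat shock response or filament self-assembly model.

```python
# A minimal example of ODE-based mass-action modelling of the kind
# referred to above. The reaction A <-> B and its rate constants are
# invented for illustration only.
from scipy.integrate import solve_ivp

K_ON, K_OFF = 1.0, 0.2           # assumed rate constants

def rhs(t, y):
    """Mass-action right-hand side: d[A]/dt and d[B]/dt for A <-> B."""
    a, b = y
    flux = K_ON * a - K_OFF * b  # net conversion of A into B
    return [-flux, flux]

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[1.0, 0.0])
print(sol.y[:, -1])              # concentrations near equilibrium
```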
Abstract:
Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This enables distributing software across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis focuses on the D-Bus system, which has recently gained a lot of users but is still scantily researched. D-Bus supports asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. Later, we present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity but maintains the other qualities at a good level. Finally, we present the currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation. The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
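The state-caching proxy idea can be sketched as follows. The bus API used here (`call_async`, `subscribe`) is hypothetical pseudo-D-Bus written purely for illustration, not dbus-python or any real binding; the point is the shape of the pattern: one asynchronous call primes a local cache, change signals keep it fresh, and reads are then served locally without blocking round trips.

```python
# Conceptual sketch of the state-caching proxy pattern. The bus API
# (`call_async`, `subscribe`) is hypothetical pseudo-D-Bus, not a real
# binding. Reads come from a local cache kept fresh by change signals,
# so object inspection needs no blocking IPC round trips.
class CachingProxy:
    def __init__(self, bus, object_path, interface):
        self._cache = {}
        # One asynchronous call primes the cache with all properties ...
        bus.call_async(object_path, interface, "GetAll",
                       on_reply=self._cache.update)
        # ... and a change signal keeps the cache consistent afterwards.
        bus.subscribe(object_path, "PropertiesChanged",
                      handler=self._on_changed)

    def _on_changed(self, changed_properties):
        self._cache.update(changed_properties)

    def get(self, name, default=None):
        # Local, non-blocking read; no synchronous IPC call is issued.
        return self._cache.get(name, default)
```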