Abstract:
In recent years, Semantic Web (SW) research has produced significant outcomes. Various industries have adopted SW technologies, while the 'deep web' has yet to reach the critical transformation point at which the majority of its data will be exploited through SW value layers. In this article we analyse SW applications from a 'market' perspective. We set out the key requirements for real-world, SW-enabled information systems and discuss the major difficulties that have delayed SW uptake. This article contributes to the SW and knowledge management literature by providing a context for discourse towards best practices for SW-based information systems.
Abstract:
In the modern business environment, the information systems that support business operations have become critical resources for companies. The ability to exploit these resources depends on the reliability of the business-critical systems and on the availability of the applications they provide. One situation in which the systems' ability to support actual business processes is jeopardized is a disaster. A disaster may be local in its impact or cover wide areas, and different types of disasters must be prepared for in the ways they require. One trend that influenced the architecture of critical information systems in the 1990s was the client/server approach. In the client/server paradigm, an application is divided into tiers so that the presentation, application and database layers can be physically separated while still forming a logically unified whole. From a business perspective, one of the revolutionary IT innovations of the 1990s was enterprise resource planning (ERP) systems, which made it possible to manage the entire production chain and other process entities in near real time. The reliability of multi-tier ERP systems has proven challenging, because completely protecting every tier against every possible disaster is impossible with current technology. To make compromises, the financial and business impact of each lost process must be understood. This is precisely why ERP systems are interesting: they affect business processes throughout the company's entire process chain. Protecting multi-tier ERP systems based on the client/server architecture against disasters therefore requires applying several techniques and technologies and combining the whole into a process framework. In this way, a planned part of the IT strategy can be created that addresses business continuity in a disaster situation and enables fast and complete recovery under all circumstances.
Abstract:
The purpose of this thesis was to study the present state of Business Intelligence in the company unit, that is, how efficiently the unit uses the possibilities of modern information management systems. The aim was to determine how the operative information management of the unit's tender process could be improved with modern information technology applications, making tender processes faster and more efficient. At the beginning it was essential to become acquainted with the written literature on Business Intelligence. Based on Business Intelligence theory, it was relatively easy, though challenging, to investigate how the tender business could be improved with Business Intelligence methods. The empirical phase of this study was executed with a qualitative research method and included theme and informal interviews at the company. Problems and challenges of the tender process were clarified as part of the empirical phase, and a group of challenges was identified when studying the information management of the company unit. Based on the theory and the interviews, a set of improvements was listed that the company could make in the future when developing its operative processes.
Abstract:
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and the inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim of this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether, five machine learning applications for three practical cases are described: the first two applications are binary classification and regression for the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding; it is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
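As an illustration of the fifth application described above, a minimal sketch of multi-label text classification with a hold-out evaluation, using scikit-learn. The toy radiology snippets and label codes are invented for illustration and are not from the dissertation:

```python
# Multi-label classification of free-form reports with a hold-out split:
# train on the first four reports, evaluate on the two unseen ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

reports = [
    "chest x-ray shows pneumonia in the left lower lobe",
    "no acute cardiopulmonary abnormality",
    "large pleural effusion on the right",
    "pneumonia with small pleural effusion",
    "suspected pneumonia in right upper lobe",
    "normal chest radiograph",
]
codes = [["pneumonia"], [], ["effusion"], ["pneumonia", "effusion"],
         ["pneumonia"], []]  # hypothetical diagnosis codes per report

vec = TfidfVectorizer()
X_tr = vec.fit_transform(reports[:4])   # fit only on the training part
X_te = vec.transform(reports[4:])       # held-out reports
mlb = MultiLabelBinarizer()
Y_tr = mlb.fit_transform(codes[:4])
Y_te = mlb.transform(codes[4:])

# One binary classifier per label turns the task into multi-label prediction.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
print("micro-F1 on held-out reports:", f1_score(Y_te, clf.predict(X_te),
                                                average="micro"))
```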
Abstract:
The management of port-related supply chains is challenging due to the complex and heterogeneous operations of ports, which involve several actors and processes. That is why the importance of information sharing is emphasised in ports. However, the information exchange between different port-related actors is often cumbersome and still involves a lot of manual work and paper. Major ports and port-related actors usually have advanced information systems in daily use, but these systems are seldom interoperable with each other, which prevents economies of scale from being reached. Smaller ports and companies might not be equipped with electronic data transmission at all. This is the final report of the Mobile Port (MOPO) project, which has sought ways to improve the management and control of port-related sea and inland traffic with the aid of ICT technologies. The project has studied port community systems (PCS) used worldwide, evaluated the suitability of a PCS for the Finnish port operating environment and created a pilot solution of a Finnish PCS in the port of HaminaKotka. Further, the dry port concept and its influence on the transportation system have been explored. The Mobile Port project comprised several literature reviews, interviews with over 50 port-related logistics and/or ICT professionals, two different kinds of simulation models, and the design and implementation of the pilot solution of the Finnish PCS. The results of these multiple studies are summarised in this report. Furthermore, recommendations for future actions and topics for further study are addressed in the report. The study revealed that the information sharing in a typical Finnish port-related supply chain contains several bottlenecks that cause delays in shipments and waste resources. The study showed that many of these bottlenecks could be solved by building a port community system for the Finnish port community. Almost 30 different kinds of potential services or service entities of a Finnish PCS were identified during the study. The basic requirements, structure, interfaces and operation model of the Finnish PCS were also defined in the study. On the basis of the results of the study, a pilot solution of the Finnish PCS was implemented in the port of HaminaKotka. The pilot solution includes the Portconnect portal for the Finnish port community system (available at https://www.portconnect.fi) and two pilot applications: a service for handling the information flows concerning the movements of railway wagons and a service for handling the information flows between Finnish ports and the Finnish-Russian border. The study also showed that port community systems can be used to improve the environmental aspects of logistics in two different ways: 1) PCSs can bring direct environmental benefits, and 2) PCSs can be used as an environmental tool in a port community. On the basis of the study, the development of the Finnish port community system should be continued by surveying other potential applications for the Finnish PCS. It is also important to study whether there are the need and the resources to extend the Finnish PCS to operate in several ports or even at a national level. In the long run, it could be reasonable to clarify whether the Finnish PCS could be connected to a Baltic Sea wide, European-wide or even worldwide maritime and port-related network in order to get the best benefit from the system.
Abstract:
A fast-changing environment puts pressure on firms to share large amounts of information with their customers and suppliers. The terms information integration and information sharing are essential for facilitating a smooth flow of information throughout the supply chain, and the terms are used interchangeably in the research literature. By integrating and sharing information, firms want to improve their logistics performance. Firms share information with their suppliers and customers by using traditional communication methods (telephone, fax, e-mail, written and face-to-face contacts) and by using advanced or modern communication methods such as electronic data interchange (EDI), enterprise resource planning (ERP), web-based procurement systems, electronic trading systems and web portals. Adopting new ways of using IT is one important resource for staying competitive in a rapidly changing market (Saeed et al. 2005, 387), and an information system that provides people with the information they need for performing their work will support company performance (Boddy et al. 2005, 26). The purpose of this research has been to test and understand the relationship between information integration with key suppliers and/or customers and a firm's logistics performance, especially when information technology (IT) and information systems (IS) are used for integrating information. Quantitative and qualitative research methods have been used to perform the research. Special attention has been paid to the scope, level and direction of information integration (Van Donk & van der Vaart 2005a). In addition, the four elements of integration (Jahre & Fabbe-Costes 2008) are closely tied to the frame of reference. The elements are integration of flows, integration of processes and activities, integration of information technologies and systems, and integration of actors. The study found that information integration has a low positive relationship to operational performance and a medium positive relationship to strategic performance. The potential performance improvements found in this study vary from efficiency, delivery and quality improvements (operational) to profit, profitability or customer satisfaction improvements (strategic). The results indicate that although information integration has an impact on a firm's logistics performance, not all performance improvements have been achieved. This study also found that the use of IT and IS has a moderate positive relationship to information integration. Almost all case companies agreed that the use of IT and IS could facilitate information integration and improve their logistics performance. The case companies felt that an implementation of a web portal or a data bank would benefit them by enhancing their performance and increasing information integration.
Abstract:
The Internet of Things (IoT) is revolutionizing the world we live in, much as the Internet and the web did a few decades ago. It is changing how we interact with the things surrounding us. Electronic health (eHealth) and remote patient monitoring are ways of applying these technological improvements to healthcare. IoT has many applications in eHealth; for example, it opens the gate to providing healthcare in remote areas of the world where healthcare cannot be provided through traditional hospital systems. To connect these new eHealth IoT systems with existing healthcare information systems, we can use the interoperability standards commonly used in healthcare information systems. In this thesis we implemented an eHealth IoT system based on the Health Level 7 (HL7) interoperability standard for continuous data transmission. Little previous work has implemented HL7 for continuous sensor data transmission: some of it was limited to sensors that are not continuous in nature, and some of it is only theoretical architecture. This thesis aims to prove that it is possible to implement an eHealth IoT system using sensors that require continuous data transmission, such as respiratory sensors, and to connect it semantically to an existing eHealth information system by using the HL7 interoperability standard. Such a system will be beneficial in implementing eHealth IoT systems for patients who require continuous personal health monitoring, including elderly people and patients whose health needs to be monitored constantly. To implement the architecture, HL7 v2.5 was selected due to its ease of implementation and small message size. We selected open source technologies because of their open licenses and large developer communities. We also review the most efficient technology available in every layer of an eHealth IoT system and propose an efficient system.
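As an illustration of the transmission format the thesis builds on, a minimal sketch of wrapping a respiratory-rate reading in an HL7 v2.5 ORU^R01 observation message. The segment fields are simplified, the device, patient and endpoint identifiers are hypothetical, and the thesis's actual message profile may differ:

```python
# Build a pipe-delimited HL7 v2.5 ORU^R01 message for one sensor reading.
from datetime import datetime, timezone

def hl7_oru_r01(patient_id: str, resp_rate: float, msg_id: str) -> str:
    ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    segments = [
        # MSH: sender/receiver identifiers here are hypothetical.
        f"MSH|^~\\&|IOT_GW|HOME|EHR|HOSPITAL|{ts}||ORU^R01|{msg_id}|P|2.5",
        f"PID|||{patient_id}",
        f"OBR|1|||RESP^Respiratory monitoring",
        # OBX: value type NM = numeric; LOINC 9279-1 = respiratory rate.
        f"OBX|1|NM|9279-1^Respiratory rate^LN||{resp_rate}|/min|||||F",
    ]
    return "\r".join(segments)  # HL7 v2 separates segments with carriage returns

# Each new sensor sample would be wrapped and sent the same way.
print(hl7_oru_r01("12345", 18, "MSG0001"))
```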
Abstract:
A conceptual problem that appears in different contexts of clustering analysis is that of measuring the degree of compatibility between two sequences of numbers. This problem is usually addressed by means of numerical indexes referred to as sequence correlation indexes. This paper elaborates on why some specific sequence correlation indexes may not be good choices depending on the application scenario at hand. A variant of the Product-Moment correlation coefficient and a weighted formulation for the Goodman-Kruskal and Kendall's indexes are derived that may be more appropriate for some particular application scenarios. The proposed and existing indexes are analyzed from different perspectives, such as their sensitivity to the ranks and magnitudes of the sequences under evaluation, among other relevant aspects of the problem. The results help suggest scenarios within the context of clustering analysis that are possibly more appropriate for the application of each index. (C) 2008 Elsevier Inc. All rights reserved.
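For a concrete sense of the distinction the paper draws, a minimal sketch comparing a magnitude-sensitive index with rank-based ones on two toy sequences. scipy's weightedtau is one weighted Kendall variant; the weighted formulations derived in the paper are its own and are not reproduced here:

```python
# Compare magnitude-sensitive and rank-based sequence correlation indexes.
from scipy.stats import kendalltau, pearsonr, weightedtau

a = [0.1, 0.4, 0.5, 0.9, 2.0]
b = [0.2, 0.3, 0.6, 1.8, 1.9]

# Pearson reacts to the magnitudes of the values; the Kendall indexes
# look only at the relative order of the elements.
print("Pearson (magnitude-sensitive):", pearsonr(a, b)[0])
print("Kendall tau (ranks only):     ", kendalltau(a, b)[0])
print("Weighted Kendall tau:         ", weightedtau(a, b)[0])
```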
Abstract:
The aim of task scheduling is to minimize the makespan of applications, exploiting shared resources in the best possible way. Applications have requirements that call for customized environments for their execution, and one way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) on grid resources and tasks on these VMs. The schedulers differ from previous work in the joint scheduling of tasks and VMs and in considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
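A minimal sketch of the kind of assignment ILP involved, placing tasks on VMs to minimize makespan, modeled with the PuLP library. The bandwidth and joint VM-placement aspects that distinguish the paper's schedulers are omitted, and the task runtimes are illustrative:

```python
# ILP: assign each task to exactly one VM so the busiest VM finishes earliest.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

runtimes = {"t1": 4, "t2": 2, "t3": 3, "t4": 5}   # task -> runtime (illustrative)
vms = ["vm1", "vm2"]

prob = LpProblem("makespan_scheduling", LpMinimize)
x = LpVariable.dicts("x", (runtimes, vms), cat=LpBinary)  # x[t][v] = 1 iff t on v
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                   # objective: minimize makespan
for t in runtimes:                                 # each task on exactly one VM
    prob += lpSum(x[t][v] for v in vms) == 1
for v in vms:                                      # each VM's load bounds makespan
    prob += lpSum(runtimes[t] * x[t][v] for t in runtimes) <= makespan

prob.solve()
for t in runtimes:
    for v in vms:
        if x[t][v].value() == 1:
            print(t, "->", v)
print("makespan:", makespan.value())
```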
Abstract:
Single-page applications have historically been subject to strong market forces driving fast development and deployment in lieu of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, AngularJS being a general framework providing rich base functionality and React a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability in the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size.
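For reference, a minimal sketch of the classic three-metric Maintainability Index formula (Oman and Hagemeister). Variants exist, such as Visual Studio's 0-100 rescaling, and the report may use a different one; the input metrics below are illustrative:

```python
# Classic Maintainability Index: higher is more maintainable.
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          lines_of_code: int) -> float:
    """MI = 171 - 5.2*ln(V) - 0.23*G - 16.2*ln(LOC)."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(lines_of_code))

# Adding functionality typically raises volume, complexity and LOC,
# which drives MI down - the trend the report observed over its two steps.
print(maintainability_index(1200, 12, 300))   # smaller, simpler module
print(maintainability_index(2400, 25, 600))   # grown module scores lower
```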
Abstract:
Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. With the increasing popularity of Web service-based applications, and since the messages exchanged inside these applications can be complex, we need tools to simplify the understanding of the interrelationships among Web services. This work presents a description of a graphical representation of Web service-based applications and the mechanisms inserted between Web service requesters and providers to capture the information needed to represent an application. The major contribution of this paper is to discuss and use HTTP and SOAP information to show a graphical representation, similar to a UML sequence diagram, of Web service-based applications.
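As an illustration of the kind of information such an interception mechanism extracts, a minimal sketch that pulls the operation name out of a captured SOAP message, i.e. one arrow of the sequence-diagram-like representation. The envelope and service names are invented for illustration:

```python
# Parse a captured SOAP 1.1 envelope and extract the invoked operation.
import xml.etree.ElementTree as ET

SOAP_ENV = "{http://schemas.xmlsoap.org/soap/envelope/}"

captured = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getQuote xmlns="http://example.org/stock"><symbol>ACME</symbol></getQuote>
  </soap:Body>
</soap:Envelope>"""

body = ET.fromstring(captured).find(f"{SOAP_ENV}Body")
operation = body[0].tag.split("}")[-1]        # first child of Body = operation
print("requester -> provider :", operation)   # one arrow in the diagram
```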
Abstract:
Service provisioning is a challenging research area for the design and implementation of autonomic service-oriented software systems. It includes automated QoS management for such systems and their applications. Monitoring, diagnosis and repair are three key features of QoS management. This work presents a self-healing Web service-based framework that manages QoS degradation at runtime. Our approach is based on proxies. Proxies act on meta-level communications and extend the HTTP envelope of the exchanged messages with QoS-related parameter values. QoS data are filtered over time and analysed using statistical functions and a Hidden Markov Model. Detected QoS degradations are handled by the proxies. We evaluated our framework using an orchestrated electronic shop application (FoodShop).
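A minimal sketch of two proxy-side ideas the abstract combines: stamping a QoS parameter value onto the HTTP envelope of a message, and smoothing observed response times to flag degradation. The header name, window size and threshold are hypothetical, and a simple moving average stands in for the paper's statistical filtering and Hidden Markov Model, which are not reproduced here:

```python
# Proxy-side QoS stamping and a moving-average degradation check.
from collections import deque

def stamp_qos_headers(headers: dict, response_time_ms: float) -> dict:
    """Return a copy of the HTTP headers extended with a QoS value."""
    stamped = dict(headers)
    stamped["X-QoS-ResponseTime"] = str(response_time_ms)  # hypothetical header
    return stamped

class DegradationMonitor:
    def __init__(self, window: int = 5, threshold_ms: float = 500.0):
        self.samples = deque(maxlen=window)  # sliding window of observations
        self.threshold_ms = threshold_ms

    def observe(self, response_time_ms: float) -> bool:
        """Return True when the smoothed response time breaches the threshold."""
        self.samples.append(response_time_ms)
        return sum(self.samples) / len(self.samples) > self.threshold_ms

monitor = DegradationMonitor()
for rt in [120, 150, 900, 1100, 1300]:
    if monitor.observe(rt):
        print(f"QoS degradation detected at {rt} ms - hand off to repair proxy")
```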