998 results for Legacy Systems
Abstract:
Despite high expectations, industrial take-up of Semantic Web technologies in developing services and applications has been slower than expected. One of the main reasons is that many legacy systems have been developed without considering the potential of the Web for integrating services and sharing resources. Without a systematic methodology and proper tool support, the migration from legacy systems to Semantic Web Service-based systems can be a tedious and expensive process, which carries a significant risk of failure. There is an urgent need to provide strategies that allow the migration of legacy systems to Semantic Web Services platforms, and tools to support such strategies. In this paper we propose a methodology, together with tool support, for transitioning these applications to Semantic Web Services, allowing users to migrate their applications to Semantic Web Services platforms automatically or semi-automatically. The transition of the GATE system is used as a case study. © 2009 - IOS Press and the authors. All rights reserved.
Abstract:
This thesis presents an experimental investigation of different effects and techniques that can be used to upgrade legacy WDM communication systems. The main constraint in upgrading legacy systems is that the fundamental setup, including component settings such as EDFA gains, cannot be altered; the improvement must therefore be carried out at the network terminals. A general introduction to optical fibre communications is given at the beginning, covering optical communication components and system impairments. Experimental techniques for performing laboratory optical transmission experiments are presented before the experimental work of this thesis. These techniques include optical transmitter and receiver designs as well as the design and operation of the recirculating loop. The main experimental work comprises three studies. The first study involves the development of line monitoring equipment that can be reliably used to monitor the performance of optically amplified long-haul undersea systems. This equipment can instantly locate faults along the legacy communication link, which in turn enables rapid repair and hence upgrades the legacy system. The second study investigates the effect of changing the number of transmitted 1s and 0s on the performance of a WDM system. This effect can, in practice, be seen in some coding schemes, e.g. the forward error correction (FEC) technique, where the proportion of 1s and 0s is changed at the transmitter by adding extra bits to the original bit sequence. The final study presents transmission results after all-optical format conversion from NRZ to CSRZ and from RZ to CSRZ using a semiconductor optical amplifier in a nonlinear optical loop mirror (SOA-NOLM). This study is mainly motivated by the fact that all-optical processing, including format conversion, has become attractive for future data networks, which are proposed to be all-optical.
The feasibility of the SOA-NOLM device for converting single and WDM signals is described. The optical conversion bandwidth and its limitations for WDM conversion are also investigated. All studies in this thesis employ 10 Gbit/s single-channel or WDM signals transmitted over a dispersion-managed fibre span in the recirculating loop. The fibre span is composed of single-mode fibres (SMF) whose losses and dispersion are compensated using erbium-doped fibre amplifiers (EDFAs) and dispersion-compensating fibres (DCFs), respectively. Different configurations of the fibre span are presented in different parts of the thesis.
Abstract:
When reengineering legacy systems, it is crucial to assess whether the legacy behavior has been preserved, or how it changed due to the reengineering effort. Ideally, if a legacy system is covered by tests, running the tests on the new version can identify potential differences or discrepancies. However, writing tests for an unknown and large system is difficult due to the lack of internal knowledge; it is especially difficult to bring the system into an appropriate state. Our solution is based on the acknowledgment that one of the few trustworthy pieces of information available when approaching a legacy system is the running system itself. Our approach reifies the execution traces and uses logic programming to express tests on them. It thereby eliminates the need to programmatically bring the system into a particular state, and hands the test writer a high-level abstraction mechanism for querying the trace. The resulting system, called TESTLOG, was used on several real-world case studies to validate our claims.
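The idea of expressing tests directly over a reified execution trace can be illustrated with a small sketch. The paper's TESTLOG uses logic programming; the `TraceEvent` record, the `query` helper and the shopping-cart trace below are purely illustrative assumptions, not the paper's API:

```python
from dataclasses import dataclass

# A hypothetical reified trace event: one record per method execution.
@dataclass
class TraceEvent:
    receiver: str     # identity of the receiving object
    method: str       # method that was executed
    args: tuple       # argument values
    result: object    # returned value

# Example trace captured from a running legacy system (invented data).
trace = [
    TraceEvent("cart-1", "add_item", ("apple", 2), None),
    TraceEvent("cart-1", "add_item", ("pear", 1), None),
    TraceEvent("cart-1", "total_items", (), 3),
]

def query(trace, **conditions):
    """Declaratively select trace events matching all given conditions,
    loosely in the spirit of a logic-programming query over the trace."""
    return [e for e in trace
            if all(getattr(e, k) == v for k, v in conditions.items())]

# A 'test' expressed against the trace instead of a programmatically
# constructed fixture: after two add_item calls, total_items returned 3.
adds = query(trace, receiver="cart-1", method="add_item")
totals = query(trace, method="total_items")
assert len(adds) == 2 and totals[0].result == 3
```

The point of the sketch is that the fixture never has to be rebuilt in code: the trace already contains the states the running system actually reached.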
Abstract:
This thesis proposes how to apply Semantic Web technologies to Idea Management Systems to deliver a solution to knowledge-management and information-overflow problems. Firstly, the aim is to present a model that introduces rich metadata annotations and their usage in the domain of Idea Management Systems. Furthermore, the thesis investigates how to link innovation data with information from other systems and use it to categorize and filter out the most valuable elements. In addition, the thesis presents a Generic Idea and Innovation Management Ontology (Gi2MO) and aims to back its creation with a set of case studies, followed by evaluations that demonstrate how the Semantic Web can work as a tool to create new opportunities and take contemporary Idea Management legacy systems to the next level.
Abstract:
Half a decade has passed since the objectives and benefits of autonomic computing were stated, yet even the latest system designs and deployments exhibit only limited and isolated elements of autonomic functionality. From an autonomic computing standpoint, all computing systems – old, new or under development – are legacy systems, and will continue to be so for some time to come. In this paper, we propose a generic architecture for developing fully-fledged autonomic systems out of legacy, non-autonomic components, and we investigate how existing technologies can be used to implement this architecture.
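A minimal sketch of the kind of architecture described above: a non-autonomic legacy component wrapped in an autonomic manager that runs a monitor-analyse-plan-execute loop. All class names, the toy load model and the threshold are assumptions for illustration, not the paper's design:

```python
# Illustrative only: a legacy component with no self-management,
# made self-optimising by an external autonomic manager.

class LegacyService:
    """Stands in for an existing component with no autonomic features."""
    def __init__(self):
        self.workers = 1

    def handle(self, load):
        # Toy model: response time grows with load per worker.
        return load / self.workers

class AutonomicManager:
    """Wraps the legacy component in a monitor-analyse-plan-execute loop."""
    def __init__(self, service, target=10.0):
        self.service = service
        self.target = target                      # desired max response time

    def step(self, load):
        observed = self.service.handle(load)      # Monitor
        overloaded = observed > self.target       # Analyse
        if overloaded:                            # Plan
            needed = int(load // self.target) + 1
            self.service.workers = needed         # Execute (reconfigure)
        return self.service.handle(load)

mgr = AutonomicManager(LegacyService())
response = mgr.step(50)      # manager scales the legacy component
assert response <= mgr.target
```

The legacy code itself is untouched; only the externally observable knobs (here, a worker count) are driven by the management loop, which is the essence of retrofitting autonomic behaviour onto non-autonomic components.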
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
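QoSMOS formalises requirements as probabilistic temporal logic and analyses them with formal tools; as a loose illustration of the underlying idea of choosing a service configuration that satisfies a high-level QoS requirement, the following sketch simply multiplies the success probabilities of candidate services invoked in sequence. All service names and numbers are invented:

```python
from itertools import product as cartesian

# Candidate concrete services for two abstract operations:
# (name, reliability, cost) -- invented figures for illustration.
candidates = {
    "analyse": [("s1", 0.98, 5), ("s2", 0.995, 9)],
    "alarm":   [("s3", 0.97, 2), ("s4", 0.999, 7)],
}
requirement = 0.99   # required end-to-end success probability

def best_configuration(candidates, requirement):
    """Cheapest selection whose product of reliabilities meets the requirement
    (a toy stand-in for formal probabilistic analysis)."""
    feasible = []
    for combo in cartesian(*candidates.values()):
        reliability = 1.0
        cost = 0
        for _, r, c in combo:
            reliability *= r
            cost += c
        if reliability >= requirement:
            feasible.append((cost, [name for name, _, _ in combo], reliability))
    return min(feasible) if feasible else None

cost, services, reliability = best_configuration(candidates, requirement)
```

Re-running such an analysis whenever the observed reliabilities change is what turns the selection into a self-adaptation mechanism.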
Abstract:
A large and growing number of software systems rely on non-trivial coordination logic for making use of third-party services or components. It is therefore of utmost importance to understand and capture this continuously growing layer of coordination rigorously, as this will ease not only the verification of such systems with respect to their original specifications, but also maintenance, further development, testing, deployment and integration. This paper introduces a method based on several program analysis techniques (namely dependence graphs, program slicing, and graph pattern analysis) to extract coordination logic from the source code of legacy systems. The process is driven by a series of pre-defined coordination patterns and captured by a special-purpose graph structure from which coordination specifications can be generated in a number of different formalisms.
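A drastically simplified sketch of pattern-driven extraction over a dependence-like graph: the call graph, the `external` set and the single "scatter" coordination pattern below are illustrative assumptions; the paper works on source code with slicing and richer graph patterns:

```python
# Invented call-dependence data: caller -> callees.
calls = {
    "process_order": ["check_stock", "bill_customer", "log"],
    "check_stock":   ["inventory_service"],
    "bill_customer": ["payment_service"],
    "log":           [],
}
external = {"inventory_service", "payment_service"}

def scatter_nodes(calls, external):
    """One pre-defined coordination pattern: nodes whose transitive
    callees reach two or more external services ('scatter' coordination)."""
    def reach(node, seen):
        # seen guards against cycles; returns external services reached.
        found = set()
        for callee in calls.get(node, []):
            if callee in external:
                found.add(callee)
            elif callee not in seen:
                seen.add(callee)
                found |= reach(callee, seen)
        return found
    return {n for n in calls if len(reach(n, set())) >= 2}

assert scatter_nodes(calls, external) == {"process_order"}
```

Matched nodes like `process_order` are the candidates from which a coordination specification would then be generated.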
Abstract:
With the constant development of new information technologies, organizations need to implement new management tools in order to generate competitive advantages. In this sense, this dissertation proposes the implementation of a Strategic Management Model based on the Balanced Scorecard in an interbank services company, with the aim of assisting in the creation of competitive capabilities through a more precise and structured performance assessment. This tool emerged as an alternative to traditional legacy systems aimed at controlling the activities performed by employees: the methodology put strategy in the spotlight instead of mere control, but not until 1992 was it recognized as a revolutionary process that changed the entire standard management process in companies. This dissertation discusses the concepts of this methodology, emphasizing the strategy map and the advantages of adopting it, and presents a practical application of the Strategic Management Model at EMIS. Finally, it is noteworthy that the great importance companies have been attaching to this methodology, and the investment they have been making in it, guarantees its relevance as a research subject in the near future.
Abstract:
The natural evolution of information systems in organizations involves, on the one hand, the installation of up-to-date equipment and, on the other, the adoption of new business-support applications, keeping pace with market developments, changes in the business model and the maturing of the organization in a new context. This evolution often implies preserving existing data and functionality not covered by the new applications, which leads to the development and execution of processes for data migration, application migration and the integration of legacy systems. These processes are constrained by the available technology and by the existing knowledge about the legacy systems, and are sensitive to the context in which they take place. This dissertation presents a state of the art of migration and integration approaches, describes the various alternatives, and illustrates, in a systematic and comparative way, exercises carried out using different approaches in a real, changing migration and integration environment.
Abstract:
In the past decade, a number of trends have come together in the general sphere of computing that have profoundly affected libraries. The popularisation of the Internet, the appearance of open and interoperable systems, the improvements within graphics and multimedia, and the generalised installation of LANs are some of the events of the period. Taken together, the result has been that libraries have undergone an important functional change, representing the switch from simple information depositories to information disseminators. Integrated library management systems have not remained unaffected by this transformation and those that have not adapted to the new technological surroundings are now referred to as legacy systems. The article describes the characteristics of systems existing in today's market and outlines future trends that, according to various authors, include the disappearance of the integrated library management systems that have traditionally been sold.
Abstract:
This Master's thesis discusses service-oriented architecture and the extension, by means of assistive technology, of a service interface built on top of a legacy system. The assistive technology is used to automate functions of the legacy system's graphical program into a web service. The thesis first presents the definition of service-oriented architecture and the design principles that follow from it. It then reviews theory, implementations and approaches for integrating legacy systems into a service-oriented architecture, and surveys the support that the Microsoft Windows environment offers for assistive technology. The service interface was extended using a black-box method in which the legacy system's graphical program is automated with assistive technology. The method proved workable and can be used to integrate legacy systems into a service-oriented architecture.
Abstract:
Wednesday 23rd April 2014. Speaker(s): Willi Hasselbring. Organiser: Leslie Carr. Time: 23/04/2014 14:00-15:00. Location: B32/3077. File size: 802 MB. Abstract: The internal behavior of large-scale software systems cannot be determined on the basis of static (e.g., source code) analysis alone. Kieker provides complementary dynamic analysis capabilities, i.e., monitoring/profiling and analyzing a software system's runtime behavior. Application Performance Monitoring is concerned with continuously observing a software system's performance-specific runtime behavior, including analyses such as assessing service-level compliance or detecting and diagnosing performance problems. Architecture Discovery is concerned with extracting architectural information from an existing software system, covering both structural and behavioral aspects, such as identifying architectural entities (e.g., components and classes) and their interactions (e.g., local or remote procedure calls). In addition to the Architecture Discovery of Java systems, Kieker supports Architecture Discovery for other platforms, including legacy systems implemented, for instance, in C#, C++, Visual Basic 6, COBOL or Perl. Thanks to Kieker's extensible architecture, it is easy to implement and use custom extensions and plugins. Kieker was designed for continuous monitoring in production systems, inducing only a very low overhead, which has been evaluated in extensive benchmark experiments. Please refer to http://kieker-monitoring.net/ for more information.
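The flavour of such monitoring probes can be sketched in a few lines. Kieker itself instruments Java and the other platforms listed above via configurable probes; this standalone Python decorator is only an analogy, and all names in it are invented:

```python
import time
from collections import defaultdict

# Per-operation call counts and cumulative wall-clock time, analogous to
# the monitoring records a framework like Kieker collects at runtime.
records = defaultdict(lambda: {"calls": 0, "total_s": 0.0})

def monitored(fn):
    """A probe wrapping one operation; real frameworks weave these in
    without touching the application source."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            rec = records[fn.__name__]
            rec["calls"] += 1
            rec["total_s"] += time.perf_counter() - start
    return wrapper

@monitored
def lookup(key):
    return key.upper()

for k in ("a", "b", "c"):
    lookup(k)
# records["lookup"] now holds 3 calls and their cumulative duration,
# which an analysis stage could check against a service-level target.
```

Keeping the probe this thin is what makes continuous monitoring in production feasible with low overhead.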
Abstract:
Nowadays the use of information and communication technology is becoming prevalent in many aspects of healthcare services, from patient registration to consultation, treatment and pathology test requests. Manual interface techniques have dominated data-capture activities in primary and secondary care settings for decades. Despite the improvements made in IT, usability issues still remain over the use of I/O devices like the computer keyboard, touch-sensitive screens, light pens and barcodes. Furthermore, clinicians have to use several computer applications when providing healthcare services to patients. One of the problems faced by medical professionals is the lack of data integrity between the different software applications, which in turn can hinder the provision of healthcare services tailored to the needs of the patients. The use of digital pen and paper technology integrated with legacy medical systems holds the promise of improving healthcare quality. This paper discusses the issue of data integrity in e-health systems and proposes the modelling of "Smart Forms" via semiotics to potentially improve integrity between legacy systems, making the work of medical professionals easier and improving the quality of care in primary care practices and hospitals.
Abstract:
As one of the largest and most complex organizations in the world, the Department of Defense (DoD) faces many challenges in solving its well-documented financial and related business operations and system problems. The DoD is in the process of implementing modern multifunction enterprise resource planning (ERP) systems to replace many of its outdated legacy systems. This paper explores the ERP implementations of the DoD and seeks to determine the impact of the ERP implementations on the alignment of the DoD’s business and IT strategy. A brief overview of the alignment literature and background on ERP are followed by case study analysis of the DoD ERP development and current implementation status. Lastly, the paper explores the current successes and failures of the ERP implementation and the impact on the DoD’s goal of strategic alignment.
Abstract:
Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify what the required fixture is and what assertions are needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based on both tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach we present results from two case studies.
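The state-diff part of this idea can be sketched as follows. The paper's approach also traces method executions and object flow; the `Account` class and the `side_effects` helper here are invented for illustration:

```python
import copy

class Account:
    """Invented stand-in for a legacy class whose internals are unknown."""
    def __init__(self):
        self.balance = 100
        self.history = []
        self.owner = "alice"

    def withdraw(self, amount):
        self.balance -= amount
        self.history.append(("withdraw", amount))

def side_effects(obj, action):
    """Run `action(obj)` and return {field: (before, after)} for every
    field the execution changed -- the raw material for assertions."""
    before = copy.deepcopy(vars(obj))
    action(obj)
    after = vars(obj)
    return {k: (before[k], after[k]) for k in after if before.get(k) != after[k]}

# An example run of the unit under test exposes its side effects:
effects = side_effects(Account(), lambda a: a.withdraw(30))
# Each changed field suggests one assertion for a new test of withdraw(30):
assert effects["balance"] == (100, 70)
assert effects["history"] == ([], [("withdraw", 30)])
assert "owner" not in effects   # unchanged state needs no assertion
```

A developer without internal knowledge of the system can read the diff off an example run instead of guessing which fixture and assertions a test needs.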