853 results for 280112 Information Systems Development Methodologies
Abstract:
Background: Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system for a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) process correctness, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proved the feasibility and shown the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through easy-to-use end-user interfaces.
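For readers unfamiliar with ACP, control flow in that algebra is built from sequential composition (·), alternative composition (+), and parallel merge (∥). A hypothetical genetic-testing workflow in this notation might read as follows; the step names are illustrative and are not taken from the CEGH system:

```latex
W = \mathit{register} \cdot
    (\mathit{extractDNA} \parallel \mathit{reviewClinicalData}) \cdot
    (\mathit{sangerTest} + \mathit{ngsPanel}) \cdot
    \mathit{report}
```

Because the workflow is an algebraic term, properties such as deadlock freedom can be checked on the expression itself before any process instance is executed.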
Abstract:
The need to effectively manage the documentation covering the entire production process, from the concept phase right through to market release, is a key issue in the creation of a successful and highly competitive product. For almost forty years the most commonly used strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the '90s, this methodology is now widely used by companies operating all over the world in many different sectors. PLM systems and editor programs are the two principal types of software applications used by companies for their process automation. Editor programs store information related to the production chain in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Different software tools, which capture and store documents and information automatically in the PLM system, have been developed in recent years. One of them is the "DirectPLM" application, developed by the Italian company "Focus PLM" to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present "DirectPLM2", a new version of the previous DirectPLM application, designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the concrete command implementation, which was previously strongly dependent on Aras Innovator. Thanks to this new design, Focus PLM can easily develop different versions of DirectPLM2, each one devised for a specific PLM system: the company can focus its development effort on the specific set of software components that provides the specialized functions interacting with that particular PLM system. This allows a shorter time to market and gives the company a significant competitive advantage.
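As an illustration of the kind of separation described, the sketch below shows a minimal abstract interface with one PLM-specific implementation, assuming a single document check-in operation. All class and method names here are hypothetical and are not the actual DirectPLM2 API:

```python
from abc import ABC, abstractmethod

class PLMConnector(ABC):
    """Abstract business-logic boundary: one method per PLM operation."""
    @abstractmethod
    def check_in(self, document_path: str, metadata: dict) -> str: ...

class ArasInnovatorConnector(PLMConnector):
    """PLM-specific component: only this layer would know the Aras API."""
    def check_in(self, document_path: str, metadata: dict) -> str:
        # Translate the abstract request into Aras Innovator calls here.
        return f"aras-item-id:{document_path}"

def publish(connector: PLMConnector, path: str) -> str:
    # Editor-side logic depends only on the abstract interface, so
    # supporting a new PLM system means writing a new connector,
    # not rewriting the application.
    return connector.check_in(path, {"state": "Released"})
```

Under a design of this shape, the time-to-market claim follows directly: per PLM system, only the connector layer changes.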
Abstract:
Information systems (IS) outsourcing projects often fail to achieve their initial goals. To avoid project failure, managers need to design formal controls that meet the specific contextual demands of the project. However, the dynamic and uncertain nature of IS outsourcing projects makes it difficult to design such specific formal controls at the outset of a project. It is hence crucial to translate high-level project goals into specific formal controls during the course of a project. This study seeks to understand the underlying patterns of such translation processes. Based on a comparative case study of four outsourced software development projects, we inductively develop a process model that consists of three unique patterns. The process model shows that the performance implications of emergent controls with higher specificity depend on differences in the translation process. Specific formal controls have positive implications for goal achievement if only the stakeholder context is adapted, but negative implications if tasks are unintentionally adapted in the translation process. In the latter case, projects incrementally drift away from their initial direction. Our findings help to better understand control dynamics in IS outsourcing projects. We contribute to a process-theoretic understanding of IS outsourcing governance and derive implications for control theory and the IS project escalation literature.
Abstract:
Knowledge management is critical for the success of virtual communities, especially in the case of distributed working groups. A representative example of this scenario is distributed software development, where optimal coordination is necessary to avoid common problems such as duplicated work. In this paper the feasibility of using workflow technology as a knowledge management system is discussed, and a practical use case is presented. This use case is an information system that has been deployed within a banking environment. It combines common workflow technology with a new conception of the interaction among participants through the extension of existing definition languages.
Abstract:
In the risk analysis methodologies for information systems promoted by international standards, assets are interrelated: an attack on one asset can propagate through the network and threaten an organization's most valuable assets. It is therefore necessary to valuate all assets, the direct and indirect dependencies between them, and the probability of threats and the resulting asset degradation. These methodologies do not, however, consider uncertain valuations; they use precise values on different scales, usually percentages. In the approach presented here, experts use linguistic terms to represent asset values, dependencies, and the frequency and asset degradation associated with possible threats. Computations are based on the trapezoidal fuzzy numbers associated with these linguistic terms.
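To make the representation concrete, the sketch below encodes two hypothetical linguistic terms as trapezoidal fuzzy numbers on [0, 1] and attenuates an asset value through a dependency; the scale and the propagation rule are illustrative assumptions, not the paper's calibration:

```python
from dataclasses import dataclass

@dataclass
class Trapezoid:
    # Trapezoidal fuzzy number: support [a, d], core (membership 1) [b, c]
    a: float
    b: float
    c: float
    d: float

    def scale(self, k: float) -> "Trapezoid":
        # Multiplying by a crisp factor k in [0, 1] attenuates the
        # value propagated along a dependency edge.
        return Trapezoid(k * self.a, k * self.b, k * self.c, k * self.d)

# Hypothetical linguistic scale on [0, 1]
HIGH = Trapezoid(0.6, 0.7, 0.8, 0.9)
MEDIUM = Trapezoid(0.35, 0.45, 0.55, 0.65)

# Propagate a "High" asset value through a "Medium" dependency, using the
# centroid of the dependency term as the crisp attenuation factor.
k = (MEDIUM.a + MEDIUM.b + MEDIUM.c + MEDIUM.d) / 4   # = 0.5
print(HIGH.scale(k))   # Trapezoid(a=0.3, b=0.35, c=0.4, d=0.45)
```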
Abstract:
The amount of genomic and proteomic data that is entered each day into databases and the experimental literature is outstripping the ability of experimental scientists to keep pace. While generic databases derived from automated curation efforts are useful, most biological scientists tend to focus on a class or family of molecules and their biological impact. Consequently, there is a need for molecular class-specific or other specialized databases. Such databases collect and organize data around a single topic or class of molecules. If curated well, such systems are extremely useful as they allow experimental scientists to obtain a large portion of the available data most relevant to their needs from a single source. We are involved in the development of two such databases with substantial pharmacological relevance. These are the GPCRDB and NucleaRDB information systems, which collect and disseminate data related to G protein-coupled receptors and intra-nuclear hormone receptors, respectively. The GPCRDB was a pilot project aimed at building a generic molecular class-specific database capable of dealing with highly heterogeneous data. A first version of the GPCRDB project has been completed and it is routinely used by thousands of scientists. The NucleaRDB was started recently as an application of the concept for the generalization of this technology. The GPCRDB is available via the WWW at http://www.gpcr.org/7tm/ and the NucleaRDB at http://www.receptors.org/NR/.
Abstract:
In the light of the growing interest in Information Systems Offshore Outsourcing, both in the managerial world and in the academic arena, the present work carries out a review of the research in this area. We have analysed 89 research articles on this topic published in 17 prestigious journals. The analysis deals with aspects such as research methodologies, the level of analysis in the studies, data perspective, the economic theories used, and the location of vendors and clients of these services; it additionally identifies the most frequent topics in this field as well as the most prolific authors and countries. Although other reviews of the research in this area have been published, the present paper achieves a greater level of detail than previous works. The review of the literature in the area could have interesting implications not only for academics but also for business practice.
Abstract:
This introduction provides an overview of the state of the art in Applications of Natural Language to Information Systems. Specifically, we analyze the need for such technologies to successfully address the new challenges of modern information systems, in which the exploitation of the Web as a main data source for business systems becomes a key requirement. We also discuss the reasons why Human Language Technologies themselves have shifted their focus onto new areas of interest very directly linked to the development of technology for the treatment and understanding of Web 2.0; these new technologies are expected to be the future interfaces for the information systems to come. Moreover, we review the current topics of interest to this research community and present the selection of manuscripts chosen by the program committee of the NLDB 2011 conference as representative cornerstone research works, especially highlighting their contribution to the advancement of such technologies.
Abstract:
Land-surface processes include a broad class of models that operate at a landscape scale. Current modelling approaches tend to be specialised towards one type of process, yet it is the interaction of processes that is increasingly seen as important to obtain a more integrated approach to land management. This paper presents a technique and a tool that may be applied generically to landscape processes. The technique tracks moving interfaces across landscapes for processes such as water flow, biochemical diffusion, and plant dispersal. Its theoretical development applies a Lagrangian approach to motion over a Eulerian grid space by tracking quantities across a landscape as an evolving front. An algorithm for this technique, called the level set method, is implemented in a geographical information system (GIS). It fits the field data model in GIS and is implemented as operators in map algebra. The paper describes an implementation of the level set method in a map algebra programming language, called MapScript, and gives example program scripts for applications in ecology and hydrology.
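As background, the level set method evolves a field φ whose zero contour is the moving front, via ∂φ/∂t + F|∇φ| = 0 for a front speed F. The sketch below is a generic first-order upwind update on a regular grid, assuming F > 0 everywhere and periodic boundaries; it illustrates the technique only and is not the MapScript implementation:

```python
import numpy as np

def level_set_step(phi, F, dx, dt):
    # One explicit upwind update of dphi/dt + F * |grad(phi)| = 0.
    # phi: 2-D array of level set values (front = zero contour)
    # F:   scalar or 2-D array of front speeds (F > 0 expands the front)
    # np.roll gives periodic boundaries -- a simplification for the sketch.
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference, x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference, x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference, y
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference, y
    # Godunov upwind gradient magnitude, valid for F > 0
    grad = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                   np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    return phi - dt * F * grad

# Hypothetical example: a circular front (e.g. a plant dispersal boundary)
# expanding at unit speed across a 100 x 100 landscape.
n = 100
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - n/2)**2 + (y - n/2)**2) - 10.0  # signed distance to a circle
for _ in range(50):
    phi = level_set_step(phi, F=1.0, dx=1.0, dt=0.5)
# Cells inside the front after propagation: phi < 0
```

In map algebra terms, each difference and the combined gradient would be expressed as local and focal operators over raster layers, which is what makes the method a natural fit for a GIS field data model.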
Abstract:
As a means of benchmarking their position and assisting with anticipating an uncertain future, the identification of frameworks of critical information systems (IS) management issues is becoming an increasingly important research task for both academics and industrialists. This paper provides a description and summary of previous work on identifying IS issues frameworks by reviewing 20 research investigations in terms of what they studied and how they were conducted. It also suggests some possible directions and methodologies for future research. The summary and suggestions for further work are applicable to issues-framework research in the IS management field as well as in other business and management areas.
Abstract:
Information systems (IS) managers have become key senior executives responsible for organising the IT resources that deliver support to businesses. Understanding the characteristics of IS managers' employment positions is hence an increasingly important topic in computer personnel research. An investigation in Singapore that included a job advertisement analysis, surveys, and case studies was therefore conducted to examine these aspects. This article presents the findings of the job advertisement analysis concerning what kinds of IS managers the market is seeking and what the basic conditions for such management positions are. The literature in this area asserts that job advertisements represent firms' wishes and the nature of the conditions required of different IS personnel. The results of this analysis therefore reflect a collective market perspective on the changing IS managerial workplace. The results benefit both firms and IS employees in formulating personnel development plans and actions, and raise issues for further research.
Abstract:
Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques in use were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). As current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort, and hence cost, for such projects. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of them, JSD-FPA, is a top-down estimating method based on the existing MkII function point method. The other, JSD-COCOMO, is a sizing technique that sizes a project, in terms of lines of code, from its process structure diagrams and thus provides an input to the traditional COCOMO method.

The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric that indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects.

The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
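The arithmetic skeleton of both routes is simple and worth making explicit. The sketch below assumes hypothetical counts and productivity figures; the MkII calibration and the JSD attribute weights are defined in the thesis itself and are not reproduced here:

```python
def effort_from_size(size_metric: float, past_size: float,
                     past_effort: float) -> float:
    """JSD-FPA route: scale a size metric by observed past productivity."""
    productivity = past_size / past_effort      # size units per person-month
    return size_metric / productivity           # predicted person-months

def cocomo_basic_organic(kdsi: float) -> float:
    """Basic COCOMO, organic mode: effort in person-months from thousands
    of delivered source instructions (Boehm's published coefficients
    a = 2.4, b = 1.05)."""
    return 2.4 * kdsi ** 1.05

# JSD-FPA route: new project sized at 450 units, past project delivered
# 400 units in 36 person-months.
print(effort_from_size(450.0, 400.0, 36.0))     # 40.5 person-months

# JSD-COCOMO route: process-structure levels -> estimated KDSI -> COCOMO.
estimated_kdsi = 32.0                           # hypothetical sizing output
print(cocomo_basic_organic(estimated_kdsi))     # ~91 person-months
```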
Abstract:
This research studies the use of strategic information technology to improve organisational effectiveness. It analyses different academic approaches explaining the nature of information systems and the need organisations feel to develop strategic information systems planning processes to improve organisational effectiveness. It chooses Managerial Cybernetics as the theoretical foundation for the development of a "Strategic Information Systems Planning" framework, and uses this framework to support the analysis of a documented account of the process undergone by the Colombian President's Office in 1990-1992. It argues that analysing the situation through this new framework may shed light on some situations that were previously unclear and have not yet been properly explained through other approaches to strategic information systems planning. The documented history explains the organisational context and strategic postures of the Colombian President's Office and the Colombian public sector at that time, as well as some of the strategic information systems defined and developed. In particular, it analyses a system developed jointly by the President's Office and the National Planning Department for measuring the results of the main national development programmes. It then reviews these situations in the light of the new framework and presents the main findings of the exercise. Finally, it reflects on the whole research exercise, the perceived usefulness of the chosen frameworks and tools in clarifying the real situations analysed, and some open research paths for future researchers interested in the issue.