959 results for Computer software maintenance
Abstract:
Increasingly, software is no longer developed as a single system, but rather as a smart combination of so-called software services. Each of these provides an independent, specific and relatively small piece of functionality, which is typically accessible through the Internet from internal or external service providers. There are no standards or models that describe the sourcing process for these software-based services (SBS). The authors identify the sourcing requirements for SBS and relate the key characteristics of SBS to the sourcing requirements introduced. Furthermore, this paper compares the sourcing of SBS with related work in the fields of classical procurement, business process outsourcing, and information systems sourcing. Based on the analysis, the authors conclude that the direct adoption of these approaches for SBS is not feasible and that new approaches are required for sourcing SBS.
Abstract:
"This column is distinguished from previous Impact columns in that it concerns the development tightrope between research and commercial take-up and the role of the LGPL in an open source workflow toolkit produced in a university environment. Many ubiquitous systems have followed this route (Apache, BSD Unix, ...), and the lessons this Service Oriented Architecture produces cast yet more light on how software diffuses out to impact us all." (Michiel van Genuchten and Les Hatton) Workflow management systems support the design, execution and analysis of business processes. A workflow management system needs to guarantee that work is conducted at the right time, by the right person or software application, through the execution of a workflow process model. Traditionally, there has been a lack of broad support for a workflow modeling standard. Standardization efforts proposed by the Workflow Management Coalition in the late nineties suffered from limited support for routing constructs. In fact, as later demonstrated by the Workflow Patterns Initiative (www.workflowpatterns.com), a much wider range of constructs is required when modeling realistic workflows in practice. YAWL (Yet Another Workflow Language) is a workflow language that was developed to show that comprehensive support for the workflow patterns is achievable. Soon after its inception in 2002, a prototype system was built to demonstrate that a system could support such a complex language. From that initial prototype, YAWL has grown into a fully-fledged, open source workflow management system and support environment.
Abstract:
Linking real-time schedulability directly to Quality of Control (QoC), the ultimate goal of a control system, this paper proposes a hierarchical feedback QoC management framework for real-time control systems with multiple control tasks, with the Fixed Priority (FP) and Earliest-Deadline-First (EDF) policies as plug-ins. The framework uses a task decomposition model for continuous QoC evaluation even in overload conditions, and then employs heuristic rules to adjust the period of each control task to improve QoC. If the total requested workload exceeds the desired value, global adaptation of control periods is triggered to maintain the workload. A sufficient stability condition is derived for a class of control systems subject to the delay and period switching introduced by the heuristic rules. Examples are given to demonstrate the proposed approach.
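As a rough illustration of the global adaptation step, the sketch below, whose utilization target and scaling rule are assumptions rather than the paper's actual heuristic rules, stretches every control task's period when the requested workload exceeds a desired value, trading some QoC for schedulability.

    # Assumed form of global period adaptation (illustrative only): if the
    # requested CPU utilization exceeds a target, stretch every control
    # task's period so the workload returns to the target.

    def requested_utilization(tasks):
        # Utilization under EDF: sum of execution time / period over all tasks.
        return sum(t["wcet"] / t["period"] for t in tasks)

    def adapt_periods(tasks, target_utilization=0.9):
        u = requested_utilization(tasks)
        if u > target_utilization:
            scale = u / target_utilization
            for t in tasks:
                # A longer period lowers that loop's QoC but keeps the task
                # set schedulable; cap it at an assumed stability limit.
                t["period"] = min(t["period"] * scale, t["max_period"])
        return tasks

    tasks = [{"wcet": 2.0, "period": 4.0, "max_period": 20.0},
             {"wcet": 3.0, "period": 6.0, "max_period": 30.0}]
    adapt_periods(tasks)  # utilization 1.0 > 0.9, so both periods grow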
Abstract:
The aim of this project was to implement a just-in-time hints help system in a real-time strategy (RTS) computer game, delivering information to the user at the time it would be of most benefit. The goal of this help system is to improve the user's learning in terms of rate of learning, retention and avoidance of stagnation. The first stage of the project was to implement a computer game incorporating four different types of skill that the user must acquire: motor, perceptual, declarative-knowledge and strategic. Subsequently, the just-in-time hints help system was incorporated into the game to assess the user's knowledge and deliver hints accordingly. The final stage of the project was to test the effectiveness of this help system in two phases of testing. The goal of this testing was to demonstrate an increase in the user's assessment of the helpfulness of the system from phase one to phase two. The results showed no significant difference in the users' responses between the two phases. However, when the results were analysed with respect to several categories of hints that were identified, it became apparent that patterns in the data were beginning to emerge. The conclusions of the project were that further testing with a larger sample size would be required to provide more reliable results, and that factors such as the user's skill level and different types of goals should be taken into account.
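A plausible shape for such a hint trigger, purely hypothetical and not the project's implementation, is to track recent assessment scores per skill type and deliver a hint only when they show stagnation:

    # Hypothetical sketch: offer a hint for a skill only when the player's
    # recent assessment scores stop improving.
    from collections import deque

    class JustInTimeHints:
        def __init__(self, window=5, min_improvement=0.05):
            self.window = window
            self.min_improvement = min_improvement
            self.scores = {}  # skill type -> recent assessment scores

        def record(self, skill, score):
            history = self.scores.setdefault(skill, deque(maxlen=self.window))
            history.append(score)

        def hint_due(self, skill):
            history = self.scores.get(skill, ())
            if len(history) < self.window:
                return False  # not enough evidence of stagnation yet
            return (history[-1] - history[0]) < self.min_improvement

    hints = JustInTimeHints()
    for score in [0.40, 0.41, 0.42, 0.42, 0.43]:
        hints.record("motor", score)
    print(hints.hint_due("motor"))  # True: improvement is below threshold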
Abstract:
With the wide diffusion of Business Process Management (BPM) automation suites, the possibility of managing process-related risks arises. This paper introduces an innovative framework for process-related risk management and describes a working implementation realized by extending the YAWL system. The framework covers three aspects of risk management: risk monitoring, risk prevention, and risk mitigation. Risk monitoring functionality is provided using a sensor-based architecture, where sensors are defined at design time and used at run time for monitoring purposes. Risk prevention functionality is provided in the form of suggestions about what should be executed, by whom, and how, through the use of decision trees. Finally, risk mitigation functionality is provided as a sequence of remedial actions (e.g. reallocating, skipping, or rolling back a work item) that should be executed to restore the process to a normal situation.
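The sketch below is illustrative only (the YAWL extension's real interfaces differ): a sensor defined at design time is evaluated against the run-time process state, and a chain of remedial actions fires once the measured risk crosses the sensor's threshold.

    # Illustrative sketch, not the actual YAWL extension interfaces.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class RiskSensor:
        name: str
        probe: Callable[[dict], float]  # maps process state to risk in [0, 1]
        threshold: float

    def monitor(state: dict, sensors: List[RiskSensor],
                mitigations: List[Callable[[dict], None]]):
        for sensor in sensors:
            risk = sensor.probe(state)
            if risk >= sensor.threshold:
                for action in mitigations:  # e.g. reallocate, skip, roll back
                    action(state)
                return sensor.name, risk
        return None

    overdue = RiskSensor("overdue_work_item",
                         lambda s: min(s["delay_hours"] / 24.0, 1.0), 0.5)
    reallocate = lambda s: s.update(resource="senior_clerk")
    print(monitor({"delay_hours": 18}, [overdue], [reallocate]))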
Abstract:
Recognizing the impact of reconfiguration on the QoS of running systems is especially necessary for choosing an appropriate approach to the dynamic evolution of mission-critical or non-stop business systems. The rationale is that the impaired QoS caused by inappropriate use of dynamic approaches is unacceptable for such running systems. To predict this impact in advance, the challenge is two-fold. First, a unified benchmark is necessary to expose the QoS problems of existing dynamic approaches. Second, an abstract representation is necessary to provide a basis for modeling and comparing the QoS of existing and new dynamic reconfiguration approaches. Our previous work [8] successfully evaluated the QoS assurance capabilities of existing dynamic approaches and provided guidance on the appropriate use of particular approaches. This paper reinvestigates our evaluations, extending them to concurrent and parallel environments by abstracting hardware and software conditions to design an evaluation context. We report the new evaluation results and conclude with an updated impact analysis and guidance.
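A minimal sketch, under assumed numbers and a simulated service, of the kind of measurement such a benchmark makes: request latency is sampled continuously, and the latencies recorded inside a reconfiguration window are compared against the baseline.

    import time

    def serve(reconfiguring):
        # Simulated service time; reconfiguration inflates it (assumed values).
        time.sleep(0.02 if reconfiguring else 0.002)

    def measure(total_requests=100, window=(40, 60)):
        latencies = []
        for i in range(total_requests):
            start = time.perf_counter()
            serve(window[0] <= i < window[1])
            latencies.append(time.perf_counter() - start)
        normal = latencies[:window[0]]
        during = latencies[window[0]:window[1]]
        return sum(normal) / len(normal), sum(during) / len(during)

    baseline, disrupted = measure()
    print(f"{disrupted / baseline:.1f}x slower during reconfiguration")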
Abstract:
It is acknowledged around the world that many university students struggle with learning to program (McCracken et al., 2001; McGettrick et al., 2005). In this paper, we describe how we have developed a research programme to systematically study and incrementally improve our teaching. We have adopted a research programme with three elements: (1) a theory that provides an organising framework for defining the type of phenomena and data of interest, (2) data on how the class as a whole performs on formative assessment tasks that are framed from within the organising framework, and (3) data from one-on-one think-aloud sessions, to establish why students struggle with some of those in-class formative assessment tasks. We teach introductory computer programming, but this three-element structure of our research is applicable to many areas of engineering education research.
Abstract:
Previously, expected satiety (ES) has been measured using software and two-dimensional pictures presented on a computer screen. In this context, ES is an excellent predictor of self-selected portions, when quantified using similar images and similar software. In the present study we sought to establish the veracity of ES as a predictor of behaviours associated with real foods. Participants (N = 30) used computer software to assess their ES and ideal portion of three familiar foods. A real bowl of one food (pasta and sauce) was then presented and participants self-selected an ideal portion size. They then consumed the portion ad libitum. Additional measures of appetite, expected and actual liking, novelty, and reward, were also taken. Importantly, our screen-based measures of expected satiety and ideal portion size were both significantly related to intake (p < .05). By contrast, measures of liking were relatively poor predictors (p > .05). In addition, consistent with previous studies, the majority (90%) of participants engaged in plate cleaning. Of these, 29.6% consumed more when prompted by the experimenter. Together, these findings further validate the use of screen-based measures to explore determinants of portion-size selection and energy intake in humans.
Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering, since physical experiments are often too time-consuming, expensive or impossible to conduct. The use of complex computer models, or codes, in place of physical experiments leads to the study of computer experiments, which are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, it studies how many runs a computer experiment needs and how those runs should be augmented, and gives attention to the case where the response is a function over time.
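One standard way to choose the input runs, a common technique in this field rather than the thesis's specific method, is a Latin hypercube design, which spreads the runs evenly across every dimension of the input space:

    # Latin hypercube design for a computer experiment (SciPy >= 1.7).
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=0)  # two input variables
    unit_design = sampler.random(n=8)          # 8 runs in the unit square

    # Scale to the code's actual input ranges (illustrative bounds).
    design = qmc.scale(unit_design, l_bounds=[200.0, 1.0],
                       u_bounds=[400.0, 5.0])
    for run in design:
        print(run)  # each row is one input choice for a run of the code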
Abstract:
Customer Relationship Management (CRM) packaged software has become a key contributor to attempts at aligning business and IT strategies in recent years. Throughout the 1990s there was, in many organisations' strategies, a shift from the need to manage transactions toward relationship management. Where Enterprise Resource Planning packages dominated the era of managing transactions, CRM packages lead with regard to relationships. At present, balanced views of CRM packages are scantly presented; discussion instead relies on vendor rhetoric. This paper uses case study research to analyse some of the issues associated with CRM packages. These issues include the limitations of CRM packages, the need for a relationship orientation, and the problems of a dominant management perspective of CRM. It is suggested that these issues could be more readily accommodated by organisational detachment from beliefs in IT as utopia, consideration of prior IS theory and practice, and a more informed approach to CRM package selection.
Abstract:
The project renewed the Breedcow and Dynama software, making it compatible with modern computer operating systems and platforms. Enhancements were also made to the linkages between the individual programs and to their operation. The suite of programs is a critical component of the skill set required to make soundly based plans and production choices in the north Australian beef industry.
Abstract:
Free and Open Source Software (FOSS) has gained increased interest in the computer software industry, but assessing its quality remains a challenge. FOSS development is frequently carried out by globally distributed development teams, and all stages of development are publicly visible. Several product- and process-level quality factors can be measured using the public data. This thesis presents a theoretical background for software quality and metrics and their application in a FOSS environment. The information available from FOSS projects in three information spaces is presented, and a quality model suitable for use in a FOSS context is constructed. The model includes both process and product quality metrics, and takes into account the tools and working methods commonly used in FOSS projects. A subset of the constructed quality model is applied to three FOSS projects, highlighting both theoretical and practical concerns in implementing automatic metric collection and analysis. The experiment shows that useful quality information can be extracted from the vast amount of data available. In particular, projects vary in their growth rate, complexity, modularity and team structure.
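As a small illustration of measuring process-level factors from public data, the following sketch, whose metric choices are common ones rather than the thesis's exact model, derives team-structure indicators from a project's version-control history:

    # Simple process metrics from a FOSS project's public git history.
    import subprocess
    from collections import Counter

    def commit_authors(repo_path):
        out = subprocess.run(["git", "-C", repo_path, "log", "--format=%ae"],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    def team_metrics(repo_path):
        authors = Counter(commit_authors(repo_path))
        total = sum(authors.values())
        top = authors.most_common(1)[0][1] if authors else 0
        return {"commits": total,
                "contributors": len(authors),
                # Share of commits by the busiest contributor: a crude
                # indicator of how concentrated the team structure is.
                "top_author_share": top / total if total else 0.0}

    print(team_metrics("."))  # run inside any git checkout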
Abstract:
A number of companies are trying to migrate large monolithic software systems to Service Oriented Architectures. A common approach is to first identify and describe the desired services (i.e., create a model), and then to locate portions of code within the existing system that implement the described services. In this paper we describe a detailed case study we undertook to match a model to an open-source business application. We describe the systematic methodology we used, the results of the exercise, and several observations that throw light on the nature of this problem. We also suggest and validate heuristics that are likely to be useful in partially automating the process of matching service descriptions to implementations.
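A hypothetical sketch of one such heuristic, with illustrative names rather than the paper's validated heuristics: rank candidate methods in the code base by lexical similarity to a described service operation.

    from difflib import SequenceMatcher

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match_service(operation, methods, top_k=3):
        # Highest lexical similarity first; real matching would also use
        # signatures, call structure, and data types.
        return sorted(methods, key=lambda m: similarity(operation, m),
                      reverse=True)[:top_k]

    methods = ["createPurchaseOrder", "renderInvoicePdf", "submitOrder",
               "getCustomerById"]
    print(match_service("placeOrder", methods))  # submitOrder ranks near the top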
Abstract:
Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs both during an initial coding phase and during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains, ranging from machine learning to scientific computation. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code comprised of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit test mining algorithm and also design a more scalable data-parallel version. We perform an extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch, and (2) for API migration, where client programs utilizing a math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study to evaluate the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques such as web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library. Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.
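The core mining idea can be sketched in a few lines, as an illustration of the concept rather than MATHFINDER's implementation: treat each API method's unit test inputs as probe points and keep the methods whose outputs match the target expression on all of them.

    import math

    def mine_candidates(target, unit_tests, tol=1e-9):
        """unit_tests: list of (api_method, sample_inputs) pairs."""
        matches = []
        for method, inputs in unit_tests:
            try:
                if all(abs(method(x) - target(x)) <= tol for x in inputs):
                    matches.append(method.__name__)
            except (ValueError, OverflowError):
                continue  # method not applicable on these inputs
        return matches

    def sigmoid_via_tanh(x):
        # Uses the identity sigmoid(x) = (1 + tanh(x / 2)) / 2.
        return 0.5 * (1.0 + math.tanh(x / 2.0))

    target = lambda x: 1.0 / (1.0 + math.exp(-x))  # expression to implement
    tests = [(math.tanh, [0.0, 1.0, -2.0]),
             (sigmoid_via_tanh, [0.0, 1.0, -2.0])]
    print(mine_candidates(target, tests))  # ['sigmoid_via_tanh']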