876 results for Computer software - Development
Abstract:
Network connectivity offers the potential for a group of musicians to play together over the network. This paper describes a trans-Atlantic networked musical livecoding performance between Andrew Sorensen in Germany (at the Schloss Dagstuhl seminar on Collaboration and Learning through Live Coding) and Ben Swift in San Jose (at VL/HCC) in September 2013, and the infrastructure developed to enable this performance.
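The paper details the authors' actual infrastructure, which is not reproduced here. Purely as an illustration of the general idea, the sketch below shows one minimal way two performers could exchange livecoded expressions over TCP; the host, port, and the expression itself are hypothetical.

```python
# Minimal, illustrative sketch only -- NOT the paper's actual infrastructure:
# one way two performers could exchange livecoded expressions over TCP.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 7099  # hypothetical; a real peer would be remote

def listen_for_code() -> None:
    """Accept an incoming code string and hand it to the local sound engine."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        code = conn.recv(4096).decode("utf-8")
        print(f"received remote edit: {code}")  # evaluate/schedule it here

def send_code(code: str) -> None:
    """Push a locally evaluated expression to the remote peer."""
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(code.encode("utf-8"))

threading.Thread(target=listen_for_code, daemon=True).start()
time.sleep(0.1)                             # give the listener time to bind
send_code("(play-note (now) piano 60 80)")  # hypothetical expression
time.sleep(0.1)                             # let the listener print
```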
Abstract:
The Strolls project was originally devised for a colleague in Finland and a cross-cultural event called AU goes to FI; the core concept is the re-experience and presentation of the 'everyday' experience of life rather than the usual cultural icons. The project grew and was presented as a mash-up site built on Google Maps (truna aka j.turner & David Browning). The site is now cobwebbed, but some of the participant-made strolls are archived here. The emphasis on the walk and the taking of image stills (as opposed to straightforward video) is based on a notion of partaking of the environment with technology. The process involves a strange and distinct embodiment, as the maker must stop and choose each subsequent shot in order to build up the final animated sequence. The viewer becomes subtly involved in the maker's decisions.
Abstract:
The worldwide installed base of enterprise resource planning (ERP) systems has increased rapidly over the past 10 years, now comprising tens of thousands of installations in large- and medium-sized organizations and millions of licensed users. Similar to traditional information systems (IS), ERP systems must be maintained and upgraded. It is therefore not surprising that ERP maintenance activities have become the largest budget provision in the IS departments of many ERP-using organizations. Yet, there has been limited study of ERP maintenance activities. Are they simply instances of traditional software maintenance activities to which traditional software maintenance research findings can be generalized? Or are they fundamentally different, such that new research, specific to ERP maintenance, is required to help alleviate the ERP maintenance burden? This paper reports a case study of a large organization that implemented ERP (an SAP system) more than three years ago. From the case study and the data collected, we observe the following distinctions of ERP maintenance: (1) the ERP-using organization, in addition to addressing internally originated change-requests, also implements maintenance introduced by the vendor; (2) requests for user support concerning ERP system behavior, function and training constitute a major part of ERP maintenance activity; and (3) as in the in-house software environment, enhancement is the major maintenance activity in the ERP environment, accounting for almost 64% of the total change-request effort. In light of these and other findings, we ultimately: (1) propose a clear and precise definition of ERP maintenance; (2) conclude that ERP maintenance cannot be sufficiently described by existing software maintenance taxonomies; and (3) propose a benefits-oriented taxonomy that better represents ERP maintenance activities. The three salient dimensions for characterizing requests in the proposed taxonomy are: (1) who: the source of the maintenance request; (2) why: the importance of servicing the request; and (3) what: the impact, if any, of implementing the request on the installed module(s).
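The three taxonomy dimensions lend themselves to a simple classification record. The sketch below is an illustrative encoding only; the field and category names are assumptions, not taken from the paper.

```python
# Illustrative sketch: encoding the paper's three taxonomy dimensions
# (who/source, why/rationale, what/module impact) as a record for
# classifying change requests. All names here are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    INTERNAL_USER = "internally originated change-request"
    VENDOR = "vendor-introduced maintenance"

class Rationale(Enum):
    CORRECTIVE = "fix a defect"
    ENHANCEMENT = "add or extend function"  # ~64% of effort in the case study
    USER_SUPPORT = "behavior/function/training query"

@dataclass
class ChangeRequest:
    request_id: str
    source: Source                    # who raised it?
    rationale: Rationale              # why service it?
    impacts_installed_modules: bool   # what: does it touch installed module(s)?

req = ChangeRequest("CR-001", Source.VENDOR, Rationale.ENHANCEMENT, True)
print(req)
```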
Abstract:
As computational models in fields such as medicine and engineering become more refined, resource requirements increase. In the first instance, these needs have been satisfied using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility, so HPC users are tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems differ, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while virtual CPU (VCPU) performance is satisfactory, network throughput may lead to difficulties.
Abstract:
Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow ever larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, using the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of classic and parallel implementations on two distinct problem types.
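As an illustration of the comparison described, the sketch below runs the same toy generational genetic algorithm with fitness evaluation performed either serially or through a process pool. The problem (OneMax), the operators, and the parameters are placeholders, not the paper's actual benchmarks.

```python
# Illustrative classic-vs-parallel GA: identical algorithm, with fitness
# evaluation optionally farmed out to a multiprocessing pool.
import random
from multiprocessing import Pool

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 60, 50

def fitness(genome):
    return sum(genome)  # toy OneMax objective; stands in for a costly evaluation

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # one-point crossover
    return a[:cut] + b[cut:]

def evolve(parallel=False):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    pool = Pool() if parallel else None
    for _ in range(GENERATIONS):
        scores = pool.map(fitness, pop) if pool else [fitness(g) for g in pop]
        ranked = [g for _, g in sorted(zip(scores, pop), key=lambda t: -t[0])]
        elite = ranked[: POP_SIZE // 2]  # truncation selection
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(POP_SIZE - len(elite))]
    if pool:
        pool.close()
    return max(map(fitness, pop))

if __name__ == "__main__":
    print("best fitness:", evolve(parallel=True))
```

Whether the pooled version wins depends on how expensive a single fitness call is relative to the pickling overhead, which is exactly the a-priori uncertainty the abstract points to.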
Abstract:
This paper presents a technique for the automated removal of noise from process execution logs. Noise is the result of data quality issues such as logging errors and manifests itself in the form of infrequent process behavior. The proposed technique generates an abstract representation of an event log as an automaton capturing the direct-follows relations between event labels. This automaton is then pruned of arcs with low relative frequency and used to remove from the log those events not fitting the automaton, which are identified as outliers. The technique has been extensively evaluated on top of various automated process discovery algorithms, using both artificial logs with different levels of noise and a variety of real-life logs. The results show that the technique significantly improves the quality of the discovered process model along the dimensions of fitness, appropriateness and simplicity, without negative effects on generalization. Further, the technique scales well to large and complex logs.
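A minimal sketch of the core idea, under the simplifying assumption of a single relative-frequency threshold per source label (the paper's actual technique is more elaborate): build the direct-follows automaton, prune low-frequency arcs, and drop events whose arcs were pruned.

```python
# Sketch of direct-follows-based noise filtering; threshold and log are toy values.
from collections import Counter

def filter_log(log, threshold=0.4):
    # 1. Count direct-follows relations, with artificial start/end markers.
    arcs = Counter()
    for trace in log:
        path = ["<start>"] + trace + ["<end>"]
        for a, b in zip(path, path[1:]):
            arcs[(a, b)] += 1
    # 2. Keep only arcs whose relative frequency (per source label) is high enough.
    out_totals = Counter()
    for (a, _), n in arcs.items():
        out_totals[a] += n
    kept = {arc for arc, n in arcs.items() if n / out_totals[arc[0]] >= threshold}
    # 3. Drop events whose arc from the previous kept event was pruned (outliers).
    filtered = []
    for trace in log:
        cleaned, prev = [], "<start>"
        for event in trace:
            if (prev, event) in kept:
                cleaned.append(event)
                prev = event
        filtered.append(cleaned)
    return filtered

log = [["a", "b", "c"], ["a", "b", "c"], ["a", "x", "b", "c"]]  # "x" is noise
print(filter_log(log))  # -> [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']]
```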
Abstract:
The output of a differential scanning fluorimetry (DSF) assay is a series of melt curves, which need to be interpreted to get value from the assay. An application that translates raw thermal melt curve data into more easily assimilated knowledge is described. This program, called “Meltdown,” conducts four main activities—control checks, curve normalization, outlier rejection, and melt temperature (Tm) estimation—and performs optimally in the presence of triplicate (or higher) sample data. The final output is a report that summarizes the results of a DSF experiment. The goal of Meltdown is not to replace human analysis of the raw fluorescence data but to provide a meaningful and comprehensive interpretation of the data to make this useful experimental technique accessible to inexperienced users, as well as providing a starting point for detailed analyses by more experienced users.
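Meltdown's actual algorithms are not reproduced here, but a common simple approach to one of its four activities, Tm estimation, is to take the temperature at which the first derivative of the normalized melt curve peaks. The sketch below uses synthetic data.

```python
# Illustrative Tm estimation from a melt curve (not Meltdown's actual method):
# Tm ~ temperature of the steepest rise of the normalized fluorescence signal.
import numpy as np

def normalize(fluorescence):
    """Scale a melt curve to the [0, 1] range."""
    f = np.asarray(fluorescence, dtype=float)
    return (f - f.min()) / (f.max() - f.min())

def estimate_tm(temperatures, fluorescence):
    t = np.asarray(temperatures, dtype=float)
    dfdt = np.gradient(normalize(fluorescence), t)  # numerical first derivative
    return t[np.argmax(dfdt)]

# Toy sigmoidal melt curve with a transition near 55 degrees C.
temps = np.linspace(25, 95, 141)
curve = 1.0 / (1.0 + np.exp(-(temps - 55.0) / 2.0))
print(estimate_tm(temps, curve))  # ~55.0
```

With triplicate samples, as the abstract recommends, the same estimate can be made per replicate and checked for agreement before reporting a Tm.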
Abstract:
Self-authored video, where participants are in control of the creation of their own footage, is a means of creating innovative design material and including all members of a family in design activities. This paper describes our adaptation of this process, Self-Authored Video Interviews (SAVIs), which we created and prototyped to better understand how families engage with situated technology in the home. We find that the methodology produces unique insights into family dynamics in the home, uncovering assumptions and tensions unlikely to be discovered using more conventional methods. The paper outlines a number of challenges and opportunities associated with the methodology; specifically, maximising the value of the insights gathered by appealing to children to champion the cause, and countering perceptions of the lingering presence of researchers.
Abstract:
To remain competitive, many agricultural systems are now being run along business lines. Systems methodologies are being incorporated, and here evolutionary computation is a valuable tool for identifying more profitable or sustainable solutions. However, agricultural models typically pose some of the more challenging problems for optimisation. This chapter outlines these problems and then presents a series of three case studies demonstrating how they can be overcome in practice. Firstly, increasingly complex models of Australian livestock enterprises show that evolutionary computation is the only viable optimisation method for these large and difficult problems; ongoing research is taking a notably efficient and robust variant, differential evolution, out into real-world systems. Next, models of cropping systems in Australia demonstrate the challenge of dealing with competing objectives, namely maximising farm profit whilst minimising resource degradation. Pareto methods are used to illustrate this trade-off, and the results have proved most useful for farm managers in this industry. Finally, land-use planning in the Netherlands demonstrates the size and spatial complexity of real-world problems. Here, GIS-based optimisation techniques are integrated with Pareto methods, producing better solutions that were acceptable to the competing organizations. These three studies all show that evolutionary computation remains the only feasible method for the optimisation of large, complex agricultural problems. An extra benefit is that the resultant population of candidate solutions illustrates trade-offs, leading to more informed discussions and better education of industry decision-makers.
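As a minimal illustration of the differential evolution variant named in the first case study, the sketch below implements the classic DE/rand/1/bin scheme on a toy objective; the objective, bounds, and control parameters stand in for the real farm-system models.

```python
# Classic DE/rand/1/bin on a toy sphere objective (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
DIM, NP, F, CR, GENS = 5, 30, 0.8, 0.9, 200

def objective(x):
    return float(np.sum(x ** 2))  # stand-in for an expensive farm-system model

pop = rng.uniform(-5, 5, size=(NP, DIM))
scores = np.array([objective(x) for x in pop])

for _ in range(GENS):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                 # differential mutation
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True          # guarantee at least one gene
        trial = np.where(cross, mutant, pop[i])  # binomial crossover
        if (s := objective(trial)) <= scores[i]: # greedy one-to-one selection
            pop[i], scores[i] = trial, s

print("best objective value:", scores.min())
```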
Abstract:
Genetic mark–recapture requires efficient methods of uniquely identifying individuals. 'Shadows' (individuals with the same genotype at the selected loci) become more likely with increasing sample size, and bias harvest rate estimates. Finding loci is costly, but better loci reduce analysis costs and improve power. Optimal microsatellite panels minimize shadows, but panel design is a complex optimization process. locuseater and shadowboxer permit power and cost analysis of this process and automate some aspects, by simulating the entire experiment from panel design to harvest rate estimation.
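locuseater and shadowboxer work by simulating the entire experiment, which is not reproduced here. As a rough illustration of why shadows matter, the sketch below uses the standard probability-of-identity formula to estimate the expected number of shadow pairs in a sample; the panel and sample size are hypothetical.

```python
# Back-of-envelope shadow estimate (not locuseater/shadowboxer's computation):
# probability of identity gives the chance two unrelated individuals share a
# multilocus genotype; scale by the number of pairs in the sample.
from math import comb

def pi_locus(freqs):
    """Probability of identity at one locus (unrelated individuals, HWE)."""
    s2 = sum(p ** 2 for p in freqs)
    s4 = sum(p ** 4 for p in freqs)
    return 2 * s2 ** 2 - s4

def expected_shadow_pairs(panel, n):
    """panel: one allele-frequency list per locus; n: sample size."""
    pi = 1.0
    for freqs in panel:
        pi *= pi_locus(freqs)  # loci assumed independent
    return comb(n, 2) * pi

# Hypothetical 6-locus panel, each locus with 8 equally frequent alleles.
panel = [[1 / 8] * 8 for _ in range(6)]
print(expected_shadow_pairs(panel, n=500))  # small here; grows with n**2
```

The quadratic growth in the number of pairs is the abstract's point that shadows become more likely with increasing sample size.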