206 results for Computer software -- Development
Abstract:
The Strolls project was originally devised for a colleague in Finland and a cross-cultural event called AU goes to FI. The core concept is both the re-experiencing and the presentation of the 'everyday' experience of life, rather than the usual cultural icons. The project grew and was presented as a mash-up site with Google Maps (truna aka j.turner & David Browning). The site is now cobwebbed, but some of the participant-made strolls are archived here. The emphasis on the walk and the taking of image stills (as opposed to straightforward video) is based on a notion of partaking of the environment with technology. The process involves a strange and distinct embodiment, as the maker must stop and choose each subsequent shot in order to build up the final animated sequence. The viewer becomes subtly involved in the maker's decisions.
Abstract:
The worldwide installed base of enterprise resource planning (ERP) systems has increased rapidly over the past 10 years, now comprising tens of thousands of installations in large- and medium-sized organizations and millions of licensed users. Like traditional information systems (IS), ERP systems must be maintained and upgraded. It is therefore not surprising that ERP maintenance activities have become the largest budget provision in the IS departments of many ERP-using organizations. Yet there has been limited study of ERP maintenance activities. Are they simply instances of traditional software maintenance activities to which traditional software maintenance research findings can be generalized? Or are they fundamentally different, such that new research, specific to ERP maintenance, is required to help alleviate the ERP maintenance burden? This paper reports a case study of a large organization that implemented ERP (an SAP system) more than three years ago. From the case study and the data collected, we observe the following distinctions of ERP maintenance: (1) the ERP-using organization, in addition to addressing internally originated change-requests, also implements maintenance introduced by the vendor; (2) requests for user support concerning ERP system behavior, function, and training constitute a major part of ERP maintenance activity; and (3) as in the in-house software environment, enhancement is the major maintenance activity in the ERP environment, accounting for almost 64% of the total change-request effort. In light of these and other findings, we ultimately: (1) propose a clear and precise definition of ERP maintenance; (2) conclude that ERP maintenance cannot be sufficiently described by existing software maintenance taxonomies; and (3) propose a benefits-oriented taxonomy that better represents ERP maintenance activities. The three salient dimensions for characterizing requests in the proposed ERP maintenance taxonomy are: (1) who is the maintenance source? (2) why is it important to service the request? and (3) what impact, if any, does implementing the request have on the installed module(s)?
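The three dimensions of the proposed taxonomy lend themselves to a simple classification record. The sketch below is purely illustrative; the class and field names are assumptions, not the authors' notation.

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    """Who raised the request: the using organization or the ERP vendor."""
    INTERNAL = "internal"   # internally originated change-request
    VENDOR = "vendor"       # maintenance introduced by the vendor

@dataclass
class ChangeRequest:
    """One ERP maintenance request, classified along the three dimensions."""
    source: Source         # who: the maintenance source
    benefit: str           # why: the reason it is important to service the request
    impacts_modules: bool  # what: whether implementing it affects installed module(s)
```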
Abstract:
As computational models in fields such as medicine and engineering become more refined, their resource requirements increase. Initially, these needs were met using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility. HPC users are therefore tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems are different, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while the performance of virtual CPUs (vCPUs) is satisfactory, network throughput may lead to difficulties.
Abstract:
Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, using the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of classic and parallel implementations on two distinct problem types.
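To make the comparison concrete, here is a minimal sketch of a generational genetic algorithm whose fitness-evaluation step, the part that parallelises most naturally, can optionally be spread across worker processes. The OneMax objective and all parameters are placeholders, not the problems or settings studied in the paper.

```python
import random
from multiprocessing import Pool

def fitness(individual):
    # Placeholder objective (OneMax): maximise the number of 1-bits.
    return sum(individual)

def evolve(pop_size=100, length=64, generations=50, parallel=False):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    pool = Pool() if parallel else None
    for _ in range(generations):
        # Fitness evaluation is embarrassingly parallel across individuals.
        scores = pool.map(fitness, pop) if parallel else [fitness(i) for i in pop]
        ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
        parents = ranked[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.01:             # point mutation
                pos = random.randrange(length)
                child[pos] = 1 - child[pos]
            children.append(child)
        pop = children
    if pool is not None:
        pool.close()
    return max(pop, key=fitness)
```

Whether the parallel variant pays off depends on how expensive the fitness function is relative to the inter-process communication overhead, which is exactly the a-priori estimation difficulty the abstract points to.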
Abstract:
This paper presents a technique for the automated removal of noise from process execution logs. Noise is the result of data quality issues such as logging errors and manifests itself in the form of infrequent process behavior. The proposed technique generates an abstract representation of an event log as an automaton capturing the directly-follows relations between event labels. This automaton is then pruned of arcs with low relative frequency and used to remove from the log those events that do not fit the automaton, which are identified as outliers. The technique has been extensively evaluated on top of various automated process discovery algorithms, using both artificial logs with different levels of noise and a variety of real-life logs. The results show that the technique significantly improves the quality of the discovered process model in terms of fitness, appropriateness, and simplicity, without negative effects on generalization. Further, the technique scales well to large and complex logs.
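A condensed sketch of the filtering idea as the abstract describes it: count the directly-follows relations between event labels, prune arcs whose relative frequency falls below a threshold, and drop events that no longer fit the pruned automaton. The function name and threshold are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def filter_log(log, threshold=0.05):
    """log: list of traces, each a list of event labels; returns a cleaned log."""
    # 1. Build the directly-follows counts between event labels.
    arcs = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            arcs[(a, b)] += 1
    # 2. Prune arcs with low frequency relative to their source label.
    out_totals = Counter()
    for (a, _), n in arcs.items():
        out_totals[a] += n
    kept = {arc for arc, n in arcs.items() if n / out_totals[arc[0]] >= threshold}
    # 3. Remove events that do not fit the pruned automaton (outliers).
    cleaned = []
    for trace in log:
        new_trace = trace[:1]
        for event in trace[1:]:
            if (new_trace[-1], event) in kept:
                new_trace.append(event)
            # otherwise the event is treated as an outlier and dropped
        cleaned.append(new_trace)
    return cleaned
```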
Abstract:
The output of a differential scanning fluorimetry (DSF) assay is a series of melt curves, which need to be interpreted to get value from the assay. An application that translates raw thermal melt curve data into more easily assimilated knowledge is described. This program, called "Meltdown," conducts four main activities: control checks, curve normalization, outlier rejection, and melt temperature (Tm) estimation. It performs optimally in the presence of triplicate (or higher) sample data. The final output is a report that summarizes the results of a DSF experiment. The goal of Meltdown is not to replace human analysis of the raw fluorescence data, but to provide a meaningful and comprehensive interpretation of the data that makes this useful experimental technique accessible to inexperienced users, while providing a starting point for detailed analyses by more experienced users.
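Tm is conventionally read off a melt curve as the temperature at which fluorescence rises fastest. The sketch below estimates it as the peak of the smoothed first derivative; this is a standard approach offered for illustration, an assumption about how such a tool might work rather than Meltdown's actual code.

```python
import numpy as np

def estimate_tm(temps, fluorescence, window=5):
    """Estimate melt temperature as the peak of dF/dT on a smoothed melt curve.

    temps, fluorescence: 1-D arrays from a single DSF melt curve.
    """
    f = np.asarray(fluorescence, dtype=float)
    # Normalise to [0, 1] so replicate curves are directly comparable.
    f = (f - f.min()) / (f.max() - f.min())
    # Light moving-average smoothing before differentiating.
    kernel = np.ones(window) / window
    f_smooth = np.convolve(f, kernel, mode="same")
    dfdt = np.gradient(f_smooth, temps)
    return temps[np.argmax(dfdt)]
```

With triplicate data, one would run this per replicate and report the mean Tm, flagging replicates whose estimate deviates sharply as outliers.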
Abstract:
Self-authored video, where participants are in control of the creation of their own footage, is a means of creating innovative design material and including all members of a family in design activities. This paper describes our adaptation of this process, called Self-Authored Video Interviews (SAVIs), which we created and prototyped to better understand how families engage with situated technology in the home. We find that the methodology produces unique insights into family dynamics in the home, uncovering assumptions and tensions unlikely to be discovered using more conventional methods. The paper outlines a number of challenges and opportunities associated with the methodology, specifically how to maximise the value of the insights gathered by appealing to children to champion the cause, and how to counter perceptions of the lingering presence of researchers.
Abstract:
Introduction: Extreme heat events (both heat waves and extremely hot days) are increasing in frequency and duration globally and cause more deaths in Australia than any other extreme weather event. Numerous studies have demonstrated a link between extreme heat events and an increased risk of morbidity and death. In this study, the researchers sought to identify whether extreme heat events in the Tasmanian population were associated with any changes in emergency department admissions to the Royal Hobart Hospital (RHH) for the period 2003-2010. Methods: Non-identifiable RHH emergency department data and climate data from the Australian Bureau of Meteorology were obtained for the period 2003-2010. Statistical analyses were conducted using the statistical software 'R', with a distributed lag non-linear model (DLNM) package used to fit a quasi-Poisson generalised linear regression model. Results: The study showed that the relative risk (RR) of admission to the RHH during 2003-2010 was significantly elevated at temperatures above 24 °C, with a lag effect lasting 12 days and the main effect noted one day after the extreme heat event. Discussion: This study demonstrated that extreme heat events have a significant impact on public hospital admissions. Two limitations were identified: admissions data rather than presentations data were used, and further analysis could be done to compare types of admissions and presentations between heat and non-heat events. Conclusion: With the impacts of climate change already being felt in Australia, public health organisations in Tasmania and the rest of Australia need to implement adaptation strategies to enhance resilience and protect the public from the adverse health effects of heat events and climate change.
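The study's model was fitted in R with the dlnm package. For readers who want to see the shape of such an analysis, the sketch below is a deliberately simplified Python analogue: plain lagged temperature terms in a Poisson GLM, with the dispersion estimated afterwards to mimic quasi-Poisson. Only the 12-day lag comes from the abstract; the column names and everything else are assumptions, and a true DLNM additionally models the non-linear temperature response.

```python
import pandas as pd
import statsmodels.api as sm

def fit_lagged_poisson(df, max_lag=12):
    """df: daily data with columns 'admissions' and 'temp' (hypothetical names).

    Fits admissions ~ temperature at lags 0..max_lag, a crude stand-in for
    the quasi-Poisson DLNM used in the study.
    """
    lags = {f"temp_lag{l}": df["temp"].shift(l) for l in range(max_lag + 1)}
    X = sm.add_constant(pd.DataFrame(lags)).dropna()
    y = df.loc[X.index, "admissions"]
    model = sm.GLM(y, X, family=sm.families.Poisson())
    # scale="X2" estimates a dispersion parameter, approximating quasi-Poisson.
    return model.fit(scale="X2")
```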
Abstract:
Flood extent mapping is a basic tool for flood damage assessment, and can be done with digital classification techniques using satellite imagery, including data recorded by radar and optical sensors. However, converting the data into the information we need is not a straightforward task. One of the great challenges in interpreting the data is to separate permanent water bodies from flooded regions, including both fully inundated areas and wet areas where trees and houses are partly covered with water. This paper adopts a decision fusion technique to combine the mapping results from radar data with NDVI data derived from optical data. An improved capacity to distinguish permanent or semi-permanent water bodies from flood-inundated areas has been achieved. The software tools MultiSpec and MATLAB were used.
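As a minimal illustration of the decision-fusion step, the sketch below combines a radar-derived water mask with an NDVI map and a pre-flood permanent-water mask to separate permanent water, open flood, and wet (partly covered) areas. The thresholds, array names, and class codes are placeholders, not values from the paper.

```python
import numpy as np

def fuse_flood_maps(radar_water, ndvi, permanent_water, ndvi_wet_max=0.3):
    """Per-pixel decision fusion of radar and optical classification results.

    radar_water:     boolean array, water detected in the flood-time radar image.
    ndvi:            float array, NDVI derived from the flood-time optical image.
    permanent_water: boolean array, pre-flood water bodies (e.g. archive data).
    Returns an int array: 0 dry, 1 permanent water, 2 open flood, 3 wet area.
    """
    result = np.zeros(radar_water.shape, dtype=int)
    result[permanent_water] = 1
    # Radar sees open flood water; exclude the known permanent water bodies.
    result[radar_water & ~permanent_water] = 2
    # Low NDVI without a radar water return suggests partly covered wet areas.
    result[~radar_water & ~permanent_water & (ndvi < ndvi_wet_max)] = 3
    return result
```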
Abstract:
This article draws on the design and implementation of three mobile learning projects introduced by Flanagan in 2011, 2012, and 2014, engaging a total of 206 participants. The latest of these projects is highlighted in this article. Two other projects provide additional examples of innovative strategies to engage mobile and cloud systems, describing how electronic and mobile technology can help facilitate teaching and learning, assessment for learning and assessment as learning, and support communities of practice. The second section explains the theoretical premise supporting the implementation of technology and promulgates a hermeneutic phenomenological approach. The third section discusses mobility, both in terms of the exploration of wearable technology in the prototypes developed as a result of the projects and in terms of the affordances of mobility within pedagogy. Finally, the quantitative and qualitative methods in place to evaluate m-learning are explained.