906 results for software quality
Abstract:
Purpose: The aim was to assess the effects of a Tai Chi based program on health-related quality of life (HR-QOL) in people with elevated blood glucose or diabetes who were not on medication for glucose control. Method: 41 participants were randomly allocated to either a Tai Chi intervention group (N = 20) or a usual medical care control group (N = 21). The Tai Chi intervention comprised three 1.5-hour supervised, group-based training sessions per week for 12 weeks. Indicators of HR-QOL were assessed by self-report survey immediately before and after the intervention. Results: There were significant improvements in favour of the Tai Chi group for the SF-36 subscales of physical functioning (mean difference = 5.46, 95% CI = 1.35-9.57, P < 0.05), role physical (mean difference = 18.60, 95% CI = 2.16-35.05, P < 0.05), bodily pain (mean difference = 9.88, 95% CI = 2.06-17.69, P < 0.05) and vitality (mean difference = 9.96, 95% CI = 0.77-19.15, P < 0.05). Conclusions: The findings show that this Tai Chi program improved indicators of HR-QOL, including physical functioning, role physical, bodily pain and vitality, in people with elevated blood glucose or diabetes who were not on diabetes medication.
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, with particular attention to the hardware resource requirements for given problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable-quality solutions for the TSP, thereby permitting a relatively resource-efficient hardware implementation on field-programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality; the results show that a GA allowed to run for a large number of generations with a smaller population can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of the hardware resource requirements for memory and data-flow operations.
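As an illustration of the population-size trade-off investigated above, a minimal GA for the TSP might look as follows. This is a sketch only: the representation, operators and parameters are illustrative assumptions, not the authors' implementation.

```cpp
// Minimal GA for the TSP: tours are permutations of city indices,
// fitness is total tour length (lower is better). Illustrates running a
// small population for many generations.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

using Tour = std::vector<int>;

double tourLength(const Tour& t, const std::vector<std::vector<double>>& d) {
    double len = 0.0;
    for (size_t i = 0; i < t.size(); ++i)
        len += d[t[i]][t[(i + 1) % t.size()]];
    return len;
}

int main() {
    const int nCities = 48, popSize = 16, generations = 20000;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coord(0.0, 100.0);

    // Random city coordinates and a symmetric distance matrix.
    std::vector<double> x(nCities), y(nCities);
    for (int i = 0; i < nCities; ++i) { x[i] = coord(rng); y[i] = coord(rng); }
    std::vector<std::vector<double>> d(nCities, std::vector<double>(nCities));
    for (int i = 0; i < nCities; ++i)
        for (int j = 0; j < nCities; ++j)
            d[i][j] = std::hypot(x[i] - x[j], y[i] - y[j]);

    // Small initial population of random tours.
    std::vector<Tour> pop(popSize);
    for (auto& t : pop) {
        t.resize(nCities);
        std::iota(t.begin(), t.end(), 0);
        std::shuffle(t.begin(), t.end(), rng);
    }

    std::uniform_int_distribution<int> pick(0, popSize - 1), city(0, nCities - 1);
    for (int g = 0; g < generations; ++g) {
        // Binary tournament selection, then swap mutation.
        const Tour& a = pop[pick(rng)];
        const Tour& b = pop[pick(rng)];
        Tour child = tourLength(a, d) < tourLength(b, d) ? a : b;
        std::swap(child[city(rng)], child[city(rng)]);
        // Steady-state replacement: the child displaces the worst tour
        // only if it improves on it.
        auto worst = std::max_element(pop.begin(), pop.end(),
            [&](const Tour& p, const Tour& q) {
                return tourLength(p, d) < tourLength(q, d); });
        if (tourLength(child, d) < tourLength(*worst, d)) *worst = child;
    }

    auto best = std::min_element(pop.begin(), pop.end(),
        [&](const Tour& p, const Tour& q) {
            return tourLength(p, d) < tourLength(q, d); });
    std::cout << "Best tour length: " << tourLength(*best, d) << "\n";
}
```

Holding popSize small while raising generations, as above, mirrors the trade-off the software experiments probe; on an FPGA the small population is what keeps on-chip memory requirements modest.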
Abstract:
This article proposes offence-specific guidelines for how prosecutorial discretion should be exercised in cases of voluntary euthanasia and assisted suicide. Similar guidelines have been produced in England and Wales, but we consider them deficient in a number of respects, including that they lack a set of coherent guiding principles. In light of these concerns, we outline an approach to constructing alternative guidelines that begins with identifying three guiding principles we argue are appropriate for this purpose: respect for autonomy, the need for high-quality prosecutorial decision-making, and the importance of public confidence in that decision-making.
Abstract:
The medical records of 273 patients aged 75 years and older were reviewed to evaluate the quality of emergency department (ED) care through the use of quality indicators. One hundred fifty records contained evidence of an attempt to carry out a cognitive assessment. Documented evidence of cognitive impairment (CI) was reported in 54 cases. Of these patients, 30 had no documented evidence of an acute change in cognitive function from baseline; of 26 patients discharged home with preexisting CI (i.e., no acute change from baseline), 15 had no documented evidence of previous consideration of this issue by a health care provider; and 12 of 21 discharged patients who screened positive for cognitive issues for the first time were not referred for outpatient evaluation. These findings suggest that the majority of older adults in the ED are not receiving a formal cognitive assessment, and that more than half of those with CI do not receive care consistent with the quality indicators for geriatric emergency care. Recommendations for improvement are discussed.
Abstract:
The R statistical environment and language has demonstrated particular strengths for interactive development of statistical algorithms, as well as data modelling and visualisation. Its current implementation has an interpreter at its core, which may result in a performance penalty compared with directly executing user algorithms in the native machine code of the host CPU. In contrast, the C++ language has no built-in visualisation capabilities, handling of linear algebra or even basic statistical algorithms; however, user programs are converted to high-performance machine code ahead of execution. A new method avoids possible speed penalties in R by using the Rcpp extension package in conjunction with the Armadillo C++ matrix library. In addition to the inherent performance advantages of compiled code, Armadillo provides an easy-to-use template-based meta-programming framework, allowing the automatic pooling of several linear algebra operations into one, which in turn can lead to further speedups. With the aid of Rcpp and Armadillo, conversion of linear-algebra-centred algorithms from R to C++ becomes straightforward. The converted algorithms retain their overall structure and readability, while maintaining a bidirectional link with the host R environment. Empirical timing comparisons of R and C++ implementations of a Kalman filtering algorithm indicate a speedup of several orders of magnitude.
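The workflow described above can be sketched in a few lines. The following minimal example (illustrative, not code from the paper) exposes an Armadillo-based least-squares solver to R through Rcpp attributes:

```cpp
// ols_coef.cpp -- a least-squares fit in C++ via Armadillo, callable from R.
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

// [[Rcpp::export]]
arma::vec ols_coef(const arma::mat& X, const arma::vec& y) {
    // arma::solve returns the least-squares solution for a non-square X;
    // Armadillo's expression templates fuse composite expressions so that
    // chained linear algebra operations avoid unnecessary temporaries.
    return arma::solve(X, y);
}
```

Compiled from an R session via Rcpp::sourceCpp("ols_coef.cpp"), the function becomes directly callable on R matrices and vectors, preserving the bidirectional link with the host environment that the abstract describes.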
Abstract:
This case study exemplifies a ‘writing movement’ currently occurring in various parts of Australia through the support of social media. A concept emerging from the café scene in San Francisco, ‘Shut Up and Write!’ is a meetup group that brings writers together at a specific time and place to write side by side, thus making writing practice social. This concept has been applied to the academic environment, and our case study explores the positive outcomes in two locations: RMIT University and QUT. We believe that this informal learning practice can be implemented to assist research students in developing academic skills. Research students spend the majority of their time outside of formal learning environments. Doctoral candidates enter their degree with a range of experience, knowledge and needs, making it difficult to provide writing assistance in a structured manner. Using a less structured approach to provide writing assistance has been trialled with promising results (Boud, Cohen, & Sampson, 2001; Stracke, 2010; Devenish et al., 2009). Although semi-structured approaches have been developed and examined, informal learning opportunities have received minimal attention. The primary difference between Shut Up and Write! and other writing practices is that individuals do not engage in any structured activity and do not share the outcomes of the writing. The purpose of Shut Up and Write! is to transform writing practice from a solitary experience to a social one. Shut Up and Write! typically takes place outside of formal learning environments, in public spaces such as a café. The structure of the sessions is simple: participants meet at a specific time and place, chat for a few minutes, then they shut up and write for a predetermined amount of time. Critical to the success of the sessions is that there is no critiquing of the writing, and no competition or formal exercises. Our case study examines the experience of two meetup groups at RMIT University and QUT through narrative accounts from participants. These accounts reveal that participants have learned:
• writing/productivity techniques;
• social/cloud software;
• aspects of the PhD; and
• ‘mundane’ dimensions of academic practice.
In addition, activities such as Shut Up and Write! promote peer-to-peer bonding, knowledge exchange, and informal learning within the higher degree research experience. This case study extends the initial work presented by the authors in collaboration with Dr Inger Mewburn at QPR2012 – Quality in Postgraduate Research Conference, 2012.
Abstract:
In this paper, the author describes recent developments in the assessment of research activity and publication in Australia. Of particular interest to readers will be the move to rank academic journals. Educational Philosophy and Theory (EPAT) received the highest possible ranking; however, the process is far from complete. Some implications for the field, for this journal and, particularly, for the educational foundations are discussed.
Abstract:
Student performance on examinations is influenced by the level of difficulty of the questions. It therefore seems reasonable to propose that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with the assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
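One way to picture the classification scheme is as a per-question record of the six measures. The sketch below is hypothetical: the field names and integer coding are assumptions, not the paper's actual instrument.

```cpp
#include <string>

// Illustrative record for one exam question under the six-measure scheme.
// Each measure is assumed to be coded on an ordinal scale by a rater.
struct QuestionComplexity {
    std::string questionId;
    int externalDomainReferences; // reliance on knowledge outside the course
    int explicitness;             // how directly the task is stated
    int linguisticComplexity;     // reading difficulty of the question text
    int conceptualComplexity;     // number and depth of programming concepts
    int codeLength;               // lines of code in the question and/or answer
    int bloomLevel;               // intellectual complexity (Bloom taxonomy level)
};
```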
Abstract:
Whole-body computer control interfaces present new opportunities to engage children with games for learning. Stomp is a suite of educational games that use such a technology, allowing young children to use their whole body to interact with a digital environment projected on the floor. To maximise the effectiveness of this technology, tenets of self-determination theory (SDT) are applied to the design of Stomp experiences. By meeting user needs for competence, autonomy, and relatedness, our aim is to increase children's engagement with the Stomp learning platform. Analysis of Stomp's design suggests that these tenets are met. Observations from a case study of Stomp being used by young children show that they were highly engaged and motivated by it. This analysis demonstrates that continued application of SDT to Stomp will further enhance user engagement. It is also suggested that SDT, when applied more widely to other whole-body multi-user interfaces, could instil similar positive effects.
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms. Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
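The plan-recombination step described above reduces, in essence, to a weighted sum: each beam's Monte Carlo dose grid is scaled by its monitor units and accumulated. The following is a schematic sketch only; MCDTK's actual handling of BEAMnrc/DOSXYZnrc output involves additional normalisation and is written in Java rather than C++.

```cpp
#include <stdexcept>
#include <vector>

// Schematic: combine per-beam dose grids into a single plan dose,
// weighting each beam by its monitor units from the exported plan.
// Each grid is a flattened 3D dose array (assumed here in dose per MU).
std::vector<double> combinePlanDose(
        const std::vector<std::vector<double>>& beamDoses,
        const std::vector<double>& monitorUnits) {
    if (beamDoses.empty() || beamDoses.size() != monitorUnits.size())
        throw std::invalid_argument("one MU value per beam dose grid required");
    std::vector<double> planDose(beamDoses[0].size(), 0.0);
    for (size_t b = 0; b < beamDoses.size(); ++b)
        for (size_t v = 0; v < planDose.size(); ++v)
            planDose[v] += monitorUnits[b] * beamDoses[b][v];  // voxel-wise sum
    return planDose;
}
```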
Abstract:
Online travel reviews are emerging as a powerful source of information affecting tourists' pre-purchase evaluation of a hotel organization. This trend has highlighted the need for a greater understanding of the impact of online reviews on consumer attitudes and behaviors. In view of this need, we investigate the influence of online hotel reviews on consumers' attributions of service quality and firms' ability to control service delivery. An experimental design was used to examine the effects of four independent variables: framing; valence; ratings; and target. The results suggest that in reviews evaluating a hotel, remarks related to core services are more likely to induce positive service quality attributions. Recent reviews affect customers' attributions of controllability for service delivery, with negative reviews exerting an unfavorable influence on consumers' perceptions. The findings highlight the importance of managing the core service and the need for managers to act promptly in addressing customer service problems.
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact Research Track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve this, the IS-Impact Research Track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia differs significantly from the Australian national culture, making Saudi Arabia an interesting context for testing the external validity of the IS-Impact Model. The study revisits the IS-Impact Model from the ground up. Rather than assuming the existing instrument is valid in the new context, or simply assessing its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of the existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study: "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?" The content analysis used to analyze the Identification Survey data indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study: "Is the Extended IS-Impact Model valid as a hierarchical multidimensional formative measurement model?"
The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve this, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed employing the Partial Least Squares (PLS) approach. The CNQ model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct. However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was also assessed against two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system; therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ model provides a framework for perceptually measuring Computer Network Quality from multiple perspectives, with an easy-to-understand, easy-to-use, and economical survey instrument.
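As a quick consistency check on the reported figures (assuming standardised PLS estimates and a single structural predictor for each endogenous construct, in which case explained variance equals the squared path coefficient):

```latex
R^2_{\mathrm{IS\text{-}Impact}} \approx \beta_{\mathrm{CNQ} \to \mathrm{IS\text{-}Impact}}^{2} = 0.426^2 \approx 0.18,
\qquad
R^2_{\mathrm{Satisfaction}} \approx \beta_{\mathrm{IS\text{-}Impact} \to \mathrm{Satisfaction}}^{2} = 0.744^2 \approx 0.55
```

Both squared coefficients match the reported 18% and 55% variance-explained figures.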
Abstract:
Smartphones are steadily gaining popularity, creating new application areas as their capabilities increase in terms of computational power, sensors and communication. These emerging features of mobile devices also create opportunities for new threats. Android is one of the newer operating systems targeting smartphones. While based on a Linux kernel, Android has unique properties and specific limitations due to its mobile nature, which make malware attacks harder to detect and react to using conventional techniques. In this paper, we propose an Android Application Sandbox (AASandbox) which is able to perform both static and dynamic analysis on Android programs to automatically detect suspicious applications. Static analysis scans the software for malicious patterns without installing it. Dynamic analysis executes the application in a fully isolated environment, i.e. a sandbox, which intervenes in and logs low-level interactions with the system for further analysis. Both the sandbox and the detection algorithms can be deployed in the cloud, providing fast and distributed detection of suspicious software in a mobile software store akin to Google's Android Market. Additionally, AASandbox might be used to improve the efficiency of classical anti-virus applications available for the Android operating system.
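The static half of such a pipeline can be caricatured as a pattern scan over an application's extracted strings. The sketch below is a toy: the suspicious-pattern list is hypothetical, and AASandbox's actual detection logic is considerably richer.

```cpp
#include <array>
#include <fstream>
#include <iostream>
#include <string>

// Toy static scanner: flag an app whose extracted strings mention APIs
// and paths often abused by Android malware. Pattern list is illustrative.
int main(int argc, char** argv) {
    if (argc != 2) { std::cerr << "usage: scan <strings-file>\n"; return 1; }
    const std::array<std::string, 4> suspicious = {
        "Runtime.exec", "sendTextMessage", "DexClassLoader", "/system/bin/su"};
    std::ifstream in(argv[1]);
    std::string line;
    int hits = 0;
    while (std::getline(in, line))
        for (const auto& p : suspicious)
            if (line.find(p) != std::string::npos) {
                std::cout << "suspicious pattern: " << p << "\n";
                ++hits;
            }
    // In a two-stage pipeline, static hits would escalate the app to the
    // dynamic (sandboxed) analysis stage rather than give a verdict.
    std::cout << (hits ? "escalate to dynamic analysis\n" : "no static hits\n");
    return 0;
}
```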
Abstract:
This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis, where risks are detected during process execution. The approach has been realized via a distributed, sensor-based architecture. At design time, sensors are defined to specify risk conditions which, when fulfilled, are likely indicators that negative process states (faults) will eventuate. Both historical and current process execution data can be used to compose such conditions. At run time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a business process management system to present the results to process administrators, who may take remedial actions. The proposed architecture has been implemented on top of the YAWL system and evaluated through performance measurements and usability tests with students. The results show that risk conditions can be computed efficiently and that the approach is perceived as useful by the participants in the tests.
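The sensor-to-manager interaction might be sketched as follows. Names are illustrative only, and the sketch uses a simple polling loop rather than the independent push-based notification of the actual YAWL-based implementation.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Each sensor evaluates a risk condition over process execution data
// (historical and/or current) and reports when the condition holds.
struct RiskSensor {
    std::string name;
    std::function<bool()> condition;
};

class SensorManager {
public:
    void poll(const std::vector<RiskSensor>& sensors) {
        for (const auto& s : sensors)
            if (s.condition())
                notifyMonitor(s.name);  // surface the risk to administrators
    }
private:
    void notifyMonitor(const std::string& sensor) {
        std::cout << "risk detected by sensor: " << sensor << "\n";
    }
};

int main() {
    double caseDurationHours = 30.0;  // a current process execution datum
    std::vector<RiskSensor> sensors = {
        // Hypothetical condition: a case running past a 24-hour threshold
        // is a likely indicator of an impending fault.
        {"overtime-risk", [&] { return caseDurationHours > 24.0; }},
    };
    SensorManager manager;
    manager.poll(sensors);  // prints: risk detected by sensor: overtime-risk
}
```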