876 results for software quality assurance


Relevance:

90.00%

Publisher:

Abstract:

This study explores academic perceptions of organizational capability and culture following a project to develop a quality assurance of learning program in a business school. In the project, a community of practice structure was established to include academics in the development of an embedded, direct assurance of learning program affecting more than 5000 undergraduate students and 250 academics from nine different disciplines across four discipline-based departments. The primary outcome from the newly developed and implemented assurance of learning program was the five year accreditation of the business school’s programs by two international accrediting bodies, EQUIS and AACSB. This study explores a different outcome, namely perceptions of organizational culture and individual capabilities as academics worked together in teaching teams and communities. The study uses a survey of, and interviews with, the academics involved, conducted through a retrospective panel design consisting of an experimental group and a control group. Results offer insights into communities of practice as a means of encouraging new individual and organizational capability and strategic culture adaptation.

Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
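The abstract does not give the metric definitions themselves. As an illustration only (the names and the metric below are hypothetical, not those defined in the paper), a minimal Python sketch of how one design-level exposure metric in this spirit could be computed from design artifacts:

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    classified: bool  # flagged as holding high-security data
    public: bool      # directly accessible from outside its class

@dataclass
class ClassDesign:
    name: str
    attributes: list

def classified_exposure_ratio(classes):
    """Hypothetical design-level metric: the fraction of classified
    (high-security) attributes that are publicly accessible.
    Lower values suggest better data encapsulation."""
    classified = [a for c in classes for a in c.attributes if a.classified]
    if not classified:
        return 0.0
    return sum(a.public for a in classified) / len(classified)

design = [
    ClassDesign("Account", [
        Attribute("balance", classified=True, public=False),
        Attribute("pin", classified=True, public=True),    # potential leak
        Attribute("label", classified=False, public=True),
    ]),
]
print(classified_exposure_ratio(design))  # 0.5
```

Because such a metric needs only class and attribute declarations, it could be evaluated on UML designs before any code exists, which matches the paper's goal of detecting defects early.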

Recognizing the impact of reconfiguration on the QoS of running systems is especially necessary for choosing an appropriate approach to dealing with dynamic evolution of mission-critical or non-stop business systems. The rationale is that the impaired QoS caused by inappropriate use of dynamic approaches is unacceptable for such running systems. To predict this impact in advance, the challenge is two-fold. First, a unified benchmark is necessary to expose QoS problems of existing dynamic approaches. Second, an abstract representation is necessary to provide a basis for modeling and comparing the QoS of existing and new dynamic reconfiguration approaches. Our previous work [8] successfully evaluated the QoS assurance capabilities of existing dynamic approaches and provided guidance on the appropriate use of particular approaches. This paper reinvestigates our evaluations, extending them into concurrent and parallel environments by abstracting hardware and software conditions to design an evaluation context. We report the new evaluation results and conclude with updated impact analysis and guidance.

This paper considers the conditions that are necessary at system and local levels for teacher assessment to be valid, reliable and rigorous. With sustainable assessment cultures as a goal, the paper examines how education systems can support local level efforts for quality learning and dependable teacher assessment. This is achieved through discussion of relevant research and consideration of a case study involving an evaluation of a cross-sectoral approach to promoting confidence in school-based assessment in Queensland, Australia. Building on the reported case study, essential characteristics for developing sustainable assessment cultures are presented, including: leadership in learning; alignment of curriculum, pedagogy and assessment; the design of quality assessment tasks and accompanying standards; and evidence-based judgement and moderation. Taken together, these elements constitute a new framework for building assessment capabilities and promoting quality assurance.

Refactoring is a common approach to producing better quality software. Its impact on many software quality properties, including reusability, maintainability and performance, has been studied and measured extensively. However, its impact on the information security of programs has received relatively little attention. In this work, we assess the impact of a number of the most common code-level refactoring rules on data security, using security metrics that are capable of measuring security from the viewpoint of potential information flow. The metrics are calculated for a given Java program using a static analysis tool we have developed to automatically analyse compiled Java bytecode. We ran our Java code analyser on various programs which were refactored according to each rule. New values of the metrics for the refactored programs then confirmed that the code changes had a measurable effect on information security.
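The abstract does not list the specific refactoring rules or metrics it studied. Purely as a hedged sketch (the metric, field model and function names here are invented for illustration), the following toy Python code shows how applying an "Encapsulate Field"-style refactoring could produce a measurable change in a simple exposure metric of the kind described:

```python
def exposure(fields):
    """Hypothetical metric: fraction of security-critical fields
    that are directly (publicly) accessible."""
    critical = [f for f in fields if f["critical"]]
    if not critical:
        return 0.0
    return sum(f["public"] for f in critical) / len(critical)

def encapsulate_field(fields, name):
    """Toy 'Encapsulate Field' refactoring: make the named field
    private, so access would go through getters/setters instead.
    Returns a new field list; the original is left unchanged."""
    return [dict(f, public=False) if f["name"] == name else f
            for f in fields]

before = [
    {"name": "password", "critical": True, "public": True},
    {"name": "salt", "critical": True, "public": False},
]
after = encapsulate_field(before, "password")
print(exposure(before), exposure(after))  # 0.5 0.0
```

Comparing the metric before and after the transformation mirrors the paper's approach of confirming that refactorings have a measurable effect on information security, although the real tool works on compiled Java bytecode rather than a field list like this.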

Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.

Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms.
Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
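MCDTK itself is written in Java and its internals are not given in the abstract. Assuming (as the abstract implies) that each beam's Monte Carlo result is a 3D dose grid normalised per monitor unit, the MU-weighted combination step described above could be sketched in Python roughly as follows; the function name and the toy values are illustrative only:

```python
import numpy as np

def combine_beam_doses(doses_per_mu, monitor_units):
    """Combine per-beam dose grids (dose per monitor unit) into a
    single 3D dose distribution, weighting each beam by its
    planned monitor units."""
    total = np.zeros_like(doses_per_mu[0])
    for dose, mu in zip(doses_per_mu, monitor_units):
        total += dose * mu
    return total

# Two toy 2x2x2 "dose grids" (arbitrary dose per MU) and their planned MUs
beam_a = np.full((2, 2, 2), 1.0)
beam_b = np.full((2, 2, 2), 2.0)
plan_dose = combine_beam_doses([beam_a, beam_b], [100, 50])
print(plan_dose[0, 0, 0])  # 200.0
```

The resulting grid is the kind of composite 3D distribution the abstract says could be compared voxel-by-voxel against the treatment planning system's own calculation.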

There remains a lack of published empirical data on the substantive outcomes of higher learning and the establishment of quality processes for determining them. Studies that do exist are nationally focused, with available rankings of institutions reflecting neither the quality of teaching and learning nor the diversity of institutions. This paper describes two studies in which Associate Deans from Australian higher education institutions and focus groups of management and academics identify current issues and practices in the design, development and implementation of processes for assuring the quality of learning and teaching. Results indicate that developing a perspective on graduate attributes and mapping assessments to measure outcomes across an entire program necessitates knowledge creation and new inclusive processes. Common elements supporting consistently superior outcomes included: inclusivity; embedded graduate attributes; consistent and appropriate assessment; digital collection mechanisms; and systematic analysis of outcomes used in program review. Quality measures for assuring learning are proliferating nationally and, as a result, changing the processes, systems and culture of higher education.

This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics, and to investigate their usefulness as predictors of quality assurance (QA) success and failure. A total of 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements and correlations were investigated. The mean field area factor provided a threshold field size (5 cm2, equivalent to a 2.2 x 2.2 cm2 square field), below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests for plan accuracy, which may help minimise time spent on QA assessments of treatments that are unlikely to pass.
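The exact metric definitions are not given in the abstract. As a hypothetical sketch of the reported mean-field-area threshold (5 cm2, below which all beams failed QA), the following Python snippet, with invented function names and toy segment areas, flags beams whose mean segment aperture area falls below that cut-off:

```python
def mean_field_area(segment_areas_cm2):
    """Hypothetical 'mean field area' factor: the average aperture
    area over a beam's segments, in cm^2."""
    return sum(segment_areas_cm2) / len(segment_areas_cm2)

def predicted_to_fail_qa(segment_areas_cm2, threshold_cm2=5.0):
    """Flag a beam whose mean field area falls below the threshold
    (5 cm^2, roughly a 2.2 x 2.2 cm^2 square field)."""
    return mean_field_area(segment_areas_cm2) < threshold_cm2

print(predicted_to_fail_qa([3.0, 4.5, 4.0]))  # True  (mean ~3.8 cm^2)
print(predicted_to_fail_qa([8.0, 12.0]))      # False (mean 10.0 cm^2)
```

A pre-treatment screen of this kind is what the study proposes: cheap plan-level checks run before any measurement, so that beams very likely to fail QA can be reworked without spending time on the diode array.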

In today's highly competitive business world, quality in manufacturing and in the provision of products and services can be considered a means by which companies pursue excellence and success. With the arrival of e-commerce and the emergence of new production systems and new organizational structures, traditional management and quality assurance systems have been challenged. Consequently, the quality information system has gained a special place as one of the new tools of quality management. In this paper, the quality information system is studied through a review of the literature, and the role and position of the quality information system (QIS) among an organization's other information systems is investigated. Existing QIS models are analyzed and assessed, and on that basis a conceptual, hierarchical model of a quality information system is proposed and studied. As a case study, the hierarchical model is developed by evaluating the hierarchical models presented in the QIS literature against the situation of Shetabkar Co.

There have now been two decades of rhetoric on the need for culturally and contextually appropriate perspectives in international education. However, the extent to which courses, provision and pedagogy have truly reflected differences in cultural characteristics and learning preferences is still open to question. Little attention has been paid to these matters in quality assurance frameworks. This chapter discusses these issues and draws upon Hofstede’s cultural dimensions framework and studies into Asian pedagogy and uses of educational technology. It proposes a benchmark and performance indicators for assuring cultural, contextual, educational and technological appropriateness in the provision of transnational distance education in Asia by Australian universities.

Assurance of learning (AOL) is a quality enhancement and quality assurance process used in higher education. It involves a process of determining programme learning outcomes and standards, and systematically gathering evidence to measure students' performance on these. The systematic assessment of whole-of-programme outcomes provides a basis for curriculum development and management, continuous improvement, and accreditation. To better understand how AOL processes operate, a national study of university practices across one discipline area, business and management, was undertaken. To solicit data on AOL practice, interviews were undertaken with a sample of business school representatives (n = 25). Two key processes emerged: (1) mapping of graduate attributes and (2) collection of assurance data. External drivers such as professional accreditation and government legislation were the primary reasons for undertaking AOL, but intrinsic motivators relating to continuous improvement were also evident. The facilitation of academic commitment was achieved through an embedded approach to AOL by the majority of universities in the study. A sustainable and inclusive process of AOL was seen to support wider stakeholder engagement in the development of higher education learning outcomes.

Purpose: To study the quality of higher education in Cambodia and explore the potential factors leading to quality in Cambodian higher education.

Design/methodology/approach: Five main factors deemed relevant to quality in Cambodian higher education were proposed: academic curriculum and extra-curricular activities, teachers' qualifications and methods, funding and tuition, school facilities, and interactive networks. These five propositions were used to compare Shu-Te University, Taiwan, with the top five universities in Cambodia. Data were collected through questionnaires and desk research, and a descriptive analytical approach was then used to examine the five factors.

Findings: Only 6 per cent of lecturers hold a PhD degree and about 85 per cent have never published any papers; some private universities charge as little as USD200 per academic year, there is almost no donation from international organizations, and nationwide annual government funding for higher education in 2005 was only about USD3.67 million; although each university has a library, the books, study materials and other resources are out of date and inadequate; 90 per cent of lecturers never hold technical discussions or meetings, and about 60 per cent of students felt that their teachers had no time to consult with them.

Originality/value: A useful insight was gained into the perceived importance of quality in higher education, which can stimulate debate and discussion on the role of government in building standard quality in higher education. The findings can also assist in the development of a framework for developing human resources.

This case study applied Weick's (1979) notion of sensemaking to support timely, quality doctoral completion. Taking a socio-cultural perspective, the paper explored how drivers can be applied to inform better fit (Durham, 1991). Global research themes, including growth in student numbers, timely completion, and the generation and distribution of research outcomes, were considered. It is argued that accessible and interactive web interfaces should be informed by quality assurance measures and key performance indicators. The contribution made is a better understanding of how phenomena and contexts can be applied to generate quality management of research training environments and research outcomes in universities.