Abstract:
Often CAD models already exist for parts of a geometry being simulated using GEANT4. Direct import of these CAD models into GEANT4, however, may not be possible, and complex components may be difficult to define via other means. Solutions that allow users to work around the limited support in the GEANT4 toolkit for loading predefined CAD geometries have been presented by others; however, these solutions require intermediate file format conversion using commercial software. Herein we describe a technique that allows CAD models to be loaded directly as geometry, without the need for commercial software or intermediate file format conversion. The robustness of the interface was tested using a set of CAD models of varying complexity; for the models used in testing, no import errors were reported and all geometry was found to be navigable by GEANT4. Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217
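The direct-import approach described above typically reads a mesh format such as STL and hands the triangular facets to a tessellated-solid constructor (in GEANT4, a G4TessellatedSolid). The following is a minimal illustrative sketch of the mesh-reading side only, in Python, assuming a simplified ASCII STL file; it is not the authors' interface.

```python
# Minimal ASCII STL reader: extracts triangular facets of the kind that
# could be fed to a tessellated-solid constructor (e.g. G4TessellatedSolid).
# Illustrative sketch only; real STL handling is more involved.

def read_ascii_stl(text):
    """Return a list of facets, each a list of three (x, y, z) vertex tuples."""
    facets, current = [], []
    for line in text.splitlines():
        tokens = line.split()
        if tokens[:1] == ["vertex"]:
            current.append(tuple(float(t) for t in tokens[1:4]))
        elif tokens[:1] == ["endfacet"]:
            facets.append(current)
            current = []
    return facets

# A single-triangle STL solid for demonstration.
stl = """solid tri
facet normal 0 0 1
  outer loop
    vertex 0 0 0
    vertex 1 0 0
    vertex 0 1 0
  endloop
endfacet
endsolid tri"""

facets = read_ascii_stl(stl)
print(len(facets), facets[0])
```

Each facet would then be registered with the solid, after which the volume behaves like any other navigable GEANT4 geometry.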
Abstract:
Many donors, particularly those contemplating a substantial donation, consider whether their donation will be deductible from their taxable income. This motivation is not lost on fundraisers who conduct appeals before the end of the taxation year to capitalise on such desires. The motivation is also not lost on Treasury analysts who perceive the tax deduction as “lost” revenue and wonder if the loss is “efficient” in economic terms. Would it be more efficient for the government to give grants to deserving organisations, rather than permitting donor-directed gifts? Better still, what about contracts that lock in the use of the money for a government priority? What place does tax deduction play in influencing a donor to give? Does the size of the gift bear any relationship to the size of the tax deduction? Could an increased level of donations take up an increasing shortfall in government welfare and community infrastructure spending? Despite these questions being asked regularly, little has been rigorously established about the effect of taxation deductions on a donor’s gifts.
Abstract:
Management scholars and practitioners emphasize the importance of the size and diversity of a knowledge worker's social network. Constraints on knowledge workers’ time and energy suggest that more is not always better. Further, why and how larger networks contribute to valuable outcomes deserves further understanding. In this study, we offer hypotheses that shed light on the question of the diminishing returns of large networks and the specific form of network diversity that may contribute to innovative performance among knowledge workers. We tested our hypotheses using data collected from 93 R&D engineers in a Sino-German automobile electronics company located in China. Study findings identified an inflection point, confirming our hypothesis that the size of the knowledge worker's egocentric network has an inverted U-shaped effect on job performance. We further demonstrate that network dispersion richness (the number of cohorts that the focal employee has connections to) rather than network dispersion evenness (equal distribution of ties across the cohorts) has more influence on the knowledge worker's job performance. Additionally, we found that the curvilinear effect of network size is fully mediated by network dispersion richness. Implications for future research on social networks in China and Western contexts are discussed.
Abstract:
In this issue of the Journal, the articles presented to the readers cover the breadth and depth of project management research and practice, addressing: the relationship between project strategy and managing projects (Patanakul and Shenhar, “What Project Strategy Really Is: The Fundamental Building Block in Strategic Project Management”); the need to align corporate strategy with program management (Ritson, Johansen, and Osborne, “Successful Programs Wanted: Exploring the Impact of Alignment”); metrics to measure program success across project contexts (Shao, Müller, and Turner, “Measuring Program Success”); major risks in customer relationship management (CRM) implementation projects (Papadopoulos, Ojiako, Chipulu, and Lee, “The Criticality of Risk Factors in Customer Relationship Management Projects”); the application of earned value management (EVM) to aerospace projects (Kwak and Anbari, “History, Practices, and Future of Earned Value Management in Government: Perspectives From NASA”); and the capture of the tacit knowledge of construction project professionals to determine the optimal construction site layout (Abdul-Rahman, Wang, and Siong, “Knowledge Acquisition Using Psychotherapy Technique for Critical Factors Influencing Construction Project Layout Planning”)...
Abstract:
Cloud computing allows vast computational resources to be leveraged quickly and easily, in bursts, as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware. Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217
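The cost behaviour described above follows directly from per-machine-hour billing: splitting a T-hour simulation across n machines takes T/n hours of wall-clock time, but each machine's runtime is billed rounded up to the next whole hour, so cost is minimised when n divides T. A small sketch (the 12-hour total is an assumed figure, not from the study):

```python
import math

# Cost/time trade-off under simple per-machine-hour billing.
# The total simulation time below is illustrative, not a measured value.

def completion_time(total_hours, n):
    """Wall-clock time when the simulation is split evenly across n machines."""
    return total_hours / n

def relative_cost(total_hours, n):
    """Billed machine-hours relative to a single machine, with hourly rounding up."""
    return n * math.ceil(total_hours / n) / total_hours

total = 12  # assumed total single-machine simulation time, in hours
for n in (1, 3, 4, 5, 12):
    print(n, completion_time(total, n), round(relative_cost(total, n), 3))
```

For n = 3, 4 or 12 (factors of 12) the relative cost stays at 1.0, while n = 5 wastes part of each billed hour and costs 25% more.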
Abstract:
BACKGROUND Collaborative and active learning have been clearly identified as ways students can engage in learning with each other and with academic staff. Traditional tiered lecture theatres, and the didactic style they engender, are not popular with students today, as evidenced by low attendance rates for lectures. Many universities are installing spaces designed with tables for group interaction, with evolutions on spaces such as the TEAL (Technology Enabled Active Learning) (Massachusetts Institute of Technology, n.d.) and SCALE-UP (Student-Centred Activities for Large-Enrolment Undergraduate Programs) (North Carolina State University, n.d.) models. Technology advances in large-screen computers and applications have also aided the move to these collaborative spaces. How well have universities structured learning using these spaces, and how have students engaged with the content, technology, space and each other? This paper investigates the application of collaborative learning in such spaces for a cohort of 800+ first-year engineers, in the context of learning about and developing professional skills representative of engineering practice. PURPOSE To determine whether moving from tiers to tables enhances the student experience. Does utilising technology-rich, activity-based, collaborative learning spaces lead to positive experiences and active engagement of first-year undergraduate engineering students? In developing learning methodology and approach in new learning spaces, what needs to change from a more traditional lecture and tutorial configuration? DESIGN/METHOD A post-delivery review and analysis of outcomes was undertaken to determine how well students and tutors engaged with learning in the new collaborative learning spaces. Data were gathered via a focus group and survey of tutors, a student survey, and attendance observations.
The authors considered the unit delivery approach along with observed and surveyed outcomes, then conducted further review to produce the reported results. RESULTS Results indicate high participation in the collaborative sessions, while the accompanying lectures were poorly attended. Students reported a high degree of satisfaction with the learning experience; however, more investigation is required to determine the degree of improvement in retained learning outcomes. Survey feedback from tutors found that students engaged well in the activities during tutorials, and there was an observed improvement in the quality of professional practice modelled by students during sessions. Student feedback confirmed the positive experiences in these collaborative learning spaces, with a 30% improvement in satisfaction ratings over previous years. CONCLUSIONS It is concluded that the right mix of space, technology and appropriate activities does engage students, improve participation and create a rich experience that facilitates the potential for improved learning outcomes. The new Collaborative Teaching Spaces, together with integrated technology and tailored activities, have transformed the delivery of this unit and significantly improved student satisfaction in tutorials.
Abstract:
The aim of this work is to develop software that is capable of back projecting primary fluence images obtained from EPID measurements through phantom and patient geometries in order to calculate 3D dose distributions. In the first instance, we aim to develop a tool for pre-treatment verification in IMRT. In our approach, a Geant4 application is used to back project primary fluence values from each EPID pixel towards the source. Each beam is considered to be polyenergetic, with a spectrum obtained from Monte Carlo calculations for the LINAC in question. At each step of the ray tracing process, the energy differential fluence is corrected for attenuation and beam divergence. Subsequently, the TERMA is calculated and accumulated to an energy differential 3D TERMA distribution. This distribution is then convolved with monoenergetic point spread kernels, thus generating energy differential 3D dose distributions. The resulting dose distributions are accumulated to yield the total dose distribution, which can then be used for pre-treatment verification of IMRT plans. Preliminary results were obtained for a test EPID image comprising 100 × 100 pixels of unity fluence. Back projection of this field into a 30 cm × 30 cm × 30 cm water phantom was performed, with TERMA distributions obtained in approximately 10 min (running on a single core of a 3 GHz processor). Point spread kernels for monoenergetic photons in water were calculated using a separate Geant4 application. Following convolution and summation, the resulting 3D dose distribution produced familiar build-up and penumbral features. In order to validate the dose model we will use EPID images recorded without any attenuating material in the beam for a number of MLC-defined square fields. The dose distributions in water will be calculated and compared to TPS predictions.
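The ray-tracing step described above can be sketched in one dimension: at each step the energy fluence is attenuated exponentially and corrected for inverse-square beam divergence, and TERMA (the attenuation coefficient over density times energy fluence) is accumulated per voxel. All numerical values below (attenuation coefficient, SSD, step size) are illustrative assumptions, not the authors' beam data:

```python
import math

# 1D TERMA accumulation along a single diverging pencil ray.
# mu (cm^-1), rho (g/cm^3), ssd (cm) and the step size are assumed values.

def terma_along_ray(psi0, mu, rho, ssd, step, n_steps):
    """Return per-voxel TERMA for a monoenergetic ray entering a phantom."""
    terma = []
    for i in range(n_steps):
        depth = (i + 0.5) * step  # voxel-centre depth below the surface
        # inverse-square divergence correction relative to the surface
        div = (ssd / (ssd + depth)) ** 2
        # exponential attenuation of the primary fluence to this depth
        att = math.exp(-mu * depth)
        terma.append((mu / rho) * psi0 * att * div)
    return terma

t = terma_along_ray(psi0=1.0, mu=0.07, rho=1.0, ssd=100.0, step=1.0, n_steps=5)
print([round(x, 4) for x in t])
```

In the full 3D method this per-voxel TERMA is accumulated per energy bin and then convolved with monoenergetic point spread kernels to yield dose.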
Abstract:
Dose kernels may be used to calculate dose distributions in radiotherapy (as described by Ahnesjo et al., 1999). Their calculation requires the use of Monte Carlo methods, usually by forcing interactions to occur at a point. The Geant4 Monte Carlo toolkit provides a capability to force interactions to occur in a particular volume. We have modified this capability and created a Geant4 application to calculate dose kernels in Cartesian, cylindrical, and spherical scoring systems. The simulation considers monoenergetic photons incident at the origin of a 3 m × 3 m × 3 m water volume. Photons interact via Compton scattering, the photoelectric effect, pair production, and Rayleigh scattering. By default, Geant4 models photon interactions by sampling a physical interaction length (PIL) for each process. The process returning the smallest PIL is then considered to occur. In order to force the interaction to occur within a given length, L_FIL, we scale each PIL according to the formula: PIL_forced = L_FIL × (1 - exp(-PIL/PIL_0)), where PIL_0 is a constant. This ensures that the process occurs within L_FIL, whilst correctly modelling the relative probability of each process. Dose kernels were produced for incident photon energies of 0.1, 1.0, and 10.0 MeV. In order to benchmark the code, dose kernels were also calculated using the EGSnrc Edknrc user code. Identical scoring systems were used; namely, the collapsed cone approach of the Edknrc code. Relative dose difference images were then produced. Preliminary results demonstrate the ability of the Geant4 application to reproduce the shape of the dose kernels; median relative dose differences of 12.6%, 5.75%, and 12.6% were found for incident photon energies of 0.1, 1.0, and 10.0 MeV respectively.
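The scaling formula above maps an exponentially sampled PIL into the interval (0, L_FIL), so the interaction is guaranteed to occur within the forcing length while the ordering of competing processes (and hence their relative probabilities) is preserved. A small numerical check of these two properties (the attenuation coefficient and forcing parameters are illustrative, not those of the study):

```python
import math
import random

# Forced physical-interaction-length scaling: PIL_forced = L_FIL * (1 - exp(-PIL/PIL_0)).
# mu, l_fil and pil0 below are assumed illustrative values.

def force_pil(pil, l_fil, pil0):
    """Map an unbounded sampled PIL into (0, l_fil), preserving ordering."""
    return l_fil * (1.0 - math.exp(-pil / pil0))

random.seed(1)
mu = 0.05                  # cm^-1, assumed attenuation coefficient
l_fil, pil0 = 1.0, 20.0    # cm, assumed forcing parameters

# Sample exponential PILs and force them into the forcing length.
samples = [force_pil(random.expovariate(mu), l_fil, pil0) for _ in range(1000)]
print(max(samples) < l_fil, min(samples) > 0.0)
```

Because the mapping is strictly monotonic, the process with the smallest unforced PIL still returns the smallest forced PIL, which is what keeps the relative process probabilities correct.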
Abstract:
The purpose of this paper is to identify goal conflicts – both actual and potential – between climate and social policies in government strategies in response to the growing significance of climate change as a socioecological issue (IPCC 2007). Both social and climate policies are political responses to long-term societal trends related to capitalist development, industrialisation, and urbanisation (Koch, 2012). Both modify these processes through regulation, fiscal transfers and other measures, thereby affecting conditions for the other. This means that there are fields of tensions and synergies between social policy and climate change policy. Exploring these tensions and synergies is an increasingly important task for navigating genuinely sustainable development. Gough et al (2008) highlight three potential synergies between social and climate change policies: First, income redistribution – a traditional concern of social policy – can facilitate use of and enhance efficiency of carbon pricing. A second area of synergy is housing, transport, urban policies and community development, which all have potential to crucially contribute towards reducing carbon emissions. Finally, climate change mitigation will require substantial and rapid shifts in producer and consumer behaviour. Land use planning policy is a critical bridge between climate change and social policy that provides a means to explore the tensions and synergies that are evolving within this context. This paper will focus on spatial planning as an opportunity to develop strategies to adapt to climate change, and reviews the challenges of such change. Land use and spatial planning involve the allocation of land and the design and control of spatial patterns. Spatial planning is identified as being one of the most effective means of adapting settlements in response to climate change (Hurlimann and March, 2012). 
It provides the instrumental framework for adaptation (Meyer, et al., 2010) and operates as both a mechanism to achieve adaptation and a forum to negotiate priorities surrounding adaptation (Davoudi, et al., 2009). The acknowledged role of spatial planning in adaptation, however, has not translated into comparably significant consideration in the planning literature (Davoudi, et al., 2009; Hurlimann and March, 2012). The discourse on adaptation specifically through spatial planning is described as ‘missing’ and ‘subordinate’ in national adaptation plans (Greiving and Fleischhauer, 2012), ‘underrepresented’ (Roggema, et al., 2012) and ‘limited and disparate’ in the planning literature (Davoudi, et al., 2009). Hurlimann and March (2012) suggest this may be due to limited experiences of adaptation in developed nations, while Roggema et al. (2012) and Crane and Landis (2010) suggest it is because climate change is a wicked problem involving an unfamiliar problem, various frames of understanding and uncertain solutions. The potential for goal conflicts within this policy forum seems to outweigh the synergies. Yet spatial planning will be a critical policy tool in the future to both protect communities and adapt them to climate change.
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm3) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.
Results & Discussion: Increasing the degree of simplification of the CT-density ramp results in an increasing effect on the resulting radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water and plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into a MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density, for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. 
Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
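The HU-to-mass-density conversion described above reduces to piecewise-linear interpolation between the control points of the CT-density ramp, and the study's finding is that removing control points distorts the converted densities far more than changing the material assignment does. A minimal sketch of the interpolation step (the 3-point ramp below is an assumed example, not the scanner's calibration):

```python
# Piecewise-linear HU-to-mass-density conversion along a CT-density ramp.
# The control points below are illustrative, not those of the scanner studied.

def hu_to_density(hu, ramp):
    """Interpolate mass density for a HU value; ramp is sorted (HU, density) pairs."""
    if hu <= ramp[0][0]:
        return ramp[0][1]
    for (h0, d0), (h1, d1) in zip(ramp, ramp[1:]):
        if hu <= h1:
            return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)
    return ramp[-1][1]

# Assumed 3-point ramp: air, water, dense bone.
ramp = [(-1000, 0.001), (0, 1.0), (1500, 1.85)]
print(hu_to_density(-1000, ramp), hu_to_density(0, ramp), hu_to_density(750, ramp))
```

With more control points between these anchors, soft-tissue and lung densities would be resolved more finely, which is exactly what the 12-point ramp provides and the 2-point simplification loses.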
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID, and to evaluate the appropriateness of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system simulated using EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size-dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1].
These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternate evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP, to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities.
However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
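The SPR bookkeeping underlying the comparisons above is simple: given the total detector signal and an estimate of its primary component, the scatter-to-primary ratio is total/primary − 1. The signal values in this sketch are hypothetical, purely to show the arithmetic:

```python
# Scatter-to-primary ratio (SPR) from a total signal and a primary estimate.
# The transmission values below are hypothetical, not measured data.

def spr(total_signal, primary_signal):
    """SPR = (total - primary) / primary."""
    return total_signal / primary_signal - 1.0

primary = 0.40   # primary-only transmission through a phantom (assumed)
total = 0.46     # transmission including phantom scatter (assumed)
print(round(spr(total, primary), 3))
```

In practice the difficulty lies not in this ratio but in isolating the primary component, which is why the study cross-checks the experimental method against a direct particle count in the simulation.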
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. 
Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained either via DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial-resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant.
Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
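The gamma-evaluation comparison described above combines a dose-difference criterion with a distance-to-agreement criterion; a point passes if any nearby evaluated dose lies within the combined ellipse. A minimal one-dimensional sketch, using the common 3%/3 mm criteria and invented dose profiles (this is an illustration, not the authors' implementation):

```python
import math

# Minimal 1D gamma evaluation: for each reference point, take the minimum
# over evaluated points of the combined dose-difference / distance metric.
# Criteria (3% dose difference, 3 mm distance-to-agreement) are assumed.

def gamma_1d(ref, evl, positions, dd=0.03, dta=3.0):
    """Gamma index at each reference point; ref/evl sampled on `positions` (mm)."""
    result = []
    for xr, dr in zip(positions, ref):
        g2 = min(((xe - xr) / dta) ** 2 + ((de - dr) / dd) ** 2
                 for xe, de in zip(positions, evl))
        result.append(math.sqrt(g2))
    return result

pos = [0.0, 1.0, 2.0, 3.0]          # mm, invented sample positions
ref = [1.00, 0.90, 0.50, 0.10]      # invented reference (normalised) doses
evl = [1.01, 0.91, 0.52, 0.10]      # invented evaluated doses
g = gamma_1d(ref, evl, pos)
print([round(x, 3) for x in g], all(x <= 1.0 for x in g))
```

A point with gamma ≤ 1 passes both criteria jointly; full implementations interpolate the evaluated distribution rather than testing only sampled points, which is the resolution independence the abstract mentions.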
Abstract:
Introduction: The motivation for developing megavoltage (and kilovoltage) cone beam CT (MV CBCT) capabilities in the radiotherapy treatment room was primarily based on the need to improve patient set-up accuracy. There has recently been an interest in using the cone beam CT data for treatment planning. Accurate treatment planning, however, requires knowledge of the electron density of the tissues receiving radiation in order to calculate dose distributions. This is obtained from CT, utilising a conversion between CT number and the electron density of various tissues. The use of MV CBCT has particular advantages compared to treatment planning with kilovoltage CT in the presence of high atomic number materials, and requires the conversion of pixel values from the image sets to electron density. Therefore, a study was undertaken to characterise the pixel value to electron density relationship for the Siemens MV CBCT system, MVision, and to determine the effect, if any, of varying the number of monitor units used for acquisition. If a significant difference with the number of monitor units were seen, then pixel value to ED conversions may be required for each of the clinical settings. The calibration of the MV CT images for electron density offers the possibility of a daily recalculation of the dose distribution and the introduction of new adaptive radiotherapy treatment strategies. Methods: A Gammex Electron Density CT Phantom was imaged with the MV CBCT system. The pixel value for each of the sixteen inserts, which ranged from 0.292 to 1.707 in relative electron density to the background solid water, was determined by taking the mean value from within a region of interest centred on the insert, over 5 slices within the centre of the phantom. These results were averaged and plotted against the relative electron densities of each insert, and a linear least-squares fit was performed. This procedure was performed for images acquired with 5, 8, 15 and 60 monitor units.
Results: The linear relationship between MV CT pixel value and ED was demonstrated for all monitor unit settings and over a range of electron densities. The number of monitor units utilised was found to have no significant impact on this relationship. Discussion: It was found that the number of MU utilised does not significantly alter the pixel value obtained for different ED materials. However, to ensure the most accurate and reproducible pixel value to ED calibration, one MU setting should be chosen and used routinely. To ensure accuracy for the clinical situation, this MU setting should correspond to that which is used clinically. If more than one MU setting is used clinically, then an average of the CT values acquired with different numbers of MU could be utilised without loss in accuracy. Conclusions: No significant differences have been shown between the pixel value to ED conversions for the Siemens MV CT cone beam unit with change in monitor units. Thus a single conversion curve could be utilised for MV CT treatment planning. To fully utilise MV CT imaging for radiotherapy treatment planning, further work will be undertaken to ensure all corrections have been made and dose calculations verified. These dose calculations may be either for treatment planning purposes or for reconstructing the delivered dose distribution from transit dosimetry measurements made using electronic portal imaging devices. This will potentially allow the cumulative dose distribution to be determined through the patient’s multi-fraction treatment, and adaptive treatment strategies to be developed to optimise the tumour response.
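The calibration step described above is an ordinary linear least-squares fit of mean insert pixel value against known relative electron density. A minimal sketch (the pixel/ED pairs below are hypothetical, not the measured Gammex phantom data):

```python
# Linear least-squares fit of pixel value against relative electron density.
# The (pixel, ED) pairs are hypothetical illustration data, not measurements.

def linear_fit(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

pixel = [120.0, 480.0, 1020.0, 1710.0]   # hypothetical mean insert pixel values
ed = [0.292, 0.640, 1.000, 1.707]        # known relative electron densities
a, b = linear_fit(pixel, ed)
print(round(a, 6), round(b, 4))

# Convert an arbitrary pixel value to relative ED with the fitted curve:
print(round(a * 800.0 + b, 3))
```

Once the slope and intercept are established for one MU setting, every voxel of an MV CBCT image can be mapped to relative ED for dose calculation; the study's finding is that the same curve serves all MU settings.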
Abstract:
The latest paradigm shift in government, termed Transformational Government, puts the citizen in the centre of attention. Including citizens in the design of online one-stop portals can help governmental organisations to become more customer focussed. This study describes the initial efforts of an Australian state government to develop an information architecture to structure the content of their future one-stop portal. To this end, card sorting exercises were conducted and analysed, utilising contemporary approaches found in academic and non-scientific literature. This paper describes the findings of the card sorting exercises in this particular case and discusses the suitability of the applied approaches in general. These are distinguished into non-statistical, statistical, and hybrid approaches. Thus, on the one hand, this paper contributes to academia by describing the application of different card sorting approaches and discussing their strengths and weaknesses. On the other hand, it contributes to practice by explaining the approach that was taken by the authors’ research partner in order to develop a customer-focussed governmental one-stop portal. It thereby provides decision support for practitioners with regard to different analysis methods that can be used to complement recent approaches in Transformational Government.
Abstract:
The first major national cultural policy in 19 years was unveiled by Minister for the Arts Simon Crean on 13 March 2013. Minister Crean has called it “a national cultural policy for the decade.” Uncharitable souls might ask “which decade?”, given that it was first promised soon after the election of the Rudd government in 2007. It is, however, a bold and forward-looking statement. In marked contrast to the limited detail provided by Communications Minister Stephen Conroy in support of the media reforms he recently announced, the more than 150 pages of Creative Australia outline a comprehensive set of proposals for immediate action, and some aspirations for the longer term. Like the media reforms, however, it may not survive if there is a change in government in September.