966 results for "room set up"
Abstract:
Aim: In 2013 QUT introduced the Medical Imaging Training Immersive Environment (MITIE) as a virtual reality (VR) platform that allowed students to practice general radiography. The system software has now been expanded to include the C-Arm. The aim of this project was to investigate the use of this technology in the pedagogy of undergraduate medical imaging students who have limited or no clinical experience with the C-Arm. Method: The Medical Imaging Training Immersive Environment (MITIE) application provides students with realistic and fully interactive 3D models of C-Arm equipment. As with VR initiatives in other health disciplines (1–2), the software mimics clinical practice as closely as possible and uses 3D technology to enhance 3D spatial awareness and realism. The application allows students to set up and expose a virtual patient in a 3D environment as well as create the resultant "image" for comparison with a gold standard. Automated feedback highlights ways for the student to improve their patient positioning, equipment setup or exposure factors. The students' equipment knowledge was tested using an online assessment quiz, and surveys captured the students' pre-clinical confidence, with post-clinical data collected for comparison. Ethical approval for the project was provided by the university ethics panel. Results: This study is currently under way, and this paper will present analysis of initial student feedback relating to the perceived value of the application for confidence in a high-risk environment (i.e. the operating theatre) and related clinical skills development. Further in-depth evaluation is ongoing, with full results to be presented. Conclusion: MITIE C-Arm has a developmental role to play in the pre-clinical skills training of Medical Radiation Science students. It will augment their theoretical understanding prior to their clinical experience. References: 1. Bridge P, Appleyard R, Ward J, Phillips R, Beavis A. The development and evaluation of a virtual radiotherapy treatment machine using an immersive visualisation environment. Computers and Education 2007; 49(2): 481–494. 2. Gunn T, Berry C, Bridge P et al. 3D Virtual Radiography: Development and Initial Feedback. Paper presented at the 10th Annual Scientific Meeting of Medical Imaging and Radiation Therapy, March 2013, Hobart, Tasmania.
Abstract:
Employment on the basis of merit is the foundation of Australia’s equal opportunity legislation, beginning with the Affirmative Action (Equal Opportunity for Women) Act 1986, and continuing through the Equal Opportunity for Women in the Workplace Act 1999 to the Workplace Gender Equality Act 2012, all of which require organisations with more than 100 employees to produce an organisational program promoting employment equity for women (WGEA 2014a; Strachan, Burgess & Henderson 2007). The issue of merit was seen as critically important to the objectives of the original 1986 Act and the Affirmative Action Agency produced two monographs in 1988 written by Clare Burton: Redefining Merit (Burton 1988a) and Gender Bias in Job Evaluation (Burton 1988b) which provided practical advice. Added to this, in 1987 the Australian Government Publishing Service published Women’s Worth: Pay Equity and Job Evaluation in Australia (Burton, Hag & Thompson 1987). The equity programs set up under the 1986 legislation aimed to ‘eliminate discriminatory employment practices and to promote equal employment opportunities for women’ and this was ‘usually understood to mean that the merit principle forms the basis of appointment to positions and for promotion’ (Burton 1988a, p. 1).
Abstract:
This article considers the artistic and legal practices of Bangarra Dance Theatre in a case study of copyright law management in relation to Indigenous culture. It is grounded in the particular local experience, knowledge and understanding of copyright law displayed by the performing arts company. The first part considers the special relationship between Bangarra Dance Theatre and the Munyarrun Clan. It examines the contractual arrangements developed to recognise communal ownership. The next section examines the role of the artistic director and choreographer. It looks at the founder, Carole Johnson, and her successor, Stephen Page. The third part of the article focuses on the role of the composer, David Page. It examines his ambition to set up an Indigenous recording company, Nikinali. Part 4 focuses upon the role of the artistic designers. It looks at the contributions of artistic designers such as Fiona Foley. Part 5 deals with broadcasts of performances on television, film, and multi-media. Part 6 considers the collaborations of Bangarra Dance Theatre with the Australian Ballet, and the Sydney Organising Committee for the Olympic Games. The conclusion considers how Bangarra Dance Theatre has played a part in a general campaign to increase the protection of Indigenous culture under copyright law.
Abstract:
The ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium was set up to analyze brain measures and genotypes from multiple sites across the world to improve the power to detect genetic variants that influence the brain. Diffusion tensor imaging (DTI) yields quantitative measures sensitive to brain development and degeneration, and some common genetic variants may be associated with white matter integrity or connectivity. DTI measures, such as the fractional anisotropy (FA) of water diffusion, may be useful for identifying genetic variants that influence brain microstructure. However, genome-wide association studies (GWAS) require large populations to obtain sufficient power to detect and replicate significant effects, motivating a multi-site consortium effort. As part of an ENIGMA-DTI working group, we analyzed high-resolution FA images from multiple imaging sites across North America, Australia, and Europe, to address the challenge of harmonizing imaging data collected at multiple sites. Four hundred images of healthy adults aged 18-85 from four sites were used to create a template and corresponding skeletonized FA image as a common reference space. Using twin and pedigree samples of different ethnicities, we used our common template to evaluate the heritability of tract-derived FA measures. We show that our template is reliable for integrating multiple datasets by combining results through meta-analysis and unifying the data through exploratory mega-analyses. Our results may help prioritize regions of the FA map that are consistently influenced by additive genetic factors for future genetic discovery studies. Protocols and templates are publicly available at http://enigma.loni.ucla.edu/ongoing/dti-working-group/.
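The meta-analytic combination mentioned above can be illustrated with a minimal sketch, assuming simple inverse-variance (fixed-effects) pooling of independent per-site estimates such as the heritability of tract-averaged FA; the site values below are illustrative placeholders, not ENIGMA-DTI results, and the working group's actual statistical pipeline may differ.

```python
# Hedged sketch: inverse-variance weighted meta-analysis of per-site estimates
# (e.g. heritability of tract-averaged FA). The estimates and standard errors
# below are illustrative placeholders, not ENIGMA-DTI results.
import numpy as np

def fixed_effects_meta(estimates, std_errors):
    """Combine per-site estimates using inverse-variance weights."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(std_errors, dtype=float) ** 2
    weights = 1.0 / variances
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Illustrative per-site heritability estimates for one FA tract.
h2 = [0.55, 0.62, 0.48, 0.58]
se = [0.08, 0.10, 0.12, 0.09]
pooled_h2, pooled_se = fixed_effects_meta(h2, se)
print(f"pooled h2 = {pooled_h2:.3f} +/- {pooled_se:.3f}")
```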
Abstract:
Purpose The purpose of this paper is to explore the concept of service quality for settings where several customers are involved in the joint creation and consumption of a service. The approach is to provide first insights into the implications of simultaneous multi‐customer integration for service quality. Design/methodology/approach This conceptual paper undertakes a thorough review of the relevant literature before developing a conceptual model of service co‐creation and service quality in customer groups. Findings Group service encounters must be set up carefully to account for the dynamics (social activity) in a customer group and the skill sets and capabilities (task activity) of the individual participants involved in a group service experience. Research limitations/implications Future research should undertake empirical studies to validate and/or modify the model suggested in this contribution. Practical implications Managers of service firms should be made aware of the implications and the underlying factors of group services in order to create and manage a group experience successfully. Particular attention should be given to those factors that can be influenced by service providers in managing encounters with multiple customers. Originality/value This article introduces a new conceptual approach for service encounters with groups of customers in a proposed service quality model. In particular, the paper focuses on integrating the impact of customers' co‐creation activities on service quality in a multiple‐actor model.
Abstract:
Monitoring pedestrian and cyclist movement is an important area of research in transport, crowd safety, urban design and human behaviour assessment. Media Access Control (MAC) address data have recently been used as a source of information for extracting features of people's movement. MAC addresses are unique identifiers of the WiFi and Bluetooth wireless interfaces in smart electronic devices such as mobile phones, laptops and tablets. The unique number of each WiFi and Bluetooth MAC address can be captured and stored by MAC address scanners. MAC address data in effect allow unannounced, non-participatory tracking of people. The use of MAC data for tracking people has recently been applied to mass events, shopping centres, airports, train stations, etc. In terms of travel-time estimation, setting up a scanner with a high antenna gain is usually recommended for highways and main roads to track vehicle movements, whereas high gains can have drawbacks in the case of pedestrians and cyclists. Pedestrians and cyclists mainly move in built-up districts and city pathways where there is significant noise from other, fixed WiFi and Bluetooth devices. High antenna gains cover wide areas, which results in scanning more samples from pedestrians' and cyclists' MAC devices. However, anomalies (such as fixed devices) may also be captured, which increases the complexity and processing time of the data analysis. On the other hand, low-gain antennas produce fewer anomalies in the data, but at the cost of a lower overall sample size of pedestrian and cyclist data. This paper studies the effect of antenna characteristics on MAC address data in terms of travel-time estimation for pedestrians and cyclists. The results of the empirical case study compare the effects of low and high antenna gains in order to suggest an optimal set-up for increasing the accuracy of pedestrian and cyclist travel-time estimation.
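As a rough illustration of the travel-time estimation that the scanner data feed into, here is a minimal sketch, not the authors' pipeline: hashed MAC detections at two scanner sites are matched by device, and matches whose implied travel time is implausibly long (typical of fixed or loitering devices) are discarded. The detection records and the 30-minute threshold are illustrative assumptions.

```python
# Hedged sketch of MAC-based travel-time estimation between two scanner sites.
# Detection records and the plausibility threshold are illustrative assumptions.
from statistics import median

# (hashed_mac, timestamp_seconds) detections at two scanners along a pathway.
scanner_a = [("mac1", 100.0), ("mac2", 105.0), ("mac3", 110.0), ("fixed1", 0.0)]
scanner_b = [("mac1", 190.0), ("mac2", 210.0), ("fixed1", 3600.0)]

def first_seen(detections):
    """Keep the earliest timestamp per device."""
    seen = {}
    for mac, t in detections:
        if mac not in seen or t < seen[mac]:
            seen[mac] = t
    return seen

a, b = first_seen(scanner_a), first_seen(scanner_b)

# Travel times for devices detected at both sites; drop implausible values
# (e.g. > 30 min), which typically come from fixed or loitering devices.
travel_times = [b[mac] - a[mac] for mac in a.keys() & b.keys()
                if 0 < b[mac] - a[mac] < 1800]
print("median travel time (s):", median(travel_times))
```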
Abstract:
Back in 1995, Peter Drahos wrote a futuristic article called ‘Information feudalism in the information society’. It took the form of an imagined history of the information society in the year 2015. Drahos provided a pessimistic vision of the future, in which the information age was ruled by the private owners of intellectual property. He ended with the bleak, Hobbesian image: "It is unimaginable that the information society of the 21st century could be like this. And yet if abstract objects fall out of the intellectual commons and are enclosed by private owners, private, arbitrary, unchecked global power will become a part of life in the information society. A world in which seed rights, algorithms, DNA, and chemical formulas are owned by a few, a world in which information flows can be coordinated by information-media barons, might indeed be information feudalism (p. 222)." This science fiction assumed that a small number of states would dominate the emerging international regulatory order set up under the World Trade Organization. In Information Feudalism: Who Owns the Knowledge Economy?, Peter Drahos and his collaborator John Braithwaite reprise and expand upon the themes first developed in that article. The authors contend: "Information feudalism is a regime of property rights that is not economically efficient, and does not get the balance right between rewarding innovation and diffusing it. Like feudalism, it rewards guilds instead of inventive individual citizens. It makes democratic citizens trespassers on knowledge that should be the common heritage of humankind, their educational birthright. Ironically, information feudalism, by dismantling the publicness of knowledge, will eventually rob the knowledge economy of much of its productivity (p. 219)." Drahos and Braithwaite emphasise that the title Information Feudalism is not intended to be taken at face value by literal-minded readers, and crudely equated with medieval feudalism. Rather, the title serves as a suggestive metaphor. It designates the transfer of knowledge from the intellectual commons to private corporations under the regime of intellectual property.
Abstract:
This chapter considers the legal ramifications of Wikipedia, and other online media, such as the Encyclopedia of Life. Nathaniel Tkacz (2007) has observed: 'Wikipedia is an ideal entry-point from which to approach the shifting character of knowledge in contemporary society.' He observes: 'Scholarship on Wikipedia from computer science, history, philosophy, pedagogy and media studies has moved beyond speculation regarding its considerable potential, to the task of interpreting - and potentially intervening in - the significance of Wikipedia's impact' (Tkacz 2007). After an introduction, Part II considers the evolution and development of Wikipedia, and the legal troubles that have attended it. It also considers the establishment of rival online encyclopedias - such as Citizendium, set up by Larry Sanger, the co-founder of Wikipedia; and Knol, the mysterious new project of Google. Part III explores the use of mass, collaborative authorship in the field of science. In particular, it looks at the development of the Encyclopedia of Life, which seeks to document the world's biodiversity. This chapter expresses concern that Wiki-based software had to develop in a largely hostile and inimical legal environment. It contends that copyright law and related fields of intellectual property need to be reformed in order better to accommodate users of copyright material (Rimmer 2007). This chapter makes a number of recommendations. First, there is a need to acknowledge and recognize forms of mass, collaborative production and consumption - not just individual authorship. Second, the view of a copyright 'work' and other subject matter as a complete and closed piece of cultural production should also be reconceptualised. Third, the defense of fair use should be expanded to accommodate a wide range of amateur, peer-to-peer production activities - not only in the United States, but in other jurisdictions as well. Fourth, the safe harbor protections accorded to Internet intermediaries, such as Wikipedia, should be strengthened. Fifth, there should be a defense in respect of the use of 'orphan works' - especially in cases of large-scale digitization. Sixth, the innovations of open source licensing should be expressly incorporated and entrenched within the formal framework of copyright laws. Finally, courts should craft judicial remedies to take into account concerns about political censorship and freedom of speech.
Abstract:
Not much has happened in Myanmar for most of the past 50 years; not much, that is, for Western media, investors or even tourists to notice. Myanmar remained isolated for most of that time; an isolation that was partly self-imposed and, especially after the violent military crackdown against large-scale protests in 1988, externally reinforced via Western sanctions set up at the request of democracy leader Daw Aung San Suu Kyi after her National League for Democracy was denied power following its landslide triumph in the 1990 election for a constituent assembly.
Abstract:
The climate in the Arctic is changing faster than anywhere else on earth. Poorly understood feedback processes relating to Arctic clouds and aerosol–cloud interactions contribute to a poor understanding of the present changes in the Arctic climate system, and also to a large spread in projections of future climate in the Arctic. The problem is exacerbated by the paucity of research-quality observations in the central Arctic. Improved formulations in climate models require such observations, which can only come from measurements in situ in this difficult-to-reach region with logistically demanding environmental conditions. The Arctic Summer Cloud Ocean Study (ASCOS) was the most extensive central Arctic Ocean expedition with an atmospheric focus during the International Polar Year (IPY) 2007–2008. ASCOS focused on the study of the formation and life cycle of low-level Arctic clouds. ASCOS departed from Longyearbyen on Svalbard on 2 August and returned on 9 September 2008. In transit into and out of the pack ice, four short research stations were undertaken in the Fram Strait: two in open water and two in the marginal ice zone. After traversing the pack ice northward, an ice camp was set up on 12 August at 87°21' N, 01°29' W and remained in operation through 1 September, drifting with the ice. During this time, extensive measurements were taken of atmospheric gas and particle chemistry and physics, mesoscale and boundary-layer meteorology, marine biology and chemistry, and upper ocean physics. ASCOS provides a unique interdisciplinary data set for development and testing of new hypotheses on cloud processes, their interactions with the sea ice and ocean, and associated physical, chemical, and biological processes and interactions. For example, the first-ever quantitative observation of bubbles in Arctic leads, combined with the unique discovery of marine organic material (polymer gels with an origin in the ocean) inside cloud droplets, suggests the possibility of primary marine organically derived cloud condensation nuclei in Arctic stratocumulus clouds. Direct observations of surface fluxes of aerosols could, however, not explain observed variability in aerosol concentrations, and the balance between local and remote aerosol sources remains open. Lack of cloud condensation nuclei (CCN) was at times a controlling factor in low-level cloud formation, and hence for the impact of clouds on the surface energy budget. ASCOS provided detailed measurements of the surface energy balance from late summer melt into the initial autumn freeze-up, and documented the effects of clouds and storms on the surface energy balance during this transition. In addition to such process-level studies, the unique, independent ASCOS data set can be, and is being, used for validation of satellite retrievals, operational models, and reanalysis data sets.
Abstract:
Background The Australian National Hand Hygiene Initiative (NHHI) is a major patient safety programme co-ordinated by Hand Hygiene Australia (HHA) and funded by the Australian Commission on Safety and Quality in Health Care. The annual costs of running this programme need to be understood to know the cost-effectiveness of a decision to sustain it as part of health services. Aim To estimate the annual health services cost of running the NHHI; the set-up costs are excluded. Methods A health services perspective was adopted for the costing, and data were collected from the 50 largest public hospitals in Australia that implemented the initiative, covering all states and territories. The costs of HHA, the costs to the state-level infection-prevention groups, the costs incurred by each acute hospital, and the costs of additional alcohol-based hand rub are all included. Findings The programme cost AU$5.56 million each year (US$5.76 million, £3.63 million). Most of the cost was incurred at the hospital level (65%) and arose from the extra time taken to audit hand hygiene compliance and to deliver education and training. On average, each infection control practitioner spent 5 h per week on the NHHI, and the running cost per annum to their hospital was approximately AU$120,000 in 2012 (US$124,000, £78,000). Conclusion Good estimates of the total costs of this programme are fundamental to understanding the cost-effectiveness of implementing the NHHI. This paper reports transparent costing methods, and the results include their uncertainty.
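As a back-of-the-envelope illustration of how the reported figures fit together, the sketch below uses only the quantities quoted above (AU$5.56 million per year, 65% incurred at the hospital level, about 5 h of infection control practitioner time per week); the 52-week annualisation is an assumption, and this is not the paper's costing model.

```python
# Hedged illustrative sketch relating the figures quoted in the abstract.
# Only the totals quoted above are taken from the source; the 52-week
# annualisation of practitioner time is an assumption.
total_annual_cost = 5_560_000   # AU$ per year across the programme
hospital_share = 0.65           # proportion of cost incurred at the hospital level

hospital_level_cost = total_annual_cost * hospital_share
central_cost = total_annual_cost - hospital_level_cost
print(f"hospital-level: AU${hospital_level_cost:,.0f}, central/other: AU${central_cost:,.0f}")

# Weekly infection control practitioner time expressed as an annual figure.
icp_hours_per_week = 5
print(f"ICP time per hospital per year: ~{icp_hours_per_week * 52} hours")
```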
Abstract:
Unified communications as a service (UCaaS) can be regarded as a cost-effective model for on-demand delivery of unified communications services in the cloud. However, addressing security concerns has been seen as the biggest challenge to the adoption of IT services in the cloud. This study set up a cloud system via the VMware suite to emulate hosting unified communications (UC) services, the integration of two or more real-time communication systems, in the cloud in a laboratory environment. An Internet Protocol Security (IPSec) gateway was also set up to provide network-level security for UCaaS against possible security exposures. The study aimed to analyse an implementation of UCaaS over IPSec and to evaluate the latency of UC traffic while that traffic is protected by encryption. Our test results show no added latency when IPSec is implemented with the G.711 audio codec. However, the performance of the G.722 audio codec with an IPSec implementation affects the overall performance of the UC server. These results provide technical advice and guidance to those responsible for UC security controls on premises as well as in the cloud.
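A latency comparison of this kind could be probed with a simple timestamped UDP test run once across the plain network and once through the IPSec tunnel; the sketch below is a hedged illustration, not the authors' test harness, and ECHO_HOST/ECHO_PORT, the packet count and the 20 ms packetisation interval (as for G.711 framing) are assumptions.

```python
# Hedged sketch of a latency probe for comparing UC media paths with and without
# an IPSec tunnel. It sends timestamped UDP packets at a 20 ms interval to an
# assumed UDP echo endpoint and reports the mean round-trip time (RTT).
import socket, struct, time
from statistics import mean

ECHO_HOST, ECHO_PORT = "192.0.2.10", 7   # assumed echo endpoint on the test network
N_PACKETS, INTERVAL_S = 200, 0.020       # 20 ms packets, as for G.711 framing

def probe_rtt(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for seq in range(N_PACKETS):
        # 4-byte sequence number + 8-byte send timestamp + 160 bytes of padding
        # (roughly a 20 ms G.711 payload).
        payload = struct.pack("!Id", seq, time.monotonic()) + b"\x00" * 160
        sock.sendto(payload, (host, port))
        try:
            data, _ = sock.recvfrom(2048)
            _, sent = struct.unpack("!Id", data[:12])
            rtts.append((time.monotonic() - sent) * 1000.0)  # milliseconds
        except socket.timeout:
            pass  # count as loss; excluded from the latency average
        time.sleep(INTERVAL_S)
    return rtts

rtts = probe_rtt(ECHO_HOST, ECHO_PORT)
print(f"mean RTT: {mean(rtts):.2f} ms over {len(rtts)} replies" if rtts else "no replies")
```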
Abstract:
The Australian Naturalistic Driving Study (ANDS), a ground-breaking study of Australian driver behaviour and performance, was officially launched on April 21st, 2015 at UNSW. The ANDS project will provide a realistic perspective on the causes of vehicle crashes and near-miss crash events, along with the roles that speeding, distraction and other factors play in such events. A total of 360 volunteer drivers across NSW and Victoria - 180 in NSW and 180 in Victoria - will be monitored by a Data Acquisition System (DAS) that records their driving behaviour continuously for four months using a suite of cameras and sensors. Participants' driving behaviour (e.g. gaze), the behaviour of their vehicle (e.g. speed, lane position) and the behaviour of other road users with whom they interact in normal and safety-critical situations will be recorded. Planning of the ANDS commenced over two years ago, in June 2013, when the Multi-Institutional Agreement for a grant supporting the equipment purchase and assembly phase was signed by the parties involved in this large-scale $4 million study (5 university accident research centres, 3 government regulators, 2 third-party insurers and 2 industry partners). The program's second development phase commenced a year later, in June 2014, after a second grant was awarded. This paper presents an insider's view of that two-year process leading up to the launch, and outlines issues that arose in the set-up phase of the study and how they were addressed. This information will be useful to other organisations considering setting up an NDS.
Abstract:
Distributed space-time coding for wireless relay networks in which the source, the destination and the relays have multiple antennas has been studied by Jing and Hassibi. In this set-up, the transmit and receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are co-located. In this paper, a wireless relay network with a single antenna at the source and the destination and two antennas at each of the R relays is considered. A new class of distributed space-time block codes called Co-ordinate Interleaved Distributed Space-Time Codes (CIDSTC) is introduced in which, in the first phase, the source transmits a T-length complex vector to all the relays; and in the second phase, at each relay, the in-phase and quadrature component vectors of the complex vectors received at the two antennas are interleaved and processed before being forwarded to the destination. Compared to the scheme proposed by Jing and Hassibi, for T >= 4R, while providing the same asymptotic diversity order of 2R, the CIDSTC scheme is shown to provide an asymptotic coding gain at the cost of a negligible increase in processing complexity at the relays. However, for moderate and large values of P, the CIDSTC scheme is shown to provide more diversity than the scheme proposed by Jing and Hassibi. CIDSTCs are shown to be fully diverse provided the information symbols take values from an appropriate multidimensional signal set.
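The coordinate-interleaving step can be sketched as follows; this is a hedged illustration of one plausible interleaving (swapping the quadrature components of the vectors received on the two relay antennas) rather than the paper's exact CIDSTC construction, and the relay-specific processing and forwarding steps are omitted.

```python
# Hedged sketch of coordinate interleaving at a two-antenna relay: the in-phase
# and quadrature components of the two received T-length complex vectors are
# recombined across antennas. This illustrates the interleaving idea only, not
# the full CIDSTC encoding or the subsequent relay processing.
import numpy as np

def coordinate_interleave(r1, r2):
    """Swap quadrature components between the vectors received on antennas 1 and 2."""
    y1 = np.real(r1) + 1j * np.imag(r2)   # I of antenna 1 with Q of antenna 2
    y2 = np.real(r2) + 1j * np.imag(r1)   # I of antenna 2 with Q of antenna 1
    return y1, y2

T = 4
rng = np.random.default_rng(0)
r1 = rng.normal(size=T) + 1j * rng.normal(size=T)   # received vector, antenna 1
r2 = rng.normal(size=T) + 1j * rng.normal(size=T)   # received vector, antenna 2
y1, y2 = coordinate_interleave(r1, r2)
print(y1, y2, sep="\n")
```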
Abstract:
In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of process data enables better control of the processes to obtain a high-quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical; thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating-plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model using partial least squares regression was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects in either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
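The partial least squares calibration step described above can be sketched as follows; the spectra here are synthetic placeholders rather than the study's NIR data, and scikit-learn's PLSRegression merely stands in for whatever chemometrics software was actually used.

```python
# Hedged sketch of a PLS calibration mapping NIR spectra to coating thickness.
# Synthetic spectra are generated so the example is self-contained; this is not
# the study's data or calibration model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_wavelengths = 60, 200
thickness = rng.uniform(10, 100, size=n_samples)        # coating thickness (um), illustrative
basis = rng.normal(size=n_wavelengths)                   # spectral signature of the coating
# Each spectrum scales with thickness plus measurement noise.
spectra = np.outer(thickness, basis) + rng.normal(scale=5.0, size=(n_samples, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(spectra, thickness, random_state=0)
pls = PLSRegression(n_components=3)
pls.fit(X_train, y_train)
print("R^2 on held-out spectra:", round(pls.score(X_test, y_test), 3))
```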