185 results for Could computing
Abstract:
We investigated the collaboration of ten doctor-nurse pairs with a prototype digital telehealth stethoscope. Doctors could see and hear the patient but could not touch them or the stethoscope; the nurse in each pair controlled the stethoscope. For ethical reasons, an experimenter stood in for a patient. Each of the ten interactions was video recorded and analysed to understand the interaction and collaboration between the doctor and nurse. The video recordings were coded and transformed into maps of interaction that were analysed for patterns of activity. The analysis showed that as doctors and nurses became more experienced with the telehealth stethoscope, their collaboration became more effective. The main measure of effectiveness was the number of corrections to stethoscope placement required by the doctor. In early collaborations, the doctors gave many corrections; after several trials, every pair required fewer corrections. The significance of this research is the identification of the qualities of effective collaboration in the use of the telehealth stethoscope, and of telehealth systems more generally.
Abstract:
Student performance on examinations is influenced by the level of difficulty of the questions. It therefore seems reasonable to propose that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with the subjective assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
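To make the scheme's structure concrete, a hypothetical Python encoding of the six measures is sketched below; the field names paraphrase the abstract, and the authors' actual coding instrument is not reproduced here.

```python
# A hypothetical encoding of the six-measure complexity scheme described in
# the abstract. Field names and integer scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QuestionComplexity:
    external_domain_references: int  # reliance on knowledge outside the course
    explicitness: int                # how directly the task is stated
    linguistic_complexity: int       # reading difficulty of the question text
    conceptual_complexity: int       # number and depth of programming concepts
    code_length: int                 # length of code in the question and/or answer
    bloom_level: int                 # intellectual complexity (Bloom taxonomy)

    def profile(self):
        """Return the six measures as a tuple, e.g. for correlation analysis
        against a subjective difficulty rating."""
        return (self.external_domain_references, self.explicitness,
                self.linguistic_complexity, self.conceptual_complexity,
                self.code_length, self.bloom_level)
```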
Abstract:
Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry's technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry's services to be offered through cloud-based “apps.”
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogeneous and heterogeneous phantoms. Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
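The dose-combination step described above lends itself to a short illustration. The sketch below is a minimal Python rendering of the idea (MCDTK itself is in Java, and the function and array names here are illustrative assumptions, not MCDTK's interface): per-beam Monte Carlo results, expressed as dose per monitor unit, are scaled by the monitor units from the exported plan and summed into one 3D distribution.

```python
# Minimal sketch: combine per-beam Monte Carlo dose grids according to the
# planned monitor units. Names and shapes are illustrative assumptions.
import numpy as np

def combine_beam_doses(dose_per_mu, monitor_units):
    """Sum per-beam 3D dose grids, each weighted by its planned monitor units."""
    total = np.zeros_like(dose_per_mu[0])
    for dose, mu in zip(dose_per_mu, monitor_units):
        total += mu * dose  # each beam's contribution scales linearly with MU
    return total

# Two hypothetical beams on a tiny 2x2x2 grid (dose per MU in arbitrary units)
beams = [np.full((2, 2, 2), 0.01), np.full((2, 2, 2), 0.02)]
total_dose = combine_beam_doses(beams, monitor_units=[100.0, 50.0])
print(total_dose)  # uniform 2.0 everywhere in this toy example
```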
Abstract:
In this paper, we describe a machine-translated parallel English corpus for the NTCIR Chinese, Japanese and Korean (CJK) Wikipedia collections. This document collection is named the CJK2E Wikipedia XML corpus. The corpus could serve the information retrieval research community and support knowledge sharing in Wikipedia in many ways; for example, it could be used for experiments in cross-lingual information retrieval, cross-lingual link discovery, or omni-lingual information retrieval research. Furthermore, the translated CJK articles could be used to further expand the current coverage of the English Wikipedia.
Abstract:
In the modern connected world, pervasive computing has become reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the use of secure side-channels for authentication, including the visual channel as a proof of vicinity. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating a man-in-the-middle attack. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising its security. We show how our approach fits into existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
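The abstract situates QR-Auth alongside one-time-password generation schemes. As background only, here is a minimal Python sketch of standard HOTP/TOTP one-time-password generation (RFC 4226/6238); this is not QR-Auth itself, whose protocol details are not given in the abstract.

```python
# Minimal HOTP/TOTP sketch (RFC 4226 / RFC 6238) for background context.
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password over an 8-byte big-endian counter."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(key: bytes, step: int = 30) -> str:
    """Time-based variant: the counter is the current 30-second window."""
    return hotp(key, int(time.time()) // step)

print(totp(b"shared-secret-key"))  # hypothetical pre-shared key
```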
Abstract:
Could mobile telephony be harnessed for development in Papua New Guinea (PNG)? Could mobile phones be utilised to enhance the security and prosperity of rural communities? Could mobile phones be a useful tool in the achievement of the PNG 2050 Vision targets? This paper is based on a review of the literature on the use of mobile phones for development in Asia, Africa, and the Caribbean. It also draws on discussions with key players in PNG, such as NGOs, UN agencies, donor partners, telecommunication companies and the government of PNG. The anticipated benefits of mobile phone availability have not yet been fully realised in rural areas of PNG, due to pricing, difficulties with recharging handset batteries in communities without mains electricity supply, and concerns about negative social changes related to mobile telephony, for example parental stress over youth forming unsuitable relationships. Nonetheless, there are clear ways in which mobile phone technology could positively change users' communication patterns and improve economic output. In sectors as diverse as health, education, and law and justice, discussions are currently underway to establish how mobile phones could be used to improve service delivery, particularly to rural and marginalised communities.
Abstract:
An optical system which performs the multiplication of binary numbers is described and proof-of-principle experiments are performed. The simultaneous generation of all partial products, optical regrouping of bit products, and optical carry look-ahead addition are novel features of the proposed scheme which takes advantage of the parallel operations capability of optical computers. The proposed processor uses liquid crystal light valves (LCLVs). By space-sharing the LCLVs one such system could function as an array of multipliers. Together with the optical carry look-ahead adders described, this would constitute an optical matrix-vector multiplier.
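The two ideas at the heart of the scheme, simultaneous partial-product generation and carry look-ahead addition, can be illustrated in software. The Python sketch below is a logical model only, not the optical implementation: it shows every carry being computed directly from generate/propagate terms rather than rippled from the previous stage, and the product formed by summing all partial products.

```python
# Logical model of the two ideas in the abstract: simultaneous generation of
# all partial products, and carry look-ahead addition. Bit lists are LSB-first.

def cla_add(a, b):
    """Add two equal-length bit lists using carry look-ahead logic."""
    n = len(a)
    g = [x & y for x, y in zip(a, b)]  # generate: this column creates a carry
    p = [x ^ y for x, y in zip(a, b)]  # propagate: this column passes a carry on
    c = [0] * (n + 1)                  # c[0] = 0: no carry-in
    for i in range(1, n + 1):
        # c[i] = g[i-1] | p[i-1]g[i-2] | ... | p[i-1]...p[1]g[0]:
        # each carry depends only on g/p terms, never on the previous carry.
        carry = 0
        for j in range(i):
            term = g[j]
            for k in range(j + 1, i):
                term &= p[k]
            carry |= term
        c[i] = carry
    return [p[i] ^ c[i] for i in range(n)] + [c[n]]

def multiply(a, b):
    """Multiply two bit lists by summing all partial products."""
    n = len(a) + len(b)
    total = [0] * n
    for k, bk in enumerate(b):
        # Partial product k: a AND b[k], shifted left by k. In the optical
        # scheme, all partial products are formed at once on the light valve.
        pp = [0] * k + [ai & bk for ai in a]
        pp += [0] * (n - len(pp))
        total = cla_add(total, pp)[:n]
    return total

# 6 * 5 = 30:  [0,1,1] * [1,0,1] in LSB-first bits
bits = multiply([0, 1, 1], [1, 0, 1])
assert sum(bit << i for i, bit in enumerate(bits)) == 30
```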
Abstract:
This book develops tools and techniques that will help urban residents gain access to urban computing. Metaphorically speaking, it is taking computing to the street by giving the general public – rather than just researchers and professionals – the power to leverage available city infrastructure and create solutions tailored to their individual needs. It brings together five chapters that are based on presentations given at the Street Computing Workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer-Human Interaction Conference (OZCHI 2009). This book focuses on applying urban informatics, urban and community sensing and open application programming interfaces (APIs) to the public space through the delivery of online services, on demand and in real time. It then offers a case study of how the city of Singapore has harnessed the potential of an online infrastructure so that residents and visitors can access services electronically. This book was published as a special issue of the Journal of Urban Technology, 19(2), 2012.
Abstract:
Background: Many people will consult a medical practitioner about lower bowel symptoms, and the demand for access to general practitioners (GPs) is growing. We do not know whether people recognise the symptoms of lower bowel cancer when advising others about the need to consult a doctor. A structured vignette survey was conducted in Western Australia. Method: Participants were recruited from the waiting rooms of five general practices. Respondents were invited to complete self-administered questionnaires containing nine vignettes chosen at random from a pool of 64 constructed from six clinical variables. Twenty-seven vignettes described high-risk bowel cancer scenarios. Respondents were asked if they would recommend a medical consultation for the case described and whether they believed the scenario was a cancer presentation. Logistic regression was used to estimate the independent effect of each variable on the respondents' judgements. Two hundred and sixty-eight completed responses were collected over eight weeks. Results: The majority (61%) of respondents were female, aged 40 years and older. A history of rectal bleeding, six weeks of symptoms, and weight loss independently increased the odds of recommending a consultation with a medical practitioner, by factors of 7.64, 4.11 and 1.86, respectively. Most cases identified as cancer (75.2%) would not be classified as such on current research evidence. Factors that predict recognition of cancer presentations include rectal bleeding, weight loss and diarrhoea.
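For readers unfamiliar with odds ratios: in logistic regression, the exponentiated coefficient of a binary predictor is the factor by which that predictor multiplies the odds of the outcome. The short Python example below shows the effect of the reported odds ratios on a hypothetical 30% baseline probability; only the ratios themselves (7.64, 4.11 and 1.86) come from the study.

```python
# Worked example: converting an odds ratio into a change in probability.
# The 30% baseline probability is a hypothetical value for illustration.

def apply_odds_ratio(p_baseline, odds_ratio):
    """Return the outcome probability after multiplying the baseline odds."""
    odds = p_baseline / (1.0 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

for factor, ratio in [("rectal bleeding", 7.64),
                      ("six weeks of symptoms", 4.11),
                      ("weight loss", 1.86)]:
    p = apply_odds_ratio(0.30, ratio)
    print(f"{factor}: OR {ratio} -> {p:.0%} probability of recommending a consultation")
```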
Abstract:
Elliptic curve pairings are undoubtedly the most powerful known primitive in public-key cryptography. Upon their introduction just over ten years ago, the computation of pairings was far too slow for them to be considered a practical option. This resulted in a vast amount of research from many mathematicians and computer scientists around the globe aiming to improve this computation speed. From the use of modern results in algebraic and arithmetic geometry to the application of foundational number theory dating back to the days of Gauss and Euler, cryptographic pairings have since experienced a great deal of improvement. As a result, what was once an extremely expensive computation that took several minutes is now a high-speed operation that takes less than a millisecond. This thesis presents a range of optimisations to the state of the art in cryptographic pairing computation. Both by extending prior techniques and by introducing several novel ideas of our own, our work has contributed to record-breaking pairing implementations.
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high-performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that use a combination of computational hardware such as CPUs and GPUs are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes would be required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
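The key idea that lets the implicit solver run entirely on the GPU is that Krylov methods never need the Jacobian matrix itself, only its action on a vector, and that action can be approximated from residual evaluations alone. A minimal NumPy sketch of this matrix-free Jacobian-vector product follows; the names and the toy residual are illustrative assumptions, not the thesis's C++ library.

```python
# Matrix-free Jacobian-vector product via finite differences of the residual:
# J(u) v ~= (F(u + eps*v) - F(u)) / eps, so no Jacobian is ever assembled.
import numpy as np

def jacobian_vector_product(F, u, v, eps=1e-7):
    """Approximate J(u) @ v using two evaluations of the residual F."""
    return (F(u + eps * v) - F(u)) / eps

# Toy nonlinear residual standing in for the discretised flow equations.
def F(u):
    return u**3 - 1.0

u = np.array([1.1, 0.9, 1.05])
v = np.array([1.0, 0.0, 0.0])
print(jacobian_vector_product(F, u, v))  # ~ [3*1.1**2, 0, 0], the exact J @ v
```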
Abstract:
Background: Procedural sedation and analgesia (PSA) administered by nurses in the cardiac catheterisation laboratory (CCL) is unlikely to yield serious complications. However, the safety of this practice depends on timely identification and treatment of depressed respiratory function. Aim: To describe respiratory monitoring practices in the CCL. Methods: Retrospective medical record audit of adult patients who underwent a procedure in the CCLs of one private hospital in Brisbane during May and June 2010. An electronic database was used to identify subjects, and an audit tool ensured data collection was standardised. Results: Nurses administered PSA during 172/473 (37%) procedures, including coronary angiographies, percutaneous coronary interventions, electrophysiology studies, radiofrequency ablations, cardiac pacemakers, implantable cardioverter defibrillators, temporary pacing leads and peripheral vascular interventions. Oxygen saturations were recorded during 160/172 (93%) procedures, respiration rate was recorded during 17/172 (10%) procedures, use of oxygen supplementation was recorded during 40/172 (23%) procedures, and 13/172 (7.5%; 95% CI=3.59–11.41%) patients experienced oxygen desaturation. Conclusion: Although oxygen saturation was routinely documented, nurses did not regularly record respiration observations. It is likely that surgical draping and the requirement to minimise radiation exposure interfered with nurses’ ability to observe respiration. Capnography could overcome these barriers to respiration assessment: its accurate measurement of exhaled carbon dioxide, coupled with an easily interpretable waveform that displays a breath-by-breath account of ventilation, enables identification of respiratory depression in real time. The results of this audit emphasise the need to ascertain the clinical benefits of using capnography to assess ventilation during PSA in the CCL.