947 results for Grid Computing
Abstract:
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware. Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217.
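As an illustration of the cost behaviour described in this abstract, here is a minimal sketch of the relationship between machine count and relative cost under the assumption of per-instance-hour billing; the hourly rate and the 12 CPU-hour total are illustrative values, not figures from the paper.

```python
import math

def cloud_cost(total_cpu_hours, n_machines, hourly_rate=1.0):
    """Relative cost of splitting a simulation across n machines,
    assuming each machine is billed per started instance-hour."""
    wall_time = total_cpu_hours / n_machines          # completion time ~ 1/n
    billed_hours = n_machines * math.ceil(wall_time)  # partial hours billed in full
    return billed_hours * hourly_rate

# Example: a 12 CPU-hour simulation. Cost is lowest when n divides 12,
# since no machine is then billed for a partly used final hour.
for n in (3, 4, 5, 6, 8, 12):
    print(n, cloud_cost(12, n))
```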
Abstract:
The behaviour of single installations of solar energy systems is well understood; however, what happens at an aggregated location, such as a distribution substation, when the outputs of groups of installations accumulate is not so well understood. This paper considers groups of installations attached to distribution substations on which the load is primarily commercial and industrial. Agent-based modelling has been used to model the physical electrical distribution system and the behaviour of equipment outputs towards the consumer end of the network. The paper reports the approach used to simulate both the electricity consumption of groups of consumers and the output of solar systems subject to weather variability, with the inclusion of cloud data from the Bureau of Meteorology (BOM). The data sets currently used are for Townsville, North Queensland. The initial characteristics that indicate whether solar installations are cost effective from an electricity distribution perspective are discussed.
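A minimal, hypothetical agent-based sketch of the kind of substation-level aggregation described in this abstract; the agent behaviours, clear-sky shape, cloud factor and load ranges are illustrative assumptions rather than the paper's model.

```python
import random

class ConsumerAgent:
    """Hypothetical commercial/industrial consumer with a flat daytime load (kW)."""
    def __init__(self, base_load_kw):
        self.base_load_kw = base_load_kw

    def demand(self, hour):
        return self.base_load_kw if 8 <= hour <= 18 else 0.3 * self.base_load_kw

class PVAgent:
    """Hypothetical PV installation whose output is scaled by a cloud-cover factor."""
    def __init__(self, capacity_kw):
        self.capacity_kw = capacity_kw

    def output(self, hour, cloud_factor):
        clear_sky = max(0.0, 1 - abs(hour - 12) / 6)   # crude clear-sky shape
        return self.capacity_kw * clear_sky * (1 - cloud_factor)

def substation_net_load(consumers, pv_units, hour, cloud_factor):
    """Aggregate net load seen at the distribution substation."""
    load = sum(c.demand(hour) for c in consumers)
    generation = sum(p.output(hour, cloud_factor) for p in pv_units)
    return load - generation

consumers = [ConsumerAgent(random.uniform(20, 80)) for _ in range(30)]
pv_units = [PVAgent(random.uniform(5, 30)) for _ in range(30)]
print(substation_net_load(consumers, pv_units, hour=13, cloud_factor=0.4))
```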
Abstract:
Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry's technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry's services to be offered through cloud-based “apps.”
Abstract:
In the increasingly competitive Australian tertiary education market, a consumer orientation is essential. This is particularly so for small regional campuses competing with larger universities in the state capitals. Campus management need to carefully monitor both the perceptions of prospective students within the catchment area, and the (dis)satisfaction levels of current students. This study reports the results of an exploratory investigation into the perceptions held of a regional campus, using two techniques that have arguably been underutilised in the education marketing literature. Repertory Grid Analysis, a technique developed almost fifty years ago, was used to identify attributes deemed salient to year 12 high school students at the time they were applying for university places. Importance-performance analysis (IPA), developed three decades ago, was then used to identify attributes that were determinant for a new cohort of first year undergraduate students. The paper concludes that group applications of Repertory Grid offer education market researchers a useful means for identifying attributes used by high school students to differentiate universities, and that IPA is a useful technique for guiding promotional decision making. In this case, the two techniques provided a quick, economical and effective snapshot of market perceptions, which can be used as a foundation for the development of an ongoing market research programme.
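As an illustrative sketch of importance-performance analysis (IPA) of the kind applied in this study, the following classifies attributes into the four classic IPA quadrants by comparison with the grand means; the attribute names and ratings are hypothetical, not data from the study.

```python
def ipa_quadrants(ratings):
    """Classify attributes into the four classic IPA quadrants by comparing
    each attribute's mean importance and performance with the grand means.
    `ratings` maps attribute -> (mean_importance, mean_performance)."""
    mean_imp = sum(i for i, _ in ratings.values()) / len(ratings)
    mean_perf = sum(p for _, p in ratings.values()) / len(ratings)
    quadrants = {}
    for attr, (imp, perf) in ratings.items():
        if imp >= mean_imp and perf < mean_perf:
            quadrants[attr] = "Concentrate here"
        elif imp >= mean_imp and perf >= mean_perf:
            quadrants[attr] = "Keep up the good work"
        elif imp < mean_imp and perf < mean_perf:
            quadrants[attr] = "Low priority"
        else:
            quadrants[attr] = "Possible overkill"
    return quadrants

# Hypothetical attribute ratings on a 1-7 scale (not data from the study).
print(ipa_quadrants({
    "course range": (6.2, 4.1),
    "campus location": (5.8, 6.0),
    "sporting facilities": (3.9, 3.5),
    "parking": (4.0, 5.9),
}))
```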
Abstract:
Series reactors are used in distribution grids to reduce the short-circuit fault level. Some of the disadvantages of these devices are the voltage drop produced across the reactor and the steep front rise of the transient recovery voltage (TRV), which generally exceeds the rating of the associated circuit breaker. Simulations were performed to compare the characteristics of a saturated core High-Temperature Superconducting Fault Current Limiter (HTS FCL) and a series reactor. The design of the HTS FCL was optimized using an evolutionary algorithm, and the resulting Pareto frontier of optimal solutions is presented in this paper. The results show that the steady-state impedance of an HTS FCL is significantly lower than that of a series reactor for the same level of fault current limiting. Tests performed on a prototype 11 kV HTS FCL confirm the theoretical results. The respective transient recovery voltages of the HTS FCL and an air-core reactor of comparable fault current limiting capability are also determined. The results show that the saturated core HTS FCL has a significantly lower effect on the rate of rise of the circuit breaker TRV compared to the air-core reactor. The simulation results are validated against short-circuit test results.
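As an illustration of the Pareto-frontier idea referred to in this abstract, here is a minimal sketch that extracts the non-dominated designs from a set of candidates under two hypothetical objectives (steady-state impedance and let-through fault current, both minimised); the candidate values are invented for illustration and are not results from the paper.

```python
def pareto_front(designs):
    """Return the non-dominated designs, where each design is a tuple of
    objective values to be minimised."""
    front = []
    for d in designs:
        dominated = any(
            all(o <= od for o, od in zip(other, d)) and other != d
            for other in designs
        )
        if not dominated:
            front.append(d)
    return front

# Hypothetical candidates: (steady-state impedance [ohm], let-through current [kA]).
candidates = [(0.10, 9.0), (0.25, 6.5), (0.40, 6.0), (0.35, 6.6), (0.15, 8.0)]
print(pareto_front(candidates))   # (0.35, 6.6) is dominated by (0.25, 6.5)
```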
Abstract:
In the modern connected world, pervasive computing has become a reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problems of multi-device key management and single-sign-on architectures. One solution to these problems is the use of secure side channels for authentication, including the visual channel as a proof of vicinity. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating man-in-the-middle attacks. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising security. We show how our approach fits into existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
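The abstract does not specify the QR-Auth message flow, so the following is only a hypothetical sketch of a generic two-step, barcode-mediated challenge-response handshake in the same spirit; the HMAC-based construction, function names and enrolment assumption are illustrative, not the actual QR-Auth protocol.

```python
import hmac, hashlib, os, secrets

def server_issue_challenge():
    """Step 1: the server generates a one-time challenge to be rendered as a 2D barcode."""
    return secrets.token_hex(16)

def enrolled_device_respond(challenge, device_key):
    """Step 2: an already-enrolled device scans the barcode and answers the
    challenge with an HMAC keyed by its long-term device key."""
    return hmac.new(device_key, challenge.encode(), hashlib.sha256).hexdigest()

def server_verify(challenge, response, device_key):
    expected = hmac.new(device_key, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

device_key = os.urandom(32)              # hypothetically shared at enrolment time
challenge = server_issue_challenge()     # displayed as a QR code
response = enrolled_device_respond(challenge, device_key)
print(server_verify(challenge, response, device_key))   # True
```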
Abstract:
We present and analyze several gaze-based graphical password schemes based on recall and cued-recall of grid points; eye-trackers are used to record users' gazes, which can prevent shoulder-surfing and may be suitable for users with disabilities. Our 22-subject study observes that success rate and entry time for the grid-based schemes we consider are comparable to other gaze-based graphical password schemes. We propose the first password security metrics suitable for analysis of graphical grid passwords and provide an in-depth security analysis of user-generated passwords from our study, observing that, on several metrics, user-generated graphical grid passwords are substantially weaker than uniformly random passwords, despite our attempts at designing schemes that improve the quality of user-generated passwords.
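As a sketch of the kind of metric such an analysis might use, the following estimates the per-point Shannon entropy of chosen grid cells across a set of user passwords and compares it with the uniform baseline; this measure and the example passwords are illustrative assumptions, not the paper's metrics or data.

```python
import math
from collections import Counter

def point_entropy_bits(passwords):
    """Estimate per-point Shannon entropy (bits) from the empirical distribution
    of grid cells chosen across a set of user passwords."""
    counts = Counter(point for pw in passwords for point in pw)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical 3x3-grid passwords of four points each (not study data).
user_passwords = [
    [(0, 0), (0, 1), (1, 1), (2, 2)],
    [(0, 0), (1, 1), (2, 2), (2, 1)],
    [(0, 0), (0, 1), (0, 2), (1, 2)],
]
uniform_bits = math.log2(9)   # ~3.17 bits per point if cells were chosen uniformly
print(point_entropy_bits(user_passwords), uniform_bits)
```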
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer.
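A minimal sketch of the general pattern of distributing many independent per-region analyses across parallel workers, as the abstract describes for LDLA; the region layout and placeholder analysis function are hypothetical, and real grid middleware rather than Python multiprocessing would be used in practice.

```python
from multiprocessing import Pool

def analyse_region(region):
    """Placeholder for a single LDLA variance-component analysis of one
    genomic region; here it just returns a dummy test statistic."""
    chrom, start, end = region
    return (chrom, start, end, float(end - start) / 1e6)

# Hypothetical 1 Mb regions tiling the first 10 Mb of chromosome 1.
regions = [("chr1", s, s + 1_000_000) for s in range(0, 10_000_000, 1_000_000)]

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(analyse_region, regions)   # one analysis per region, in parallel
    print(len(results))
```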
Abstract:
Voltage rise is one of the main factors that limits the capacity of a Low Voltage (LV) network to accommodate more Renewable Energy (RE) sources. This paper proposes a robust and effective approach to coordinate customers’ resources and manage voltage rise in residential LV networks. Photovoltaic (PV) generation is considered as the customer RE source. The suggested coordination approach includes both a localized control strategy, based on local measurements, and a distributed control strategy, based on a consensus algorithm. This approach can completely avoid violations of the maximum permissible voltage limit. A typical residential LV network is used as the case study, and simulation results are presented to verify the effectiveness of the proposed approach.
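As an illustration of the distributed, consensus-based coordination mentioned in this abstract, here is a minimal sketch of discrete-time average consensus on a small feeder graph; the quantities being averaged, the graph and the step size are illustrative assumptions, not the paper's control law.

```python
import numpy as np

def average_consensus(values, adjacency, eps=0.2, iters=50):
    """Discrete-time average consensus: each agent repeatedly moves its local
    estimate towards its neighbours' estimates. With a connected, undirected
    graph and a small enough step size, all agents converge to the average."""
    x = np.array(values, dtype=float)
    A = np.array(adjacency, dtype=float)
    for _ in range(iters):
        x = x + eps * (A @ x - A.sum(axis=1) * x)
    return x

# Hypothetical local measurements (e.g. per-customer reactive-power headroom in kvar)
# on a 4-bus feeder connected in a line: 1-2-3-4.
values = [1.0, 3.0, 2.0, 6.0]
adjacency = [[0, 1, 0, 0],
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [0, 0, 1, 0]]
print(average_consensus(values, adjacency))   # all entries approach the mean (3.0)
```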
Abstract:
This book develops tools and techniques that will help urban residents gain access to urban computing. Metaphorically speaking, it is taking computing to the street by giving the general public – rather than just researchers and professionals – the power to leverage available city infrastructure and create solutions tailored to their individual needs. It brings together five chapters that are based on presentations given at the Street Computing Workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer-Human Interaction Conference (OZCHI 2009). This book focuses on applying urban informatics, urban and community sensing and open application programming interfaces (APIs) to the public space through the delivery of online services, on demand and in real time. It then offers a case study of how the city of Singapore has harnessed the potential of an online infrastructure so that residents and visitors can access services electronically. This book was published as a special issue of the Journal of Urban Technology, 19(2), 2012.
Abstract:
Every day we are confronted with social interactions with other people. Our social life is characterised by norms that manifest as attitudinal and behavioural uniformities among people. With greater awareness of our social context, we can interact more efficiently. Any theory or model of human interaction that fails to include social concepts arguably lacks a critical element. This paper identifies, from an interdisciplinary perspective, the social concepts that need to be supported by future context-aware systems. It discusses the limitations of existing context-aware systems in supporting social psychology theories related to the identification and membership of social groups. We argue that social norms are among the core modelling concepts that future context-aware systems need to capture in order to support and enhance social interactions. A detailed summary of social psychology theory relevant to social computing is given, followed by a formal definition of the process by which each such norm is advertised and acquired. The social concepts identified in this paper could be used to simulate agent interactions imbued with social norms, or to use ICT to facilitate, assist, enhance or understand social interactions. They could also be used in modelling virtual communities, where the social awareness of a community as well as the process of joining and exiting a community are important.
Abstract:
The most powerful known primitive in public-key cryptography is undoubtedly the elliptic curve pairing. Upon their introduction just over ten years ago, the computation of pairings was far too slow for them to be considered a practical option. This resulted in a vast amount of research by mathematicians and computer scientists around the globe aiming to improve this computation speed. From the use of modern results in algebraic and arithmetic geometry to the application of foundational number theory that dates back to the days of Gauss and Euler, cryptographic pairings have since experienced a great deal of improvement. As a result, what was an extremely expensive computation that took several minutes is now a high-speed operation that takes less than a millisecond. This thesis presents a range of optimisations to the state of the art in cryptographic pairing computation. Both by extending prior techniques and by introducing several novel ideas of our own, our work has contributed to record-breaking pairing implementations.
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties, and such domains are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
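A minimal sketch (in Python/NumPy rather than the thesis's C++ library) of the matrix-free Jacobian-vector product that lets an inexact Newton-Krylov solver work from residual evaluations alone, as described above; the toy residual function is purely illustrative.

```python
import numpy as np

def jacobian_vector_product(residual, u, v, eps=1e-7):
    """Matrix-free approximation of J(u) @ v using two residual evaluations,
    J(u) v ~= (F(u + eps*v) - F(u)) / eps, so the Jacobian never needs to be
    formed explicitly inside the Krylov iteration."""
    return (residual(u + eps * v) - residual(u)) / eps

# Toy nonlinear residual F(u) = u**3 - 1 (elementwise); exact Jacobian is diag(3*u**2).
def residual(u):
    return u**3 - 1.0

u = np.array([1.0, 2.0, 0.5])
v = np.array([0.1, -0.2, 0.3])
print(jacobian_vector_product(residual, u, v))   # close to 3*u**2 * v
print(3 * u**2 * v)
```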