912 results for pacs: neural computing technologies
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables. Thus pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely. This can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. Here the likelihood is estimated independently on each of the multiple CPUs, and the ultimate estimate of the likelihood is the average of the estimates obtained from the individual CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this approach over the standard one is demonstrated on simulated data from a stochastic volatility model.
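The averaging idea described above can be illustrated with a rough sketch (a toy stand-in for the likelihood estimator, not the paper's stochastic volatility model): averaging independent unbiased estimates, one per CPU, keeps the estimator unbiased while dividing its variance by the number of CPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood_estimate(rng):
    # Toy stand-in for an unbiased likelihood estimator (e.g. a particle
    # filter): true likelihood 1.0 plus zero-mean noise.
    return 1.0 + rng.normal(scale=0.5)

def averaged_estimate(rng, n_cpus):
    # Average of n_cpus independent unbiased estimates: still unbiased,
    # but with variance reduced by a factor of n_cpus.
    return np.mean([likelihood_estimate(rng) for _ in range(n_cpus)])

single = np.array([likelihood_estimate(rng) for _ in range(20000)])
averaged = np.array([averaged_estimate(rng, 4) for _ in range(20000)])
print(single.var(), averaged.var())  # averaged variance is roughly a quarter
```

In the paper's scheme each of the independent estimates would be computed in parallel on a separate CPU, so the variance reduction comes at essentially no extra wall-clock cost.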
Abstract:
Over the past few decades, biodiesel produced from oilseed crops and animal fat has received much attention as a renewable and sustainable alternative to automobile engine fuels, particularly petroleum diesel. However, current biodiesel production is heavily dependent on edible oil feedstocks, which are unlikely to be sustainable in the longer term due to rising food prices and concerns about automobile engine durability. Therefore, there is an urgent need for researchers to identify and develop sustainable biodiesel feedstocks which overcome the disadvantages of current ones. At the same time, artificial neural network (ANN) modeling has been used successfully in recent years to gain new knowledge in various disciplines. The main goal of this article is to review the recent literature and assess the state of the art on the use of ANNs as a modeling tool for future generation biodiesel feedstocks. Biodiesel feedstocks, production processes, chemical compositions, standards, physico-chemical properties and in-use performance are discussed. Limitations of current biodiesel feedstocks relative to future generation feedstocks are identified. The application of ANNs in modeling key biodiesel quality parameters and combustion performance in automobile engines is also discussed. This review has determined that ANN modeling has a high potential to contribute to the development of renewable energy systems by accelerating biodiesel research.
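For readers unfamiliar with ANN modeling of the kind the review surveys, a minimal sketch follows. The data and the two-input fuel-property relationship are entirely synthetic (not drawn from the review); the code trains a one-hidden-layer feedforward network by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustrative data: a fuel property modelled as a linear function
# of two hypothetical composition fractions, plus measurement noise.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.05, size=200)

# One-hidden-layer ANN, trained full-batch on mean-squared error.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    pred = (h @ W2 + b2).ravel()          # linear output for regression
    err = pred - y
    g_out = (2.0 / len(y)) * err[:, None]     # d(MSE)/d(pred)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)     # backpropagate through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)
print(float(np.mean(err ** 2)))  # training error falls well below var(y)
```

Real biodiesel applications would use measured composition and property data and typically more inputs, but the training loop has the same shape.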
Abstract:
Criminal intelligence is an area of expertise highly sought-after internationally and within a variety of justice-related professions; however, producing university graduates with the requisite professional knowledge, as well as analytical, organisational and technical skills, presents a pedagogical and technical challenge to university educators. The situation becomes even more challenging when students are undertaking their studies by distance education. This best practice session showcases the design of an online undergraduate unit for final year justice students which uses an evolving real-time criminal scenario as the focus of authentic learning activities in order to prepare students for graduate roles within the criminal intelligence and justice professions. Within the unit, students take on the role of criminal intelligence analysts, applying relevant theories, models and strategies to solve a complex but realistic crime and completing briefings and documentation to industry standards as their major summative assessment task. The session will demonstrate how the design of the online unit corresponds to authentic learning principles, and will specifically map the elements of the unit design to Herrington & Oliver’s instructional design framework for authentic learning (2000; Herrington & Herrington 2006). The session will show how a range of technologies was used to create a rich learning experience for students that could be easily maintained over multiple unit iterations without specialist technical support. The session will also discuss the unique pedagogical affordances and challenges implicated in the location of the unit within an online learning environment, and will reflect on some of the lessons learned from the development which may be relevant to other authentic online learning contexts.
Abstract:
The main theme of this thesis is to allow the users of cloud services to outsource their data without the need to trust the cloud provider. The method is based on combining existing proof-of-storage schemes with distance-bounding protocols. Specifically, cloud customers will be able to verify the confidentiality, integrity, availability, fairness (or mutual non-repudiation), data freshness, geographic assurance and replication of their stored data directly, without having to rely on the word of the cloud provider.
Abstract:
“The Cube” is a unique facility that combines 48 large multi-touch screens and very large-scale projection surfaces to form one of the world’s largest interactive learning and engagement spaces. The Cube facility is part of the Queensland University of Technology’s (QUT) newly established Science and Engineering Centre, designed to showcase QUT’s teaching and research capabilities in the STEM (Science, Technology, Engineering, and Mathematics) disciplines. In this application paper we describe the Cube, its technical capabilities, design rationale and practical day-to-day operations, supporting up to 70,000 visitors per week. Essential to the Cube’s operation are five interactive applications designed and developed in tandem with the Cube’s technical infrastructure. Each of the Cube’s launch applications was designed and delivered by an independent team, while the overall vision of the Cube was shepherded by a small executive team. The diversity of design, implementation and integration approaches pursued by these five teams provides some insight into the challenges, and opportunities, presented when working with large distributed interaction technologies. We describe each of these applications in order to discuss the different challenges and user needs they address, which types of interactions they support and how they utilise the capabilities of the Cube facility.
Abstract:
Based on a series of interviews with Australians between the ages of 55 and 75, this paper explores the relations between our participants’ attitudes towards and use of communication, social and tangible technologies and three relevant themes from our data: staying active, friends and families, and cultural selves. While common across our participants’ experiences of ageing, these themes were notable for the diverse ways they were experienced and expressed within individual lives and for the different roles technology played within each. A brief discussion of how the diversity of our ageing population implicates the design of emerging technologies ends the paper.
Abstract:
Evolutionary computation is an effective tool for solving optimization problems. However, its significant computational demand has limited its real-time and on-line applications, especially in embedded systems with limited computing resources, e.g., mobile robots. Heuristic methods such as the genetic algorithm (GA) based approaches have been investigated for robot path planning in dynamic environments. However, research on the simulated annealing (SA) algorithm, another popular evolutionary computation algorithm, for dynamic path planning is still limited mainly due to its high computational demand. An enhanced SA approach, which integrates two additional mathematical operators and initial path selection heuristics into the standard SA, is developed in this work for robot path planning in dynamic environments with both static and dynamic obstacles. It improves the computing performance of the standard SA significantly while giving an optimal or near-optimal robot path solution, making its real-time and on-line applications possible. Using the classic and deterministic Dijkstra algorithm as a benchmark, comprehensive case studies are carried out to demonstrate the performance of the enhanced SA and other SA algorithms in various dynamic path planning scenarios.
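The standard SA loop on which the enhanced approach builds can be sketched as follows. This is a generic minimiser over a toy one-dimensional cost standing in for a path cost; the paper's additional mathematical operators and initial path selection heuristics are not reproduced here.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=1):
    # Standard SA: always accept improving moves, accept worsening moves
    # with probability exp(-delta/T), and cool T geometrically each step.
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = neighbour(x, rng)
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x                       # keep the best state seen so far
        t *= cooling
    return best

# Toy multimodal landscape standing in for a path cost; the quadratic term
# pulls towards x = 3, the sine term adds local minima.
cost = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)
best = simulated_annealing(cost, lambda x, r: x + r.uniform(-0.5, 0.5), x0=-5.0)
print(best, cost(best))
```

In a robot path planning setting the state would be a candidate path, the neighbour function a local path perturbation, and the cost a combination of path length and obstacle penalties.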
Abstract:
Advances in Information and Communication Technologies have the potential to improve many facets of modern healthcare service delivery. The implementation of electronic health records systems is a critical part of an eHealth system. Despite the potential gains, there are several obstacles that limit the wider development of electronic health record systems. Among these are the perceived threats to the security and privacy of patients’ health data, and a widely held belief that these cannot be adequately addressed. We hypothesise that the major concerns regarding eHealth security and privacy cannot be overcome through the implementation of technology alone. Human dimensions must be considered when analysing the provision of the three fundamental information security goals: confidentiality, integrity and availability. A sociotechnical analysis to establish the information security and privacy requirements when designing and developing a given eHealth system is important and timely. A framework that accommodates consideration of the legislative requirements and human perspectives in addition to the technological measures is useful in developing a measurable and accountable eHealth system. Successful implementation of this approach would enable the possibilities, practicalities and sustainability of proposed eHealth systems to be realised.
Abstract:
eHealth systems promise enviable benefits and capabilities for healthcare delivery. However, the technologies that make these capabilities possible introduce undesirable drawbacks such as information security related threats, which need to be appropriately addressed. Lurking in these threats are information privacy concerns. Addressing them has proven to be difficult because they often conflict with the information access requirements of healthcare providers. Therefore, it is important to achieve an appropriate balance between these requirements. We contend that information accountability (IA) can achieve this balance. In this paper, we introduce accountable-eHealth (AeH) systems, which are eHealth systems that utilise IA as a measure of information privacy. We discuss how AeH system protocols can successfully achieve the aforementioned balance of requirements. As a means of assessing implementation feasibility, we compare characteristics of AeH systems with Australia’s Personally Controlled Electronic Health Record (PCEHR) system, identify similarities, and highlight the differences and the impact those differences would have on the eHealth domain.
Abstract:
CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology’s (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. The CubIT system allows users to upload, interact with and share their own content on the Cube’s display surfaces. This paper outlines the collaborative features of CubIT, which are implemented via three user interfaces: a large-screen multi-touch interface, a mobile phone and tablet application, and a web-based content management system. Each of these applications plays a different role and supports different interaction mechanisms, enabling a wide range of collaborative features including multi-user shared workspaces, drag and drop upload and sharing between users, session management and dynamic state control between different parts of the system.
Abstract:
Proud suggested that the biggest and most obvious impact of the digital world felt by academics was in the area of teaching. He demonstrated a number of the initiatives which have been developed by outside organizations and within various universities. These include larger classrooms, online teaching and Blackboard. All of these were believed to improve student learning but, most commonly, also expanded the faculty workload. He then discussed a number of the newer technologies which are becoming available, such as the virtual classroom, Google Glass, Adobe online, Skype and others. All of these tools, he argued, were in response to increasing economic pressures on the university, the result of which is that entire courses have migrated online. The reasons for university interest in these new technologies were listed as a reduced need for classrooms and classroom space, less need for on-campus facilities and even a decline in the need for weekly in-class lectures. Thus, it has been argued that these new tools and technologies liberate the faculty from the tyranny of geography through the introduction of blogs, online videos, discussion forums and communication tools such as wikis, Facebook sites and Yammer, all of which seem to have specific advantages. The question raised, however, is: how successful have these new digital innovations been? As an example, he cited his own experience in teaching distance learning programs in Thailand and elsewhere. Those results are still being reviewed, with no definitive view developed.
Abstract:
Purpose: This article reports on a research project that explored social media best practice in the public library sector.
Design/methodology/approach: The primary research approach for the project was case study. Two organisations participated in case studies that involved interviews, document analysis, and social media observation.
Findings: The two case study organisations use social media effectively to facilitate participatory networks; however, there have been challenges surrounding its implementation in both organisations. Challenges include negotiating the requirements of governing bodies and broader organisational environments, and managing staff reluctance around the implementations. As social media use continues to grow and libraries continue to take up new platforms, social media must be considered another service point of the virtual branch, and indeed, of the library service as a whole. This acceptance of social media as core business is critical to the successful implementation of social media based activities.
Practical implications: The article provides an empirically grounded discussion of best practice and the conditions that support it. The findings are relevant for information organisations across all sectors and could inform the development of policy and practice in other organisations. This paper contributes to the broader dialogue around best practice in participatory service delivery and social media use in library and information organisations.
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and is also predicted by analyst firms to have a positive market future. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres like the Cloud. One of the challenges is providing management of Cloud resources for SaaS which guarantees maintaining SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems. Therefore, evolutionary algorithms are adopted as the main technique in solving these problems. The first research problem refers to how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented.
A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic nature of the Cloud environment, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level instead of the application level. This research addressed the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA used a repair-based method while the second used a penalty-based method to handle the problem constraints. The experimental results confirmed that the GGAs always produced a better reconfiguration placement plan compared with a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances in coping with the SaaS workload. Determining a scaling plan that can minimise the resources used and maintain SaaS performance is a critical task. Additionally, the problem consists of constraints and interdependency between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that the solutions meet the problem's constraints and achieve its objectives.
The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various types of evolutionary algorithms have also been developed to address these problems, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
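The penalty-based constraint handling mentioned for the second GGA can be illustrated with a minimal, hypothetical GA (not the thesis's algorithms): a chromosome assigns each component to a server, and capacity overflow is penalised in the fitness function rather than repaired.

```python
import random

def penalty_ga(sizes, capacity, n_servers, pop=40, gens=200, seed=2):
    # Minimal penalty-based GA: chromosome[i] is the server hosting
    # component i; fitness = servers used + heavy penalty for overflow.
    rng = random.Random(seed)

    def fitness(ch):
        load = [0] * n_servers
        for comp, srv in enumerate(ch):
            load[srv] += sizes[comp]
        used = sum(1 for l in load if l > 0)
        overflow = sum(max(0, l - capacity) for l in load)
        return used + 10 * overflow      # penalty term handles the constraint

    population = [[rng.randrange(n_servers) for _ in sizes] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]          # elitist truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(len(sizes))       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # mutation: move one component
                child[rng.randrange(len(sizes))] = rng.randrange(n_servers)
            children.append(child)
        population = parents + children
    best = min(population, key=fitness)
    return best, fitness(best)

sizes = [4, 3, 3, 2, 2, 2]   # hypothetical component resource demands
best, score = penalty_ga(sizes, capacity=8, n_servers=6)
print(best, score)
```

A repair-based variant would instead move components out of overloaded servers after crossover and mutation, keeping every chromosome feasible; the thesis compares both strategies.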
Abstract:
There is an increasing interest in the use of information technology as a participatory planning tool, particularly the use of geographical information technologies to support collaborative activities such as community mapping. However, despite their promise, the introduction of such technologies does not necessarily promote better participation or improve collaboration. In part this can be attributed to a tendency for planners to focus on the technical considerations associated with these technologies at the expense of broader participation considerations. In this paper we draw on the experiences of a community mapping project with disadvantaged communities in suburban Australia to highlight the importance of selecting tools and techniques which support and enhance participatory planning. This community mapping project, designed to identify and document community-generated transport issues and solutions, had originally intended to use cadastral maps extracted from the government’s digital cadastral database as the foundation for its community mapping approach. It was quickly discovered that the local residents found the cadastral maps confusing, as the maps lacked sufficient detail to orient them to their suburb (the study area). In response to these concerns, and consistent with the project’s participatory framework, a conceptual base map based on residents’ views of landmarks of local importance was developed to support the community mapping process. Based on this community mapping experience we outline four key lessons learned regarding the process of community mapping and the place of geographical information technologies within this process.