774 results for "Could computing"
Abstract:
We report on a longitudinal study of the development of novice programmers in their first semester of programming. In the third week, almost half of our sample of students could not answer an explain-in-plain-English question about code consisting of just three assignment statements, which swapped the values in two variables. We regard code that swaps the values of two variables as the simplest case in which a programming student can manifest a SOLO relational response. Our results demonstrate that the problems many students face with understanding code can begin very early, on relatively trivial code. However, with traditional programming exercises, these problems often go undetected until late in the semester. New approaches are required to detect and fix these problems earlier.
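For reference, the kind of three-statement swap discussed above looks like the following in Python (a generic illustration; the abstract does not reproduce the study's exact instrument):

    a, b = 1, 2   # example starting values
    # The three assignment statements:
    temp = a      # remember a's original value
    a = b         # a now holds b's value
    b = temp      # b now holds a's original value
    # Now a == 2 and b == 1; a relational answer is simply
    # "this code swaps the values of a and b".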
Abstract:
The traditional searching method for model-order selection in linear regression is a nested, full-parameter-set searching procedure over the candidate orders, which we call full-model order selection. A model-selection method, on the other hand, searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly when the proposed partial-model selection searching method is used.

Index Terms— Model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model of a given order one can find examples for which a method performs well or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria at low SNR by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model search within each given order. Understandably, the improvement in model-order estimation performance comes at the expense of additional computational complexity.
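The contrast between the two searches can be sketched as follows. This is an illustrative Python/NumPy reconstruction using AIC only; the function names, the least-squares fitting, and the exhaustive subset enumeration are assumptions for exposition, not the authors' implementation (which also covers bootstrap-based criteria):

    import numpy as np
    from itertools import combinations

    def aic(y, yhat, k):
        """Akaike's information criterion for a linear-Gaussian model."""
        n = len(y)
        rss = np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * k

    def fit(X, y, cols):
        """Least-squares fit on the selected regressor columns."""
        Xs = X[:, list(cols)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        return Xs @ beta

    def full_model_order(X, y):
        """Traditional nested search: order k uses the first k regressors."""
        p = X.shape[1]
        scores = [aic(y, fit(X, y, range(k + 1)), k + 1) for k in range(p)]
        return int(np.argmin(scores)) + 1

    def partial_model_order(X, y):
        """Partial-model search: the best size-k subset represents order k."""
        p = X.shape[1]
        scores = [min(aic(y, fit(X, y, c), k)
                      for c in combinations(range(p), k))
                  for k in range(1, p + 1)]
        return int(np.argmin(scores)) + 1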
Abstract:
This paper is a summary of a PhD thesis proposal. The thesis will explore how the Web 2.0 platform could be applied to enable and facilitate large-scale participation, deliberation and collaboration among both governmental and non-governmental actors in an ICT-supported policy process. The paper introduces a new democratic theory and a Web 2.0 based e-democracy platform, and demonstrates how different actors would use the platform to develop and justify policy issues.
Abstract:
The performing arts have traditionally made limited use of, and shown limited acceptance of, computing technology. There are cognitive, physical, environmental, and social influences on the use of computers in the performing arts. This paper examines those influences on the use of computers in the performing arts and their implications for education in those areas. These implications for the learning environment include infrastructure, interface design, industrial design, and software functionality. Although many of the issues raised in this paper are common to all visual and performing arts, there are significant differences between them that require the concepts presented here to be abstracted beyond the paper's intended practical focus. In particular, there are differences in the ways humans are involved in the presentation of a work, and in the transitory versus static nature of time in art products.
Abstract:
We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of the MTD. New utility functions are constructed in the Bayesian framework and evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for incorporating safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this amalgamation of methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be incorporated into our approach.
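The re-weighting step rests on the standard self-normalised importance-sampling identity: existing posterior draws serve as the proposal for the updated posterior. A minimal sketch, with assumed names and a generic target; not the paper's code:

    import numpy as np

    def reweight(log_post_new, log_post_old):
        """Self-normalised importance weights for re-using existing
        posterior draws as a sample from an updated posterior (e.g. after
        observing a new dose cohort), avoiding a fresh MCMC run."""
        log_w = log_post_new - log_post_old    # log importance ratios
        log_w -= np.max(log_w)                 # stabilise before exponentiating
        w = np.exp(log_w)
        return w / np.sum(w)

    # Expectation of any quantity g under the updated posterior:
    #   estimate = np.sum(reweight(lp_new, lp_old) * g(theta_draws))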
Abstract:
Privacy has become one of the main impediments to e-health in its advancement towards providing better services to its consumers. Even though many security protocols are being developed to protect information from being compromised, privacy is still a major issue in healthcare, where privacy protection is very important. When consumers are confident that their sensitive information is safe from being compromised, their trust in these services will be higher, leading to better adoption of these systems. In this paper we propose a solution to the problem of patient privacy in e-health through an information accountability framework, which could enhance consumer trust in e-health services and lead to their success.
Abstract:
The increasing ubiquity of digital technology, internet services and location-aware applications in our everyday lives allows for a seamless transitioning between the visible and the invisible infrastructure of cities: road systems, building complexes, information and communication technology, and people networks create a buzzing environment that is alive and exciting. Driven by curiosity, initiative and interdisciplinary exchange, the Urban Informatics Research Lab at Queensland University of Technology (QUT), Brisbane, Australia, is an emerging cluster of people interested in research and development at the intersection of people, place and technology, with a focus on cities, locative media and mobile technology. This paper introduces urban informatics as a transdisciplinary practice across people, place and technology that can aid local governments, urban designers and planners in creating responsive and inclusive urban spaces and nurturing healthy cities. Three challenges are discussed. First, people and the challenge of creativity explores the opportunities and challenges of urban informatics that can lead to the design and development of new tools, methods and applications fostering participation, the democratisation of knowledge, and new creative practices. Second, technology and the challenge of innovation examines how urban informatics can be applied to support user-led innovation with a view to promoting entrepreneurial ideas and creative industries. Third, place and the challenge of engagement discusses the potential to establish places within cities that are dedicated to place-based applications of urban informatics with a view to delivering community and civic engagement strategies.
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
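For context, the standard (exact) uniformization iteration that the paper makes inexact can be sketched as follows; this is a generic textbook version in Python/NumPy, not the authors' relaxed variant, and the function name is an assumption:

    import numpy as np

    def uniformization(Q, pi0, t, tol=1e-10):
        """Transient distribution pi(t) of a CTMC with generator Q via
        uniformization: pi(t) = sum_k Poisson(qt; k) * pi0 @ P^k."""
        q = np.max(-np.diag(Q))            # uniformization rate, q >= max |Q_ii|
        P = np.eye(Q.shape[0]) + Q / q     # DTMC transition matrix P = I + Q/q
        weight = np.exp(-q * t)            # Poisson(qt) probability for k = 0
        v = pi0.copy()
        result = weight * v
        accumulated = weight
        k = 0
        while 1.0 - accumulated > tol:     # truncate once enough Poisson mass
            k += 1
            v = v @ P                      # the costly matrix-vector product
            weight *= q * t / k
            accumulated += weight
            result += weight * v
        # (For very large q*t the weights need log-space handling, omitted here.)
        return result

The v @ P line is the dominant cost the paper targets: in the relaxed scheme these products are computed only approximately, with the induced error controlled as described above.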
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud, where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points in time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
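A cooperative coevolutionary GA of this shape evolves two interacting populations, one for allocations and one for schedules, whose individuals are scored together. The skeleton below is a generic CCGA sketch under assumed interfaces (eval_solution, init_alloc, init_sched and mutate are hypothetical user-supplied functions; the paper's encoding, operators and deadline handling are not reproduced):

    import random

    def ccga(eval_solution, init_alloc, init_sched, mutate,
             generations=200, pop=50):
        """Generic cooperative-coevolution skeleton with two species.
        An individual is scored by pairing it with the best collaborator
        from the other species; eval_solution(alloc, sched) returns a
        cost (lower is better), e.g. execution time plus running cost
        with a penalty for missed deadlines."""
        allocs = [init_alloc() for _ in range(pop)]
        scheds = [init_sched() for _ in range(pop)]
        best_a, best_s = allocs[0], scheds[0]
        for _ in range(generations):
            allocs.sort(key=lambda a: eval_solution(a, best_s))
            scheds.sort(key=lambda s: eval_solution(best_a, s))
            best_a, best_s = allocs[0], scheds[0]
            # Keep the elite half of each species; refill with mutants.
            elite_a, elite_s = allocs[:pop // 2], scheds[:pop // 2]
            allocs = elite_a + [mutate(random.choice(elite_a))
                                for _ in range(pop - len(elite_a))]
            scheds = elite_s + [mutate(random.choice(elite_s))
                                for _ in range(pop - len(elite_s))]
        return best_a, best_s, eval_solution(best_a, best_s)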
Abstract:
We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that the motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates their motions. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on this basis, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We also argue that, under appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.
Abstract:
This paper describes a scene-invariant crowd counting algorithm that uses local features to monitor crowd size. Unlike previous algorithms that require each camera to be trained separately, the proposed method uses camera calibration to scale between viewpoints, allowing a system to be trained and tested on different scenes. A pre-trained system could therefore be used as a turn-key solution for crowd counting across a wide range of environments. The use of local features allows the proposed algorithm to calculate local occupancy statistics, and Gaussian process regression is used to scale to conditions unseen in the training data, while also providing confidence intervals for the crowd size estimate. A new crowd counting database is introduced to the computer vision community to enable a wider evaluation over multiple scenes, and the proposed algorithm is tested on seven datasets to demonstrate scene invariance and high accuracy. To the authors' knowledge, this is the first system of its kind, due to its ability to scale between different scenes and viewpoints.
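The regression stage can be sketched with scikit-learn's Gaussian process regressor, which returns both a mean count and a predictive standard deviation for confidence intervals. The features and data below are placeholders, not the paper's local-feature pipeline or calibration step:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # X: per-frame local-feature statistics (e.g. calibrated occupancy
    # measures); y: ground-truth crowd sizes. Placeholder data below.
    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = 50 * X[:, 0] + 10 * X[:, 1] + rng.normal(size=200)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X, y)

    X_new = rng.random((5, 3))            # features from an unseen scene
    mean, std = gp.predict(X_new, return_std=True)
    lower, upper = mean - 1.96 * std, mean + 1.96 * std   # ~95% interval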
Abstract:
Schools in Australia today are expected to improve student outcomes in an increasingly constrained context focused on accountability. While teachers work within these constraints, they are also working to ensure that all students can achieve quality outcomes from schooling while dealing with the increasing diversity of the student population. So what is a teacher's role in providing equitable possibilities for all of the students in their care? Teachers are left with the difficult task of balancing the many mandatory requirements placed on them, and on their students, such as achieving improvements in high-stakes test scores, with the important work of dealing with individual students and student cohorts in equitable and socially just ways. This impacts the work of all teachers; however, for those working in schools in contexts of complexity and duress, these pressures are even greater.