475 results for Foundations Computer programs
Abstract:
In dynamic environments, firms seek to build capabilities that will permit them to become innovation- and change-ready. Programs offered by intermediaries, while varying greatly in content and format, are designed to support firms wishing to enhance their competitiveness. Firms that participate in intermediary programs have displayed their willingness to overcome deficiencies or barriers to competitiveness by acquiring knowledge external to the firm. This paper reports on interviews with 24 firms that were involved in a MAP or TAP program offered by QMI Solutions. The findings of the research suggest that knowledge intermediaries serve to disrupt organisational paths and, in so doing, establish mechanisms for ongoing learning and change. They do this first by disrupting the firm with a positive learning experience, and second by establishing processes for developing new relationships and access to knowledge, both of which are critical for learning and change. It is the experience of learning through knowledge exchange that can trigger the pursuit of new paths, and it is the processes involving new relations and knowledge processing that provide the micro-foundations for ongoing learning and change. This suggests that the role of intermediaries goes well beyond mere knowledge transfer to include longer-term effects on the capability of organisations to innovate, which is critical to economic competitiveness and the survival rate of firms.
Abstract:
With the emergence of multi-cores into the mainstream, there is a growing need for systems that allow programmers and automated tools to reason about data dependencies and inherent parallelism in imperative object-oriented languages. In this paper we exploit the structure of object-oriented programs to abstract computational side-effects. We capture and validate these effects using a static type system, and we use them as the basis of sufficient conditions for several different data and task parallelism patterns. We complement our static type system with a lightweight runtime system to allow for parallelization in the presence of complex data flows. We have a functioning compiler and worked examples to demonstrate the practicality of our solution.
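The abstract does not show the type system itself. As a purely illustrative sketch, the Java fragment below mimics the idea with a hypothetical @Writes effect annotation: because the two calls declare write effects on disjoint objects, they meet a sufficient condition for task parallelism of the kind such a system could check. The annotation and the disjointness reasoning here are hand-written assumptions; in the paper, effects are captured and validated statically by the compiler.

    import java.lang.annotation.*;
    import java.util.concurrent.*;

    // Hypothetical effect annotation: in the paper the effects are captured
    // and validated by a static type system, not declared by hand like this.
    @Retention(RetentionPolicy.SOURCE)
    @interface Writes { String value(); }

    public class EffectParallelism {
        static class Account { int balance; }

        @Writes("a") static void credit(Account a, int amt) { a.balance += amt; }
        @Writes("b") static void debit(Account b, int amt)  { b.balance -= amt; }

        public static void main(String[] args) throws InterruptedException {
            Account x = new Account(), y = new Account();
            ExecutorService pool = Executors.newFixedThreadPool(2);
            // The declared write effects touch disjoint objects, so running
            // the two calls as separate tasks cannot introduce a data race.
            pool.submit(() -> credit(x, 10));
            pool.submit(() -> debit(y, 5));
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.SECONDS);
            System.out.println(x.balance + " " + y.balance); // prints: 10 -5
        }
    }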
Abstract:
The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin's notion of finite thickness and Wright's work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara's notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Omega be a notation for the first limit ordinal. Then it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length <= m:
• is identifiable in the limit from positive data with an ordinal mind change bound of Omega^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Omega × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro's linear programs, Arimura and Shinohara's depth-bounded linearly covering programs, and Krishna Rao's depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
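For readers new to the notion: an ordinal mind change bound, in the sense of Freivalds and Smith, constrains how often a learner may revise its hypothesis. A standard formulation follows (a paraphrase from the literature, not necessarily the paper's exact definition):

    % Standard formulation (Freivalds--Smith) of an ordinal mind change
    % bound; a paraphrase from the literature, not the paper's definition.
    A learner $M$ identifies a class $\mathcal{L}$ with mind change bound
    $\alpha$ if there is a mapping $c$ from finite data sequences to
    ordinals such that $c(\Lambda) \le \alpha$ for the empty sequence
    $\Lambda$, and $c(\sigma\tau) < c(\sigma)$ whenever $M$'s conjecture
    on the extended sequence $\sigma\tau$ differs from its conjecture on
    $\sigma$. Because the ordinals admit no infinite strictly descending
    chains, $M$ then changes its mind only finitely often on any
    presentation.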
Abstract:
Type unions, pointer variables and function pointers are a long-standing source of subtle security bugs in C program code. Their use can lead to hard-to-diagnose crashes or exploitable vulnerabilities that allow an attacker to attain privileged access to classified data. This paper describes an automatable framework for detecting such weaknesses in C programs statically, where possible, and for generating assertions that will detect them dynamically in other cases. Based exclusively on analysis of the source code, it identifies the required assertions using a type inference system supported by a custom-made symbol table. In our preliminary findings, the type system was able to infer the correct type of unions in different scopes without manual code annotations or rewriting. Whenever an evaluation is not possible or is difficult to resolve, appropriate runtime assertions are formed and inserted into the source code. The approach is demonstrated via a prototype C analysis tool.
Abstract:
We define a semantic model for purpose, based on which purpose-based privacy policies can be meaningfully expressed and enforced in a business system. The model is based on the intuition that the purpose of an action is determined by its situation among other inter-related actions. Actions and their relationships can be modeled in the form of an action graph, which is based on the business processes in a system. Accordingly, a modal logic and the corresponding model checking algorithm are developed for formally expressing purpose-based policies and verifying whether a particular system complies with them. It is also shown through examples how typical purpose-based policies, as well as some new policy types, can be expressed and checked using our model.
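As a toy illustration (ours, not the paper's formalism) of the intuition that an action's purpose is determined by its situation among inter-related actions, the Java sketch below models an action graph as adjacency lists and treats an action as serving a purpose when some action achieving that purpose is reachable from it along process edges. The paper's modal logic and model checking algorithm are considerably more expressive than this reachability rule.

    import java.util.*;

    // Minimal action-graph sketch under an assumed rule: an action serves
    // purpose p if some action achieving p is reachable from it along
    // business-process edges. Purely illustrative of the paper's intuition.
    public class ActionGraph {
        private final Map<String, List<String>> next = new HashMap<>();
        private final Map<String, String> achieves = new HashMap<>();

        void edge(String from, String to) {
            next.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        }

        void achievesPurpose(String action, String purpose) {
            achieves.put(action, purpose);
        }

        // Depth-first reachability check for the rule described above.
        boolean serves(String action, String purpose) {
            Deque<String> stack = new ArrayDeque<>(List.of(action));
            Set<String> seen = new HashSet<>();
            while (!stack.isEmpty()) {
                String a = stack.pop();
                if (!seen.add(a)) continue;
                if (purpose.equals(achieves.get(a))) return true;
                stack.addAll(next.getOrDefault(a, List.of()));
            }
            return false;
        }

        public static void main(String[] args) {
            ActionGraph g = new ActionGraph();
            g.edge("collectEmail", "sendInvoice");
            g.edge("collectEmail", "sendAds");
            g.achievesPurpose("sendInvoice", "billing");
            // Policy "email may be collected for billing" holds here...
            System.out.println(g.serves("collectEmail", "billing")); // true
            // ...but advertising does not serve the billing purpose.
            System.out.println(g.serves("sendAds", "billing"));      // false
        }
    }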
Abstract:
The impact of digital technology within the creative industries has brought with it a range of new opportunities for collaborative, cross-disciplinary and multi-disciplinary practice. Along with these opportunities has come the need to re-evaluate how we as educators approach teaching within this new digital culture. Within the field of animation, there has been a radical shift in the expectations of students, industry and educators as animation has become central to a range of new moving-image practices. This paper interrogates the effectiveness of adopting a studio-based collaborative production project as a method for educating students within this new moving-image culture. The project was undertaken as part of the Creative Industries Transitions to New Professional Environments program at Queensland University of Technology (QUT) in Brisbane, Australia. Students studying across the Creative Industries Faculty and the Faculty of Science and Technology were invited to participate in the development of a 3D animated short film. The project offered students the opportunity to become actively involved in all stages of the creative process, allowing them to experience informal learning through collaborative professional practice. It is proposed that theoretical principles often associated with andragogy and constructivism can be used to design and deliver programs that address the emerging issues surrounding the teaching of this new moving-image culture.
Abstract:
As computer applications become more available, both technically and economically, construction project managers are increasingly able to access advanced computer tools capable of transforming the role they have typically performed. Competence in using these tools requires a dual commitment to training, from both the individual and the firm. Improving the computer skills of project managers can provide construction firms with a competitive advantage, differentiating them from others in an increasingly competitive international market. Yet few published studies have quantified the existing level of competence of construction project managers. Identifying project managers' existing computer application skills is a necessary first step towards developing more directed training that better captures the benefits of computer applications. This paper discusses the yet-to-be-released results of a series of surveys undertaken in Malaysia, Singapore, Indonesia, Australia and the United States through QUT's School of Construction Management and Property and the M.E. Rinker, Sr. School of Building Construction at the University of Florida. This international survey reviews the use of, and reported competence in using, a series of commercially available computer applications by construction project managers. The five country locations of the survey allow cross-national comparisons to be made between project managers undertaking continuing professional development programs. The results highlight a shortfall in the ability of construction project managers to capture the potential benefits of advanced computer applications, and they provide directions for targeted industry training programs. The survey also provides a unique insight into the cross-national usage of advanced computer applications and forms an important step in this ongoing joint review of technology and the construction project manager.
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
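As a rough, hand-rolled illustration of what one low-level metric of this kind could look like (our sketch, not the paper's definitions, which the authors' tool extracts from compiled bytecode), the Java fragment below computes the fraction of fields marked classified, via a hypothetical @Classified annotation, that are exposed as non-private; lower values sit closer to the 'least privilege' principle.

    import java.lang.annotation.*;
    import java.lang.reflect.*;

    // Illustrative only: a plausible low-level metric in the spirit of the
    // paper's model, not its actual metric definitions.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Classified {}

    public class MetricSketch {
        static class Patient {
            @Classified public String name;        // classified and exposed
            @Classified private String diagnosis;  // classified, encapsulated
            public int ward;                       // not classified
        }

        // Fraction of classified fields that are non-private: lower is better.
        static double classifiedExposure(Class<?> c) {
            int classified = 0, exposed = 0;
            for (Field f : c.getDeclaredFields()) {
                if (f.isAnnotationPresent(Classified.class)) {
                    classified++;
                    if (!Modifier.isPrivate(f.getModifiers())) exposed++;
                }
            }
            return classified == 0 ? 0.0 : (double) exposed / classified;
        }

        public static void main(String[] args) {
            System.out.println(classifiedExposure(Patient.class)); // 0.5
        }
    }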
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
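For context, the central quantity mentioned above has a standard definition (textbook form, not necessarily the book's exact notation):

    % Standard definition of the Vapnik-Chervonenkis (VC) dimension of a
    % hypothesis class H over a domain X (textbook form).
    \[
      \mathrm{VCdim}(H) = \max\{\, |S| : S \subseteq X \text{ finite},\;
        |\{\, h|_S : h \in H \,\}| = 2^{|S|} \,\},
    \]
    % i.e. the size of the largest finite set S that H "shatters"
    % (realises all 2^{|S|} binary labellings of S); VCdim(H) is taken to
    % be infinite if arbitrarily large finite sets are shattered.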
Abstract:
This paper presents program developers and institutional administrators with a program delivery model suitable for cross-cultural international delivery, taking students from industry through to master's-level tertiary qualifications. The model was designed to meet the needs of property professionals from an industry where technical qualifications are the norm and tertiary qualifications are emerging. A further need was to develop and deliver a program that enhanced the University's current program profile in both the domestic and international arenas. Early identification of international educational partners, industry need and the ability to service the program were vital to the successful development of the Master of Property program. The educational foundations of the program rest on educational partners, local tutorial support, international course management, cultural awareness of and in content, and online communication fora, with a delivery focus on problem-based learning, self-directed study, teamwork and the development of a global understanding and awareness of international property markets. In enrolling students from diverse cultural backgrounds with technical qualifications and/or extensive work experience, there are a number of educational barriers to be overcome before all students can successfully progress through and complete the program. These barriers disappear when the following mechanisms are employed: individual student pathways, tutorial support by qualified peers, enculturation into tertiary practice, assessment tasks that recognise cultural norms and values, and, finally, value placed on the experiential knowledge, cultural practices and belief systems of the students.
Abstract:
This paper describes in detail our Security-Critical Program Analyser (SCPA). SCPA assesses the security of a given program, based on its design or source code, with regard to data-flow-based security metrics. Furthermore, it allows software developers to generate a UML-like class diagram of their program and annotate its confidential classes, methods and attributes. SCPA is also capable of producing Java source code for the generated design of a given program. This source code can then be compiled, and the resulting Java bytecode program can be used by the tool to assess the program's overall security based on our security metrics.
Abstract:
A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "What assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.
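The one-time memory primitive itself is simple to state. The Java sketch below is purely illustrative of its interface (the names and behaviour are our assumptions): it holds two secrets and reveals exactly one before refusing further reads. As the abstract notes, a software object like this gives no real one-time guarantee, since the whole process can be copied and re-run; genuine one-time behaviour is exactly what the hardware assumption supplies.

    // Illustrative interface of a one-time memory (OTM): it stores two
    // secrets and reveals exactly one, chosen by the user, then destroys
    // both. A software object offers no real one-time guarantee, since
    // the process can be copied and re-run -- precisely the abstract's
    // point that genuine one-time programs need a hardware assumption.
    public class OneTimeMemory {
        private String[] secrets;

        public OneTimeMemory(String s0, String s1) {
            secrets = new String[] { s0, s1 };
        }

        // Returns secret b (0 or 1) on the first call; refuses afterwards.
        public String read(int b) {
            if (secrets == null) throw new IllegalStateException("already consumed");
            String out = secrets[b];
            secrets = null; // "self-destruct"
            return out;
        }

        public static void main(String[] args) {
            OneTimeMemory otm = new OneTimeMemory("left", "right");
            System.out.println(otm.read(1)); // prints: right
            // A second call, e.g. otm.read(0), would now throw:
            // the device has been consumed.
        }
    }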