347 results for Debugging in computer science.
Abstract:
This special issue aims to provide up-to-date knowledge and the latest scientific concepts and technological developments in the processing, characterization, testing, mechanics, modeling and applications of a broad range of advanced materials. The many contributors, from Denmark, Germany, UK, Iran, Saudi Arabia, Malaysia, Japan, the People’s Republic of China, Singapore, Taiwan, USA, New Zealand and Australia, present a wide range of topics including: nanomaterials, thin films and coatings, metals and alloys, composite materials, materials processing and characterization, biomaterials and biomechanics, and computational materials science and simulation. The work will therefore be of great interest to a broad spectrum of researchers and technologists.
Abstract:
We consider one-round key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how KEMs can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes that have proofs only in the random oracle model.
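As a concrete illustration of the generic approach described in this abstract, the Python sketch below shows how a one-round key exchange can be assembled from any KEM: each party encapsulates to the other's long-term public key in a single message, and the session key mixes both decapsulated secrets. The ElGamal-style toy KEM, the group parameters, and the key-derivation step are illustrative placeholders with no real security; this is a sketch of the message flow, not the paper's concrete protocol.

```python
# Illustrative sketch (not the paper's concrete scheme): a one-round key
# exchange built generically from a KEM. The KEM here is a textbook
# Diffie-Hellman/ElGamal-style KEM over a tiny demo group, so it is NOT
# secure -- it only shows the message flow.
import hashlib
import secrets

P = 0xFFFFFFFB  # toy prime modulus, for illustration only
G = 5           # toy generator

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encaps(pk):
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)                      # encapsulation sent on the wire
    ss = pow(pk, r, P)                     # shared secret held by the sender
    return ct, hashlib.sha256(str(ss).encode()).digest()

def kem_decaps(sk, ct):
    ss = pow(ct, sk, P)
    return hashlib.sha256(str(ss).encode()).digest()

# One round: A and B each encapsulate to the other's long-term public key
# and send the ciphertext; the session key mixes both decapsulated secrets.
sk_a, pk_a = kem_keygen()
sk_b, pk_b = kem_keygen()
ct_ab, k_ab = kem_encaps(pk_b)             # A -> B
ct_ba, k_ba = kem_encaps(pk_a)             # B -> A
key_a = hashlib.sha256(k_ab + kem_decaps(sk_a, ct_ba)).hexdigest()
key_b = hashlib.sha256(kem_decaps(sk_b, ct_ab) + k_ba).hexdigest()
assert key_a == key_b
```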
Abstract:
With the emergence of multi-cores into the mainstream, there is a growing need for systems to allow programmers and automated systems to reason about data dependencies and inherent parallelism in imperative object-oriented languages. In this paper we exploit the structure of object-oriented programs to abstract computational side-effects. We capture and validate these effects using a static type system, and we use them as the basis of sufficient conditions for several different data and task parallelism patterns. We complement our static type system with a lightweight runtime system to allow for parallelization in the presence of complex data flows. We have a functioning compiler and worked examples to demonstrate the practicality of our solution.
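A minimal sketch of the underlying idea (not the paper's actual type system or syntax): each task carries an abstraction of its side-effects as read and write sets over named regions, and two tasks may be scheduled in parallel only when those effects do not interfere.

```python
# Minimal sketch of effect-based parallelism checks: each task declares the
# abstract memory regions it reads and writes, and two tasks may run in
# parallel only if neither writes a region the other touches.
from dataclasses import dataclass

@dataclass(frozen=True)
class Effects:
    reads: frozenset
    writes: frozenset

def may_run_in_parallel(a: Effects, b: Effects) -> bool:
    # Non-interference: no write of one overlaps any access of the other.
    return (not a.writes & (b.reads | b.writes)
            and not b.writes & (a.reads | a.writes))

update_totals = Effects(reads=frozenset({"orders"}), writes=frozenset({"totals"}))
render_report = Effects(reads=frozenset({"orders"}), writes=frozenset({"report"}))
audit_orders  = Effects(reads=frozenset({"orders", "totals"}), writes=frozenset({"log"}))

print(may_run_in_parallel(update_totals, render_report))  # True: writes are disjoint
print(may_run_in_parallel(update_totals, audit_orders))   # False: conflict on "totals"
```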
Abstract:
In this paper you will be introduced to a number of principles which can be used to inform good teaching practice and rigorous curriculum design. Principles relate to:
* application of a common sequence of events for how learners learn;
* accommodating different learning styles;
* adopting a purposeful approach to teaching and learning;
* using assessment as a central driving force in the curriculum and as an organising structure leading to coherence of teaching and learning approach; and
* the increasing emphasis that is being placed on the development of generic graduate competencies over and above discipline content knowledge.
The principles are particularly significant in relation to adult learning. The paper will use three specific applications as illustrations to help you to learn how these principles can be applied. The illustrations are taken from a second year subject in supercomputing that uses scientific case studies. The subject has been developed (with support from Silicon Graphics Inc. and Intel) to be taught entirely via the Internet.
Abstract:
Minimizing complexity of group key exchange (GKE) protocols is an important milestone towards their practical deployment. An interesting approach to achieve this goal is to simplify the design of GKE protocols by using generic building blocks. In this paper we investigate the possibility of founding GKE protocols based on a primitive called multi key encapsulation mechanism (mKEM) and describe advantages and limitations of this approach. In particular, we show how to design a one-round GKE protocol which satisfies the classical requirement of authenticated key exchange (AKE) security, yet without forward secrecy. As a result, we obtain the first one-round GKE protocol secure in the standard model. We also conduct our analysis using recent formal models that take into account both outsider and insider attacks as well as the notion of key compromise impersonation resilience (KCIR). In contrast to previous models we show how to model both outsider and insider KCIR within the definition of mutual authentication. Our analysis additionally implies that the insider security compiler by Katz and Shin from ACM CCS 2005 can be used to achieve more than what is shown in the original work, namely both outsider and insider KCIR.
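To make the mKEM-based approach concrete, here is a hedged Python sketch (not the protocol analysed in the paper): a toy multi-recipient KEM built from an ElGamal-style KEM over an insecure demo group, used so that every member can broadcast a single encapsulation of its key share in one round, with the group key derived from all shares. As the abstract notes, nothing here provides forward secrecy, and the construction is for illustration only.

```python
# Illustrative sketch only: a one-round group key exchange assembled from a
# multi-recipient KEM (mKEM). The toy mKEM encapsulates one random share to
# every recipient via an ElGamal-style KEM over a tiny demo group; it has no
# real security.
import hashlib, secrets

P, G = 0xFFFFFFFB, 5  # toy group parameters, for illustration only

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def mkem_encaps(pks):
    """Encapsulate one random share to a list of recipient public keys."""
    share = secrets.token_bytes(32)
    cts = []
    for pk in pks:
        r = secrets.randbelow(P - 2) + 1
        pad = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
        cts.append((pow(G, r, P), bytes(a ^ b for a, b in zip(share, pad))))
    return share, cts

def mkem_decaps(sk, ct):
    c1, masked = ct
    pad = hashlib.sha256(str(pow(c1, sk, P)).encode()).digest()
    return bytes(a ^ b for a, b in zip(masked, pad))

# One round: every member broadcasts an mKEM ciphertext carrying its share;
# the group key hashes all shares together (no forward secrecy, as noted above).
members = [keygen() for _ in range(3)]
pks = [pk for _, pk in members]
shares, broadcasts = zip(*(mkem_encaps(pks) for _ in members))
group_key = hashlib.sha256(b"".join(shares)).hexdigest()

# Member 0 recovers every member's share from the broadcast ciphertexts.
recovered = [mkem_decaps(members[0][0], broadcasts[j][0]) for j in range(3)]
assert list(shares) == recovered
```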
Abstract:
We give a direct construction of a certificateless key encapsulation mechanism (KEM) in the standard model that is more efficient than the generic constructions previously proposed by Huang and Wong (ACISP 2007). Our construction builds directly on Kiltz and Galindo's KEM scheme (ACISP 2006) to obtain a certificateless KEM in the standard model that is roughly twice as efficient as the generic construction. We also address the security flaw discovered by Selvi et al. (Cryptology ePrint Archive, Report 2009/462).
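The following Python sketch is not the construction from this abstract; under toy, insecure parameters it only illustrates the defining structure of a certificateless KEM: the key generation centre (KGC) issues a partial private key for an identity, the user adds a self-chosen secret value, and decapsulation requires both, so no certificate is needed and the KGC alone cannot decapsulate.

```python
# Hedged sketch of the certificateless-KEM structure (not the paper's scheme):
# the full private key combines a KGC-issued partial key with a user-chosen
# secret. Toy group, no real security.
import hashlib, secrets

P, G = 0xFFFFFFFB, 5  # toy parameters, for illustration only

def h(x): return hashlib.sha256(str(x).encode()).digest()

# KGC setup and partial-private-key extraction for an identity string.
msk = secrets.randbelow(P - 2) + 1
def extract_partial_key(identity):
    d = int.from_bytes(h(identity + str(msk)), "big") % (P - 1) + 1
    return d, pow(G, d, P)   # (partial private key, identity public component)

# User side: combine the KGC partial key with a self-generated secret value.
d_id, pk_id = extract_partial_key("alice@example.org")   # hypothetical identity
x = secrets.randbelow(P - 2) + 1
pk_user = pow(G, x, P)
full_pk = (pk_id, pk_user)

def encaps(pk):
    pid, puser = pk
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)
    # The session key binds both key components, so both secrets are needed.
    return ct, h((pow(pid, r, P), pow(puser, r, P)))

def decaps(d_id, x, ct):
    return h((pow(ct, d_id, P), pow(ct, x, P)))

ct, key = encaps(full_pk)
assert key == decaps(d_id, x, ct)
```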
Abstract:
Recommender systems are widely used online to help users find products or items they may be interested in, based on what is known about them from their profile. Often, however, a user profile contains little information, which makes it difficult for a recommender system to make quality recommendations. This is known as the cold-start problem. Here we investigate using association rules as a source of information with which to expand a user profile and thus avoid this problem. Our experiments show that association rules can noticeably improve the performance of a recommender system in the cold-start situation. Furthermore, we show that this improvement can be achieved using non-redundant rule sets, demonstrating that non-redundant rules do not cause a loss of information and are just as informative as a set of association rules that contains redundancy.
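As a rough illustration of the approach (the item names, thresholds and rule miner are invented for this sketch, not taken from the paper), the fragment below mines single-antecedent association rules from other users' histories and uses them to expand a sparse cold-start profile before recommendation.

```python
# Toy sketch: mine simple association rules from other users' histories and
# use them to expand a sparse profile before recommending.
from itertools import combinations
from collections import Counter

histories = [
    {"camera", "tripod", "sd_card"},
    {"camera", "tripod"},
    {"camera", "sd_card"},
    {"laptop", "mouse"},
]

def mine_rules(histories, min_support=2, min_confidence=0.6):
    """Return single-antecedent rules a -> b with enough support and confidence."""
    item_count = Counter(i for h in histories for i in h)
    pair_count = Counter(p for h in histories for p in combinations(sorted(h), 2))
    rules = []
    for (a, b), n in pair_count.items():
        if n < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            if n / item_count[ante] >= min_confidence:
                rules.append((ante, cons, n / item_count[ante]))
    return rules

def expand_profile(profile, rules):
    """Add consequents of rules whose antecedent the user already has."""
    return profile | {cons for ante, cons, _ in rules if ante in profile}

rules = mine_rules(histories)
cold_start_profile = {"camera"}                    # sparse profile: one known item
print(expand_profile(cold_start_profile, rules))   # adds tripod and sd_card
```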
Abstract:
Personalised social matching systems can be seen as recommender systems that recommend people to other people in a social network. However, with the rapid growth in the number of users of social networks, and in the information a social matching system requires about those users, conventional recommender system techniques have become inadequate for matching users in social networks. This paper presents a hybrid social matching system that takes advantage of both collaborative and content-based recommendation. A clustering technique is used to reduce the number of users the matching system needs to consider and to overcome other problems from which social matching systems suffer, such as the cold-start problem caused by the absence of implicit information about a new user. The proposed system has been evaluated on a dataset obtained from an online dating website. Empirical analysis shows that the accuracy of the matching process increases when both user information (explicit data) and user behavior (implicit data) are used.
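A hedged sketch of the hybrid idea (the features, the use of scikit-learn's KMeans, and the scoring rule are illustrative assumptions, not the paper's system): explicit profile attributes drive a clustering step that shrinks the candidate set, and implicit behaviour ranks the candidates inside the seeker's cluster.

```python
# Illustrative sketch only: cluster users on explicit profile attributes so the
# matcher only scores candidates inside the seeker's cluster, then rank those
# candidates by implicit behaviour (here, overlap of liked profiles).
import numpy as np
from sklearn.cluster import KMeans

profiles = np.array([          # explicit data: [age, distance_km], made-up values
    [25, 5], [27, 8], [26, 6],
    [40, 50], [42, 55], [41, 52],
], dtype=float)
liked = [                      # implicit data: indices of profiles each user liked
    {1, 2}, {0, 2}, {0}, {4}, {3, 5}, {4},
]

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

def recommend(seeker, top_k=2):
    candidates = [u for u in range(len(profiles))
                  if u != seeker and clusters[u] == clusters[seeker]]
    # Rank within the cluster by shared implicit behaviour.
    candidates.sort(key=lambda u: len(liked[seeker] & liked[u]), reverse=True)
    return candidates[:top_k]

print(recommend(0))   # candidates drawn only from user 0's cluster
```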
Abstract:
As organizations reach higher levels of business process management maturity, they often find themselves maintaining repositories of hundreds or even thousands of process models, representing valuable knowledge about their operations. Over time, process model repositories tend to accumulate duplicate fragments (also called clones) as new process models are created or extended by copying and merging fragments from other models. This calls for methods to detect clones in process models, so that these clones can be refactored as separate subprocesses in order to improve maintainability. This paper presents an indexing structure to support the fast detection of clones in large process model repositories. The proposed index is based on a novel combination of a method for process model decomposition (specifically the Refined Process Structure Tree) with established graph canonization and string matching techniques. Experiments show that the algorithm scales to repositories with hundreds of models. The experimental results also show that a significant number of non-trivial clones can be found in process model repositories taken from industrial practice.
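A minimal sketch of the indexing idea (the fragment representation and canonical form are simplified stand-ins; the paper decomposes models with the Refined Process Structure Tree and uses graph canonization): each fragment is reduced to an order-independent string and hashed into an index, so clones surface as index buckets populated from more than one place.

```python
# Minimal sketch: once models are decomposed into fragments (assumed done
# upstream), reduce each fragment to a canonical string and index it; buckets
# with more than one entry are candidate clones.
from collections import defaultdict

def canonical(fragment_edges):
    """Order-independent string form of a fragment given as labelled edges."""
    return "|".join(sorted(f"{src}->{dst}" for src, dst in fragment_edges))

index = defaultdict(list)

def add_fragment(model_id, fragment_id, edges):
    index[canonical(edges)].append((model_id, fragment_id))

# Two models that share the same approval fragment, plus one unique fragment.
add_fragment("claims_v1", "F1", [("check claim", "approve"), ("approve", "pay")])
add_fragment("claims_v2", "F7", [("approve", "pay"), ("check claim", "approve")])
add_fragment("claims_v2", "F9", [("reject", "notify customer")])

clones = {key: locs for key, locs in index.items() if len(locs) > 1}
print(clones)   # the shared approval fragment is reported as a clone
```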
Abstract:
The concept of organismic asymmetry refers to an inherent bias for seeking explanations of human performance and behaviour based on internal mechanisms and referents. A weakness in this tendency is a failure to consider the performer–environment relationship as the relevant scale of analysis. In this paper we elucidate the philosophical roots of the bias and discuss implications of organismic asymmetry for sport science and performance analysis, highlighting examples in psychology, sports medicine and biomechanics.
Abstract:
This paper proposes, by synthesizing previous design science (DS) methodological literature, a structured and detailed DS Roadmap for the conduct of DS research. The Roadmap is a general guide for researchers to carry out DS research, suggesting reasonably detailed activities. Though highly tentative, it is believed the Roadmap usefully inter-relates many otherwise seemingly disparate, overlapping or conflicting concepts. It is hoped the DS Roadmap will aid in the planning, execution and communication of DS research, while also attracting constructive criticism, improvements and extensions. A key distinction of the Roadmap from other DS research methods is its breadth of coverage of DS research aspects and activities, that is, its detail and scope. We demonstrate and evaluate the Roadmap by presenting two case studies in terms of the DS Roadmap.
Abstract:
In many prediction problems, including those that arise in computer security and computational finance, the process generating the data is best modelled as an adversary with whom the predictor competes. Even decision problems that are not inherently adversarial can usefully be modelled in this way, since the assumptions are sufficiently weak that effective prediction strategies for adversarial settings are very widely applicable.
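For readers who want a concrete example of such a strategy, the sketch below (not taken from the excerpt) implements the multiplicative-weights / Hedge rule for prediction with expert advice, one standard algorithm whose regret bound holds even when the per-round losses are chosen adversarially; the loss stream and learning rate here are arbitrary demo values.

```python
# Hedge / multiplicative-weights sketch for adversarial prediction with
# expert advice.
import math
import random

def hedge(num_experts, loss_stream, eta=0.5):
    """Prediction with expert advice; losses may be chosen adversarially."""
    weights = [1.0] * num_experts
    total_loss = 0.0
    for losses in loss_stream:                     # losses[i] assumed in [0, 1]
        total = sum(weights)
        probs = [w / total for w in weights]       # play the weighted mixture
        total_loss += sum(p * l for p, l in zip(probs, losses))
        weights = [w * math.exp(-eta * l)          # downweight costly experts
                   for w, l in zip(weights, losses)]
    return total_loss, weights

# Demo with arbitrary random losses; an adversary could instead pick each
# round's losses after seeing the learner's current distribution, and the
# regret guarantee of the rule would still apply.
rng = random.Random(0)
stream = [[rng.random(), rng.random()] for _ in range(100)]
loss, final_weights = hedge(2, stream)
print(round(loss, 2), [round(w, 4) for w in final_weights])
```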