103 results for COMPUTER SCIENCE, THEORY


Relevance:

90.00%

Publisher:

Abstract:

A generic architecture for implementing the advanced encryption standard (AES) encryption algorithm in silicon is proposed. This allows the instantiation of a wide range of chip specifications, with these taking the form of semiconductor intellectual property (IP) cores. Cores implemented from this architecture can perform both encryption and decryption and support four modes of operation: (i) electronic codebook mode; (ii) output feedback mode; (iii) cipher block chaining mode; and (iv) ciphertext feedback mode. Chip designs can also be generated to cover all three AES key lengths, namely 128 bits, 192 bits and 256 bits. On-the-fly generation of the round keys required during decryption is also possible. The general, flexible and multi-functional nature of the approach described contrasts with previous designs which, to date, have been focused on specific implementations. The presented ideas are demonstrated by implementation in FPGA technology. However, the architecture and IP cores derived from this are easily migratable to other silicon technologies including ASIC and PLD, and are capable of covering the cryptographic requirements of a wide range of modern communication systems. Moreover, the designs produced have a gate count and throughput comparable with or better than the previous one-off solutions.
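
A minimal software sketch of the four modes and three key lengths listed above, written with the third-party Python `cryptography` package; it illustrates the cipher modes only, not the silicon architecture or IP cores that are the paper's actual contribution, and the key, IV and sample block are illustrative.

```python
# Software-only illustration of the AES modes/key lengths (not the hardware design).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                # 32/24/16 bytes -> AES-256/192/128
iv = os.urandom(16)                 # 128-bit IV for the feedback/chaining modes
plaintext = b"sixteen byte msg"     # exactly one 128-bit block, so no padding

for mode in (modes.ECB(), modes.CBC(iv), modes.OFB(iv), modes.CFB(iv)):
    cipher = Cipher(algorithms.AES(key), mode)
    enc = cipher.encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    dec = cipher.decryptor()
    assert dec.update(ct) + dec.finalize() == plaintext
    print(type(mode).__name__, ct.hex())
```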

Relevance:

90.00%

Publisher:

Abstract:

A standard problem within universities is that of teaching space allocation, which can be thought of as the assignment of rooms and times to various teaching activities. The focus is usually on courses that are expected to fit into one room. However, it can also happen that the course will need to be broken up, or 'split', into multiple sections. A lecture might be too large to fit into any one room. Another common example is that of seminars or tutorials. Although hundreds of students may be enrolled on a course, it is often subdivided into particular types and sizes of events dependent on the pedagogic requirements of that particular course. Typically, decisions as to how to split courses need to be made within the context of limited space availability. Institutions do not have an unlimited number of teaching rooms, and need to effectively use those that they do have. The efficiency of space usage is usually measured by the overall 'utilisation', which is basically the fraction of the available seat-hours that are actually used. A multi-objective optimisation problem naturally arises, with a trade-off between satisfying preferences on splitting, a desire to increase utilisation, and also to satisfy other constraints such as those based on event location and timetabling conflicts. In this paper, we explore such trade-offs. The explorations themselves are based on a local search method that attempts to optimise the space utilisation by means of a 'dynamic splitting' strategy. The local moves are designed to improve utilisation and satisfy the other constraints, but are also allowed to split, and un-split, courses so as to simultaneously meet the splitting objectives.
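
A back-of-the-envelope sketch of the 'utilisation' measure described above, i.e. the fraction of available seat-hours actually used; the room capacities, section sizes and timeslot count below are hypothetical.

```python
def utilisation(scheduled_events, room_capacities, timeslots):
    """Overall utilisation = used seat-hours / available seat-hours."""
    used = sum(size * hours for size, hours in scheduled_events)
    available = sum(room_capacities) * timeslots
    return used / available

# A 180-student course 'split' into three 60-student, 1-hour sections so that
# each section fits the 60-seat room (illustrative figures only).
sections = [(60, 1), (60, 1), (60, 1)]
print(f"{utilisation(sections, room_capacities=[60, 80], timeslots=40):.1%}")
```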

Relevance:

90.00%

Publisher:

Abstract:

A method extending narrative analysis with grounded theory analysis is proposed to bridge the gap between breadth and depth in IS narrative research. The purpose of the method is not to develop a theory but to make narrative analysis more accessible, transparent and accountable, and the resultant narrative more contextually grounded. The method is aimed particularly at inexperienced narrative researchers who currently lack guidance through the complexity of narrative analysis, but it may also benefit experienced narrative researchers who are unfamiliar with the applicability of grounded theory tools and techniques in this area.

Relevance:

90.00%

Publisher:

Abstract:

WebCom-G is a fledgling Grid Operating System, designed to provide independent service access through interoperability with existing middlewares. It offers an expressive programming model that automatically handles task synchronisation (load balancing, fault tolerance, and task allocation are handled at the WebCom-G system level) without burdening the application writer. These characteristics, together with the ability of its computing model to mix evaluation strategies to match the characteristics of the geographically dispersed facilities and the overall problem-solving environment, make WebCom-G a promising grid middleware candidate.

Relevance:

90.00%

Publisher:

Abstract:

Information retrieval in the age of Internet search engines has become part of ordinary discourse and everyday practice: "Google" is a verb in common usage. Thus far, more attention has been given to practical understanding of information retrieval than to a full theoretical account. In Human Information Retrieval, Julian Warner offers a comprehensive overview of information retrieval, synthesizing theories from different disciplines (information and computer science, librarianship and indexing, and information society discourse) and incorporating such disparate systems as WorldCat and Google into a single, robust theoretical framework. There is a need for such a theoretical treatment, he argues, one that reveals the structure and underlying patterns of this complex field while remaining congruent with everyday practice. Warner presents a labor theoretic approach to information retrieval, building on his previously formulated distinction between semantic and syntactic mental labor, arguing that the description and search labor of information retrieval can be understood as both semantic and syntactic in character. Warner's information science approach is rooted in the humanities and the social sciences but informed by an understanding of information technology and information theory. The chapters offer a progressive exposition of the topic, with illustrative examples to explain the concepts presented. Neither narrowly practical nor largely speculative, Human Information Retrieval meets the contemporary need for a broader treatment of information and information systems.

Relevance:

90.00%

Publisher:

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means for identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are then outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
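
The following scikit-learn sketch (the library and toy data are assumptions, not the article's) shows two of the surveyed ingredients in miniature: a kernel model identified by support vector regression, with the kernel width selected by cross-validation as a generalisation criterion.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Toy non-linear system observed through a finite, noisy dataset.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# Model selection by cross-validation over the RBF kernel width.
candidates = [0.1, 0.5, 1.0, 2.0]
scores = {
    g: cross_val_score(SVR(kernel="rbf", gamma=g, C=10.0), X, y,
                       cv=5, scoring="neg_mean_squared_error").mean()
    for g in candidates
}
best_gamma = max(scores, key=scores.get)
print("selected gamma:", best_gamma)
```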

Relevance:

90.00%

Publisher:

Abstract:

This article presents a novel classification of wavelet neural networks based on the orthogonality/non-orthogonality of neurons and the type of nonlinearity employed. On the basis of this classification, different network types are studied and their characteristics illustrated by means of simple one-dimensional nonlinear examples. For multidimensional problems, which are affected by the curse of dimensionality, the idea of spherical wavelet functions is considered. The behaviour of these networks is also studied for the modelling of a low-dimensional map.
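
A minimal one-dimensional sketch of a non-orthogonal wavelet network of the kind classified above, assuming a Mexican-hat mother wavelet on a fixed grid of translations with a single dilation, and output weights fitted by least squares; all modelling choices here are illustrative.

```python
import numpy as np

def ricker(t):
    """Mexican-hat (Ricker) mother wavelet."""
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

# Simple one-dimensional nonlinear target, as in the examples mentioned above.
x = np.linspace(-4, 4, 400)
y = np.sin(2 * x) * np.exp(-0.1 * x**2)

# Non-orthogonal wavelet 'neurons': fixed translations/dilation, linear weights.
centres = np.linspace(-4, 4, 15)
dilation = 0.5
Phi = ricker((x[:, None] - centres[None, :]) / dilation)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("RMS error:", np.sqrt(np.mean((y - Phi @ w) ** 2)))
```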

Relevance:

90.00%

Publisher:

Abstract:

This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve the network performance. This is achieved here by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by presenting simulation results from four different examples.
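
The sketch below conveys only the general idea of treating the linear output weights as dependent parameters, so that the search runs over the reduced (nonlinear-parameter) space; it relies on SciPy's stock Levenberg-Marquardt solver with a numerically approximated Jacobian rather than the Jacobian proposed in the paper, and the network size and data are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X).ravel()
n_hidden = 5

def hidden(params):
    """Sigmoidal hidden layer; params = [w_1..w_H, b_1..b_H]."""
    w, b = params[:n_hidden], params[n_hidden:]
    return 1.0 / (1.0 + np.exp(-(X * w + b)))        # shape (N, H)

def residual(params):
    H = hidden(params)
    # Output weights are dependent parameters: solved exactly for the
    # current hidden-layer parameters instead of being optimised jointly.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return H @ beta - y

sol = least_squares(residual, rng.standard_normal(2 * n_hidden), method="lm")
print("final SSE:", np.sum(sol.fun ** 2))
```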

Relevance:

90.00%

Publisher:

Abstract:

This paper introduces a novel modelling framework for identifying dynamic models of systems that are under feedback control. These models are identified under closed-loop conditions and produce a joint representation that includes both the plant and controller models in state space form. The joint plant/controller model is identified using subspace model identification (SMI), after which the plant model is separated from the identified joint model. Compared to previous research, this work (i) proposes a new modelling framework for identifying closed-loop systems, (ii) introduces a generic structure to represent the controller, and (iii) explains how the new framework gives rise to a simplified determination of the plant models. In contrast, the use of the conventional modelling approach renders the separation of the plant model a difficult task. The benefits of the new modelling method are demonstrated using a number of application studies.
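
To make the joint plant/controller representation concrete, the numpy sketch below assembles the closed-loop state-space matrices for a hypothetical discrete-time plant under output feedback; the plant and controller matrices and the controller structure are assumptions, and the sketch does not implement subspace model identification itself.

```python
import numpy as np

# Hypothetical plant:       x+ = A x + B u,   y = C x
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Hypothetical controller:  z+ = F z + G e,   u = Hc z + K e,   e = r - y
F = np.array([[1.0]])
G = np.array([[0.1]])
Hc = np.array([[0.5]])
K = np.array([[1.2]])

# Joint representation with state [x; z] driven by the reference r.
# Substituting u = Hc z + K (r - C x) gives the closed-loop matrices:
A_cl = np.block([[A - B @ K @ C, B @ Hc],
                 [-G @ C,        F]])
B_cl = np.vstack([B @ K, G])
C_cl = np.hstack([C, np.zeros((1, 1))])

# SMI applied to (r, y) data would recover a model of this joint system,
# from which the plant model is then separated.
print("closed-loop poles:", np.linalg.eigvals(A_cl))
```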

Relevance:

90.00%

Publisher:

Abstract:

The divide-and-conquer approach of local model (LM) networks is a common engineering approach to the identification of a complex nonlinear dynamical system. The global representation is obtained from the weighted sum of locally valid, simpler sub-models defined over small regions of the operating space. Constructing such networks requires the determination of appropriate partitioning and the parameters of the LMs. This paper focuses on the structural aspect of LM networks. It compares the computational requirements and performances of the Johansen and Foss (J&F) and LOLIMOT tree-construction algorithms. Several useful and important modifications to each algorithm are proposed. The modelling performances are evaluated using real data from a pilot plant of a pH neutralization process. Results show that while J&F achieves a more accurate nonlinear representation of the pH process, LOLIMOT requires significantly less computational effort.
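
A minimal sketch of the local model network representation described above: the global output is a normalised, validity-weighted sum of local affine models over a one-dimensional operating space. The partition (centres and width) is fixed by hand here, whereas the algorithms compared in the paper, J&F and LOLIMOT, construct the partition automatically; all figures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 14.0, 300)                 # pH-like operating range (illustrative)
y = np.tanh(x - 7.0) + 0.05 * rng.standard_normal(x.size)

centres = np.array([2.0, 5.0, 7.0, 9.0, 12.0])  # hand-picked partition
width = 1.5

# Normalised Gaussian validity functions over the operating space.
rho = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / width) ** 2)
rho /= rho.sum(axis=1, keepdims=True)

# Weighted sum of local affine models -> linear-in-the-parameters regression.
Phi = np.hstack([rho, rho * x[:, None]])        # columns: rho_i and rho_i * x
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("RMS error:", np.sqrt(np.mean((y - Phi @ theta) ** 2)))
```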