852 results for User interfaces (Computer systems)
Abstract:
Gemstone Team Small Business Solutions
Abstract:
An abstract of this work will be presented at the Compiler, Architecture and Tools Conference (CATC), Intel Development Center, Haifa, Israel, on November 23, 2015.
Abstract:
While the number of traditional laptops and computers sold has dipped slightly year over year, manufacturers have developed new hybrid laptops with touch screens to build on the tactile trend. This market is moving quickly to make touch the rule rather than the exception, and sales of these devices have tripled since the launch of Windows 8 in 2012, reaching more than sixty million units sold in 2015. Unlike tablets, which benefit from easy-to-use applications specially designed for tactile interaction, hybrid laptops are intended to be used with regular user interfaces. Hence, one could ask whether tactile interactions are suited to every task and activity performed with such interfaces. Since hybrid laptops are increasingly used in educational settings, this study focuses on information search tasks, which are commonly performed for learning purposes. It is hypothesized that tasks requiring complex and/or less common gestures will increase users' cognitive load and impair task performance in terms of efficacy and efficiency. A study was carried out in a usability laboratory with 30 participants whose prior experience with tactile devices was controlled. They were asked to perform information search tasks on an online encyclopaedia using only the touch screen of a hybrid laptop. Tasks were selected with respect to their level of cognitive demand (amount of information that had to be maintained in working memory) and the complexity of the gestures needed (left and/or right clicks, zoom, text selection and/or input), and grouped into four sets accordingly. Task performance was measured by the number of tasks completed successfully (efficacy) and the time spent on each task (efficiency). Perceived cognitive load was assessed with a questionnaire given after each set of tasks. An eye-tracking device was used to monitor users' attention allocation and to provide objective cognitive load measures based on pupil dilation and the Index of Cognitive Activity. Each experimental run took approximately one hour. The results of this within-subjects design indicate that tasks involving complex gestures led to lower efficacy, especially when the tasks were cognitively demanding. Regarding efficiency, there were no significant differences between sets of tasks, except that tasks with low cognitive demand and complex gestures required more time to complete. Surprisingly, users who reported the most experience with tactile devices spent more time than less frequent users. Cognitive load measures indicate that participants reported devoting more mental effort to the interaction when they had to use complex gestures.
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem for distributing unstructured meshes onto parallel computers. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut edge weight in the graph, with the aim of minimising the parallel communication overhead, but recently there has been a perceived need to take into account the communications network of the parallel machine. For example, the increasing use of SMP clusters (systems of multiprocessor compute nodes with very fast intra-node communications but relatively slow inter-node networks) suggests the use of hierarchical network models. Indeed, this requirement is exacerbated by early experiments with meta-computers (multiple supercomputers combined together, in extreme cases over inter-continental networks). In this paper, therefore, we modify a multilevel algorithm in order to minimise a cost function based on a model of the communications network. Several network models and variants of the algorithm are tested, and we establish that it is possible to successfully guide the optimisation to reflect the chosen architecture.
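To make the cost function concrete, the following minimal C sketch (illustrative only, not the algorithm from the paper) shows a network-weighted cut cost: each cut edge is charged not just its weight but also the modelled communication cost between the processors owning its endpoints, so a hierarchical or meta-computer network model can steer the refinement. The processor count, cost values and names are assumptions.

#include <stddef.h>

/* Illustrative network model: net_cost[p][q] is an assumed relative cost of
   sending one unit of data between processors p and q (e.g. 1 for links inside
   an SMP node, 10 for inter-node links, 100 for wide-area links in a
   meta-computer).  All values here are placeholders, not from the paper. */
#define NPROCS 8
static const int net_cost[NPROCS][NPROCS] = {{0}};   /* filled from the chosen network model */

/* A weighted graph edge: its two endpoint vertices and its weight. */
typedef struct { int u, v, w; } edge_t;

/* Network-weighted cut cost: for every edge whose endpoints are assigned to
   different parts, add the edge weight multiplied by the modelled network cost
   between the owning processors.  With net_cost[p][q] == 1 for all p != q this
   reduces to the classical cut edge weight. */
long cut_cost(const edge_t *edges, size_t nedges, const int *part)
{
    long cost = 0;
    for (size_t i = 0; i < nedges; i++) {
        int pu = part[edges[i].u];
        int pv = part[edges[i].v];
        if (pu != pv)
            cost += (long)edges[i].w * net_cost[pu][pv];
    }
    return cost;
}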
Abstract:
This paper describes an interactive parallelisation toolkit that can be used to generate parallel code suitable for either a distributed memory system (using message passing) or a shared memory system (using OpenMP). This study focuses on how the toolkit is used to parallelise a complex heterogeneous ocean modelling code within a few hours for use on a shared memory parallel system. The generated parallel code is essentially the serial code with OpenMP directives added to express the parallelism. The results show that substantial gains in performance can be achieved over the single thread version with very little effort.
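The abstract does not reproduce the generated code, but the style it describes, serial code with OpenMP directives added to express the parallelism, looks roughly like the hypothetical fragment below; the loop, array names and computation are assumptions rather than excerpts from the ocean modelling code.

#define IMAX 512
#define JMAX 512

/* Hypothetical field-update loop: the serial loop nest is left intact and an
   OpenMP directive distributes the outer iterations over the available
   threads, which is the style of output described above. */
void update_field(double u[IMAX][JMAX], const double f[IMAX][JMAX], double dt)
{
    int i, j;
    #pragma omp parallel for private(j)
    for (i = 1; i < IMAX - 1; i++)
        for (j = 1; j < JMAX - 1; j++)
            u[i][j] += dt * f[i][j];
}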
Abstract:
The open service network for marine environmental data (NETMAR) project uses semantic web technologies in its pilot system, which aims to allow users to search, download and integrate satellite, in situ and model data from open ocean and coastal areas. The semantic web is an extension of the fundamental ideas of the World Wide Web, building a web of data through annotation of metadata and data with hyperlinked resources. Within the framework of the NETMAR project, an interconnected semantic web resource was developed to aid data and web service discovery and to validate Open Geospatial Consortium Web Processing Service orchestration. A second semantic resource was developed to support interoperability of coastal web atlases across jurisdictional boundaries. This paper outlines the approach taken to producing the resource registry used within the NETMAR project and demonstrates the use of these semantic resources to support user interactions with systems. Such interconnected semantic resources improve the ability to share and disseminate data by facilitating interoperability between data providers. The formal representation of geospatial knowledge to advance geospatial interoperability is a growing research area, and tools and methods such as those outlined in this paper have the potential to support these efforts.
Abstract:
Models and software products have been developed for modelling, simulation and prediction of different correlations in materials science, including 1. the correlation between processing parameters and properties in titanium alloys and γ-titanium aluminides; 2. time–temperature–transformation (TTT) diagrams for titanium alloys; 3. corrosion resistance of titanium alloys; 4. surface hardness and microhardness profiles of nitrocarburised layers; 5. fatigue stress–life (S–N) diagrams for Ti–6Al–4V alloys. The programs are based on trained artificial neural networks. For each particular case an appropriate combination of inputs and outputs is chosen, and very good model performance is achieved. Graphical user interfaces (GUIs) are created for easy use of the models, and interactive text versions have also been developed. The models are combined and integrated into a software package built in a modular fashion. The software products are available in versions for different platforms, including Windows 95/98/2000/NT, UNIX and Apple Macintosh. A description of the software products is given to demonstrate that they are convenient and powerful tools for practical applications in solving various problems in materials science. Examples of optimisation of alloy compositions, processing parameters and working conditions are illustrated, and an option for using the software in a materials selection procedure is described.
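As an illustration of what such a trained-network model computes at prediction time (the actual inputs, layer sizes and weights of the package are not given in the abstract and are assumed here), a single-hidden-layer feed-forward pass in C might look like this sketch.

#include <math.h>

#define N_IN  4   /* e.g. composition, temperature, time, ... (assumed) */
#define N_HID 8   /* assumed hidden-layer size */

/* Minimal one-hidden-layer feed-forward pass mapping processing parameters to
   a single predicted property (e.g. hardness).  The weight arrays w1, b1, w2
   and bias b2 would come from the trained network. */
double predict(const double in[N_IN],
               const double w1[N_HID][N_IN], const double b1[N_HID],
               const double w2[N_HID], double b2)
{
    double out = b2;
    for (int h = 0; h < N_HID; h++) {
        double a = b1[h];
        for (int i = 0; i < N_IN; i++)
            a += w1[h][i] * in[i];
        out += w2[h] * tanh(a);   /* assumed activation function */
    }
    return out;
}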
Abstract:
This paper, chosen as a best paper from the 2004 SAMOS Workshop on Computer Systems, describes a novel, efficient methodology for automatically creating embedded DSP computer systems. The novelty is that embedded electronic signal processing systems, such as radar or sonar, can now be designed by anyone from the algorithm level, i.e. no low-level system design experience is required, whilst still achieving low, controllable implementation overheads and high real-time performance. In the chosen design example, a bank of Normalised Lattice Filter (NLF) components is created which achieves a four-fold reduction in the required processing resource with no performance decrease.
Abstract:
Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services for different users based on their quality requirements is increasingly becoming a pressing issue. For this, routers need the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their non-deterministic performance. Although content addressable memories (CAMs) are favoured by technology vendors due to their deterministic high lookup rates, they suffer from the problems of high power consumption and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multilevel cutting of the classification space into smaller spaces. The solution utilizes the geometrical distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
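A minimal sketch of the general idea of cutting the classification space, not the authors' architecture: the rule space is cut recursively into sub-regions until each leaf holds only a small residual rule set, which is the part that could then be placed in a CAM for a deterministic lookup. The two-field rules, the fixed four-way cut and all names are assumptions.

/* Two-field rule (illustrative): ranges on, say, source and destination port. */
typedef struct { unsigned lo[2], hi[2]; int action; } rule_t;

typedef struct node {
    int dim;                 /* dimension cut at this node, or -1 for a leaf */
    unsigned lo, hi;         /* interval covered along that dimension */
    struct node *child[4];   /* fixed four-way cut, for the sketch only */
    const rule_t **rules;    /* leaf: the few rules overlapping this region */
    int nrules;
} node_t;

/* Leaf search over the small residual rule set; in a mixed design this is the
   part that could be held in a CAM for a deterministic lookup. */
static int match_leaf(const node_t *n, const unsigned key[2])
{
    for (int i = 0; i < n->nrules; i++) {
        const rule_t *r = n->rules[i];
        if (key[0] >= r->lo[0] && key[0] <= r->hi[0] &&
            key[1] >= r->lo[1] && key[1] <= r->hi[1])
            return r->action;
    }
    return -1;   /* default action */
}

/* Classification walks the cutting tree: at each internal node the key picks
   one of the equal-sized sub-regions along the cut dimension. */
int classify(const node_t *n, const unsigned key[2])
{
    while (n->dim >= 0) {
        unsigned span = (n->hi - n->lo + 1) / 4;
        n = n->child[(key[n->dim] - n->lo) / span];
    }
    return match_leaf(n, key);
}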
Abstract:
Generation of hardware architectures directly from dataflow representations is increasingly being considered as research moves towards system-level design methodologies. Creation of networks of IP cores to implement actor functionality is a common approach to the problem, but the memory sub-systems produced using these techniques are often inefficiently utilised. This paper explores some of the issues in memory organisation and accesses that arise when developing systems from these high-level representations. Using a template matching design study, challenges such as modelling memory reuse and minimising buffer requirements are examined, yielding designs with significantly lower memory requirements and fewer costly off-chip memory accesses.
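As a rough illustration of the kind of memory reuse at stake (assumed names and sizes, not the paper's design), a template matching scan can keep the last K image rows in an on-chip line buffer so that each pixel is fetched from off-chip memory only once and then reused K×K times by the sliding window.

#define W 352   /* illustrative frame width (CIF) */
#define K 8     /* illustrative template size */

extern unsigned char read_pixel(int x, int y);                 /* off-chip access (assumed) */
extern void use_window(unsigned char win[K][K], int x, int y); /* matching step (assumed) */

/* On-chip line buffer holding the K most recent rows: each pixel is fetched
   from external memory exactly once and reused K*K times by the sliding
   template window -- the kind of reuse the memory organisation must capture. */
void scan(int height)
{
    static unsigned char lines[K][W];
    unsigned char win[K][K];

    for (int y = 0; y < height; y++)
        for (int x = 0; x < W; x++) {
            lines[y % K][x] = read_pixel(x, y);    /* single off-chip read per pixel */
            if (y >= K - 1 && x >= K - 1) {
                for (int r = 0; r < K; r++)        /* assemble window from the buffer */
                    for (int c = 0; c < K; c++)
                        win[r][c] = lines[(y - K + 1 + r) % K][x - K + 1 + c];
                use_window(win, x, y);
            }
        }
}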
Abstract:
Massively parallel networks of highly efficient, high performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches and produce results comparable to handcrafted implementations.
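A highly simplified sketch of what a greedy list scheduler of this general kind does (the priority rule, the absence of dependence checking and all names are assumptions, not the authors' technique): tasks are picked in priority order and each is started on the processor that becomes free earliest, so that independent tasks end up interleaved in the gaps that would otherwise become pipeline bubbles.

#define MAX_TASKS 64
#define NPROC     16

typedef struct {
    int latency;     /* cycles occupied on the datapath */
    int priority;    /* e.g. critical-path length (assumed priority rule) */
    int scheduled;
    int start, proc; /* resulting schedule */
} task_t;

/* Greedy list scheduling: repeatedly pick the unscheduled task with the
   highest priority and start it on the processor that is free earliest.
   Dependence checking is omitted to keep the sketch short. */
void list_schedule(task_t tasks[], int ntasks)
{
    int free_at[NPROC] = {0};   /* cycle at which each processor becomes idle */

    for (int n = 0; n < ntasks; n++) {
        int best = -1;
        for (int t = 0; t < ntasks; t++)          /* highest-priority unscheduled task */
            if (!tasks[t].scheduled &&
                (best < 0 || tasks[t].priority > tasks[best].priority))
                best = t;

        int proc = 0;                              /* earliest-free processor */
        for (int p = 1; p < NPROC; p++)
            if (free_at[p] < free_at[proc]) proc = p;

        tasks[best].start = free_at[proc];
        tasks[best].proc = proc;
        tasks[best].scheduled = 1;
        free_at[proc] += tasks[best].latency;
    }
}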
Abstract:
Realising high performance image and signal processing applications on modern FPGAs presents a challenging implementation problem due to the large data frames streaming through these systems. Specifically, to meet the high bandwidth and data storage demands of these applications, complex hierarchical memory architectures must be manually specified at the Register Transfer Level (RTL). Automated approaches which convert high-level operation descriptions, for instance in the form of C programs, to an FPGA architecture are unable to realise such architectures automatically. This paper presents a solution to this problem: a compiler that automatically derives such memory architectures from a C program. By transforming the input C program to a unique dataflow modelling dialect, known as Valved Dataflow (VDF), a mapping and synthesis approach developed for this dialect can be exploited to automatically create high performance image and video processing architectures. Implementations of memory-intensive C kernels for Motion Estimation (CIF frames at 30 fps), Matrix Multiplication (128x128 @ 500 iter/sec) and Sobel Edge Detection (720p @ 30 fps), which are unrealisable by current state-of-the-art C-based synthesis tools, are automatically derived from a C description of the algorithm.
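For reference, the Sobel kernel mentioned above has, in plain C, the following generic textbook shape (not the code used in the paper): every output pixel reads a 3x3 neighbourhood of the input, so each input pixel is touched up to nine times, and it is exactly this access pattern that a derived line-buffer memory architecture must turn into on-chip reuse.

#include <stdlib.h>

#define W 1280   /* 720p frame, illustrative */
#define H 720

/* Generic 3x3 Sobel edge-magnitude kernel: a memory-intensive sliding-window
   loop nest of the kind such a compiler takes as input. */
void sobel(const unsigned char in[H][W], unsigned char out[H][W])
{
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            int gx = -in[y-1][x-1] + in[y-1][x+1]
                     - 2*in[y][x-1] + 2*in[y][x+1]
                     - in[y+1][x-1] + in[y+1][x+1];
            int gy = -in[y-1][x-1] - 2*in[y-1][x] - in[y-1][x+1]
                     + in[y+1][x-1] + 2*in[y+1][x] + in[y+1][x+1];
            int mag = abs(gx) + abs(gy);           /* |G| approximated by |Gx|+|Gy| */
            out[y][x] = mag > 255 ? 255 : (unsigned char)mag;
        }
}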