887 results for Set design


Relevance:

30.00%

Publisher:

Abstract:

A method called "SymbolDesign" is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily designed and easily used computer input mechanism for people without physical limitations, and, with some modifications, has the potential to become a computer access tool for people with severe paralysis.
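The abstract gives no implementation details, so the sketch below is only a hypothetical illustration of the overall idea: a user records example strokes for each command, a small neural-network classifier is trained on resampled stroke features, and recognised strokes are mapped back to commands. The resampling scheme, the use of scikit-learn's MLPClassifier, and the rule that grows the hidden layer with the number of commands are assumptions for illustration, not the SymbolDesign implementation.

```python
# Hypothetical sketch: classify single-stroke pointer traces and map them to commands.
import numpy as np
from sklearn.neural_network import MLPClassifier

def resample_stroke(points, n=16):
    """Resample an (x, y) trace to n points and flatten it into a feature vector."""
    points = np.asarray(points, dtype=float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]  # arc length
    t = np.linspace(0.0, d[-1], n)
    x = np.interp(t, d, points[:, 0])
    y = np.interp(t, d, points[:, 1])
    feat = np.c_[x, y] - [x[0], y[0]]          # translate so the stroke starts at the origin
    scale = np.abs(feat).max() or 1.0
    return (feat / scale).ravel()              # normalise scale and flatten

class StrokeCommandMapper:
    """Learn user-chosen stroke patterns and map recognised strokes to commands."""
    def __init__(self, hidden_per_class=8):
        self.commands = []                     # index -> command name
        self._X, self._y = [], []
        self._hidden_per_class = hidden_per_class
        self._clf = None

    def add_example(self, stroke, command):
        if command not in self.commands:
            self.commands.append(command)
        self._X.append(resample_stroke(stroke))
        self._y.append(self.commands.index(command))

    def train(self):
        # Grow the hidden layer with the number of commands -- a crude stand-in for
        # "architecture adjusts to task complexity", assumed here for illustration only.
        hidden = max(4, self._hidden_per_class * len(self.commands))
        self._clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000)
        self._clf.fit(np.vstack(self._X), self._y)

    def recognise(self, stroke):
        idx = self._clf.predict([resample_stroke(stroke)])[0]
        return self.commands[idx]
```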

Relevance:

30.00%

Publisher:

Abstract:

For at least two millennia and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer, but extends them into three spatial dimensions and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, and in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS encompasses data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements in GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of raising the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, to form a more coordinated and integrated workflow and more focused and interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns, and the associated user-centred design (UCD) issues; it explores the possibility of embedding a UCD framework in GVIS design; it constructs a new theoretical design framework called UECD, which aims to make the whole design process usability driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government, etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnects. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today’s technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diode, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today’s 100Gb/s transceiver. Pushing the bit rate on such links beyond today’s commercially available 100Gb/s/fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints, and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format can increase the throughput per individual channel, which helps to overcome the challenges mentioned above in realizing 400Gb/s to 1Tb/s transceivers.
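As a rough illustration of the lane-count arithmetic behind these figures, the sketch below computes how many parallel optical lanes a 400Gb/s or 1Tb/s module would need with binary (NRZ) signalling versus PAM-4 at 25 GBaud. It is a back-of-the-envelope calculation only; the 25 GBaud figure is taken from the per-channel rate mentioned above, and coding or FEC overheads are ignored.

```python
# Back-of-the-envelope lane count for a target aggregate rate (illustrative only).
import math

def lanes_needed(target_gbps, symbol_rate_gbaud, bits_per_symbol):
    lane_rate = symbol_rate_gbaud * bits_per_symbol   # Gb/s carried by one optical lane
    return math.ceil(target_gbps / lane_rate), lane_rate

for target in (400, 1000):
    for name, bits in (("NRZ", 1), ("PAM-4", 2)):
        n, rate = lanes_needed(target, 25, bits)      # 25 GBaud electronics assumed
        print(f"{target} Gb/s with {name} at 25 GBaud: {n} lanes of {rate} Gb/s")
```

With NRZ, 1Tb/s works out to the 40 lanes (and hence 40 VCSELs and photodiodes) cited in the abstract, whereas PAM-4 halves the lane count for the same symbol rate.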

Relevance:

30.00%

Publisher:

Abstract:

We study the implications of the effectuation concept for socio-technical artifact design as part of the design science research (DSR) process in information systems (IS). Effectuation logic is the opposite of causal logic. Effectuation does not focus on causes to achieve a particular effect, but on the possibilities that can be achieved with extant means and resources. Viewing socio-technical IS DSR through an effectuation lens highlights the possibility of designing the future even without set goals. We suggest that effectuation may be a useful perspective for design in dynamic social contexts, leading to a more differentiated view on the instantiation of mid-range artifacts for specific local application contexts. Design science researchers can draw on this paper’s conclusions to view their DSR projects through a fresh lens and to reexamine their research design and execution. The paper also offers avenues for future research to develop more concrete application possibilities of effectuation in socio-technical IS DSR and, thus, enrich the discourse.

Relevance:

30.00%

Publisher:

Abstract:

Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.

The technically interesting aspect of our work lies in the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis can be placed in one of the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms for unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP problem generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP problem and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.

3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first framework based on linear/convex programming duality to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
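For readers unfamiliar with the objective that recurs throughout this abstract: the flow time of a job is its completion time minus its release time. The sketch below only illustrates how the (weighted) objective is evaluated for a given schedule; it is not one of the thesis's algorithms.

```python
# Minimal illustration of the flow-time objective: flow time = completion - release.
def total_weighted_flow_time(jobs, completion):
    """jobs: list of (job_id, release_time, weight); completion: job_id -> completion time."""
    return sum(w * (completion[j] - r) for j, r, w in jobs)

# Example: two jobs scheduled back-to-back on a single machine.
jobs = [("a", 0.0, 1.0), ("b", 1.0, 2.0)]          # (id, release, weight)
completion = {"a": 3.0, "b": 5.0}                  # completion times under some schedule
print(total_weighted_flow_time(jobs, completion))  # 1*(3-0) + 2*(5-1) = 11.0
```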

Relevance:

30.00%

Publisher:

Abstract:

All biological phenomena depend on molecular recognition, which is either intermolecular, as in ligand binding to a macromolecule, or intramolecular, as in protein folding. As a result, understanding the relationship between the structure of proteins and the energetics of their stability and of their binding to other (bio)molecules is of great interest in biochemistry and biotechnology. It is essential to the engineering of stable proteins and to the structure-based design of pharmaceutical ligands. The parameter generally used to characterize the stability of a system (for example, the folded and unfolded states of a protein) is the equilibrium constant (K) or the free energy (ΔG°), which combines an enthalpic term (ΔH°) and an entropic term (−TΔS°). These parameters are temperature dependent through the heat capacity change (ΔCp). The thermodynamic parameters ΔH° and ΔCp can be derived from spectroscopic experiments using the van't Hoff method, or measured directly using calorimetry. Along with isothermal titration calorimetry (ITC), differential scanning calorimetry (DSC) is a powerful method, although less widely described than ITC, for directly measuring the thermodynamic parameters which characterize biomolecules. In this article, we summarize the principal thermodynamic parameters, describe the DSC approach and review some systems to which it has been applied. DSC is widely used for studying the stability and the folding of biomolecules, but it can also be applied to the study of biomolecular interactions and can thus be a valuable technique in the process of drug design.
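The relations the abstract alludes to can be written out explicitly; these are standard textbook thermodynamics (with ΔCp taken as constant over the temperature range of interest), not results specific to this article.

```latex
\begin{aligned}
\Delta G^{\circ} &= \Delta H^{\circ} - T\,\Delta S^{\circ} = -RT\ln K\\
\Delta H^{\circ}(T) &= \Delta H^{\circ}(T_{\mathrm{ref}}) + \Delta C_p\,(T - T_{\mathrm{ref}})\\
\Delta S^{\circ}(T) &= \Delta S^{\circ}(T_{\mathrm{ref}}) + \Delta C_p\,\ln\frac{T}{T_{\mathrm{ref}}}\\
\frac{\mathrm{d}\ln K}{\mathrm{d}(1/T)} &= -\frac{\Delta H^{\circ}}{R} \qquad \text{(van't Hoff)}
\end{aligned}
```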

Relevance:

30.00%

Publisher:

Abstract:

The design of a new apparatus, named FANTASIO, for studying jet-cooled molecules is described. It includes, around the same supersonic expansion cell, a high resolution Fourier transform spectrometer with single or multipass optics, a tunable diode laser spectrometer with optional cavity ring-down facilities, and a quadrupole mass spectrometer. Performance and operational procedures are illustrated.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes progress on a project to utilise case-based reasoning (CBR) methods in the design and manufacture of furniture products. The novel feature of this research is that cases are represented as structures in a relational database of products, components and materials. The paper proposes a method for extending the usual "weighted sum" over attribute similarities for a single table to encompass relational structures over several tables. The capabilities of the system are discussed, particularly with respect to differing user objectives, such as cost estimation, CAD, cutting scheme re-use, and initial design. It is shown that specification of a target case as a relational structure, combined with suitable weights, can fulfil several user functions. However, it is also shown that some user functions cannot satisfactorily be specified via a single target case. For these functions it is proposed to allow the specification of a set of target cases. A derived similarity measure between individual cases and sets of cases is proposed.
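The "weighted sum over attribute similarities" and its relational extension can be sketched as follows. This is a hypothetical illustration only: the attribute names, the numeric/symbolic treatment and the best-match aggregation over related rows are assumptions, not the measure proposed in the paper.

```python
# Hypothetical sketch of a weighted sum over attribute similarities, extended from a
# single table to related rows in other tables (e.g. a product and its components).
def attribute_similarity(t, c):
    """Local similarity: normalised difference for numbers, exact match otherwise."""
    if isinstance(t, (int, float)) and isinstance(c, (int, float)):
        return 1.0 - abs(t - c) / max(abs(t), abs(c), 1e-9)
    return 1.0 if t == c else 0.0

def case_similarity(target, candidate, weights):
    """weights maps an attribute either to a float (scalar attribute) or to a
    (weight, sub_weights) pair when the attribute holds rows from a related table."""
    score, total_weight = 0.0, 0.0
    for attr, spec in weights.items():
        t, c = target.get(attr), candidate.get(attr)
        if t is None or c is None:
            continue
        if isinstance(spec, tuple):                       # related table: best match per row
            w, sub_weights = spec
            matches = [max((case_similarity(ti, cj, sub_weights) for cj in c), default=0.0)
                       for ti in t]
            s = sum(matches) / len(matches) if matches else 0.0
        else:                                             # plain attribute in this table
            w, s = spec, attribute_similarity(t, c)
        score += w * s
        total_weight += w
    return score / total_weight if total_weight else 0.0

# Example: a target wardrobe matched against a stored case, with components in a sub-table.
weights = {"material": 2.0, "width_mm": 1.0,
           "components": (3.0, {"part": 2.0, "length_mm": 1.0})}
target = {"material": "oak", "width_mm": 1200,
          "components": [{"part": "door", "length_mm": 1800}]}
case = {"material": "oak", "width_mm": 1000,
        "components": [{"part": "door", "length_mm": 1700},
                       {"part": "shelf", "length_mm": 900}]}
print(round(case_similarity(target, case, weights), 3))   # ≈ 0.963
```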

Relevance:

30.00%

Publisher:

Abstract:

This paper examines different ways of measuring similarity between software design models for the purpose of software reuse. Current approaches to this problem are discussed, and a set of suitable similarity metrics is proposed and evaluated. Work on the optimisation of weights to increase the competence of a CBR system is presented. A graph matching algorithm and associated metrics capturing the structural similarity between UML class diagrams are presented and demonstrated through an example case.
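As a much-simplified illustration of structural similarity between class diagrams (not the graph matching algorithm or metrics of the paper), one can represent each diagram as a set of class names and a set of typed relationships, and combine their overlaps with weights:

```python
# Simplified, hypothetical diagram similarity based on weighted Jaccard overlaps.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def diagram_similarity(d1, d2, w_classes=0.5, w_relations=0.5):
    """Each diagram: {'classes': {...names...}, 'relations': {(src, kind, dst), ...}}."""
    return (w_classes * jaccard(d1["classes"], d2["classes"])
            + w_relations * jaccard(d1["relations"], d2["relations"]))

d1 = {"classes": {"Order", "Customer"},
      "relations": {("Order", "assoc", "Customer")}}
d2 = {"classes": {"Order", "Customer", "Invoice"},
      "relations": {("Order", "assoc", "Customer"), ("Invoice", "assoc", "Order")}}
print(diagram_similarity(d1, d2))   # 0.5*(2/3) + 0.5*(1/2) ≈ 0.583
```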

Relevance:

30.00%

Publisher:

Abstract:

Evaluating ship layout for human factors (HF) issues using simulation software such as maritimeEXODUS can be a long and complex process. The analysis requires the identification of relevant evaluation scenarios, encompassing evacuation and normal operations; the development of appropriate measures which can be used to gauge the performance of crew and vessel; and, finally, the interpretation of considerable simulation data. Currently, the only agreed guidelines for evaluating the HF performance of ship design relate to evacuation, and so conclusions drawn concerning the overall suitability of a ship design by one naval architect can be quite different from those of another. The complexity of the task grows as the size and complexity of the vessel increases and as the number and type of evaluation scenarios considered increases. Equally, it can be extremely difficult for fleet operators to set HF design objectives for new vessel concepts. The challenge for naval architects is to develop a procedure that allows both accurate and rapid assessment of HF issues associated with vessel layout and crew operating procedures. In this paper we present a systematic and transparent methodology for assessing the HF performance of ship design which is both discriminating and diagnostic. The methodology is demonstrated using two variants of a hypothetical naval ship.
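To make the idea of a discriminating, scenario-based HF assessment concrete, the sketch below aggregates per-scenario performance measures into a single weighted score for comparing a layout variant against a baseline. The scenario names, weights and ratio-based normalisation are entirely hypothetical; they are not the measures or weighting scheme of the paper.

```python
# Hypothetical aggregation of per-scenario HF measures into one comparative score.
def overall_score(variant, baseline, weights):
    """Lower is better for every raw measure (e.g. time in seconds to complete a scenario)."""
    score = 0.0
    for scenario, w in weights.items():
        ratio = variant[scenario] / baseline[scenario]   # < 1 means the variant improves
        score += w * ratio
    return score / sum(weights.values())

baseline = {"evacuation": 420.0, "watch_change": 180.0, "damage_control": 600.0}
variant  = {"evacuation": 390.0, "watch_change": 200.0, "damage_control": 540.0}
weights  = {"evacuation": 0.5, "watch_change": 0.2, "damage_control": 0.3}
print(overall_score(variant, baseline, weights))   # < 1.0: better than the baseline overall
```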

Relevance:

30.00%

Publisher:

Abstract:

Rapid tryptophan (Trp) depletion (RTD) has been reported to cause deterioration in the quality of decision making and impaired reversal learning, while leaving attentional set shifting relatively unimpaired. These findings have been attributed to a more powerful neuromodulatory effect of reduced 5-HT on ventral prefrontal cortex (PFC) than on dorsolateral PFC. In view of the limited number of reports, the aim of this study was to independently replicate these findings using the same test paradigms. Healthy human subjects without a personal or family history of affective disorder were assessed using a computerized decision making/gambling task and the CANTAB ID/ED attentional set-shifting task under Trp-depleted (n=17; nine males and eight females) or control (n=15; seven males and eight females) conditions, in a double-blind, randomized, parallel-group design. There was no significant effect of RTD on set shifting, reversal learning, risk taking, impulsivity, or subjective mood. However, RTD significantly altered decision making such that depleted subjects chose the more likely of two possible outcomes significantly more often than controls. This is in direct contrast to the previous report that subjects chose the more likely outcome significantly less often following RTD. In the terminology of that report, our result may be interpreted as improvement in the quality of decision making following RTD. This contrast between studies highlights the variability in the cognitive effects of RTD between apparently similar groups of healthy subjects, and suggests the need for future RTD studies to control for a range of personality, family history, and genetic factors that may be associated with 5-HT function.

Relevance:

30.00%

Publisher:

Abstract:

A novel application-specific instruction set processor (ASIP) for use in the construction of modern signal processing systems is presented. This is a flexible device that can be used in the construction of array processor systems for the real-time implementation of functions such as singular-value decomposition (SVD) and QR decomposition (QRD), as well as other important matrix computations. It uses a coordinate rotation digital computer (CORDIC) module to perform arithmetic operations, and several approaches are adopted to achieve high performance, including pipelining of the micro-rotations, the use of parallel instructions and a dual-bus architecture. In addition, a novel method for scale factor correction is presented which only needs to be applied once at the end of the computation. This also reduces computation time and enhances performance. Methods are described which allow this processor to be used in reduced-dimension (i.e., folded) array processor structures that allow tradeoffs between hardware and performance. The net result is a flexible matrix computational processing element (PE) whose functionality can be changed under program control for use in a wider range of scenarios than previous work. Details are presented of the results of a design study, which considers the application of this decomposition PE architecture in a combined SVD/QRD system and demonstrates that a combination of high performance and efficient silicon implementation is achievable. © 2005 IEEE.
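For context on why a single end-of-computation scale factor correction can suffice: in the standard CORDIC algorithm each micro-rotation scales the vector by a factor that depends only on the iteration index, so the combined gain is a constant for a fixed number of iterations and can be divided out once at the end. The sketch below shows plain rotation-mode CORDIC in floating point; it illustrates the principle only and is not the paper's processor architecture or its correction method.

```python
# Standard rotation-mode CORDIC with one gain correction applied after all iterations.
import math

def cordic_rotate(x, y, angle, iterations=16):
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    z = angle
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0 else -1.0                       # direction of the micro-rotation
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    # Each micro-rotation stretches the vector by sqrt(1 + 2^(-2i)); correct once here.
    gain = math.prod(math.sqrt(1.0 + 4.0 ** -i) for i in range(iterations))
    return x / gain, y / gain

print(cordic_rotate(1.0, 0.0, math.pi / 4))   # ≈ (0.7071, 0.7071)
```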

Relevance:

30.00%

Publisher:

Abstract:

An application-specific programmable processor (ASIP) suitable for the real-time implementation of matrix computations such as singular value decomposition (SVD) and QR decomposition (QRD) is presented. The processor incorporates facilities for the issue of parallel instructions and a dual-bus architecture that are designed to achieve high performance. Internally, it uses a CORDIC module to perform arithmetic operations, with pipelining of the internal recursive loop exploited to multiplex the two independent micro-rotations onto a single piece of hardware. The net result is a flexible processing element whose functionality can be changed under program control, and which combines high performance with efficient silicon implementation. This is illustrated through the results of a detailed silicon design study and the application of the techniques to a combined SVD/QRD system.

Relevance:

30.00%

Publisher:

Abstract:

This paper summarizes numerous research activities in high-performance networks and network security processing, and explores technology-related performance constraints, such as the critical performance limitations of circuit architectures that are set by the semiconductor technologies.

Relevance:

30.00%

Publisher:

Abstract:

This study highlights how heuristic evaluation, as a usability evaluation method, can feed into current building design practice so that it conforms to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluations in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed about the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combining multiple sessions, rather than relying on one evaluator and one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee that draws on the heuristic abilities of each profession. A multi-session, interdisciplinary heuristic evaluation method can save both project budget and time, while ensuring a reduced error rate for the universal usage of built environments.