48 results for Optics in computing


Relevance:

100.00%

Publisher:

Abstract:

The worldwide scarcity of women studying or employed in ICT, or in computing-related disciplines, continues to be a topic of concern for industry, the education sector and governments. Within Europe, while females make up 46% of the workforce, only 17% of IT staff are female. A similar gender divide is repeated worldwide, with top technology employers in Silicon Valley, including Facebook, Google, Twitter and Apple, reporting that only 30% of their workforce is female (Larson 2014). Previous research into this gender divide suggests that young women in Secondary Education display a more negative attitude towards computing than their male counterparts. It would appear that this negative female perception of computing has led to disproportionately low numbers of women studying ICT at a tertiary level and, consequently, an under-representation of females within the ICT industry. The aim of this study is to 1) establish a baseline understanding of the attitudes and perceptions of Secondary Education pupils in regard to computing and 2) establish statistically whether young females in Secondary Education really do have a more negative attitude towards computing.

Relevance:

90.00%

Publisher:

Abstract:

The development of computer-based devices for music control has created a need to study how spectators understand new performance technologies and practices. As a part of a larger project examining how interactions with technology can be communicated to spectators, we present a model of a spectator's understanding of error by a performer. This model is broadly applicable throughout HCI, as interactions with technology are increasingly public and spectatorship is becoming more common.

Relevance:

90.00%

Publisher:

Abstract:

Simulations of the injection stretch-blow moulding process have been developed for the manufacture of poly(ethylene terephthalate) bottles using the commercial finite element package ABAQUS/standard. Initially a simulation of the manufacture of a 330 mL bottle was developed with three different material models (hyperelastic, creep, and a non-linear viscoelastic model (Buckley model)) to ascertain their suitability for modelling poly(ethylene terephthalate). The Buckley model was found to give results for the sidewall thickness that matched best with those measured from bottles off the production line. Following the investigation of the material models, the Buckley model was chosen to conduct a three-dimensional simulation of the manufacture of a 2 L bottle. It was found that the model was also capable of predicting the wall thickness distribution accurately for this bottle. In the development of the three-dimensional simulation a novel approach, which uses an axisymmetric model until the material reaches the petaloid base, was developed. This resulted in substantial savings in computing time. © 2000 IoM Communication Ltd.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, a parallel-matching processor architecture with early jump-out (EJO) control is proposed to carry out high-speed biometric fingerprint database retrieval. The processor performs the fingerprint retrieval using minutia point matching, and an EJO method is applied to the proposed architecture to speed up large-database retrieval. The processor is implemented on a Xilinx Virtex-E, occupies 6,825 slices, and runs at up to 65 MHz. A software/hardware co-simulation benchmark with a database of 10,000 fingerprints verifies that the matching speed can reach up to 1.22 million fingerprints per second, with EJO contributing about a 22% gain in computing efficiency.
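The early jump-out idea can be sketched in software: abandon a candidate as soon as the mismatches seen so far make the score threshold unreachable. The function below is a hypothetical illustration of that control logic only; the point representation, tolerance, and threshold are assumptions, not the paper's hardware design.

```python
def minutiae_match(query, candidate, threshold, tolerance=1):
    """Count matching minutia points, jumping out early once the
    remaining points can no longer reach the threshold."""
    matches = 0
    remaining = len(query)
    for x, y in query:
        remaining -= 1
        if any(abs(x - cx) <= tolerance and abs(y - cy) <= tolerance
               for cx, cy in candidate):
            matches += 1
        # Early jump-out: even if every remaining point matched,
        # the threshold would still be unreachable, so stop here.
        if matches + remaining < threshold:
            return None
    return matches if matches >= threshold else None
```

In a database scan, most candidates fail this test within a few points, which is where the bulk of the speed-up comes from.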

Relevance:

80.00%

Publisher:

Abstract:

Privacy has now become a major topic not only in law but also in computing, psychology, economics and social studies, and the explosion in scholarship has made it difficult for the student to traverse the field and identify the significant issues across the many disciplines. This series brings together a collection of significant papers with a multi-disciplinary approach, enabling the reader to navigate the complexities of the issues and make sense of the prolific scholarship published in this field.

The three volumes in this series address different themes: an anthropological approach to what privacy means in a cultural context; the issue of state surveillance where the state must both protect the individual and protect others from that individual and also protect itself; and, finally, what privacy might mean in a world where government and commerce collect data incessantly. The regulation of privacy is continually being called for and these papers help enable understanding of the ethical rationales behind the choices made in the sphere of regulation of privacy.

The articles presented in each of these collections have been chosen for the quality of their scholarship and their utility to the researcher, and feature a variety of approaches. The articles which debate the technical context of privacy are accessible to those from the arts and humanities; overall, the breadth of approach taken in the choice of articles has created a series which is an invaluable resource for lecturers, researchers and students.

Relevance:

80.00%

Publisher:

Abstract:

Capillary-based systems for measuring the input impedance of musical wind instruments were first developed in the mid-20th century and remain in widespread use today. In this paper, the basic principles and assumptions underpinning the design of such systems are examined. Inexpensive modifications to a capillary-based impedance measurement set-up made possible due to advances in computing and data acquisition technology are discussed. The modified set-up is able to measure both impedance magnitude and impedance phase even though it only contains one microphone. In addition, a method of calibration is described that results in a significant improvement in accuracy when measuring high impedance objects on the modified capillary-based system. The method involves carrying out calibration measurements on two different objects whose impedances are well-known theoretically. The benefits of performing two calibration measurements (as opposed to the one calibration measurement that has been traditionally used) are demonstrated experimentally through input impedance measurements on two test objects and a Boosey and Hawkes oboe. © S. Hirzel Verlag · EAA.
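The benefit of two calibration measurements can be illustrated with a deliberately simplified model: if the measurement chain is assumed to behave linearly, r = a·Z + b with unknown gain a and offset b, then two objects of known impedance determine both unknowns, whereas a single measurement fixes only one. This is a sketch under that linear-model assumption, not the paper's acoustic derivation.

```python
def two_point_calibration(z1, r1, z2, r2):
    """Solve r = a*Z + b for gain a and offset b, given readings
    r1, r2 on two reference objects of known impedance z1, z2."""
    a = (r1 - r2) / (z1 - z2)
    b = r1 - a * z1
    return a, b

def corrected_impedance(r, a, b):
    """Invert the calibrated linear model to recover impedance."""
    return (r - b) / a

# Two hypothetical reference measurements yield the gain and offset.
a, b = two_point_calibration(100.0, 205.0, 400.0, 805.0)  # a = 2.0, b = 5.0
```

With only one reference, b would have to be assumed (traditionally zero), which is exactly the source of error the two-measurement method removes.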

Relevance:

80.00%

Publisher:

Abstract:

A bit-level systolic array for computing matrix × vector products is described. The operation is carried out on bit-parallel input data words, and the basic circuit takes the form of a 1-bit slice. Several bit-slice components must be connected together to form the final result, and the authors outline two different ways in which this can be done. The basic array also has considerable potential as a stand-alone device, and its use in computing the Walsh-Hadamard transform and discrete Fourier transform operations is briefly discussed.
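The dataflow of a linear systolic matrix-vector array can be sketched at word level; the paper's array is bit-level, so this time-stepped simulation only illustrates the skewed schedule, not the 1-bit-slice circuit.

```python
def systolic_matvec(A, x):
    """Simulate a linear systolic array computing y = A @ x.
    Cell i holds row i of A and accumulates y[i]; the elements of x
    stream past the cells, so at time step t cell i sees x[t - i]."""
    n, m = len(A), len(x)
    y = [0] * n
    for t in range(n + m - 1):       # one outer loop per clock tick
        for i in range(n):           # all cells fire in parallel
            j = t - i                # skewed schedule: input lags by cell index
            if 0 <= j < m:
                y[i] += A[i][j] * x[j]
    return y
```

Each inner-loop body corresponds to one cell's multiply-accumulate in a clock cycle; in hardware all cells operate concurrently, so the whole product finishes in n + m - 1 ticks.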

Relevance:

80.00%

Publisher:

Abstract:

Recent trends in computing systems, such as multi-core processors and cloud computing, expose tens to thousands of processors to the software. Software developers must respond by introducing parallelism in their software. To obtain the highest performance, it is necessary not only to identify parallelism, but also to reason about synchronization between threads and the communication of data from one thread to another. This entry gives an overview of some of the most common abstractions used in parallel programming, namely explicit vs. implicit expression of parallelism, and shared vs. distributed memory. Several parallel programming models are reviewed and categorized by means of these abstractions, and the pros and cons of parallel programming models from the perspectives of performance and programmability are discussed.
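The explicit vs. implicit distinction can be made concrete with a small sketch using Python's standard library; the workload and the two-way partitioning are illustrative assumptions.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

data = list(range(8))

# Explicit parallelism: the programmer creates threads, partitions the
# work, and synchronizes access to the shared result with a lock.
total, lock = 0, threading.Lock()

def worker(chunk):
    global total
    partial = sum(x * x for x in chunk)
    with lock:                       # shared-memory synchronization
        total += partial

threads = [threading.Thread(target=worker, args=(data[i::2],)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Implicit parallelism: the programmer states *what* to compute and the
# executor decides how the work is scheduled across threads.
with ThreadPoolExecutor() as ex:
    implicit_total = sum(ex.map(lambda x: x * x, data))

assert total == implicit_total == 140
```

The explicit version exposes exactly the two concerns the entry names, synchronization and data movement, while the implicit version hides both behind the executor at some cost in control over performance.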

Relevance:

80.00%

Publisher:

Abstract:

Dynamic Voltage and Frequency Scaling (DVFS) exhibits fundamental limitations as a method to reduce energy consumption in computing systems. In the HPC domain, where performance is of the highest priority and codes are heavily optimized to minimize idle time, DVFS has limited opportunity to achieve substantial energy savings. This paper explores whether operating processors Near the transistor Threshold Voltage (NTV) is a better alternative to DVFS for breaking the power wall in HPC. NTV presents challenges, since it compromises both performance and reliability to reduce power consumption. We present a first-of-its-kind study of a significance-driven execution paradigm that selectively uses NTV and algorithmic error tolerance to reduce energy consumption in performance-constrained HPC environments. Using an iterative algorithm as a use case, we present an adaptive execution scheme that switches between near-threshold execution on many cores and above-threshold execution on one core, as the computational significance of iterations in the algorithm evolves over time. Using this scheme on state-of-the-art hardware, we demonstrate energy savings ranging from 35% to 67%, while compromising neither correctness nor performance.
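A schematic of the adaptive switching idea might look as follows. The step and significance functions are hypothetical placeholders passed in by the caller, since real NTV operation requires platform-specific voltage control that cannot be shown here; only the control loop reflects the scheme described above.

```python
def significance_driven_solve(state, significant, step_nominal, step_ntv,
                              converged):
    """Iterate to convergence, switching between single-core
    above-threshold execution (for significant iterations) and
    many-core near-threshold execution (for tolerant ones)."""
    while not converged(state):
        if significant(state):
            # High-significance iteration: nominal voltage, one fast
            # core, preserving correctness of the critical update.
            state = step_nominal(state)
        else:
            # Low-significance iteration: many cores near threshold;
            # occasional errors are absorbed by the algorithm.
            state = step_ntv(state)
    return state
```

For an iterative solver, `significant` would typically test how much the current iteration is expected to change the result, so early, high-impact iterations run at nominal voltage and late, refinement iterations run near threshold.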