112 results for Graphics processing unit programming
Abstract:
These notes follow on from the material you studied in CSSE1000 Introduction to Computer Systems, where you covered logic gates, binary numbers and instruction set architectures, using the Atmel AVR microcontroller family as an example. In your present course (METR2800 Team Project I), you need to move on to designing and building an application that includes such a microcontroller. These notes focus on programming an AVR microcontroller in C and provide a number of example programs illustrating the use of some of the AVR peripheral devices.
Abstract:
The demand for more pixels is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers a solution for augmenting the pixel capacity of a display, but problems of color and illumination uniformity across projectors need to be addressed, as does the computer software required to drive such devices. We present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. A short-throw lens (0.8:1) on each projector yields a 21-in. diagonal for each image tile; the composite image on a 3×1 array is 3840×1024 pixels, with a resolution of about 80 dpi. The system preserves desktop resolution, is compact, and can fit in a normal room or laboratory. The projectors are mounted on precision six-axis positioners, which allow pixel-level alignment. A fiber-optic beam-splitting system and a single set of red, green, and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted separately to set or change characteristics such as contrast, brightness, or gamma curves. The projectors were then carefully matched: photometric variations were corrected, leading to a seamless image. Photometric measurements were performed to characterize the display and are reported here. The system is driven by a small PC cluster fitted with graphics cards and running Linux. It can be scaled to accommodate an array of 2×3 or 3×3 projectors, thus increasing the number of pixels in the final image. Finally, we present current uses of the display in fields such as astrophysics and archaeology (remote sensing).
Abstract:
One of the challenges in scientific visualization is to build software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated “Floor-Plan” (metadata); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the metadata tagging the data blocks can be propagated and applied consistently: at the disk level, in distributing the computations across parallel processors, in “imagelet” composition, and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
Abstract:
The coefficient of variation (CV; the standard deviation of response time divided by mean response time) is a measure of response time variability that corrects for differences in mean response time (RT) (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and CVs (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
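As an illustration of the measure described in the abstract (a minimal sketch, not the study's own analysis; the RT samples below are invented), the CV is simply the standard deviation of a participant's response times divided by their mean RT:

```python
import statistics

def coefficient_of_variation(rts):
    """CV = standard deviation of response times / mean response time."""
    return statistics.stdev(rts) / statistics.mean(rts)

# Hypothetical RT samples in milliseconds: a slower, more variable
# learner versus a faster, more consistent one.
intermediate = [820, 910, 760, 1005, 870]
advanced = [540, 565, 520, 555, 530]

cv_int = coefficient_of_variation(intermediate)
cv_adv = coefficient_of_variation(advanced)
# With increasing proficiency both mean RT and CV tend to decrease,
# which is the positive rCV-RT correlation the abstract refers to.
```

Here `cv_adv` comes out smaller than `cv_int`: the faster group is also proportionally less variable, the pattern the rCV-RT index is meant to capture.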
Abstract:
This Toolkit was developed for the Australian dairy processing industry on behalf of Dairy Australia. At the conclusion of the project, industry participants gained exclusive access to a comprehensive Eco-Efficiency Manual, which outlined many of the opportunities available to the industry. Summary fact sheets were also prepared as publicly available resources and can be downloaded below.
Abstract:
This manual has been developed to help the Australian dairy processing industry increase its competitiveness through increased awareness and uptake of eco-efficiency. The manual seeks to consolidate and build on existing knowledge, accumulated through projects and initiatives that the industry has previously undertaken to improve its use of raw materials and resources and reduce the generation of wastes. Where there is an existing comprehensive report or publication, the manual refers to this for further information. Eco-efficiency is about improving environmental performance to become more efficient and profitable. It is about producing more with less. It involves applying strategies that will not only ensure efficient use of resources and reduction in waste, but will also reduce costs. This chapter outlines the environmental challenges faced by Australian dairy processors. The manual explores opportunities for reducing environmental impacts in relation to water, energy, product yield, solid and liquid waste reduction and chemical use.
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations involved. While parallel processing seems a natural solution to this problem, it is not clear how spatial data should be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework for parallel spatial join processing is proposed, combining the data-partitioning techniques used by most parallel join algorithms in relational databases with the filter-and-refine strategy for spatial operation processing. Object duplication caused by multi-assignment in spatial data partitioning can incur extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
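The duplication problem mentioned in this abstract can be made concrete with a toy sketch of grid-based multi-assignment (the function and data names are illustrative, not taken from the paper): each object is assigned to every partition cell its bounding box overlaps, so an object straddling a cell boundary is replicated into several cells.

```python
from collections import defaultdict

def assign_to_grid(objects, cell_size):
    """Assign each rectangle (id, xmin, ymin, xmax, ymax) to every grid
    cell its bounding box overlaps -- the 'multi-assignment' that
    duplicates objects spanning cell boundaries."""
    cells = defaultdict(list)
    for oid, xmin, ymin, xmax, ymax in objects:
        for cx in range(int(xmin // cell_size), int(xmax // cell_size) + 1):
            for cy in range(int(ymin // cell_size), int(ymax // cell_size) + 1):
                cells[(cx, cy)].append(oid)
    return cells

rects = [
    (1, 0.5, 0.5, 1.5, 1.5),   # straddles four cells of a unit grid
    (2, 2.1, 2.1, 2.4, 2.9),   # falls entirely inside one cell
]
cells = assign_to_grid(rects, cell_size=1.0)
copies = sum(len(v) for v in cells.values())
# Object 1 is replicated into 4 cells, so 2 objects yield 5 assignments:
# the extra copies are the CPU and communication overhead the paper
# seeks to limit by preserving spatial locality in task decomposition.
```

A coarser grid (larger `cell_size`) reduces duplication but also reduces the number of parallel tasks, which is exactly the locality trade-off the paper addresses.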
Abstract:
The efficiency of presentation of a peptide epitope by an MHC class I molecule depends on two parameters: its binding to the MHC molecule and its generation by intracellular Ag processing. In contrast to the former parameter, the mechanisms underlying peptide selection in Ag processing are poorly understood. Peptide translocation by the TAP transporter is required for presentation of most epitopes and may modulate peptide supply to MHC class I molecules. To study the role of human TAP in peptide presentation by individual HLA class I molecules, we generated artificial neural networks capable of predicting the affinity of TAP for random-sequence 9-mer peptides. Using neural network-based predictions of TAP affinity, we found that peptides eluted from three different HLA class I molecules had higher TAP affinities than control peptides with equal binding affinities for the same HLA class I molecules, suggesting that human TAP may contribute to epitope selection. In simulated TAP binding experiments with 408 HLA class I binding peptides, HLA class I molecules differed significantly with respect to the TAP affinities of their ligands. As a result, some class I molecules, especially HLA-B27, may be particularly efficient in presenting cytosolic peptides at low concentrations, while most class I molecules may predominantly present abundant cytosolic peptides.
Abstract:
This note considers the value of response surface equations, which can be used to calculate critical values for a range of unit root and cointegration tests popular in applied economic research.
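Response surfaces of this kind typically express a finite-sample critical value as a low-order polynomial in the reciprocal of the sample size. A minimal sketch of the idea (the coefficient values below are illustrative placeholders, not estimates from this note):

```python
def critical_value(T, b_inf, b1, b2):
    """Response surface: approximate the finite-sample critical value
    of a test as a polynomial in 1/T, where T is the sample size and
    b_inf is the asymptotic critical value."""
    return b_inf + b1 / T + b2 / T**2

# Illustrative coefficients for a hypothetical unit-root test at the
# 5% level; a published response surface would supply estimated values.
cv_100 = critical_value(100, b_inf=-2.86, b1=-2.74, b2=-8.36)
# For small T the critical value lies further from zero; as T grows
# it converges to the asymptotic value b_inf.
```

This is why such equations are convenient in applied work: one short formula replaces interpolation in printed tables of critical values for every sample size.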