97 results for Computer Hardware.
at Indian Institute of Science - Bangalore - India
Abstract:
This paper analyses the efficiency and productivity growth of the electronics industry, one of the vibrant and rapidly growing manufacturing sub-sectors of India in the liberalization era since 1991. The main objective of the paper is to examine the extent and growth of Total Factor Productivity (TFP) and its components, namely Technical Efficiency Change (TEC) and Technological Progress (TP), and their contribution to total output growth. In this study, the electronics industry is broadly classified into communication equipment, computer hardware, consumer electronics and other electronics, with the purpose of performing a comparative analysis of productivity growth for each of these sub-sectors over the period 1993-2004. The paper found that the sub-sectors have improved in terms of economies of scale and the contribution of capital. The change in technical efficiency and technological progress moved in opposite directions. Three of the four industries witnessed output growth primarily due to TFP growth (TFPG), and the contribution of input growth to output growth was negative or negligible, except for computer hardware, where the contributions of both input growth and TFPG to output growth were prominent. The paper explored possible reasons for the low technical efficiency and technological progress in the industry.
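The decomposition the abstract refers to can be illustrated in index form. The sketch below is not the paper's estimation procedure (which would require frontier estimation over the industry data); it only shows, with hypothetical numbers, how TFPG splits into TEC and TP and how output growth splits into input growth and TFPG.

```python
# Illustrative sketch, not the paper's estimation: a Malmquist-style
# decomposition in index form, where TFP growth (TFPG) is the product of
# technical efficiency change (TEC) and technological progress (TP), and
# output growth is input growth times TFPG. All figures are hypothetical.

def decompose_tfp(efficiency_t0, efficiency_t1, frontier_shift):
    """Return TEC, TP and TFPG as growth indices (1.0 = no change)."""
    tec = efficiency_t1 / efficiency_t0   # catching up to the frontier
    tp = frontier_shift                   # outward shift of the frontier
    tfpg = tec * tp                       # TFPG = TEC x TP
    return tec, tp, tfpg

def output_growth(input_growth_index, tfpg_index):
    """Output growth index = input growth index x TFPG index."""
    return input_growth_index * tfpg_index

if __name__ == "__main__":
    # Hypothetical sub-sector: efficiency rises from 0.80 to 0.84 while
    # the frontier shifts out by 2%, with 1% input growth.
    tec, tp, tfpg = decompose_tfp(0.80, 0.84, 1.02)
    growth = output_growth(1.01, tfpg)
    print(f"TEC={tec:.3f}  TP={tp:.3f}  TFPG={tfpg:.3f}  output growth={growth:.3f}")
```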
Abstract:
Massively parallel SIMD computing is applied to obtain an order-of-magnitude improvement in the execution speed of an important algorithm in VLSI design automation. The physical design of a VLSI circuit involves logic module placement as a subtask. The paper is concerned with accelerating the well-known Min-cut placement technique for logic cell placement. The inherent parallelism of the Min-cut algorithm is identified, and it is shown how a parallel machine can be used for efficient execution of the placement procedure.
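For readers unfamiliar with the underlying technique, the sketch below shows the min-cut partitioning step that Min-cut placement applies recursively: cells are split into two sides while reducing the number of nets that cross the cut. This is a toy sequential stand-in, not the paper's SIMD-parallel implementation, and the netlist is hypothetical.

```python
# Minimal sequential sketch of the cut-minimizing partitioning step used
# by Min-cut placement; the paper's SIMD acceleration is not reproduced.

import random

def cut_size(nets, side):
    """Count nets whose cells fall on both sides of the partition."""
    return sum(1 for net in nets if len({side[c] for c in net}) > 1)

def greedy_mincut(cells, nets, iterations=1000, seed=0):
    """Randomly swap cells across the cut, keeping non-worsening moves."""
    rng = random.Random(seed)
    half = len(cells) // 2
    side = {c: (i < half) for i, c in enumerate(cells)}
    best = cut_size(nets, side)
    for _ in range(iterations):
        a, b = rng.sample(cells, 2)
        if side[a] == side[b]:
            continue
        side[a], side[b] = side[b], side[a]       # trial swap
        new = cut_size(nets, side)
        if new <= best:
            best = new                            # keep the improvement
        else:
            side[a], side[b] = side[b], side[a]   # undo the swap
    return side, best

if __name__ == "__main__":
    cells = ["c1", "c2", "c3", "c4", "c5", "c6"]
    nets = [("c1", "c2"), ("c2", "c3", "c5"), ("c4", "c6"), ("c1", "c5")]
    side, cut = greedy_mincut(cells, nets)
    print(side, "cut =", cut)
```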
Abstract:
The performance of a program will ultimately be limited by its serial (scalar) portion, as pointed out by Amdahl's Law. Reported studies thus far of instruction-level parallelism have mixed data-parallel program portions with scalar program portions, often leading to contradictory and controversial results. We report an instruction-level behavioral characterization of scalar code containing minimal data parallelism, extracted from highly vectorized programs of the PERFECT benchmark suite running on a Cray Y-MP system. We classify scalar basic blocks according to their instruction mix, characterize the data dependencies seen in each class, and, as a first step, measure the maximum intrablock instruction-level parallelism available. We observe skewed rather than balanced instruction distributions in scalar code and in individual basic block classes of scalar code; nonuniform distribution of parallelism across instruction classes; and, as expected, limited available intrablock parallelism. We identify frequently occurring data-dependence patterns and discuss new instructions to reduce latency. Toward effective scalar hardware, we study latency-pipelining trade-offs and restricted multiple-instruction-issue mechanisms.
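Amdahl's Law, cited above as the motivation, bounds overall speedup by the serial fraction of the work. The short sketch below evaluates that bound for a few hypothetical serial fractions; it is a textbook illustration, not data from the study.

```python
# Amdahl's Law: with serial fraction f and n-way parallelism, speedup is
# bounded by 1 / (f + (1 - f) / n), approaching 1 / f as n grows.
# The serial fractions below are hypothetical.

def amdahl_speedup(serial_fraction, n):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

if __name__ == "__main__":
    for f in (0.05, 0.20, 0.50):
        print(f"serial fraction {f:.0%}: "
              f"speedup at n=16 is {amdahl_speedup(f, 16):.2f}, "
              f"limit as n grows is {1.0 / f:.1f}")
```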
Abstract:
In large flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. However, mitigating bloat is not easy; efforts are best applied where savings would be substantial. To aid this, we develop an analytical model establishing the relation between resource bottlenecks, bloat, performance and power. Analyses with the model place into perspective results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm predictions from our model with selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat. The impact of bloat on physical resource usage and power should be understood for a full-systems perspective in order to properly deploy bloat-reduction solutions and reap their power-performance benefits.
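The connection between bloat, a bottleneck resource, throughput and energy per unit of work can be made concrete with a toy model. The sketch below is not the paper's analytical model; it only illustrates, with hypothetical parameters, why inflating the work done per transaction at a saturated bottleneck costs both throughput and joules.

```python
# Toy sketch, not the paper's model: bloat inflates the work done on the
# bottleneck resource per transaction, so throughput falls in proportion
# and energy per transaction rises. All parameters are hypothetical.

def energy_per_txn(base_work, bloat_factor, capacity, idle_power, dynamic_power):
    """Return (throughput, joules per transaction) for a saturated bottleneck."""
    work = base_work * bloat_factor           # bloat inflates per-transaction work
    throughput = capacity / work              # transactions/s at the bottleneck
    total_power = idle_power + dynamic_power  # watts while the resource is saturated
    return throughput, total_power / throughput

if __name__ == "__main__":
    for bloat in (1.0, 1.4, 2.0):
        tput, joules = energy_per_txn(
            base_work=1e6, bloat_factor=bloat,
            capacity=2e9, idle_power=100.0, dynamic_power=60.0)
        print(f"bloat x{bloat}: {tput:,.0f} txn/s, {joules * 1e3:.1f} mJ/txn")
```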
Abstract:
This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software "bugs", the failure history of the software system in the various phases of its life cycle, reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process have all been considered in varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
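One member of the class of reliability-growth models surveyed in such papers is the Goel-Okumoto NHPP model, which ties reliability growth during testing to an estimate of the errors still remaining. The sketch below uses assumed parameters rather than values fitted to real failure data, and is only an illustration of that model class.

```python
# Hedged illustration of one standard reliability-growth model (the
# Goel-Okumoto NHPP), with assumed parameters, not fitted values.

import math

def mean_failures(t, a, b):
    """Expected cumulative failures by time t: m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def remaining_errors(T, a, b):
    """Expected errors still latent after testing up to time T."""
    return a - mean_failures(T, a, b)

def reliability(x, T, a, b):
    """Probability of no failure in (T, T + x] under the NHPP model."""
    return math.exp(-(mean_failures(T + x, a, b) - mean_failures(T, a, b)))

if __name__ == "__main__":
    a, b = 120.0, 0.02   # assumed: 120 total errors, detection rate 0.02 per hour
    T = 100.0            # hours of testing so far
    print(f"expected remaining errors ~ {remaining_errors(T, a, b):.1f}")
    print(f"R(10 h | T = 100 h) ~ {reliability(10.0, T, a, b):.3f}")
```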
Abstract:
Eklundh's (1972) algorithm to transpose a large matrix stored on an external device such as a disc has been programmed and tested. A simple description of computer implementation is given in this note.
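The general idea behind such external transposition can be sketched as follows: square blocks small enough to fit in memory are read in pairs, transposed, and written back in exchanged positions. This is the generic block-exchange scheme, not Eklundh's exact row-pair passes, and the scratch file name and sizes are hypothetical.

```python
# Minimal sketch of out-of-core block transposition in the spirit of the
# note above (generic block exchange, not Eklundh's exact scheme).

import numpy as np

def transpose_on_disk(path, n, block=256, dtype=np.float32):
    """Transpose an n x n matrix stored in a binary file, block by block."""
    m = np.memmap(path, dtype=dtype, mode="r+", shape=(n, n))
    for i in range(0, n, block):
        for j in range(i, n, block):
            a = np.array(m[i:i+block, j:j+block])   # copy two blocks into memory
            b = np.array(m[j:j+block, i:i+block])
            m[i:i+block, j:j+block] = b.T            # write back transposed
            m[j:j+block, i:i+block] = a.T
    m.flush()

if __name__ == "__main__":
    n = 1024
    data = np.arange(n * n, dtype=np.float32).reshape(n, n)
    data.tofile("matrix.bin")                        # hypothetical scratch file
    transpose_on_disk("matrix.bin", n)
    check = np.fromfile("matrix.bin", dtype=np.float32).reshape(n, n)
    assert np.array_equal(check, data.T)
```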
Abstract:
In this paper, the design and implementation of a single shared bus, shared memory multiprocessing system using Intel's single board computers is presented. The hardware configuration and the operating system developed to execute the parallel algorithms are discussed. The performance evaluation studies carried out on Image are outlined.
Abstract:
This paper is about a software system, GRASS (Graphic Software System), for 2-D drawing and design, which has been implemented on a PDP-11/35 system with the RSX-11M operating system. It is a low-cost interactive graphics system for the design of two-dimensional drawings and uses a minimum of hardware. It provides comprehensive facilities for creating, editing, storing and retrieving pictures. It has been implemented in the language Pascal and has the potential to be used as a powerful data-inputting tool for a design-automation system. The important features of the system are its low cost, software character generation and the user-trainable character recognizer that has been included.
Abstract:
Polytypes have been simulated, treating them as analogues of a one-dimensional spin-half Ising chain with competing short-range and infinite-range interactions. Short-range interactions are treated as random variables to approximate conditions of growth from melt as well as from vapour. Besides ordered polytypes up to 12R, short stretches of long-period polytypes (up to 33R) have been observed. Such long-period sequences could be of significance in the context of Frank's theory of polytypism. The form of short-range interactions employed in the study has been justified by carrying out model potential calculations.
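The kind of simulation described above can be sketched with a Metropolis Monte Carlo update of a one-dimensional spin-1/2 chain whose nearest-neighbour couplings are quenched random variables, supplemented by an infinite-range (mean-field) term. The couplings, field strength and temperature below are hypothetical, not the paper's fitted model potentials.

```python
# Hedged Monte Carlo sketch of a 1-D spin-1/2 chain with random
# short-range couplings plus an infinite-range term; parameters are
# illustrative only.

import math
import random

def metropolis_chain(n=200, sweeps=1000, temp=1.0, d=0.3, seed=1):
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    j = [rng.gauss(1.0, 0.5) for _ in range(n)]       # quenched random couplings
    total = sum(spins)
    for _ in range(sweeps):
        for i in range(n):
            left, right = spins[(i - 1) % n], spins[(i + 1) % n]
            mean_field = d * total / n                # infinite-range term
            local = j[(i - 1) % n] * left + j[i] * right + mean_field
            d_e = 2.0 * spins[i] * local              # energy cost of flipping spin i
            if d_e <= 0 or rng.random() < math.exp(-d_e / temp):
                total -= 2 * spins[i]
                spins[i] = -spins[i]
    return spins

if __name__ == "__main__":
    chain = metropolis_chain()
    # Runs of equal spins correspond to stacking blocks in the polytype analogy.
    print("".join("+" if s > 0 else "-" for s in chain[:60]))
```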
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the CellBE which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
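The profiling-driven partitioning idea can be illustrated with a toy greedy assignment: filters with persistent state are pinned to the CPU, and each remaining filter goes to whichever device keeps the two loads balanced, with a crude penalty when a filter and its producer land on different devices. This is a stand-in for the paper's ILP and heuristic, and the filter names and timings are hypothetical.

```python
# Toy greedy sketch of CPU/GPU work partitioning for a stream graph;
# not the paper's ILP formulation or heuristic.

def partition(filters, transfer_cost):
    """filters: list of (name, cpu_time, gpu_time, stateful, producer)."""
    place, load = {}, {"cpu": 0.0, "gpu": 0.0}
    for name, cpu_t, gpu_t, stateful, producer in filters:
        if stateful:                                   # persistent state -> CPU
            choice = "cpu"
        else:
            cost = {}
            for dev, t in (("cpu", cpu_t), ("gpu", gpu_t)):
                # crude transfer penalty if the producer sits on the other device
                penalty = transfer_cost if producer and place.get(producer) not in (None, dev) else 0.0
                cost[dev] = max(load["cpu"], load["gpu"], load[dev] + t + penalty)
            choice = min(cost, key=cost.get)
        load[choice] += cpu_t if choice == "cpu" else gpu_t
        place[name] = choice
    return place, load

if __name__ == "__main__":
    graph = [("source", 1.0, 4.0, True, None),
             ("fir", 8.0, 1.5, False, "source"),
             ("fft", 12.0, 2.0, False, "fir"),
             ("sink", 1.0, 3.0, True, "fft")]
    print(partition(graph, transfer_cost=2.0))
```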
Abstract:
A hot billet in contact with relatively cold dies undergoes rapid cooling in the forging operation. This may give rise to unfilled cavities, poor surface finish and stalling of the press. A knowledge of billet-die temperatures as a function of time is therefore essential for process design. A computer code using the finite difference method has been written to estimate such temperature histories and validated by comparing the predicted cooling of an integral die-billet configuration with that obtained experimentally.
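A one-dimensional explicit finite-difference treatment of transient conduction conveys the flavour of such a code. The sketch below uses a single hypothetical diffusivity for both billet and die and fixed end temperatures; it is not the paper's program, only an illustration of the numerical method named above.

```python
# Hedged 1-D explicit finite-difference sketch of a hot billet cooling
# against a cold die. Properties, mesh and time step are hypothetical;
# the explicit scheme requires alpha * dt / dx**2 <= 0.5 for stability.

def cool_billet(n=50, dx=1e-3, dt=1e-3, alpha=1.2e-5,
                t_billet=1100.0, t_die=150.0, steps=5000):
    # First half of the nodes model the billet, second half the die;
    # one shared diffusivity is used purely for simplicity.
    temp = [t_billet if i < n // 2 else t_die for i in range(n)]
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        new = temp[:]
        for i in range(1, n - 1):
            new[i] = temp[i] + r * (temp[i + 1] - 2 * temp[i] + temp[i - 1])
        temp = new
    return temp

if __name__ == "__main__":
    profile = cool_billet()
    print("interface temperature ~ %.1f C" % profile[len(profile) // 2])
```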
Abstract:
Using the link-link incidence matrix to represent a simple-jointed kinematic chain, algebraic procedures have been developed to determine its structural characteristics, such as the type of freedom of the chain and the number of distinct mechanisms and driving mechanisms that can be derived from the chain. A computer program incorporating these graph-theory-based procedures has been applied successfully to the structural analysis of several typical chains.
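The basic representation can be shown concretely: a link-link adjacency matrix gives the number of joints directly, from which the planar degree of freedom follows from Gruebler's criterion, F = 3(n - 1) - 2j for simple joints. The six-link, seven-joint example below (Stephenson-type connectivity) is illustrative; identifying the structurally distinct mechanisms requires the fuller graph-theoretic tests the paper describes.

```python
# Hedged sketch: joints and Gruebler degree of freedom from a link-link
# adjacency matrix of a simple-jointed planar chain.

def degrees_of_freedom(adj):
    n = len(adj)                                   # number of links
    joints = sum(sum(row) for row in adj) // 2     # each joint is counted twice
    return n, joints, 3 * (n - 1) - 2 * joints

if __name__ == "__main__":
    # Six-link chain with seven simple joints (Stephenson-type connectivity).
    adj = [[0, 1, 0, 0, 1, 1],
           [1, 0, 1, 0, 0, 0],
           [0, 1, 0, 1, 0, 1],
           [0, 0, 1, 0, 1, 0],
           [1, 0, 0, 1, 0, 0],
           [1, 0, 1, 0, 0, 0]]
    n, j, f = degrees_of_freedom(adj)
    print(f"links={n} joints={j} F={f}")           # expects F = 1
```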
Abstract:
It is shown that a leaky aquifer model can be used for well field analysis in hard rock areas, treating the upper weathered and clayey layers as a composite unconfined aquitard overlying a deeper fractured aquifer. Two long-duration pump test studies are reported in granitic and schist regions in the Vedavati river basin. The validity of simplifications in the analytical solution is verified by finite difference computations.
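The classical leaky-aquifer (Hantush-Jacob) solution underlying this style of pump-test analysis gives the drawdown as s = Q/(4 pi T) W(u, r/B), with the well function evaluated by numerical integration. The sketch below uses hypothetical aquifer parameters, not the Vedavati basin values, and is an illustration of the analytical solution rather than of the paper's finite-difference verification.

```python
# Hedged sketch of the Hantush-Jacob leaky-aquifer drawdown; parameters
# are hypothetical.

import math
from scipy.integrate import quad

def hantush_w(u, r_over_b):
    """Leaky-aquifer well function W(u, r/B)."""
    integrand = lambda y: math.exp(-y - (r_over_b ** 2) / (4.0 * y)) / y
    value, _ = quad(integrand, u, math.inf)
    return value

def drawdown(Q, T, S, r, t, B):
    """Drawdown s = Q / (4 pi T) * W(u, r/B) with u = r^2 S / (4 T t)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * hantush_w(u, r / B)

if __name__ == "__main__":
    # Q in m^3/day, T in m^2/day, r and B in metres, t in days.
    s = drawdown(Q=500.0, T=250.0, S=1e-3, r=50.0, t=1.0, B=400.0)
    print(f"drawdown ~ {s:.2f} m")
```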
Abstract:
An estimate of the irrigation potential over and above the existing utilization was made based on the ground water potential in the Vedavati river basin. The estimate is based on assumed crops and cropping patterns as per existing practice in the various taluks of the basin. Irrigation potential was estimated talukwise based on the available ground water potential identified from the simulation study. It is estimated that 84,100 hectares of additional land can be brought under irrigation from ground water in the entire basin.
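The arithmetic behind such an estimate can be sketched simply: the additional irrigable area is the usable groundwater potential divided by the area-weighted crop water requirement of the assumed cropping pattern. The crop mix, water duties and potential below are hypothetical, not the talukwise figures from the basin study.

```python
# Hedged back-of-the-envelope sketch of irrigation-potential arithmetic;
# all figures are hypothetical.

def irrigable_area(potential_mcm, crop_mix):
    """crop_mix: {crop: (share_of_area, water_requirement_m)}."""
    # Weighted depth of water needed per hectare of the assumed pattern.
    weighted_depth_m = sum(share * depth for share, depth in crop_mix.values())
    water_per_hectare_mcm = weighted_depth_m * 1e4 / 1e6   # 1 ha = 10^4 m^2
    return potential_mcm / water_per_hectare_mcm

if __name__ == "__main__":
    crops = {"ragi": (0.5, 0.45), "groundnut": (0.3, 0.35), "paddy": (0.2, 1.10)}
    area_ha = irrigable_area(potential_mcm=450.0, crop_mix=crops)
    print(f"additional irrigable area ~ {area_ha:,.0f} ha")
```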