50 results for Survey Programs.
at Indian Institute of Science - Bangalore - India
Abstract:
This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software “bugs”, the failure history of the software system in the various phases of its life cycle, the reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process have all been considered in varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
Abstract:
The research in software science has so far been concentrated on three measures of program complexity: (a) software effort; (b) cyclomatic complexity; and (c) program knots. In this paper we propose a measure of the logical complexity of programs in terms of the variable dependency of the sequence of computations, the inductive effort in writing loops, and the complexity of data structures. The proposed complexity measure is described with the aid of a graph which exhibits diagrammatically the dependence of a computation at a node upon the computations at other (earlier) nodes. Complexity measures of several example programs have been computed and the related issues discussed. The paper also describes the role played by data structures in deciding program complexity.
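The variable-dependency graph idea can be illustrated on a toy straight-line program (a minimal sketch, not the paper's actual measure): each assignment contributes one edge from every variable read on the right-hand side to the variable written, so the edge set grows with the logical entanglement of the computation.

```python
import ast

def dependency_edges(source):
    """Collect (read_var, written_var) dependency edges from simple assignments."""
    edges = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            target = node.targets[0].id
            # Every variable read on the right-hand side feeds the target.
            for name in ast.walk(node.value):
                if isinstance(name, ast.Name):
                    edges.add((name.id, target))
    return edges

program = """
a = 1
b = a + 2
c = a * b
"""
print(sorted(dependency_edges(program)))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

A full measure in the paper's spirit would additionally weight loop-carried dependencies and data-structure accesses; this sketch only counts direct read-to-write edges.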
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multicore architectures. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the CellBE which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work-partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X with a maximum of 51.96X over a single threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
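The work-partitioning problem described above can be sketched in miniature (hypothetical filter costs and a brute-force search standing in for the paper's ILP formulation): each filter is assigned to the CPU or the GPU, every stream edge crossing the device boundary pays a fixed transfer latency, and the objective is the resulting makespan.

```python
from itertools import product

# Hypothetical profiled execution times (ms) per filter on each device.
cpu_cost = {"src": 2, "fir": 9, "fft": 12, "sink": 1}
gpu_cost = {"src": 5, "fir": 1, "fft": 2, "sink": 4}
edges = [("src", "fir"), ("fir", "fft"), ("fft", "sink")]
transfer = 3  # ms paid for every edge that crosses the CPU/GPU boundary

filters = list(cpu_cost)

def makespan(assign):
    cpu = sum(cpu_cost[f] for f in filters if assign[f] == "cpu")
    gpu = sum(gpu_cost[f] for f in filters if assign[f] == "gpu")
    xfer = sum(transfer for a, b in edges if assign[a] != assign[b])
    return max(cpu, gpu) + xfer

# Exhaustive search over 2^n assignments; the paper solves the scaled-up
# version of this as an ILP and with a near-optimal heuristic.
best = min(
    (dict(zip(filters, devs)) for devs in product(("cpu", "gpu"), repeat=len(filters))),
    key=makespan,
)
print(best, makespan(best))  # best assignment and its makespan
```

On this toy instance the cheap filters stay on the CPU while the heavy ones move to the GPU, even though that costs two boundary crossings, which mirrors the trade-off the ILP balances.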
Abstract:
An aerobiological survey to study the incidence and concentration of the pollen of Parthenium hysterophorus was conducted in Bangalore, India, for a period of one year. This study indicated that Parthenium pollen was present in the atmosphere in significant amounts, either as single pollen grains or in the form of clumps, during the months of June to August.
Abstract:
The environment exerts an important influence on the performance of space systems. A brief review of most of the studies, presented over the past eighteen years, relating to the influence and the possible utilization of the solar radiation pressure and aerodynamic forces, with particular reference to attitude dynamics and control of satellites, is presented here. The semi-passive stabilizers employing these forces show promise of long-life, low-power and economic systems which, though slower in response, compare well with the active controllers. It is felt that much more attention is necessary to the actual implementation of these ideas and devices, some of which are quite ingenious and unique.
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem - both scheduling and assignment of filters to processors - as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU.
Abstract:
Clinical and mycological investigations were made on 225 cases of suspected dermatomycoses. Of these, 102 were microscopically positive, but only 63 were culturally positive; these are analysed here with regard to clinical patterns and aetiological species, age, sex and occupational incidence, and susceptibility to griseofulvin in vitro. As in most other parts of India, Trichophyton rubrum was the dominant species. An unusually high proportion of Epidermophyton floccosum was observed. Of the clinical types, tinea cruris was the most common. The isolates were sensitive to griseofulvin at low concentrations of 1 to 5 μg per ml of agar medium, E. floccosum being the most sensitive.
Abstract:
Cascaded multilevel inverters synthesize a medium-voltage output based on a series connection of power cells which use standard low-voltage component configurations. This characteristic allows one to achieve high-quality output voltages and input currents and also outstanding availability due to their intrinsic component redundancy. Due to these features, the cascaded multilevel inverter has been recognized as an important alternative in the medium-voltage inverter market. This paper presents a survey of different topologies, control strategies and modulation techniques used by these inverters. Regenerative and advanced topologies are also discussed. Applications where the mentioned features play a key role are shown. Finally, future developments are addressed.
Abstract:
This paper makes explicit the relation between relative part position and kinematic freedom of the parts which is implicitly available in the literature. An extensive set of representative papers in the areas of assembly and kinematic modelling is reviewed to specifically identify how the ideas in the two areas are related and influencing the development of each other. The papers are categorised by the approaches followed in the specification, representation, and solution of the part relations. It is observed that the extent of the part geometry is not respected in modelling schemes and as a result, the causal flow of events (proximity–contact–mobility) during the assembling process is not realised in the existing modelling paradigms, which are focusing on either the relative positioning problem or the relative motion problem. Though an assembly is a static description of part configuration, achievement of this configuration requires availability of relative motion for bringing parts together during the assembly process. On the other hand, the kinematic freedom of a part depends on the nature of contacting regions with other parts in its static configuration. These two problems are thus related through the contact geometry. The chronology of the approaches that significantly contributed to the development of the subject is also included in the paper.
Abstract:
In this paper, the work that has been done in several laboratories and academic institutions in India in the area of wind engineering in the past 20–30 years has been reviewed. Studies on extreme and mean hourly winds, philosophies adopted in model studies in wind tunnels and some of the important results that have been obtained are described. Suggestions for future studies are indicated.
Abstract:
A broad numerical survey of relativistic rotating neutron star structures was compiled using an exhaustive list of presently available equation of state models for neutron star matter. The structure parameters (spherical deformations in mass and radii, the moment of inertia and quadrupole moment, oblateness, and free precession) are calculated using the formalism proposed by Hartle and Thorne (1968). The results are discussed in relation to the relevant observational information. Binary pulsar data and X-ray burst sources provide information on the bulk properties of neutron stars, enabling the derivation of constraints that can be put on the structure of neutron stars and equation of state models.
Abstract:
We study the problem of finding a set of constraints of minimum cardinality which when relaxed in an infeasible linear program, make it feasible. We show the problem is NP-hard even when the constraint matrix is totally unimodular and prove polynomial-time solvability when the constraint matrix and the right-hand-side together form a totally unimodular matrix.
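The problem in this abstract can be illustrated on a one-variable toy instance (a brute-force sketch, not the paper's method): feasibility of a set of bound constraints on x reduces to an interval check, and the minimum-cardinality relaxation set is found by trying drop-sets of increasing size.

```python
from itertools import combinations

# Each 1-D constraint is ("ge", c) meaning x >= c, or ("le", c) meaning x <= c.
# This system is infeasible because x >= 3 conflicts with x <= 1.
constraints = [("ge", 3.0), ("le", 1.0), ("ge", 0.0), ("le", 5.0)]

def feasible(cons):
    # Feasible iff the tightest lower bound does not exceed the tightest upper bound.
    lo = max((c for kind, c in cons if kind == "ge"), default=float("-inf"))
    hi = min((c for kind, c in cons if kind == "le"), default=float("inf"))
    return lo <= hi

def min_relaxation(cons):
    # Try dropping subsets of increasing size until the remainder is feasible;
    # the first success has minimum cardinality.
    for k in range(len(cons) + 1):
        for drop in combinations(range(len(cons)), k):
            kept = [c for i, c in enumerate(cons) if i not in drop]
            if feasible(kept):
                return list(drop)
    return list(range(len(cons)))

print(min_relaxation(constraints))  # [0]: dropping x >= 3 restores feasibility
```

The exponential search reflects the NP-hardness result in the abstract; the paper's contribution is identifying structured cases (total unimodularity of constraints and right-hand side together) where polynomial-time solution is possible.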
Abstract:
This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies. The authors have included a brief historical perspective of the research efforts in this area and have compiled a substantial yet not exhaustive bibliography. The authors have also identified several important questions that are still open to investigation.
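A standard workhorse from this literature is relative value iteration for finite average-cost Markov decision processes; the sketch below runs it on a made-up two-state deterministic example (not taken from the survey), where the offset subtracted at the reference state converges to the optimal average cost per step.

```python
# Two states, two actions; "move" flips the state, "stay" keeps it.
cost = {(0, "stay"): 1, (0, "move"): 2, (1, "stay"): 3, (1, "move"): 0}
step = {(0, "stay"): 0, (0, "move"): 1, (1, "stay"): 1, (1, "move"): 0}
states, actions = (0, 1), ("stay", "move")

h = {s: 0.0 for s in states}  # relative value function
for _ in range(200):
    # One Bellman backup for the average-cost criterion.
    q = {s: min(cost[s, a] + h[step[s, a]] for a in actions) for s in states}
    gain = q[0]                       # subtracting a reference state keeps h bounded
    h = {s: q[s] - gain for s in states}

# Here staying at state 0 (cost 1/step) and cycling 0 -> 1 -> 0
# (costs 2 + 0 over two steps) both average 1, so the gain is 1.
print(round(gain, 6))
```

Transitions here are deterministic for brevity; with stochastic transitions the backup becomes an expectation over next states, and the survey's Borel-space results generalize the same fixed-point structure.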