67 results for proposed program
at Indian Institute of Science - Bangalore - India
Abstract:
The worldwide research in nanoelectronics is motivated by the fact that the scaling of MOSFETs by the conventional top-down approach will not continue forever, due to fundamental limits imposed by physics, even if it is delayed for some more years. The research community in this domain has largely become multidisciplinary, trying to discover novel transistor structures built with novel materials so that the semiconductor industry can continue to follow its projected roadmap. However, setting up and running a nanoelectronics facility for research is hugely expensive. Therefore, a common model is to set up a central networked facility that can be shared by a large number of users across the research community. The Centres for Excellence in Nanoelectronics (CEN) at the Indian Institute of Science, Bangalore (IISc) and the Indian Institute of Technology, Bombay (IITB) are such central networked facilities, set up in 2005 with funding of about USD 20 million from the Department of Information Technology (DIT), Ministry of Communications and Information Technology (MCIT), Government of India. The Indian Nanoelectronics Users Program (INUP) is a missionary program intended not only to spread awareness and provide training in nanoelectronics but also to provide easy access to the latest facilities at CEN at IISc and IITB for the wider nanoelectronics research community in India. This program, also funded by MCIT, aims to train researchers by conducting workshops and hands-on training programs, and by providing access to the CEN facilities. It is a unique program aimed at expediting nanoelectronics research in the country, since the funding required for projects proposed by researchers from around India has prior financial approval from the government and requires only technical approval by the IISc/IITB team. This paper discusses the objectives of INUP, gives brief descriptions of the CEN facilities and the training programs conducted by INUP, and lists various research activities currently under way in the program.
Abstract:
A plane-strain elastic interaction analysis of a strip footing resting on a reinforced soil bed has been made using a combined analytical and finite element method (FEM). In this approach, the stiffness matrix for the footing has been obtained using the FEM; for the reinforced soil bed (half-plane), the stiffness matrix has been obtained using an analytical solution. For the latter, the reinforced zone has been idealised as (i) an equivalent orthotropic infinite strip (composite approach) and (ii) a multilayered system (discrete approach). In the analysis, the interface between the strip footing and the reinforced half-plane has been assumed to be (i) frictionless and (ii) fully bonded. The contact pressure distribution and the settlement reduction are given for different depths of the footing and schemes of reinforcement in the soil. The load-deformation behaviour of the reinforced soil obtained using the above modelling has been compared with some available analytical and model test results. The equivalent orthotropic approach proposed in this paper is easy to program and is shown to predict the reinforcing effects reasonably well.
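As a hedged illustration of the combined analytical-FEM idea described above (not the authors' actual formulation), the sketch below adds a footing stiffness matrix and a half-plane stiffness matrix over shared interface degrees of freedom and solves for interface settlements; all matrices and load values are hypothetical placeholders.

```python
import numpy as np

# Hypothetical 3-DOF interface: the footing stiffness (from an FE model) and the
# half-plane stiffness (from an analytical solution) act in parallel at the
# interface, so their contributions add before solving K u = p.
K_footing = np.array([[ 4.0, -2.0,  0.0],
                      [-2.0,  4.0, -2.0],
                      [ 0.0, -2.0,  4.0]])   # placeholder FE stiffness (kN/mm)
K_halfplane = np.array([[ 3.0, -1.0,  0.0],
                        [-1.0,  3.0, -1.0],
                        [ 0.0, -1.0,  3.0]]) # placeholder analytical stiffness (kN/mm)

p = np.array([10.0, 20.0, 10.0])             # applied interface loads (kN)

K = K_footing + K_halfplane                  # combined interaction stiffness
u = np.linalg.solve(K, p)                    # interface settlements (mm)
contact_forces = K_halfplane @ u             # load carried by the reinforced bed

print("settlements:", u)
print("contact forces on half-plane:", contact_forces)
```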
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes in which an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a Linear Program (LP). A bound on the Generalization Error of this approach is given in terms of the Leave-One-Out Error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold-standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational and experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
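As a hedged illustration of the reformulation mentioned in the Results section, the sketch below poses an l1-constrained fit as a linear program by using an l1 residual loss and splitting each coefficient into nonnegative parts; the synthetic data, the l1 budget t, and the choice of residual loss are assumptions for illustration and not necessarily the paper's exact LP.

```python
# Hedged sketch: one common way to pose an l1-constrained ("LASSO-style") fit as
# a linear program, via an l1 residual loss and nonnegative variable splitting.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, t = 30, 10, 1.0                    # samples, predictors (genes), l1 budget
X = rng.standard_normal((n, p))
y = X[:, 0] - 0.5 * X[:, 3] + 0.05 * rng.standard_normal(n)   # sparse ground truth

# Decision vector z = [w_plus (p), w_minus (p), r (n)]; minimise sum of residuals r.
c = np.concatenate([np.zeros(2 * p), np.ones(n)])
I = np.eye(n)
A_ub = np.block([
    [ X, -X, -I],                                            #  X(w+ - w-) - y <= r
    [-X,  X, -I],                                            #  y - X(w+ - w-) <= r
    [np.ones((1, p)), np.ones((1, p)), np.zeros((1, n))],    #  ||w||_1 <= t
])
b_ub = np.concatenate([y, -y, [t]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")

w = res.x[:p] - res.x[p:2 * p]           # recover signed coefficients
print("estimated coefficients:", np.round(w, 3))
```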
Abstract:
The research in software science has so far concentrated on three measures of program complexity: (a) software effort; (b) cyclomatic complexity; and (c) program knots. In this paper we propose a measure of the logical complexity of programs in terms of the variable dependency of sequences of computations, the inductive effort in writing loops, and the complexity of data structures. The proposed complexity measure is described with the aid of a graph which exhibits diagrammatically the dependence of a computation at a node upon the computations at other (earlier) nodes. Complexity measures of several example programs have been computed, and the related issues are discussed. The paper also describes the role played by data structures in deciding program complexity.
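The abstract does not give the measure's exact formula; as a purely hypothetical illustration of a dependency-graph-based count, the sketch below records which earlier computations each statement depends on and uses the number of dependency edges as a crude score. It is not the measure defined in the paper.

```python
# Hypothetical illustration only: build a variable-dependency graph for a tiny
# straight-line program and count dependency edges as a crude complexity score.
program = [
    ("a", []),           # a = input()
    ("b", ["a"]),        # b = a * 2
    ("c", ["a", "b"]),   # c = a + b
    ("d", ["c"]),        # d = c - 1
]

edges = []
last_def = {}                                     # most recent node defining each variable
for node, (target, uses) in enumerate(program):
    for var in uses:
        if var in last_def:
            edges.append((last_def[var], node))   # this node depends on an earlier node
    last_def[target] = node

print("dependency edges:", edges)
print("crude complexity score:", len(edges))
```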
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of Constant Propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
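A hedged sketch (not the paper's framework) of why control-flow merges lose precision in constant propagation: when two paths reach a join with different constant values for the same variable, the lattice meet yields "not a constant", whereas duplicating the join node per incoming path keeps both constants and enables folding.

```python
# Minimal constant-propagation lattice: a value is either a concrete constant
# or TOP ("not a constant"). Only illustrates the precision loss at a merge.
TOP = object()

def meet(a, b):
    """Meet of two lattice values at a control-flow merge."""
    return a if a == b else TOP

# Path 1 computes x = 1, path 2 computes x = 2, both reach the same join node.
x_path1, x_path2 = 1, 2
x_at_join = meet(x_path1, x_path2)
print("merged join:", "not a constant" if x_at_join is TOP else x_at_join)

# Restructuring duplicates the join node, one copy per incoming path, so each
# copy sees a single constant and, e.g., 'y = x + 1' folds on both copies.
for copy, x in (("join_copy_1", x_path1), ("join_copy_2", x_path2)):
    print(copy, "y =", x + 1)
```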
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism, which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the CellBE, which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and on the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
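A minimal sketch of the kind of CPU/GPU work-partitioning ILP described above, written with the PuLP modelling library; the task costs, the per-edge transfer cost, and the simplified makespan objective are hypothetical and omit the buffer-layout and software-pipelining constraints of the paper's actual formulation.

```python
# Hypothetical CPU/GPU partitioning ILP (not the paper's exact formulation):
# each task runs on the CPU or the GPU, crossing edges pay a transfer cost,
# and we minimise the load of the slower device.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

tasks = ["t0", "t1", "t2", "t3"]
cpu_time = {"t0": 4, "t1": 6, "t2": 3, "t3": 5}      # assumed profiled costs
gpu_time = {"t0": 1, "t1": 2, "t2": 4, "t3": 1}
edges = [("t0", "t1"), ("t1", "t2"), ("t2", "t3")]   # producer -> consumer
transfer = 2                                          # assumed cost per cut edge

prob = LpProblem("cpu_gpu_partition", LpMinimize)
on_gpu = LpVariable.dicts("on_gpu", tasks, cat=LpBinary)
cut = LpVariable.dicts("cut", edges, cat=LpBinary)    # 1 if edge crosses devices
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                      # objective: minimise makespan
for u, v in edges:                                    # cut >= |on_gpu[u] - on_gpu[v]|
    prob += cut[(u, v)] >= on_gpu[u] - on_gpu[v]
    prob += cut[(u, v)] >= on_gpu[v] - on_gpu[u]
xfer = transfer * lpSum(cut[e] for e in edges)
prob += makespan >= lpSum(cpu_time[t] * (1 - on_gpu[t]) for t in tasks) + xfer
prob += makespan >= lpSum(gpu_time[t] * on_gpu[t] for t in tasks) + xfer

prob.solve()
print({t: int(on_gpu[t].value()) for t in tasks}, "makespan =", makespan.value())
```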
Abstract:
This paper presents a Chance-Constrained Programming approach for constructing maximum-margin classifiers which are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program by using large-deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions on the underlying uncertainty. Classifiers built using the proposed approach are less conservative, yield higher margins and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
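A minimal sketch of a maximum-margin classifier whose margin constraints are second-order cone constraints, written with cvxpy. It uses spherical uncertainty of an assumed radius around each training point as a stand-in; the toy data and radius are placeholders, and this is a generic robust-SVM formulation rather than the Bernstein-based chance-constraint relaxation developed in the paper.

```python
# Hedged sketch: robust maximum-margin classification with SOC margin constraints.
import cvxpy as cp
import numpy as np

X = np.array([[2.0, 2.0], [3.0, 1.5], [-2.0, -1.0], [-1.5, -2.5]])  # toy data
y = np.array([1.0, 1.0, -1.0, -1.0])
r = 0.3                                  # assumed uncertainty radius per example

w = cp.Variable(2)
b = cp.Variable()
constraints = [
    float(y[i]) * (X[i] @ w + b) >= 1 + r * cp.norm(w, 2)   # SOC margin constraint
    for i in range(len(y))
]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w)), constraints)
prob.solve()
print("w =", w.value, "b =", b.value)
```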
Abstract:
The data obtained in the earlier parts of this series for the donor- and acceptor-end parameters of N-H...O and O-H...O hydrogen bonds have been utilised to obtain a qualitative working criterion to classify the hydrogen bonds into three categories: "very good" (VG), "moderately good" (MG) and "weak" (W). The general distribution curves for all four parameters are found to be nearly of the Gaussian type. Assuming that VG hydrogen bonds lie between 0 and ±1σ, MG hydrogen bonds between ±1σ and ±2σ, and W hydrogen bonds beyond ±2σ (where σ is the standard deviation), suitable cut-off limits for classifying the hydrogen bonds into the three categories have been derived. These limits are used to obtain VG and MG ranges for the four parameters (two at the donor end and two at the acceptor end). The qualitative strength of a hydrogen bond is decided by the cumulative application of the criteria to all four parameters. The criterion has been further applied to some practical examples in conformational studies, such as the α-helix, and can be used for obtaining suitable locations of hydrogen atoms to form good hydrogen bonds. An empirical approach to the energy of hydrogen bonds in the three categories has also been presented.
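A small sketch of the kind of σ-based cut-off classification described above; the parameter values, means, and standard deviations are placeholders, and the rule that a bond takes the weakest class over its four parameters is one reading of "cumulative application", not the paper's exact procedure.

```python
# Hedged sketch of a sigma-cutoff classification: each geometric parameter is
# scored by how many standard deviations it lies from its mean, and the bond
# takes the weakest class over all parameters. Means/sigmas are placeholders.
CLASSES = ["VG", "MG", "W"]                 # very good, moderately good, weak

def classify_parameter(value, mean, sigma):
    z = abs(value - mean) / sigma
    if z <= 1.0:
        return "VG"
    if z <= 2.0:
        return "MG"
    return "W"

def classify_bond(params):
    """params: list of (value, mean, sigma) for the four geometric parameters."""
    labels = [classify_parameter(v, m, s) for v, m, s in params]
    return max(labels, key=CLASSES.index)   # weakest class wins

example = [(2.95, 2.90, 0.10), (15.0, 10.0, 8.0), (120.0, 125.0, 10.0), (3.1, 2.9, 0.15)]
print(classify_bond(example))
```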
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. StreamIt graphs describe task, data and pipeline parallelism, which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem - both scheduling and assignment of filters to processors - as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available on GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single-threaded CPU.
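The buffer layout technique itself is not detailed in the abstract; as a generic illustration of why layout matters for GPU memory bandwidth, the sketch below contrasts a per-thread contiguous layout with an interleaved layout in which, at each step, consecutive threads touch consecutive addresses (the coalesced access pattern).

```python
# Generic illustration only, not the paper's technique.
import numpy as np

NUM_THREADS, ITEMS = 4, 3
per_thread = np.arange(NUM_THREADS * ITEMS).reshape(NUM_THREADS, ITEMS)

print("per-thread layout :", per_thread.ravel())     # thread t owns ITEMS contiguous slots
print("interleaved layout:", per_thread.T.ravel())   # element i of every thread is adjacent

step = 1   # addresses the warp touches when every thread reads its step-th element
print("per-thread  addresses:", [t * ITEMS + step for t in range(NUM_THREADS)])        # strided
print("interleaved addresses:", [step * NUM_THREADS + t for t in range(NUM_THREADS)])  # consecutive
```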
Abstract:
A new elasto-plastic cracking constitutive model for reinforced concrete is presented. The nonlinear effects considered cover almost all the nonlinearities exhibited by reinforced concrete under short-term monotonic loading. They include concrete cracking in tension, plasticity in compression, aggregate interlock, tension softening, elasto-plastic behavior of steel, bond-slip between concrete and steel reinforcement, and tension stiffening. A new procedure for incorporating bond-slip in smeared steel elements is described. A modified Huber-Hencky-Mises failure criterion for plastic deformation of concrete, which fits the experimental results under biaxial stresses better, is proposed. Multiple cracking at Gauss points and their opening and closing are considered. Matrix expressions are developed and incorporated in a nonlinear finite element program. After the objectivity of the model is demonstrated, the model is used to analyze two different types of problems: one, a set of four shear panels, and the other, a reinforced concrete beam without shear reinforcement. The results of the analysis compare favorably with the experimental results.
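The modified criterion itself is not given in the abstract; as a hedged reference point, the sketch below evaluates the standard (unmodified) Huber-Hencky-Mises criterion for a biaxial plane-stress state, which is the form the paper's criterion modifies to fit biaxial concrete data better. The stresses and strength used are placeholders.

```python
import math

def huber_hencky_mises_biaxial(sigma1, sigma2, f_y):
    """Standard (unmodified) von Mises check for a plane-stress state given by
    principal stresses sigma1, sigma2; returns (equivalent stress, yielded?).
    The paper proposes a *modified* form for concrete, not reproduced here."""
    sigma_eq = math.sqrt(sigma1**2 - sigma1 * sigma2 + sigma2**2)
    return sigma_eq, sigma_eq >= f_y

# Hypothetical biaxial compression state (MPa) against a placeholder strength.
print(huber_hencky_mises_biaxial(-20.0, -15.0, 30.0))
```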
Abstract:
A user-friendly, interactive computer program, CIRDIC, is developed which calculates the molar ellipticity and molar circular dichroic absorption coefficients from the CD spectrum. This, in combination with the LOTUS 1-2-3 spreadsheet, will give spectra of the above parameters versus wavelength. The code is implemented in Microsoft FORTRAN 77 and runs on any IBM-compatible PC under the MS-DOS environment.
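A hedged sketch of the standard CD conversions such a program performs (the abstract does not list CIRDIC's exact internal formulas): observed ellipticity in millidegrees to molar ellipticity, and molar ellipticity to the molar circular dichroic absorption coefficient Δε.

```python
# Hedged sketch of standard CD conversions, not necessarily CIRDIC's exact code.
def molar_ellipticity(theta_mdeg, conc_molar, path_cm):
    """[theta] in deg*cm^2/dmol from observed ellipticity in millidegrees,
    molar concentration in mol/L and path length in cm."""
    return theta_mdeg / (10.0 * conc_molar * path_cm)

def delta_epsilon(molar_ellip):
    """Molar circular dichroic absorption coefficient (M^-1 cm^-1)."""
    return molar_ellip / 3298.2

theta = molar_ellipticity(theta_mdeg=15.0, conc_molar=1e-4, path_cm=1.0)
print(theta, delta_epsilon(theta))
```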
Abstract:
The seismic slope stability analysis of the right abutment of a railway bridge proposed at about 350 m above ground level, crossing a river and connecting two huge hillocks in the Himalayas, India, is presented in this paper. The rock slopes are composed of highly jointed rock mass, and the joint spacing and orientation vary at different locations. Seismic slope stability analysis of the slope under consideration is carried out using both a pseudo-static approach and a time-response approach, as the site is located in seismic zone V as per the earthquake zonation maps of India. Stability of the slope is studied numerically using the program FLAC. The results obtained from the pseudo-static analysis are presented in the form of a Factor of Safety (FOS), and the results obtained from the time-response analysis of the slope are presented in terms of horizontal and vertical displacements along the slope. The results obtained from both analyses confirmed the global stability of the slope, as the FOS in the pseudo-static analysis is above 1.0 and the displacements observed in the time-response analysis are within the permissible limits. This paper also presents the results obtained from a parametric analysis performed for the time-response analysis in order to understand the effect of individual parameters on the overall stability of the slope.
Abstract:
The method of structured programming or program development using a top-down, stepwise refinement technique provides a systematic approach for the development of programs of considerable complexity. The aim of this paper is to present the philosophy of structured programming through a case study of a nonnumeric programming task. The problem of converting a well-formed formula in first-order logic into prenex normal form is considered. The program has been coded in the programming language PASCAL and implemented on a DEC-10 system. The program has about 500 lines of code and comprises 11 procedures.
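A minimal sketch of the conversion the program performs, written in Python rather than the paper's PASCAL. It assumes formulas whose bound variables are already standardized apart and whose implications have been eliminated, so only negation, conjunction, disjunction, and the two quantifiers remain.

```python
# Formulas are tuples: ("atom", s), ("not", f), ("and", f, g), ("or", f, g),
# ("forall", v, f), ("exists", v, f). Bound variables are assumed distinct.
def prenex(f):
    op = f[0]
    if op == "atom":
        return f
    if op == "not":
        g = prenex(f[1])
        if g[0] == "forall":                      # not forall x . phi  ==  exists x . not phi
            return ("exists", g[1], prenex(("not", g[2])))
        if g[0] == "exists":                      # not exists x . phi  ==  forall x . not phi
            return ("forall", g[1], prenex(("not", g[2])))
        return ("not", g)
    if op in ("and", "or"):
        left, right = prenex(f[1]), prenex(f[2])
        if left[0] in ("forall", "exists"):       # (Qx phi) op psi  ==  Qx (phi op psi)
            return (left[0], left[1], prenex((op, left[2], right)))
        if right[0] in ("forall", "exists"):      # phi op (Qx psi)  ==  Qx (phi op psi)
            return (right[0], right[1], prenex((op, left, right[2])))
        return (op, left, right)
    if op in ("forall", "exists"):
        return (op, f[1], prenex(f[2]))

# forall x P(x) and not exists y Q(y)  -->  forall x forall y (P(x) and not Q(y))
wff = ("and", ("forall", "x", ("atom", "P(x)")),
              ("not", ("exists", "y", ("atom", "Q(y)"))))
print(prenex(wff))
```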
Abstract:
A detailed analysis of the structural and position-dependent characteristic features of helices will give a better understanding of secondary structure formation in globular proteins. Here we describe an algorithm that quantifies the geometry of helices in proteins on the basis of their C-alpha atoms alone. The Fortran program HELANAL can extract the helices from PDB files and then characterise the overall geometry of each helix as linear, curved or kinked, in terms of its local structural features, viz. local helical twist and rise, virtual torsion angle, local helix origins and bending angles between successive local helix axes. Even helices with a large radius of curvature are unambiguously identified as being linear or curved. The program can also be used to differentiate a kinked helix from other motifs, such as a helix-loop-helix or a helix-turn-helix (with a single residue linker), with the help of local bending angles. In addition, the program can be used to characterise the helix start and end as well as other types of secondary structures.
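A small numpy sketch of one ingredient mentioned above, the bending angle between successive local helix axes: here each local axis is estimated from four consecutive C-alpha positions as the cross product of second-difference vectors (a common construction that may differ in detail from HELANAL's procedure), and the test trace is an idealized straight helix.

```python
import numpy as np

def local_axes(ca):
    """Estimate a local helix axis from each window of four C-alpha positions:
    second-difference vectors point toward the helix axis, so their cross
    product is approximately parallel to it."""
    axes = []
    for i in range(len(ca) - 3):
        d1 = ca[i] - 2 * ca[i + 1] + ca[i + 2]
        d2 = ca[i + 1] - 2 * ca[i + 2] + ca[i + 3]
        axis = np.cross(d1, d2)
        axes.append(axis / np.linalg.norm(axis))
    return axes

def bending_angles(axes):
    """Angle (degrees) between successive local helix axes."""
    return [np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
            for a, b in zip(axes, axes[1:])]

# Idealized alpha-helix-like trace: ~100 degrees twist and 1.5 A rise per residue.
t = np.radians(100.0) * np.arange(10)
ca = np.column_stack([2.3 * np.cos(t), 2.3 * np.sin(t), 1.5 * np.arange(10)])
print(bending_angles(local_axes(ca)))   # near-zero angles for a straight helix
```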
Abstract:
This paper introduces CSP-like communication mechanisms into Backus’ Functional Programming (FP) systems extended by nondeterministic constructs. Several new functionals are used to describe nondeterminism and communication in programs. The functionals union and restriction are introduced into FP systems to develop a simple algebra of programs with nondeterminism. The behaviour of the other functionals proposed in this paper is characterized by the properties of union and restriction. The axiomatic semantics of the communication constructs are presented. Examples show that it is possible to reason about a communicating program by first transforming it into a non-communicating program using the axioms of communication, and then reasoning about the resulting non-communicating version. It is also shown that communicating programs can be developed from non-communicating programs given as specifications by using a transformational approach.
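The paper's functionals are defined over Backus-style FP programs; as a loose, hypothetical illustration in Python, "union" below models a program that may nondeterministically behave like either of two programs (returning the set of possible results), and "restriction" filters those results by a predicate. This is one informal reading, not the paper's formal semantics.

```python
# Hypothetical illustration of a nondeterministic program algebra: a "program"
# maps an input to the set of its possible results.
def lift(f):
    """Deterministic function -> nondeterministic program (singleton result set)."""
    return lambda x: {f(x)}

def union(p, q):
    """Nondeterministic choice: may behave like p or like q."""
    return lambda x: p(x) | q(x)

def restriction(p, pred):
    """Keep only the behaviours of p whose results satisfy pred."""
    return lambda x: {r for r in p(x) if pred(r)}

double = lift(lambda x: 2 * x)
square = lift(lambda x: x * x)
prog = restriction(union(double, square), lambda r: r % 2 == 0)

print(prog(3))   # {6}: the odd result of squaring is restricted away
print(prog(4))   # {8, 16}: both behaviours survive
```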