27 results for typical program
at Indian Institute of Science - Bangalore - India
Abstract:
The theoretical aerodynamic characteristics of a typical lifting symmetric supercritical airfoil, demonstrating its superiority over the NACA 0012 airfoil from which it was derived, are presented in this paper. Further, limited experimental results confirming the theoretical inference are also presented.
Abstract:
Using the link-link incidence matrix to represent a simple-jointed kinematic chain, algebraic procedures have been developed to determine its structural characteristics, such as the type of freedom of the chain and the number of distinct mechanisms and driving mechanisms that can be derived from it. A computer program incorporating these graph-theory-based procedures has been applied successfully to the structural analysis of several typical chains.
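The abstract does not reproduce the algebraic procedures themselves. As a hedged illustration of the kind of structural quantity they compute, the sketch below reads one basic characteristic, the planar degree of freedom by Grübler's criterion, off a link-link adjacency matrix of a simple-jointed chain. The matrix representation, function name and the example chain are assumptions for illustration, not taken from the paper, which goes further (distinct mechanisms, driving mechanisms).

# Minimal sketch (assumed representation): a simple-jointed kinematic chain as a
# symmetric 0/1 link-link adjacency matrix; each 1 above the diagonal is a joint.

def grubler_dof(adjacency):
    """Planar degree of freedom F = 3(n - 1) - 2j for a simple-jointed chain,
    where n is the number of links and j the number of single-DOF joints."""
    n = len(adjacency)                                        # number of links
    j = sum(adjacency[i][k]                                   # joints = edges in
            for i in range(n) for k in range(i + 1, n))       # the upper triangle
    return 3 * (n - 1) - 2 * j

# Example: Watt six-link chain (6 links, 7 joints) -> F = 3*5 - 2*7 = 1
watt = [[0, 1, 1, 0, 1, 0],
        [1, 0, 0, 1, 0, 1],
        [1, 0, 0, 1, 0, 0],
        [0, 1, 1, 0, 0, 0],
        [1, 0, 0, 0, 0, 1],
        [0, 1, 0, 0, 1, 0]]
print(grubler_dof(watt))   # 1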
Abstract:
Two typical alternative conformations for double-stranded polynucleotides with the Watson-Crick base-pairing scheme are presented. These types avoid tangling of the chains. Representative models of these types are given in two different views, to show the similarity and dissimilarity between these models and the Watson-Crick model.
Abstract:
A reduction in natural frequencies, however small, is the first and easiest indicator for estimating impending damage in a civil engineering structure. As a first level of screening for health monitoring, information on the frequency reduction of a few fundamental modes can be used to estimate the positions and magnitude of damage in a smeared fashion. The paper presents the eigenvalue sensitivity equations, derived from a first-order perturbation technique, for typical infrastructural systems treated as continuum dynamic systems: a simply supported bridge girder modelled as a beam, an end-bearing pile modelled as an axial rod, and a simply supported plate. A discrete structure, such as a building frame, is solved for damage using eigenvalue sensitivities derived from a computational model. Lastly, neural-network-based damage identification is also demonstrated for a simply supported bridge beam, where known pairs of damage and frequency vectors are used to train a neural network. The performance of these methods under the influence of measurement error is outlined. It is hoped that the developed method can be integrated in a typical infrastructure management program, so that the magnitudes of damage and their positions can be obtained from natural frequencies synthesized from excited/ambient vibration signatures.
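The sensitivity equations themselves are not reproduced in the abstract. For orientation only, the textbook first-order perturbation result for the undamped eigenproblem (K - lambda_i M) phi_i = 0, with mass-normalized modes and damage modelled as a stiffness change delta K alone, has the form (a standard relation, not the paper's specific equations):

    \delta\lambda_i \approx \phi_i^{T}\,\delta K\,\phi_i ,
    \qquad
    \frac{\delta f_i}{f_i} = \frac{\delta\omega_i}{\omega_i} \approx \frac{\delta\lambda_i}{2\lambda_i} ,

so a measured drop in a few natural frequencies f_i constrains the smeared stiffness reduction delta K through the known mode shapes phi_i.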
Abstract:
This paper describes an approach for the analysis and design of a 765 kV/400 kV EHV transmission system, a typical expansion of the Indian power grid, based on the analysis of steady-state and transient overvoltages. The approach to transmission system design is iterative in nature. The first step involves exhaustive power flow analysis, based on constraints such as right of way, power to be transmitted, power transfer capabilities of lines, and existing interconnecting transformer capabilities. Acceptable bus voltage profiles and satisfactory equipment loadings during all foreseeable normal and contingency operating conditions are the guiding criteria. Critical operating strategies are also evolved in this initial design phase. With the steady-state overvoltages obtained, comprehensive dynamic and transient studies are then carried out, including switching overvoltage studies. This paper presents steady-state and switching transient studies for two typical alternative configurations of 765 kV/400 kV systems and compares the results. Transient studies are carried out to obtain the peak overvoltages of the 765 kV transmission systems, which are compared with those of the alternative configurations of the existing 400 kV systems.
Abstract:
A user-friendly interactive computer program, CIRDIC, has been developed to calculate the molar ellipticity and molar circular dichroic absorption coefficients from a CD spectrum. In combination with a LOTUS 1-2-3 spreadsheet, it produces plots of these parameters versus wavelength. The code is implemented in Microsoft FORTRAN 77 and runs on any IBM-compatible PC under the MS-DOS environment.
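The abstract does not state the conversion formulas CIRDIC implements. The sketch below uses the commonly quoted CD relations, molar ellipticity [theta] = 100 * theta_obs / (c * l) and delta_epsilon = [theta] / 3298; the exact conventions and units in CIRDIC may differ, and the function and variable names are illustrative, not the FORTRAN code.

# Hedged sketch of the standard CD conversions (not the CIRDIC code itself).

def molar_ellipticity(theta_obs_deg, conc_mol_per_l, path_cm):
    """Molar ellipticity [theta] in deg*cm^2/dmol from the observed ellipticity
    (degrees), molar concentration (mol/L) and cell path length (cm)."""
    return 100.0 * theta_obs_deg / (conc_mol_per_l * path_cm)

def delta_epsilon(molar_ellip):
    """Molar circular dichroic absorption coefficient (L/(mol*cm)),
    using the usual factor [theta] = 3298 * delta_epsilon."""
    return molar_ellip / 3298.0

# Example: convert a small spectrum of (wavelength nm, observed ellipticity deg) pairs.
spectrum = [(200.0, 0.012), (210.0, 0.025), (220.0, 0.031)]
converted = [(wl,
              molar_ellipticity(t, 1.0e-4, 1.0),
              delta_epsilon(molar_ellipticity(t, 1.0e-4, 1.0)))
             for wl, t in spectrum]
print(converted)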
Abstract:
The instability of laminated curved composite beams made of repeated sublaminate construction is studied using the finite element method. In repeated sublaminate construction, a full laminate is obtained by repeating a basic sublaminate that has a smaller number of plies. This paper deals with the determination of the optimum lay-up for buckling by ranking such composite curved beams (which may be solid or sandwich). For this purpose, use is made of a two-noded, 16 degrees of freedom curved composite beam finite element. The displacements u, v, w of the element reference axis are expressed in terms of one-dimensional first-order Hermite interpolation polynomials, and line-member assumptions are invoked in the formulation of the elastic stiffness matrix and geometric stiffness matrix. The nonlinear expressions for the strains occurring in beams subjected to axial, flexural and torsional loads are incorporated in a general instability analysis. The computer program developed has been used, after extensive checking for correctness, to obtain the optimum orientation scheme of the plies in the sublaminate so as to achieve the maximum buckling load for typical curved solid/sandwich composite beams.
Abstract:
The method of structured programming, or program development using a top-down, stepwise refinement technique, provides a systematic approach to the development of programs of considerable complexity. The aim of this paper is to present the philosophy of structured programming through a case study of a nonnumeric programming task: converting a well-formed formula in first-order logic into prenex normal form. The program has been coded in the programming language PASCAL and implemented on a DEC-10 system. It has about 500 lines of code and comprises 11 procedures.
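As a hedged illustration of the top-down, stepwise refinement the paper advocates (the original program is roughly 500 lines of PASCAL), the Python sketch below decomposes the prenex conversion into the usual sub-procedures: eliminate implications, push negations inward, and pull quantifiers to the front. The nested-tuple formula representation and procedure names are assumptions, variables are assumed already standardized apart, and the sketch is far simpler than the paper's program.

# Sketch only: formulas are nested tuples, e.g. ('imp', ('forall', 'x', 'P(x)'), 'Q');
# atoms are plain strings, and bound variables are assumed standardized apart.

QUANTS = ('forall', 'exists')

def to_prenex(f):
    """Top-level refinement: each step is delegated to its own procedure."""
    return pull_quantifiers(push_negations(eliminate_implications(f)))

def eliminate_implications(f):
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'imp':                      # (A -> B)  becomes  (not A) or B
        return ('or', ('not', eliminate_implications(f[1])),
                eliminate_implications(f[2]))
    if op in QUANTS:
        return (op, f[1], eliminate_implications(f[2]))
    if op == 'not':
        return ('not', eliminate_implications(f[1]))
    return (op, eliminate_implications(f[1]), eliminate_implications(f[2]))

def push_negations(f):
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'not':
        g = f[1]
        if isinstance(g, str):
            return f
        if g[0] == 'not':                # double negation
            return push_negations(g[1])
        if g[0] == 'and':                # De Morgan
            return ('or', push_negations(('not', g[1])), push_negations(('not', g[2])))
        if g[0] == 'or':
            return ('and', push_negations(('not', g[1])), push_negations(('not', g[2])))
        if g[0] == 'forall':             # negation flips the quantifier
            return ('exists', g[1], push_negations(('not', g[2])))
        if g[0] == 'exists':
            return ('forall', g[1], push_negations(('not', g[2])))
    if op in QUANTS:
        return (op, f[1], push_negations(f[2]))
    return (op, push_negations(f[1]), push_negations(f[2]))

def pull_quantifiers(f):
    if isinstance(f, str):
        return f
    op = f[0]
    if op in QUANTS:
        return (op, f[1], pull_quantifiers(f[2]))
    if op == 'not':                      # after push_negations, only over atoms
        return f
    left, right = pull_quantifiers(f[1]), pull_quantifiers(f[2])
    prefix = []
    while isinstance(left, tuple) and left[0] in QUANTS:
        prefix.append((left[0], left[1])); left = left[2]
    while isinstance(right, tuple) and right[0] in QUANTS:
        prefix.append((right[0], right[1])); right = right[2]
    body = (op, left, right)
    for q, v in reversed(prefix):        # re-wrap the collected quantifier prefix
        body = (q, v, body)
    return body

# Example:  not((forall x. P(x)) -> Q)  ==>  forall x. (P(x) and not Q)
print(to_prenex(('not', ('imp', ('forall', 'x', 'P(x)'), 'Q'))))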
Abstract:
A detailed analysis of the structural and position-dependent characteristics of helices will give a better understanding of secondary structure formation in globular proteins. Here we describe an algorithm that quantifies the geometry of helices in proteins on the basis of their C-alpha atoms alone. The Fortran program HELANAL extracts the helices from PDB files and then characterises the overall geometry of each helix as linear, curved or kinked in terms of its local structural features, viz. local helical twist and rise, virtual torsion angle, local helix origins and bending angles between successive local helix axes. Even helices with a large radius of curvature are unambiguously identified as linear or curved. The program can also differentiate a kinked helix from other motifs, such as helix-loop-helix or helix-turn-helix (with a single-residue linker), with the help of the local bending angles. In addition, the program can characterise the helix start and end as well as other types of secondary structure.
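HELANAL's exact axis-fitting procedure is not given in the abstract. As a hedged sketch of the underlying geometric idea only, the code below estimates a local helix axis from every window of four consecutive C-alpha atoms (cross product of the two "bisector" vectors that point from the chain toward the axis) and reports the bending angle between successive local axes. The function names and this simple axis estimate are assumptions for illustration, not the HELANAL implementation.

import numpy as np

def local_axes(ca):
    """Estimate a unit local-helix-axis direction from each window of four
    consecutive C-alpha positions (ca: array of shape (n, 3), n >= 4)."""
    ca = np.asarray(ca, dtype=float)
    axes = []
    for i in range(len(ca) - 3):
        p1, p2, p3, p4 = ca[i:i + 4]
        v1 = p1 - 2.0 * p2 + p3          # points from p2 toward the local axis
        v2 = p2 - 2.0 * p3 + p4          # points from p3 toward the local axis
        a = np.cross(v1, v2)             # perpendicular to both -> axis direction
        axes.append(a / np.linalg.norm(a))
    return np.array(axes)

def bending_angles(axes):
    """Angle (degrees) between successive local helix axes: near zero for a
    linear helix, large and localized for a kinked one."""
    cosines = np.clip(np.sum(axes[:-1] * axes[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosines))

# Ideal alpha-helix C-alpha trace (radius ~2.3 A, rise 1.5 A, 100 deg/residue):
t = np.radians(100.0) * np.arange(12)
ca = np.stack([2.3 * np.cos(t), 2.3 * np.sin(t), 1.5 * np.arange(12)], axis=1)
print(bending_angles(local_axes(ca)))    # all close to 0 for a straight helix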
Abstract:
Worldwide research in nanoelectronics is motivated by the fact that the scaling of MOSFETs by the conventional top-down approach will not continue forever, owing to fundamental limits imposed by physics, even if it is delayed for some more years. The research community in this domain has largely become multidisciplinary, trying to discover novel transistor structures built with novel materials so that the semiconductor industry can continue to follow its projected roadmap. However, setting up and running a nanoelectronics facility for research is hugely expensive. It is therefore a common model to set up a central networked facility that can be shared by a large number of users across the research community. The Centres for Excellence in Nanoelectronics (CEN) at the Indian Institute of Science, Bangalore (IISc) and the Indian Institute of Technology, Bombay (IITB) are such central networked facilities, set up in 2005 with funding of about USD 20 million from the Department of Information Technology (DIT), Ministry of Communications and Information Technology (MCIT), Government of India. The Indian Nanoelectronics Users Program (INUP) is a mission-mode program intended not only to spread awareness and provide training in nanoelectronics but also to provide easy access to the latest facilities at CEN at IISc and IITB for the wider nanoelectronics research community in India. This program, also funded by MCIT, aims to train researchers by conducting workshops and hands-on training programs and by providing access to the CEN facilities. It is a unique program aiming to expedite nanoelectronics research in the country, as the funding required for projects proposed by researchers from around India has prior financial approval from the government and requires only technical approval by the IISc/IITB team. This paper discusses the objectives of INUP, gives brief descriptions of the CEN facilities and the training programs conducted by INUP, and lists various research activities currently under way in the program.
Abstract:
Buckling of discretely stiffened composite cylindrical panels made of repeated sublaminate construction is studied using a finite element method. In repeated sublaminate construction, a full laminate is obtained by repeating a basic sublaminate, which has a smaller number of plies. This paper deals with the determination of the optimum lay-up for buckling by ranking of such stiffened (longitudinal and hoop) composite cylindrical panels. For this purpose we use the particularized form of a four-noded, 48 degrees of freedom doubly curved quadrilateral thin shell finite element together with a fully compatible two-noded, 16 degrees of freedom composite stiffener element. The computer program developed has been used, after extensive checking for correctness, to obtain an optimum orientation scheme of the plies in the sublaminate so as to achieve maximum buckling load for a specified thickness of typical stiffened composite cylindrical panels.
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and an edge-splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X over native MATLAB execution for data-parallel benchmarks.
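MEGHA's actual clustering and mapping heuristics are not described in enough detail in the abstract to reproduce. As a hedged sketch of the general idea only (not the MEGHA algorithm), the code below greedily assigns already-identified kernels to the CPU or GPU by comparing estimated execution time plus the data-transfer cost incurred when a kernel's producers live on the other device. The cost numbers, names and dependence structure are made up for illustration.

# Hedged sketch of synergistic CPU/GPU mapping (illustrative, not MEGHA itself).
# Each kernel has estimated CPU/GPU times and a set of predecessor kernels;
# crossing the CPU<->GPU boundary on a dependence edge adds a transfer cost.

def map_kernels(kernels, deps, cpu_t, gpu_t, transfer_cost):
    """kernels: list in topological order; deps[k]: predecessors of k;
    cpu_t/gpu_t: per-kernel time estimates; returns {kernel: 'cpu' | 'gpu'}."""
    placement = {}
    for k in kernels:
        cost = {}
        for device, t in (('cpu', cpu_t[k]), ('gpu', gpu_t[k])):
            moves = sum(1 for p in deps.get(k, ()) if placement[p] != device)
            cost[device] = t + transfer_cost * moves
        placement[k] = min(cost, key=cost.get)   # greedy: cheaper total cost
    return placement

# Toy example: k2 is data parallel (much faster on GPU), k1/k3 are scalar.
kernels = ['k1', 'k2', 'k3']
deps = {'k2': ['k1'], 'k3': ['k2']}
cpu_t = {'k1': 1.0, 'k2': 50.0, 'k3': 1.0}
gpu_t = {'k1': 5.0, 'k2': 2.0, 'k3': 5.0}
print(map_kernels(kernels, deps, cpu_t, gpu_t, transfer_cost=1.5))
# {'k1': 'cpu', 'k2': 'gpu', 'k3': 'cpu'}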
Abstract:
Energy consumption has become a major constraint on providing increased functionality in devices with small form factors. Dynamic voltage and frequency scaling has been identified as an effective approach for reducing the energy consumption of embedded systems. Earlier work on dynamic voltage scaling focused mainly on performing voltage scaling when the CPU is waiting for the memory subsystem, or concentrated chiefly on loop nests and/or subroutine calls having a sufficient number of dynamic instructions. This paper concentrates on coarser program regions and, for the first time, uses program phase behavior for performing dynamic voltage scaling. Program phases are annotated at compile time with mode switch instructions. Further, we relate the dynamic voltage scaling problem to the Multiple Choice Knapsack Problem and use well-known heuristics to solve it efficiently. We also develop a simple integer linear programming (ILP) formulation of the problem. Experimental evaluation on a set of media applications reveals that our heuristic method obtains a 38% reduction in energy consumption on average with a performance degradation of 1%, and up to a 45% reduction in energy with a performance degradation of 5%. Further, the energy consumed by the heuristic solution is within 1% of the optimal solution obtained from the ILP approach.
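The abstract relates voltage scaling to the Multiple Choice Knapsack Problem without giving the formulation. As a hedged sketch (the greedy rule, names and numbers are assumptions, not the paper's heuristic or its ILP), the code below picks exactly one voltage/frequency setting per program phase so as to reduce energy while keeping the total extra delay within a performance-degradation budget.

# Hedged MCKP-style sketch: one (energy, delay) option must be chosen per phase;
# greedily take the downgrade with the best energy saving per unit of extra delay
# while the accumulated extra delay stays within the allowed budget.

def choose_settings(phases, max_extra_delay):
    """phases: {phase: [(energy, delay), ...]} with options sorted by increasing
    delay (index 0 = highest frequency). Returns {phase: chosen option index}."""
    choice = {p: 0 for p in phases}                     # start every phase at max frequency
    extra = 0.0
    while True:
        best = None
        for p, opts in phases.items():
            i = choice[p]
            if i + 1 < len(opts):
                d_e = opts[i][0] - opts[i + 1][0]       # energy saved by one downgrade
                d_t = opts[i + 1][1] - opts[i][1]       # delay added by that downgrade
                if extra + d_t <= max_extra_delay and d_e > 0:
                    ratio = d_e / d_t if d_t > 0 else float('inf')
                    if best is None or ratio > best[0]:
                        best = (ratio, p, d_t)
        if best is None:                                # no feasible downgrade remains
            return choice
        _, p, d_t = best
        choice[p] += 1
        extra += d_t

# Toy example: two phases, three voltage/frequency levels each.
phases = {'phase_a': [(10.0, 1.0), (7.0, 1.2), (6.0, 1.6)],
          'phase_b': [(4.0, 0.5), (3.5, 0.7), (3.4, 1.0)]}
print(choose_settings(phases, max_extra_delay=0.5))     # {'phase_a': 1, 'phase_b': 1}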