20 results for supporting program
Abstract:
Few design methods are available for geocell-supported embankments. Two of the earlier methods are considered in this paper and a third method is proposed and compared with them. The first is the slip line method proposed by earlier researchers; the second, proposed earlier by this author, is based on slope stability analysis; and the new method is based on finite element analysis. In the first method, plastic bearing failure of the soil is assumed and the additional resistance due to the geocell layer is calculated using a non-symmetric slip line field in the soft foundation soil. In the second method, a general-purpose slope stability program is used to design a geocell mattress of the required strength for the embankment, with a composite model representing the shear strength of the geocell layer. In the third method, proposed in this paper, the geocell reinforcement is designed based on plane strain finite element analyses of embankments. The geocell layer is modelled as an equivalent composite layer with modified strength and stiffness values, and its strength and dimensions are estimated for the required bearing capacity or permissible deformations. The three design methods are compared through a design example.
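To illustrate the composite-strength idea behind the second and third methods, the sketch below computes the apparent cohesion contributed by geocell confinement using a membrane-confinement expression commonly used in the geocell literature; both the expression and the input values are assumptions for illustration, not necessarily the equations of this paper.

    import math

    def apparent_cohesion(M, d, eps_a, phi_deg):
        # M       : secant modulus of the geocell material (kN/m) -- assumed input
        # d       : initial diameter of a single cell (m)
        # eps_a   : axial strain in the cell (decimal)
        # phi_deg : friction angle of the infill soil (degrees)
        # Additional confining stress developed by hoop tension in the cell wall
        d_sigma3 = (2.0 * M / d) * (1.0 - math.sqrt(1.0 - eps_a)) / (1.0 - eps_a)
        # Apparent cohesion of the equivalent composite layer (kPa)
        k_p = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2
        return 0.5 * d_sigma3 * math.sqrt(k_p)

    # Hypothetical inputs: 200 kN/m cell modulus, 0.2 m cells, 2% strain, 30 degree infill
    print(round(apparent_cohesion(200.0, 0.2, 0.02, 30.0), 1), "kPa")

In a design run, this apparent cohesion would be added to the strength of the infill soil to obtain the properties of the equivalent composite layer.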
Abstract:
A user-friendly, interactive computer program, CIRDIC, is developed that calculates the molar ellipticity and molar circular dichroic absorption coefficients from the CD spectrum. In combination with the LOTUS 1-2-3 spreadsheet, it gives plots of these parameters versus wavelength. The code is implemented in Microsoft FORTRAN 77 and runs on any IBM-compatible PC under the MS-DOS environment.
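For orientation, the sketch below applies the standard conversions from observed ellipticity to molar ellipticity and to the molar circular dichroic absorption coefficient; these are the textbook relations ([theta] = 100*theta/(c*l) and delta-epsilon = [theta]/3298), not necessarily the exact expressions coded in CIRDIC, and the sample values are hypothetical.

    def molar_ellipticity(theta_deg, conc_molar, path_cm):
        # Observed ellipticity (degrees), concentration (mol/L), path length (cm)
        # -> molar ellipticity in deg cm^2 dmol^-1
        return 100.0 * theta_deg / (conc_molar * path_cm)

    def molar_cd_absorption(molar_ellip):
        # Molar circular dichroic absorption coefficient, delta-epsilon (L mol^-1 cm^-1)
        return molar_ellip / 3298.0

    # Hypothetical reading: 10 mdeg at 222 nm, 20 micromolar sample, 0.1 cm cell
    me = molar_ellipticity(10e-3, 20e-6, 0.1)
    print(me, molar_cd_absorption(me))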
Abstract:
Biomimetics involves transfer from one or more biological examples to a technical system. This study addresses four questions. What are the essential steps in a biomimetic process? What is transferred? How can the transferred knowledge be structured in a way useful for biologists and engineers? Which guidelines can be given to support transfer in biomimetic design processes? In order to identify the essential steps involved in carrying out biomimetics, several procedures found in the literature were summarized, and four essential steps common across these procedures were identified. To identify mechanisms of transfer, 20 biomimetic examples were collected and modeled according to a model of causality called the SAPPhIRE model. These examples were then analyzed to identify the underlying similarity between each biological system and its analogous technical system. Based on the SAPPhIRE model, four levels of abstraction at which transfer takes place were identified. Taking similarity into account, the biomimetic examples were assigned to the appropriate levels of abstraction of transfer. Based on the essential steps and the levels of transfer, guidelines for supporting transfer in biomimetic design were proposed and evaluated using design experiments. The 20 biological and analogous technical systems analyzed were similar in the physical effects used and at the most abstract levels of description of their functionality, but were least similar at the lowest level of abstraction: the parts involved. Transfer was most often carried out at the physical-effect level of abstraction. Compared to a generic set of guidelines based on the literature, the proposed guidelines improved design performance by about 60%. Further, the SAPPhIRE model turned out to be a useful representation for modeling complex biological systems and their functionality. Databases of biological systems structured using the SAPPhIRE model have the potential to aid biomimetic concept generation.
Abstract:
The method of structured programming, or program development using a top-down, stepwise refinement technique, provides a systematic approach to the development of programs of considerable complexity. The aim of this paper is to present the philosophy of structured programming through a case study of a nonnumeric programming task: converting a well-formed formula in first-order logic into prenex normal form. The program has been coded in the programming language PASCAL and implemented on a DEC-10 system; it has about 500 lines of code and comprises 11 procedures.
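As a small illustration of the task itself (not of the paper's PASCAL program), the following sketch pulls quantifiers to the front of a formula that is assumed to be already in negation normal form, standardizing bound variables apart to avoid capture; the data types and function names are invented for this example.

    from collections import namedtuple

    # Minimal first-order formula representation (names invented for this sketch).
    Atom  = namedtuple("Atom",  "name args")      # e.g. P(x, y)
    Not   = namedtuple("Not",   "f")              # negation (wraps atoms only, i.e. NNF)
    Bin   = namedtuple("Bin",   "op left right")  # 'and' / 'or'
    Quant = namedtuple("Quant", "q var body")     # 'forall' / 'exists'

    _counter = [0]
    def _fresh(var):
        # Fresh variable name, so quantifiers can be standardized apart.
        _counter[0] += 1
        return f"{var}_{_counter[0]}"

    def _rename(f, old, new):
        # Replace free occurrences of variable `old` by `new`.
        if isinstance(f, Atom):
            return Atom(f.name, tuple(new if a == old else a for a in f.args))
        if isinstance(f, Not):
            return Not(_rename(f.f, old, new))
        if isinstance(f, Bin):
            return Bin(f.op, _rename(f.left, old, new), _rename(f.right, old, new))
        return f if f.var == old else Quant(f.q, f.var, _rename(f.body, old, new))

    def prenex(f):
        # Pull all quantifiers to the front; `f` is assumed to be in negation normal form.
        if isinstance(f, (Atom, Not)):
            return f
        if isinstance(f, Quant):
            v = _fresh(f.var)
            return Quant(f.q, v, prenex(_rename(f.body, f.var, v)))
        # Binary connective: convert both sides, then hoist their quantifier prefixes.
        left, right = prenex(f.left), prenex(f.right)
        prefix = []
        while isinstance(left, Quant):
            prefix.append((left.q, left.var)); left = left.body
        while isinstance(right, Quant):
            prefix.append((right.q, right.var)); right = right.body
        matrix = Bin(f.op, left, right)
        for q, v in reversed(prefix):
            matrix = Quant(q, v, matrix)
        return matrix

    # forall x. P(x)  and  exists x. Q(x)  ->  forall x_1. exists x_2. (P(x_1) and Q(x_2))
    print(prenex(Bin("and", Quant("forall", "x", Atom("P", ("x",))),
                            Quant("exists", "x", Atom("Q", ("x",))))))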
Abstract:
A detailed analysis of the structural and position-dependent characteristics of helices gives a better understanding of secondary structure formation in globular proteins. Here we describe an algorithm that quantifies the geometry of helices in proteins on the basis of their C-alpha atoms alone. The Fortran program HELANAL can extract the helices from PDB files and then characterises the overall geometry of each helix as linear, curved or kinked in terms of its local structural features, viz. local helical twist and rise, virtual torsion angle, local helix origins and bending angles between successive local helix axes. Even helices with a large radius of curvature are unambiguously identified as linear or curved. The program can also be used, with the help of the local bending angles, to distinguish a kinked helix from other motifs such as helix-loop-helix or helix-turn-helix (with a single-residue linker). In addition, the program can characterise helix start and end points as well as other types of secondary structure.
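To illustrate the kind of local geometry involved (not HELANAL's exact procedure), the sketch below estimates a local helix axis from each window of four consecutive C-alpha positions and reports the bending angle between successive axes; the axis construction from second differences of the C-alpha trace is a standard approximation, and the coordinates in the example are synthetic.

    import numpy as np

    def local_axes(ca):
        # ca: (N, 3) array of consecutive C-alpha coordinates along one helix.
        # Second differences of the trace point inward toward the axis, so the
        # cross product of successive second differences approximates the local
        # axis direction for each window of four residues.
        d = np.diff(ca, axis=0)
        u = np.diff(d, axis=0)
        axes = np.cross(u[:-1], u[1:])
        return axes / np.linalg.norm(axes, axis=1, keepdims=True)

    def bending_angles(axes):
        # Angle (degrees) between successive local axes; near zero for a straight helix.
        cosines = np.clip(np.sum(axes[:-1] * axes[1:], axis=1), -1.0, 1.0)
        return np.degrees(np.arccos(cosines))

    # Synthetic ideal alpha-helix: 2.3 A radius, 1.5 A rise, 100 degrees per residue.
    t = np.radians(100.0) * np.arange(12)
    ca = np.column_stack([2.3 * np.cos(t), 2.3 * np.sin(t), 1.5 * np.arange(12)])
    print(bending_angles(local_axes(ca)))   # all close to zero for this straight helix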
Abstract:
The worldwide research in nanoelectronics is motivated by the fact that the scaling of MOSFETs by the conventional top-down approach will not continue forever, owing to fundamental limits imposed by physics, even if it is delayed for some more years. The research community in this domain has largely become multidisciplinary, trying to discover novel transistor structures built with novel materials so that the semiconductor industry can continue to follow its projected roadmap. However, setting up and running a nanoelectronics facility for research is hugely expensive. Therefore, a common model is to set up a central networked facility that can be shared by a large number of users across the research community. The Centres of Excellence in Nanoelectronics (CEN) at the Indian Institute of Science, Bangalore (IISc) and the Indian Institute of Technology Bombay (IITB) are such central networked facilities, set up in 2005 with funding of about USD 20 million from the Department of Information Technology (DIT), Ministry of Communications and Information Technology (MCIT), Government of India. The Indian Nanoelectronics Users Program (INUP) is a missionary program intended not only to spread awareness and provide training in nanoelectronics but also to provide easy access to the latest facilities at CEN in IISc and IITB for the wider nanoelectronics research community in India. This program, also funded by MCIT, aims to train researchers by conducting workshops and hands-on training programs and by providing access to the CEN facilities. It is a unique program aimed at expediting nanoelectronics research in the country, as funding for projects proposed by researchers from around India has prior financial approval from the government and requires only technical approval by the IISc/IITB team. This paper discusses the objectives of INUP, gives brief descriptions of the CEN facilities and the training programs conducted by INUP, and lists the various research activities currently under way in the program.
Abstract:
Energy consumption has become a major constraint in providing increased functionality for devices with small form factors. Dynamic voltage and frequency scaling has been identified as an effective approach for reducing the energy consumption of embedded systems. Earlier work on dynamic voltage scaling focused mainly on performing voltage scaling when the CPU is waiting for the memory subsystem, or concentrated chiefly on loop nests and/or subroutine calls with a sufficient number of dynamic instructions. This paper concentrates on coarser program regions and, for the first time, uses program phase behavior for performing dynamic voltage scaling. Program phases are annotated at compile time with mode switch instructions. Further, we relate the dynamic voltage scaling problem to the Multiple Choice Knapsack Problem and use well-known heuristics to solve it efficiently. We also develop a simple integer linear programming (ILP) formulation for this problem. Experimental evaluation on a set of media applications reveals that our heuristic method obtains a 38% reduction in energy consumption on average with a performance degradation of 1%, and up to a 45% reduction in energy with a performance degradation of 5%. Further, the energy consumed by the heuristic solution is within 1% of the optimal solution obtained from the ILP approach.
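A generic multiple-choice-knapsack style formulation of this problem can be written as follows; the notation is introduced here only for illustration, and the paper's actual formulation may handle constraints such as voltage-switching overheads differently.

\[
\min \sum_{i=1}^{P}\sum_{j=1}^{V} e_{ij}\, x_{ij}
\quad\text{subject to}\quad
\sum_{j=1}^{V} x_{ij} = 1 \;\;\forall i,
\qquad
\sum_{i=1}^{P}\sum_{j=1}^{V} t_{ij}\, x_{ij} \le (1+\delta)\, T_{\mathrm{base}},
\qquad
x_{ij}\in\{0,1\},
\]

where running phase i at voltage/frequency level j consumes energy e_ij and takes time t_ij, T_base is the execution time with every phase at the highest level, and delta is the permitted performance degradation (e.g. 1% or 5%). Requiring exactly one level per phase is what makes this a multiple-choice knapsack.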
Abstract:
The program SuSeFLAV is introduced for computing supersymmetric mass spectra with flavour violation in various supersymmetry breaking scenarios, with or without the seesaw mechanism. A short user guide summarizing the compilation, the executables and the input files is provided.
Abstract:
Knowledge of a program's worst-case execution time (WCET) is essential in validating real-time systems and helps in effective scheduling. One popular approach used in industry is to measure the execution times of program components on the target architecture and combine them using static analysis of the program. Measurements need to be taken in the least intrusive way in order to avoid affecting the accuracy of the estimated WCET. Several programs exhibit phase behavior, wherein a program's dynamic execution is composed of phases; each phase is distinct from the others and exhibits homogeneous behavior with respect to cycles per instruction (CPI), data cache misses, etc. In this paper, we show that phase behavior has important implications for timing analysis. We make use of the homogeneity of a phase to reduce instrumentation overhead while ensuring that the accuracy of the WCET estimate is not greatly affected. We propose a model for estimating WCET from static worst-case instruction counts of the individual phases and a function of the measured average CPI. We describe a WCET analyzer built on this model that targets two different architectures. The WCET analyzer is observed to give safe estimates for most benchmarks considered in this paper. The tightness of the WCET estimates is observed to improve for most benchmarks compared to Chronos, a well-known static WCET analyzer.
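The flavor of such a phase-based estimate can be written as follows; the symbols are introduced here only for illustration, and the paper's model may combine the measured CPI values differently.

\[
\widehat{\mathrm{WCET}} \;=\; \frac{1}{f}\sum_{k=1}^{K} N_k^{\max}\, g\!\left(\overline{\mathrm{CPI}}_k\right),
\]

where K is the number of phases, N_k^max is the statically derived worst-case instruction count of phase k, the barred CPI_k is the average cycles per instruction measured for that phase on the target, g is a function that inflates the measured average toward a worst-case value, and f is the clock frequency. Homogeneity within a phase is what justifies sampling its CPI sparsely instead of instrumenting every program component.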