937 results for Tool command language
Abstract:
We describe a compiler for the Flat Concurrent Prolog language on a message-passing multiprocessor architecture. This compiler permits symbolic and declarative programming in the syntax of Guarded Horn Rules. The implementation has been verified and tested on the 64-node PARAM parallel computer developed by C-DAC (Centre for the Development of Advanced Computing, India). Flat Concurrent Prolog (FCP) is a logic programming language designed for concurrent programming and parallel execution. It is a process-oriented language, which embodies dataflow synchronization and guarded commands as its basic control mechanisms. An identical algorithm is executed on every processor in the network. We assume regular network topologies such as mesh, ring, etc. Each node has a local memory. The algorithm comprises two important parts: reduction and communication. The most difficult task is to integrate the solutions of problems that arise in the implementation in a coherent and efficient manner. We have tested the efficacy of the compiler on various benchmark problems of the ICOT project that have been reported in the recent book by Evan Tick. These problems include Quicksort, 8-queens, and Prime Number Generation. The results of the preliminary tests are favourable. We are currently examining issues such as indexing and load balancing to further optimize our compiler.
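As a rough illustration of the two-phase per-node structure described above (local reduction alternating with communication on a regular topology), the following Python sketch simulates a ring of identical nodes that split toy goals and pass surplus work to a neighbour. It is only a stand-in for the idea: the goal splitting, the balancing rule and all constants are invented and do not reflect the actual FCP clause selection, unification or suspension machinery.

```python
from collections import deque

# Toy simulation of the two-phase node loop: every node runs the same code,
# alternating a local goal-reduction phase with a communication phase that
# ships surplus goals to the next node on the ring.
N_NODES = 4

def reduce_phase(queue, max_reductions=8):
    """Reduce up to max_reductions goals; splitting a goal models spawning subprocesses."""
    solved = 0
    for _ in range(min(max_reductions, len(queue))):
        goal = queue.popleft()
        if goal <= 1:
            solved += 1                      # goal reduced to an empty body
        else:
            queue.append(goal // 2)          # spawn two subgoals
            queue.append(goal - goal // 2)
    return solved

def communicate_phase(queues):
    """Each node sends half of its surplus goals to the next node on the ring."""
    avg = sum(len(q) for q in queues) / len(queues)
    for i, q in enumerate(queues):
        surplus = int(len(q) - avg)
        for _ in range(max(0, surplus // 2)):
            queues[(i + 1) % N_NODES].append(q.pop())

queues = [deque() for _ in range(N_NODES)]
queues[0].append(1000)                       # the initial query enters at node 0
solved = [0] * N_NODES
while any(queues):
    for i in range(N_NODES):
        solved[i] += reduce_phase(queues[i])
    communicate_phase(queues)
print("reductions per node:", solved, "total:", sum(solved))
```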
Abstract:
With the immense growth in the number of available protein structures, fast and accurate structure comparison has become essential. We propose an efficient method for structure comparison, based on a structural alphabet. Protein Blocks (PBs) is a widely used structural alphabet with 16 pentapeptide conformations that can fairly approximate a complete protein chain. Thus, a 3D structure can be translated into a 1D sequence of PBs. With a simple Needleman-Wunsch approach and a raw PB substitution matrix, PB-based structural alignments were better than many popular methods. The iPBA web server presents an improved alignment approach using (i) specialized PB substitution matrices (SM) and (ii) an anchor-based alignment methodology. With these developments, the quality of approximately 88% of alignments was improved. iPBA alignments were also better than DALI, MUSTANG and GANGSTA+ in > 80% of the cases. The web server is designed for both pairwise comparisons and database searches. Outputs are given as sequence alignments and superposed 3D structures displayed using PyMol and Jmol. A local alignment option for detecting sub-structural similarity is also embedded. As a fast and efficient 'sequence-based' structure comparison tool, we believe that it will be quite useful to the scientific community. iPBA can be accessed at http://www.dsimb.inserm.fr/dsimb_tools/ipba/.
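A minimal Python sketch of the basic approach the abstract describes: once structures are encoded as 1D Protein Block strings, they can be compared with a Needleman-Wunsch global alignment. The match/mismatch scores and gap penalty below are placeholders, not the specialised iPBA PB substitution matrices, and the PB strings are hypothetical.

```python
def align_pb(seq1, seq2, gap=-2, match=2, mismatch=-1):
    """Needleman-Wunsch global alignment of two Protein Block strings (letters a-p)."""
    n, m = len(seq1), len(seq2)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1): score[i][0] = i * gap
    for j in range(1, m + 1): score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if seq1[i - 1] == seq2[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Traceback to recover the aligned strings.
    a1, a2, i, j = "", "", n, m
    while i > 0 or j > 0:
        diag = (match if i > 0 and j > 0 and seq1[i - 1] == seq2[j - 1] else mismatch)
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + diag:
            a1, a2, i, j = seq1[i - 1] + a1, seq2[j - 1] + a2, i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            a1, a2, i = seq1[i - 1] + a1, "-" + a2, i - 1
        else:
            a1, a2, j = "-" + a1, seq2[j - 1] + a2, j - 1
    return score[n][m], a1, a2

# Two short hypothetical PB strings.
print(align_pb("mmmnopacdddfklm", "mmmnopacdfklm"))
```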
Abstract:
The decision-making process for machine-tool selection and operation allocation in a flexible manufacturing system (FMS) usually involves multiple conflicting objectives. Thus, a fuzzy goal-programming model can be effectively applied to this decision problem. This paper addresses the application of a fuzzy goal-programming concept to model the problem of machine-tool selection and operation allocation, with explicit consideration given to the objectives of minimizing the total cost of machining operations, material handling and set-up. Constraints pertaining to machine capacity, tool magazine capacity and tool life are included in the model. A genetic algorithm (GA)-based approach is adopted to optimize this fuzzy goal-programming model. An illustrative example is provided and some results of computational experiments are reported.
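The following Python sketch illustrates the general shape of such a GA-optimised fuzzy goal-programming model, under assumed data: linear fuzzy memberships for each cost goal, max-min aggregation of goal achievement, a capacity penalty, and a simple genetic algorithm over operation-to-machine assignments. All problem data, goal targets and GA settings are illustrative and not taken from the paper.

```python
import random

# Hypothetical data: cost[j][m], handling[j][m], setup[j][m], time_[j][m] for
# operation j on machine m, plus per-machine capacities.  Values are illustrative.
random.seed(1)
N_OPS, N_MACHINES = 8, 3
cost     = [[random.randint(10, 40) for _ in range(N_MACHINES)] for _ in range(N_OPS)]
handling = [[random.randint(1, 10)  for _ in range(N_MACHINES)] for _ in range(N_OPS)]
setup    = [[random.randint(2, 8)   for _ in range(N_MACHINES)] for _ in range(N_OPS)]
time_    = [[random.randint(5, 15)  for _ in range(N_MACHINES)] for _ in range(N_OPS)]
capacity = [60, 60, 60]

# Fuzzy goals: (target, upper tolerance limit) per cost component -- assumed values.
GOALS = {"machining": (120, 220), "handling": (25, 60), "setup": (25, 60)}

def membership(value, target, upper):
    """Linear fuzzy membership: 1 at/below the target, 0 at/above the tolerance limit."""
    if value <= target: return 1.0
    if value >= upper:  return 0.0
    return (upper - value) / (upper - target)

def fitness(assign):
    """Max-min aggregation of goal memberships, with a penalty for capacity violations."""
    totals = {"machining": 0, "handling": 0, "setup": 0}
    load = [0] * N_MACHINES
    for j, m in enumerate(assign):
        totals["machining"] += cost[j][m]
        totals["handling"]  += handling[j][m]
        totals["setup"]     += setup[j][m]
        load[m]             += time_[j][m]
    mu = min(membership(totals[g], *GOALS[g]) for g in GOALS)
    over = sum(max(0, load[m] - capacity[m]) for m in range(N_MACHINES))
    return mu - 0.05 * over

def ga(pop_size=40, generations=200, p_mut=0.1):
    pop = [[random.randrange(N_MACHINES) for _ in range(N_OPS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_OPS)               # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                    # mutation: reassign one operation
                child[random.randrange(N_OPS)] = random.randrange(N_MACHINES)
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

best, mu = ga()
print("best assignment (operation -> machine):", best, "achievement:", round(mu, 3))
```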
Abstract:
As research becomes more and more interdisciplinary, literature searches are often carried out on more than one CD-ROM database. This results in retrieving duplicate records, since the same literature is covered (indexed) in more than one database, and the retrieval software does not identify such duplicate records. Three different programs have been written to accomplish the task of identifying the duplicate records. These programs are executed from a shell script to minimize manual intervention. The fields used (extracted) to identify the duplicate records include the article title, year, volume number, issue number and pagination. The shell script, when executed, prompts for an input file that may contain duplicate records. The programs identify the duplicate records and write them to a new file.
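A hedged Python sketch of the duplicate-identification step: it builds a match key from the fields the abstract lists (title, year, volume, issue, pagination) and writes records sharing a key to a separate file. The record layout and field tags assumed here (blank-line-separated records, tags such as TI/PY/VL/IS/PG) are hypothetical; the actual programs and shell script are not described in that detail.

```python
import re, sys
from collections import defaultdict

def key(record):
    """Match key from title, year, volume, issue and pages.
    The field tags (TI, PY, VL, IS, PG) are hypothetical; adapt to the export format used."""
    def field(tag):
        m = re.search(rf"^{tag}\s*[-:]\s*(.+)$", record, re.MULTILINE)
        return m.group(1).strip().lower() if m else ""
    title = re.sub(r"[^a-z0-9 ]", "", field("TI"))      # normalise case and punctuation
    return (title, field("PY"), field("VL"), field("IS"), field("PG"))

def find_duplicates(path):
    # Records assumed to be separated by blank lines in the merged download file.
    records = [r for r in open(path, encoding="utf-8").read().split("\n\n") if r.strip()]
    seen, dups = {}, defaultdict(list)
    for r in records:
        k = key(r)
        if k in seen:
            dups[k].append(r)
        else:
            seen[k] = r
    return dups

if __name__ == "__main__":
    duplicates = find_duplicates(sys.argv[1])
    with open("duplicates.txt", "w", encoding="utf-8") as out:
        for group in duplicates.values():
            out.write("\n\n".join(group) + "\n\n")
    print(f"{sum(len(g) for g in duplicates.values())} duplicate records written to duplicates.txt")
```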
Abstract:
Inspired by the demonstration that tool-use variants among wild chimpanzees and orangutans qualify as traditions (or cultures), we developed a formal model to predict the incidence of these acquired specializations among wild primates and to examine the evolution of their underlying abilities. We assumed that the acquisition of the skill by an individual in a social unit is crucially controlled by three main factors, namely probability of innovation, probability of socially biased learning, and the prevailing social conditions (sociability, or number of potential experts at close proximity). The model reconfirms the restriction of customary tool use in wild primates to the most intelligent radiation, great apes; the greater incidence of tool use in more sociable populations of orangutans and chimpanzees; and tendencies toward tool manufacture among the most sociable monkeys. However, it also indicates that sociable gregariousness is far more likely to produce the maintenance of invented skills in a population than solitary life, where the mother is the only accessible expert. We therefore used the model to explore the evolution of the three key parameters. The most likely evolutionary scenario is that where complex skills contribute to fitness, sociability and/or the capacity for socially biased learning increase, whereas innovative abilities (i.e., intelligence) follow indirectly. We suggest that the evolution of high intelligence will often be a byproduct of selection on abilities for socially biased learning that are needed to acquire important skills, and hence that high intelligence should be most common in sociable rather than solitary organisms. Evidence for increased sociability during hominin evolution is consistent with this new hypothesis. (C) 2003 Elsevier Science Ltd. All rights reserved.
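A toy Monte Carlo sketch (not the authors' analytical model) of how the three factors interact: individuals acquire a skill either by innovation or by socially biased learning from the experts they can observe at close range, and skill prevalence is compared between a solitary setting (one accessible expert, the mother) and a sociable group. All parameter values and the turnover rule are invented for illustration.

```python
import random

def prevalence(p_innovate, p_social, n_experts_nearby, group_size=50, steps=200, seed=0):
    """Fraction of individuals holding the skill after `steps` time steps."""
    rng = random.Random(seed)
    skilled = [False] * group_size
    for _ in range(steps):
        experts = sum(skilled)
        for i in range(group_size):
            if skilled[i]:
                continue
            observable = min(experts, n_experts_nearby)
            p_learn = 1 - (1 - p_social) ** observable      # at least one successful observation
            if rng.random() < p_innovate + (1 - p_innovate) * p_learn:
                skilled[i] = True
        # turnover: skilled individuals die and are replaced by naive ones
        for i in range(group_size):
            if rng.random() < 0.05:
                skilled[i] = False
    return sum(skilled) / group_size

# Solitary setting (mother is the only accessible expert) vs. a sociable group.
print("solitary:", prevalence(p_innovate=0.001, p_social=0.05, n_experts_nearby=1))
print("sociable:", prevalence(p_innovate=0.001, p_social=0.05, n_experts_nearby=8))
```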
Abstract:
Low-pressure MOCVD, with tris(2,4-pentanedionato)aluminum(III) as the precursor, was used in the present investigation to coat alumina onto cemented carbide cutting tools. To evaluate the MOCVD process, the efficiency in cutting operations of MOCVD-coated tools was compared with that of tools coated using the industry-standard CVD process. Three multilayer cemented carbide cutting tool inserts, viz., TiN/TiC/WC, CVD-coated Al2O3 on TiN/TiC/WC, and MOCVD-coated Al2O3 on TiN/TiC/WC, were compared in the dry turning of mild steel. Turning tests were conducted for cutting speeds ranging from 14 to 47 m/min and depths of cut from 0.25 to 1 mm, at a constant feed rate of 0.2 mm/min. The axial, tangential, and radial forces were measured using a lathe tool dynamometer for the different cutting parameters, and the machined workpieces were tested for surface roughness. The results indicate that, in most of the cases examined, the MOCVD-coated inserts produced a smoother surface finish while requiring lower cutting forces, indicating that MOCVD produces the best-performing insert, followed by the CVD-coated one. The superior performance of MOCVD-alumina is attributed to the co-deposition of carbon with the oxide, due to the very nature of the precursor used, leading to enhanced mechanical properties for cutting applications in harsh environments.
Abstract:
In this paper, we outline an approach to the task of designing network codes in a non-multicast setting. Our approach makes use of the concept of interference alignment. As an example, we consider the distributed storage problem in which the data is stored across n nodes in the network, a data collector can recover the data by connecting to any k of the n nodes, and, upon failure of a node, a new node can replicate the data stored in the failed node while minimizing the repair bandwidth.
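A toy Python example of the (n, k) recovery property in this distributed storage setting: with n = 3 nodes and k = 2, a single XOR parity block lets a data collector rebuild the data from any two nodes and lets a replacement node regenerate a failed one. This is only a single-parity illustration of the recovery property, not the interference-alignment code constructions the paper develops.

```python
# Toy (n, k) = (3, 2) storage code: two data blocks plus one XOR parity block.

def encode(a: bytes, b: bytes):
    assert len(a) == len(b)
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {1: a, 2: b, 3: parity}          # node id -> stored block

def recover(nodes: dict):
    """Rebuild (a, b) from any two surviving nodes."""
    if 1 in nodes and 2 in nodes:
        return nodes[1], nodes[2]
    if 1 in nodes and 3 in nodes:
        return nodes[1], bytes(x ^ y for x, y in zip(nodes[1], nodes[3]))
    if 2 in nodes and 3 in nodes:
        return bytes(x ^ y for x, y in zip(nodes[2], nodes[3])), nodes[2]
    raise ValueError("need at least k = 2 nodes")

stored = encode(b"HELLO", b"WORLD")
del stored[2]                               # node 2 fails
a, b = recover(stored)                      # the data collector still recovers everything
stored[2] = b                               # a new node regenerates the lost block
print(a, b)
```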
Abstract:
We present an improved language modeling technique for a Lempel-Ziv-Welch (LZW) based language identification (LID) scheme. The previous approach to LID prepares the language pattern table using the LZW algorithm. Because of the sequential nature of the LZW algorithm, several language-specific patterns were missing from the pattern table. To overcome this, we build a universal pattern table, which contains all patterns of different lengths. For each language, its corresponding language-specific pattern table is constructed by retaining the patterns of the universal table whose frequency of appearance in the training data is above a threshold. This approach reduces the classification score (compression ratio [LZW-CR] or weighted discriminant score [LZW-WDS]) for non-native languages and increases the LID performance considerably.
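A minimal Python sketch of the universal-pattern-table idea, under assumed settings: enumerate all contiguous token patterns up to a chosen maximum length, then keep, per language, only those whose training frequency clears a threshold. The token stream, maximum pattern length and threshold are illustrative.

```python
from collections import Counter

def universal_patterns(tokens, max_len=4):
    """All contiguous token patterns up to max_len (the 'universal pattern table')."""
    return [tuple(tokens[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(tokens) - n + 1)]

def language_table(training_tokens, max_len=4, min_count=3):
    """Language-specific table: universal patterns whose training frequency clears a threshold."""
    counts = Counter(universal_patterns(training_tokens, max_len))
    return {p: c for p, c in counts.items() if c >= min_count}

# Tokens stand for whatever front-end symbols are used (e.g. phone labels);
# the sequence, max_len and min_count are illustrative choices.
tokens = list("abcabdabcabcacd")
table = language_table(tokens)
print(sorted(table.items(), key=lambda kv: -kv[1])[:5])
```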
INTACTE: An Interconnect Area, Delay, and Energy Estimation Tool for Microarchitectural Explorations
Abstract:
Prior work on modeling interconnects has focused on optimizing the wire and repeater design for trading off energy and delay, and is largely based on low-level circuit parameters. Hence these models are hard to use directly to make high-level microarchitectural trade-offs in the initial exploration phase of a design. In this paper, we propose INTACTE, a tool that can be used by architects to get reasonably accurate interconnect area, delay, and power estimates based on a few architecture-level parameters for the interconnect, such as length, width (in number of bits), frequency, and latency, for a specified technology and voltage. The tool uses well-known models of interconnect delay and energy, taking into account the wire pitch, repeater size, and spacing for a range of voltages and technologies. It then solves an optimization problem of finding the lowest-energy interconnect design, in terms of the low-level circuit parameters, that meets the architectural constraints given as inputs. In addition, the tool also provides the area, energy, and delay for a range of supply voltages and degrees of pipelining, which can be used for microarchitectural exploration of a chip. The delay and energy models used by the tool have been validated against low-level circuit simulations. We discuss several potential applications of the tool and present an example of optimizing interconnect design in the context of clustered VLIW architectures. Copyright 2007 ACM.
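The following Python sketch shows the kind of constrained search such a tool performs: for a given wire length, bus width and target delay, sweep repeater count and size, keep only designs meeting the delay constraint, and return the lowest-energy one. The Elmore-style delay and switching-energy formulas and all technology constants below are generic textbook approximations, not INTACTE's calibrated models.

```python
R0, C0 = 10e3, 0.1e-15      # resistance / input capacitance of a minimum inverter (assumed)
RW, CW = 300.0, 0.2e-12     # wire resistance / capacitance per mm (assumed)
VDD = 1.0

def segment_delay(size, seg_len_mm):
    """0.69 * Elmore delay of one repeated segment driven by a repeater of the given size."""
    r_drv, c_in = R0 / size, C0 * size
    rw, cw = RW * seg_len_mm, CW * seg_len_mm
    return 0.69 * (r_drv * (cw + c_in) + rw * (cw / 2 + c_in))

def wire_design(length_mm, width_bits, delay_target_s):
    """Brute-force search for the lowest-energy design that meets the delay target."""
    best = None
    for n_rep in range(1, 40):                 # number of repeaters per bit line
        for size in range(1, 200, 2):          # repeater size in multiples of a min inverter
            delay = n_rep * segment_delay(size, length_mm / n_rep)
            if delay > delay_target_s:
                continue
            c_total = CW * length_mm + n_rep * C0 * size
            energy = width_bits * c_total * VDD ** 2     # switching energy per transition
            if best is None or energy < best[0]:
                best = (energy, n_rep, size, delay)
    return best

print(wire_design(length_mm=5.0, width_bits=64, delay_target_s=0.5e-9))
```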
Abstract:
This paper proposes a new, straightforward technique based on dynamic inversion, which is applied for tracking pilot commands in high-performance aircraft. The pilot commands assumed in longitudinal mode are normal acceleration and total velocity (while roll angle and lateral acceleration are maintained at zero). In lateral mode, roll rate and total velocity are used as pilot commands (while climb rate and lateral acceleration are maintained at zero). Ensuring zero lateral acceleration leads to better turn coordination. A six-degree-of-freedom model of the F-16 aircraft is used for both control design and simulation studies. Promising results are obtained, which are found to be superior to an existing approach (also based on dynamic inversion). The new approach has two potential benefits, namely reduced oscillatory response and reduced control magnitude. Another advantage of this approach is that it leads to a significant reduction in the number of tuning parameters in the control design process.
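A toy scalar illustration of the dynamic-inversion principle (not the paper's F-16 design): the controller cancels the modelled plant dynamics and imposes first-order command-tracking error dynamics. The plant functions, gains and command below are assumptions made for the sketch.

```python
import math

# Scalar nonlinear plant:  x_dot = f(x) + g(x) * u.
# Dynamic inversion imposes x_dot_desired = k * (x_cmd - x), giving
#   u = (x_dot_desired - f(x)) / g(x).
f = lambda x: -0.5 * x - 0.2 * x * abs(x)      # assumed nonlinear plant dynamics
g = lambda x: 1.0 + 0.1 * math.cos(x)          # assumed (invertible) control effectiveness

def simulate(x_cmd=2.0, k=3.0, dt=0.001, t_end=3.0):
    x, t = 0.0, 0.0
    while t < t_end:
        v = k * (x_cmd - x)                    # desired closed-loop dynamics
        u = (v - f(x)) / g(x)                  # dynamic inversion control law
        x += (f(x) + g(x) * u) * dt            # Euler integration of the plant
        t += dt
    return x

print("state after 3 s with command 2.0:", round(simulate(), 4))
```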
Abstract:
We present a new approach to spoken language modeling for language identification (LID) using the Lempel-Ziv-Welch (LZW) algorithm. The LZW technique is applicable to any kind of tokenization of the speech signal. Because of the efficiency of the LZW algorithm in obtaining variable-length symbol strings from the training data, the LZW codebook captures the essentials of a language effectively. We develop two new deterministic measures for LID based on the LZW algorithm, namely (i) the compression ratio score (LZW-CR) and (ii) the weighted discriminant score (LZW-WDS). To assess these measures, we consider error-free tokenization of speech as well as artificially induced noise in the tokenization. It is shown that for a six-language LID task on the OGI-TS database with clean tokenization, the new model (LZW-WDS) performs slightly better than the conventional bigram model. For noisy tokenization, which is the more realistic case, LZW-WDS significantly outperforms the bigram technique.
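A minimal Python sketch of the LZW-CR style decision, with invented token streams standing in for tokenised speech: each language's pattern table is the LZW dictionary built from its training data, and a test utterance is assigned to the language whose table compresses it into the fewest codes. The weighting used in LZW-WDS is not modelled here.

```python
def lzw_pass(tokens, table):
    """Run one LZW pass over `tokens`, extending `table` in place; return the number
    of codes emitted (fewer codes = the table already matches the data well)."""
    table |= {(t,) for t in tokens}
    count, current = 0, ()
    for t in tokens:
        candidate = current + (t,)
        if candidate in table:
            current = candidate
        else:
            table.add(candidate)
            count += 1
            current = (t,)
    return count + (1 if current else 0)

def train(tokens):
    """Language pattern table: the LZW dictionary built from training token strings."""
    table = set()
    lzw_pass(tokens, table)
    return table

def identify(test_tokens, language_tables):
    """LZW-CR style decision: choose the language whose table compresses the test
    utterance into the fewest codes (i.e. gives the best compression ratio)."""
    return min(language_tables,
               key=lambda lang: lzw_pass(test_tokens, set(language_tables[lang])))

# Illustrative token streams standing in for tokenised speech (e.g. phone labels).
tables = {"L1": train(list("abababababcababab")), "L2": train(list("cdcdcdccddcdcdcd"))}
print(identify(list("abababab"), tables))   # expected: "L1"
```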