60 results for IT capabilities
Abstract:
The coefficient of thermochromism of polyaniline solutions has been found to be solvent dependent, and the solvent effect is not negligible. Hence, the thermochromism of polyaniline solutions cannot be explained solely on the basis of conformational change induced by a change in temperature. Further, a comparison of the solvatochromism of polyaniline and polytoluidine shows a higher solvatochromic shift for the former. This implies that the higher energy associated with the exciton peak of polytoluidine is not due to a larger ring torsional angle induced by the greater steric repulsion of the methyl group, as widely accepted, but to its smaller solvatochromic red-shift compared with polyaniline.
Abstract:
The evolution of altruism is the central problem of the evolution of eusociality. The evolution of altruism is most likely to be understood by studying species that show altruism in spite of being capable of "selfish" individual reproduction. But the definition of eusociality groups together primitively eusocial species, where workers retain the ability to reproduce on their own, and highly eusocial species, where workers have lost reproductive options. At the same time, it separates the primitively eusocial species from semisocial species (species that lack life-time sterility) and from cooperatively breeding birds and mammals, in most of which altruism and the associated social life are facultative. The definition of eusociality is also such that it is sometimes difficult to decide what is eusocial and what is not. I therefore suggest that (1) we expand the scope of eusociality to include semisocial species, primitively eusocial species, highly eusocial species, as well as those cooperatively breeding birds and mammals where individuals give up substantial or all personal reproduction to aid conspecifics, (2) there should be no requirement of overlap of generations or of life-time sterility, and (3) the distinction between primitively and highly eusocial should continue, based on the presence or absence of morphological caste differentiation.
Abstract:
This article describes the first comprehensive study on the use of a vinyl polyperoxide, namely poly(styrene peroxide) (PSP), an equimolar alternating copolymer of oxygen and styrene, as a photoinitiator for the free radical polymerization of vinyl monomers such as styrene. The molecular weight, yield, structure, and thermal stability of the polystyrene (PS) thus obtained are compared with those of PS made using a simple peroxide such as di-t-butyl peroxide. Interestingly, the PS prepared using PSP contained PSP segments attached to its backbone, preferentially at the chain ends. This PSP-PS-PSP was further used as a thermal macroinitiator for the preparation of another block copolymer, PS-b-PMMA, by reacting PSP-PS-PSP with methyl methacrylate (MMA). The mechanism of block copolymerization is discussed. (C) 1996 John Wiley & Sons, Inc.
Abstract:
This paper reports new results concerning the capabilities of a family of service disciplines aimed at providing per-connection end-to-end delay (and throughput) guarantees in high-speed networks. This family consists of the class of rate-controlled service disciplines, in which traffic from a connection is reshaped to conform to specific traffic characteristics at every hop on its path. When used together with a scheduling policy at each node, this reshaping enables the network to provide end-to-end delay guarantees to individual connections. The main advantages of this family of service disciplines are their implementation simplicity and flexibility. On the other hand, because the delay guarantees provided are based on summing worst-case delays at each node, it has also been argued that the resulting bounds are very conservative, which may more than offset the benefits. In particular, other service disciplines, such as those based on Fair Queueing or Generalized Processor Sharing (GPS), have been shown to provide much tighter delay bounds. As a result, these disciplines, although more complex from an implementation point of view, have been considered for the purpose of providing end-to-end guarantees in high-speed networks. In this paper, we show that through "proper" selection of the reshaping to which the traffic of a connection is subjected, the penalty incurred by computing end-to-end delay bounds based on worst cases at each node can be alleviated. Specifically, we show how rate-controlled service disciplines can be designed to outperform the Rate Proportional Processor Sharing (RPPS) service discipline. Based on these findings, we believe that rate-controlled service disciplines provide a very powerful and practical solution to the problem of providing end-to-end guarantees in high-speed networks.
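As a rough illustration of the reshaping step this abstract describes, the sketch below implements a token-bucket regulator that holds each packet until the connection's traffic conforms to a (rate, burst) envelope before it reaches the per-node scheduler. It is a minimal sketch under assumed parameters, not the paper's exact discipline; the class and variable names are illustrative.

```python
# Minimal token-bucket reshaper sketch (illustrative only; not the paper's
# exact discipline). Holding each packet until the bucket has enough tokens
# forces the connection's traffic to conform to a (rate, burst) envelope
# before it reaches the per-node scheduler.
class TokenBucketReshaper:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens (e.g. bytes) replenished per second
        self.burst = burst      # bucket depth: largest conforming burst
        self.tokens = burst     # start with a full bucket
        self.last_time = 0.0    # time of the most recent bucket update

    def earliest_departure(self, arrival_time, size):
        """Return the time at which a packet of `size` may be released."""
        # Packets of one connection leave in FIFO order, so the bucket clock
        # never runs backwards even if a packet arrives while another waits.
        now = max(arrival_time, self.last_time)
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= size:
            self.tokens -= size
            return now                          # conforming: release immediately
        wait = (size - self.tokens) / self.rate
        self.tokens = 0.0
        self.last_time = now + wait
        return now + wait                       # delayed until it conforms


if __name__ == "__main__":
    shaper = TokenBucketReshaper(rate=1000.0, burst=1500.0)  # 1000 B/s, 1500 B burst
    for t, size in [(0.0, 1000), (0.1, 1000), (0.2, 1000)]:
        print(f"arrival {t:.1f}s -> departure {shaper.earliest_departure(t, size):.2f}s")
```

Bounding the backlog at the regulator in this way is what lets each node's scheduler translate the traffic envelope into a per-hop worst-case delay, which the paper then sums along the path.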
Abstract:
An account is given of the research that has been carried out on mechanical alloying/milling (MA/MM) during the past 25 years. Mechanical alloying, a high-energy ball milling process, has established itself as a viable solid state processing route for the synthesis of a variety of equilibrium and non-equilibrium phases and phase mixtures. The process was initially invented for the production of oxide dispersion strengthened (ODS) Ni-base superalloys and was later extended to other ODS alloys. The success of MA in producing ODS alloys with better high-temperature capabilities than other processing routes is highlighted. Mechanical alloying has also been successfully used for extending terminal solid solubilities in many commercially important metallic systems. Many high-melting intermetallics that are difficult to prepare by conventional processing techniques can be easily synthesised with homogeneous structure and composition by MA. Over the years, it has also proved itself superior to rapid solidification processing as a non-equilibrium processing tool. The considerable literature on the synthesis of amorphous, quasicrystalline, and nanocrystalline materials by MA is critically reviewed. The possibility of achieving solid solubility in liquid-immiscible systems has made MA a unique process. Reactive milling has opened new avenues for solid state metallothermic reduction and for the synthesis of nanocrystalline intermetallics and intermetallic matrix composites. Despite numerous efforts, understanding of MA, a process far from equilibrium, remains far from complete, leaving large scope for further research in this exciting field.
Abstract:
Biomedical engineering solutions such as surgical simulators need High Performance Computing (HPC) to achieve real-time performance. Graphics Processing Units (GPUs) offer HPC capabilities at low cost and low power consumption. In this work, it is demonstrated that a liver discretized with about 2500 finite element nodes can be graphically simulated in real time by making use of a GPU. The present work takes into consideration the time needed for data transfer from the CPU to the GPU and back from the GPU to the CPU. Although the behaviour of the liver is very complicated, the present computer simulation assumes linear elastostatics. The commercial software ANSYS is used to obtain the global stiffness matrix of the liver. Results show that GPUs are useful for the real-time graphical simulation of the liver, which in turn is needed in simulators used for training surgeons in laparoscopic surgery. Although the computer simulation should also involve rendering, neither rendering nor the time needed for rendering and displaying the liver on a screen is considered in the present work. The present work is a demonstration of a concept; the concept is not fully implemented and validated. Future work is to develop software that can accomplish real-time and very realistic graphical simulation of the liver, with the rendered image of the liver on the screen changing in real time according to the position of the surgical tool tip, approximated as the mouse cursor in 3D.
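The sketch below gives a rough idea of the per-frame computation such a linear elastostatic simulation involves, assuming the global stiffness matrix K (e.g. exported from ANSYS) has already been reduced by boundary conditions and inverted offline, so each frame needs only a GPU matrix-vector product plus the CPU-GPU transfers the paper counts in its timing. The library (CuPy), problem size, and placeholder matrix are assumptions, not the paper's actual pipeline.

```python
# Hedged sketch of the per-frame work: solve K u = f for nodal displacements,
# with K assembled and inverted offline so that each frame reduces to a dense
# GPU matrix-vector product plus the host<->device transfers.
import numpy as np
import cupy as cp

n_dof = 300   # placeholder size; roughly 3 * 2500 DOFs for the liver mesh in the paper

# Offline: load/assemble the reduced stiffness matrix and precompute its inverse.
K = np.eye(n_dof, dtype=np.float32)            # stand-in for the real assembled matrix
K_inv_gpu = cp.asarray(np.linalg.inv(K))

def frame_update(force_cpu):
    """One frame: CPU->GPU transfer, dense mat-vec on the GPU, GPU->CPU transfer."""
    f_gpu = cp.asarray(force_cpu)              # host-to-device copy
    u_gpu = K_inv_gpu @ f_gpu                  # nodal displacements u = K^-1 f
    return cp.asnumpy(u_gpu)                   # device-to-host copy

# Example: a unit force at one DOF, standing in for the surgical tool tip.
f = np.zeros(n_dof, dtype=np.float32)
f[42] = 1.0
u = frame_update(f)
print(u[:6])
```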
Abstract:
We review the current status of various aspects of biopolymer translocation through nanopores and the challenges and opportunities it offers. Much of the interest generated by nanopores arises from their potential application to cheap and fast third-generation genome sequencing. Although the ultimate goal of single-nucleotide identification has not yet been reached, great advances have been made from both a fundamental and an applied point of view, particularly in controlling the translocation time, in fabricating various kinds of synthetic pores or genetically engineering protein nanopores with tailored properties, and in devising methods (used separately or in combination) aimed at discriminating nucleotides based either on ionic or transverse electron currents, optical readout signatures, or the capabilities of the cellular machinery. Recently, exciting new applications have emerged for the detection of specific proteins and toxins (stochastic biosensors) and for the study of protein folding pathways and binding constants of protein-protein and protein-DNA complexes. The combined use of nanopores and advanced micromanipulation techniques involving optical/magnetic tweezers with high spatial resolution offers unique opportunities for improving the basic understanding of the physical behavior of biomolecules in confined geometries, with implications for the control of crucial biological processes such as protein import and protein denaturation. We highlight the key works in these areas along with future prospects. Finally, we review theoretical and simulation studies aimed at improving fundamental understanding of the complex microscopic mechanisms involved in the translocation process. Such understanding is a prerequisite to fruitful application of nanopore technology in high-throughput devices for molecular biomedical diagnostics.
Abstract:
An understanding of application I/O access patterns is useful in several situations. First, gaining insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that closely mimic realistic application behavior. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis, or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in their traces, and to mark out regions in a long trace that encapsulate sets of primitive operations representing higher-level application actions. It is robust enough to work around discrepancies between training and target traces, such as differences in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting. We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications, such as compilations, as well as industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
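To make the trace-scoring idea concrete, the sketch below scores a short sequence of primitive NFS operations against a small hand-specified HMM using the standard forward algorithm. It is a generic two-state HMM rather than the profile-HMM architecture the paper uses, and the operation alphabet and all probabilities are invented for illustration.

```python
# Sketch of scoring an NFS operation trace against an HMM with the forward
# algorithm. A small generic HMM, not the paper's profile HMM; the alphabet
# and probabilities below are illustrative only.
import numpy as np

ops = ["lookup", "read", "write", "getattr"]       # hypothetical operation alphabet
op_index = {o: i for i, o in enumerate(ops)}

start = np.array([0.6, 0.4])                        # two hidden "activity" states
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
emit = np.array([[0.40, 0.35, 0.05, 0.20],          # state 0: lookup/read heavy
                 [0.10, 0.20, 0.50, 0.20]])         # state 1: write heavy

def log_likelihood(trace):
    """Scaled forward algorithm; a higher score means a better fit to the model."""
    obs = [op_index[o] for o in trace]
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()                              # rescale to avoid underflow
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

trace = ["lookup", "getattr", "read", "read", "lookup", "read"]
print(log_likelihood(trace))   # compare against scores from models of other applications
```

In the same spirit, a trace can be recognized by scoring it against one model per known application and picking the best-scoring model; the paper's profile HMMs additionally use match/insert/delete states to tolerate differences in trace length and interleaved operations.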
Abstract:
In this work, we evaluate performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using O(n^4) operations involving dot-products and additions. We implement this algorithm on a nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best performing versions on the Power7, Nehalem, and GTX 285 run in 1.02s, 1.82s, and 1.75s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
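For reference, the sketch below shows the kind of naive O(n^4) 2D cross-correlation the application is built around, expressed directly as dot products over shifted windows. It is a plain NumPy illustration of the arithmetic only, not the authors' CUDA, SSE/VSX, or compiler-parallelized implementations, and the image sizes are arbitrary.

```python
# Naive O(n^4) 2D cross-correlation of an image against a reference template,
# written as one dot product per displacement. Illustrative NumPy version only.
import numpy as np

def cross_correlate(image, reference):
    """Valid-mode cross-correlation: one dot product per (i, j) displacement."""
    ih, iw = image.shape
    rh, rw = reference.shape
    out = np.zeros((ih - rh + 1, iw - rw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = image[i:i + rh, j:j + rw]
            out[i, j] = np.dot(window.ravel(), reference.ravel())
    return out

img = np.random.rand(64, 64).astype(np.float32)
ref = np.random.rand(16, 16).astype(np.float32)
print(cross_correlate(img, ref).shape)        # (49, 49)
```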