35 results for Federal High Performance Computing Program (U.S.)
Abstract:
Can autonomic computing concepts be applied to traditional multi-core systems found in high performance computing environments? In this paper, we propose a novel synergy between parallel computing and swarm robotics to offer a new computing paradigm, 'Swarm-Array Computing', that can harness and apply autonomic computing to parallel computing systems. One of the three proposed swarm-array computing approaches, based on landscapes of intelligent cores in which the cores of a parallel computing system are abstracted to swarm agents, is investigated. In this approach, a task is executed and transferred seamlessly between cores, thereby achieving the self-ware properties that characterize autonomic computing. FPGAs are considered as an experimental platform, taking into account their application in space robotics. The feasibility of the proposed approach is validated on the SeSAm multi-agent simulator.
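The core idea, cores abstracted to swarm agents that carry tasks and hand them off to healthy neighbours when a fault occurs, can be sketched in a few lines of plain Python. This is a minimal illustration only: the class names (`CoreAgent`, `Task`), the linear neighbourhood, and the fault-injection step are assumptions for the sketch, not the paper's SeSAm model.

```python
class Task:
    """A unit of work that can migrate between cores."""
    def __init__(self, task_id, work_units):
        self.task_id = task_id
        self.remaining = work_units

class CoreAgent:
    """A core abstracted as a swarm agent: it executes its task and,
    if it becomes unhealthy, hands the task off to a healthy neighbour."""
    def __init__(self, core_id):
        self.core_id = core_id
        self.task = None
        self.healthy = True

    def step(self, neighbours):
        if self.task is None:
            return
        if not self.healthy:
            # Self-healing behaviour: migrate the task to a healthy, idle neighbour.
            for n in neighbours:
                if n.healthy and n.task is None:
                    n.task, self.task = self.task, None
                    print(f"task {n.task.task_id}: core {self.core_id} -> core {n.core_id}")
                    return
        else:
            self.task.remaining -= 1
            if self.task.remaining == 0:
                print(f"task {self.task.task_id} finished on core {self.core_id}")
                self.task = None

# A small landscape of cores in a line; each core's neighbours are the adjacent cores.
cores = [CoreAgent(i) for i in range(4)]
cores[0].task = Task("T1", work_units=5)

for t in range(8):
    if t == 2:
        cores[0].healthy = False   # inject a fault to trigger task migration
    for i, c in enumerate(cores):
        neighbours = [cores[j] for j in (i - 1, i + 1) if 0 <= j < len(cores)]
        c.step(neighbours)
```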
Abstract:
This paper describes the design and manufacture of a set of precision-cooled (210 K) narrow-bandpass filters for the infrared imager and sounder on the Indian Space Research Organisation (ISRO) INSAT-3D meteorological satellite. We discuss the basis for the choice of multilayer coating designs and materials for the 21 different filter channels, together with their temperature dependence, thin-film deposition technologies, substrate metrology, and environmental durability performance. (C) 2008 Optical Society of America.
Abstract:
The High Resolution Dynamics Limb Sounder is described, with particular reference to the atmospheric measurements to be made and the rationale behind the measurement strategy. The demands this strategy places on the filters to be used in the instrument, and the designs to which it leads, are described. A second set of filters at an intermediate image plane, introduced to reduce "ghost imaging", is discussed together with their required spectral properties. A method is described in which the spectral characteristics of the primary and secondary filters in each channel are combined with the spectral response of the detectors and other optical elements to obtain the system spectral response, weighted appropriately for the Planck function and atmospheric limb absorption. This method is used to determine whether the out-of-band spectral blocking requirement for a channel is met, and an example calculation shows how the blocking is built up for a representative channel. Finally, the techniques used to produce filters of the necessary sub-millimetre sizes are discussed, together with the testing methods and procedures used to assess environmental durability and establish space-flight quality.
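The system-response calculation described above can be sketched numerically. The snippet below multiplies illustrative component transmissions (Gaussian passbands standing in for the primary and secondary filters, plus a flat detector response), weights the product by the Planck function for a representative scene temperature, and compares the integrated out-of-band and in-band contributions. All profiles, temperatures and channel parameters are invented for illustration and are not HIRDLS values; limb absorption weighting is omitted.

```python
import numpy as np

# Constants for the Planck spectral radiance B(lambda, T)
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wl_m, temp_k):
    """Planck spectral radiance for wavelength in metres and temperature in kelvin."""
    return (2.0 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * temp_k))

# Illustrative component responses (not instrument data): Gaussian passbands for the
# primary and secondary filters with small out-of-band leakage, plus a flat detector.
wl = np.linspace(5e-6, 20e-6, 4000)       # 5-20 micron wavelength grid
centre, width = 12.0e-6, 0.3e-6           # hypothetical channel centre and width
primary   = 0.80 * np.exp(-0.5 * ((wl - centre) / width) ** 2) + 1e-4
secondary = 0.90 * np.exp(-0.5 * ((wl - centre) / (2.0 * width)) ** 2) + 1e-3
detector  = np.full_like(wl, 0.6)

# System spectral response = product of the components, weighted here by the Planck
# function for a representative 250 K scene.
weighted = primary * secondary * detector * planck(wl, 250.0)

in_band = np.abs(wl - centre) < 3.0 * width
blocking = weighted[~in_band].sum() / weighted[in_band].sum()   # uniform grid, dx cancels
print(f"integrated out-of-band / in-band weighted response: {blocking:.2e}")
```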
Abstract:
Infrared optical-multilayer filters and materials were exposed to the space environment of low Earth orbit on LDEF. This paper summarizes the effects of that environment on the physical and optical properties of the filters and materials flown.
Abstract:
Infrared multilayer interference filters have been used extensively in satellite radiometers for about 15 years. Filters manufactured by the University of Reading have been used in Nimbus 5, 6, and 7, TIROS N, and the Pioneer Venus orbiter. The ability of the filters to withstand the space environment in these applications is critical; if degradation takes place, the effects would range from worsening of signal-to-noise performance to complete system failure. An experiment on the LDEF will enable the filters, for the first time, to be subjected to authoritative spectral measurements following space exposure to ascertain their suitability for spacecraft use and to permit an understanding of degradation mechanisms.
Abstract:
This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. (C) 2011 Elsevier B.V. All rights reserved.
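The single-calibration-curve approach can be illustrated with a short sketch: in GPC, log(molecular weight) is conventionally fitted against retention time for a set of standards, and the fit is then inverted to predict unknown molecular weights. The retention times below are hypothetical placeholders; only the nominal molecular weights of the named standards are real, and the fitting shown is generic GPC practice rather than the paper's exact procedure.

```python
import numpy as np

# Illustrative GPC calibration: log10(MW) fitted as a linear function of retention time.
standards = {
    # name: (molecular weight in g/mol, hypothetical retention time in min)
    "catechin":             (290.3, 17.8),
    "procyanidin B2":       (578.5, 17.0),
    "trigalloyl glucose":   (636.5, 16.9),
    "pentagalloyl glucose": (940.7, 16.2),
}

mw = np.array([v[0] for v in standards.values()])
rt = np.array([v[1] for v in standards.values()])

# Fit log10(MW) = a * rt + b
a, b = np.polyfit(rt, np.log10(mw), 1)

def predict_mw(retention_time_min):
    """Predict molecular weight (g/mol) from retention time via the fitted calibration."""
    return 10 ** (a * retention_time_min + b)

print(f"predicted MW at 16.5 min: {predict_mw(16.5):.0f} g/mol")
```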
Abstract:
Polymer-stabilised liquid crystals are systems in which a small amount of monomer is dissolved within a liquid crystalline host, and then polymerised in situ to produce a network. The progress of the polymerisation, performed within electro-optic cells, was studied by establishing an analytical method novel to these systems. Samples were prepared by photopolymerisation of the monomer under well-defined reaction conditions; subsequent immersion in acetone caused the host and any unreacted monomer to dissolve. High performance liquid chromatography was used to separate and detect the various solutes in the resulting solutions, enabling the amount of unreacted monomer for a given set of conditions to be quantified. Longer irradiations cause a decrease in the proportion of unreacted monomer since more network is formed, while a more uniform LC director alignment (achieved by decreasing the sample thickness) or a higher level of order (achieved by decreasing the polymerisation temperature) promotes faster reactions.
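A minimal sketch of how unreacted monomer might be quantified from such HPLC data, assuming an external-standard calibration that converts peak area to concentration. The calibration slope, initial concentration and peak areas below are invented for illustration and are not values from the study.

```python
# Hypothetical external-standard calibration: peak area -> monomer concentration,
# then conversion = 1 - c_unreacted / c_initial.
CAL_SLOPE = 2.0e4      # peak area units per (mg/mL), hypothetical
CAL_INTERCEPT = 0.0

def monomer_concentration(peak_area):
    """Concentration (mg/mL) from peak area via the linear calibration."""
    return (peak_area - CAL_INTERCEPT) / CAL_SLOPE

initial_conc = 0.50    # mg/mL of monomer in the extract, hypothetical
for irradiation_s, area in [(30, 7.5e3), (120, 3.0e3), (600, 0.5e3)]:
    conversion = 1.0 - monomer_concentration(area) / initial_conc
    print(f"{irradiation_s:>4d} s irradiation: {conversion:.0%} monomer converted")
```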
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
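As a generic illustration of the kind of data-level parallelism discussed above (not any specific workshop paper's algorithm), the sketch below evaluates candidate decision-tree splits, one per feature, in parallel workers. It uses processes rather than threads because CPython's GIL limits thread-level speedup for pure-Python work; the data set and split thresholds are synthetic.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def gini_of_split(args):
    """Weighted Gini impurity obtained by thresholding one feature column."""
    column, labels, threshold = args

    def gini(y):
        if len(y) == 0:
            return 0.0
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    left, right = labels[column <= threshold], labels[column > threshold]
    n = len(labels)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 8))               # synthetic data set
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic labels

    # Evaluate one candidate split per feature in parallel worker processes.
    candidates = [(X[:, j], y, 0.0) for j in range(X.shape[1])]
    with ProcessPoolExecutor() as pool:
        impurities = list(pool.map(gini_of_split, candidates))

    best = int(np.argmin(impurities))
    print(f"best splitting feature: {best}, impurity: {impurities[best]:.3f}")
```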
Abstract:
Active robot force control requires some form of dynamic inner loop control for stability. The author considers the implementation of position-based inner loop control on an industrial robot fitted with encoders only. It is shown that high gain velocity feedback for such a robot, which is effectively stationary when in contact with a stiff environment, involves problems beyond the usual caveats on the effects of unknown environment stiffness. It is shown that it is possible for the controlled joint to become chaotic at very low velocities if encoder edge timing data are used for velocity measurement. The results obtained indicate that there is a lower limit on controlled velocity when encoders are the only means of joint measurement. This lower limit to speed is determined by the desired amount of loop gain, which is itself determined by the severity of the nonlinearities present in the drive system.
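A minimal sketch of the edge-timing ("1/Δt") velocity estimate that the abstract refers to: speed is inferred from the interval between successive encoder edges, so at very low velocities the estimate updates rarely and is quantised by the timer resolution. The encoder resolution, timer tick and speeds below are hypothetical values chosen only to make the sketch concrete.

```python
COUNTS_PER_REV = 2000          # hypothetical encoder resolution (edges per revolution)
TIMER_TICK = 1e-6              # hypothetical timer resolution in seconds

def velocity_from_edges(edge_times_s):
    """Return velocity estimates (rev/s) from a list of encoder edge timestamps."""
    estimates = []
    for t_prev, t_next in zip(edge_times_s, edge_times_s[1:]):
        # Quantise the interval to the timer resolution, as edge-timing hardware would.
        dt = max(TIMER_TICK, round((t_next - t_prev) / TIMER_TICK) * TIMER_TICK)
        estimates.append(1.0 / (COUNTS_PER_REV * dt))
    return estimates

# A joint creeping at 0.01 rev/s produces one edge every 50 ms, so the velocity signal
# between edges is stale; this is what degrades high-gain velocity feedback at low speed.
true_speed = 0.01                                   # rev/s
edge_times = [i / (COUNTS_PER_REV * true_speed) for i in range(6)]
print(velocity_from_edges(edge_times))
```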
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limits on computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions that are based on our best knowledge of science and the most advanced technology.
Abstract:
Denaturing high-performance liquid chromatography (DHPLC) was evaluated as a rapid screening and identification method for detecting DNA sequence variation in the quinolone resistance-determining region of gyrA from Salmonella serovars. A total of 203 isolates of Salmonella were screened using this method. DHPLC analyses of 14 isolates, representing each type of novel or multiple mutation and the wild type, were compared with LightCycler-based PCR-gyrA hybridization mutation assay (GAMA) and single-strand conformational polymorphism (SSCP) analyses. The 14 isolates gave seven different SSCP patterns, and LightCycler detected four different mutations. DHPLC detected 11 DNA sequence variants at eight different codons, including those detected by LightCycler or SSCP. One of these mutations was silent. Five isolates contained multiple mutations, and four of these could be distinguished from the composite sequence variants by their DHPLC profile. Seven novel mutations, at five different loci, were identified that had not previously been described in quinolone-resistant Salmonella. DHPLC analysis proved advantageous for the detection of novel and multiple mutations. DHPLC also provides a rapid, high-throughput alternative to LightCycler and SSCP for screening frequently occurring mutations.