396 results for Supercomputer
Abstract:
Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements from the total measurements using model-based data-resolution matrix characteristics. Methods: The data-resolution matrix is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. Independent measurements are chosen based on how close the magnitude of a diagonal value is to the corresponding off-diagonal entries. The reconstruction results obtained using all measurements were compared with those obtained using only the independent measurements in both numerical and experimental phantom cases. The traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for image reconstruction does not significantly compromise the reconstructed image quality and, in turn, reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) was chosen at random, the reconstruction results were of poor quality, with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4736820
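As a rough illustration of the idea described above (not the authors' implementation), the data-resolution matrix for a Tikhonov-regularized linear model can be formed from the sensitivity matrix, with measurement independence judged from its diagonal and off-diagonal magnitudes. The sensitivity matrix J, the regularization weight lam, and the selection threshold below are illustrative assumptions.

    import numpy as np

    def data_resolution_matrix(J, lam):
        # Tikhonov-regularized influence matrix: predicted data = N @ measured data,
        # N = J (J^T J + lam I)^{-1} J^T
        JtJ = J.T @ J
        inv = np.linalg.inv(JtJ + lam * np.eye(J.shape[1]))
        return J @ inv @ J.T

    def pick_independent_measurements(N, ratio=2.0):
        # Keep a measurement when its diagonal entry dominates the off-diagonal
        # entries in its row by the chosen (assumed) ratio.
        keep = []
        for i in range(N.shape[0]):
            off = np.abs(np.delete(N[i], i)).max()
            if np.abs(N[i, i]) >= ratio * off:
                keep.append(i)
        return keep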
Abstract:
Computational grids with multiple batch systems (batch grids) can be powerful infrastructures for executing long-running multi-component parallel applications. In this paper, we evaluate the potential improvements in throughput of long-running multi-component applications when the different components of the applications are executed on multiple batch systems of batch grids. We compare multiple-batch executions with executions of the components on a single batch system without increasing the number of processors used. We perform our analysis with a prominent long-running multi-component application for climate modeling, the Community Climate System Model (CCSM). We have built a robust simulator that models the characteristics of both the multi-component application and the batch systems. By conducting a large number of simulations with different workload characteristics and queuing policies of the systems, processor allocations to components of the application, distributions of the components to the batch systems, and inter-cluster bandwidths, we show that multiple-batch executions lead to a 55% average increase in throughput over single-batch executions for long-running CCSM. We also conducted real experiments with a practical middleware infrastructure and showed that multi-site executions lead to effective utilization of batch systems for executions of CCSM and give higher simulation throughput than single-site executions. Copyright (c) 2011 John Wiley & Sons, Ltd.
Abstract:
Background and Purpose: Withanolides are naturally occurring chemical compounds. They are secondary metabolites produced via oxidation of steroids and structurally consist of a steroid backbone bound to a lactone or its derivatives. They are known to protect plants against herbivores and have medicinal value, including anti-inflammatory, anti-cancer, adaptogenic, and anti-oxidant effects. Withaferin A (Wi-A) and Withanone (Wi-N) are two structurally similar withanolides isolated from Withania somnifera, also known as Ashwagandha in Indian Ayurvedic medicine. Ashwagandha alcoholic leaf extract (i-Extract), rich in Wi-N, was shown to kill cancer cells selectively. Furthermore, the two closely related purified phytochemicals, Wi-A and Wi-N, showed differential activity in normal and cancer human cells in vitro and in vivo. We had earlier identified several genes involved in the cytotoxicity of i-Extract in human cancer cells by loss-of-function assays using either siRNA or a randomized ribozyme library. Methodology/Principal Findings: In the present study, we have employed bioinformatics tools on four genes, i.e., mortalin, p53, p21 and Nrf2, identified by loss-of-function screenings. We examined the docking efficacy of Wi-N and Wi-A to each of the four targets and found that the two closely related phytochemicals have differential binding properties to the selected cellular targets that can potentially instigate differential molecular effects. We validated these findings by undertaking parallel experiments on specific gene responses to either Wi-N or Wi-A in human normal and cancer cells. We demonstrate that Wi-A, which binds strongly to the selected targets, acts as a strong cytotoxic agent for both normal and cancer cells. Wi-N, on the other hand, binds weakly to the targets; it showed milder cytotoxicity towards cancer cells and was safe for normal cells. The present molecular docking analyses and experimental evidence provide important insights into the use of Wi-A and Wi-N for cancer treatment and the development of new anti-cancer phytochemical cocktails.
Abstract:
Effective sharing of the last-level cache has a significant influence on the overall performance of a multicore system. We observe that existing solutions control cache occupancy at a coarse granularity, do not scale well to large core counts, and in some cases lack the flexibility to support a variety of performance goals. In this paper, we propose Probabilistic Shared Cache Management (PriSM), a framework to manage the cache occupancy of different cores at cache-block granularity by controlling their eviction probabilities. The proposed framework requires only simple hardware changes to implement, can scale to large core counts, and is flexible enough to support a variety of performance goals. We demonstrate the flexibility of PriSM by computing the eviction probabilities needed to achieve goals like hit maximization, fairness, and QoS. PriSM-HitMax improves performance by 18.7% over LRU and 11.8% over previously proposed schemes in a sixteen-core machine. PriSM-Fairness improves fairness over existing solutions by 23.3% along with a performance improvement of 19.0%. PriSM-QOS successfully achieves the desired QoS targets.
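A minimal sketch of the core idea, computing per-core eviction probabilities so that expected occupancy drifts toward a target partition; the target shares, current occupancies, and the normalization used here are illustrative assumptions, not the paper's exact formulation.

    def eviction_probabilities(current, target):
        # current[i], target[i]: fraction of cache blocks held / desired by core i.
        # Cores holding more than their target share become more likely to lose a
        # block on eviction; cores below target are protected.
        excess = [max(c - t, 0.0) for c, t in zip(current, target)]
        total = sum(excess)
        if total == 0.0:
            # No core exceeds its target: fall back to evicting in proportion
            # to current occupancy (plain LRU-like behaviour).
            return [c / sum(current) for c in current]
        return [e / total for e in excess]

    # Example: core 0 is over its share, so it absorbs most evictions.
    print(eviction_probabilities([0.5, 0.3, 0.2], [0.25, 0.25, 0.5]))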
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a big threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution to this problem is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects such as logical derating, electrical derating and timing window derating, and/or by making use of application redundancy, e.g., redundancy in firmware/software executing on the robust hardware so designed. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to a 23% reduction in hardware overhead when considering individual and combined derating factors.
Suite of tools for statistical N-gram language modeling for pattern mining in whole genome sequences
Abstract:
Genome sequences contain a number of patterns that have biomedical significance. Repetitive sequences of various kinds are a primary component of most genomic sequence patterns. We extended the suffix-array-based Biological Language Modeling Toolkit to compute n-gram frequencies as well as n-gram language-model-based perplexity in windows over the whole genome sequence to find biologically relevant patterns. We present the suite of tools and their application to analysis of the whole human genome sequence.
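A small sketch of n-gram counting and per-window perplexity over a DNA string, using plain dictionaries rather than the toolkit's suffix arrays; the add-alpha smoothing, window size, and toy sequence are assumptions for illustration only.

    import math
    from collections import Counter

    def ngram_counts(seq, n):
        return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

    def window_perplexity(window, counts, context_counts, n, alphabet=4, alpha=1.0):
        # Perplexity of an n-gram model (with add-alpha smoothing) on one window.
        log_prob = 0.0
        grams = len(window) - n + 1
        for i in range(grams):
            g = window[i:i + n]
            p = (counts[g] + alpha) / (context_counts[g[:-1]] + alpha * alphabet)
            log_prob += math.log(p)
        return math.exp(-log_prob / grams)

    genome = "ACGTACGTGGGCCCATATAT" * 50   # stand-in for a chromosome sequence
    n = 3
    counts = ngram_counts(genome, n)
    contexts = ngram_counts(genome, n - 1)
    for start in range(0, len(genome) - 100, 100):
        w = genome[start:start + 100]
        print(start, round(window_perplexity(w, counts, contexts, n), 3))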
Abstract:
A novel approach that can more effectively use the structural information provided by traditional imaging modalities in multimodal diffuse optical tomographic imaging is introduced. This approach is based on a prior-image-constrained l1 minimization scheme and has been motivated by recent progress in sparse image reconstruction techniques. It is shown that the proposed framework is more effective in terms of localizing the tumor region and recovering the optical property values, in both numerical and gelatin phantom cases, compared to traditional methods that use structural information. (C) 2012 Optical Society of America
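One way to read the stated scheme is as an l1 penalty on the deviation from a prior image, which can be minimized with a proximal-gradient (ISTA-style) iteration; the operator A, step size, penalty weight, and iteration count below are placeholders, not the paper's actual formulation.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prior_constrained_l1(A, b, x_prior, lam=1e-2, step=None, iters=200):
        # Minimize (1/2)||A x - b||^2 + lam * ||x - x_prior||_1 by ISTA on d = x - x_prior.
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        d = np.zeros(A.shape[1])
        r0 = b - A @ x_prior
        for _ in range(iters):
            grad = A.T @ (A @ d - r0)
            d = soft_threshold(d - step * grad, step * lam)
        return x_prior + d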
Abstract:
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). DOI: 10.1117/1.JBO.17.10.106015
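For context, the generalized cross-validation criterion used as the comparison baseline can be evaluated for a Tikhonov problem as sketched below; the SVD-based formulation and the lambda grid are standard textbook material, and the paper's MRM criterion is not reproduced here.

    import numpy as np

    def gcv_lambda(A, y, lambdas):
        # GCV(lam) = ||(I - A_lam) y||^2 / trace(I - A_lam)^2,
        # where A_lam = A (A^T A + lam I)^{-1} A^T is the Tikhonov influence matrix.
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        Uty = U.T @ y
        m = A.shape[0]
        best, best_lam = np.inf, None
        for lam in lambdas:
            f = s**2 / (s**2 + lam)                       # Tikhonov filter factors
            resid = np.sum(((1 - f) * Uty) ** 2) + (y @ y - Uty @ Uty)
            denom = (m - np.sum(f)) ** 2
            if resid / denom < best:
                best, best_lam = resid / denom, lam
        return best_lam

    # Example usage (J and data are hypothetical): gcv_lambda(J, data, np.logspace(-6, 1, 50))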
Abstract:
We have developed a technique to measure the absolute frequencies of optical transitions by using an evacuated Rb-stabilized ring-cavity resonator as a transfer cavity. The absolute frequency of the Rb D-2 line (at 780 nm) used to stabilize the cavity is known and allows us to determine the absolute value of the unknown frequency. We study wavelength-dependent errors due to dispersion at the cavity mirrors by measuring the frequency of the same transition in the Cs D-2 line (at 852 nm) at three cavity lengths. The spread in the values shows that dispersion errors are below 30 kHz, corresponding to a relative precision of 10^-10. We explain the reduced dispersion errors in the ring-cavity geometry by calculating the errors due to the lateral shift and the phase shift at the mirrors, and show that they are roughly equal but of opposite sign. We have earlier shown that diffraction errors (due to the Gouy phase) are negligible in the ring-cavity geometry compared to a linear cavity; the reduced dispersion error is another advantage. Our values are consistent with measurements of the same transition using the more expensive frequency-comb technique. Our simpler method is ideally suited for measuring hyperfine structure, fine structure, and isotope shifts of up to several hundred gigahertz.
Abstract:
Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ l2-norm-based regularization, which is known to remove the high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, where nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependence of the solution between successive frames results in a linear inverse problem. This new framework, combined with l1-norm-based regularization, can provide better robustness to noise and better contrast recovery than conventional l2-based techniques. Moreover, it is shown that the proposed l1-based technique is computationally more efficient than its l2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame, and any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
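The linearization step can be written compactly: assuming the change between frames is small, the frame-k problem reduces to a linear system in the update, with an l1 penalty on that update. The notation below is illustrative (J a sensitivity matrix, F the forward model), not taken verbatim from the paper.

    \Delta\mu_k = \arg\min_{\Delta\mu} \; \tfrac{1}{2}\,\lVert J\,\Delta\mu - \bigl(y_k - F(\mu_{k-1})\bigr)\rVert_2^2
                  + \lambda\,\lVert \Delta\mu \rVert_1,
    \qquad \mu_k = \mu_{k-1} + \Delta\mu_k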
Abstract:
Protein structure comparison is essential for understanding various aspects of protein structure, function and evolution. It can be used to explore the structural diversity and evolutionary patterns of protein families. In view of the above, a new algorithm is proposed which performs faster protein structure comparison using the peptide backbone torsional angles. It is fast, robust, computationally inexpensive, and efficient in finding structural similarities between two different protein structures, and it is also capable of identifying structural repeats within the same protein molecule.
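A toy sketch of comparing two structures through their backbone torsion-angle sequences, scoring windows by circular angle differences; the window length, cutoff, and exhaustive window scan are assumptions for illustration, not the published algorithm.

    import math

    def ang_diff(a, b):
        # Smallest circular difference between two angles in degrees.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def torsion_similarity(phi_psi_a, phi_psi_b, window=8, cutoff=30.0):
        # phi_psi_*: list of (phi, psi) tuples, one per residue.
        # Slide a window of one sequence against the other and report positions
        # where the mean angular deviation is below the cutoff.
        hits = []
        for i in range(len(phi_psi_a) - window + 1):
            for j in range(len(phi_psi_b) - window + 1):
                devs = [ang_diff(pa, pb)
                        for k in range(window)
                        for pa, pb in zip(phi_psi_a[i + k], phi_psi_b[j + k])]
                if sum(devs) / len(devs) < cutoff:
                    hits.append((i, j))
        return hits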
Abstract:
Monitoring of infrastructural resources in clouds plays a crucial role in providing application guarantees such as performance, availability, and security. Monitoring is crucial from two perspectives: that of the cloud user and that of the service provider. The cloud user's interest is in performing an analysis to arrive at appropriate Service-Level Agreement (SLA) demands, and the cloud provider's interest is in assessing whether the demand can be met. To support this, a monitoring framework is necessary, particularly since cloud hosts are subject to varying load conditions. To illustrate the importance of such a framework, we choose the example of performance as the Quality of Service (QoS) requirement and show how inappropriate provisioning of resources may lead to unexpected performance bottlenecks. We evaluate existing monitoring frameworks to bring out the motivation for building much more powerful monitoring frameworks. We then propose a distributed monitoring framework that enables fine-grained monitoring for applications and demonstrate it with a prototype system implementation for typical use cases.
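A minimal sketch of the kind of fine-grained host probe such a framework relies on, sampling host and per-process CPU and memory with psutil and emitting records a collector could consume; the metric set, sampling interval, and output format are assumptions, not the paper's design.

    import json, time
    import psutil  # third-party package; assumed available on the monitored host

    def sample(top_n=5):
        procs = sorted(psutil.process_iter(['pid', 'name', 'cpu_percent', 'memory_percent']),
                       key=lambda p: p.info['cpu_percent'] or 0.0, reverse=True)
        return {
            'ts': time.time(),
            'host_cpu': psutil.cpu_percent(interval=None),
            'host_mem': psutil.virtual_memory().percent,
            'top_procs': [p.info for p in procs[:top_n]],
        }

    if __name__ == '__main__':
        while True:
            print(json.dumps(sample()))   # a real agent would ship this to a collector
            time.sleep(5)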
Abstract:
The Reeb graph of a scalar function tracks the evolution of the topology of its level sets. This paper describes a fast algorithm to compute the Reeb graph of a piecewise-linear (PL) function defined over manifolds and non-manifolds. The key idea in the proposed approach is to maximally leverage the efficient contour tree algorithm to compute the Reeb graph. The algorithm proceeds by dividing the input into a set of subvolumes that have loop-free Reeb graphs, using the join tree of the scalar function, and computes the Reeb graph by combining the contour trees of all the subvolumes. Since the key ingredient of this method is a series of union-find operations, the algorithm is fast in practice. Experimental results demonstrate that it outperforms current generic algorithms by up to two orders of magnitude and performs on par with algorithms that cater to restricted classes of input. The algorithm also extends to handle large data that do not fit in memory.
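Since the abstract highlights union-find as the key ingredient, a standard disjoint-set structure with path compression and union by rank is sketched below; how the contour trees of the subvolumes are actually stitched together is not shown here.

    class DisjointSet:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            # Path compression keeps the trees flat, giving near-constant amortized cost.
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.rank[ra] < self.rank[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            if self.rank[ra] == self.rank[rb]:
                self.rank[ra] += 1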
Abstract:
In this article, we investigate the performance of a volume integral equation code on the BlueGene/L system. The volume integral equation (VIE) is solved for homogeneous and inhomogeneous dielectric objects for radar cross section (RCS) calculation in a highly parallel environment. Pulse basis functions and the point-matching technique are used to convert the volume integral equation into a set of simultaneous linear equations, which is solved using the parallel numerical library ScaLAPACK on IBM's distributed-memory supercomputer BlueGene/L with different numbers of processors to compare the speed-up and test the scalability of the code.
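Conceptually, point matching with pulse basis functions reduces the VIE to a dense linear system Z a = b. A serial stand-in using NumPy is sketched below (the paper solves the system with ScaLAPACK across BlueGene/L processors); the matrix-fill routine is a placeholder, not the actual Green's-function kernel.

    import numpy as np

    def solve_vie(fill_entry, excitation, n_cells):
        # Z[m, n]: interaction of basis (cell) n observed at match point m;
        # fill_entry is a user-supplied Green's-function routine (placeholder here).
        Z = np.empty((n_cells, n_cells), dtype=complex)
        for m in range(n_cells):
            for n in range(n_cells):
                Z[m, n] = fill_entry(m, n)
        b = np.asarray(excitation, dtype=complex)   # incident field at the match points
        return np.linalg.solve(Z, b)                # a distributed solver replaces this at scale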
Abstract:
With advances in techniques for RCS reduction, it has become practical to develop aircraft that are nearly invisible to modern-day radars. To detect such low-visibility targets, it is necessary to exploit other phenomena that contribute to the scattering of the incident electromagnetic wave. It is well known from developments in clear-air scattering using RASS that an induced acoustic wave can be used to create dielectric-constant fluctuations. Scattering from these fluctuations, rather than from the aircraft, has been observed to enhance the RCS of clear air: when the acoustic wavelength is half of the incident EM wavelength, the Bragg scattering condition is met and the RCS is enhanced. For detecting low-visibility targets at a significant distance from the main radar, inducing such fluctuations with an acoustic source collocated with the radar is infeasible. However, the flow past the aircraft produces acoustic disturbances around the aircraft that can be exploited to detect low-visibility targets. In this paper, a numerical simulation of RCS enhancement due to acoustic disturbances is presented. In effect, this requires the solution of scattering from 3D inhomogeneous, complex-shaped bodies. A volume-surface integral equation (VSIE) is used to compute the RCS from the fluctuations introduced by the acoustic disturbances. Although the technique developed can be used to study scattering from bodies of any shape and acoustic disturbances of any profile, for an illustrative case the enhancement due to Bragg scattering is shown to improve the RCS by nearly 30 dB for a synthetic sinusoidal acoustic variation profile in air over a spherical scattering volume.
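For reference, the Bragg matching condition invoked above relates the two wavelengths as

    \lambda_{\mathrm{acoustic}} = \frac{\lambda_{\mathrm{EM}}}{2}

so that waves scattered from successive acoustic wavefronts add in phase, which is what produces the RCS enhancement discussed in the abstract.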