96 results for grid computing


Relevance:

20.00%

Publisher:

Abstract:

Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD or single-instruction, multiple-data computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in Newton iteration, the condition numbers of the eigenvalues and the residual errors of the eigenpairs, and reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
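
The rotor models and the parallel shooting method are beyond a short sketch, but the central object of the analysis, the Floquet transition matrix, is straightforward to illustrate. In the Python sketch below, a damped Mathieu equation stands in for the rotor system and all parameter values are illustrative assumptions: the FTM is built column by column from unit initial conditions integrated over one period, and stability is read off its eigenvalues (here with a sequential eigensolver rather than the paper's parallel QR method).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped Mathieu equation x'' + c*x' + (a + b*cos t)*x = 0, written as a
# first-order periodic system with period T = 2*pi (illustrative stand-in).
a, b, c = 1.0, 0.5, 0.2
T = 2.0 * np.pi

def rhs(t, y):
    x, v = y
    return [v, -(a + b * np.cos(t)) * x - c * v]

# Columns of the Floquet transition matrix are the states at t = T reached
# from unit initial conditions at t = 0.
ftm = np.column_stack([
    solve_ivp(rhs, (0.0, T), col, rtol=1e-10, atol=1e-12).y[:, -1]
    for col in np.eye(2)
])

# Floquet multipliers: the periodic system is stable if all |lambda| < 1.
multipliers = np.linalg.eigvals(ftm)
print("Floquet multipliers:", multipliers)
print("stable:", bool(np.all(np.abs(multipliers) < 1.0)))
```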

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we have developed a method to compute the fractal dimension (FD) of discrete time signals, in the time domain, by modifying the box-counting method. The size of the box depends on the sampling frequency of the signal. The number of boxes required to completely cover the signal is obtained at multiple time resolutions. The time resolutions are made coarse by decimating the signal. The log-log plot of the total number of boxes required to cover the curve versus the size of the box used appears to be a straight line, whose slope is taken as an estimate of the FD of the signal. Results are provided to demonstrate the performance of the proposed method using parametric fractal signals. The estimation accuracy of the method is compared with that of the Katz, Sevcik, and Higuchi methods. In addition, some properties of the FD are discussed.
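
A generic sketch of the box-counting idea is given below; it is not the authors' exact decimation scheme, and the scales, box heights and test signal are illustrative assumptions. The FD estimate is the slope of the log-log fit of box count against inverse box size.

```python
import numpy as np

def box_counting_fd(signal, scales=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a 1-D signal by box counting."""
    signal = np.asarray(signal, dtype=float)
    dy = (signal.max() - signal.min()) / len(signal)  # base box height
    counts = []
    for k in scales:
        n_boxes = 0
        for i in range(0, len(signal) - k, k):
            window = signal[i:i + k + 1]
            # boxes of height k*dy needed to span this window's range
            n_boxes += int(np.ceil((window.max() - window.min()) / (k * dy))) + 1
        counts.append(n_boxes)
    # slope of log(count) vs log(1/box size) estimates the FD
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# Weierstrass-like test signal with tunable roughness
t = np.linspace(0.0, 1.0, 4096)
x = sum(0.5 ** (0.5 * j) * np.cos(2.0 * np.pi * 3 ** j * t) for j in range(8))
print("estimated FD:", box_counting_fd(x))
```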

Relevance:

20.00%

Publisher:

Abstract:

CuO nanowires are synthesized by heating Cu foil, rod and grid in ambient air, without employing a catalyst or gas flow, at temperatures ranging from 400 to 800 degrees C for durations of 1-12 h. Scanning electron microscopy (SEM) investigation reveals the formation of nanowires. The structure, morphology and phase of the as-synthesized nanowires are analyzed by various techniques such as X-ray diffraction (XRD), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA) and Fourier transform infrared spectroscopy (FTIR). It is found that these nanowires are composed of the CuO phase and that the underlying film is of Cu2O. A systematic study is carried out to explore the possibility of completely transforming one phase into the other. A possible growth mechanism for the nanowires is also discussed. (C) 2011 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

The enthalpy method is primarily developed for studying phase change in a multicomponent material, characterized by a continuous liquid volume fraction (φ_l) vs temperature (T) relationship. Using the Galerkin finite element method we obtain solutions to the enthalpy formulation for phase change in 1D slabs of pure material, by assuming a superficial phase change region (linear φ_l vs T) around the discontinuity at the melting point. Errors between the computed and analytical solutions are evaluated for the fluxes at, and positions of, the freezing front, for different widths of the superficial phase change region and for spatial discretizations with linear and quadratic basis functions. For Stefan number (St) varying between 0.1 and 10 the method is relatively insensitive to the spatial discretization and to the width of the superficial phase change region. Greater sensitivity is observed at St = 0.01, where the variation in the enthalpy is large. In general, the width of the superficial phase change region should span at least 2-3 Gauss quadrature points for the enthalpy to be computed accurately. The method is applied to study conventional melting of slabs of frozen brine and ice. Regardless of the form of the φ_l vs T relationship, the thawing times were found to scale as the square of the slab thickness. The ability of the method to efficiently capture multiple thawing fronts, which may originate at any spatial location within the sample, is illustrated with the microwave thawing of slabs and 2D cylinders. (C) 2002 Elsevier Science Ltd. All rights reserved.
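
A minimal sketch of the enthalpy formulation is given below, using an explicit finite-difference discretization rather than the paper's Galerkin finite elements; the nondimensional parameters and boundary conditions are illustrative assumptions. Temperature is recovered from the nodal enthalpy through a linear φ_l over a narrow superficial phase change region, so the melting front emerges without explicit front tracking.

```python
import numpy as np

# 1-D melting slab, nondimensional, with unit density, conductivity and
# specific heat; an explicit sketch of the enthalpy method (illustrative).
N, dx = 101, 0.01
St = 0.5                   # Stefan number; latent heat Lf = 1/St
Lf = 1.0 / St
Ts, Tl = -0.025, 0.025     # superficial phase change region around Tm = 0
dt = 0.4 * dx**2           # stable explicit time step (alpha = 1)

def temp(H):
    """Invert volumetric enthalpy to temperature; phi_l linear on (Ts, Tl)."""
    T = np.where(H <= Ts, H, 0.0)                     # solid branch: H = T
    T = np.where(H >= Tl + Lf, H - Lf, T)             # liquid branch: H = T + Lf
    mushy = (H > Ts) & (H < Tl + Lf)
    T = np.where(mushy, Ts + (H - Ts) * (Tl - Ts) / (Tl - Ts + Lf), T)
    return T

H = np.full(N, -1.0)       # initially solid at T = -1 (H = T in the solid)
H[0] = 1.0 + Lf            # hot boundary held liquid at T = +1
for _ in range(10000):
    T = temp(H)
    H[1:-1] += dt * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2

phi_l = np.clip((temp(H) - Ts) / (Tl - Ts), 0.0, 1.0)  # liquid fraction
print("front position ~", np.argmin(phi_l > 0.5) * dx)
```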

Relevance:

20.00%

Publisher:

Abstract:

A pseudo-spectral method based on Fourier expansions in a Cartesian coordinate system is shown to be an economical method for direct numerical simulation studies of transitional round jets. Several characteristics of the solutions are presented to establish their validity in spite of the unnatural choices. We show that neither periodicity nor the use of a Cartesian system has adversely affected the simulations. Instead, there are benefits in terms of ease of computing and a lack of the usual restrictions due to grid structure near the jet axis. By computing the simultaneous evolution of passive scalars, the process of reaction in round jet burners, between a fuel-laden jet and an ambient oxidizer, was also simulated. Some typical solutions are shown, and the results of analysis of these data are summarized. (C) 2001 Elsevier Science Ltd. All rights reserved.
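
The jet solver itself is beyond a sketch, but the property that makes the pseudo-spectral choice economical is simple to demonstrate: on a periodic Cartesian grid, differentiation reduces to multiplication by ik in wavenumber space, with accuracy limited only by machine precision for resolved modes. A minimal illustration (grid size and test field are arbitrary):

```python
import numpy as np

# Fourier pseudo-spectral derivative on a periodic grid.
N, L = 64, 2.0 * np.pi
x = np.arange(N) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

u = np.exp(np.sin(x))                          # smooth periodic test field
du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
du_exact = np.cos(x) * u                       # analytical derivative

print("max error:", np.max(np.abs(du_spectral - du_exact)))  # ~1e-13
```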

Relevance:

20.00%

Publisher:

Abstract:

Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 k samples per second with a precision of 16 bits per sample. A few minutes of acquired data run into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor such as a desktop computer may not be adequate to meet this computational requirement. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
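
As a rough sketch of the parallel pattern, a 200 ms segment can be fanned out across worker processes, each mapping one electrode channel to the nearest trained cluster. The Python multiprocessing version below is only an illustration with hypothetical features and centroids; the actual system runs on a multi-node DSP platform.

```python
import numpy as np
from multiprocessing import Pool

FS = 20_000                  # 20 k samples/s per electrode (as with MED64)
SEG = int(0.2 * FS)          # one 200 ms segment
N_CH = 64                    # electrodes

# Hypothetical trained cluster centroids in a simple 2-feature space
CENTROIDS = np.array([[0.1, 0.5], [0.8, 0.2], [0.4, 0.9]])

def classify_channel(trace):
    """Map one channel's segment to its nearest trained cluster."""
    feats = np.array([trace.std(), np.abs(np.diff(trace)).mean()])
    return int(np.argmin(np.linalg.norm(CENTROIDS - feats, axis=1)))

if __name__ == "__main__":
    segment = np.random.randn(N_CH, SEG)   # stand-in for acquired data
    with Pool(processes=8) as pool:        # 8 workers, as in the paper
        labels = pool.map(classify_channel, segment)
    print("per-channel cluster labels:", labels)
```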

Relevance:

20.00%

Publisher:

Abstract:

Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk) edge graph; this takes Õ(m + n^{5/2} k min{k^{1/2}, n^{1/6}}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^{5/6}). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
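
Once the tree T is built, a connectivity query is just a path minimum, as the sketch below shows (the encoding and names are hypothetical; the paper's contribution is constructing T, not querying it).

```python
from collections import deque

def connectivity(tree, part_of, s, t, k):
    """tree: {part: [(neighbor_part, weight), ...]}; part_of: vertex -> part."""
    a, b = part_of[s], part_of[t]
    if a == b:
        return f"> {k}"          # same part: edge connectivity exceeds k
    # walk the unique tree path from a to b, tracking the lightest edge
    queue, seen = deque([(a, float("inf"))]), {a}
    while queue:
        part, lightest = queue.popleft()
        if part == b:
            return lightest
        for nbr, w in tree[part]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, min(lightest, w)))

# Toy instance: three parts joined in a path with edge weights 2 and 1
tree = {0: [(1, 2)], 1: [(0, 2), (2, 1)], 2: [(1, 1)]}
part_of = {"s": 0, "u": 1, "t": 2}
print(connectivity(tree, part_of, "s", "t", k=3))   # -> 1
```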

Relevance:

20.00%

Publisher:

Abstract:

Allgather is an important MPI collective communication. Most algorithms for allgather have been designed for homogeneous, tightly coupled systems. The existing algorithms for allgather on grid systems do not efficiently utilize the bandwidths available on the slow wide-area links of the grid. In this paper, we present an algorithm for allgather on grids that efficiently utilizes wide-area bandwidths and is also wide-area optimal. Our algorithm is also adaptive to grid load dynamics, since it considers transient network characteristics when dividing the nodes into clusters. Our experiments on a real grid setup consisting of 3 sites show that our algorithm gives an average performance improvement of 52% over existing strategies.
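
For context, a minimal usage sketch of the allgather collective with mpi4py (the Python MPI bindings) is shown below; the paper's algorithm changes the communication schedule underneath this interface, not the interface itself.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process contributes its own block; afterwards every process
# holds the concatenation of all blocks.
local = [rank, rank * 10]
gathered = comm.allgather(local)

print(f"rank {rank} sees: {gathered}")
# run with, e.g.: mpiexec -n 4 python allgather_demo.py
```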

Relevance:

20.00%

Publisher:

Abstract:

The paper reports the operational experience from a 100 kWe gasification power plant connected to the grid in Karnataka. Biomass Energy for Rural India (BERI) is a program that implemented gasification-based power generation with an installed capacity of 0.88 MWe distributed over three locations to meet the electrical energy needs in the district of Tumkur. The operation of one 100 kWe power plant was found unsatisfactory, not meeting the designed performance. The Indian Institute of Science, Bangalore, the technology developer, took the initiative to ensure the system operation, build capacity and prove the designed performance. The grid-connected power plant consists of the IISc gasification system, which includes the reactor, cooling and cleaning system, fuel drier and water treatment system, to meet the producer gas quality required by an engine. The producer gas is used as a fuel in a Cummins India Limited GTA 855 G model turbocharged engine, and the power output is connected to the grid. The system has operated for over 1000 continuous hours, with only about 70 h of grid outages. The total biomass consumption for 1035 h of operation was 111 t, at an average of 107 kg/h. Total energy generated was 80.6 MWh, reducing over 100 t of CO2 emissions. The overall specific fuel consumption was about 1.36 kg/kWh, amounting to an overall efficiency from biomass to electricity of about 18%. The present operations indicate that the maintenance schedule for the plant can be at the end of every 1000 h. The results for another 1000 h of operation by the local team are also presented. (C) 2011 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
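
A back-of-envelope check ties the reported figures together; the calorific value assumed below (about 14.7 MJ/kg, typical of dry woody biomass) is an assumption, not a number from the paper.

```python
# Reported operating figures
biomass_t, energy_mwh, hours = 111.0, 80.6, 1035.0

sfc = biomass_t * 1000.0 / (energy_mwh * 1000.0)       # kg per kWh
print(f"specific fuel consumption: {sfc:.2f} kg/kWh")  # ~1.38 (reported 1.36)
print(f"biomass throughput: {biomass_t * 1000.0 / hours:.0f} kg/h")  # ~107

lhv = 14.7                                   # MJ/kg, assumed calorific value
eta = 3.6 / (sfc * lhv)                      # 1 kWh = 3.6 MJ
print(f"biomass-to-electricity efficiency: {eta:.0%}")  # ~18%, as reported
```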

Relevance:

20.00%

Publisher:

Abstract:

Information forms the basis of modern technology. To meet the ever-increasing demand for information, means have to be devised for a more efficient and better-equipped technology to intelligibly process data. Advances in photonics have made their impact on each of the four key applications in information processing, i.e., acquisition, transmission, storage and processing of information. The inherent advantages of ultrahigh bandwidth, high speed and low-loss transmission have already established fiber optics as the backbone of communication technology. However, the optics-to-electronics inter-conversion at the transmitter and receiver ends severely limits both the speed and the bit rate of lightwave communication systems. As the trend towards still faster and higher-capacity systems continues, it has become increasingly necessary to perform more and more signal-processing operations in the optical domain itself, i.e., with all-optical components and devices that possess a high bandwidth and can perform parallel processing functions to eliminate the electronic bottleneck.