Abstract:
A Geodesic Constant Method (GCM) is outlined which provides a common approach to ray tracing on quadric cylinders in general and yields, in closed form, all the surface ray-geometric parameters required in the UTD mutual coupling analysis of conformal antenna arrays. The approach incorporates a shaping parameter that permits the modeling of quadric cylindrical surfaces of desired sharpness/flatness with a common set of equations. The mutual admittance between slots on a general parabolic cylinder is obtained as an illustration of the applicability of the GCM.
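For orientation, the degenerate case of a right circular cylinder already has an elementary closed-form geodesic (a helix on the developed surface). The Python sketch below computes only that special case; the function name and example numbers are illustrative, and the GCM's shaping-parameter formulation for general quadric cylinders is not reproduced here.

```python
import numpy as np

def circular_cylinder_geodesic(a, phi1, z1, phi2, z2):
    """Closed-form geodesic (helix) between two surface points on a right
    circular cylinder of radius a, given in cylinder coordinates (phi [rad], z).

    Simplest special case only; not the GCM treatment of general quadric
    cylinders with a shaping parameter.
    """
    dphi = phi2 - phi1            # angular separation as given (caller picks the wrap)
    arc = a * dphi                # circumferential distance on the unrolled surface
    dz = z2 - z1
    length = np.hypot(arc, dz)    # geodesic length: straight line on the development
    pitch_angle = np.arctan2(dz, arc)  # helix angle w.r.t. the circumferential direction
    return length, pitch_angle

# Example: two points a quarter turn apart on a 0.1 m radius cylinder
L, alpha = circular_cylinder_geodesic(0.1, 0.0, 0.0, np.pi / 2, 0.05)
```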
Abstract:
We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. Such a problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and for exactly computing the level-crossing times from those samples. The technique has low computational complexity and requires only simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective of the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. We also discuss some concrete practical applications of the sampling technique.
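To illustrate the problem set-up, the Python sketch below estimates level-crossing times from uniform samples by simple linear interpolation between adjacent samples. This is only a naive baseline, not the paper's exact sub-Nyquist / finite-rate-of-innovation method; the names and the example signal are illustrative.

```python
import numpy as np

def level_crossings_linear(x, fs, level):
    """Estimate level-crossing times of a uniformly sampled signal by
    linear interpolation between adjacent samples (baseline only)."""
    x = np.asarray(x, dtype=float)
    d = x - level
    # indices n where the signal crosses the level between samples n and n+1
    idx = np.nonzero(d[:-1] * d[1:] < 0)[0]
    # fractional position of the crossing within each sampling interval
    frac = d[idx] / (d[idx] - d[idx + 1])
    return (idx + frac) / fs

# Example: crossings of a 3 Hz sine with level 0.5, sampled at 100 Hz
fs = 100.0
t = np.arange(0, 1, 1 / fs)
times = level_crossings_linear(np.sin(2 * np.pi * 3 * t), fs, 0.5)
```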
Abstract:
Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD or single-instruction, multiple-data computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in Newton iteration, the condition numbers of the eigenvalues, and the residual errors of the eigenpairs, and reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
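To make the quantities concrete, the small sequential sketch below computes a Floquet transition matrix by integrating the linearized periodic system over one period with identity initial conditions, then reads stability from its eigenvalues. The damped Mathieu-type example and all parameter values are illustrative assumptions; the paper's rotor models, parallel shooting, and parallel QR machinery are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_transition_matrix(A, T, m):
    """Floquet transition matrix of x' = A(t) x with period T, obtained by
    integrating the m-by-m state-transition matrix over one period."""
    def rhs(t, y):
        return (A(t) @ y.reshape(m, m)).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(m).ravel(), rtol=1e-9, atol=1e-12)
    return sol.y[:, -1].reshape(m, m)

# Example: damped Mathieu equation  q'' + 0.1 q' + (1 + 0.3 cos t) q = 0
def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3 * np.cos(t)), -0.1]])

T = 2.0 * np.pi
Phi = floquet_transition_matrix(A, T, 2)
mult = np.linalg.eigvals(Phi)                       # Floquet multipliers
exponents = np.log(mult.astype(complex)) / T        # damping (real) and frequency (imag)
stable = np.all(np.abs(mult) < 1.0)                 # stable iff all multipliers lie inside the unit circle
```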
Abstract:
In this paper, we develop a method to compute the fractal dimension (FD) of discrete-time signals, in the time domain, by modifying the box-counting method. The size of the box depends on the sampling frequency of the signal. The number of boxes required to completely cover the signal is obtained at multiple time resolutions. The time resolutions are made coarser by decimating the signal. The log-log plot of the total number of boxes required to cover the curve versus the size of the box used appears to be a straight line, whose slope is taken as an estimate of the FD of the signal. Results are provided to demonstrate the performance of the proposed method using parametric fractal signals. The estimation accuracy of the method is compared with that of the Katz, Sevcik, and Higuchi methods. In addition, some properties of the FD are discussed.
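A generic box-counting sketch along these lines is given below in Python. The scale set, amplitude normalization, and counting details are illustrative assumptions, not the paper's exact sampling-frequency-based, decimation-driven scheme.

```python
import numpy as np

def box_counting_fd(x, scales=(1, 2, 4, 8, 16, 32)):
    """Box-counting estimate of the fractal dimension of a sampled signal:
    rescale amplitude to the index range, cover the curve with square boxes
    of side s samples for several scales s, and take FD as the negative
    slope of log N(s) versus log s."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # rescale amplitude so one amplitude unit corresponds to one sample step
    y = (x - x.min()) / (x.max() - x.min() + 1e-12) * (n - 1)

    counts = []
    for s in scales:
        nboxes = 0
        for start in range(0, n - 1, s):
            seg = y[start:start + s + 1]                      # include shared endpoint
            nboxes += int(np.ceil((seg.max() - seg.min()) / s)) + 1
        counts.append(nboxes)

    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)  # fit the log-log line
    return -slope

# Example: FD of white noise (close to 2 for a very rough curve, lower for smooth signals)
fd = box_counting_fd(np.random.default_rng(0).standard_normal(4096))
```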
Abstract:
We propose a physical mechanism for the triggering of starbursts in interacting spiral galaxies by shock compression of the pre-existing disk giant molecular clouds (GMCs). We show that as a disk GMC tumbles into the central region of a galaxy following a galactic tidal encounter, it undergoes a radiative shock compression by the pre-existing high pressure of the central molecular intercloud medium. The shocked outer shell of the GMC becomes gravitationally unstable, which results in a burst of star formation in the initially stable GMC. In the case of colliding galaxies with physical overlap, such as Arp 244, the cloud compression is shown to occur due to the hot, high-pressure remnant gas resulting from the collisions of atomic hydrogen gas clouds from the two galaxies. The resulting values of infrared luminosity agree with observations. The main mode of triggered star formation is via clusters of stars; thus, we can naturally explain the formation of young, luminous star clusters observed in starburst galaxies.
Abstract:
It is now clearly understood that atmospheric aerosols have a significant impact on climate due to their important role in modifying the incoming solar and outgoing infrared radiation. Whether aerosol cools (negative forcing) or warms (positive forcing) the planet depends on the relative dominance of absorbing aerosols. Recent investigations over the tropical Indian Ocean have shown that, despite its comparatively small contribution to optical depth (approximately 11%), soot has an important role in the overall radiative forcing. However, when the amount of absorbing aerosols such as soot is significant, aerosol optical depth and chemical composition are not the only determinants of aerosol climate effects; the altitude of the aerosol layer and the altitude and type of clouds are also important. In this paper, the aerosol forcing in the presence of clouds and the effect of different surface types (ocean, soil, vegetation, and different combinations of soil and vegetation) are examined based on model simulations, demonstrating that aerosol forcing changes sign from negative (cooling) to positive (warming) when reflection from below (either due to land or clouds) is high.
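The sign change can be illustrated with an elementary two-layer adding calculation, sketched below in Python. The layer reflectance and transmittance values and the function are arbitrary illustrative numbers and formulas, not output of the paper's radiative-transfer simulations.

```python
def toa_forcing(Rs, Ra=0.05, Ta=0.80, S=340.0):
    """Toy top-of-atmosphere shortwave forcing (W m^-2) of an absorbing
    aerosol layer (reflectance Ra, transmittance Ta, absorption 1-Ra-Ta)
    placed above a reflector of albedo Rs (surface or cloud).

    Uses the elementary adding formula with multiple reflections between
    the layer and the reflector; purely illustrative numbers."""
    R_with = Ra + Ta**2 * Rs / (1.0 - Ra * Rs)   # planetary albedo with the aerosol layer
    return S * (Rs - R_with)                     # absorbed solar with aerosol minus without

# Forcing goes from negative (cooling) over dark surfaces to positive
# (warming) when the reflection from below is high (bright land, clouds).
for Rs in (0.05, 0.2, 0.4, 0.6, 0.8):
    print(f"albedo below = {Rs:.2f}  forcing = {toa_forcing(Rs):+.1f} W/m^2")
```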
Abstract:
Clouds are the largest source of uncertainty in climate science and remain a weak link in modeling tropical circulation. A major challenge is to establish connections between particulate microphysics and macroscale turbulent dynamics in cumulus clouds. Here we address the issue from the latter standpoint. First we show how to create bench-scale flows that reproduce a variety of cumulus-cloud forms (including two genera and three species) and track complete cloud life cycles, e.g., from a "cauliflower" congestus to a dissipating fractus. The flow model used is a transient plume with volumetric diabatic heating scaled dynamically to simulate latent-heat release from phase changes in clouds. Laser-based diagnostics of steady plumes reveal Riehl-Malkus-type protected cores. They also show that, unlike the constancy implied by early self-similar plume models, the diabatic heating raises the Taylor entrainment coefficient just above cloud base and depresses it at higher levels. This behavior is consistent with cloud-dilution rates found in recent numerical simulations of steady deep convection, and with aircraft-based observations of homogeneous mixing in clouds. In-cloud diabatic heating thus emerges as the key driver in cloud development, and could well provide a major link between microphysics and cloud-scale dynamics.
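For reference, the classical steady Morton-Taylor-Turner plume with a constant Taylor entrainment coefficient is sketched below in Python/SciPy. Source values, parameters, and the flux convention (top-hat profiles, factors of pi dropped) are illustrative assumptions; the paper's transient plume with height-dependent volumetric diabatic heating, and hence a varying effective entrainment coefficient, is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mtt_plume(alpha=0.1, F0=1.0, z_max=10.0):
    """Steady Morton-Taylor-Turner plume in an unstratified environment,
    written in terms of volume flux Q = b^2 w, momentum flux M = b^2 w^2,
    and buoyancy flux F = b^2 w g' (top-hat profiles)."""
    def rhs(z, y):
        Q, M, F = y
        dQ = 2.0 * alpha * np.sqrt(M)   # entrainment of ambient fluid at rate set by alpha
        dM = F * Q / M                  # buoyancy accelerates the plume
        dF = 0.0                        # no diabatic heating, no stratification in this sketch
        return [dQ, dM, dF]
    y0 = [1e-3, 1e-3, F0]               # small but nonzero source fluxes
    return solve_ivp(rhs, (0.0, z_max), y0, dense_output=True)

sol = mtt_plume()
Q, M, _ = sol.sol(5.0)                  # fluxes at height z = 5
radius, velocity = Q / np.sqrt(M), M / Q
```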
Abstract:
Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for some i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk)-edge graph; this takes Õ(m + n^{5/2} k · min{k^{1/2}, n^{1/6}}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^{5/6}). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
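The tree output can be queried exactly as described: the connectivity of a pair in different partition classes is the minimum edge weight on the tree path between their classes. The Python sketch below shows only this query step; the example tree, weights, and function names are hypothetical, and the paper's construction algorithm is not reproduced.

```python
from collections import deque

def query_connectivity(tree, node_of, s, t):
    """Query a partition tree of the kind described in the abstract.

    tree    : dict mapping tree node -> list of (neighbour, edge weight)
    node_of : dict mapping graph vertex -> tree node (partition class) containing it
    Returns the edge connectivity of (s, t) if it is at most k, else '> k'."""
    u, v = node_of[s], node_of[t]
    if u == v:
        return "> k"                     # same class: connectivity exceeds k
    # bottleneck (minimum-weight edge) on the unique tree path from u to v
    best = {u: float("inf")}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y, w in tree[x]:
            if y not in best:
                best[y] = min(best[x], w)
                queue.append(y)
    return best[v]

# Tiny hypothetical example: three classes A-B-C with tree edge weights 2 and 1
tree = {"A": [("B", 2)], "B": [("A", 2), ("C", 1)], "C": [("B", 1)]}
node_of = {1: "A", 2: "A", 3: "B", 4: "C"}
print(query_connectivity(tree, node_of, 1, 4))   # -> 1
print(query_connectivity(tree, node_of, 1, 2))   # -> '> k'
```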
Abstract:
Information forms the basis of modern technology. To meet the ever-increasing demand for information, means have to be devised for a more efficient and better-equipped technology to intelligently process data. Advances in photonics have made their impact on each of the four key applications in information processing, i.e., acquisition, transmission, storage, and processing of information. The inherent advantages of ultrahigh bandwidth, high speed, and low-loss transmission have already established fiber optics as the backbone of communication technology. However, the optics-to-electronics interconversion at the transmitter and receiver ends severely limits both the speed and the bit rate of lightwave communication systems. As the trend towards still faster and higher-capacity systems continues, it has become increasingly necessary to perform more and more signal-processing operations in the optical domain itself, i.e., with all-optical components and devices that possess a high bandwidth and can perform parallel processing functions to eliminate the electronic bottleneck.