948 results for Symbolic Computations
Abstract:
Using a suitable mathematical model, the power-follow current in surge diverters (lightning arresters) has been computed from the known short-circuit capacity of the power-frequency source and the nonlinear resistor characteristics. The effect of the initiation angle is also studied. Typical verifications against the available data have been carried out. The influence of the arc drop in the surge-diverter spark gap is neglected.
Abstract:
In this paper we develop methods to compute maps from differential equations, taking two examples: the harmonic oscillator and Duffing's equation. We first convert these equations to a canonical form, which is slightly nontrivial for Duffing's equation. We then show a method to extend these differential equations; in the second case, symbolic algebra must be used. Once the extensions are accomplished, various maps are generated. Poincaré sections arise as a special case of such generated maps. Other applications are also discussed.
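The stroboscopic construction behind a Poincaré section can be sketched numerically: for a periodically forced system such as Duffing's equation, sampling the flow once per forcing period yields the Poincaré map. The sketch below is an illustration of that idea only — the parameter values and the hand-rolled RK4 integrator are assumptions for demonstration, and it is a numerical rather than symbolic construction.

```python
import numpy as np

# Illustrative parameters for the forced Duffing equation
#   x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)
# (the abstract gives no values; these are assumptions for the sketch).
DELTA, ALPHA, BETA, GAMMA, OMEGA = 0.3, -1.0, 1.0, 0.5, 1.2

def duffing(state, t):
    x, v = state
    return np.array([v, -DELTA*v - ALPHA*x - BETA*x**3 + GAMMA*np.cos(OMEGA*t)])

def rk4_step(f, state, t, dt):
    # One classical fourth-order Runge-Kutta step
    k1 = f(state, t)
    k2 = f(state + 0.5*dt*k1, t + 0.5*dt)
    k3 = f(state + 0.5*dt*k2, t + 0.5*dt)
    k4 = f(state + dt*k3, t + dt)
    return state + dt/6.0*(k1 + 2*k2 + 2*k3 + k4)

def poincare_map(state, steps_per_period=200):
    """Advance the flow by one forcing period T = 2*pi/omega;
    sampling once per period gives the stroboscopic (Poincare) map."""
    T = 2*np.pi/OMEGA
    dt = T/steps_per_period
    t = 0.0
    for _ in range(steps_per_period):
        state = rk4_step(duffing, state, t, dt)
        t += dt
    return state

# Iterating the map generates points of a Poincare section
pts = []
s = np.array([0.1, 0.0])
for _ in range(50):
    s = poincare_map(s)
    pts.append(s.copy())
pts = np.asarray(pts)
```

Each row of `pts` is one return of the flow to the sampling phase; plotting them in the (x, v) plane would give the familiar Poincaré-section picture.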
Abstract:
Instruction reuse is a microarchitectural technique that improves the execution time of a program by removing redundant computations at run-time. Although this is the job of an optimizing compiler, compilers often do not succeed because of their limited knowledge of run-time data. In this paper we examine instruction reuse of integer ALU and load instructions in network processing applications. Specifically, the paper attempts to answer the following questions: (1) How much instruction reuse is inherent in network processing applications? (2) Can reuse be improved by reducing interference in the reuse buffer? (3) What characteristics of network applications can be exploited to improve reuse? (4) What is the effect of reuse on resource contention and memory accesses? We propose an aggregation scheme that combines a high-level concept of network traffic, i.e., "flows", with a low-level microarchitectural feature of programs, i.e., the repetition of instructions and data, along with an architecture that exploits temporal locality in incoming packet data to improve reuse. We find that, for the benchmarks considered, 1% to 50% of instructions are reused, while the speedup achieved varies between 1% and 24%. As a side effect, instruction reuse reduces memory traffic and can therefore also be considered a scheme for low power.
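The core idea of a reuse buffer can be sketched as a memo table keyed by a dynamic instruction's PC and operand values: a hit means the earlier result is reused instead of re-executing the operation. The toy model below is an illustration only — the class name, capacity, and LRU replacement policy are assumptions, not the paper's design.

```python
from collections import OrderedDict

class ReuseBuffer:
    """Toy instruction-reuse buffer: dynamic ALU instances are keyed by
    (PC, operand values); a hit returns the stored result without
    re-executing the computation."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()   # (pc, operands) -> result, in LRU order
        self.hits = 0
        self.lookups = 0

    def execute(self, pc, operands, compute):
        self.lookups += 1
        key = (pc, operands)
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)        # refresh LRU position
            return self.entries[key]
        result = compute(*operands)              # the actual ALU work
        self.entries[key] = result
        if len(self.entries) > self.capacity:    # evict least recently used
            self.entries.popitem(last=False)
        return result

# Repeated header arithmetic (e.g. a TTL decrement) across packets of the
# same flow produces repeated (PC, operand) pairs, hence reuse hits:
buf = ReuseBuffer()
for ttl in [64, 64, 64, 128, 64]:
    buf.execute(0x400, (ttl, 1), lambda a, b: a - b)
# 3 of the 5 dynamic instances hit in the buffer
```

The "aggregation" idea in the abstract amounts to choosing the key so that packets of the same flow map to the same entries, raising the hit rate beyond what per-instruction repetition alone provides.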
Abstract:
We consider a network in which several service providers offer wireless access to their respective subscribed customers through potentially multihop routes. If providers cooperate by jointly deploying and pooling their resources, such as spectrum and infrastructure (e.g., base stations), and agree to serve each other's customers, their aggregate payoffs and individual shares may substantially increase through opportunistic utilization of resources. The potential of such cooperation can, however, be realized only if each provider intelligently determines with whom it would cooperate, when it would cooperate, and how it would deploy and share its resources during such cooperation. Also, developing a rational basis for sharing the aggregate payoffs is imperative for the stability of the coalitions. We model such cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings, i.e., if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff, while offering each a share that removes any incentive to split from the coalition. The optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time by respectively solving the primals and the duals of the above optimizations, using distributed computations and limited exchange of confidential information among the providers. Numerical evaluations reveal that cooperation substantially enhances individual providers' payoffs under the optimal cooperation strategy and several different payoff sharing rules.
Abstract:
A variable-resolution global spectral method is created on the sphere using the High-resolution Tropical Belt Transformation (HTBT). HTBT belongs to a class of maps called reparametrisation maps. The HTBT parametrisation of the sphere generates a clustering of points in the entire tropical belt; the density of the grid-point distribution decreases smoothly outside the tropics. This variable-resolution method thus creates finer resolution in the tropics and coarser resolution at the poles. The use of the FFT procedure and Gaussian quadrature for the spectral computations retains the numerical efficiency of the standard global spectral method. The accuracy of the method for meteorological computations is demonstrated by solving the Helmholtz equation and the non-divergent barotropic vorticity equation on the sphere. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
We associate a sheaf model to a class of Hilbert modules satisfying a natural finiteness condition. It is obtained as the dual to a linear system of Hermitian vector spaces (in the sense of Grothendieck). A refined notion of curvature is derived from this construction leading to a new unitary invariant for the Hilbert module. A division problem with bounds, originating in Douady's privilege, is related to this framework. A series of concrete computations illustrate the abstract concepts of the paper.
Abstract:
A finite-element scheme based on a coupled arbitrary Lagrangian-Eulerian and Lagrangian approach is developed for the computation of interface flows with soluble surfactants. The numerical scheme is designed to solve the time-dependent Navier-Stokes equations and an evolution equation for the surfactant concentration in the bulk phase, and simultaneously, an evolution equation for the surfactant concentration on the interface. Second-order isoparametric finite elements on moving meshes and second-order isoparametric surface finite elements are used to solve these equations. The interface-resolved moving meshes allow the accurate incorporation of surface forces, Marangoni forces and jumps in the material parameters. The lower-dimensional finite-element meshes for solving the surface evolution equation are part of the interface-resolved moving meshes. The numerical scheme is validated for problems with known analytical solutions. A number of computations to study the influence of the surfactants in 3D-axisymmetric rising bubbles have been performed. The proposed scheme shows excellent conservation of fluid mass and of the total mass of the surfactant. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
The preference for Garratt–Braverman (GB) over Myers–Saito (MS) and Schmittel (SCM) cyclizations has recently been demonstrated in sulfones capable of undergoing all three processes. As the GB cyclization is a self-quenching process, there is a need to switch the selectivity to the non-self-quenching MS or SCM pathway so as to enhance the DNA-cleaving efficiency that operates through the radical-mediated process. Herein we report a conformational-constraint-based strategy, developed using computations (M06-2X/6-31+G*), to switch the selectivity from the GB to the MS/SCM pathway, which also results in greater DNA-cleavage activity. The preference for GB can be restored by easing the constraint with the help of spacers.
Abstract:
In this article, an extension of the total variation diminishing finite volume formulation of the lattice Boltzmann equation method to unstructured meshes is presented. A quadratic least-squares procedure is used to estimate the first-order and second-order spatial gradients of the particle distribution functions, and the distribution functions are extrapolated quadratically to the virtual upwind node. Time integration is performed using the fourth-order Runge–Kutta procedure. A grid convergence study is performed to demonstrate the order of accuracy of the present scheme. The formulation is validated for the benchmark two-dimensional, laminar, unsteady flow past a single circular cylinder, and the computations are then examined for low Mach number simulations. Further validation is performed for flow past two circular cylinders arranged in tandem and side by side. Results of these simulations are compared extensively with previous numerical data. Copyright (C) 2011 John Wiley & Sons, Ltd.
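The quadratic least-squares gradient estimation mentioned in the abstract can be sketched in generic form: fit f(x0 + d) ≈ f0 + g·d + ½ dᵀH d to scattered neighbour values and read off the first-order gradient g and second-order gradient (Hessian) H. This is a general 2D sketch of the idea, not the paper's exact formulation for distribution functions on unstructured meshes.

```python
import numpy as np

def quadratic_ls_gradients(x0, xs, fs, f0):
    """Estimate first- and second-order gradients of f at x0 (2D) from
    scattered neighbour values via a quadratic least-squares fit:
        f(x0 + d) ~ f0 + g.d + 0.5 * d^T H d
    Unknowns: gx, gy, Hxx, Hxy, Hyy (needs >= 5 well-spread neighbours)."""
    d = xs - x0                                   # neighbour offsets, shape (n, 2)
    A = np.column_stack([d[:, 0], d[:, 1],
                         0.5*d[:, 0]**2, d[:, 0]*d[:, 1], 0.5*d[:, 1]**2])
    b = fs - f0
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    g = coef[:2]                                  # first-order gradient
    H = np.array([[coef[2], coef[3]],             # symmetric Hessian
                  [coef[3], coef[4]]])
    return g, H

# Verify on f(x, y) = x^2 + 3xy: grad = (2x + 3y, 3x), Hessian = [[2, 3], [3, 0]]
rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0])
xs = x0 + 0.1*rng.standard_normal((12, 2))        # scattered neighbours
f = lambda p: p[0]**2 + 3*p[0]*p[1]
g, H = quadratic_ls_gradients(x0, xs, np.array([f(p) for p in xs]), f(x0))
```

Because the test function is exactly quadratic, the fit recovers the gradient (8, 3) and the Hessian exactly; for the distribution functions in the paper the fit is an approximation whose quality depends on the neighbour stencil.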
Abstract:
The unending quest for performance improvement, coupled with advancements in integrated-circuit technology, has led to the development of new architectural paradigms. The speculative multithreaded architecture (SpMT) philosophy relies on aggressive speculative execution for improved performance. However, aggressive speculative execution is a mixed blessing: it improves performance when successful, but adversely affects energy consumption (and performance) through useless computation in the event of mis-speculation. Dynamic instruction-criticality information can be usefully applied to control and guide such aggressive speculative execution. In this paper, we present a model of micro-execution for the SpMT architecture that we have developed to determine dynamic instruction criticality. We have also developed two novel techniques utilizing the criticality information, namely delaying non-critical loads and criticality-based thread prediction, for reducing useless computations and energy consumption. Experimental results showing the break-up of critical instructions and the effectiveness of the proposed techniques in reducing energy consumption are presented in the context of a multiscalar processor that implements the SpMT architecture. Our experiments show 17.7% and 11.6% reductions in dynamic energy for criticality-based thread prediction and the criticality-based delayed-load scheme, respectively, while the improvements in the dynamic energy-delay product are 13.9% and 5.5%, respectively. (c) 2012 Published by Elsevier B.V.
Abstract:
Artificial viscosity in SPH-based computations of impact dynamics is a numerical artifice that helps stabilize spurious oscillations near shock fronts and requires certain user-defined parameters. An improper choice of these parameters may lead to spurious entropy generation within the discretized system and make it over-dissipative. This is of particular concern in impact mechanics problems, wherein the transient structural response may depend sensitively on the transfer of momentum and kinetic energy due to impact. In order to address this difficulty, an acceleration correction algorithm was proposed in Shaw and Reid ("Heuristic acceleration correction algorithm for use in SPH computations in impact mechanics", Comput. Methods Appl. Mech. Engrg., 198, 3962-3974) and further rationalized in Shaw et al. ("An Optimally Corrected Form of Acceleration Correction Algorithm within SPH-based Simulations of Solid Mechanics", submitted to Comput. Methods Appl. Mech. Engrg.). It was shown that the acceleration correction algorithm removes spurious high-frequency oscillations in the computed response while retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. In this paper, we aim to gather further insights into the acceleration correction algorithm by exploring its application to problems in impact dynamics. The numerical evidence in this work thus establishes that, together with the acceleration correction algorithm, SPH can be used as an accurate and efficient tool in dynamic, inelastic structural mechanics. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The stability of a long unsupported circular tunnel (opening) in a cohesive-frictional soil has been assessed with the inclusion of pseudo-static horizontal earthquake body forces. The analysis has been performed under plane-strain conditions using upper-bound finite-element limit analysis in combination with a linear optimization procedure. The results are presented in the form of a non-dimensional stability number (gamma_max H/c), where H is the tunnel cover, c is the soil cohesion, and gamma_max is the maximum unit weight of the soil mass that the tunnel can support without collapse. Results have been obtained for various values of H/D (D = diameter of the tunnel), the internal friction angle (phi) of the soil, and the horizontal earthquake acceleration coefficient (alpha_h). The computations reveal that the stability numbers (i) decrease quite significantly with an increase in alpha_h, and (ii) become continuously higher for greater values of H/D and phi. As expected, the failure zones around the periphery of the tunnel always become asymmetrical with the inclusion of horizontal seismic body forces. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Various logical formalisms with the freeze quantifier have recently been considered to model computer systems, even though the freeze quantifier is a powerful mechanism that often leads to undecidability. In this article, we study a linear-time temporal logic with past-time operators in which the freeze operator is used only to express that some value from an infinite set is repeated in the future or in the past. This restriction was inspired by recent work on spatio-temporal logics that suggests such a restricted use of the freeze operator. We show decidability of finitary and infinitary satisfiability by reduction to the verification of temporal properties in Petri nets, proposing a symbolic representation of models. This is quite a surprising result in view of the expressive power of the logic, since the logic is closed under negation, contains future-time and past-time temporal operators, and can express the nonce property and its negation. These ingredients are known to lead to undecidability under a more liberal use of the freeze quantifier. The article also contains developments on the relationships between temporal logics with the freeze operator and counter automata, as well as reductions into first-order logics over data words.
Abstract:
This article is concerned with the evolution of haploid organisms that reproduce asexually. In a seminal piece of work, Eigen and coauthors proposed the quasispecies model in an attempt to understand such an evolutionary process. Their work has impacted antiviral treatment and vaccine design strategies. Yet the predictions of the quasispecies model are at best viewed as a guideline, primarily because it assumes an infinite population size, whereas realistic population sizes can be quite small. In this paper we consider a population-genetics-based model aimed at understanding the evolution of such organisms with finite population sizes, and we present a rigorous study of the convergence and computational issues that arise therein. Our first result is structural and shows that, at any time during the evolution, as the population size tends to infinity, the distribution of genomes predicted by our model converges to that predicted by the quasispecies model. This justifies the continued use of the quasispecies model to derive guidelines for intervention. While the stationary state in the quasispecies model is readily obtained, exact computations in our model are prohibitive due to the explosion of the state space. Our second set of results is computational in nature and addresses this issue. We derive conditions on the parameters of evolution under which our stochastic model mixes rapidly. Further, for a class of widely used fitness landscapes we give a fast deterministic algorithm that computes the stationary distribution of our model. These computational tools are expected to serve as a framework for the modeling of strategies for the deployment of mutagenic drugs.
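The infinite-population quasispecies model that serves as the limiting case here is straightforward to iterate: with a mutation matrix Q and fitness landscape f, the genome distribution evolves as x ← Q f x normalized by the mean fitness, and its fixed point is the quasispecies distribution. The sketch below is illustrative only — the genome length, mutation rate, and single-peak landscape are assumptions, not parameters from the paper.

```python
import numpy as np

L = 4                      # genome length (binary genomes), illustrative
mu = 0.05                  # per-site mutation probability, illustrative
n = 2**L                   # number of genotypes

def hamming(a, b):
    return bin(a ^ b).count("1")

# Mutation matrix: Q[i, j] = P(genome j mutates into genome i) under
# independent per-site mutation; each column sums to 1.
Q = np.array([[mu**hamming(i, j) * (1 - mu)**(L - hamming(i, j))
               for j in range(n)] for i in range(n)])

# Single-peak fitness landscape: the all-zero "master" genome is fittest
f = np.ones(n)
f[0] = 10.0

x = np.full(n, 1.0/n)      # start from the uniform distribution
for _ in range(500):       # iterate x <- Q f x / (mean fitness)
    y = Q @ (f*x)
    x = y/y.sum()
```

The iteration is a power method on the matrix Q diag(f), so it converges to the principal eigenvector: a cloud of mutants concentrated around the master genome, which is exactly the stationary object whose finite-population counterpart the paper studies.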
Abstract:
A lightning strike in the neighborhood can induce significant currents in tall down conductors. Though the magnitude of the induced current in this case is much smaller than that encountered during a direct strike, the probability of occurrence and the frequency content are higher. In view of this, appropriate knowledge of the characteristics of such induced currents is relevant for the scrutiny of recorded currents and for the evaluation of interference to electrical and electronic systems in the vicinity. Previously, a study was carried out on the characteristics of induced currents assuming ideal conditions, i.e., that there were no influencing objects in the vicinity of the down conductor and the channel. However, some influencing conducting bodies will always be present, such as trees, electricity and communication towers, buildings, and other elevated objects, and these can affect the induced currents in a down conductor. The present work is carried out to understand the influence of nearby conducting objects on the characteristics of currents induced by a strike to ground in the vicinity of a tall down conductor. For the study, an electromagnetic model is employed to represent the down conductor, the channel, and the neighboring conducting objects, and the Numerical Electromagnetic Code-2 is used for the numerical field computations. Neighboring objects of different heights, different shapes, and at different locations are considered. It is found that neighboring objects have a significant influence on the magnitude and nature of the induced currents in a down conductor when the height of the nearby conducting object is comparable to that of the down conductor.