29 results for Hardware Acceleration


Relevance: 20.00%

Abstract:

A new technique, named model predictive spread acceleration guidance (MPSAG), is proposed in this paper. It combines the philosophies of nonlinear model predictive control and spread acceleration guidance. The technique is used to design a nonlinear suboptimal guidance law for a constant-speed missile against a stationary target with an impact angle constraint. MPSAG can be applied to a class of nonlinear problems and leads to a closed-form solution of the lateral acceleration (latax) history update. The guidance command is the latax, applied normal to the velocity vector. The new guidance law is validated on the nonlinear kinematics with both a lag-free and a first-order autopilot delay. The simulation results show that the proposed technique yields a nonlinear guidance law that achieves both a very small miss distance and the desired impact angle.
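A minimal sketch of the 2D constant-speed, lag-free engagement kinematics such a guidance law operates on; plain proportional navigation stands in for the latax command, since the MPSAG closed-form latax history update itself is not reproduced here. All values are assumptions:

```python
import numpy as np

# 2D constant-speed engagement against a stationary target, lag-free
# autopilot. The latax command acts normal to the velocity vector; here
# plain PN is a placeholder for the MPSAG-computed latax history.
V, N, dt = 250.0, 4.0, 0.01            # speed (m/s), nav. gain, step (s)
x, y, gamma = 0.0, 2000.0, -0.2        # missile position, flight-path angle
xt, yt = 8000.0, 0.0                   # stationary target

for _ in range(10000):                 # safety-bounded integration loop
    vx, vy = V * np.cos(gamma), V * np.sin(gamma)
    rx, ry = xt - x, yt - y
    R = np.hypot(rx, ry)
    if R <= 5.0:
        break
    los_rate = (ry * vx - rx * vy) / R**2   # LOS rate (target stationary)
    Vc = (rx * vx + ry * vy) / R            # closing velocity
    a_lat = N * Vc * los_rate               # latax, normal to velocity
    x, y = x + vx * dt, y + vy * dt
    gamma += (a_lat / V) * dt               # latax turns the velocity vector

print(f"miss ~ {R:.1f} m, impact angle ~ {np.degrees(gamma):.1f} deg")
```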

Relevance: 20.00%

Abstract:

The 4×4 discrete cosine transform is one of the most important building blocks of the emerging video coding standard H.264. The conventional implementation approximates the transform matrix elements to facilitate integer arithmetic, for which the hardware is suitably prepared. Although the transform coding then involves no multiplications, the quantization process requires sixteen 16-bit multiplications. The algorithm used here eliminates both the approximation in transform coding and the multiplications in the quantization process by using algebraic integer coding. We propose an area-efficient implementation of the transform and quantization blocks based on algebraic integer coding. The designs were synthesized in 90 nm TSMC CMOS technology and also implemented on a Xilinx FPGA, achieving a gate count of 7000 and a throughput of 125 Msamples/s.
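For context, the conventional H.264 4×4 forward integer transform needs only additions and shifts, while the standard quantization step is where the sixteen multiplications live; the paper's algebraic integer coding (not reproduced here) is what removes them. A single multiplication factor is used below for brevity, whereas H.264 uses position- and QP-dependent factors:

```python
import numpy as np

# Conventional H.264 4x4 forward integer transform Y = Cf @ X @ Cf.T,
# realizable with adds and shifts only (no multiplications).
Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

X = np.arange(16).reshape(4, 4)        # dummy 4x4 residual block
Y = Cf @ X @ Cf.T                      # transform coefficients

# Conventional quantization: one 16-bit multiplication per coefficient.
# MF/qbits/f below are example values (roughly QP = 0 scaling), simplified
# to a single factor for all positions.
MF, qbits, f = 13107, 15, 1 << 14
Z = np.sign(Y) * ((np.abs(Y) * MF + f) >> qbits)
print(Z)
```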

Relevance: 20.00%

Abstract:

In this work, an attempt has been made to evaluate the spatial variation of peak horizontal acceleration (PHA) and spectral acceleration (SA) values at rock level for south India based on probabilistic seismic hazard analysis (PSHA). These values were estimated by considering the uncertainties involved in magnitude, hypocentral distance and attenuation of seismic waves. Different models were used for the hazard evaluation and were combined using a logic tree approach. For evaluating the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1°, and the hazard parameters were calculated at the centre of each cell by considering all seismic sources within a radius of 300 km. Rock level PHA values and SA at 1 s corresponding to a 10% probability of exceedance in 50 years were evaluated for all grid points. Maps showing the spatial variation of rock level PHA values and SA at 1 s for the whole of south India are presented in this paper. To compare the seismic hazard for some of the important cities, seismic hazard curves and the uniform hazard response spectrum (UHRS) at rock level with 10% probability of exceedance in 50 years are also presented.
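As a worked illustration of the "10% probability of exceedance in 50 years" criterion: under a Poisson assumption it corresponds to an annual exceedance rate of roughly 1/475 per year, at which the design motion is read off the hazard curve. The hazard curve below is a toy stand-in, not the paper's:

```python
import numpy as np

# Convert "P% in T years" to an annual exceedance rate (Poisson model),
# then read the design PHA off a hazard curve lambda(PGA > a).
P, T = 0.10, 50.0
rate = -np.log(1.0 - P) / T                  # ~0.0021/yr, return period ~475 yr

pga = np.linspace(0.01, 1.0, 200)            # candidate PHA values, in g
annual_rate = 0.02 * np.exp(-6.0 * pga)      # toy hazard curve, illustrative

# interp needs increasing x, and annual_rate decreases with pga: reverse both
design_pha = np.interp(rate, annual_rate[::-1], pga[::-1])
print(f"rate = {rate:.5f}/yr, design PHA ~ {design_pha:.2f} g")
```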

Relevance: 20.00%

Abstract:

In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilising characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the original acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic version and issues related to its numerical implementation are discussed in detail. (C) 2011 Elsevier Ltd. All rights reserved.
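Schematically, the support-size selection described above can be written as the following minimisation; this is a reconstruction from the abstract's wording, not the paper's exact notation:

```latex
% a(h): acceleration field computed with kernel support h
% \bar{a}(h): its local mean over the neighbourhood \mathcal{N}(i) of particle i
% TV_{\mathcal{N}(i)}: total variation restricted to that neighbourhood
h_i^{*} \;=\; \arg\min_{h}\; \mathrm{TV}_{\mathcal{N}(i)}\!\bigl( a(h) - \bar{a}(h) \bigr)
```

so that, particle by particle, the kernel support used for the correction term is the one that leaves the least high-frequency oscillation about the local mean.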

Relevance: 20.00%

Abstract:

Artificial viscosity in SPH-based computations of impact dynamics is a numerical artifice that helps stabilize spurious oscillations near shock fronts and requires certain user-defined parameters. An improper choice of these parameters may lead to spurious entropy generation within the discretized system and make it over-dissipative. This is of particular concern in impact mechanics problems, wherein the transient structural response may depend sensitively on the transfer of momentum and kinetic energy due to impact. To address this difficulty, an acceleration correction algorithm was proposed in Shaw and Reid ("Heuristic acceleration correction algorithm for use in SPH computations in impact mechanics", Comput. Methods Appl. Mech. Engrg., 198, 3962-3974) and further rationalized in Shaw et al. ("An optimally corrected form of the acceleration correction algorithm within SPH-based simulations of solid mechanics", submitted to Comput. Methods Appl. Mech. Engrg.). It was shown that the acceleration correction algorithm removes spurious high-frequency oscillations in the computed response whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. In this paper, we aim to gather further insight into the acceleration correction algorithm by exploring its application to problems in impact dynamics. The numerical evidence in this work thus establishes that, together with the acceleration correction algorithm, SPH can be used as an accurate and efficient tool in dynamic, inelastic structural mechanics. (C) 2011 Elsevier Ltd. All rights reserved.
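For reference, the standard Monaghan form of the artificial viscosity, with its user-defined parameters alpha and beta, is the term whose side effects the acceleration correction targets; a minimal sketch, with assumed parameter values:

```python
import numpy as np

# Standard Monaghan artificial viscosity between SPH particles a and b.
# alpha, beta are the user-defined parameters the abstract refers to;
# the values below are typical choices, not the paper's.
def monaghan_pi(v_ab, r_ab, h, c_bar, rho_bar, alpha=1.0, beta=2.0, eps=0.01):
    vr = np.dot(v_ab, r_ab)
    if vr >= 0.0:                      # particles receding: no viscosity
        return 0.0
    mu = h * vr / (np.dot(r_ab, r_ab) + eps * h**2)
    return (-alpha * c_bar * mu + beta * mu**2) / rho_bar

# Example: two approaching particles (relative velocity, separation)
print(monaghan_pi(np.array([-1.0, 0.0]), np.array([0.1, 0.0]),
                  h=0.05, c_bar=10.0, rho_bar=1000.0))
```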

Relevance: 20.00%

Abstract:

Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a big threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. The implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating and timing window derating, and/or by making use of application redundancy, e.g., redundancy in firmware/software executing on the robust hardware so designed. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to a 23% reduction in hardware overhead when considering individual and combined derating factors.
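As a minimal sketch, derating factors compose multiplicatively on the raw soft error rate (SER), which is what leaves room to reduce hardware protection; the factor values below are illustrative assumptions, not the paper's measurements:

```python
# Derating factors compose multiplicatively on the raw SER: only strikes
# that propagate logically, survive electrical attenuation, AND are
# latched in the capture window become observable errors.
raw_ser_fit = 1000.0            # raw SER in FIT (failures per 1e9 hours)
logical_derate    = 0.40        # fraction that propagates logically
electrical_derate = 0.70        # fraction surviving electrical attenuation
timing_derate     = 0.50        # fraction latched in the timing window

effective_ser = raw_ser_fit * logical_derate * electrical_derate * timing_derate
print(f"effective SER = {effective_ser:.0f} FIT "
      f"({effective_ser / raw_ser_fit:.0%} of raw)")
```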

Relevance: 20.00%

Abstract:

In this work, the collapse of a spherically symmetric star made of a dust cloud in a dark energy background is studied for two different gravity theories, DGP brane gravity and loop quantum gravity. Two types of dark energy fluid, the modified Chaplygin gas and the generalised cosmic Chaplygin gas, are considered for each model. Graphs are drawn to characterise the nature and probable outcome of the gravitational collapse, and a comparative study is made of the collapsing process in the two gravity theories. It is found that in the case of dark matter there is a strong possibility of collapse and consequent formation of a black hole. In the case of dark energy the possibility of collapse is far smaller, owing to the large negative pressure of the dark energy component. The mass of the cloud increases during dark matter collapse due to matter accumulation, whereas it decreases considerably in the dark energy case due to dark energy accretion onto the cloud. For collapse with a combination of dark energy and dark matter, it is found that, in the absence of interaction, black hole formation is far more likely in the DGP brane model than in the loop quantum cosmology model.
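For reference, the equations of state of the two fluids, as they are commonly written in the literature (the paper's own parameter conventions may differ):

```latex
% Modified Chaplygin gas (MCG), with constants A, B and 0 <= \alpha <= 1:
p = A\rho - \frac{B}{\rho^{\alpha}}
% Generalised cosmic Chaplygin gas (GCCG), with constants C and \omega:
p = -\rho^{-\alpha}\left[ C + \left( \rho^{1+\alpha} - C \right)^{-\omega} \right]
```

The -B/rho^alpha term (and its GCCG analogue) is what makes the pressure negative, which is what resists collapse in the dark energy cases.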

Relevance: 20.00%

Abstract:

In large, flexible software systems, bloat occurs in many forms, causing excess resource utilization and resource bottlenecks. This results in lost throughput and wasted joules. Mitigating bloat is not easy, however; effort is best applied where the savings would be substantial. To aid this, we develop an analytical model establishing the relation between resource bottlenecks, bloat, performance and power. Analyses with the model place into perspective results from the first experimental study of the power-performance implications of bloat. In the experiments we find that while bloat reduction can provide as much as 40% energy savings, the degree of impact depends on hardware and software characteristics. We confirm predictions from our model with selected results from our experimental study. Our findings show that a software-only view is inadequate when assessing the effects of bloat: the impact of bloat on physical resource usage and power must be understood, from a full-systems perspective, to properly deploy bloat reduction solutions and reap their power-performance benefits.
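A toy version of the kind of analytical model described here: throughput is set by the bottleneck resource, power has fixed and utilization-driven parts, and bloat inflates per-transaction resource demand. All parameters are invented for illustration:

```python
# Bottleneck model: throughput = min over resources of capacity/demand;
# bloat multiplies per-transaction demand, so joules per transaction rise
# even though system power barely changes.
def energy_per_tx(bloat_factor):
    demand = {"cpu": 2e-4 * bloat_factor,      # core-seconds per transaction
              "mem_bw": 5e6 * bloat_factor}    # bytes per transaction
    capacity = {"cpu": 16.0, "mem_bw": 5e10}   # cores, bytes/s
    throughput = min(capacity[r] / demand[r] for r in demand)      # tx/s
    util = max(demand[r] * throughput / capacity[r] for r in demand)
    power = 100.0 + 150.0 * util                                   # watts
    return power / throughput                                      # joules/tx

for b in (1.0, 1.5, 2.0):
    print(f"bloat x{b}: {energy_per_tx(b) * 1e3:.1f} mJ per transaction")
```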

Relevance: 20.00%

Abstract:

Video decoders used in emerging applications need to be flexible, to handle a large variety of video formats, and to deliver scalable performance under wide variations in workload. In this paper we propose a unified software and hardware architecture for video decoding that achieves scalable performance with flexibility. The lightweight processor tiles and reconfigurable hardware tiles in our architecture allow software and hardware implementations to co-exist, while a programmable interconnect enables dynamic interconnection of the tiles. Our process-network-oriented compilation flow achieves realization-agnostic application partitioning and enables seamless migration across uniprocessor, multiprocessor, semi-hardware and full-hardware implementations of a video decoder. An application quality-of-service-aware scheduler monitors and controls the operation of the entire system. We prove the concept with a prototype of the architecture on an off-the-shelf FPGA. The prototype shows performance scaling from QCIF to 1080p resolution in four discrete steps. We also demonstrate that the reconfiguration time is short enough to allow migration from one configuration to another without any frame loss.
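A minimal sketch of the process-network view underlying such a compilation flow: stages communicate only through FIFO channels, so a stage can be realized on a processor tile or a hardware tile without changing the network. Stage names and bodies below are placeholders, not the paper's partitioning:

```python
import queue, threading

# Each decoder stage is a process reading one FIFO and writing another;
# a None token marks end-of-stream and is forwarded downstream.
def stage(fn, q_in, q_out):
    while (item := q_in.get()) is not None:
        q_out.put(fn(item))
    q_out.put(None)

entropy_q, idct_q, out_q = queue.Queue(4), queue.Queue(4), queue.Queue(4)
threads = [
    threading.Thread(target=stage, args=(lambda b: ("parsed", b), entropy_q, idct_q)),
    threading.Thread(target=stage, args=(lambda b: ("decoded", b[1]), idct_q, out_q)),
]
for t in threads:
    t.start()
for blk in range(8):                   # feed 8 dummy blocks into the network
    entropy_q.put(blk)
entropy_q.put(None)
while (b := out_q.get()) is not None:
    print(b)
for t in threads:
    t.join()
```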

Relevance: 20.00%

Abstract:

Hit-to-kill interception of a high-velocity spiraling target requires accurate estimation of the relative kinematic parameters describing the spiraling motion. In this paper, the spiraling target motion is captured by representing the target acceleration through a sinusoidal function in the inertial frame. A nine-state unscented Kalman filter (UKF) formulation is presented, with three relative positions, three relative velocities, the spiraling frequency of the target, the inverse of the ballistic coefficient and the maneuvering coefficient as states. A key advantage of the target model presented here is its generic nature: it can capture spiraling as well as purely ballistic motion without any change of tuning parameters. Extensive six-DOF simulation experiments, which include a modified PN guidance and a dynamic-inversion-based autopilot, show that near hit-to-kill performance can be obtained with noisy RF seeker measurements of gimbal angles, gimbal angle rates, range and range rate.
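A sketch of the discrete-time process model such a nine-state UKF would propagate; the axis assignment of the sinusoidal acceleration and all numbers are assumptions for illustration:

```python
import numpy as np

# State: x = [relative position (3), relative velocity (3),
#             spiraling frequency w, inverse ballistic coeff., maneuver coeff.]
# The target's lateral acceleration is modeled as sinusoidal in the
# inertial frame; the exact axes/coupling here are assumed.
def f(x, t, dt, a_missile):
    r, v = x[0:3], x[3:6]
    w, inv_beta, c_m = x[6], x[7], x[8]
    a_target = c_m * np.array([np.sin(w * t), np.cos(w * t), 0.0])
    a_rel = a_target - a_missile
    return np.concatenate([r + v * dt,            # relative position update
                           v + a_rel * dt,        # relative velocity update
                           [w, inv_beta, c_m]])   # parameters: random walk

x0 = np.array([5000., 2000., 8000., -300., 0., -250., 6.0, 1e-4, 30.0])
print(f(x0, t=0.0, dt=0.01, a_missile=np.zeros(3)))
```

Because the same model covers c_m = 0 (pure ballistic) and c_m > 0 (spiraling), the filter needs no retuning between the two regimes, which is the generic-model property the abstract highlights.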

Relevance: 20.00%

Abstract:

The Mw 8.6 and 8.2 strike-slip earthquakes that struck the northeast Indian Ocean on 11 April 2012 resulted in coseismic deformation at both near and distant sites. The slip distribution, deduced from seismic-wave analysis for the orthogonal faults that ruptured during these earthquakes, is sufficient to predict the coseismic displacements at Global Positioning System (GPS) sites such as NTUS, PALK, and CUSV, but falls short at four continuous sites in the Andaman Islands region. Slip modeling for times prior to the events suggests that the lower portion of the thrust fault beneath the Andaman Islands has been slipping at a rate of at least 40 cm/yr, in response to the 2004 Sumatra-Andaman coseismic stress change. Modeling of the GPS displacements suggests that the en echelon and orthogonal fault ruptures of the 2012 intraplate oceanic earthquakes could have accelerated the ongoing slow slip along the lower portion of the thrust fault beneath the islands, with a month-long slip of 4-10 cm. The misfit to the coseismic GPS displacements along the Andaman Islands could be improved with a better source model, assuming that no local process contributed to this anomaly.

Relevance: 20.00%

Abstract:

A Field Programmable Gate Array (FPGA) based hardware accelerator for multi-conductor parasitic capacitance extraction using the Method of Moments (MoM) is presented in this paper. Owing to the prohibitive cost of solving the dense algebraic system formed by MoM, linear-complexity fast solver algorithms have been developed in the past to expedite the matrix-vector product computation in a Krylov-subspace-based iterative solver framework. However, as the number of conductors in a system increases, with a corresponding increase in the number of right-hand-side (RHS) vectors, the computational cost of the multiple matrix-vector products presents a time bottleneck, especially for ill-conditioned system matrices. In this work, an FPGA-based hardware implementation is proposed to parallelize the iterative matrix solution for multiple RHS vectors in a low-rank-compression-based fast solver scheme. The method is applied to accelerate electrostatic parasitic capacitance extraction of multiple conductors in a Ball Grid Array (BGA) package. Speed-ups of up to 13x for dense matrix-vector products and 12x for QR-compressed matrix-vector products over an equivalent software implementation on an Intel Core i5 processor are achieved using a Virtex-6 XC6VLX240T FPGA on Xilinx's ML605 board.
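A minimal sketch of why low-rank (QR-type) compression pays off when there are many RHS vectors; the sizes, rank and random data below are illustrative, not from the paper:

```python
import numpy as np

# A far-field block admitting A ~ U @ V (rank k << m, n) turns an O(m*n)
# product per RHS into O((m + n) * k), and the saving multiplies across
# the n_rhs columns (one RHS per conductor).
m, n, k, n_rhs = 2000, 2000, 40, 64
rng = np.random.default_rng(0)
U = rng.standard_normal((m, k))
V = rng.standard_normal((k, n))
A = U @ V                               # dense view of the same block
X = rng.standard_normal((n, n_rhs))     # one column per conductor/RHS

Y_dense = A @ X                         # O(m * n * n_rhs) work
Y_lowrank = U @ (V @ X)                 # O((m + n) * k * n_rhs) work
print(np.allclose(Y_dense, Y_lowrank))  # same result, far fewer operations
```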

Relevance: 20.00%

Abstract:

In this article, a Field Programmable Gate Array (FPGA)-based hardware accelerator for 3D electromagnetic extraction using the Method of Moments (MoM) is presented. As the number of nets or ports in a system increases, with a corresponding increase in the number of right-hand-side (RHS) vectors, the computational cost of the multiple matrix-vector products presents a time bottleneck in a linear-complexity fast solver framework. In this work, an FPGA-based hardware implementation is proposed around a two-level parallelization scheme: (i) matrix-level parallelization for a single RHS and (ii) pipelining for multiple RHS vectors. The method is applied to accelerate electrostatic parasitic capacitance extraction of multiple nets in a Ball Grid Array (BGA) package. The acceleration is shown to scale linearly with FPGA resources, and speed-ups of over 10x against an equivalent software implementation on a 2.4 GHz Intel Core i5 processor are achieved using a Virtex-6 XC6VLX240T FPGA on Xilinx's ML605 board, with the implemented design operating at a 200 MHz clock frequency. (c) 2016 Wiley Periodicals, Inc. Microwave Opt Technol Lett 58:776-783, 2016
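In software terms, the two-level scheme can be sketched as follows: row-block parallelism within one matrix-vector product (level 1), and streaming of successive RHS vectors so the blocks stay busy (level 2). On the FPGA these levels map onto parallel multiply-accumulate units and a pipelined datapath; the Python threads below are only a stand-in:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Level 1: one matvec split across row blocks computed in parallel.
# Level 2: successive RHS vectors streamed through the same workers.
m, n, n_rhs, n_blocks = 1024, 1024, 16, 4
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n))
rows = np.array_split(np.arange(m), n_blocks)

def block_matvec(block, x):
    return A[block] @ x                        # level 1: row-block parallel

y = np.empty((m, n_rhs))
with ThreadPoolExecutor(n_blocks) as pool:
    for j in range(n_rhs):                     # level 2: stream RHS vectors
        x = rng.standard_normal(n)
        parts = pool.map(block_matvec, rows, [x] * n_blocks)
        y[:, j] = np.concatenate(list(parts))
print(y.shape)
```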