69 results for non separable data


Relevance: 30.00%

Abstract:

This paper describes techniques to estimate the worst-case execution time of executable code on architectures with data caches. The underlying mechanism is Abstract Interpretation, which is used for the dual purposes of tracking address computations and cache behavior. A simultaneous numeric and pointer analysis using an abstraction for discrete sets of values computes safe approximations of access addresses, which are then used to predict cache behavior using Must Analysis. A heuristic is also proposed which generates likely worst-case estimates; it can be used in soft real-time systems and also for reasoning about the tightness of the safe estimate. The analysis methods can handle programs with non-affine access patterns, for which conventional Presburger Arithmetic formulations or Cache Miss Equations do not apply. The precision of the estimates is user-controlled and can be traded off against analysis time. Executables are analyzed directly, which, apart from enhancing precision, renders the method language-independent.
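
The Must Analysis used for cache prediction can be illustrated with a toy model. The sketch below is a simplification assuming a hypothetical 4-way LRU cache set (the paper's abstract domain is richer): it tracks an upper bound on each block's LRU age, and a block whose bound stays below the associativity is a guaranteed hit.

```python
ASSOC = 4  # hypothetical associativity of one cache set

def access(state, block):
    """Transfer function: update upper bounds on LRU ages after an access."""
    old_age = state.get(block, ASSOC)  # ASSOC encodes "possibly not cached"
    new_state = {}
    for b, age in state.items():
        if b == block:
            continue
        # only blocks younger than the accessed one age by a step
        new_state[b] = age + 1 if age < old_age else age
    new_state[block] = 0  # accessed block becomes most recently used
    return {b: a for b, a in new_state.items() if a < ASSOC}

def join(s1, s2):
    """Merge point: keep blocks guaranteed cached on BOTH paths, worst age."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

def guaranteed_hit(state, block):
    return block in state

state = {}
for b in ["A", "B", "A"]:
    state = access(state, b)  # the final access to "A" is a guaranteed hit
```

The join is the key to safety: at a control-flow merge only blocks cached on both incoming paths survive, with the worse age bound.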

Relevance: 30.00%

Abstract:

The memory subsystem is a major contributor to the performance, power, and area of complex SoCs used in feature-rich multimedia products. Hence, the memory architecture of an embedded DSP is complex and usually custom designed, with multiple banks of single-ported or dual-ported on-chip scratch-pad memory and multiple banks of off-chip memory. Building software for such large, complex memories, with many of the software components delivered as individually optimized software IPs, is a big challenge. To obtain good performance and a reduction in memory stalls, the data buffers of the application need to be placed carefully in the different types of memory. In this paper we present a unified framework (MODLEX) that combines different data layout optimizations to address complex DSP memory architectures. Our method models the data layout problem as a multi-objective genetic algorithm (GA), with performance and power as the objectives, and presents a set of solution points that is attractive from a platform design viewpoint. While most of the work in the literature assumes that performance and power are non-conflicting objectives, our work demonstrates that a significant trade-off (up to 70%) is possible between power and performance.
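
The multi-objective selection step can be illustrated in isolation. The sketch below (not MODLEX itself) filters hypothetical candidate layouts, scored as (cycles, milliwatts) pairs, down to the non-dominated Pareto set that a multi-objective GA would present to the platform designer.

```python
def dominates(a, b):
    """Layout a dominates b if it is no worse in both objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (performance, power) points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical layout scores: (execution cycles, power in mW), lower is better.
layouts = [(100, 30), (120, 20), (110, 25), (130, 35)]
front = pareto_front(layouts)  # (130, 35) is dominated by (100, 30) and dropped
```

The surviving points embody exactly the performance/power trade-off the paper quantifies: no remaining layout is better than another in both objectives at once.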

Relevance: 30.00%

Abstract:

With the introduction of 2D flat-panel X-ray detectors, 3D image reconstruction using helical cone-beam tomography is fast replacing conventional 2D reconstruction techniques. In 3D image reconstruction, the source orbit or scanning geometry should satisfy the data sufficiency or completeness condition for exact reconstruction. The helical scan geometry satisfies this condition and hence can give exact reconstruction. The theoretically exact helical cone-beam reconstruction algorithm proposed by Katsevich is a breakthrough and has attracted interest in 3D reconstruction using helical cone-beam computed tomography. In many practical situations, the available projection data are incomplete. One such case is where the detector plane does not completely cover the full lateral extent of the object being imaged, resulting in truncated projections. This results in artifacts that mask small features near the periphery of the ROI when the data are reconstructed using the convolution back-projection (CBP) method under the assumption that the projection data are complete. A number of techniques exist that complete the missing data before CBP reconstruction. In 2D, linear prediction (LP) extrapolation has been shown to be efficient for data completion, involving minimal assumptions on the nature of the data and producing smooth extensions of the missing projection data. In this paper, we propose to extend the LP approach to extrapolating truncated helical cone-beam data. In the truncated-data situation, the projection on the multi-row flat-panel detector has missing columns toward either end in the lateral direction. The available data from each detector row are modeled using a linear predictor and extrapolated, and the completed projection data are then backprojected using the Katsevich algorithm. Simulation results show the efficacy of the proposed method.
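
As a toy illustration of the LP completion step (a sketch only; the paper's predictor order and fitting details are not specified here), the code below fits a second-order linear predictor to the available samples of one detector row by least squares and extends the row past the truncation edge:

```python
def fit_ar2(x):
    """Fit x[n] ~ a1*x[n-1] + a2*x[n-2] by solving the 2x2 normal equations."""
    r = [[0.0, 0.0], [0.0, 0.0]]
    v = [0.0, 0.0]
    for n in range(2, len(x)):
        past = (x[n - 1], x[n - 2])
        for i in range(2):
            v[i] += past[i] * x[n]
            for j in range(2):
                r[i][j] += past[i] * past[j]
    det = r[0][0] * r[1][1] - r[0][1] * r[1][0]
    a1 = (v[0] * r[1][1] - v[1] * r[0][1]) / det
    a2 = (r[0][0] * v[1] - r[1][0] * v[0]) / det
    return a1, a2

def extrapolate(row, n_extra):
    """Smoothly extend the available row data past the detector edge."""
    a1, a2 = fit_ar2(row)
    y = list(row)
    for _ in range(n_extra):
        y.append(a1 * y[-1] + a2 * y[-2])
    return y
```

On a row whose samples follow a linear trend, the fitted predictor continues that trend smoothly, which is the behavior wanted before backprojection.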

Relevance: 30.00%

Abstract:

Non-Identical Duplicate video detection is a challenging research problem. Non-Identical Duplicate videos are pairs of videos that are not exactly identical but are almost similar. In this paper, we evaluate two methods, keyframe-based and tomography-based, for detecting Non-Identical Duplicate videos. Both make use of the existing Scale-Invariant Feature Transform (SIFT) method to find matches: between the key frames in the first method, and between cross-sections through the temporal axis of the videos in the second. We provide extensive experimental results and an analysis of the accuracy and efficiency of the two methods on a data set of Non-Identical Duplicate video pairs.
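
The keyframe matching step rests on nearest-neighbor comparison of SIFT descriptors. The toy sketch below applies Lowe's ratio test to tiny hand-made 2-D vectors standing in for real 128-D SIFT descriptors; the paper's actual matching criterion and threshold are not specified here.

```python
def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Accept a match only if the best candidate clearly beats the runner-up."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((l2(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Hand-made stand-ins for descriptors from two keyframes:
frame1 = [(0.0, 0.0), (5.0, 5.0)]
frame2 = [(0.1, 0.0), (5.0, 5.1), (9.0, 9.0)]
matches = ratio_test_matches(frame1, frame2)
```

The ratio test discards ambiguous correspondences, so the surviving match count can serve as a similarity score between a pair of key frames (or between temporal cross-sections in the tomography-based variant).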

Relevance: 30.00%

Abstract:

The present work is an attempt to study crack initiation in nuclear-grade 9Cr-1Mo ferritic steel using acoustic emission (AE) as an online NDE tool. Laboratory experiments were conducted on five heat-treated compact tension (CT) specimens, 12.5 mm thick, made of nuclear-grade 9Cr-1Mo ferritic steel, by subjecting them to cyclic tensile load. The AE test system was set up to acquire data continuously during the test, with an AE sensor mounted on one surface of the specimen. This was done to characterize AE data pertaining to crack initiation and then discriminate the samples in terms of their heat treatment processes based on the AE data. The AE signatures at crack initiation could conclusively bring to the fore the heat-treatment distinction on a sample-to-sample basis in a qualitative sense. Thus, the results obtained through these investigations are a step forward in using the AE technique as an online measurement tool for accurate detection and understanding of crack initiation and its profile in 9Cr-1Mo nuclear-grade steel subjected to different heat treatment processes.

Relevance: 30.00%

Abstract:

Modeling the performance behavior of parallel applications to predict their execution times for larger problem sizes and numbers of processors has been an active area of research for several years. Existing curve-fitting strategies for performance modeling use data from experiments conducted under uniform loading conditions; hence the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on experiments conducted with this model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
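
To make the idea concrete, here is a hedged one-variable sketch (the paper's model is multi-dimensional and its exact form is not given here) that fits a rational model t(x) ~ (p0 + p1*x)/(1 + q1*x) by linearizing to t = p0 + p1*x - q1*x*t and solving the normal equations:

```python
def solve3(m, v):
    """Gauss-Jordan elimination for a 3x3 linear system, with partial pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][3] / a[i][i] for i in range(3)]

def fit_rational(xs, ts):
    """Least-squares fit of the linearized rational model t = p0 + p1*x - q1*x*t."""
    feats = [(1.0, x, -x * t) for x, t in zip(xs, ts)]
    m = [[sum(f[i] * f[j] for f in feats) for j in range(3)] for i in range(3)]
    v = [sum(f[i] * t for f, t in zip(feats, ts)) for i in range(3)]
    p0, p1, q1 = solve3(m, v)
    return lambda x: (p0 + p1 * x) / (1.0 + q1 * x)
```

Because the model is a ratio of polynomials, it can capture execution times that saturate or grow non-polynomially with load, which plain polynomial fits handle poorly.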

Relevance: 30.00%

Abstract:

Equations for the computation of integral and partial thermodynamic properties of mixing in quaternary systems are derived using data on the constituent binary systems and shortest-distance composition paths to the binaries. The composition path from a quaternary composition to the i-j binary is characterized by a constant value of (Xi − Xj). The merits of this composition path over others with constant values of Xi/Xj or Xi are discussed. Finally, the equations are generalized to higher-order systems. They are exact for regular solutions, but may be used in a semiempirical mode for non-regular solutions.

Relevance: 30.00%

Abstract:

Efavirenz, (S)-6-chloro-4-(cyclopropylethynyl)-1,4-dihydro-4-(trifluoromethyl)-2H-3,1-benzoxazin-2-one, is an anti-HIV agent belonging to the class of non-nucleoside inhibitors of the HIV-1 reverse transcriptase. A systematic quantum chemical study of the possible conformations of efavirenz, their relative stabilities, and their vibrational spectra is reported. Structural and spectral characteristics of efavirenz have been studied by vibrational spectroscopy and quantum chemical methods. Density functional theory (DFT) calculations of the potential energy curve, optimized geometries, and vibrational spectra have been carried out using 6-311++G(d,p) basis sets and the B3LYP functional. Based on these results, we discuss the correlation between the vibrational modes and the crystalline structure of the most stable form of efavirenz. A complete analysis of the experimental infrared and Raman spectra is reported on the basis of the wavenumbers of the vibrational bands and the potential energy distribution. The infrared and Raman spectra of the molecule based on the DFT calculations show reasonable agreement with the experimental results. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. (C) 2011 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

A library of simple organic salts derived from tert-butoxycarbonyl (Boc)-protected L-amino acids and two secondary amines (dicyclohexylamine and dibenzylamine) is synthesized, following a supramolecular synthon rationale, to generate a new series of easily accessible low-molecular-weight gelators (LMWGs). Of the 12 salts prepared, the nitrobenzene gel of dicyclohexylammonium Boc-glycinate (GLY.1) displayed remarkable load-bearing, moldable, and self-healing properties. These properties of GLY.1, and the inability of its dibenzylammonium counterpart (GLY.2) to display them, were explained using microscopic and rheological data. Single-crystal structures of eight salts displayed the presence of a 1D hydrogen-bonded network (HBN) that is believed to be important in gelation. Powder X-ray diffraction, in combination with the single-crystal X-ray structure of GLY.1, clearly established the presence of a 1D hydrogen-bonded network in the xerogel of the nitrobenzene gel of GLY.1. That such remarkable properties of an easily accessible (by salt formation) small molecule arise from supramolecular (non-covalent) interactions is quite intriguing, and such easily synthesized materials may be useful in stress-bearing and other applications.

Relevance: 30.00%

Abstract:

Diffuse optical tomography (DOT) is one of the ways to probe highly scattering media such as tissue using low-energy near-infrared (NIR) light to reconstruct a map of the optical property distribution. The interaction of photons with biological tissue is a non-linear process, and photon transport through the tissue is modeled using diffusion theory. The inversion problem is often solved through iterative methods based on nonlinear optimization for the minimization of a data-model misfit function. The solution of the non-linear problem can be improved by modeling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, and after minimization it reduces to Ax = b. The spatial distribution of the optical parameter can be obtained by solving this equation iteratively for x. As the problem is non-linear, ill-posed, and ill-conditioned, there is an error or correction term for x at each iteration. A linearization strategy is proposed for the solution of the nonlinear ill-posed inverse problem through a linear combination of the system matrix and the error in the solution. By propagating the error information e (obtained from the previous iteration) into the minimization function f(x), we can rewrite it as f(x; e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x; e) = f(x) + e^T A e. The self-guided, spatially weighted prior e^T A e, where e is the error in estimating x, propagated along the principal nodes facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes the side lobes, thereby improving the contrast, localization, and resolution of the reconstructed image, which has not been possible with conventional linear and regularization algorithms.

Relevance: 30.00%

Abstract:

We study the nature of quiet-Sun oscillations using multi-wavelength observations from TRACE, Hinode, and SOHO. The aim is to investigate the existence of propagating waves in the solar chromosphere and the transition region by analyzing the statistical distribution of power in different locations, e.g. in bright magnetic (network), bright non-magnetic, and dark non-magnetic (inter-network) regions, separately. We use Fourier power and phase-difference techniques combined with a wavelet analysis. Two-dimensional Fourier power maps were constructed in the period bands 2–4 minutes, 4–6 minutes, 6–15 minutes, and beyond 15 minutes. We detect the presence of long-period oscillations with periods between 15 and 30 minutes in bright magnetic regions. These oscillations were detected from the chromosphere to the transition region. The Fourier power maps show that short-period power is mainly concentrated in dark regions whereas long-period power is concentrated in bright magnetic regions. This is the first report of long-period waves in quiet-Sun network regions. We suggest that the observed propagating oscillations are due to magnetoacoustic waves, which can be important for the heating of the solar atmosphere.

Relevance: 30.00%

Abstract:

Memory models for shared-memory concurrent programming languages typically guarantee sequential consistency (SC) semantics for data-race-free (DRF) programs, while providing very weak or no guarantees for non-DRF programs. In effect, programmers are expected to write only DRF programs, which are then executed with SC semantics. With this in mind, we propose a novel scalable solution for dataflow analysis of concurrent programs, which is proved to be sound for DRF programs with SC semantics. We use the synchronization structure of the program to propagate dataflow information among threads without having to consider all interleavings explicitly. Given a dataflow analysis that is sound for sequential programs and meets certain criteria, our technique automatically converts it into an analysis for concurrent programs.

Relevance: 30.00%

Abstract:

This paper considers the problem of weak-signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak-signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to the presence of navigation data bits and to frequency, phase, and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
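
The conventional non-coherent PDI accumulation that the paper starts from can be sketched in a few lines; the block length and toy signal below are illustrative, not the paper's parameters. Squaring each coherent block sum before accumulating removes the sign of the unknown data bits:

```python
def pdi_statistic(samples, block_len):
    """Sum of |coherent block sum|^2 over consecutive blocks (non-coherent PDI)."""
    total = 0.0
    for start in range(0, len(samples) - block_len + 1, block_len):
        block_sum = sum(samples[start:start + block_len])
        total += abs(block_sum) ** 2
    return total

# A toy signal whose data bit flips sign between the two blocks:
signal = [1 + 0j] * 10 + [-1 + 0j] * 10
coherent = abs(sum(signal)) ** 2          # the sign flip cancels the plain coherent sum
noncoherent = pdi_statistic(signal, 10)   # PDI is immune to the flip
```

This immunity to bit transitions is exactly why PDI is the baseline detector, and the squaring loss it incurs at low SNR is what motivates the improved test statistic proposed in the paper.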

Relevance: 30.00%

Abstract:

In order to reduce motion artifacts in DSA, non-rigid image registration is commonly used before subtracting the mask from the contrast image. Since DSA registration requires a set of spatially non-uniform control points, a conventional MRF model is not very efficient. In this paper, we introduce the concept of pivotal and non-pivotal control points to address this and propose a non-uniform MRF for DSA registration. We use quad-trees in a novel way to generate the non-uniform grid of control points. Our MRF formulation produces a smooth displacement field and therefore results in better artifact reduction than registering the control points independently. We achieve improved computational performance using pivotal control points without compromising the artifact reduction. We have tested our approach on several clinical data sets, and we present the results of quantitative analysis, clinical assessment, and performance improvement on a GPU. (C) 2013 Elsevier Ltd. All rights reserved.
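
The quad-tree-driven placement of non-uniform control points can be sketched as follows. This is a simplification: the split criterion here is plain intensity variance with an illustrative threshold, and the paper's distinction between pivotal and non-pivotal points is not reproduced.

```python
def variance(img, x0, y0, w, h):
    vals = [img[y][x] for y in range(y0, y0 + h) for x in range(x0, x0 + w)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def quadtree_points(img, x0, y0, w, h, thresh, min_size, points):
    """Recursively subdivide blocks; collect block corners as control points."""
    points.update({(x0, y0), (x0 + w, y0), (x0, y0 + h), (x0 + w, y0 + h)})
    if w <= min_size or h <= min_size or variance(img, x0, y0, w, h) <= thresh:
        return  # homogeneous or minimal block: no further control points here
    hw, hh = w // 2, h // 2
    for dx, dy, sw, sh in ((0, 0, hw, hh), (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)):
        quadtree_points(img, x0 + dx, y0 + dy, sw, sh, thresh, min_size, points)

# A flat image never subdivides: only the 4 outer corners become control points.
flat = [[0] * 8 for _ in range(8)]
pts_flat = set()
quadtree_points(flat, 0, 0, 8, 8, thresh=1.0, min_size=2, points=pts_flat)
```

Textured regions thus receive a dense grid of control points while homogeneous background gets a sparse one, which is the motivation for a non-uniform MRF over these points.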

Relevance: 30.00%

Abstract:

The broadcast nature of the wireless medium jeopardizes secure transmissions. Cryptographic measures fail to ensure security when eavesdroppers have superior computational capability; information-theoretic approaches, however, can still assure it. We use physical-layer security to guarantee a non-zero secrecy rate in single-source, single-destination multi-hop networks with eavesdroppers, for two cases: when eavesdropper locations and channel gains are known, and when their positions are unknown. For the case when eavesdropper locations are known, we propose a two-phase solution that consists of finding activation sets and then obtaining transmit powers subject to SINR constraints. We introduce methods to find activation sets and compare their performance. Necessary but reasonable approximations are made in the power-minimization formulations for tractability. For scenarios with no eavesdropper location information, we propose minimizing the vulnerability region (the area having zero secrecy rate) over the network. Our results show that, in the absence of location information, the average number of eavesdroppers who have access to the data is reduced.
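
For intuition, the non-zero secrecy rate condition can be illustrated with the standard Gaussian wiretap expression C_s = max(0, log2(1 + SNR_D) - log2(1 + SNR_E)); the SNR values below are hypothetical and not taken from the paper's network model.

```python
import math

def secrecy_rate(snr_dest, snr_eve):
    """Non-negative gap between destination and eavesdropper capacities (bits/use)."""
    return max(0.0, math.log2(1 + snr_dest) - math.log2(1 + snr_eve))

good_link = secrecy_rate(15.0, 3.0)  # destination much stronger: positive rate
dead_zone = secrecy_rate(3.0, 15.0)  # eavesdropper stronger: zero secrecy rate
```

The set of positions where this rate is zero is precisely the vulnerability region whose area the paper proposes to minimize when eavesdropper locations are unknown.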