956 results for Jacobian arithmetic


Relevance:

10.00%

Publisher:

Abstract:

Let K be a field of characteristic zero and let $m_0, \ldots, m_{e-1}$ be a sequence of positive integers. Let C be an algebroid monomial curve in the affine e-space $\mathbb{A}_K^e$ defined parametrically by $X_0 = T^{m_0}, \ldots, X_{e-1} = T^{m_{e-1}}$, and let A be the coordinate ring of C. In this paper, we assume that some $e-1$ terms of $m_0, \ldots, m_{e-1}$ form an arithmetic sequence, construct a minimal set of generators for the derivation module $\mathrm{Der}_K(A)$ of A, and write an explicit formula for $\mu(\mathrm{Der}_K(A))$.
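
A concrete instance of this setup (our illustration; the specific exponents are not taken from the paper): take $e = 3$ and the arithmetic sequence $(m_0, m_1, m_2) = (3, 4, 5)$, so that

$$C:\; X_0 = T^3, \quad X_1 = T^4, \quad X_2 = T^5, \qquad A = K[[T^3, T^4, T^5]],$$

and $\mu(\mathrm{Der}_K(A))$ denotes the minimal number of generators of the derivation module of this ring.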

Relevance:

10.00%

Publisher:

Abstract:

Use of dipolar and quadrupolar couplings for quantum information processing (QIP) by nuclear magnetic resonance (NMR) is described. In these cases, instead of the individual spins being qubits, the $2^n$ energy levels of the spin system can be treated as an n-qubit system. It is demonstrated that QIP in such systems can be carried out using transition-selective pulses in $\mathrm{CH_3CN}$, $\mathrm{{}^{13}CH_3CN}$, $^{7}$Li (I = 3/2) and $^{133}$Cs (I = 7/2), oriented in liquid crystals, yielding 2- and 3-qubit systems. Creation of pseudopure states and implementation of logic gates and arithmetic operations (half-adder and subtractor) have been carried out in these systems using transition-selective pulses.

Relevance:

10.00%

Publisher:

Abstract:

Let K be a field and let $m_0, \ldots, m_{e-1}$ be a sequence of positive integers. Let W be a monomial curve in the affine e-space $\mathbb{A}_K^e$, defined parametrically by $X_0 = T^{m_0}, \ldots, X_{e-1} = T^{m_{e-1}}$, and let p be the defining ideal of W. In this article, we assume that some $e-1$ terms of $m_0, \ldots, m_{e-1}$ form an arithmetic sequence and produce a Gröbner basis for p.
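
A minimal SymPy sketch of the object being studied: the defining ideal of a monomial curve can be computed by eliminating $T$ from a lex Gröbner basis. The exponent sequence (3, 4, 5) is a hypothetical arithmetic-sequence example, not one from the article, and this brute-force elimination is not the article's construction.

    # Defining ideal of the monomial curve (T^3, T^4, T^5) via elimination.
    from sympy import symbols, groebner

    T, x0, x1, x2 = symbols('T x0 x1 x2')
    # In lex order with T first, the basis elements free of T generate
    # the elimination ideal, i.e. the defining ideal p of the curve.
    G = groebner([x0 - T**3, x1 - T**4, x2 - T**5],
                 T, x0, x1, x2, order='lex')
    p = [g for g in G.exprs if T not in g.free_symbols]
    print(p)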

Relevance:

10.00%

Publisher:

Abstract:

We propose a scheme for the compression of tree-structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is generated automatically from the syntactic specification of the tree language. Experiments on data sets of intermediate-code trees yield compression ratios from 2.5 to 8, for file sizes ranging from 167 bytes to 1 megabyte.
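
Since arithmetic coding compresses to within a couple of bits of the model's entropy, the achievable size for a tree is essentially the sum of $-\log_2 P(\text{production})$ over the productions used to derive it. A toy sketch of this modeling idea (the grammar symbols and probabilities below are our invention):

    import math

    # Hypothetical production probabilities for a toy regular tree grammar.
    probs = {'Plus': 0.25, 'Mul': 0.25, 'Const': 0.35, 'Var': 0.15}

    def code_length(tree):
        """Ideal code length in bits: -log2 of the tree's model probability."""
        label, *children = tree
        return -math.log2(probs[label]) + sum(code_length(c) for c in children)

    t = ('Plus', ('Mul', ('Var',), ('Const',)), ('Const',))
    print(round(code_length(t), 2), 'bits')   # entropy bound for this tree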

Relevance:

10.00%

Publisher:

Abstract:

We propose a method to encode 3D magnetic resonance image data, together with a decoder, in such a way that fast access to any 2D image is possible by decoding only the corresponding information from each subband image, thus minimizing decoding time. This is of immense use to the medical community, because most PET and MRI data are volumetric. Preprocessing is carried out at every level before the wavelet transformation, to enable easier identification of coefficients from each subband image. Inclusion of special characters in the bit stream facilitates access to the corresponding information in the encoded data. Results are obtained by applying Daub4 along the x (row) and y (column) directions and Haar along the z (slice) direction. Results comparable with the existing technique are achieved; in addition, decoding time is reduced by a factor of 1.98. Arithmetic coding is used to encode the corresponding information independently.
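
The transform stage as described (Haar along slices, Daub4 in-plane) can be sketched with PyWavelets; the volume below is a random placeholder, and the arithmetic-coding and marker stages are only indicated in comments:

    import numpy as np
    import pywt

    vol = np.random.rand(64, 64, 32)           # placeholder volume, axes (x, y, z)
    lo, hi = pywt.dwt(vol, 'haar', axis=2)     # Haar along z (slice) direction
    subbands = []
    for band in (lo, hi):
        for b in pywt.dwt(band, 'db4', axis=0):          # Daub4 along x (rows)
            subbands.extend(pywt.dwt(b, 'db4', axis=1))  # Daub4 along y (columns)
    # Each subband would then be arithmetic-coded independently, with special
    # markers in the bit stream so that any single 2D slice can be decoded
    # without touching the rest of the volume.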

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of maintaining information about the rank of a matrix $M$ under changes to its entries. For an $n \times n$ matrix $M$, we show an amortized upper bound of $O(n^{\omega-1})$ arithmetic operations per change for this problem, where $\omega < 2.376$ is the exponent for matrix multiplication, under the assumption that there is a {\em lookahead} of up to $\Theta(n)$ locations. That is, we know in advance the next up to $\Theta(n)$ locations $(i_1,j_1),(i_2,j_2),\ldots$ whose entries are going to change; however, we do not know the new entries in these locations in advance, but receive them dynamically.
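
The role of the lookahead can be seen from a back-of-envelope count (our intuition for the shape of the bound, not the paper's actual algorithm): knowing the next $\Theta(n)$ update locations lets the changes be processed in batches whose cost is dominated by one fast matrix multiplication, so a batch costs $O(n^{\omega})$, the price of a single $n \times n$ multiplication, and

$$\frac{O(n^{\omega})}{\Theta(n)\ \text{changes per batch}} \;=\; O(n^{\omega-1}) \text{ amortized per change.}$$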

Relevance:

10.00%

Publisher:

Abstract:

The 4×4 discrete cosine transform is one of the most important building blocks of the emerging video coding standard H.264. The conventional implementation approximates the transform matrix elements to facilitate integer arithmetic, for which the hardware is suitably designed. Although the transform coding then involves no multiplications, the quantization process requires sixteen 16-bit multiplications. The algorithm used here eliminates both the approximation in transform coding and the multiplication in the quantization process by using algebraic integer coding. We propose an area-efficient implementation of the transform and quantization blocks based on algebraic integer coding. The designs were synthesized in 90 nm TSMC CMOS technology and were also implemented on a Xilinx FPGA. The gate count and throughput achieved are 7000 and 125 Msamples/s, respectively.
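
For reference, the conventional multiplication-free baseline the abstract contrasts with is the standard H.264 4×4 integer core transform (this sketch shows that baseline, not the algebraic-integer design proposed here):

    import numpy as np

    # Standard H.264 4x4 integer core transform matrix: entries are
    # +/-1 and +/-2, so hardware needs only additions and shifts.
    C = np.array([[1,  1,  1,  1],
                  [2,  1, -1, -2],
                  [1, -1, -1,  1],
                  [1, -2,  2, -1]])

    def forward_core_transform(block):
        """Y = C X C^T for a 4x4 residual block."""
        return C @ block @ C.T

    X = np.arange(16).reshape(4, 4)
    print(forward_core_transform(X))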

Relevance:

10.00%

Publisher:

Abstract:

This paper describes techniques to estimate the worst-case execution time of executable code on architectures with data caches. The underlying mechanism is Abstract Interpretation, which is used for the dual purposes of tracking address computations and cache behavior. A simultaneous numeric and pointer analysis, using an abstraction for discrete sets of values, computes safe approximations of access addresses, which are then used to predict cache behavior using Must Analysis. A heuristic is also proposed which generates likely worst-case estimates; it can be used in soft real-time systems and also for reasoning about the tightness of the safe estimate. The analysis methods can handle programs with non-affine access patterns, for which conventional Presburger Arithmetic formulations or Cache Miss Equations do not apply. The precision of the estimates is user-controlled and can be traded off against analysis time. Executables are analyzed directly, which, apart from enhancing precision, renders the method language-independent.
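
The flavor of Must Analysis can be sketched as follows (a simplification of ours: an abstract cache state maps memory blocks to an upper bound on their LRU age, and at a control-flow join only blocks guaranteed present on both paths survive, at their worse age):

    def must_join(s1, s2):
        """Join two abstract must-cache states at a control-flow merge.

        A block survives only if present in both states; its age bound is
        the pessimistic maximum. Blocks in the joined state with age bound
        below the cache associativity are guaranteed hits.
        """
        return {blk: max(s1[blk], s2[blk]) for blk in s1.keys() & s2.keys()}

    a = {'x': 0, 'y': 2}
    b = {'x': 1, 'z': 0}
    print(must_join(a, b))   # {'x': 1}: only x is guaranteed to be cached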

Relevance:

10.00%

Publisher:

Abstract:

An all-digital on-chip clock skew measurement system via subsampling is presented. The clock nodes are subsampled with a near-frequency asynchronous sampling clock to produce beat signals that are skewed in the same proportion, but on a larger time scale. The beat signals are then suitably masked to extract only the skews of the rising edges of the clock signals. We propose a histogram of the arithmetic difference of the beat signals, which decouples clock jitter from the minimum measurable skew and allows skews arbitrarily close to zero to be measured with a precision limited largely by measurement time, unlike the conventional XOR-based histogram approach. We also show analytically that the proposed approach leads to an unbiased estimate of skew. Measured results from a 65 nm delay measurement front-end indicate that, for an input skew range of ±1 fan-out-of-4 (FO4) delay, a ±3σ resolution of 0.84 ps can be obtained with an integral error of 0.65 ps. We also demonstrate experimentally that frequency modulation of the sampling clock maintains precision, indicating the robustness of the technique to jitter, and show how such modulation helps restore precision when the clocks are rationally related.
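
The time magnification underlying subsampling works as follows (our summary of the standard argument): sampling a clock of period $T_c$ with a near-frequency clock of period $T_s = T_c + \delta$ advances the sampled phase by only $\delta$ per sample, so a skew $\Delta t$ between two clock nodes reappears between their beat signals magnified to

$$\Delta t_{\mathrm{beat}} = M \, \Delta t, \qquad M = \frac{T_s}{\delta} \approx \frac{T_c}{\delta},$$

which is what makes sub-picosecond skews measurable on a much coarser time scale.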

Relevance:

10.00%

Publisher:

Abstract:

Biochemical pathways involving chemical kinetics at medium concentrations of the reacting molecules (i.e., at mesoscale) can be approximated as chemical Langevin equation (CLE) systems. We address the physically consistent non-negative simulation of CLE sample paths, the issue of non-Lipschitz diffusion coefficients when a species approaches depletion, and any stiffness due to faster reactions. We propose and analyse the non-negative Fully Implicit Stochastic α (FISα) method, in which reaction channels stopped by depleted reactants are deleted until the reactant concentration rises again, preserving non-negativity, and a positive definite Jacobian is maintained to deal with possible stiffness. The method is illustrated with the computation of the active Protein Kinase C response in the Protein Kinase C pathway.
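
For orientation, a plain explicit Euler–Maruyama step for a birth-death CLE, with a crude non-negativity projection, is sketched below; this is the naive baseline that an implicit scheme like FISα improves upon, not the proposed method, and the rate constants are placeholders.

    import numpy as np

    k1, k2 = 10.0, 0.1          # placeholder birth and death rate constants
    dt, T = 0.01, 10.0
    rng = np.random.default_rng(0)

    x = 50.0                    # initial concentration
    for _ in range(int(T / dt)):
        a1, a2 = k1, k2 * max(x, 0.0)    # propensities, clamped at zero
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
        # CLE: dX = (a1 - a2) dt + sqrt(a1) dW1 - sqrt(a2) dW2
        x += (a1 - a2) * dt + np.sqrt(a1) * dW1 - np.sqrt(a2) * dW2
        x = max(x, 0.0)         # crude projection; FIS-alpha handles this implicitly
    print(x)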

Relevance:

10.00%

Publisher:

Abstract:

Diffuse optical tomography (DOT) using near-infrared (NIR) light is a promising tool for noninvasive imaging of deep tissue, capable of quantitative reconstruction of absorption coefficient inhomogeneities in tissue. The motivation for reconstructing the optical property variation, and in particular the absorption coefficient variation, is that it can be used to diagnose different metabolic and disease states of tissue. In DOT, as in any other medical imaging modality, the aim is to produce a reconstruction with good spatial resolution and accuracy from noisy measurements. We study the performance of a phase array system for the detection of optical inhomogeneities in tissue. Light transport through tissue is diffusive in nature and can be modeled using the diffusion equation if the optical parameters of the inhomogeneity are close to those of the background. The amplitude cancellation method, which uses dual out-of-phase sources (a phase array), can detect and locate small objects in a turbid medium. The inverse problem is solved using model-based iterative image reconstruction. The diffusion equation is solved using the finite element method to provide the forward model for photon transport. The solution of the forward problem is used to compute the Jacobian, and the resulting system of equations is solved using a conjugate gradient search. Simulation studies show that a phase array system can resolve inhomogeneities of size 5 mm when the absorption coefficient of the inhomogeneity is twice that of the background tissue. To validate this result, a prototype dual-source system has been developed. Experiments are carried out by inserting an inhomogeneity of high optical absorption coefficient into an otherwise homogeneous phantom while keeping the scattering coefficient the same. High-frequency (100 MHz) modulated, dual out-of-phase laser source light is propagated through the phantom. With a homogeneous object, the interference of these sources creates an amplitude null and a phase shift of 180° along a plane between the two sources. A solid resin phantom with inhomogeneities simulating a tumor is used in our experiment. The amplitude and phase are found to be disturbed by the presence of the inhomogeneity in the object. The experimental data (amplitude and phase measured at the detector) are used for reconstruction. The results show that the method is able to detect multiple inhomogeneities of size 4 mm. The localization error for a 5 mm inhomogeneity is found to be approximately 1 mm.
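
Generically, the update step described (a Jacobian from the FEM forward model, then a conjugate gradient solve) has the Gauss–Newton form sketched below; J, the forward operator, and the damping lam are placeholders rather than details from the paper.

    import numpy as np
    from scipy.sparse.linalg import cg

    def reconstruction_update(J, residual, lam=1e-3):
        """One model-based iterative reconstruction step: solve the damped
        normal equations (J^T J + lam I) dx = J^T r by conjugate gradients."""
        n = J.shape[1]
        A = J.T @ J + lam * np.eye(n)
        dx, info = cg(A, J.T @ residual)
        return dx

    # mu_a = mu_a + reconstruction_update(J, measured_data - forward_model(mu_a))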

Relevance:

10.00%

Publisher:

Abstract:

The near-infrared diffuse optical tomography (DOT) technique can provide good quantitative reconstruction of tissue absorption and scattering properties given additional inputs such as the input and output modulation depths and a correction for photon leakage. We calculate the two-dimensional (2D) input modulation depth from three-dimensional (3D) diffusion to model the 2D diffusion of photons. The photon leakage as light traverses from the phantom to the fiber tip is estimated using a solid angle model. Experiments are carried out for single (5 and 6 mm) as well as multiple (6 and 8 mm) inhomogeneities of higher absorption coefficient in a homogeneous phantom. The diffusion equation for photon transport is solved using the finite element method, and the Jacobian is modeled for reconstructing the optical parameters. We study the development and performance of a DOT system using a modulated single light source and multiple detectors. Dual-source methods are reported to have better reconstruction capability for resolving and localizing single as well as multiple inhomogeneities because of their superior noise rejection. However, an experimental setup with dual sources is much more difficult to implement, because two identical out-of-phase light probes must be kept adjusted symmetrically on either side of the detector during scanning. Our work shows that, with a relatively simpler single-source system, the results are better in terms of resolution and localization. The experiments are carried out with 5 and 6 mm inhomogeneities separately, and with 6 and 8 mm inhomogeneities together, with absorption coefficients almost three times that of the background. The results show that our experimental single-source system, with additional inputs such as the 2D input/output modulation depth and an air-fiber interface correction, is capable of detecting the 5 and 6 mm inhomogeneities separately and can identify the size difference of multiple inhomogeneities such as 6 and 8 mm. The localization error is zero, and the recovered absorption coefficient is 93% of that of the inhomogeneity embedded in the experimental phantom.

Relevance:

10.00%

Publisher:

Abstract:

Null dereferences are a bane of programming in languages such as Java. In this paper we propose a sound, demand-driven, inter-procedurally context-sensitive dataflow analysis technique to verify a given dereference as safe or potentially unsafe. Our analysis uses an abstract lattice of formulas to find a pre-condition at the entry of the program such that a null dereference can occur only if the initial state of the program satisfies this pre-condition. We use a simplified domain of formulas, abstracting out integer arithmetic as well as unbounded access paths due to recursive data structures. For the sake of precision we model aliasing relationships explicitly in our abstract lattice, enable strong updates, and use a limited notion of path sensitivity. For the sake of scalability we prune formulas continually as they are propagated, replacing with true those conjuncts that are less likely to be useful in validating or invalidating the formula. We have implemented our approach and present an evaluation of it on a set of ten real Java programs. Our results show that the design features we have incorporated enable the analysis to (a) explore long, inter-procedural paths to verify each dereference, with (b) reasonable accuracy and (c) very quick response time per dereference, making it suitable for use in desktop development environments.
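
The backward, demand-driven flavor of such an analysis can be conveyed by a deliberately tiny toy (our illustration: weakest-precondition substitution over strings; the real analysis works over an abstract lattice of formulas with aliasing and path sensitivity):

    def weakest_precondition(assignments, formula):
        """Propagate a failure condition backwards through assignments:
        wp(x := e, phi) = phi[e/x]. Purely illustrative string rewriting."""
        for var, expr in reversed(assignments):
            formula = formula.replace(var, expr)
        return formula

    # Program: q = r; p = q; ... p.f   -- when can the dereference of p fail?
    assignments = [('q', 'r'), ('p', 'q')]
    print(weakest_precondition(assignments, 'p == null'))   # 'r == null'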

Relevance:

10.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations, whether deterministic, probabilistic/randomized, or heuristic, are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them.

The absolutely error-free quantities, and the completely errorless computations of a natural process, can never be captured by any means at our disposal. While the computations in nature and natural processes, including their real input quantities, are exact, the computations that we do using a digital computer, or that are carried out in embedded form, never are. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. By error we mean relative error bounds: since the exact error is never known under any circumstances or in any context, the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs.

Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, whereas in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and obtain results that can be useful in real-world situations.

The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz. the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Finally, the interdependence among the error, the confidence, and the cost is discussed.
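
One concrete instance of the interval-arithmetic limitation mentioned above is the classic dependency problem, sketched here in a minimal example of our own:

    class Interval:
        """Just enough interval arithmetic to show the dependency problem."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __sub__(self, other):
            # Interval subtraction cannot see that both operands may be
            # the same variable, so the bounds are wider than necessary.
            return Interval(self.lo - other.hi, self.hi - other.lo)
        def __repr__(self):
            return f'[{self.lo}, {self.hi}]'

    x = Interval(1.0, 2.0)
    print(x - x)   # [-1.0, 1.0], although x - x is exactly 0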

Relevance:

10.00%

Publisher:

Abstract:

Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. In addition, however, regenerating codes possess the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that, in the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest because it greatly simplifies system implementation. The first result of this paper is an explicit exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case d = n-1. This code has a particularly simple graphical description and, most interestingly, can carry out exact repair without any need to perform arithmetic operations; we term this ability to perform repair through mere transfer of data "repair by transfer". The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios, which we term "helper node pooling", and show that it is the necessity of satisfying such scenarios that overconstrains the system.
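
For context, the functional-repair tradeoff referred to above is usually stated as follows (standard in the regenerating-codes literature, not derived in this abstract): with per-node storage $\alpha$ and per-helper repair download $\beta$, a file of size $B$ is recoverable if and only if

$$B \;\le\; \sum_{i=0}^{k-1} \min\{\alpha, \, (d-i)\beta\},$$

and the minimum-repair-bandwidth point treated in this paper is the corner $\alpha = d\beta$, where every term of the sum equals $(d-i)\beta$.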