859 results for Positive Trigonometric Polynomials


Relevance:

20.00%

Publisher:

Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned so as to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined from the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermite interpolation curve is used to approximate the integrated data set. A single parameter lets the user control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and the quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
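The core idea of the abstract above — interpolate the *cumulative* integral with a slope-controlled Hermite curve and difference it on the target bin edges — can be sketched as follows. This is a hypothetical re-implementation under my own assumptions, not the authors' code: the function name, the use of finite-difference slopes, and the way the single `tension` parameter enters (a simple scaling of the Hermite slopes) are all illustrative choices.

```python
import numpy as np

def rebin_conservative(edges_src, values, edges_dst, tension=1.0):
    """Re-bin histogrammed data onto a new grid, conserving the integral.

    Instead of interpolating the bin values directly, the cumulative
    integral F is interpolated with a cubic Hermite curve whose slopes
    are scaled by `tension` in [0, 1]: 0 reduces the curve to a monotone
    smoothstep between cumulative values (no overshoot), 1 uses standard
    finite-difference slopes (smoother, but may overshoot).
    """
    widths = np.diff(edges_src)
    # Cumulative integral at the source edges; monotone if values >= 0.
    F = np.concatenate([[0.0], np.cumsum(values * widths)])
    x = np.asarray(edges_src, dtype=float)
    # Finite-difference slopes for the Hermite curve, damped by `tension`.
    m = np.gradient(F, x) * tension

    def hermite(xq):
        # Locate each query point's interval and evaluate the cubic
        # Hermite basis on the local coordinate t in [0, 1].
        i = np.clip(np.searchsorted(x, xq, side='right') - 1, 0, len(x) - 2)
        h = x[i + 1] - x[i]
        t = (xq - x[i]) / h
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*F[i] + h10*h*m[i] + h01*F[i + 1] + h11*h*m[i + 1]

    Fq = hermite(np.asarray(edges_dst, dtype=float))
    # Differencing the interpolated integral yields the re-binned density.
    return np.diff(Fq) / np.diff(edges_dst)
```

Because the curve passes exactly through the cumulative values at the source edges, the total integral over the full range is conserved for any value of `tension`.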

Relevance:

20.00%

Publisher:

Abstract:

Electrochemical capacitors (ECs), also known as supercapacitors or ultracapacitors, are energy storage devices with properties between those of batteries and conventional capacitors. ECs have evolved through several generations. The current trend is to combine a double-layer electrode with a battery-type electrode in an asymmetric capacitor configuration. The double-layer electrode is usually an activated carbon (AC), which offers high surface area, good conductivity, and relatively low cost. The battery-type electrode usually consists of PbO2 or Ni(OH)2. In this research, a graphitic carbon foam was impregnated with Co-substituted Ni(OH)2 by electrochemical deposition to serve as the positive electrode of an asymmetric capacitor. The purpose was to reduce the cost and weight of the EC while maintaining or increasing capacitance and gravimetric energy storage density. XRD indicated that the deposited material on the nickel-carbon foam electrode was typical α-Ni(OH)2. The specific capacitance of the nickel-carbon foam electrode was 2641 F/g at 5 mA/cm2, higher than the previously reported value of 2080 F/g for a 7.5% Al-substituted α-Ni(OH)2 electrode. Three different ACs (RP-20, YP-50F, and Ketjenblack EC-600JD) were evaluated in terms of morphology and electrochemical performance to determine their suitability for use in ECs. The study indicated that YP-50F showed the best overall performance because of its combination of micropore and mesopore structures. Therefore, YP-50F was chosen to pair with the nickel-carbon foam electrode for further evaluation. Six cells with different negative-to-positive active-mass ratios were fabricated to study the electrochemical performance. Among these, the asymmetric capacitor with a mass ratio of 3.71 gave the highest specific energy and specific power, 24.5 Wh/kg and 498 W/kg, respectively.
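For context on how the active-mass ratio above is typically chosen: asymmetric cells are commonly sized from the charge-balance condition q+ = q-, with q = C · m · ΔV per electrode. The sketch below illustrates that standard relation only; it is not the authors' procedure, and it does not reproduce their 3.71 ratio because, apart from the 2641 F/g positive-electrode value quoted in the abstract, the activated-carbon capacitance and the voltage windows are assumed placeholder numbers.

```python
def mass_ratio(c_pos, dv_pos, c_neg, dv_neg):
    """Negative-to-positive active-mass ratio m-/m+ from charge balance:
    q+ = q-, where each electrode stores q = C * m * dV."""
    return (c_pos * dv_pos) / (c_neg * dv_neg)

# 2641 F/g is the positive-electrode value reported above; the activated
# carbon capacitance (150 F/g) and the voltage windows (0.45 V and 1.0 V)
# are assumed here purely for illustration.
ratio = mass_ratio(c_pos=2641.0, dv_pos=0.45, c_neg=150.0, dv_neg=1.0)
```

With these placeholder inputs the balance condition calls for roughly eight times more carbon than nickel hydroxide by mass, which is why real designs favor battery-type electrodes with very high specific capacitance.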

Relevance:

20.00%

Publisher:

Abstract:

In this report we investigate the effect of negative energy density in a classic Friedmann cosmology. Although such components have never been measured and may be unphysical, we explore the evolution of a Universe containing a significant cosmological abundance of any of a number of hypothetical stable negative energy components. These negative energy (Ω < 0) forms include negative phantom energy (w < -1), negative cosmological constant (w = -1), negative domain walls (w = -2/3), negative cosmic strings (w = -1/3), negative mass (w = 0), negative radiation (w = 1/3), and negative ultra-light (w > 1/3). Assuming that such components generate pressure as perfect fluids, the attractive or repulsive nature of each negative energy component is reviewed. The Friedmann equations can only be balanced when negative energies are coupled to a greater magnitude of positive energy or positive curvature, and minimal cases of both are reviewed. The future and fate of such universes are reviewed in terms of curvature, temperature, acceleration, and energy density, with endings categorized as a Big Crunch, Big Void, or Big Rip, and further qualified as "Warped", "Curved", or "Flat", "Hot" versus "Cold", and "Accelerating" versus "Decelerating" versus "Coasting". A universe that ends by contracting to zero energy density is termed a Big Poof. Which contracting universes "bounce" into expansion and which expanding universes "turn over" into contraction are also reviewed. The names given to these endings of the Universe are our own nomenclature.
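The component bookkeeping above follows the standard dimensionless Friedmann equation H(a)²/H₀² = Σᵢ Ωᵢ a^(-3(1+wᵢ)) + Ω_k a^(-2), with curvature taking up the remainder Ω_k = 1 - ΣΩᵢ. A minimal sketch of that balance (function names and the root-scanning approach are my own, not the paper's) which locates the "turnover" scale factor where H² first reaches zero during expansion:

```python
import numpy as np

def hubble_sq(a, components):
    """Dimensionless Friedmann equation H(a)^2 / H0^2 for perfect fluids.

    `components` is a list of (Omega_i, w_i) pairs; each fluid dilutes as
    a^{-3(1+w)}.  Curvature absorbs the remainder, Omega_k = 1 - sum(Omega_i),
    and scales as a^{-2}.  Omega_i may be negative, as in the abstract above.
    """
    omega_k = 1.0 - sum(om for om, _ in components)
    return sum(om * a**(-3.0 * (1.0 + w)) for om, w in components) \
        + omega_k * a**(-2.0)

def turnover_scale_factor(components, a_max=100.0, n=100_000):
    """Scale factor at which H^2 first drops to zero while expanding
    (the 'turnover' into contraction), or None if the universe keeps
    expanding up to a_max."""
    a = np.linspace(1.0, a_max, n)
    e2 = hubble_sq(a, components)
    below = np.nonzero(e2 <= 0.0)[0]
    return float(a[below[0]]) if below.size else None
```

For example, positive matter (Ω = 2.0, w = 0) balanced against a negative cosmological constant (Ω = -0.5, w = -1) expands from a = 1 but turns over near a ≈ 1.38, whereas a flat universe with only positive components never does.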