6 results for high-order upwind schemes
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
This article centers on the computational performance of the continuous and discontinuous Galerkin time stepping schemes for general first-order initial value problems in R^n, with continuous nonlinearities. We briefly review a recent existence result for discrete solutions from [6], and provide a numerical comparison of the two time discretization methods.
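As a rough illustration of the two families compared in this article: their lowest-order members reduce to classical one-step methods, with dG(0) coinciding with the implicit (backward) Euler method and cG(1) with midpoint quadrature coinciding with the implicit midpoint rule. Below is a minimal Python sketch under those identifications, assuming a scalar test problem and scipy's fsolve as the nonlinear solver (the article itself treats general systems in R^n):

import numpy as np
from scipy.optimize import fsolve

def f(t, y):
    return -y**3 + np.sin(t)      # an illustrative continuous nonlinearity

def dg0_step(tn, yn, h):
    # dG(0): y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1})  (backward Euler)
    return fsolve(lambda z: z - yn - h * f(tn + h, z), yn)[0]

def cg1_step(tn, yn, h):
    # cG(1) with midpoint quadrature:
    # y_{n+1} = y_n + h * f(t_n + h/2, (y_n + y_{n+1}) / 2)
    return fsolve(lambda z: z - yn - h * f(tn + 0.5 * h, 0.5 * (yn + z)), yn)[0]

h, y_dg, y_cg = 0.1, 1.0, 1.0
for n in range(100):
    tn = n * h
    y_dg = dg0_step(tn, y_dg, h)
    y_cg = cg1_step(tn, y_cg, h)
print(y_dg, y_cg)   # dG(0) is first-order accurate in h, cG(1) second-order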
Abstract:
One of the most intriguing phenomena in glass-forming systems is the dynamic crossover (T_B), occurring well above the glass temperature (T_g). So far, it has been estimated mainly from the linearized derivative analysis of the primary relaxation time τ(T) or viscosity η(T) experimental data, originally proposed by Stickel et al. [J. Chem. Phys. 104, 2043 (1996); J. Chem. Phys. 107, 1086 (1997)]. However, this formal procedure rests on the general validity of the Vogel-Fulcher-Tammann equation, which has been strongly questioned recently [T. Hecksher et al., Nature Phys. 4, 737 (2008); P. Lunkenheimer et al., Phys. Rev. E 81, 051504 (2010); J. C. Martinez-Garcia et al., J. Chem. Phys. 134, 024512 (2011)]. We present a qualitatively new way to identify the dynamic crossover, based on analysis in the apparent enthalpy space (H_a' = d ln τ / d(1/T)) via a new plot, ln H_a' vs. 1/T, supported by the Savitzky-Golay filtering procedure for gaining insight into the noise-distorted high-order derivatives. It is shown that, depending on the ratio f = m_high/m_low between the "virtual" fragility in the high-temperature dynamic domain (m_high) and the "real" fragility at T_g (the low-temperature dynamic domain, m = m_low), glass formers can be split into two groups, related to f < 1 and f > 1. The link of this phenomenon to the ratio between the apparent enthalpy and the activation energy, as well as to the behavior of the configurational entropy, is indicated.
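A minimal sketch of the derivative analysis described in this abstract, assuming synthetic Vogel-Fulcher-Tammann data with illustrative parameters in place of measured τ(T); scipy's savgol_filter smooths and differentiates in one pass, which is what makes the noise-distorted derivative H_a' accessible:

import numpy as np
from scipy.signal import savgol_filter

tau0, B, T0 = 1e-14, 2000.0, 150.0           # illustrative VFT parameters (assumption)
inv_T = np.linspace(1/400.0, 1/200.0, 201)   # uniform grid in 1/T
T = 1.0 / inv_T
ln_tau = np.log(tau0) + B / (T - T0)         # ln τ from the VFT form
ln_tau += np.random.normal(0.0, 0.02, ln_tau.size)   # mimic measurement noise

# H_a' = d ln τ / d(1/T), computed as a smoothed first derivative on the 1/T grid.
delta = inv_T[1] - inv_T[0]
H_a = savgol_filter(ln_tau, window_length=21, polyorder=3, deriv=1, delta=delta)

ln_H_a = np.log(H_a)   # the proposed ln H_a' vs. 1/T representation
# A change of slope in ln_H_a as a function of inv_T marks the dynamic crossover T_B.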
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
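The core idea, restated as code: interpolate the cumulative integral of the data with a shape-preserving Hermitian curve and difference it at the new bin edges, so the overall integral is conserved by construction and a monotone interpolant keeps positive data non-negative. A minimal sketch, assuming scipy's PchipInterpolator as a stand-in for the paper's parametrized Hermitian curve (it suppresses overshoot and undershoot but exposes no tuning parameter):

import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(old_edges, old_values, new_edges):
    """Re-bin histogrammed data onto new_edges, conserving the total integral."""
    # Cumulative integral at the old bin edges (monotone when values are >= 0).
    cum = np.concatenate(([0.0], np.cumsum(old_values * np.diff(old_edges))))
    F = PchipInterpolator(old_edges, cum)   # monotonicity-preserving Hermite fit
    new_cum = F(np.clip(new_edges, old_edges[0], old_edges[-1]))
    return np.diff(new_cum) / np.diff(new_edges)   # mean density per new bin

old_edges = np.linspace(0.0, 10.0, 11)
old_values = np.array([0, 1, 5, 9, 4, 1, 0, 2, 3, 1], dtype=float)
new_edges = np.linspace(0.0, 10.0, 26)
new_values = rebin_conservative(old_edges, old_values, new_edges)
# Totals agree: (old_values * np.diff(old_edges)).sum()
#            == (new_values * np.diff(new_edges)).sum()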
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data has to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%-20%, while the parameter of the new algorithm can be adjusted in order to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may result in significant artifacts of the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
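The single control parameter mentioned here can be pictured as a tension on the Hermite node slopes. The sketch below is an assumption standing in for the paper's specific parametrization: c = 0 gives a smooth centered-difference fit that may overshoot, while c = 1 flattens the slopes and eliminates overshoot entirely:

import numpy as np
from scipy.interpolate import CubicHermiteSpline

def tension_hermite(x, y, c=0.5):
    """Cubic Hermite interpolant whose overshoot is damped by tension c in [0, 1]."""
    dydx = np.gradient(y, x)                    # centered finite-difference slopes
    return CubicHermiteSpline(x, y, (1.0 - c) * dydx)

x = np.linspace(0.0, 9.0, 10)
y = np.array([0, 0, 0, 8, 9, 8, 0, 0, 0, 0], dtype=float)
smooth = tension_hermite(x, y, c=0.0)   # may dip below zero near the peak
damped = tension_hermite(x, y, c=0.8)   # overshoot largely suppressed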
Abstract:
Trypanosomes show an intriguing organization of their mitochondrial DNA into a catenated network, the kinetoplast DNA (kDNA). While more than 30 proteins involved in kDNA replication have been described, only a few components of the kDNA segregation machinery are currently known. Electron microscopy studies identified a high-order structure, the tripartite attachment complex (TAC), linking the basal body of the flagellum via the mitochondrial membranes to the kDNA. Here we describe TAC102, a novel core component of the TAC, which is essential for proper kDNA segregation during cell division. Loss of TAC102 leads to mitochondrial genome missegregation but has no impact on proper organelle biogenesis and segregation. The protein is present throughout the cell cycle and is assembled into the newly developing TAC only after the pro-basal body has matured, indicating a hierarchy in the assembly process. Furthermore, we provide evidence that the TAC is replicated de novo rather than by a semi-conservative mechanism. Lastly, we demonstrate that TAC102 lacks an N-terminal mitochondrial targeting sequence and requires sequences in the C-terminal part of the protein for its proper localization.