43 results for elliptic curve cryptography


Relevance: 20.00%

Publisher:

Abstract:

Although the influence of cytochrome P450 inhibitory drugs on the area under the curve (AUC) of cyclosporine (CsA) has been described, data on how these substances affect the shape of the blood concentration curve are scarce. CsA profiling examinations were performed in 20 lung transplant recipients taking 400 mg, 200 mg, or no itraconazole by assessing CsA blood levels before oral intake and 1, 2, and 4 hr afterwards (C0, C1, C2, and C4, respectively). The three groups showed comparable results for C0, C2, and AUC(0-12). The non-itraconazole group showed greater values for Cmax, Cmax-C0, peak-trough fluctuation, and rise to Cmax, as well as a shorter tmax. Comedication with the metabolic inhibitor itraconazole is thus associated with a flattening of the CsA blood concentration profile in lung transplant recipients, a change that cannot be captured by isolated C0, C2, or AUC(0-12) values alone.
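For orientation, the profile metrics named above (Cmax, tmax, Cmax-C0, and AUC(0-12)) follow standard pharmacokinetic definitions; a minimal sketch of how such metrics are computed from a sampled concentration profile is shown below, using the trapezoidal rule for the AUC. The function name and sample values are purely illustrative and are not taken from the study.

    # Standard pharmacokinetic summary metrics for a sampled blood-concentration
    # profile: Cmax, tmax, Cmax - C0 (increase over the pre-dose trough) and a
    # trapezoidal AUC. Illustrative only -- not the study's formulas or data.
    def profile_metrics(times_hr, conc):
        c_max = max(conc)
        t_max = times_hr[conc.index(c_max)]
        auc = sum((t2 - t1) * (c1 + c2) / 2.0
                  for t1, t2, c1, c2 in zip(times_hr, times_hr[1:], conc, conc[1:]))
        return {"Cmax": c_max, "tmax": t_max, "Cmax-C0": c_max - conc[0], "AUC": auc}

    # Illustrative sparse profile (hours, ng/mL) over a 12-hour dosing interval.
    times = [0, 1, 2, 4, 12]
    levels = [150, 900, 1100, 700, 180]
    print(profile_metrics(times, levels))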

Relevance: 20.00%

Publisher:

Abstract:

Fluid optimization is a major contributor to improved patient outcome. Unfortunately, anesthesiologists are often in doubt whether an additional fluid bolus will improve the patient's hemodynamics, since excess fluid may even jeopardize the patient's condition. This article discusses physiological concepts of liberal versus restrictive fluid management, followed by a discussion of the capability of various monitors to predict fluid responsiveness. The parameter difference in pulse pressure (dPP), derived from heart-lung interaction in mechanically ventilated patients, is discussed in detail. The dPP cutoff value of 13% for predicting fluid responsiveness is presented together with several techniques for assessing dPP. Finally, confounding variables affecting dPP measurements, such as ventilation parameters, pneumoperitoneum, and the use of norepinephrine, are also discussed.
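To make the dPP parameter concrete, a minimal sketch follows, assuming the commonly cited definition dPP (%) = 100 * (PPmax - PPmin) / [(PPmax + PPmin)/2], where PPmax and PPmin are the largest and smallest beat-to-beat pulse pressures within one respiratory cycle. Only the 13% threshold comes from the abstract; the formula, function name, and sample values are standard or illustrative assumptions.

    # dPP per the commonly used definition (an assumption here, not spelled out
    # in the abstract): 100 * (PPmax - PPmin) / ((PPmax + PPmin) / 2), evaluated
    # over one respiratory cycle of a mechanically ventilated patient.
    def delta_pp(systolic, diastolic):
        pulse_pressures = [s - d for s, d in zip(systolic, diastolic)]
        pp_max, pp_min = max(pulse_pressures), min(pulse_pressures)
        return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

    # Illustrative beat-to-beat pressures (mmHg) over one ventilator cycle.
    sys_p = [118, 112, 105, 110, 116]
    dia_p = [72, 70, 68, 69, 71]
    dpp = delta_pp(sys_p, dia_p)
    print(f"dPP = {dpp:.1f}%  ->  predicted fluid responsive: {dpp > 13.0}")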

Relevance: 20.00%

Publisher:

Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002, IEEE Signal Process. Lett. 9, 81-4).
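The core idea, interpolating the integrated (cumulative) data with a Hermitian curve and differencing it on the new grid, can be sketched as follows. The sketch uses a monotone Hermitian interpolant (SciPy's PCHIP) as a stand-in for the paper's parametrized Hermitian curve, so the single tuning parameter of the original algorithm is not reproduced; all function and variable names are illustrative.

    # Integral-conserving re-binning via monotone Hermitian interpolation of the
    # cumulative data. PCHIP stands in for the paper's parametrized Hermitian curve.
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    def rebin(src_edges, src_values, dst_edges):
        # src_values[i] is the integral of the quantity over [src_edges[i], src_edges[i+1]].
        cumulative = np.concatenate(([0.0], np.cumsum(src_values)))
        # Monotone cubic Hermite interpolation of the non-decreasing cumulative curve:
        # no overshoot, hence no negative bins for non-negative input.
        F = PchipInterpolator(src_edges, cumulative)
        # New bin contents are differences of the interpolated cumulative curve;
        # if dst_edges span the same range as src_edges, the total is conserved.
        return np.diff(F(dst_edges))

    # Example: re-bin a 4-bin histogram onto 8 finer bins; the sum stays 11.0.
    src_edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    src_values = np.array([2.0, 5.0, 1.0, 3.0])
    dst_edges = np.linspace(0.0, 4.0, 9)
    print(rebin(src_edges, src_values, dst_edges).sum())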

Relevance: 20.00%

Publisher:

Abstract:

Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be continuously distributed by an analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the features mentioned above, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed onto the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%-20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to a lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may introduce significant artifacts into the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
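The role of the single control parameter can be pictured with a tension-controlled cubic Hermite (cardinal-spline) interpolant, in which one parameter scales the tangents and hence the amount of over- and undershoot. This is an illustrative stand-in for the authors' parametrization, not their exact algorithm, and all names and data below are assumptions.

    # Tension-controlled cubic Hermite interpolation: c = 0 gives Catmull-Rom
    # tangents (may overshoot), c = 1 gives zero tangents (no overshoot).
    import numpy as np

    def hermite_eval(x, y, xq, c=0.5):
        x, y = np.asarray(x, float), np.asarray(y, float)
        xq = np.atleast_1d(np.asarray(xq, float))
        m = np.zeros_like(y)                      # node tangents, scaled by (1 - c)
        m[1:-1] = (1.0 - c) * (y[2:] - y[:-2]) / (x[2:] - x[:-2])
        m[0] = (1.0 - c) * (y[1] - y[0]) / (x[1] - x[0])
        m[-1] = (1.0 - c) * (y[-1] - y[-2]) / (x[-1] - x[-2])
        out = np.empty_like(xq)
        for k, xv in enumerate(xq):
            i = int(np.clip(np.searchsorted(x, xv) - 1, 0, len(x) - 2))
            h = x[i + 1] - x[i]
            t = (xv - x[i]) / h
            h00, h10 = 2*t**3 - 3*t**2 + 1, t**3 - 2*t**2 + t
            h01, h11 = -2*t**3 + 3*t**2, t**3 - t**2
            out[k] = h00*y[i] + h10*h*m[i] + h01*y[i+1] + h11*h*m[i+1]
        return out

    # Step-like data where unconstrained interpolation overshoots; the maximum
    # interpolated value shrinks toward the data maximum (1.0) as c approaches 1.
    xs, ys = [0.0, 1.0, 2.0, 3.0, 4.0], [0.0, 0.0, 1.0, 1.0, 1.0]
    xq = np.linspace(0.0, 4.0, 41)
    for c in (0.0, 0.5, 1.0):
        print(c, hermite_eval(xs, ys, xq, c).max())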

Relevance: 20.00%

Publisher:

Abstract:

In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝ^d, d = 2, 3. For the a posteriori analysis, computable upper and lower bounds on the error are derived in terms of a natural energy norm, which are explicit in the local mesh size and local polynomial degree of the approximating finite element method. A series of numerical experiments illustrates the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
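To indicate what "explicit in the local mesh size and local polynomial degree" means, a typical residual-based hp-dG indicator for the linear model problem -Δu = f has the form sketched below. This is only an illustrative template (the precise powers of p_K vary between analyses), not the paper's estimator for the quasi-Newtonian problem, where the nonlinear flux replaces the gradient.

    \eta^2 = \sum_{K \in \mathcal{T}_h} \eta_K^2, \qquad
    \eta_K^2 = \frac{h_K^2}{p_K^2}\,\big\| f + \Delta u_h \big\|_{L^2(K)}^2
             + \frac{h_K}{p_K}\,\big\| [\![ \nabla u_h \cdot \mathbf{n} ]\!] \big\|_{L^2(\partial K \setminus \partial\Omega)}^2
             + \gamma^2\,\frac{p_K^3}{h_K}\,\big\| [\![ u_h ]\!] \big\|_{L^2(\partial K)}^2,

where h_K and p_K are the local mesh size and polynomial degree, [[.]] denotes the jump across element faces, and γ is the interior penalty parameter; reliability and efficiency results then bound the energy-norm error from above and below by η, up to data oscillation.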

Relevance: 20.00%

Publisher:

Abstract:

We introduce and analyze hp-version discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems in three-dimensional polyhedral domains. To resolve possible corner-, edge- and corner-edge singularities, we consider hexahedral meshes that are geometrically and anisotropically refined toward the corresponding neighborhoods. Similarly, the local polynomial degrees are increased linearly and possibly anisotropically away from singularities. We design interior penalty hp-dG methods and prove that they are well-defined for problems with singular solutions and stable under the proposed hp-refinements. We establish (abstract) error bounds that will allow us to prove exponential rates of convergence in the second part of this work.
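The geometric refinement with linearly increasing degrees is easiest to picture in one dimension: elements shrink geometrically toward the singular point while the approximation order grows linearly away from it. The sketch below is a generic construction of such a sigma-geometric hp-mesh; sigma, mu, and all names are illustrative choices rather than values from the paper.

    # One-dimensional sigma-geometric mesh graded toward a singular point at x = 0,
    # with polynomial degrees increasing linearly away from it.
    def geometric_hp_mesh(n_layers, sigma=0.5, mu=1.0, length=1.0):
        # Element edges accumulate geometrically toward 0:
        # 0, sigma^(n-1)*L, ..., sigma^2*L, sigma*L, L.
        edges = [0.0] + [length * sigma**j for j in reversed(range(n_layers))]
        elements = list(zip(edges[:-1], edges[1:]))
        # Lowest degree on the smallest element at the singularity, increasing
        # linearly with slope mu toward the far end of the domain.
        degrees = [max(1, round(mu * (j + 1))) for j in range(n_layers)]
        return elements, degrees

    elems, degs = geometric_hp_mesh(n_layers=5, sigma=0.5)
    for (a, b), p in zip(elems, degs):
        print(f"element [{a:.4f}, {b:.4f}]  ->  degree {p}")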