958 results for Calibration curve
Abstract:
Although the influence of cytochrome P450 inhibitory drugs on the area under the curve (AUC) of cyclosporine (CsA) has been described, data concerning the impact of these substances on the shape of the blood concentration curve are scarce. CsA profiling examinations were performed in 20 lung transplant recipients taking 400 mg, 200 mg, or no itraconazole by assessing CsA blood levels before and 1, 2, and 4 hr after oral intake (C0, C1, C2, and C4, respectively). The three groups showed comparable results for C0, C2, and AUC(0-12). Greater values were found in the non-itraconazole group for Cmax, Cmax-C0, peak-trough fluctuation, and rise to Cmax; tmax was also shorter in the non-itraconazole group. Comedication with the metabolic inhibitor itraconazole is therefore associated with a flattening of the CsA blood concentration profile in lung transplant recipients. These changes cannot be assessed by isolated C0, C2, or AUC(0-12) values alone.
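As an illustration of the profile metrics named above (not the study's own calculation, and with invented concentrations), the sketch below shows how Cmax, tmax, Cmax-C0, a trapezoidal AUC over the sampled window, and peak-trough fluctuation could be derived from sparse C0/C1/C2/C4 samples; the study's AUC(0-12) would additionally require later samples or a limited-sampling model.

```python
import numpy as np

# Hypothetical sparse CsA sampling profile: times (h) and blood levels (ng/mL).
times = np.array([0.0, 1.0, 2.0, 4.0])          # C0, C1, C2, C4
conc  = np.array([250.0, 900.0, 1200.0, 700.0])

cmax = conc.max()                   # peak concentration
tmax = times[conc.argmax()]         # time to peak
rise = cmax - conc[0]               # Cmax - C0

# Trapezoidal AUC over the sampled window (0-4 h only).
auc_0_4 = np.sum(np.diff(times) * (conc[:-1] + conc[1:]) / 2.0)

# Peak-trough fluctuation, expressed relative to the average concentration.
c_avg = auc_0_4 / (times[-1] - times[0])
ptf = (cmax - conc.min()) / c_avg * 100.0

print(f"Cmax={cmax:.0f} ng/mL at tmax={tmax:.1f} h, Cmax-C0={rise:.0f} ng/mL")
print(f"AUC(0-4)={auc_0_4:.0f} ng*h/mL, peak-trough fluctuation={ptf:.1f}%")
```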
Abstract:
Fluid optimization is a major contributor to improved patient outcomes. Unfortunately, anesthesiologists are often in doubt whether an additional fluid bolus will improve the hemodynamics of the patient or not, as excess fluid may even jeopardize the patient's condition. This article discusses physiological concepts of liberal versus restrictive fluid management, followed by a discussion of the respective capabilities of various monitors to predict fluid responsiveness. The parameter difference in pulse pressure (dPP), derived from heart-lung interaction in mechanically ventilated patients, is discussed in detail. The dPP cutoff value of 13% to predict fluid responsiveness is presented together with several assessment techniques of dPP. Finally, confounding variables on dPP measurements, such as ventilation parameters, pneumoperitoneum and use of norepinephrine, are also mentioned.
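A minimal Python sketch of the dPP calculation and the 13% cutoff mentioned above, assuming beat-by-beat systolic/diastolic readings over one respiratory cycle (illustrative values, not a validated clinical implementation):

```python
import numpy as np

def pulse_pressures(systolic, diastolic):
    """Beat-by-beat pulse pressure from paired systolic/diastolic readings (mmHg)."""
    return np.asarray(systolic, float) - np.asarray(diastolic, float)

def dpp_percent(pp_over_resp_cycle):
    """dPP = (PPmax - PPmin) / mean(PPmax, PPmin) * 100, over one respiratory cycle."""
    pp = np.asarray(pp_over_resp_cycle, dtype=float)
    pp_max, pp_min = pp.max(), pp.min()
    return (pp_max - pp_min) / ((pp_max + pp_min) / 2.0) * 100.0

# Hypothetical beat-by-beat arterial pressures over one respiratory cycle (mmHg).
sys_p = [122, 118, 112, 109, 114, 120]
dia_p = [ 78,  76,  74,  73,  75,  77]

dpp = dpp_percent(pulse_pressures(sys_p, dia_p))
print(f"dPP = {dpp:.1f}% -> {'likely' if dpp > 13 else 'unlikely'} fluid responder (13% cutoff)")
```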
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
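The Python sketch below illustrates the core idea of interpolating the integrated (cumulative) data rather than the data itself. It substitutes SciPy's shape-preserving PCHIP interpolant for the paper's single-parameter Hermitian curve, so it demonstrates integral conservation and non-negativity for positive data, but not the tunable overshoot control described above.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(src_edges, src_values, dst_edges):
    """Re-bin bin-integrated data onto new edges by interpolating the cumulative sum.

    src_values[i] is the integral of the quantity over [src_edges[i], src_edges[i+1]].
    The cumulative integral is interpolated with a monotone Hermite (PCHIP) curve,
    so the total integral is conserved and non-negative data stays non-negative.
    """
    cum = np.concatenate([[0.0], np.cumsum(src_values)])   # cumulative at source edges
    interp = PchipInterpolator(src_edges, cum)              # shape-preserving Hermite curve
    return np.diff(interp(dst_edges))                       # bin integrals on the new grid

# 1D example: re-bin a histogrammed quantity from 5 coarse bins to 8 finer bins.
src_edges  = np.linspace(0.0, 10.0, 6)
src_values = np.array([1.0, 4.0, 9.0, 3.0, 0.5])
dst_edges  = np.linspace(0.0, 10.0, 9)

dst_values = rebin_conservative(src_edges, src_values, dst_edges)
print(dst_values, dst_values.sum())   # the sums match: the overall integral is conserved
```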
Abstract:
The High-Altitude Water Cherenkov (HAWC) Experiment is a gamma-ray observatory that utilizes water silos as Cherenkov detectors to measure the electromagnetic air showers created by gamma rays. The experiment consists of an array of closely packed water Cherenkov detectors (WCDs), each with four photomultiplier tubes (PMTs). The direction of the gamma ray will be reconstructed using the times when the electromagnetic shower front triggers PMTs in each WCD. To achieve an angular resolution as low as 0.1 degrees, a laser calibration system will be used to measure relative PMT response times. The system will direct 300 ps laser pulses into two fiber-optic networks. Each network will use optical fan-outs and switches to direct light to specific WCDs. The first network is used to measure the light transit time out to each pair of detectors, and the second network sends light to each detector, calibrating the response times of the four PMTs within each detector. As the relative PMT response times depend on the number of photons in the light pulse, neutral density filters will be used to control the light intensity across five orders of magnitude. This system will run both continuously in a low-rate mode and in a high-rate mode with many intensity levels. In this thesis, the design of the calibration system and systematic studies verifying its performance are presented.
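As a generic illustration of an intensity-dependent timing correction (not the HAWC system's actual calibration procedure), the sketch below fits a simple time-slewing model to hypothetical charge/time-offset pairs of the kind that could be obtained at different neutral-density-filter settings:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data for one PMT: mean pulse charge (photoelectrons)
# vs. mean measured hit-time offset (ns), one point per ND-filter setting.
charge_pe   = np.array([   2,    5,   20,  100,  500, 2000, 10000], dtype=float)
time_offset = np.array([ 6.1,  4.0,  2.2,  1.1,  0.6,  0.4,   0.3])   # ns

def slewing(q, t0, a):
    """Simple time-walk model: the offset shrinks roughly as 1/sqrt(charge)."""
    return t0 + a / np.sqrt(q)

(t0, a), _ = curve_fit(slewing, charge_pe, time_offset)

def corrected_time(t_measured, q):
    """Remove the charge-dependent part so PMT times can be compared directly."""
    return t_measured - slewing(q, t0, a) + t0

print(f"fitted t0={t0:.2f} ns, a={a:.2f} ns*sqrt(pe)")
print("corrected offset at 5 pe:", corrected_time(4.0, 5.0))
```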
Abstract:
Purpose: Development of an interpolation algorithm for re‐sampling spatially distributed CT‐data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re‐sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT‐data has to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high order polynomial interpolations, which do not fulfill all the above mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re‐distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second order polynomials. It is demonstrated that these interpolation functions may over‐ or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted in order to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm was tested by re‐gridding a series of X‐ray CT‐images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re‐sampling algorithms using high order polynomial interpolation functions may result in significant artifacts in the re‐sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
Abstract:
BACKGROUND: In this paper, we present a new method for the calibration of a microscope and its registration using an active optical tracker. METHODS: Practically, both operations are done simultaneously by moving an active optical marker within the field of view of the two devices. The IR LEDs composing the marker are first segmented from the microscope images. By knowing their corresponding three-dimensional (3D) positions in the optical tracker reference system, it is possible to find the transformation matrix between the reference frames of the two devices. Registration and calibration parameters can be extracted directly from that transformation. In addition, since the zoom and focus can be modified by the surgeon during the operation, we propose a spline-based method to update the camera model to the new setup. RESULTS: The proposed technique is currently being used in an augmented reality system for image-guided surgery in the fields of ear, nose and throat (ENT) and craniomaxillofacial surgeries. CONCLUSIONS: The results have proved to be accurate, and the technique is a fast, dynamic and reliable way to calibrate and register the two devices in an OR environment.
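One standard way to recover such a transformation from 3D LED positions in the tracker frame and their segmented 2D image positions is the Direct Linear Transform (DLT); the sketch below is a generic illustration of that approach, not necessarily the paper's exact formulation. Intrinsic (calibration) and extrinsic (registration) parameters can subsequently be factored out of the estimated matrix.

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P (x ~ P X) from >= 6 non-coplanar
    3D-2D correspondences via the Direct Linear Transform.

    points_3d: (N, 3) LED positions in the optical-tracker frame.
    points_2d: (N, 2) segmented LED positions in the microscope image (pixels).
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        rows.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
        rows.append([0, 0, 0, 0] + Xh + [-v * c for c in Xh])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)        # null-space solution, defined up to scale

def project(P, points_3d):
    """Project tracker-frame points into the image with the estimated matrix."""
    Xh = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]

# Synthetic check: project known points with a toy camera and recover it.
rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [2.0]])])
pts3d = rng.uniform(-1, 1, size=(10, 3)) + np.array([0.0, 0.0, 5.0])
pts2d = project(P_true, pts3d)

P_est = dlt_projection_matrix(pts3d, pts2d)
print(np.allclose(project(P_est, pts3d), pts2d, atol=1e-6))   # True
```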
Abstract:
The combustion strategy in a diesel engine has an impact on the emissions, fuel consumption and the exhaust temperatures. The PM mass retained in the catalyzed particulate filter (CPF) is a function of NO2 and PM concentrations in addition to the exhaust temperatures and flow rates. Thus the engine combustion strategy affects exhaust characteristics, which in turn have an impact on CPF operation and the PM mass retained and oxidized. In this report, a process has been developed to simulate the relationship between engine calibration, performance, and HC and PM oxidation in the diesel oxidation catalyst (DOC) and CPF, respectively. Fuel Rail Pressure (FRP) and Start of Injection (SOI) sweeps were carried out at five steady state engine operating conditions. This data, along with data from a previously carried out surrogate HD-FTP cycle [1], was used to create a transfer function model which estimates the engine-out emissions, flow rates, and temperatures for varied FRP and SOI over a transient cycle. Four different calibrations (test cases) were considered in this study, which were simulated through the transfer function model and the DOC model [1, 2]. The DOC outputs were then input into a model which simulates the NO2-assisted and thermal PM oxidation inside a CPF. Finally, the results were analyzed as to how engine calibration impacts the engine fuel consumption, HC oxidation in the DOC, and PM oxidation in the CPF. Additionally, active regeneration was simulated for the various test cases, and a comparative analysis of the fuel penalties involved was carried out.
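As a rough stand-in for the transfer function model described above (with invented sweep numbers and a simple quadratic response surface rather than the report's actual formulation), the following Python sketch fits one engine-out quantity to FRP and SOI sweep data and predicts it at a new calibration point:

```python
import numpy as np

def quad_features(frp, soi):
    """Quadratic response-surface basis in rail pressure (FRP) and injection timing (SOI)."""
    frp, soi = np.asarray(frp, float), np.asarray(soi, float)
    return np.column_stack([np.ones_like(frp), frp, soi, frp * soi, frp**2, soi**2])

def fit_response(frp, soi, y):
    """Least-squares fit of one engine-out quantity (e.g., a PM rate) to FRP/SOI sweeps."""
    coeffs, *_ = np.linalg.lstsq(quad_features(frp, soi), np.asarray(y, float), rcond=None)
    return coeffs

def predict_response(coeffs, frp, soi):
    return quad_features(frp, soi) @ coeffs

# Hypothetical sweep data at one steady-state operating condition (illustrative only).
frp = [ 800,  800, 1000, 1000, 1200, 1200, 1400, 1400]   # bar
soi = [  -4,    2,   -4,    2,   -4,    2,   -4,    2]   # deg
pm  = [0.042, 0.055, 0.035, 0.047, 0.030, 0.041, 0.027, 0.038]  # g/kWh

coeffs = fit_response(frp, soi, pm)
print("predicted PM at FRP=1100 bar, SOI=-1 deg:", predict_response(coeffs, [1100], [-1]))
```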