960 results for "Calibration curves"
Abstract:
Little is known about how the skills needed to perform ultrasound- or nerve-stimulator-guided peripheral nerve blocks are learnt. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance with those of residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience of ultrasound received ultrasound training, and another ten residents with no previous experience of nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique combined with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster, and lead to a higher final success rate, than those for nerve-stimulator-guided axillary brachial plexus block.
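The pooled bootstrap learning curve described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual analysis code; the outcome data and function name are hypothetical, and the procedure shown (resampling residents with replacement and averaging cumulative per-case success rates over many Monte Carlo draws) is one simple way to realize a bootstrapped institutional learning curve.

```python
import random

def institutional_learning_curve(residents, n_boot=1000, seed=0):
    """Estimate a pooled learning curve by bootstrapping residents.

    residents: list of per-resident outcome sequences (1 = successful
    block, 0 = failed block), ordered by case number.
    Returns the mean success rate at each case index, averaged over
    n_boot bootstrap resamples of the resident pool.
    """
    rng = random.Random(seed)
    n_cases = min(len(r) for r in residents)
    curves = []
    for _ in range(n_boot):
        # Resample residents with replacement (one Monte Carlo draw)
        sample = [rng.choice(residents) for _ in residents]
        curves.append([sum(r[i] for r in sample) / len(sample)
                       for i in range(n_cases)])
    # Average the success rate at each case index over all draws
    return [sum(c[i] for c in curves) / n_boot for i in range(n_cases)]

# Hypothetical outcome data for three novices (1 = success, 0 = failure)
data = [[0, 1, 1, 1], [0, 0, 1, 1], [1, 1, 1, 1]]
curve = institutional_learning_curve(data)
```

A rising curve of this kind is what allows the two training groups' learning rates and final success rates to be compared.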
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one rather than as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense, which in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, whose parameters have been extracted from real transient data. This model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
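The dynamic-constraint idea can be sketched with a discrete-time second-order linear model. This is a minimal illustration, not the paper's identified model: the natural frequency, damping ratio, and time step below are invented for the example (in the study these parameters were extracted from real transient data), and simple forward-Euler integration stands in for whatever solver the authors used.

```python
def achieved_trajectory(commanded, dt=0.1, wn=2.0, zeta=0.7):
    """Translate a commanded air-handling trajectory into the one
    actually achieved, modelling the closed actuator loop as a
    second-order linear system:  y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u.

    dt, wn (rad/s), zeta are illustrative values, not from the paper.
    """
    y, ydot = commanded[0], 0.0
    achieved = []
    for u in commanded:
        ydd = wn * wn * (u - y) - 2.0 * zeta * wn * ydot
        ydot += ydd * dt          # forward-Euler integration
        y += ydot * dt
        achieved.append(y)
    return achieved

# Step in commanded boost from 1.0 to 1.5 bar: the achieved value
# lags the command, which is exactly what keeps the optimizer from
# treating the problem as quasi-static.
cmd = [1.0] * 5 + [1.5] * 45
ach = achieved_trajectory(cmd)
```

Feeding `ach` rather than `cmd` into the emission models is what makes an optimized trajectory achievable in a transient sense.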
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and on differences in the non-parametric space, primarily driven by a high engine ΔP (the difference between exhaust and intake manifold pressures) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
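The transport-delay and sensor-lag correction mentioned above can be illustrated with a toy round trip. This sketch is hypothetical, not the authors' method: it assumes the analyzer behaves as a pure transport delay followed by a first-order lag, and simply shifts out the delay and algebraically inverts the first-order filter; real signals would need noise handling as well.

```python
def correct_sensor_signal(measured, delay_steps, tau, dt):
    """Undo a pure transport delay plus a first-order sensor lag.

    Assumed (illustrative) sensor model:
        y[k] = y[k-1] + (dt/tau) * (x[k - delay_steps] - y[k-1])
    Inverting it recovers the true signal x:
        x[k] = y[k-1+d] + (tau/dt) * (y[k+d] - y[k-1+d]),  d = delay_steps
    """
    shifted = measured[delay_steps:]      # remove the transport delay
    a = tau / dt
    recovered = [shifted[0]]
    for k in range(1, len(shifted)):
        recovered.append(shifted[k - 1] + a * (shifted[k] - shifted[k - 1]))
    return recovered

# Round trip on a synthetic step: delay and lag a known signal, then recover it
dt, tau, delay = 0.1, 0.5, 3
true = [0.0] * 5 + [1.0] * 20
lagged = [0.0]
for k in range(1, len(true)):
    u = true[k - delay] if k >= delay else true[0]
    lagged.append(lagged[-1] + (dt / tau) * (u - lagged[-1]))
recovered = correct_sensor_signal(lagged, delay, tau, dt)
```

Aligning emissions with in-cylinder events this way is what makes transient training data usable for modelling.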
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and cylinder-to-cylinder EGR distribution effects have been shown to be present but are unaccounted for by the ECM. The two modes and their associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and the data processing required for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing such extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage, relative to the distribution of the starting solution, to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is that of preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but that are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and results in lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
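The leverage-based extrapolation check can be sketched in a few lines. This is a simplified illustration of the general idea, not the paper's actual constraint: the function names and training design are invented, and comparing each candidate point's hat-matrix leverage against the training maximum is only a crude stand-in for constraining the full leverage distribution.

```python
import numpy as np

def leverage(X_train, X_query):
    """Statistical leverage h(x) = x (X'X)^{-1} x' of each query row
    relative to the training design X_train (hat-matrix diagonal when
    X_query == X_train)."""
    XtX_inv = np.linalg.inv(X_train.T @ X_train)
    return np.einsum('ij,jk,ik->i', X_query, XtX_inv, X_query)

def inside_training_leverage(X_train, X_query):
    """True where a query point's leverage does not exceed the maximum
    leverage seen in training, i.e. the regression model would not be
    extrapolating (in the leverage sense) at that point."""
    return leverage(X_train, X_query) <= leverage(X_train, X_train).max()

# Hypothetical design: intercept plus two inputs sampled in [-1, 1]
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1.0, 1.0, (50, 2))])
ok = inside_training_leverage(X, np.array([[1.0, 0.0, 0.0],    # interior
                                           [1.0, 5.0, 5.0]]))  # far outside
```

During the search, candidate solutions flagged as outside the training leverage range would be rejected or penalized.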
Abstract:
As the number of solutions to the Einstein equations with realistic matter sources that admit closed time-like curves (CTCs) has grown drastically, some authors [10] have called for a physical interpretation of these seemingly exotic curves, which could possibly allow for causality violations. A first step in drafting a physical interpretation would be to understand how CTCs are created, because recent work [16] has suggested that, to follow a CTC, observers must counter-rotate with the rotating matter, contrary to the currently accepted explanation that CTCs are created by inertial frame dragging. The exact link between inertial frame dragging and CTCs is investigated by simulating particle geodesics and the precession of gyroscopes along CTCs and along backward-in-time-oriented circular orbits in the van Stockum metric, which is known to have CTCs that could be traversable, so that the van Stockum cylinder could be exploited as a time machine. This study of gyroscope precession in the van Stockum metric supports the theory that CTCs are produced by inertial frame dragging due to rotating spacetime metrics.
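For orientation, the interior van Stockum dust solution can be written in one common form (sign conventions and parametrizations vary between authors; here $a$ is the rotation parameter of the dust):

```latex
ds^2 \;=\; e^{-a^2 r^2}\,\bigl(dr^2 + dz^2\bigr) \;+\; r^2\,d\phi^2 \;-\; \bigl(dt + a r^2\,d\phi\bigr)^2 ,
```

so that $g_{\phi\phi} = r^2\bigl(1 - a^2 r^2\bigr)$. The azimuthal circles of constant $t$, $r$, $z$ become timelike, i.e. CTCs, exactly where $a r > 1$, which is the region the geodesic and gyroscope simulations probe.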
Abstract:
Background and Aims: Data on the influence of calibration on the accuracy of continuous glucose monitoring (CGM) are scarce. The aim of the present study was to investigate whether the time point of calibration influences sensor accuracy and whether this effect differs according to glycemic level. Subjects and Methods: Two CGM sensors were inserted simultaneously into the abdomen, one on either side, in 20 individuals with type 1 diabetes. One sensor was calibrated predominantly using preprandial glucose (calibration(PRE)); the other was calibrated predominantly using postprandial glucose (calibration(POST)). At least three additional glucose values per day were obtained for analysis of accuracy. Sensor readings were divided into four categories according to the glycemic range of the reference values (low, ≤4 mmol/L; euglycemic, 4.1-7 mmol/L; hyperglycemic I, 7.1-14 mmol/L; and hyperglycemic II, >14 mmol/L). Results: The overall mean±SEM absolute relative difference (MARD) between capillary reference values and sensor readings was 18.3±0.8% for calibration(PRE) and 21.9±1.2% for calibration(POST) (P<0.001). MARD according to glycemic range was 47.4±6.5% (low), 17.4±1.3% (euglycemic), 15.0±0.8% (hyperglycemic I), and 17.7±1.9% (hyperglycemic II) for calibration(PRE), and 67.5±9.5% (low), 24.2±1.8% (euglycemic), 15.5±0.9% (hyperglycemic I), and 15.3±1.9% (hyperglycemic II) for calibration(POST). In the low and euglycemic ranges, MARD was significantly lower for calibration(PRE) than for calibration(POST) (P=0.007 and P<0.001, respectively). Conclusions: Sensor calibration based predominantly on preprandial glucose resulted in significantly higher overall sensor accuracy than predominantly postprandial calibration. The difference was most pronounced in the hypo- and euglycemic reference ranges, whereas the two calibration patterns were comparable in the hyperglycemic range.
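The accuracy metric used throughout this abstract, MARD, is straightforward to compute. The standard definition is the mean of the absolute differences between sensor and reference values, each expressed relative to the reference; the paired readings below are hypothetical.

```python
def mard(sensor, reference):
    """Mean absolute relative difference (MARD, %) between paired
    CGM sensor readings and reference glucose values: the standard
    measure of CGM sensor accuracy."""
    diffs = [abs(s - r) / r * 100.0 for s, r in zip(sensor, reference)]
    return sum(diffs) / len(diffs)

# Hypothetical paired readings (mmol/L)
ref = [5.0, 8.0, 12.0, 3.5]
sen = [5.5, 7.2, 12.6, 4.2]
m = mard(sen, ref)   # (10% + 10% + 5% + 20%) / 4 = 11.25%
```

Because each error is divided by the reference value, the same absolute error weighs far more in the low glycemic range, which is one reason the low-range MARD values reported above are so much larger.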