876 results for linear interpolation
Abstract:
Dissertation submitted to obtain the Master's Degree in Biomedical Engineering
Abstract:
In the analysis of heart rate variability (HRV), temporal series containing the distances between successive heartbeats are used to assess autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal, which can be affected by different types of artifacts, leading to incorrect interpretations in the analysis of HRV signals. The classic approach to dealing with these artifacts relies on correction methods, some of them based on interpolation, substitution, or statistical techniques. However, few studies have assessed the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and non-linear correction methods on HRV signals with induced artifacts by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats measured by telemetry were used to generate real heart rate variability signals free of errors. Missing points (beats) were simulated in these series in different quantities in order to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW) and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed through the mean value of the series (AVNN), standard deviation (SDNN), root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE) and symbolic dynamics (SD), measured on each HRV signal with and without artifacts. The results show that, at low levels of missing points, all correction techniques perform very similarly, with very close values for each HRV parameter. At higher levels of losses, however, only the NPI method yields HRV parameters with low error values and few significant differences in comparison to the values calculated for the same signals without missing points.
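For illustration, here is a minimal sketch of the simplest of the correction methods compared above, linear interpolation (LI) over missing beats; the function and variable names are hypothetical, since the abstract gives no implementation details.

```python
import numpy as np

def li_correct(beat_times, rr, valid):
    """Linear-interpolation (LI) correction of an RR-interval series.

    beat_times : beat occurrence times in seconds; rr : RR intervals in ms;
    valid : boolean mask, False where a beat was lost (induced artifact).
    Hypothetical sketch -- the study's actual LI implementation may differ.
    """
    return np.interp(beat_times, beat_times[valid], rr[valid])

# Toy example: 10 beats at roughly 75 bpm with two induced missing points
rng = np.random.default_rng(0)
beat_times = np.cumsum(np.full(10, 0.8))
rr = 800.0 + 20.0 * rng.standard_normal(10)
valid = np.ones(10, dtype=bool)
valid[[3, 7]] = False
rr_li = li_correct(beat_times, rr, valid)
```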
Electromagnetic tracker feasibility in the design of a dental superstructure for edentulous patients
Abstract:
The success of the osseointegration concept and the Brånemark protocol is highly associated with the accuracy achieved in the production of an implant-supported prosthesis. One of the most critical steps for the long-term success of these prostheses is the accuracy obtained during the impression procedure, which is affected by factors such as the impression material, implant position, angulation and depth. This paper investigates the feasibility of 3D electromagnetic motion tracking systems as an acquisition method for modeling full-arch implant-supported prostheses. To this end, we propose an implant acquisition method at the patient's mouth and a calibration procedure, based on a 3D electromagnetic tracker, that obtains combined measurements of implant position and angulation, eliminating the use of any impression material. Three calibration algorithms (namely linear interpolation, higher-order polynomial and Hardy multiquadric) were tested to compensate for the electromagnetic tracker distortions introduced by the presence of nearby metals. Moreover, implants from different suppliers were tested to study their impact on tracking accuracy. The calibration methodology and the algorithms employed proved to be a suitable strategy for the evaluation of novel dental impression techniques. However, in the particular case of the evaluated electromagnetic tracking system, the order of magnitude of the obtained errors invalidates its use for the full-arch modeling of implant-supported prostheses.
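As an illustration of one of the three calibration algorithms named above, the sketch below fits a Hardy multiquadric interpolant mapping distorted tracker readings to ground-truth positions measured at calibration points; the shape parameter `c` and all names are assumptions, not the paper's implementation.

```python
import numpy as np

def fit_multiquadric(p_meas, p_true, c=10.0):
    """Fit Hardy-multiquadric weights mapping measured positions to corrections.

    p_meas, p_true : (N, 3) arrays of tracker readings and ground-truth
    positions at the calibration points; c : shape parameter (assumed value).
    """
    d = np.linalg.norm(p_meas[:, None, :] - p_meas[None, :, :], axis=-1)
    phi = np.sqrt(d**2 + c**2)                  # multiquadric basis matrix
    w = np.linalg.solve(phi, p_true - p_meas)   # (N, 3) correction weights
    return w

def apply_correction(p, p_meas, w, c=10.0):
    """Correct new tracker readings p (M, 3) using the fitted weights."""
    d = np.linalg.norm(p[:, None, :] - p_meas[None, :, :], axis=-1)
    return p + np.sqrt(d**2 + c**2) @ w
```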
Abstract:
Background: An accurate percutaneous puncture is essential for the disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target might be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed within 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). Methods: A PPT is first generated from the skin puncture site towards an anatomical target, using the information retrieved by electromagnetic motion tracking sensors coupled to surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound volume around the PPT under GPU processing. Volume hole-filling was performed at different processing time intervals by a tri-linear interpolation method. At spaced time intervals, the volume of the anatomical structures was segmented to ascertain whether any vital structure lies along the PPT and might compromise surgical success. To enhance the visualization of the reconstructed structures, different render transfer functions were used. Results: Real-time US volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views. When using the whole reconstructed volume, 8-15 frames/s were achieved. 3 frames/s were reached when segmentation and detection of structures intersecting the PPT were introduced. Conclusions: The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT to safely and accurately perform the puncture in percutaneous nephrolithotomy.
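A minimal CPU sketch of the tri-linear interpolation used for volume hole-filling: an empty voxel is estimated from its eight filled neighbours. The paper's GPU pipeline is far more involved; this only illustrates the interpolation step itself, with hypothetical names.

```python
import numpy as np

def trilinear(vol, x, y, z):
    """Tri-linearly interpolate volume `vol` at fractional voxel (x, y, z).

    Coordinates must lie in the interior of the volume so that the full
    2x2x2 neighbourhood exists. Illustrative sketch only.
    """
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    c = vol[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]  # 2x2x2 neighbourhood
    c = c[0] * (1 - dx) + c[1] * dx           # collapse along x
    c = c[0] * (1 - dy) + c[1] * dy           # collapse along y
    return c[0] * (1 - dz) + c[1] * dz        # collapse along z
```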
Abstract:
Sensor/actuator networks promise to extend automated monitoring and control into industrial processes. Avionics is one of the prominent domains that can gain greatly from dense sensor/actuator deployments. An aircraft with a smart sensing skin would fulfill the vision of affordability and environmental friendliness by reducing fuel consumption. Achieving these properties is possible by providing an approximate representation of the air flow across the body of the aircraft and suppressing the detected aerodynamic drag. To the best of our knowledge, obtaining an accurate representation of the physical entity is one of the most significant challenges that still exists for dense sensor/actuator networks. This paper offers an efficient way to acquire sensor readings from very large sensor/actuator networks located in a small area (dense networks). It presents LIA, a Linear Interpolation Algorithm that provides two important contributions. First, it demonstrates the effectiveness of employing a transformation matrix to mimic the environmental behavior. Second, it renders a smart solution for updating the previously defined matrix through a procedure called the learning phase. Simulation results reveal that the average relative error of the LIA algorithm can be reduced by as much as 60% by exploiting the transformation matrix.
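The abstract does not detail LIA's transformation matrix, so the sketch below shows one plausible reading: a least-squares matrix fitted during a learning phase that maps readings from a small polled subset of sensors to estimates for the whole dense array. All names are hypothetical.

```python
import numpy as np

def learn_transform(X_sparse, X_full):
    """Learning phase: fit T so that X_sparse @ T approximates X_full.

    X_sparse : (T_steps, k) readings from the k polled sensors;
    X_full   : (T_steps, n) readings from all n sensors (training only).
    Hypothetical reading of the LIA transformation matrix.
    """
    T, *_ = np.linalg.lstsq(X_sparse, X_full, rcond=None)
    return T

def estimate_full(x_sparse, T):
    """Operational phase: reconstruct all n readings from the k polled ones."""
    return x_sparse @ T
```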
Abstract:
Dissertation submitted to obtain the Master's Degree in Mechanical Engineering
Abstract:
INTRODUCTION. Both hypocapnia and hypercapnia can be deleterious to brain-injured patients. Strict PaCO2 control is difficult to achieve because of patient instability and the unpredictable effects of changes in ventilator settings. OBJECTIVE. The aim of this study was to evaluate our ability to comply with a protocol of controlled mechanical ventilation (CMV) targeting a PaCO2 between 35 and 40 mmHg in patients requiring neuro-resuscitation. METHODS. Retrospective analysis of consecutive patients (2005-2011) requiring intracranial pressure (ICP) monitoring for traumatic brain injury (TBI), subarachnoid haemorrhage (SAH), intracranial haemorrhage (ICH) or ischemic stroke (IS). Demographic data, GCS, SAPS II, hospital mortality, PaCO2 and ICP values were recorded. During CMV in the first 48 h after admission, we analyzed the time spent within the PaCO2 target in relation to the presence or absence of intracranial hypertension (ICP > 20 mmHg, by periods of 30 min) (Table 1). We also compared the fraction of time (determined by linear interpolation) spent with normal, low or high PaCO2 in hospital survivors and non-survivors (Wilcoxon, Bonferroni correction, p < 0.05) (Table 2). PaCO2 samples collected during and after apnoea tests were excluded. Results are given as median [IQR]. RESULTS. 436 patients were included (TBI: 51.2 %, SAH: 20.6 %, ICH: 23.2 %, IS: 5.0 %), age: 54 [39-64], SAPS II score: 52 [41-62], GCS: 5 [3-8]. 8744 PaCO2 samples were collected during 15,611 h of CMV. CONCLUSIONS. Despite the high number of PaCO2 samples collected (on average, one sample every 107 min), our results show that patients undergoing CMV for neuro-resuscitation spent less than half of the time within the pre-defined PaCO2 range. During documented intracranial hypertension, hypercapnia was observed 17.4 % of the time. Since non-survivors spent more time with hypocapnia, further analysis is required to determine whether hypocapnia was detrimental per se, or merely reflects increased severity of the brain insult.
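A minimal sketch of the time-in-range computation described in the Methods: PaCO2 is assumed to vary linearly between successive samples, and the fraction of time within 35-40 mmHg is approximated by densely resampling the interpolated signal. The study's exact crossing-point algebra is not given, so this is only one plausible realization.

```python
import numpy as np

def fraction_in_range(t_hours, paco2, lo=35.0, hi=40.0, n=100_000):
    """Fraction of monitored time with lo <= PaCO2 <= hi (mmHg), assuming
    linear variation between samples, as in the abstract's Methods.
    Dense resampling approximates the exact piecewise-linear crossing times.
    """
    t_fine = np.linspace(t_hours[0], t_hours[-1], n)
    v = np.interp(t_fine, t_hours, paco2)
    return float(np.mean((v >= lo) & (v <= hi)))
```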
Abstract:
INTRODUCTION. Reduced cerebral perfusion pressure (CPP) may worsen secondary damage and outcome after severe traumatic brain injury (TBI); however, the optimal management of CPP is still debated. STUDY HYPOTHESIS: We hypothesized that the impact of CPP on outcome is related to the brain tissue oxygen tension (PbtO2) level and that reduced CPP may worsen TBI prognosis when it is associated with brain hypoxia. DESIGN. Retrospective analysis of a prospective database. METHODS. We analyzed 103 patients with severe TBI who underwent continuous PbtO2 and CPP monitoring for an average of 5 days. For each patient, the duration of reduced CPP (< 60 mm Hg) and brain hypoxia (PbtO2 < 15 mm Hg for > 30 min [1]) was calculated with a linear interpolation method, and the relationship between CPP and PbtO2 was analyzed with Pearson's linear correlation coefficient. Outcome at 30 days was assessed with the Glasgow Outcome Score (GOS), dichotomized as good (GOS 4-5) versus poor (GOS 1-3). Multivariable associations with outcome were analyzed with stepwise forward logistic regression. RESULTS. Reduced CPP (n=790 episodes; mean duration 10.2 ± 12.3 h) was observed in 75 (74%) patients and was frequently associated with brain hypoxia (46/75; 61%). The time during which reduced CPP was associated with normal brain oxygen did not differ significantly between patients with poor versus good outcome (8.2 ± 8.3 vs. 6.5 ± 9.7 h; P=0.35). In contrast, the time during which reduced CPP occurred simultaneously with brain hypoxia was longer in patients with poor than in those with good outcome (3.3 ± 7.4 vs. 0.8 ± 2.3 h; P=0.02). Outcome was significantly worse in patients who had both reduced CPP and brain hypoxia (61% had GOS 1-3 vs. 17% in those with reduced CPP but no brain hypoxia; P < 0.01). Patients in whom a positive CPP-PbtO2 correlation (r > 0.3) was found were also more likely to have a poor outcome (69 vs. 31% in patients with no CPP-PbtO2 correlation; P < 0.01). Brain hypoxia was an independent risk factor for poor prognosis (odds ratio for favorable outcome 0.89 [95% CI 0.79-1.00] per hour spent with PbtO2 < 15 mm Hg; P=0.05, adjusted for CPP, age, GCS, Marshall CT and APACHE II). CONCLUSIONS. Low CPP may significantly worsen outcome after severe TBI when it is associated with brain tissue hypoxia. PbtO2-targeted management of CPP may optimize TBI therapy and improve the outcome of head-injured patients.
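A small sketch of the CPP-PbtO2 coupling test mentioned in the Methods: Pearson's correlation over a patient's paired samples, with the r > 0.3 cutoff from the Results used to flag positive coupling. Variable names are assumptions.

```python
import numpy as np

def cpp_pbto2_coupling(cpp, pbto2, r_cut=0.3):
    """Pearson correlation between paired CPP and PbtO2 samples for one
    patient; returns r and whether the positive-coupling cutoff is met."""
    r = float(np.corrcoef(cpp, pbto2)[0, 1])
    return r, r > r_cut
```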
Abstract:
This paper presents a comparative analysis of linear and mixed models for short-term forecasting of a real data series with a high percentage of missing data. The data are the series of significant wave heights registered at regular periods of three hours by a buoy placed in the Bay of Biscay. The series is interpolated with a linear predictor which minimizes the forecast mean square error. The linear models are seasonal ARIMA models, and the mixed models have a linear component and a nonlinear seasonal component. The nonlinear component is estimated by a nonparametric regression of data versus time. Short-term forecasts, no more than two days ahead, are of interest because they can be used by the port authorities to notify the fleet. Several models are fitted and compared by their forecasting behavior.
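The paper interpolates the gaps with an MSE-optimal linear predictor; as a much simpler stand-in, the sketch below fills missing 3-hourly wave heights by plain linear interpolation before any model fitting. All values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 240.0, 3.0)                  # hours, 3-h sampling
h = 2.0 + np.sin(2 * np.pi * t / 24.0) + 0.3 * rng.standard_normal(t.size)
h[rng.random(t.size) < 0.3] = np.nan            # simulate a high missing rate
obs = ~np.isnan(h)
h_filled = np.interp(t, t[obs], h[obs])         # simple gap-filling stand-in
```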
Abstract:
Introduction: Low brain tissue oxygen pressure (PbtO2) is associated with worse outcome in patients with severe traumatic brain injury (TBI). However, it is unclear whether brain tissue hypoxia is merely a marker of injury severity or a predictor of prognosis independent of intracranial pressure (ICP) and injury severity. Hypothesis: We hypothesized that brain tissue hypoxia was an independent predictor of outcome in patients with severe TBI, irrespective of elevated ICP and of the severity of cerebral and systemic injury. Methods: This observational study was conducted at the Neurological ICU, Hospital of the University of Pennsylvania, an academic level I trauma center. Patients admitted with severe TBI who had PbtO2 and ICP monitoring were included in the study. PbtO2, ICP, mean arterial pressure (MAP) and cerebral perfusion pressure (CPP = MAP - ICP) were monitored continuously and recorded prospectively every 30 min. Using linear interpolation, the duration and cumulative dose (area under the curve, AUC) of brain tissue hypoxia (PbtO2 < 15 mm Hg), elevated ICP (> 20 mm Hg) and low CPP (< 60 mm Hg) were calculated, and the association with outcome at hospital discharge, dichotomized as good (Glasgow Outcome Score [GOS] 4-5) vs. poor (GOS 1-3), was analyzed. Results: A total of 103 consecutive patients, monitored for an average of 5 days, were studied. Brain tissue hypoxia was observed in 66 (64%) patients, even while ICP was < 20 mm Hg and CPP was > 60 mm Hg (72 +/- 39% and 49 +/- 41% of brain hypoxic time, respectively). Compared with patients with good outcome, those with poor outcome had a longer duration of brain hypoxia (1.7 +/- 3.7 vs. 8.3 +/- 15.9 hrs, P<0.01), as well as a longer duration (11.5 +/- 16.5 vs. 21.6 +/- 29.6 hrs, P=0.03) and a greater cumulative dose (56 +/- 93 vs. 143 +/- 218 mm Hg*hrs, P<0.01) of elevated ICP. By multivariable logistic regression, admission Glasgow Coma Scale (OR 0.83, 95% CI: 0.70-0.99, P=0.04), Marshall CT score (OR 2.42, 95% CI: 1.42-4.11, P<0.01), APACHE II (OR 1.20, 95% CI: 1.03-1.43, P=0.03), and the duration of brain tissue hypoxia (OR 1.13; 95% CI: 1.01-1.27; P=0.04) were all significantly associated with poor outcome. No independent association was found between the AUC of elevated ICP and outcome (OR 1.01, 95% CI 0.97-1.02, P=0.11) in our prospective cohort. Conclusions: In patients with severe TBI, brain tissue hypoxia is frequent, despite normal ICP and CPP, and is associated with poor outcome, independent of intracranial hypertension and the severity of cerebral and systemic injury. Our findings indicate that PbtO2 is a strong physiologic prognostic marker after TBI. Further study is warranted to examine whether PbtO2-directed therapy improves outcome in severely head-injured patients.
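A minimal sketch of the "cumulative dose" (AUC) computation described in the Methods: the 30-min samples are joined by linear interpolation and the area above the threshold is integrated by the trapezoidal rule. Names and the resampling density are assumptions.

```python
import numpy as np

def dose_above(t_hours, signal, thr):
    """Cumulative dose (AUC above `thr`, in units*h) of a signal sampled
    every 30 min, joined by linear interpolation as in the abstract.
    E.g. thr=20 for ICP (mm Hg); a sign-flipped variant handles CPP < 60.
    """
    t_fine = np.linspace(t_hours[0], t_hours[-1], 100_000)
    v = np.interp(t_fine, t_hours, signal)
    excess = np.clip(v - thr, 0.0, None)
    # trapezoidal integration, written out to avoid version-specific APIs
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t_fine)))
```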
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered to estimate the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised; both employ the difference between approximate and exact medoid solutions, but they differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
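For the Global Error Model, here is a minimal sketch assuming the medoid errors are linearly interpolated over the realizations' coordinates in the kernel (distance) space; the use of `scipy.interpolate.griddata` and the nearest-medoid fallback outside the medoids' convex hull are our assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def global_error_model(z_medoids, err_medoids, z_all):
    """Global Error Model sketch: linearly interpolate medoid errors
    (exact minus approximate response) over the mapped coordinates of
    all realizations, regardless of cluster membership.

    z_medoids : (m, d) medoid coordinates in the mapped space;
    err_medoids : (m,) medoid errors; z_all : (N, d) all realizations.
    """
    err = griddata(z_medoids, err_medoids, z_all, method='linear')
    # outside the medoids' convex hull, fall back to the nearest medoid
    nan = np.isnan(err)
    if nan.any():
        err[nan] = griddata(z_medoids, err_medoids, z_all[nan],
                            method='nearest')
    return err
```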
Abstract:
The unifying objective of Phases I and II of this study was to determine the feasibility of the post-tensioning strengthening method and to implement the technique on two composite bridges in Iowa. Following the completion of these two phases, Phase III was undertaken and is documented in this report. The basic objectives of Phase III were to further monitor bridge behavior (both during and after post-tensioning) and to develop a practical design methodology for the strengthening system under investigation. Specific objectives were: to develop strain and force transducers to facilitate the collection of field data; to investigate further the existence and effects of end restraint on the post-tensioning process; to determine the amount of post-tensioning force loss that occurred between the initial testing and the retesting of the existing bridges; to determine the significance of any temporary temperature-induced post-tensioning force change; and to develop a simplified design methodology incorporating variables such as span length, angle of skew, beam spacing, and concrete strength. Experimental field results obtained during Phases II and III were compared to the theoretical results and to each other. Conclusions from this research are as follows: (1) Strengthening single-span composite bridges by post-tensioning is a viable, economical strengthening technique. (2) Behavior of both bridges was similar to the behavior observed during the field tests conducted under Phase II. (3) The strain transducers were very accurate at measuring mid-span strain. (4) The force transducers gave excellent results under laboratory conditions, but were found to be less effective when used in actual bridge tests. (5) Loss of post-tensioning force due to temperature effects in any particular steel-beam post-tensioning tendon system was found to be small. (6) Loss of post-tensioning force over a two-year period was minimal. (7) Significant end restraint was measured in both bridges, caused primarily by reinforcing steel being continuous from the deck into the abutments. This end restraint reduced the effectiveness of the post-tensioning but also reduced midspan strains due to truck loadings. (8) The SAP IV finite element model is capable of accurately modeling the behavior of a post-tensioned bridge, if guardrails and end restraints are included in the model. (9) Post-tensioning distribution should be separated into distributions for the axial force and moment components of an eccentric post-tensioning force. (10) Skews of 45 deg or less have a minor influence on post-tensioning distribution. (11) For typical Iowa three-beam and four-beam composite bridges, simple regression-derived formulas for force and moment fractions can be used to estimate post-tensioning distribution at midspan. At other locations, a simple linear interpolation gives approximately correct results (illustrated in the sketch below). (12) A simple analytical model can accurately estimate the flexural strength of an isolated post-tensioned composite beam.
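Regarding conclusion (11), a minimal sketch of the suggested linear interpolation of a post-tensioning force fraction between a support and the regression-derived midspan value; `ff_end` and the example numbers are purely illustrative, not values from the report.

```python
def force_fraction(x, span, ff_mid, ff_end):
    """Linearly interpolate the force fraction at location x along a span,
    between an assumed support value ff_end and the regression-derived
    midspan value ff_mid (conclusion 11). Illustrative only."""
    s = min(x, span - x) / (span / 2.0)   # 0 at supports, 1 at midspan
    return ff_end + s * (ff_mid - ff_end)

# Example with made-up fractions for a 15 m span
print(force_fraction(5.0, 15.0, ff_mid=0.40, ff_end=0.30))
```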
Abstract:
The use of chemicals is a critical part of a pro-active winter maintenance program. However, ensuring that the correct chemicals are used is a challenge. On the one hand, budgets are limited, and thus the price of chemicals is a major concern. On the other, the performance of chemicals, especially at lower pavement temperatures, is not always assured. Two chemicals that are used extensively by the Iowa Department of Transportation (Iowa DOT) are sodium chloride (or salt) and calcium chloride. While calcium chloride can be effective at much lower temperatures than salt, it is also considerably more expensive. Costs for a gallon of salt brine are typically in the range of $0.05 to $0.10, whereas calcium chloride brine may cost $1.00 or more per gallon. These costs are of course subject to market forces and will thus change from year to year. The idea of mixing different winter maintenance chemicals is by no means new, and from general discussions it appears that many winter maintenance personnel have from time to time mixed up a jar of chemicals and done some work around the yard to see whether or not their new mix “works.” There are many stories about the mixture turning to “mayonnaise” (or, more colorfully, to “snot”), suggesting that mixing chemicals may give rise to problems, most likely due to precipitation. Further, what constitutes a mixture “working” in this context is a topic of considerable discussion. In this study, mixtures of salt brine and calcium chloride brine were examined to determine their ice melting capability and their freezing point. Using the results from these tests, a linear interpolation model of the ice melting capability of mixtures of the two brines was developed (a minimal sketch of this model follows the abstract). Using a criterion based upon the ability of the mixture to melt a certain thickness of ice or snow (expressed as a thickness of melt-water equivalent), the model was extended to derive a material cost per lane mile for the full range of possible mixtures as a function of temperature. This allowed for a comparison of the performance of the various mixtures. From the point of view of melting capacity, mixing calcium chloride brine with salt brine appears to be effective only at very low temperatures (around 0 °F and below). However, the approach described herein considers only the material costs, and not the application costs or aspects of mixture performance other than melting capacity. While a unit quantity of calcium chloride is considerably more expensive than a unit quantity of sodium chloride, it also melts considerably more ice. In other words, to achieve the same result, much less calcium chloride brine is required than sodium chloride brine. This is important when considering application costs, because it means that a single application vehicle (for example, a brine-dispensing trailer towed behind a snowplow) can cover many more lane miles with calcium chloride brine than with salt brine before needing to refill. Calculating exactly how much could be saved in application costs requires an optimization of the routes used in the application of liquids for anti-icing, which is beyond the scope of the current study. However, this may be an area that agencies wish to pursue in future investigations. In discussions with winter maintenance personnel who use mixtures of sodium chloride and calcium chloride, it is evident that one reason for doing so is that the mixture is much more persistent (i.e., it stays longer on the road surface) than straight salt brine. Operationally this persistence is very valuable, but at present there are no established methods for measuring the persistence of a chemical on a pavement. In conclusion, the study presents a method that allows an agency to determine the material costs of using various mixtures of salt brine and calcium chloride brine. The method is based upon the requirement of melting a certain quantity of snow or ice at the ice-pavement interface, and on how much of a chemical or mixture of chemicals is required to do so.
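A minimal sketch of the cost comparison enabled by the study's linear interpolation model: the melting capacity of a blend is interpolated linearly between the two pure brines at the temperature of interest, and the material cost follows from the volume needed to melt a required melt-water equivalent. All numeric values below are placeholders, not the study's measurements.

```python
import numpy as np

def material_cost(p_cacl2, melt_salt, melt_cacl2, melt_needed,
                  cost_salt=0.08, cost_cacl2=1.00):
    """Material cost ($) of treating one lane mile with a brine blend.

    p_cacl2 : fraction of calcium chloride brine in the mix (0..1);
    melt_salt, melt_cacl2 : ice melted per gallon of each pure brine at the
        pavement temperature of interest (same units as melt_needed);
    melt_needed : melt-water equivalent that must be melted per lane mile;
    per-gallon costs are the rough figures quoted in the abstract.
    The linear blend mirrors the study's interpolation model; every number
    here is a placeholder.
    """
    melt_mix = (1.0 - p_cacl2) * melt_salt + p_cacl2 * melt_cacl2
    gallons = melt_needed / melt_mix
    return gallons * ((1.0 - p_cacl2) * cost_salt + p_cacl2 * cost_cacl2)

# Sweep the blend fraction at one (hypothetical) temperature
for p in np.linspace(0.0, 1.0, 5):
    print(f"CaCl2 fraction {p:.2f}: ${material_cost(p, 1.0, 3.0, 50.0):.2f}")
```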