992 results for Code-phase estimation
Abstract:
Rationale: NAVA is an assisted ventilatory mode that uses the electrical activity of the diaphragm (Edi) to trigger and cycle the ventilator, and to offer inspiratory assistance in proportion to patient effort. Since Edi varies from breath to breath, airway pressure and tidal volume also vary according to the patient's breathing pattern. Our objective was to compare the variability of NAVA with that of PSV in mechanically ventilated patients during the weaning phase. Methods: We analyzed the data collected for a clinical trial comparing PSV and NAVA during spontaneous breathing trials, using PSV with PS of 5 cmH2O and NAVA with the NAVA level titrated to generate a peak airway pressure equivalent to PSV of 5 cmH2O (NCT01137271). We captured flow, airway pressure, and Edi at 100 Hz from the ventilator using dedicated software (Servo Tracker v2, Maquet, Sweden), and processed the cycles using MATLAB (MathWorks, USA) code. The code automatically detects the tidal volume (Vt), respiratory rate (RR), Edi, and airway pressure (Paw) on a breath-by-breath basis for each ventilatory mode. We also calculated the coefficient of variation (standard deviation, SD, divided by the mean). Results: We analyzed data from eleven patients. The mean Vt was similar in both modes (370±70 for NAVA and 347±77 for PSV); the RR was 26±6 for NAVA and 26±7 for PSV. Paw was higher for NAVA than for PSV (14±1 vs 11±0.4, p=0.0033), and Edi was similar for both modes (12±8 for NAVA and 11±6 for PSV). The variability of the respiratory pattern, assessed with the coefficient of variation, was larger for NAVA than for PSV for Vt (23%±1% vs 15%±1%, p=0.03) and Paw (17%±1% vs 1%±0.1%, p=0.0033), but not for RR (21%±1% vs 16%±8%, p=0.050) or Edi (33%±14% vs 39%±16%, p=0.07). Conclusion: The variability of the breathing pattern is high during spontaneous breathing trials independent of the ventilatory mode. This variability results in variability of airway pressure and tidal volume, which are higher on NAVA than on PSV. Our results suggest that NAVA better reflects the normal variability of the breathing pattern during assisted mechanical ventilation.
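The coefficient of variation used above is simply the breath-by-breath standard deviation divided by the mean. A minimal sketch of that computation (in Python rather than the authors' MATLAB code, with made-up tidal volumes for illustration):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = standard deviation divided by the mean, as defined in the abstract."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# Hypothetical breath-by-breath tidal volumes (mL) from one patient:
vt_breaths = np.array([350, 420, 310, 390, 370, 450, 330])
print(f"CV of Vt: {coefficient_of_variation(vt_breaths):.1%}")
```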
Abstract:
The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. In recent years, several methodologies have been developed to evaluate the hazard associated with this complex phenomenon, whose velocity, impact force, and poor temporal predictability are responsible for the high hazard level. This research focuses on the depositional phase of debris flows through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological, and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and assess sources of error in order to understand how the empirical uncertainties influence the predictions; on the other hand, the research addresses the possibility of performing hazard analysis starting from the identification of debris-flow-susceptible catchments and the definition of their activity level. 25 well-documented debris flow events have been back-analyzed with the model DFlowz (Berti and Simoni, 2007): derived from the implementation of empirical relations between event volume and the planimetric and cross-section inundated areas, the code delineates the areas affected by an event using information about volume, preferential flow path, and a digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the prediction and involves calibrating the model with factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based have been verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which necessarily have to be taken into account and depend mostly on errors in the estimation of deposited volume. In addition, in order to test the predictive capabilities of physically based models, some events have been simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), takes a one-phase approach based on Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth, and the flow velocity in each cell of the input DTM. Regarding hazard analysis based on watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing the current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transport, and deposition, as well as seasonal patterns of occurrence, and to examine the role played by bedrock geology in sediment transfer.
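DFlowz is built on empirical scaling relations between event volume and inundated areas. A minimal sketch of that type of power-law relation (the coefficients below are illustrative placeholders, not the calibrated values used by DFlowz):

```python
# Volume-area scaling of the kind used in semi-empirical debris-flow runout
# models (e.g., Berti and Simoni, 2007). Coefficients are assumed for
# illustration only.
K_PLANIMETRIC = 17.0    # hypothetical coefficient, planimetric inundated area
K_CROSS_SECTION = 0.08  # hypothetical coefficient, cross-section inundated area

def inundated_areas(volume_m3):
    """Planimetric and cross-sectional inundated areas (m^2) from event volume (m^3)."""
    planimetric = K_PLANIMETRIC * volume_m3 ** (2.0 / 3.0)
    cross_section = K_CROSS_SECTION * volume_m3 ** (2.0 / 3.0)
    return planimetric, cross_section

a_plan, a_cs = inundated_areas(10_000.0)  # a 10,000 m^3 event
print(f"planimetric: {a_plan:.0f} m^2, cross-section: {a_cs:.1f} m^2")
```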
Abstract:
KIVA is an open-source Computational Fluid Dynamics (CFD) code capable of computing transient two- and three-dimensional chemically reactive fluid flows with sprays. The latest version in the family of KIVA codes is KIVA-4, which can handle unstructured meshes. This project focuses on the implementation of a Conjugate Heat Transfer (CHT) code in KIVA-4. A previous version of KIVA with conjugate heat transfer, developed at Michigan Technological University by Egel Urip, is used in this project. The first phase of the project studied the differences in code structure between the previous version of KIVA and KIVA-4, which was the most challenging part of the project. The second phase involved reverse engineering: the CHT code in the previous version was extracted and implemented in KIVA-4 according to the new code structure. Validation of the implemented code was performed using a 4-valve pentroof engine case. A solid cylinder wall surrounding three quarters of the engine cylinder was built using GRIDGEN, and the heat transfer to the solid wall during one engine cycle (0-720 crank angle degrees) was compared with the reference result. The reference results are simply the same engine case run in the previous version with the original code developed by Egel. The results of the current code agree closely with the reference results, which verifies the successful implementation of the CHT code in KIVA-4.
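As a rough illustration of what a conjugate heat transfer coupling does, here is a minimal 1D sketch: an explicit conduction update in the solid wall driven by a gas-side convective flux. All material properties and coupling values are assumed; this is not the CHT code implemented in KIVA-4:

```python
import numpy as np

# 1D explicit conduction in a solid wall slab, coupled to a gas-side flux.
alpha, k_solid = 1.2e-5, 45.0  # solid diffusivity (m^2/s), conductivity (W/m/K), assumed
h_gas, T_gas = 500.0, 1200.0   # gas-side convection coefficient (W/m^2/K), gas temp (K)
dx, dt, nsteps = 1e-3, 1e-4, 10_000

T = np.full(20, 400.0)         # initial solid temperature profile (K)
r = alpha * dt / dx**2         # must be <= 0.5 for explicit stability

for _ in range(nsteps):
    q_wall = h_gas * (T_gas - T[0])              # flux from gas into the wall (W/m^2)
    ghost = T[1] + 2.0 * dx * q_wall / k_solid   # mirror node enforcing the flux BC
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T_new[0] = T[0] + r * (ghost - 2.0 * T[0] + T[1])
    T = T_new                                    # outer boundary held at 400 K

print(f"wall surface temperature after {nsteps * dt:.1f} s: {T[0]:.1f} K")
```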
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including core hysteresis, the λ-i saturation characteristic, capacitive effects, and the frequency dependency of winding resistance and core loss. Steady-state excitation and de-energization and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more correct than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and transformer parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage and high-power three-phase transformers would be helpful.
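As one example of the kind of parameter estimation involved when data are incomplete, the sketch below fits a Frolich-type saturation characteristic λ(i) = i/(a + b|i|) to a few excitation points. The model form and the data are assumptions chosen for illustration, not the thesis's actual procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def frolich(i, a, b):
    """Frolich-type saturation curve: flux linkage as a function of current."""
    return i / (a + b * np.abs(i))

# Illustrative excitation measurements (current in A, flux linkage in Wb-turns):
i_meas = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
lam_meas = np.array([0.09, 0.38, 0.62, 0.85, 1.05, 1.15])

(a, b), _ = curve_fit(frolich, i_meas, lam_meas, p0=(1.0, 0.5))
print(f"fitted a={a:.3f}, b={b:.3f}; saturation asymptote ~ {1 / b:.2f} Wb-turns")
```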
Abstract:
A basic approach to studying an NVH problem is to break the system down into three basic elements: source, path, and receiver. While the receiver (response) and the transfer path can be measured, it is difficult to measure the source (forces) acting on the system. It therefore becomes necessary to predict these forces in order to know how they influence the responses, which requires inverting the transfer path. The Singular Value Decomposition (SVD) method is used to decompose the transfer path matrix into its principal components, which is required for the inversion. The usual approach to force prediction rejects the small singular values obtained during the SVD by setting a threshold, as these small values dominate the inverse matrix. This thresholding risks rejecting important singular values, severely affecting the force prediction. The new approach discussed in this report looks at the column space of the transfer path matrix, which is the basis for the predicted response. The response participation is an indication of how the small singular values influence the force participation. The ability to accurately reconstruct the response vector is important for establishing confidence in the predicted force vector. The goal of this report is to suggest, through examples, a solution that is mathematically feasible, physically meaningful, and numerically efficient. This understanding adds new insight into the effects of the current code and how to apply these algorithms and this understanding to new codes.
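A minimal sketch of the underlying computation: invert the transfer path matrix H via a truncated SVD pseudoinverse, then check how well the retained column space reconstructs the measured response. The matrix sizes, threshold, and data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((10, 4))       # transfer path matrix: 10 responses, 4 forces
f_true = np.array([1.0, -2.0, 0.5, 3.0])
y = H @ f_true + 0.01 * rng.standard_normal(10)  # noisy measured responses

U, s, Vt = np.linalg.svd(H, full_matrices=False)
threshold = 0.05 * s.max()             # reject singular values below a threshold
s_inv = np.where(s > threshold, 1.0 / s, 0.0)
f_est = Vt.T @ (s_inv * (U.T @ y))     # truncated pseudoinverse applied to y

# Column-space check: how much of y the retained components can reconstruct.
y_hat = H @ f_est
print("relative residual:", np.linalg.norm(y - y_hat) / np.linalg.norm(y))
```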
Abstract:
Pulse wave velocity (PWV) is a surrogate of arterial stiffness and represents a non-invasive marker of cardiovascular risk. The non-invasive measurement of PWV requires tracking the arrival time of pressure pulses recorded in vivo, commonly referred to as pulse arrival time (PAT). In the state of the art, PAT is estimated by identifying a characteristic point of the pressure pulse waveform. This paper demonstrates that for ambulatory scenarios, where signal-to-noise ratios are below 10 dB, the repeatability of PAT measurements obtained through characteristic point identification degrades drastically. Hence, we introduce a novel family of PAT estimators based on the parametric modeling of the anacrotic phase of a pressure pulse. In particular, we propose a parametric PAT estimator (TANH) that exhibits a high correlation with the Complior® characteristic point D1 (CC = 0.99), increases noise robustness, and reduces by a factor of five the number of heartbeats required to obtain reliable PAT measurements.
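A minimal sketch of the parametric idea, assuming a hyperbolic-tangent model of the anacrotic upstroke whose fitted onset parameter serves as the arrival-time estimate; the paper's exact model and fitting procedure may differ, and the waveform below is synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_model(t, amplitude, t0, width, baseline):
    """Parametric model of the rising (anacrotic) phase of a pressure pulse."""
    return baseline + 0.5 * amplitude * (1.0 + np.tanh((t - t0) / width))

t = np.linspace(0.0, 0.4, 400)                         # time axis, s
pulse = tanh_model(t, 40.0, 0.15, 0.02, 80.0)          # clean synthetic upstroke
noisy = pulse + 2.0 * np.random.default_rng(1).standard_normal(t.size)

popt, _ = curve_fit(tanh_model, t, noisy, p0=(30.0, 0.1, 0.05, 75.0))
print(f"estimated arrival time t0 = {popt[1] * 1000:.1f} ms")
```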
Abstract:
We present in this paper several contributions on collision detection optimization centered on hardware performance. We focus on the broad phase, the first step of the collision detection process, and propose three new ways of parallelizing the well-known Sweep and Prune algorithm. We first developed a multi-core model that takes into account the number of available cores. The multi-core architecture enables us to distribute the geometric computations using multi-threading. Critical writing sections and thread idling have been minimized by introducing new data structures for each thread. Programming with directives, as in OpenMP, appears to be a good compromise for code portability. We then propose a new GPU-based algorithm, also based on Sweep and Prune, that has been adapted to multi-GPU architectures. Our technique is based on a spatial subdivision method used to distribute computations among GPUs. Results show that a significant speed-up can be obtained by going from 1 to 4 GPUs in a large-scale environment.
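For reference, a minimal sequential sketch of the Sweep and Prune broad phase along one axis; the multi-core and GPU variants in the paper distribute exactly this kind of work:

```python
# Sweep and Prune on one axis: boxes are (id, min_x, max_x); output is the
# list of candidate overlapping pairs for the narrow phase.
def sweep_and_prune(boxes):
    events = sorted(boxes, key=lambda b: b[1])   # sort by interval start
    active, pairs = [], []
    for box_id, lo, hi in events:
        # Drop intervals that ended before this one starts.
        active = [(i, h) for (i, h) in active if h >= lo]
        # Every still-active interval overlaps the new one on this axis.
        pairs.extend((i, box_id) for (i, _) in active)
        active.append((box_id, hi))
    return pairs

boxes = [("A", 0.0, 2.0), ("B", 1.5, 3.0), ("C", 2.5, 4.0), ("D", 5.0, 6.0)]
print(sweep_and_prune(boxes))   # [('A', 'B'), ('B', 'C')]
```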
Abstract:
We tested a set of surface common mid-point (CMP) ground penetrating radar (GPR) surveys combined with elevation rods (to monitor surface deformation) and gas flux measurements to investigate in-situ biogenic gas dynamics and ebullition events in a northern peatland (raised bog). The main findings are: (1) changes in the two-way travel time from the surface to prominent reflectors allow estimation of average gas contents and the evolution of free-phase gas (FPG); (2) peat surface deformation and gas flux measurements are strongly consistent with GPR-estimated changes in FPG content over time; (3) rapid decreases in atmospheric pressure are associated with increased gas flux; and (4) single ebullition events can induce releases of methane much larger (up to 192 g/m²) than fluxes reported by others. These results indicate that GPR is a useful tool for assessing the spatial distribution, temporal variation, and volume of biogenic gas deposits in peatlands.
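To illustrate finding (1), the sketch below converts a two-way travel time into a volumetric gas content using a simple CRIM-type dielectric mixing model. The end-member permittivities, porosity, and reflector depth are assumed values, and the paper's exact petrophysical model may differ:

```python
import numpy as np

C = 0.2998             # speed of light, m/ns
DEPTH = 2.0            # depth of the tracked reflector, m (assumed)
EPS_WATER, EPS_GAS, EPS_SOLID = 80.0, 1.0, 4.0  # relative permittivities (assumed)
POROSITY = 0.92        # typical for peat (assumed)

def gas_content(twt_ns):
    """Volumetric gas content from two-way travel time (ns) via the CRIM model:
    sqrt(eps) = (1-phi)*sqrt(eps_s) + (phi-g)*sqrt(eps_w) + g*sqrt(eps_g)."""
    v = 2.0 * DEPTH / twt_ns           # average velocity, m/ns
    sqrt_eps_bulk = C / v              # sqrt of bulk relative permittivity
    phi = POROSITY
    num = sqrt_eps_bulk - (1 - phi) * np.sqrt(EPS_SOLID) - phi * np.sqrt(EPS_WATER)
    return num / (np.sqrt(EPS_GAS) - np.sqrt(EPS_WATER))

print(f"gas content at 101 ns: {gas_content(101.0):.1%}")   # ~10% for these values
```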
Abstract:
Introduction: The aim of this study was to determine which single measurement on post-mortem cardiac MR reflects actual heart weight as measured at autopsy, assess the intra- and inter-observer reliability of MR measurements, derive a formula to predict heart weight from MR measurements, and test the accuracy of the formula in prospectively predicting heart weight. Materials and methods: 53 human cadavers underwent post-mortem cardiac MR and forensic autopsy. In Phase 1, left ventricular area and wall thickness were measured on short axis and four chamber view images of 29 cases. All measurements were correlated to heart weight at autopsy using linear regression analysis. In Phase 2, single left ventricular area measurements on four chamber view images (LVA_4C) from 24 cases were used to predict heart weight at autopsy based on equations derived during Phase 1. The intra-class correlation coefficient (ICC) was used to determine inter- and intra-reader agreement. Results: Heart weight correlates strongly with LVA_4C (r=0.78; p<0.001). Intra-reader and inter-reader reliability was excellent for LVA_4C (ICC=0.81–0.91, p<0.001, and ICC=0.90, p<0.001, respectively). A simplified formula for heart weight (weight [g] ≈ LVA_4C [mm²] × 0.11) was derived based on linear regression analysis. Conclusions: This study shows that single circumferential area measurements of the left ventricle in the four chamber view on post-mortem cardiac MR reflect actual heart weight as measured at autopsy. These measurements yield excellent intra- and inter-reader reliability and can be used to predict heart weight prior to autopsy or to give a reasonable estimate of heart weight in cases where autopsy is not performed.
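The simplified formula is directly usable; a minimal sketch (the example area is made up):

```python
def heart_weight_from_lva4c(lva_4c_mm2):
    """Predicted heart weight (g) from the left ventricular area on the
    four-chamber view (mm^2), using the simplified formula in the abstract:
    weight [g] ~= LVA_4C [mm^2] * 0.11."""
    return 0.11 * lva_4c_mm2

print(heart_weight_from_lva4c(3200.0))  # e.g., 3200 mm^2 -> ~352 g
```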
Abstract:
The need for timely population data for health planning and indicators of need has increased the demand for population estimates. The data required to produce estimates are difficult to obtain, and the process is time consuming. Estimation methods that require less effort and fewer data are needed. The structure preserving estimator (SPREE) is a promising technique not previously used to estimate county population characteristics. This study first uses traditional regression estimation techniques to produce estimates of county population totals. Then the structure preserving estimator, using the results produced in the first phase as constraints, is evaluated. Regression methods are among the most frequently used demographic methods for estimating populations. These methods use symptomatic indicators to predict population change. This research evaluates three regression methods to determine which produces the best estimates based on the 1970 to 1980 indicators of population change. Strategies for stratifying the data to improve the ability of the methods to predict change were tested. Difference-correlation using PMSA strata produced the equation that fit the data best. Regression diagnostics were used to evaluate the residuals. The second phase of this study evaluates the use of the structure preserving estimator in making estimates of population characteristics. The SPREE estimation approach uses existing data (the association structure) to establish the relationship between the variable of interest and the associated variable(s) at the county level. Marginals at the state level (the allocation structure) supply the current relationship between the variables. The full allocation structure model uses current estimates of county population totals to limit the magnitude of the county estimates; the limited full allocation structure model has no constraints on county size. The 1970 county census age-gender population provides the association structure; the allocation structure is the 1980 state age-gender distribution. The full allocation model produces good estimates of the 1980 county age-gender populations. An unanticipated finding of this research is that the limited full allocation model produces estimates of county population totals that are superior to those produced by the regression methods. The full allocation model is used to produce estimates of 1986 county population characteristics.
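A minimal sketch of the raking (iterative proportional fitting) idea that underlies structure-preserving estimation: adjust a historical county-by-group table (the association structure) so its margins match current county totals and state-level group totals (the allocation structure). All numbers are illustrative:

```python
import numpy as np

def ipf(table, row_targets, col_targets, iters=100, tol=1e-9):
    """Iterative proportional fitting: rescale rows and columns alternately
    until the table matches both sets of target marginals."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]  # match county totals
        t *= col_targets / t.sum(axis=0)             # match age-gender totals
        if np.allclose(t.sum(axis=1), row_targets, rtol=tol):
            break
    return t

census_1970 = np.array([[120.0, 130.0], [300.0, 280.0], [80.0, 95.0]])  # counties x groups
county_totals_1980 = np.array([270.0, 640.0, 200.0])    # current county estimates
state_marginals_1980 = np.array([560.0, 550.0])         # current state group totals
print(ipf(census_1970, county_totals_1980, state_marginals_1980).round(1))
```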
Abstract:
The Phase I clinical trial is considered the "first in human" study in medical research to examine the toxicity of a new agent. It determines the maximum tolerable dose (MTD) of a new agent, i.e., the highest dose at which toxicity is still acceptable. Several phase I clinical trial designs have been proposed in the past 30 years. The well-known standard method, the so-called 3+3 design, is widely accepted by clinicians since it is the easiest to implement and does not need statistical calculation. The continual reassessment method (CRM), a design that uses Bayesian methods, has been rising in popularity over the last two decades. Several variants of the CRM design have also been suggested in the statistical literature. Rolling six is a newer method, introduced in pediatric oncology in 2008, that claims to shorten the trial duration compared to the 3+3 design. The goal of the present research was to simulate clinical trials and compare these phase I clinical trial designs. The patient population was created by the discrete event simulation (DES) method. The characteristics of the patients were generated from several distributions, with parameters derived from a review of historical phase I clinical trial data. Patients were then selected and enrolled in clinical trials, each of which used the 3+3 design, the rolling six, or the CRM design. Five dose-toxicity scenarios were used to compare the performance of the phase I clinical trial designs. One thousand trials were simulated per phase I clinical trial design per dose-toxicity scenario. The results showed that the rolling six design was not superior to the 3+3 design in terms of trial duration. The time to trial completion was comparable between the rolling six and the 3+3 design; however, both shortened the duration compared to the two CRM designs. Both CRMs were superior to the 3+3 design and the rolling six in accuracy of MTD estimation. The 3+3 design and rolling six tended to assign more patients to undesirably low dose levels. The toxicities were slightly greater in the CRMs.
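A minimal sketch of what simulating one of these designs looks like, using a simplified 3+3 escalation rule under an assumed dose-toxicity scenario (not one of the five scenarios studied):

```python
import numpy as np

rng = np.random.default_rng(42)
p_tox = [0.05, 0.10, 0.20, 0.35, 0.50]   # hypothetical true DLT probability per dose

def three_plus_three(p_tox):
    """Simplified 3+3 rule: escalate on 0/3 or 1/6 DLTs, stop on >= 2 DLTs."""
    dose = 0
    while True:
        dlt = rng.binomial(3, p_tox[dose])       # DLTs in the first cohort of 3
        if dlt == 1:                             # expand to a second cohort of 3
            dlt += rng.binomial(3, p_tox[dose])
        if dlt >= 2:                             # too toxic: MTD is the dose below
            return dose - 1                      # -1 means no tolerable dose found
        if dose == len(p_tox) - 1:               # top dose reached and tolerated
            return dose
        dose += 1

mtds = [three_plus_three(p_tox) for _ in range(1000)]
freq = np.bincount(np.array(mtds) + 1, minlength=len(p_tox) + 1) / 1000
print("P(no MTD), then P(MTD = dose 0..4):", freq)
```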
Abstract:
The Phase I clinical trial is mainly designed to determine the maximum tolerated dose (MTD) of a new drug. Optimization of phase I trial design is crucial to minimize the number of enrolled patients exposed to unsafe dose levels and to provide reliable information to the later phases of clinical trials. Although it has been criticized for inefficient MTD estimation, the traditional 3+3 method remains dominant in practice due to its simplicity and conservative estimation. Many new designs have been shown to generate more credible MTD estimates, such as the continual reassessment method (CRM). Despite its accepted better performance, the CRM design is still not widely used in real trials. Several factors contribute to the difficulty of CRM adoption in practice. First, CRM is not widely accepted by regulatory agencies such as the FDA in terms of safety: it is considered less conservative and tends to expose more patients to doses above the MTD than the traditional design. Second, CRM is relatively complex and not intuitive for clinicians to fully understand. Third, the CRM method takes much more time and requires statistical experts and computer programs throughout the trial. The current situation is that clinicians still tend to follow the trial process they are comfortable with, and this is not likely to change in the near future. Based on this situation, our motivation was to improve the accuracy of MTD selection while following the procedure of the traditional design to maintain simplicity. We observed that in the 3+3 method, the dose transition and the MTD determination are relatively independent, so we proposed to separate the two stages. The dose transition rule remained the same as in the 3+3 method. After obtaining the toxicity information from the dose transition stage, we applied an isotonic transformation to ensure a monotonically increasing order before selecting the optimal MTD. To compare the operating characteristics of the proposed isotonic method and the other designs, we carried out 10,000 simulated trials under different dose-setting scenarios, comparing the design characteristics of the isotonic modified method with the standard 3+3 method, CRM, the biased coin design (BC), and the k-in-a-row design (KIAW). The isotonic modified method improved on the MTD estimation of the standard 3+3 in 39 out of 40 scenarios. The improvement is much greater when the target is 0.3 rather than 0.25. The modified design is also competitive with the other selected methods. A CRM method performed better in general but was not as stable as the isotonic method across the different dose settings. The results demonstrate that our proposed isotonic modified method is not only easily conducted, using the same procedure as the 3+3, but also outperforms the conventional 3+3 design. It can also be applied to determine the MTD for any given target toxicity level (TTL). These features give the isotonic modified method practical value in phase I clinical trials.
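A minimal sketch of the proposed second stage: apply a weighted pool-adjacent-violators (isotonic) transform to the observed toxicity rates from the 3+3 transition stage, then select the dose closest to the target toxicity level. The observed counts are illustrative:

```python
import numpy as np

def pava(rates, weights):
    """Weighted pool-adjacent-violators: smallest monotone non-decreasing fit."""
    blocks = [[r, w, 1] for r, w in zip(map(float, rates), map(float, weights))]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:       # violation: merge adjacent blocks
            v0, w0, n0 = blocks[i]
            v1, w1, n1 = blocks[i + 1]
            blocks[i] = [(v0 * w0 + v1 * w1) / (w0 + w1), w0 + w1, n0 + n1]
            del blocks[i + 1]
            i = max(i - 1, 0)                     # merged value may violate earlier
        else:
            i += 1
    return np.array([v for v, _, n in blocks for _ in range(n)])

dlts = np.array([0, 2, 1, 3])                     # observed DLTs per dose (illustrative)
n = np.array([3, 6, 6, 6])                        # patients treated per dose
iso = pava(dlts / n, n)
mtd = int(np.argmin(np.abs(iso - 0.25)))          # dose closest to TTL = 0.25
print("isotonic rates:", iso.round(3), "-> selected MTD: dose", mtd)
```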
Abstract:
This study deals with the mineralogical variability of siliceous and zeolitic sediments, porcellanites, and cherts at small intervals in the continuously cored sequence of Deep Sea Drilling Project Site 462. Skeletal opal is preserved down to a maximum burial depth of 390 meters (middle Eocene). Below this level, the tests are totally dissolved or replaced and filled by opal-CT, quartz, clinoptilolite, and calcite. Etching of opaline tests does not increase continuously with deeper burial. Opal solution accompanied by a conspicuous formation of authigenic clinoptilolite has a local maximum in Core 16 (150 m). A causal relationship with the lower Miocene hiatus at this level is highly probable. Oligocene to Cenomanian sediments represent an intermediate stage of silica diagenesis: the opal-CT/quartz ratios of the silicified rocks are frequently greater than 1, and quartz filling pores or replacing foraminifer tests is more widespread than quartz converted from an opal-CT precursor. As at other sites, there is a marked discontinuity in the transitions from biogenic opal via opal-CT to quartz with increasing burial depth. Layers with unaltered opal-A alternate with porcellanite beds; the intensity of the opal-CT-to-quartz transformation changes very rapidly from horizon to horizon and evidently is not correlated with lithologic parameters. The silica for authigenic clinoptilolite was derived from biogenic opal and decaying volcanic components.