899 results for Simulation study


Relevance: 100.00%

Abstract:

Aims: Angiographic ectasias and aneurysms in stented segments have been associated with late stent thrombosis. Using optical coherence tomography (OCT), some stented segments show coronary evaginations reminiscent of ectasias. The purpose of this study was to explore, using computational fluid dynamics (CFD) simulations, whether OCT-detected coronary evaginations can induce local changes in blood flow. Methods and results: OCT-detected evaginations are defined as outward bulges in the luminal vessel contour between struts, with the depth of the bulge exceeding the actual strut thickness. Evaginations can be characterised cross-sectionally by depth and along the stented segment by total length. Assuming an ellipsoid shape, we modelled 3-D evaginations of different sizes by varying the depth from 0.2-1.0 mm and the length from 1-9 mm. For the flow simulation we used average flow velocity data from non-diseased coronary arteries. The change in flow with varying evagination sizes was assessed using a particle tracing test, in which the particle transit time within the segment with the evagination was compared with that of a control vessel. The presence of the evagination caused a delayed particle transit time, which increased with evagination size. The change in flow consisted locally of recirculation within the evagination, as well as flow deceleration due to a larger lumen, seen as a deflection of flow towards the evagination. Conclusions: CFD simulation of 3-D evaginations and blood flow suggests that evaginations affect flow locally, with a flow disturbance that increases with increasing evagination size.

Abstract:

Besides its primary role in producing food and fiber, agriculture also has relevant effects on several other functions, such as the management of renewable natural resources. Climate change (CC) may lead to new trade-offs between agricultural functions or aggravate existing ones, but suitable agricultural management may maintain or even improve the ability of agroecosystems to supply these functions. Hence, it is necessary to identify relevant drivers (e.g., cropping practices, local conditions) and their interactions, and how they affect agricultural functions in a changing climate. The goal of this study was to use a modeling framework to analyze the sensitivity of indicators of three important agricultural functions, namely crop yield (food and fiber production function), soil erosion (soil conservation function), and nutrient leaching (clean water provision function), to a wide range of agricultural practices under current and future climate conditions. In a two-step approach, cropping practices that explain high proportions of the variance of the different indicators were first identified by an analysis-of-variance-based sensitivity analysis. Then, the most suitable combinations of practices to achieve the best performance with respect to each indicator were extracted, and trade-offs were analyzed. The procedure was applied to a region in western Switzerland, considering two different soil types to test the importance of local environmental constraints. Results show that the sensitivity of crop yield and soil erosion to management is high, while nutrient leaching mostly depends on soil type. We found that the influence of most agricultural practices does not change significantly with CC; only irrigation becomes more relevant as a consequence of decreasing summer rainfall. Trade-offs were identified when focusing on the best performance of each indicator separately, and these were amplified under CC.
For adaptation to CC in the selected study region, conservation soil management and the use of cropped grasslands appear to be the most suitable options to avoid trade-offs.
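The two-step approach described above can be sketched in miniature: a variance-based (ANOVA-type) sensitivity analysis over a full factorial of cropping practices, where each factor's first-order index is the variance of its conditional means divided by the total variance. The factors, levels, and toy yield function below are hypothetical placeholders, not values from the study.

```python
import itertools
import numpy as np

# Hypothetical cropping-practice factors and levels (illustrative, not the study's)
factors = {
    "irrigation": [0.0, 0.5, 1.0],      # none / deficit / full
    "tillage": [0.0, 1.0],              # conventional / conservation
    "fertilisation": [0.5, 1.0, 1.5],   # relative N rate
}

def yield_indicator(irrigation, tillage, fertilisation):
    """Toy stand-in for a crop-model yield response; purely illustrative."""
    return (4.0 + 2.0 * irrigation + 0.3 * tillage
            + 1.0 * fertilisation - 0.4 * irrigation * fertilisation)

names = list(factors)
grid = list(itertools.product(*factors.values()))
y = np.array([yield_indicator(*combo) for combo in grid])

# First-order sensitivity index per factor: variance over levels of the
# conditional mean E[y | factor = level], divided by total variance
# (the design is balanced, so plain means and variances suffice)
shares = {}
for i, name in enumerate(names):
    cond_means = [y[[g[i] == lev for g in grid]].mean() for lev in factors[name]]
    shares[name] = np.var(cond_means) / y.var()

for name, s in shares.items():
    print(f"{name:>13}: S1 = {s:.2f}")
```

Because the toy response includes an interaction term, the first-order indices sum to slightly less than one; the remainder is the interaction contribution.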

Abstract:

The aim of this study was to explore potential causes and mechanisms for the sequence and temporal pattern of tree taxa, specifically for the shift from shrub-tundra to birch–juniper woodland during and after the transition from the Oldest Dryas to the Bølling–Allerød in the region surrounding Lake Gerzensee in southern Central Europe. We tested the influence of climate, forest dynamics, and community dynamics against other possible causes of delays. To this end, temperature reconstructed from a δ18O record was used as input to drive the multi-species forest-landscape model TreeMig. In a stepwise scenario analysis, population dynamics along with pollen production and transport were simulated and compared with pollen-influx data, according to scenarios with different δ18O/temperature sensitivities, different precipitation levels, with/without inter-specific competition, and with/without prescribed arrival of species. In the best-fitting scenarios, the effects on competitive relationships, pollen production, spatial forest structure, albedo, and surface roughness were examined in more detail. The appearance of most taxa in the data could only be explained by the coldest temperature scenario, with a sensitivity of 0.3‰/°C corresponding to an anomaly of −15 °C. Once the taxa were present, their temporal pattern was shaped by competition. The later arrival of Pinus could not be explained even by the coldest temperatures, and its timing had to be prescribed by first observations in the pollen record. After its arrival in the simulation area, the expansion of Pinus was further influenced by competitors and minor climate oscillations. The rapid change in the simulated species composition went along with a drastic change in forest structure, leaf area, albedo, and surface roughness. Pollen increased only shortly after biomass.
Based on our simulations, two alternative potential scenarios for the pollen pattern can be given: either a very cold climate suppressed most species in the Oldest Dryas, or they were delayed by soil formation or migration. One taxon, Pinus, was delayed by migration and then additionally hindered by competition. Community dynamics affected the pattern in two ways: potentially by facilitation, i.e. by nitrogen-fixing pioneer species at the onset, whereas the later pattern was clearly shaped by competition. The simulated structural changes illustrate how vegetation on a larger scale could feed back to the climate system. For a better understanding, a more integrated simulation approach that also covers immigration from refugia would be necessary, since it would combine climate-driven population dynamics, migration, pollen production and transport, soil dynamics, and the physiology of pollen production.

Abstract:

BACKGROUND Efficiently performed basic life support (BLS) after cardiac arrest is proven to be effective. However, cardiopulmonary resuscitation (CPR) is strenuous and rescuers' performance declines rapidly over time. Audio-visual feedback devices reporting CPR quality may prevent this decline. We aimed to investigate the effect of various CPR feedback devices on CPR quality. METHODS In this open, prospective, randomised, controlled trial we compared three CPR feedback devices (PocketCPR, CPRmeter, iPhone app PocketCPR) with standard BLS without feedback in a simulated scenario. 240 trained medical students performed single-rescuer BLS on a manikin for 8 min. Effective compression (compressions with correct depth, pressure point and sufficient decompression) as well as compression rate, flow time fraction and ventilation parameters were compared between the four groups. RESULTS Study participants using the PocketCPR performed 17±19% effective compressions, compared to 32±28% with the CPRmeter, 25±27% with the iPhone app PocketCPR, and 35±30% with standard BLS (PocketCPR vs. CPRmeter p=0.007, PocketCPR vs. standard BLS p=0.001, others: ns). PocketCPR and CPRmeter prevented a decline in effective compression over time, but overall performance in the PocketCPR group was considerably inferior to standard BLS. Compression depth and rate were within the range recommended by the guidelines in all groups. CONCLUSION While we found differences between the investigated CPR feedback devices, overall BLS quality was suboptimal in all groups. Surprisingly, effective compression was not improved by any CPR feedback device compared to standard BLS. All feedback devices caused a substantial delay in starting CPR, which may worsen outcome.

Abstract:

Syndromic surveillance (SyS) systems currently exploit various sources of health-related data, most of which are collected for purposes other than surveillance (e.g. economic). Several European SyS systems use data collected during meat inspection for syndromic surveillance of animal health, as some diseases may be more easily detected post-mortem than at their point of origin or during the ante-mortem inspection upon arrival at the slaughterhouse. In this paper we use simulation to evaluate the performance of a quasi-Poisson regression (also known as an improved Farrington) algorithm for the detection of disease outbreaks during post-mortem inspection of slaughtered animals. When the algorithm was parameterized based on the retrospective analysis of 6 years of historic data, the probability of detection was satisfactory for large (range 83-445 cases) outbreaks but poor for small (range 20-177 cases) outbreaks. Varying the amount of historical data used to fit the algorithm can help increase the probability of detection for small outbreaks. However, while the use of a 0.975 quantile generated a low false-positive rate, in most cases more than 50% of outbreak cases had already occurred at the time of detection. The high variance observed in the whole-carcass condemnation time series, and the lack of flexibility in the temporal distribution of simulated outbreaks resulting from the low (monthly) reporting frequency, constitute major challenges for the early detection of outbreaks in the livestock population based on meat inspection data. Reporting frequency should be increased in the future to improve the timeliness of the SyS system, while increased sensitivity may be achieved by integrating meat inspection data into a multivariate system that simultaneously evaluates multiple sources of data on livestock health.
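A minimal sketch of the detection step, assuming a quasi-Poisson view of the historical counts. This is a simplified stand-in for the improved Farrington algorithm (which additionally handles trend, seasonality via reference windows, and down-weighting of past outbreaks); the monthly counts and the injected outbreak are simulated, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical monthly condemnation counts: 6 years of overdispersed history
# (gamma-mixed Poisson, mean ~30 per month) plus one current year to monitor
years, months = 6, 12
history = rng.poisson(rng.gamma(shape=15.0, scale=2.0, size=(years, months)))

def quasi_poisson_threshold(history, z=1.96):
    """Per-month alarm threshold: month-specific baseline mean mu_m, one pooled
    dispersion phi (method of moments, Var = phi * mean), bound mu + z*sqrt(phi*mu).
    z = 1.96 corresponds roughly to the upper 0.975 quantile used in the paper."""
    mu = history.mean(axis=0)
    phi = max(1.0, np.mean(history.var(axis=0, ddof=1) / np.maximum(mu, 1e-9)))
    return mu + z * np.sqrt(phi * mu)

threshold = quasi_poisson_threshold(history)

# Current year: inject a large outbreak (+60 cases) in month index 7
current = rng.poisson(30.0, size=months).astype(float)
current[7] += 60
alarms = current > threshold
print("alarm months:", np.flatnonzero(alarms))
```

With monthly reporting there are only 12 decision points a year, which is one way to see the timeliness problem the abstract raises: by the time a month's count clears the threshold, much of the outbreak has already occurred.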

Abstract:

We have developed an empirically based simulation system to create images equivalent in SNR and SPR to those that would be acquired with various possible SEDR configurations. This system uses a collection of spot collimated full-field images (SCFFIs) of an anthropomorphic chest phantom, taken at high exposure levels and rescaled in noise and intensity, then digitally collimated and combined to produce the simulated SEDR images. This system allows for the study of design trade-offs between different equalization feedback schemes and scatter rejection geometries, in addition to estimating the clinical benefits of SEDR over traditional imaging techniques. Data from this simulation system have demonstrated that SEDR techniques offer potentially significant improvements over currently used digital radiography techniques for chest imaging.
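The noise-and-intensity rescaling idea can be illustrated as follows, assuming purely quantum-limited (Poisson) noise: scaling a high-exposure image down in intensity leaves it with too little noise for the target exposure, so noise is topped up until the variance matches. The pixel values and the simple Gaussian top-up are illustrative, not the authors' calibrated detector model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rescale_exposure(high_img, factor):
    """Approximate a lower-exposure acquisition from a high-exposure image.
    Assumes quantum-limited noise: after scaling by `factor`, the image carries
    variance factor^2 * counts, but a real low-dose image would carry
    factor * counts, so the difference is added as zero-mean noise.
    Illustrative only -- real detectors add electronic noise, blur, etc."""
    scaled = high_img * factor
    extra_var = np.maximum(scaled - (factor ** 2) * high_img, 0.0)
    return scaled + rng.normal(0.0, np.sqrt(extra_var))

# Hypothetical high-exposure "image": Poisson counts around 10,000 per pixel
high = rng.poisson(np.full((64, 64), 10_000.0)).astype(float)

low = rescale_exposure(high, factor=0.1)
snr_high = high.mean() / high.std()
snr_low = low.mean() / low.std()
print(f"SNR high {snr_high:.1f}, low {snr_low:.1f}")
```

As expected for quantum-limited imaging, the simulated SNR drops roughly as the square root of the exposure factor.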

Abstract:

This study investigates a theoretical model in which a longitudinal process (a stationary Markov chain) and a Weibull survival process share a bivariate random effect. Furthermore, a quality-of-life-adjusted survival is calculated as the weighted sum of survival time. Theoretical values of the population mean adjusted survival of the described model are computed numerically. The parameters of the bivariate random effect significantly affect the theoretical values of the population mean. Maximum-likelihood and Bayesian methods are applied to simulated data to estimate the model parameters. Based on the parameter estimates, the predicted population mean adjusted survival can then be calculated numerically and compared with the theoretical values. The Bayesian and maximum-likelihood methods provide parameter estimates and population mean predictions of comparable accuracy; however, the Bayesian method suffers from poor convergence due to autocorrelation and inter-variable correlation.
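A small simulation in the spirit of this model, assuming (hypothetically) a normal shared random effect that both shrinks the Weibull scale and changes the stickiness of a two-state quality-of-life Markov chain, with adjusted survival computed as the utility-weighted sum of time alive. Utilities, rates, and the link functions are all illustrative choices, not the study's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

UTILITY = np.array([1.0, 0.5])  # hypothetical utilities: good / poor QoL state
DT = 0.25                       # visit interval in years

def simulate_subject(b, shape=1.5, base_scale=5.0, p_stay=0.9):
    """One subject; random effect b is shared between the survival and QoL
    processes: larger b shortens survival and makes states stickier (assumed link)."""
    scale = base_scale * np.exp(-b)          # Weibull scale shrinks with b
    t_death = scale * rng.weibull(shape)
    stay = min(0.99, p_stay + 0.05 * b)      # crude shared-effect link
    state, qas, t = 0, 0.0, 0.0
    while t < t_death:
        # accumulate utility-weighted time in the current state
        qas += UTILITY[state] * min(DT, t_death - t)
        if rng.random() > stay:              # Markov switch between QoL states
            state = 1 - state
        t += DT
    return t_death, qas

frailties = rng.normal(0.0, 0.5, size=2000)
sims = np.array([simulate_subject(b) for b in frailties])
surv, qas = sims[:, 0], sims[:, 1]
print(f"mean survival {surv.mean():.2f} y, "
      f"mean quality-adjusted survival {qas.mean():.2f} y")
```

By construction the quality-adjusted mean sits below the raw survival mean, and both shift with the random-effect variance, which is the sensitivity the abstract reports.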

Abstract:

Objectives. This paper seeks to assess the effect of regression model misspecification on statistical power in a variety of situations. Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms was derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with linear or categorical models, the fractional polynomial models had higher correlations and provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on the sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, which ranged from 0.69 to 0.83, and appeared to be affected by this model's increased degrees of freedom. Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations with an unknown or complex correct model specification.
Simulation of power for misspecified models confirmed the results based on correlation methods, but also illustrated the effect of model degrees of freedom on power.
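The core idea, that the loss of signal under misspecification is captured by the correlation between the correct and misspecified fits, can be sketched with a hypothetical quadratic truth; the three design matrices mirror the paper's linear, categorical, and fractional-polynomial specifications, but the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical truth: E[y|x] is quadratic in x (a stand-in for the paper's examples)
x = rng.uniform(0.5, 3.0, size=5000)
f = 1.0 + 0.5 * x ** 2

def fitted(design_cols):
    """Least-squares fitted values of the true mean f on a (mis)specified design."""
    X = np.column_stack([np.ones_like(x)] + design_cols)
    beta, *_ = np.linalg.lstsq(X, f, rcond=None)
    return X @ beta

quart = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))  # quartile categories
specs = {
    "linear": [x],
    "categorical": [(quart == k).astype(float) for k in (1, 2, 3)],
    "fractional_poly": [x ** 2],
}

# Correlation between the correct specification and each fit; attenuation of
# power under misspecification tracks this correlation
corrs = {name: np.corrcoef(f, fitted(cols))[0, 1] for name, cols in specs.items()}
for name, r in corrs.items():
    print(f"{name:<16} corr with truth = {r:.3f}")
```

Here the fractional-polynomial column reproduces the truth exactly (correlation 1), matching the paper's finding that it approximates the true relationship best, while the step-function and linear fits lose a little of the signal.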

Abstract:

Sizes and powers of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, Score, Likelihood, Asymptotic), logrank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the Asymptotic, Peto-Peto, and Wilcoxon tests perform at nominal 5% size expectations, but the F, Score and Mantel tests exceeded the 5% size confidence limits for one third of the censoring combinations. For n = 32, all tests showed proper size, with the Peto-Peto test most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in the power characteristics of the tests within the classes of tests considered. The Mantel test showed 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of logrank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution. Wilcoxon-type tests appear more powerful than logrank tests in the case of late-crossing survival curves and less powerful for early-crossing ones. Guidelines for the appropriate selection of two-sample tests are given.
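One cell of such a size simulation can be reproduced in miniature: unequally censored exponential samples compared with a hand-rolled two-sample log-rank (Mantel) statistic, checking the empirical rejection rate at the nominal 5% level. The sample size, censoring proportions, and trial count below are illustrative, not the study's full grid.

```python
import numpy as np

rng = np.random.default_rng(3)

def logrank_stat(time, event, group):
    """Two-sample log-rank (Mantel) chi-square statistic."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n                     # observed minus expected
        if n > 1:                                        # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var

def sample(n, rate, cens_prop):
    """Exponential event times with independent exponential censoring chosen
    so that P(censored) = cens_prop."""
    t_event = rng.exponential(1 / rate, n)
    c_rate = rate * cens_prop / (1 - cens_prop)
    t_cens = rng.exponential(1 / c_rate, n)
    return np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)

# Empirical size at nominal 5%: equal exponentials, unequal censoring (20% vs 60%)
n, trials, crit = 32, 400, 3.841          # 3.841 = chi-square(1) 95th percentile
rejections = 0
for _ in range(trials):
    t0, e0 = sample(n, 1.0, 0.20)
    t1, e1 = sample(n, 1.0, 0.60)
    stat = logrank_stat(np.concatenate([t0, t1]),
                        np.concatenate([e0, e1]),
                        np.repeat([0, 1], n))
    rejections += stat > crit
size = rejections / trials
print(f"empirical size = {size:.3f}")
```

Repeating this over the grid of sample sizes and censoring combinations, and swapping in the other statistics, yields size tables of the kind the abstract summarizes.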

Abstract:

In regression analysis, covariate measurement error occurs in many applications. The error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We presented an approach that applies the Monte Carlo method in the Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study showed that the method produces an estimator that is efficient in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
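For intuition about why the latent slope needs recovering at all, the sketch below shows the attenuation caused by regressing on a surrogate covariate, together with a simple method-of-moments correction. This is not the Bayesian Monte Carlo conditional-expectation estimator of the study; it is a cruder fix that assumes the measurement-error variance is known.

```python
import numpy as np

rng = np.random.default_rng(11)

# Latent covariate x, surrogate w = x + u, second covariate z observed exactly
n = 20_000
x = rng.normal(0.0, 1.0, n)
z = rng.normal(0.0, 1.0, n)
u = rng.normal(0.0, 1.0, n)             # large measurement error (variance 1)
w = x + u
y = 2.0 + 1.5 * x + 0.8 * z + rng.normal(0.0, 1.0, n)

def ols(design_cols, resp):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(resp))] + design_cols)
    return np.linalg.lstsq(X, resp, rcond=None)[0]

naive = ols([w, z], y)                  # slope on w is attenuated toward zero
# moment correction: reliability lambda = var(x) / var(w); with x and z
# independent, the latent slope is naive_slope / lambda (known var_u assumed)
lam = (w.var() - 1.0) / w.var()         # plug in the known error variance 1.0
corrected = naive[1] / lam
print(f"naive slope {naive[1]:.2f}, corrected {corrected:.2f} (truth 1.5)")
```

With an error variance equal to the covariate variance (reliability 0.5), the naive slope lands near half the true value, which is the regime the abstract calls out as where the proposed estimator matters most.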

Abstract:

An interim analysis is usually applied in later phase II or phase III trials to find convincing evidence of a significant treatment difference that may lead to trial termination earlier than originally planned. This can result in the saving of patient resources and a shortening of drug development and approval time. In addition, ethics and economics are also reasons to stop a trial early. In clinical trials of eyes, ears, knees, arms, kidneys, lungs, and other clustered treatments, data may include distribution-free random variables with matched and unmatched subjects in one study. It is important to properly include both types of subjects in the interim and final analyses so that maximum efficiency of statistical and clinical inference can be obtained at different stages of the trial. So far, no publication has applied a statistical method for distribution-free data with matched and unmatched subjects to the interim analysis of clinical trials. In this simulation study, the hybrid statistic was used to estimate the empirical powers and empirical type I errors among simulated datasets with different sample sizes, effect sizes, correlation coefficients for matched pairs, and data distributions, in interim and final analyses with four different group sequential methods. Empirical powers and empirical type I errors were also compared to those estimated using the meta-analysis t-test among the same simulated datasets. Results from this simulation study show that, compared to the meta-analysis t-test commonly used for data with normally distributed observations, the hybrid statistic has greater power for data observed from normally, log-normally, and multinomially distributed random variables with matched and unmatched subjects and with outliers. Powers rose with increases in sample size, effect size, and the correlation coefficient for matched pairs.
In addition, lower type I errors were observed with the hybrid statistic, indicating that this test is also conservative for data with outliers in the interim analysis of clinical trials.
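The group sequential ingredient can be illustrated independently of the hybrid statistic: testing twice at an unadjusted z-boundary inflates the type I error, while a common adjusted boundary (a Pocock-style rule, constant 2.178 for two looks at overall alpha 0.05) restores it. Sample sizes and the number of simulated trials below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def two_look_trial(n_per_look, crit, effect=0.0):
    """Two-arm trial tested at an interim look (half the data) and at the end.
    Rejects if |z| exceeds crit at either look; using the same boundary at both
    looks is a Pocock-style rule."""
    a = rng.normal(effect, 1.0, 2 * n_per_look)
    b = rng.normal(0.0, 1.0, 2 * n_per_look)
    for n in (n_per_look, 2 * n_per_look):
        z = (a[:n].mean() - b[:n].mean()) / np.sqrt(2 / n)
        if abs(z) > crit:
            return True                  # stop early (or reject at the final look)
    return False

trials = 4000
naive = sum(two_look_trial(50, 1.96) for _ in range(trials)) / trials
pocock = sum(two_look_trial(50, 2.178) for _ in range(trials)) / trials
print(f"type I error: unadjusted 1.96 at both looks {naive:.3f}, "
      f"Pocock 2.178 {pocock:.3f}")
```

The same skeleton, with the z-statistic replaced by the hybrid statistic or the meta-analysis t-test and outcome distributions varied, is the kind of machinery the abstract's power and type I error comparisons require.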

Abstract:

The phase I clinical trial is considered the "first in human" study in medical research to examine the toxicity of a new agent. It determines the maximum tolerable dose (MTD) of a new agent, i.e., the highest dose at which toxicity is still acceptable. Several phase I clinical trial designs have been proposed in the past 30 years. The well-known standard method, the so-called 3+3 design, is widely accepted by clinicians since it is the easiest to implement and does not require statistical calculation. The continual reassessment method (CRM), a design that uses Bayesian methods, has been rising in popularity over the last two decades. Several variants of the CRM design have also been suggested in the statistical literature. Rolling six is a newer method, introduced in pediatric oncology in 2008, which claims to shorten trial duration compared to the 3+3 design. The goal of the present research was to simulate clinical trials and compare these phase I clinical trial designs. The patient population was created by the discrete event simulation (DES) method. The characteristics of the patients were generated from several distributions, with parameters derived from a review of historical phase I clinical trial data. Patients were then selected and enrolled in clinical trials, each of which used the 3+3 design, the rolling six, or the CRM design. Five dose-toxicity scenarios were used to compare the performance of the phase I clinical trial designs. One thousand trials were simulated per phase I clinical trial design per dose-toxicity scenario. The results showed that the rolling six design was not superior to the 3+3 design in terms of trial duration. The time to trial completion was comparable between the rolling six and the 3+3 design. However, both shortened the duration compared to the two CRM designs. Both CRMs were superior to the 3+3 design and the rolling six in the accuracy of MTD estimation.
The 3+3 design and the rolling six tended to assign more patients to undesired lower dose levels. Toxicities were slightly greater in the CRMs.
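The simulation backbone for one of the compared designs can be sketched directly: the 3+3 rules reduced to their core escalation logic (the usual bookkeeping of requiring six patients at the declared MTD is simplified away), run over a hypothetical dose-toxicity scenario.

```python
import numpy as np

rng = np.random.default_rng(9)

def three_plus_three(p_tox):
    """One simulated (simplified) 3+3 trial over dose levels with true
    dose-limiting-toxicity (DLT) probabilities p_tox. Returns the index of the
    declared MTD, or -1 if even the lowest dose is too toxic."""
    dose = 0
    while True:
        dlt = rng.binomial(3, p_tox[dose])          # first cohort of 3
        if dlt == 1:
            dlt += rng.binomial(3, p_tox[dose])     # expand to 6 on exactly 1 DLT
        if dlt >= 2:                                # >=2 DLTs: MTD is the dose below
            return dose - 1
        if dose == len(p_tox) - 1:                  # top dose reached and tolerated
            return dose
        dose += 1                                   # 0/3 or 1/6 DLTs: escalate

# Hypothetical dose-toxicity scenario; a ~25% DLT rate sits at dose index 2
p_tox = [0.05, 0.12, 0.25, 0.45, 0.60]
mtds = np.array([three_plus_three(p_tox) for _ in range(2000)])
for d in range(-1, len(p_tox)):
    print(f"MTD = dose {d}: {np.mean(mtds == d):.2f}")
```

The spread of declared MTDs across repeated trials, with noticeable mass below the target dose, illustrates the abstract's observation that the 3+3-style designs tend to assign patients to lower-than-desired dose levels; a CRM comparison would track the recommended dose from a sequentially updated dose-toxicity model instead.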