389 results for Finite size scaling
Abstract:
Background Aneurysm expansion rate is an important indicator of the potential risk of abdominal aortic aneurysm (AAA) rupture. Stress within the AAA wall is also thought to be a trigger for its rupture. However, the association between aneurysm wall stresses and expansion of AAA is unclear. Methods and Results Forty-four patients with AAAs were included in this longitudinal follow-up study. They were assessed by serial abdominal ultrasonography, with computed tomography scans performed if a critical size was reached or a rapid expansion occurred. Patient-specific 3-dimensional AAA geometries were reconstructed from the follow-up computed tomography images. Structural analysis was performed to calculate the wall stresses of the AAA models at both baseline and final visit. A nonlinear large-strain finite element method was used to compute the wall-stress distribution. The relationship between wall stresses and expansion rate was investigated. Slowly and rapidly expanding aneurysms had comparable baseline maximum diameters (median, 4.35 cm [interquartile range, 4.12 to 5.0 cm] versus 4.6 cm [interquartile range, 4.2 to 5.0 cm]; P=0.32). Rapidly expanding AAAs had significantly higher shoulder stresses than slowly expanding AAAs (median, 300 kPa [interquartile range, 280 to 320 kPa] versus 225 kPa [interquartile range, 211 to 249 kPa]; P=0.0001). A good correlation between shoulder stress at baseline and expansion rate was found (r=0.71; P=0.0001). Conclusion A higher shoulder stress was found to be associated with a rapidly expanding AAA. Therefore, it may be useful for estimating the expansion of AAAs and improving risk stratification of patients with AAAs.
Abstract:
The growth rate of an abdominal aortic aneurysm (AAA) is thought to be an important indicator of the potential risk of rupture. Wall stress is also thought to be a trigger for its rupture. However, how stress changes during the expansion of an AAA is unclear. Forty-four patients with AAAs were included in this longitudinal follow-up study. They were assessed by serial abdominal ultrasonography, with computerized tomography (CT) scans performed if a critical size was reached or a rapid expansion occurred. Patient-specific 3-dimensional AAA geometries were reconstructed from the follow-up CT images. Structural analysis was performed to calculate the wall stresses of the AAA models at both baseline and final visit. A non-linear large-strain finite element method was used to compute the wall stress distribution. The average growth rate was 0.66 cm/year (range 0-1.32 cm/year). A significant positive correlation between shoulder stress at baseline and growth rate was found (r=0.342; p=0.02). A higher shoulder stress is associated with a rapidly expanding AAA. Therefore, it may be useful for estimating the growth of AAAs and for further risk stratification of patients with AAAs.
Abstract:
Objective: The aim of this study was to explore whether there is a relationship between the degree of MR-defined inflammation, assessed using ultrasmall superparamagnetic iron oxide (USPIO) particles, and biomechanical stress, assessed using finite element analysis (FEA) techniques, in carotid atheromatous plaques. Methods and Results: 18 patients with angiographically proven carotid stenoses underwent multi-sequence MR imaging before and 36 h after USPIO infusion. T2*-weighted images were manually segmented into quadrants, and the signal change in each quadrant, normalised to adjacent muscle, was calculated after USPIO administration. Plaque geometry was obtained from the rest of the multi-sequence dataset and used within an FEA model to predict the maximal stress concentration within each slice. Subsequently, a new statistical model was developed to explicitly investigate the form of the relationship between biomechanical stress and signal change. The Spearman's rank correlation coefficient for USPIO-enhanced signal change and maximal biomechanical stress was -0.60 (p = 0.009). Conclusions: There is an association between biomechanical stress and USPIO-enhanced MR-defined inflammation within carotid atheroma, both known risk factors for plaque vulnerability. This underlines the complex interaction between physiological processes and biomechanical mechanisms in the development of carotid atheroma. However, these are preliminary data that will need validation in a larger cohort of patients.
Abstract:
Background Because many acute cerebral ischemic events are caused by rupture of vulnerable carotid atheroma and subsequent thrombosis, the present study used both idealized and patient-specific carotid atheromatous plaque models to evaluate the effect of structural determinants on stress distributions within plaque. Methods and Results Using a finite element method, structural analysis was performed using models derived from in vivo high-resolution magnetic resonance imaging (MRI) of carotid atheroma in 40 non-consecutive patients (20 symptomatic, 20 asymptomatic). Plaque components were modeled as hyperelastic materials. The effects of varying fibrous cap thickness, lipid core size and lumen curvature on plaque stress distributions were examined. Lumen curvature and fibrous cap thickness were found to be major determinants of plaque stress. The size of the lipid core did not alter plaque stress significantly when the fibrous cap was relatively thick. The correlation between plaque stress and lumen curvature was significant for both symptomatic (p = 0.01; correlation coefficient: 0.689) and asymptomatic patients (p = 0.01; correlation coefficient: 0.862). Lumen curvature in plaques of symptomatic patients was significantly larger than that in plaques of asymptomatic patients (1.50 ± 1.0 mm⁻¹ vs 1.25 ± 0.75 mm⁻¹; p = 0.01). Conclusion Specific plaque morphology (large lumen curvature and thin fibrous cap) is closely related to plaque vulnerability. Structural analysis using high-resolution MRI of carotid atheroma may help in detecting vulnerable atheromatous plaque and aid the risk stratification of patients with carotid disease.
Abstract:
Object. Individuals with carotid atherosclerosis develop symptoms following rupture of vulnerable plaques. Biomechanical stresses within this plaque may increase vulnerability to rupture. In this report the authors describe the use of in vivo carotid plaque imaging and computational mechanics to document the magnitude and distribution of intrinsic plaque stresses. Methods. Ten (five symptomatic and five asymptomatic) individuals underwent plaque characterization magnetic resonance (MR) imaging. Plaque geometry and composition were determined by multisequence review. Intrinsic plaque stress profiles were generated from 3D meshes by using finite element computational analysis. Differences in principal (shear) stress between normal and diseased sections of the carotid artery and between symptomatic and asymptomatic plaques were noted. Results. There was a significant difference in peak principal stress between diseased and nondiseased segments of the artery (mean difference 537.65 kPa, p < 0.05). Symptomatic plaques had higher mean stresses than asymptomatic plaques (627.6 kPa compared with 370.2 kPa, p = 0.05), which were independent of luminal stenosis and plaque composition. Conclusions. Significant differences in plaque stress exist between plaques from symptomatic individuals and those from asymptomatic individuals. The MR imaging-based computational analysis may therefore be a useful aid to identification of vulnerable plaques in vivo.
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
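The abstract does not spell out the full estimation machinery, so the sketch below is an illustration only: it builds a size-transition matrix from a von Bertalanffy mean-growth curve, using a plain normal conditional length distribution (rather than the authors' moment-matched skew-normal) truncated so that negative growth receives zero probability. All size classes, parameter values, and function names are invented for the example.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical size classes (length, mm) and von Bertalanffy parameters --
# every value here is invented for illustration.
bins = np.arange(20, 61, 5)           # class boundaries: 20, 25, ..., 60 mm
mids = 0.5 * (bins[:-1] + bins[1:])   # class midpoints
L_inf, k, dt = 60.0, 0.8, 0.25        # asymptotic length, growth rate, time step (yr)
sigma = 1.5                           # sd of the conditional length distribution

def transition_matrix(bins, mids, L_inf, k, dt, sigma):
    """Probability of moving from size class i to size class j in one time step.

    Expected length follows von Bertalanffy growth; the conditional length
    distribution is approximated as normal and truncated at the current
    length so that negative growth gets zero probability.
    """
    n = len(mids)
    stm = np.zeros((n, n))
    for i, L0 in enumerate(mids):
        mu = L0 + (L_inf - L0) * (1.0 - np.exp(-k * dt))  # expected future length
        p = norm.cdf(bins[1:], mu, sigma) - norm.cdf(bins[:-1], mu, sigma)
        p[bins[1:] <= L0] = 0.0                           # no shrinking below the current class
        stm[i] = p / p.sum()                              # renormalise each row
    return stm

print(np.round(transition_matrix(bins, mids, L_inf, k, dt, sigma), 3))
```

Each row of the resulting matrix sums to one and gives the probability that an animal currently in size class i is found in class j after one time step.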
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or otherwise censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
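The abstract does not describe the quantile estimator itself, so the following is only a generic simulation-based sample-size scan for a rank-based comparison of two groups in which values below a detection limit are left as ties at the limit rather than imputed; it is not the authors' procedure, and the distributions, detection limit, and target power are all invented.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def power_at_n(n, shift=0.5, dl=1.0, n_sim=2000, alpha=0.05):
    """Simulated power of a rank-based two-group comparison when values
    below the detection limit `dl` are set to the limit (heavily tied at
    the bottom) instead of being imputed."""
    hits = 0
    for _ in range(n_sim):
        x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
        y = rng.lognormal(mean=shift, sigma=1.0, size=n)
        x = np.maximum(x, dl)          # censor below the detection limit
        y = np.maximum(y, dl)
        _, p = mannwhitneyu(x, y, alternative="two-sided")
        hits += p < alpha
    return hits / n_sim

# Scan candidate sample sizes until a target power (say 80%) is reached.
for n in (20, 30, 40, 60, 80):
    print(n, round(power_at_n(n), 3))
```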
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for the overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, which is referred to as the beta-multinomial likelihood, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated by previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
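As a hedged sketch of the general idea (not necessarily the authors' exact beta-multinomial likelihood), a removal-experiment likelihood with beta-distributed catchability can be written as a sequence of beta-binomial catches from the animals remaining after previous removals and maximised numerically. The catch vector, starting values, and parameterisation below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, betaln

catches = np.array([142, 98, 61, 40, 27])   # hypothetical removal catches

def betabinom_logpmf(k, n, a, b):
    """log P(k | n, a, b) for a beta-binomial; n may be non-integer here
    because the population size is treated as a continuous parameter."""
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
            + betaln(k + a, n - k + b) - betaln(a, b))

def neg_loglik(params, catches):
    logN, loga, logb = params
    N, a, b = np.exp(logN), np.exp(loga), np.exp(logb)
    remaining = N
    ll = 0.0
    for c in catches:
        if remaining <= c:
            return 1e10                      # infeasible: more caught than remain
        ll += betabinom_logpmf(c, remaining, a, b)
        remaining -= c                       # animals left after this removal
    return -ll

start = np.log([2 * catches.sum(), 2.0, 2.0])
fit = minimize(neg_loglik, start, args=(catches,), method="Nelder-Mead")
print("estimated population size N:", np.exp(fit.x[0]))
```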
Abstract:
Natural mortality of marine invertebrates is often very high in the early life-history stages and decreases in later stages. The possible size-dependent mortality of juvenile banana prawns, Penaeus merguiensis (2-15 mm carapace length), in the Gulf of Carpentaria was investigated. The analysis was based on data collected at two-weekly intervals by beam trawls at four sites over a period of six years (between September 1986 and March 1992). It was assumed that mortality was a parametric function of size, rather than a constant. Another complication in estimating mortality for juvenile banana prawns is that a significant proportion of the population emigrates from the study area each year. This effect was accounted for by incorporating the size-frequency pattern of the emigrants in the analysis. Both the extra parameter required to describe the size dependence of mortality and the parameter used to account for emigration were found to be significantly different from zero, and the instantaneous mortality rate declined from 0.89 week⁻¹ for 2 mm prawns to 0.02 week⁻¹ for 15 mm prawns.
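The abstract reports the endpoints of the fitted mortality-at-size relationship but not its functional form. Assuming, purely for illustration, a power-law decline M(L) = a·L^(-b), the two reported endpoints are enough to pin down a and b:

```python
import numpy as np

# Reported endpoints from the abstract: mortality declines from
# 0.89 week^-1 at 2 mm to 0.02 week^-1 at 15 mm carapace length.
L1, M1 = 2.0, 0.89
L2, M2 = 15.0, 0.02

# Assume a power-law form M(L) = a * L**(-b) (the abstract does not state
# the actual parametric form used); solve for a and b from the endpoints.
b = np.log(M1 / M2) / np.log(L2 / L1)
a = M1 * L1**b

def mortality(L):
    """Instantaneous mortality rate (week^-1) at carapace length L (mm)."""
    return a * L**(-b)

for L in (2, 5, 10, 15):
    print(f"{L:>2} mm: {mortality(L):.3f} per week")
```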
Abstract:
We propose an iterative estimating equations procedure for the analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iteratively reweighted least squares. Finite-sample performance of the procedure is studied by simulations and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
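In the linear special case the procedure reduces to iteratively reweighted (generalised) least squares. A minimal sketch of that special case on simulated longitudinal data is given below, alternating a GLS step for the regression coefficients with re-estimation of the unspecified within-subject covariance from residuals; the data-generating values and the exchangeable covariance are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy longitudinal data: n subjects, t repeated measurements each, with an
# exchangeable within-subject covariance (all values invented).
n, t, beta_true = 200, 4, np.array([1.0, 2.0])
X = rng.normal(size=(n, t, 2))
Sigma_true = 0.5 * np.eye(t) + 0.5          # diag 1.0, off-diagonal 0.5
errors = rng.multivariate_normal(np.zeros(t), Sigma_true, size=n)
y = X @ beta_true + errors

def iterative_gls(X, y, n_iter=10):
    """Alternate a GLS step for beta with re-estimation of the unspecified
    within-subject covariance from the current residuals."""
    n, t, p = X.shape
    Sigma = np.eye(t)                        # start from working independence
    for _ in range(n_iter):
        W = np.linalg.inv(Sigma)
        A = np.zeros((p, p))
        b = np.zeros(p)
        for i in range(n):                   # accumulate the GLS normal equations
            A += X[i].T @ W @ X[i]
            b += X[i].T @ W @ y[i]
        beta = np.linalg.solve(A, b)
        resid = y - X @ beta
        Sigma = resid.T @ resid / n          # re-estimate the covariance
    return beta, Sigma

beta_hat, _ = iterative_gls(X, y)
print("estimated coefficients:", beta_hat)
```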
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg; the number of fish species per trawl ranged from 60 to 138, and the number of invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were "rare" in subsamples (less than one individual per 10 kg subsample, or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weight) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the "rare" species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the "abundant" species (five or more individuals per 10 kg subsample, or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
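A toy version of the kind of simulation described, in which a synthetic catch dominated by rare species is split into equal subsamples and the cumulative proportion of species recorded is tracked as more of the catch is sorted, might look like the following; the species counts and abundance distribution are invented and are not the trawl data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic catch: species abundances drawn from a geometric distribution so
# that most species are rare (values are illustrative only).
n_species = 100
abundance = rng.geometric(p=0.02, size=n_species)
individuals = np.repeat(np.arange(n_species), abundance)
rng.shuffle(individuals)

# Split the catch into roughly equal "10 kg" subsamples and record the
# cumulative number of species seen as more of the catch is sorted.
n_subsamples = 20
subsamples = np.array_split(individuals, n_subsamples)
seen = set()
for k, sub in enumerate(subsamples, start=1):
    seen.update(sub.tolist())
    frac_sorted = k / n_subsamples
    print(f"{frac_sorted:4.0%} of catch sorted: "
          f"{len(seen) / n_species:5.1%} of species recorded")
```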
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design becomes twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
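To make the distinction between expected gain per trial and rate of gain concrete, the toy calculation below charges phase III time to every forwarded treatment but credits gain only for effective ones; using the two forwarding probabilities for ineffective treatments quoted above (0.65 and 0.12) and otherwise invented numbers, the per-trial gains can coincide while the gain per unit time differs markedly. This is an illustration of the argument only, not either paper's actual decision model.

```python
def gain_and_rate(prior_eff, p_fwd_eff, p_fwd_ineff,
                  gain_success=10.0, phase2_time=1.0, phase3_time=3.0):
    """Expected gain per phase II trial and gain per unit time.
    Gain accrues only when an effective treatment is forwarded, but
    phase III time is spent on every forwarded treatment."""
    p_forward = prior_eff * p_fwd_eff + (1 - prior_eff) * p_fwd_ineff
    expected_gain = prior_eff * p_fwd_eff * gain_success
    expected_time = phase2_time + p_forward * phase3_time
    return expected_gain, expected_gain / expected_time

# A design that forwards ineffective treatments 65% of the time versus one
# that does so only 12% of the time (probabilities from the abstract; all
# other numbers are invented for illustration).
for label, p_ineff in [("high false-positive design", 0.65),
                       ("low false-positive design", 0.12)]:
    g, r = gain_and_rate(prior_eff=0.1, p_fwd_eff=0.9, p_fwd_ineff=p_ineff)
    print(f"{label}: gain per trial {g:.2f}, gain per unit time {r:.3f}")
```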
Abstract:
Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimisation. The original NSGA-II works on a population of predetermined constant size, and its computational cost to evaluate one generation is O(mn^2), where m is the number of objective functions and n the population size. The basic idea motivating this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. In this work, several test functions are evaluated with both the original NSGA-II and the VPNSGA-II algorithms; each test is timed to measure the computational cost of each trial, and the results are compared.
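The O(mn^2) term quoted above comes from the pairwise dominance comparisons in NSGA-II's fast non-dominated sorting, which is the per-generation cost a variable population size aims to shrink. A minimal standalone sketch of that sorting step is given below (toy objective values, minimisation assumed); it is the standard NSGA-II building block, not the VPNSGA-II variant described in the report.

```python
import numpy as np

def fast_nondominated_sort(F):
    """Fast non-dominated sorting (Deb et al., NSGA-II). F is an (n, m)
    array of objective values to minimise; the O(m n^2) pairwise dominance
    comparisons are what dominate the per-generation cost."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]   # indices each solution dominates
    dom_count = np.zeros(n, dtype=int)      # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)              # first Pareto front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Toy bi-objective population (both objectives minimised).
pop = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [5.0, 5.0]])
print(fast_nondominated_sort(pop))          # -> [[0, 1, 3], [2], [4]]
```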