956 results for finite difference methods


Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES To compare biomechanical rupture risk parameters of asymptomatic, symptomatic and ruptured abdominal aortic aneurysms (AAA) using finite element analysis (FEA). STUDY DESIGN Retrospective biomechanical single center analysis of asymptomatic, symptomatic, and ruptured AAAs. Comparison of biomechanical parameters from FEA. MATERIALS AND METHODS From 2011 to 2013 computed tomography angiography (CTA) data from 30 asymptomatic, 15 symptomatic, and 15 ruptured AAAs were collected consecutively. FEA was performed according to the successive steps of AAA vessel reconstruction, segmentation and finite element computation. Biomechanical parameters Peak Wall Rupture Risk Index (PWRI), Peak Wall Stress (PWS), and Rupture Risk Equivalent Diameter (RRED) were compared among the three subgroups. RESULTS PWRI differentiated between asymptomatic and symptomatic AAAs (p < .0004) better than PWS (p < .1453). PWRI-dependent RRED was higher in the symptomatic subgroup compared with the asymptomatic subgroup (p < .0004). Maximum AAA external diameters were comparable between the two groups (p < .1355). Ruptured AAAs showed the highest values for external diameter, total intraluminal thrombus volume, PWS, RRED, and PWRI compared with asymptomatic and symptomatic AAAs. In contrast with symptomatic and ruptured AAAs, none of the asymptomatic patients had a PWRI value >1.0. This threshold value might identify patients at imminent risk of rupture. CONCLUSIONS Of the different FEA-derived parameters, PWRI distinguishes most precisely between asymptomatic and symptomatic AAAs. If elevated, this value may represent a negative prognostic factor for asymptomatic AAAs.
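In the FEA literature, PWRI is commonly defined as the maximum over the aneurysm wall of the ratio of local wall stress to local wall strength, which is why a value above 1.0 is a natural candidate threshold. A minimal sketch of that ratio (the element-wise stress and strength values below are illustrative assumptions, not the study's patient data or implementation):

```python
import numpy as np

def peak_wall_rupture_index(wall_stress, wall_strength):
    """PWRI: maximum over the wall of local stress / local strength.
    A value > 1.0 flags at least one element whose stress exceeds its
    estimated strength."""
    wall_stress = np.asarray(wall_stress, dtype=float)
    wall_strength = np.asarray(wall_strength, dtype=float)
    return float(np.max(wall_stress / wall_strength))

# Illustrative per-element values (kPa); not patient data.
stress = [120.0, 250.0, 180.0]
strength = [400.0, 240.0, 300.0]
pwri = peak_wall_rupture_index(stress, strength)
print(pwri)  # 250/240 ≈ 1.042, i.e. above the 1.0 threshold discussed
```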


PURPOSE To develop a method for computing and visualizing pressure differences derived from time-resolved velocity-encoded three-dimensional phase-contrast magnetic resonance imaging (4D flow MRI) and to compare pressure difference maps of patients with unrepaired and repaired aortic coarctation to young healthy volunteers. METHODS 4D flow MRI data of four patients with aortic coarctation either before or after repair (mean age 17 years, age range 3-28, one female, three males) and four young healthy volunteers without history of cardiovascular disease (mean age 24 years, age range 20-27, one female, three males) was acquired using a 1.5-T clinical MR scanner. Image analysis was performed with in-house developed image processing software. Relative pressures were computed based on the Navier-Stokes equation. RESULTS A standardized method for intuitive visualization of pressure difference maps was developed and successfully applied to all included patients and volunteers. Young healthy volunteers exhibited smooth and regular distribution of relative pressures in the thoracic aorta at mid systole with very similar distribution in all analyzed volunteers. Patients demonstrated disturbed pressures compared to volunteers. Changes included a pressure drop at the aortic isthmus in all patients, increased relative pressures in the aortic arch in patients with residual narrowing after repair, and increased relative pressures in the descending aorta in a patient after patch aortoplasty. CONCLUSIONS Pressure difference maps derived from 4D flow MRI can depict alterations of spatial pressure distribution in patients with repaired and unrepaired aortic coarctation. The technique might allow identifying pathophysiological conditions underlying complications after aortic coarctation repair.
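Relative pressures of the kind computed above follow from the Navier-Stokes momentum balance: the pressure gradient is obtained from the measured velocity field and then integrated spatially. A minimal one-dimensional sketch under assumed blood properties (ρ = 1060 kg/m³, μ = 3.5 mPa·s) and a synthetic velocity field, not the authors' in-house software:

```python
import numpy as np

def relative_pressure_1d(v, dx, dt, rho=1060.0, mu=3.5e-3):
    """Relative pressure along a 1-D streamline from the momentum
    balance dp/dx = -rho*(dv/dt + v*dv/dx) + mu*d2v/dx2.
    v has shape (n_time, n_x); returns pressure (Pa) relative to the
    first point, evaluated at the middle time frame."""
    dvdt = np.gradient(v, dt, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    d2vdx2 = np.gradient(dvdx, dx, axis=1)
    mid = v.shape[0] // 2
    dpdx = -rho * (dvdt[mid] + v[mid] * dvdx[mid]) + mu * d2vdx2[mid]
    # trapezoidal integration of dp/dx from the inlet
    p = np.concatenate(([0.0], np.cumsum(0.5 * (dpdx[1:] + dpdx[:-1]) * dx)))
    return p

# Illustrative uniformly accelerating plug flow (m/s); not MRI data.
t = np.linspace(0.0, 0.1, 11)[:, None]
x = np.linspace(0.0, 0.05, 21)[None, :]
v = 0.5 + 5.0 * t + 0.0 * x
p = relative_pressure_1d(v, dx=0.0025, dt=0.01)
# dv/dt = 5 m/s^2 everywhere -> dp/dx = -rho*5, a linear pressure drop
```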


In this paper we develop an adaptive procedure for the numerical solution of general, semilinear elliptic problems with possible singular perturbations. Our approach combines both prediction-type adaptive Newton methods and a linear adaptive finite element discretization (based on a robust a posteriori error analysis), thereby leading to a fully adaptive Newton–Galerkin scheme. Numerical experiments underline the robustness and reliability of the proposed approach for various examples.
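The Newton linearization at the core of such a scheme can be illustrated on the model problem -u'' + u³ = f on (0,1) with homogeneous Dirichlet conditions. The sketch below uses a fixed finite difference grid in place of the paper's adaptive finite element discretization, and plain Newton steps without the prediction-type damping, so it shows only the linearize-and-solve loop:

```python
import numpy as np

def newton_fd_semilinear(f, n=99, tol=1e-10, max_it=50):
    """Newton iteration for -u'' + u**3 = f on (0,1), u(0)=u(1)=0,
    discretized by second-order finite differences. Each step solves
    the linearized system J du = -F with Jacobian J = A + diag(3u^2)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # tridiagonal discretization of -d2/dx2
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros(n)
    for _ in range(max_it):
        F = A @ u + u**3 - f(x)
        J = A + np.diag(3.0 * u**2)
        du = np.linalg.solve(J, -F)
        u += du
        if np.linalg.norm(du, np.inf) < tol:
            break
    return x, u

# Manufactured solution u(x) = sin(pi x): f = pi^2 sin(pi x) + sin^3(pi x)
f = lambda x: np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x) ** 3
x, u = newton_fd_semilinear(f)
err = np.max(np.abs(u - np.sin(np.pi * x)))  # O(h^2) discretization error
```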


Nitinol stent oversizing is frequently performed in peripheral arteries to ensure a desirable lumen gain. However, the clinical effect of mis-sizing remains controversial. The goal of this study was to provide a better understanding of the structural and hemodynamic effects of Nitinol stent oversizing. Five patient-specific numerical models of non-calcified popliteal arteries were developed to simulate the deployment of Nitinol stents with oversizing ratios ranging from 1.1 to 1.8. In addition to arterial biomechanics, computational fluid dynamics methods were adopted to simulate the physiological blood flow inside the stented arteries. Results showed that stent oversizing led to a limited increase in the acute lumen gain, albeit at the cost of a significant increase in arterial wall stresses. Furthermore, localized areas affected by low Wall Shear Stress increased with higher oversizing ratios. Stents were also negatively impacted by the procedure as their fatigue safety factors gradually decreased with oversizing. These adverse effects to both the artery walls and stents may create circumstances for restenosis. Although the ideal oversizing ratio is stent-specific, this study showed that Nitinol stent oversizing has a very small impact on the immediate lumen gain, which contradicts the clinical motivations of the procedure.
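A quantity like the "localized areas affected by low Wall Shear Stress" above is typically reported as the fraction of luminal surface area below a WSS cutoff. A minimal sketch, where the 0.5 Pa threshold and the per-element data are illustrative assumptions rather than the study's values:

```python
import numpy as np

def low_wss_fraction(wss, areas, threshold=0.5):
    """Fraction of luminal surface area where wall shear stress (Pa)
    falls below a threshold; thresholds around 0.4-0.5 Pa are commonly
    used to mark restenosis-prone regions."""
    wss = np.asarray(wss, dtype=float)
    areas = np.asarray(areas, dtype=float)
    return float(areas[wss < threshold].sum() / areas.sum())

# Illustrative per-element WSS (Pa) and element areas (mm^2)
frac = low_wss_fraction([0.2, 0.8, 0.3, 1.5], [1.0, 2.0, 1.0, 1.0])
print(frac)  # 2.0 of 5.0 mm^2 below 0.5 Pa -> 0.4
```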


Several tests for the comparison of different groups in the randomized complete block design exist. However, there is a lack of robust estimators for the location difference between one group and all the others on the original scale. The relative marginal effects are commonly used in this situation, but they are more difficult for less experienced users to interpret and apply because of the different scale. In this paper, two nonparametric estimators for the comparison of one group against the others in the randomized complete block design are presented. Theoretical results such as asymptotic normality, consistency, translation invariance, scale preservation, unbiasedness, and median unbiasedness are derived. The finite-sample behavior of these estimators is examined by simulation of different scenarios. In addition, possible confidence intervals based on these estimators are discussed, and their behavior is likewise examined by simulation.
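Estimators with the listed properties (translation invariance, scale preservation, median unbiasedness) are typically of Hodges-Lehmann type: a median of pairwise differences, which stays on the original measurement scale. The sketch below compares one group against the pooled others and is an illustrative stand-in, not the paper's exact block-design estimator:

```python
import numpy as np

def pairwise_median_difference(group, others):
    """Hodges-Lehmann-type location-difference estimator on the
    original scale: the median over all pairwise differences x_i - y_j.
    Adding a constant to both samples shifts the estimate by zero, and
    rescaling both samples rescales it, matching the invariance
    properties discussed above."""
    x = np.asarray(group, dtype=float)
    y = np.asarray(others, dtype=float)
    return float(np.median(x[:, None] - y[None, :]))

d = pairwise_median_difference([5.0, 6.0, 7.0], [1.0, 2.0, 3.0])
print(d)  # median of {2,3,3,4,4,4,5,5,6} = 4.0
```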


In recent years, interest in proton radiotherapy has been increasing rapidly. Protons provide superior physical properties compared with the photons used in conventional radiotherapy. These properties result in depth dose curves with a large dose peak at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but also increase the demand for accurate dose estimation. To carry out accurate dose calculations, an accurate and detailed characterization of the physical proton beam exiting the treatment head is first necessary for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track, simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While pencil beam algorithms provide the advantage of fast dose computation, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations and, owing to recent improvements in efficiency, are expected to improve the accuracy of calculated dose distributions and to be introduced into clinical routine in the near future.
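The "finite proton range" mentioned above is well approximated in water by the textbook Bragg-Kleeman rule R = αE^p. The sketch below uses commonly quoted fit constants for water (α ≈ 0.0022 cm/MeV^p, p ≈ 1.77); it is an analytic approximation, not the Monte Carlo treatment-head modeling the abstract describes:

```python
def bragg_kleeman_range(energy_mev, alpha=0.0022, p=1.77):
    """Approximate proton range in water (cm) via the Bragg-Kleeman
    rule R = alpha * E**p, with alpha and p taken from commonly quoted
    fits for water."""
    return alpha * energy_mev ** p

r150 = bragg_kleeman_range(150.0)  # roughly 15-16 cm
r200 = bragg_kleeman_range(200.0)  # roughly 26 cm
# tissue beyond these depths receives essentially no primary dose,
# which is the sparing effect discussed above
```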


Most studies of differential gene expression have been conducted between two given conditions. The two-condition experimental (TCE) approach is simple in that all genes detected display a common differential expression pattern responsive to a common two-condition difference. Consequently, genes that are differentially expressed under conditions other than the given two are undetectable with the TCE approach. To address this problem, we propose a new approach called the multiple-condition experiment (MCE) without replication and develop corresponding statistical methods, including inference of pairs of conditions for genes, new t-statistics, and a generalized multiple-testing method for any multiple-testing procedure via a control parameter C. We applied these statistical methods to analyze our real MCE data from breast cancer cell lines and found that 85 percent of gene-expression variation was caused by genotypic effects and genotype-ANAX1 overexpression interactions, which agrees well with our expected results. We also applied our methods to the adenoma dataset of Notterman et al. and identified 93 differentially expressed genes that could not be found with TCE. The MCE approach is a conceptual breakthrough in many respects: (a) many conditions of interest can be studied simultaneously; (b) studying the association between differential expression of genes and conditions becomes easy; (c) it can provide more precise information for molecular classification and diagnosis of tumors; (d) it can save investigators a great deal of experimental resources and time.
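The abstract's generalized multiple-testing method (with its control parameter C) is not specified here, so as a standard stand-in, the Benjamini-Hochberg step-up procedure illustrates how a multiple-testing procedure converts per-gene p-values into reject/accept decisions at a chosen false discovery rate:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: sort the m p-values and
    reject the k smallest, where k = max{i : p_(i) <= i*q/m}.
    Shown as a generic example of a multiple-testing procedure, not
    the paper's C-parameterized method."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

flags = benjamini_hochberg([0.001, 0.009, 0.04, 0.2, 0.7], q=0.05)
print(flags)  # first two rejected (0.001<=0.01, 0.009<=0.02); 0.04>0.03
```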


Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two consecutive PCR cycles, we subtracted the fluorescence in the former cycle from that in the latter cycle, transforming the n-cycle raw data into n-1 cycle data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial numbers of DNA molecules were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, namely threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and is thus theoretically more accurate and reliable. This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
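The taking-difference idea can be sketched directly: if the exponential-phase signal is F_n = F0·E^n + B with a constant background B, the consecutive difference D_n = F_{n+1} - F_n = F0·(E-1)·E^n cancels B, and regressing ln(D_n) on n recovers E from the slope and F0 from the intercept. A minimal sketch on synthetic data (not the paper's implementation, which also handles criterion selection and mixed models):

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Taking-difference linear regression for qPCR: difference
    consecutive cycles to cancel a constant background, then fit
    ln(D_n) = ln(F0*(E-1)) + n*ln(E) by least squares."""
    F = np.asarray(fluorescence, dtype=float)
    D = np.diff(F)                      # n-cycle data -> n-1 differences
    n = np.arange(D.size)
    slope, intercept = np.polyfit(n, np.log(D), 1)
    E = np.exp(slope)                   # amplification efficiency (1..2)
    F0 = np.exp(intercept) / (E - 1.0)  # initial signal ~ initial DNA
    return E, F0

# Synthetic exponential-phase data: F0 = 2.0, E = 1.9, background B = 50
cycles = np.arange(12)
F = 2.0 * 1.9 ** cycles + 50.0
E, F0 = taking_difference_fit(F)
# recovers E = 1.9 and F0 = 2.0 despite the unknown background
```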


The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which can be indicated by the cross-level interaction effects. However, the statistical hypothesis test for the cross-level interaction effect in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at specific time points. Reporting and interpreting effect sizes have therefore received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for group trajectory comparisons, and their corresponding confidence intervals, in HLGM analyses, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories in HLGM are lacking, as are computing packages in the popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets; we also compared three methods of constructing confidence intervals around d and du and recommended the best one for application. Finally, we constructed 95% confidence intervals with the suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that, even when the statistical hypothesis test shows no significant difference between group trajectories, effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional and meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameters. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when the assumptions are not met.
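The core idea of a model-implied effect size at a specific time can be sketched for a linear growth model: the cross-level coefficients give the group difference in intercept and slope, the model-implied trajectory difference at time t follows by substitution, and dividing by a pooled SD yields a d-type effect size. The parameter names and the pooling choice below are illustrative assumptions, not the dissertation's exact functions:

```python
def trajectory_effect_size(gamma_diff_intercept, gamma_diff_slope,
                           t, sd_pooled):
    """Standardized model-implied difference between two group
    trajectories at time t in a linear growth model:
    d(t) = (gamma_diff_intercept + gamma_diff_slope * t) / sd_pooled."""
    diff = gamma_diff_intercept + gamma_diff_slope * t
    return diff / sd_pooled

# Slope difference 0.5 per wave, no intercept difference, pooled SD 2.0:
d4 = trajectory_effect_size(0.0, 0.5, t=4, sd_pooled=2.0)
print(d4)  # 1.0 -> a large d at wave 4 even if the slope test is n.s.
```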


Four species of planktic foraminifera from core-tops spanning a depth transect on the Ontong Java Plateau were prepared for Mg/Ca analysis both with (Cd-cleaning) and without (Mg-cleaning) a reductive cleaning step. Reductive cleaning caused etching of foraminiferal calcite, focused on Mg-rich inner calcite, even on tests which had already been partially dissolved at the seafloor. Despite corrosion, there was no difference in Mg/Ca of Pulleniatina obliquiloculata between cleaning methods. Reductive cleaning decreased Mg/Ca by an average (all depths) of ~ 4% for Globigerinoides ruber white and ~ 10% for Neogloboquadrina dutertrei. Mg/Ca of Globigerinoides sacculifer (above the calcite saturation horizon only) was 5% lower after reductive cleaning. The decrease in Mg/Ca due to reductive cleaning appeared insensitive to preservation state for G. ruber, N. dutertrei and P. obliquiloculata. Mg/Ca of Cd-cleaned G. sacculifer appeared less sensitive to dissolution than that of Mg-cleaned tests. Mg-cleaning is adequate, but SEM and contaminants (Al/Ca, Fe/Ca and Mn/Ca) show that Cd-cleaning is more effective for porous species. A second aspect of the study addressed sample loss during cleaning. Lower yield after Cd-cleaning for G. ruber, G. sacculifer and N. dutertrei confirmed this to be the more aggressive method. The strongest correlations between yield and Delta[CO3^2-] in core-top samples were for Cd-cleaned G. ruber (r = 0.88, p = 0.020) and Cd-cleaned P. obliquiloculata (r = 0.68, p = 0.030). In a down-core record (WIND28K), the correlation, r, between yield values > 30% and the dissolution index, XDX, was -0.61 (p = 0.002). Where cleaning yield was < 30%, most Mg-cleaned Mg/Ca values were biased by dissolution.


Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference, statistical vote counting, the parametric response ratio and the non-parametric response ratio. The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being able to be used when variances are not reported. There are as yet no guidelines to indicate which method is best for use in each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects that they include, their variance and effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it does require more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
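Two of the aggregation methods compared above can be sketched directly: the fixed-effect weighted mean difference weights each study's raw mean difference by the inverse of its variance, while the parametric response ratio needs only the two means (which is why it remains usable when variances are unreported). The numbers below are illustrative, not the simulation's data:

```python
import math

def weighted_mean_difference(means_t, means_c, vars_t, vars_c, ns_t, ns_c):
    """Fixed-effect WMD: each study's mean difference weighted by the
    inverse of its sampling variance var_d = s_t^2/n_t + s_c^2/n_c."""
    wsum = dsum = 0.0
    for mt, mc, vt, vc, nt, nc in zip(means_t, means_c, vars_t, vars_c,
                                      ns_t, ns_c):
        w = 1.0 / (vt / nt + vc / nc)
        wsum += w
        dsum += w * (mt - mc)
    return dsum / wsum

def response_ratio(mean_t, mean_c):
    """Parametric response ratio, conventionally analyzed on the log
    scale; requires only the group means."""
    return math.log(mean_t / mean_c)

wmd = weighted_mean_difference([10.0, 12.0], [8.0, 8.0],
                               [4.0, 4.0], [4.0, 4.0], [20, 20], [20, 20])
print(wmd)  # equal weights -> (2 + 4) / 2 = 3.0
```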


Following the success achieved in previous research projects using non-destructive methods to estimate the physical and mechanical aging of particle and fibre boards, this paper studies the relationships between aging and physical and mechanical changes, using non-destructive measurements of oriented strand board (OSB). 184 pieces of OSB board from a French source were tested to analyze their actual physical and mechanical properties. The same properties were estimated using acoustic non-destructive methods (ultrasound and stress wave velocity) during a physical laboratory aging test. Propagation wave velocity was recorded with the sensors aligned, edge to edge, and at an angle of 45 degrees with both sensors on the same face of the board; this is because aligned measurements are not possible on site. The velocity results are always higher in the 45 degree measurements. Given the results of the statistical analysis, it can be concluded that there is a strong relationship between the acoustic measurements and the decline in the physical and mechanical properties of the panels due to aging. The authors propose several models to estimate the physical and mechanical properties of the boards, as well as their degree of aging. The best results are obtained using ultrasound, although the difference in comparison with the stress wave method is not very significant. A reliable prediction of the degree of deterioration (aging) of the boards is presented.
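Acoustic NDT of wood-based panels typically rests on two simple relations: velocity from sensor spacing and time of flight, and the one-dimensional dynamic modulus E = ρv². A minimal sketch with illustrative numbers (sensor spacing, time of flight and density are assumptions, not the paper's measurements or its regression models):

```python
def stress_wave_velocity(distance_m, time_s):
    """Propagation velocity from sensor spacing and time of flight."""
    return distance_m / time_s

def dynamic_moe(velocity_ms, density_kgm3):
    """One-dimensional dynamic modulus of elasticity E = rho * v**2,
    the standard relation behind acoustic NDT of wood-based panels."""
    return density_kgm3 * velocity_ms ** 2

v = stress_wave_velocity(0.5, 2.0e-4)  # 0.5 m spacing, 200 us -> 2500 m/s
E = dynamic_moe(v, 620.0)              # ~3.9 GPa at an OSB-like density
print(v, E)
```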


When an automobile passes over a bridge, dynamic effects are produced in both vehicle and structure. In addition, the bridge itself moves when exposed to wind, inducing dynamic effects on the vehicle that have to be considered. The main objective of this work is to understand the influence of the different parameters concerning the vehicle, the bridge, the road roughness, and the wind on the comfort and safety of vehicles crossing bridges. Nonlinear finite element models are used for the structures, and multibody dynamic models are employed for the vehicles. The interaction between vehicle and bridge is considered by contact methods. Road roughness is described by the power spectral density (PSD) proposed in ISO 8608. To account for the fact that the profiles under the right and left wheels are different but not independent, the hypotheses of homogeneity and isotropy are assumed. To generate the wind velocity history along the road, the Sandia method is employed. The global problem is solved by means of the finite element method. First, the methodology for modelling the interaction is verified on a benchmark. Next, the case of a vehicle running along a rigid road and subjected to the action of turbulent wind is analyzed, and the road roughness is incorporated in a following step. Finally, the flexibility of the bridge is added to the model by making the vehicle run over the structure. The application of this methodology will make it possible to understand the influence of the different parameters on the comfort and safety of road vehicles crossing wind-exposed bridges. These results will help in recommending measures to make traffic over bridges more reliable without affecting the structural integrity of the viaduct.
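A road profile consistent with the ISO 8608 displacement PSD, Gd(n) = Gd(n0)·(n/n0)^(-w), is commonly synthesized by superposing cosines whose amplitudes follow the spectrum and whose phases are random. The sketch below uses that standard spectral-superposition method with Gd(n0) = 16e-6 m³ (the class A-B boundary) as an assumption; it is not necessarily the paper's exact generator, which also correlates the left and right wheel tracks:

```python
import numpy as np

def iso8608_profile(length_m, dx, Gd_n0=16e-6, n0=0.1, w=2.0, seed=0):
    """Synthesize a 1-D road elevation profile from the ISO 8608
    one-sided displacement PSD Gd(n) = Gd(n0)*(n/n0)**(-w) by summing
    cosines with spectrum-consistent amplitudes and random phases."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length_m, dx)
    n_max = 1.0 / (2.0 * dx)            # Nyquist spatial frequency, cycles/m
    n_min = 1.0 / length_m
    dn = n_min
    n = np.arange(n_min, n_max, dn)
    Gd = Gd_n0 * (n / n0) ** (-w)
    amp = np.sqrt(2.0 * Gd * dn)        # cosine amplitude per frequency band
    phase = rng.uniform(0.0, 2.0 * np.pi, n.size)
    z = np.sum(amp[None, :] * np.cos(2.0 * np.pi * x[:, None] * n[None, :]
                                     + phase[None, :]), axis=1)
    return x, z

x, z = iso8608_profile(length_m=100.0, dx=0.1)
# elevations on the order of millimetres for a class A-B road
```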


The aim of this thesis is to study the mechanisms of instability that occur in swept wings when the angle of attack increases. For this, a simplified model of the non-orthogonal swept leading-edge boundary layer has been used, together with different numerical techniques, in order to solve the linear stability problem that describes the behavior of perturbations superposed upon this base flow. Two different approaches, matrix-free and matrix-forming methods, have been validated using direct numerical simulations with spectral resolution. In this way, flow instability in the non-orthogonal swept attachment-line boundary layer is addressed in a linear analysis framework via the solution of the pertinent global (Bi-Global) PDE-based eigenvalue problem. Subsequently, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis, Fedorov, Obrist & Dallmann (2003) for orthogonal flow, which includes previous models as particular cases and recovers global instability analysis results, is presented for non-orthogonal flow. Direct numerical simulations have been used to verify the stability results and unravel the limits of validity of the basic flow model analyzed. The effect of the angle of attack, AoA, on the critical conditions of the non-orthogonal problem has been documented; an increase of the angle of attack, from AoA = 0 (orthogonal flow) up to values close to π/2, which make the assumptions under which the basic flow is derived questionable, is found to systematically destabilize the flow. The critical conditions of non-orthogonal flows at 0 ≤ AoA ≤ π/2 are shown to be recoverable from those of orthogonal flow via a simple analytical transformation involving AoA. These results can help to understand the mechanisms of destabilization that occur at the attachment line of wings at finite angles of attack. Studies taking into account variations of the pressure field in the basic flow, or the extension to compressible flows, remain open issues.
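The matrix-forming approach mentioned above amounts to discretizing the linearized operator and solving the resulting algebraic eigenvalue problem directly. A toy version of that workflow, using the 1-D diffusion operator with Dirichlet conditions as a stand-in for the Bi-Global stability operator (finite differences here, rather than the thesis's spectral discretization):

```python
import numpy as np

def leading_eigenvalues(n=200, k=4):
    """Matrix-forming stability analysis in miniature: build the
    finite difference matrix for d2/dx2 on (0,1) with homogeneous
    Dirichlet conditions, then solve the dense symmetric eigenvalue
    problem and return the k least-damped (largest) eigenvalues."""
    h = 1.0 / (n + 1)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    vals = np.linalg.eigvalsh(A)
    return np.sort(vals)[::-1][:k]

lam = leading_eigenvalues()
# Exact spectrum is -(m*pi)**2: lam[0] ≈ -pi^2, lam[1] ≈ -4*pi^2, ...
```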


A mathematical formulation for finite strain elasto plastic consolidation of fully saturated soil media is presented. Strong and weak forms of the boundary-value problem are derived using both the material and spatial descriptions. The algorithmic treatment of finite strain elastoplasticity for the solid phase is based on multiplicative decomposition and is coupled with the algorithm for fluid flow via the Kirchhoff pore water pressure. Balance laws are written for the soil-water mixture following the motion of the soil matrix alone. It is shown that the motion of the fluid phase only affects the Jacobian of the solid phase motion, and therefore can be characterized completely by the motion of the soil matrix. Furthermore, it is shown from energy balance consideration that the effective, or intergranular, stress is the appropriate measure of stress for describing the constitutive response of the soil skeleton since it absorbs all the strain energy generated in the saturated soil-water mixture. Finally, it is shown that the mathematical model is amenable to consistent linearization, and that explicit expressions for the consistent tangent operators can be derived for use in numerical solutions such as those based on the finite element method.
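The effective-stress statement above can be made concrete: for a saturated medium, the effective (intergranular) stress equals the total stress with the pore water pressure removed. The sketch below writes this for a tension-positive stress tensor and compression-positive pore pressure, a sign convention assumed here for illustration (the paper works with the Kirchhoff measure; the numbers are not from the paper):

```python
import numpy as np

def effective_stress(tau_total, pore_pressure):
    """Effective (intergranular) stress for a saturated medium:
    tau' = tau + p * I, with tau tension-positive and the pore water
    pressure p compression-positive, so pore pressure is carried off
    the total stress before evaluating the skeleton's constitutive
    response."""
    return np.asarray(tau_total, dtype=float) + pore_pressure * np.eye(3)

tau = np.diag([-100.0, -80.0, -80.0])  # total stress (kPa), compression < 0
tau_eff = effective_stress(tau, 30.0)  # 30 kPa pore water pressure
print(np.diag(tau_eff))  # [-70. -50. -50.]: the skeleton carries the rest
```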