Abstract:
Introduction: The posterior inclination of the tibial component is an important factor that can affect the success of total knee arthroplasty. It can reduce posterior impingement and thus increase the range of flexion, but it may also induce instability in flexion, anterior impingement on the polyethylene of a posterior-stabilized knee prosthesis, and anterior conflict between the stem and the cortical bone. Although the problem is recognized, there is still debate about the ideal inclination angle and the surgical technique to avoid an excessive posterior inclination. The aim of this study was to predict the effect of a posterior inclination of the tibial component on the contact pattern on the tibial insert, using a numerical musculoskeletal model of the knee joint. Methods: A 3D finite element model of the knee joint was developed to simulate an active and loaded squat movement after total knee arthroplasty. Flexion was actively controlled by the quadriceps muscle; muscle activations were estimated from EMG data and synchronized by a feedback algorithm. Two inclinations of the tibial tray were considered: a posterior inclination of 0° or 10°. Over the entire range of flexion, the following quantities were calculated: the tibio-femoral and patello-femoral contact forces and the contact pattern on the polyethylene insert. The antero-posterior displacement of the contact pattern was also measured. Abaqus 6.7 was used for all analyses. Results: The tibio-femoral and patello-femoral contact forces increased during flexion and reached 4 and 7 BW (body weight), respectively, at 90° of flexion. They were slightly affected by the inclination of the tibial tray. Without posterior inclination, the contact pattern on the tibial insert remained centered. The contact pressure was lower than 5 MPa below 60° of flexion, but exceeded 20 MPa at 90° of flexion. The posterior inclination displaced the contact point posteriorly by 2 to 4 mm. Conclusion: The inclination of the tibial tray displaced the contact pattern towards the posterior border of the tibial insert. However, even for 10° of inclination, the contact center remained far from the posterior border (12 mm). No instability was predicted for this movement.
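The abstract states that EMG-derived quadriceps activations were synchronized by a feedback algorithm but does not describe it. The sketch below is a minimal proportional-feedback loop under assumed names (`simulate_step`, gain `k_p`, the toy flexion model); it only illustrates how an activation can be scaled until the simulated flexion angle tracks a target and is not the authors' algorithm.

```python
# Minimal sketch of EMG-driven flexion control with proportional feedback.
# The paper does not detail its feedback algorithm; the gain, activation
# bounds and single-muscle simplification below are illustrative assumptions.
import numpy as np

def track_flexion(target_angle_deg, emg_activation, simulate_step,
                  k_p=0.01, n_steps=200):
    """Scale the EMG-derived quadriceps activation until the simulated
    flexion angle (returned by `simulate_step`, a stand-in for one FE
    increment) matches the target angle."""
    activation = emg_activation
    angle = 0.0
    for _ in range(n_steps):
        angle = simulate_step(activation)           # hypothetical FE call
        error = target_angle_deg - angle
        activation = np.clip(activation + k_p * error, 0.0, 1.0)
    return activation, angle

# Toy stand-in for the finite element solver: flexion grows with activation.
toy_model = lambda a: 120.0 * a
act, ang = track_flexion(target_angle_deg=90.0, emg_activation=0.5,
                         simulate_step=toy_model)
```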
Abstract:
Contamination of weather radar echoes by anomalous propagation (anaprop) mechanisms remains a serious issue in the quality control of radar precipitation estimates. Although significant progress has been made in identifying clutter due to anaprop, there is no unique method that solves the question of data reliability without removing genuine data. The work described here relates to the development of a software application that uses a numerical weather prediction (NWP) model to obtain the temperature, humidity and pressure fields needed to calculate the three-dimensional structure of the atmospheric refractive index, from which a physically based prediction of the incidence of clutter can be made. This technique can be used in conjunction with existing methods for clutter removal by modifying the parameters of detectors or filters according to the physical evidence for anomalous propagation conditions. The parabolic equation method (PEM) is a well-established technique for solving the equations of beam propagation in a non-uniformly stratified atmosphere but, although intrinsically very efficient, it is not sufficiently fast to be practicable for near real-time modelling of clutter over the entire area observed by a typical weather radar. We demonstrate a fast hybrid PEM technique that is capable of providing acceptable results in conjunction with a high-resolution terrain elevation model, using a standard desktop personal computer. We discuss the performance of the method and approaches for the improvement of the model profiles in the lowest levels of the troposphere.
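For readers unfamiliar with the PEM, the sketch below is a textbook split-step Fourier integration of the narrow-angle parabolic equation for a scalar field in a vertically varying refractive index. It is not the hybrid scheme of the paper; the wavelength, grid spacing, exponential refractivity profile and Gaussian source are all illustrative assumptions, and terrain and an absorbing upper boundary are omitted for brevity.

```python
# Split-step Fourier solution of the narrow-angle parabolic equation
#   du/dx = (i/(2 k0)) d2u/dz2 + i k0 (n - 1) u
# This is a textbook PE integrator, not the paper's hybrid scheme; the
# wavelength, grid and refractivity profile are illustrative assumptions.
import numpy as np

wavelength = 0.05                      # 5 cm (C-band) radar wavelength
k0 = 2 * np.pi / wavelength
nz, dz = 1024, 1.0                     # vertical grid (m)
dx = 50.0                              # range step (m)
z = np.arange(nz) * dz
kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)

# Exponential refractivity profile (standard atmosphere assumed here);
# in the application it would come from NWP temperature/humidity/pressure.
n = 1.0 + 315e-6 * np.exp(-z / 7350.0)

# Gaussian beam source at antenna height (illustrative)
u = np.exp(-((z - 100.0) ** 2) / (2 * 30.0 ** 2)).astype(complex)

diffraction = np.exp(-1j * kz ** 2 * dx / (2 * k0))    # free-space propagator
refraction = np.exp(1j * k0 * (n - 1.0) * dx)          # phase screen

for _ in range(200):                   # march 10 km in range
    u = np.fft.ifft(diffraction * np.fft.fft(u))       # diffraction step
    u *= refraction                                    # refraction step
power_db = 20 * np.log10(np.abs(u) + 1e-12)            # field in dB
```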
Abstract:
The current operational very short-term and short-term quantitative precipitation forecast (QPF) at the Meteorological Service of Catalonia (SMC) is produced by three different methodologies: advection of the radar reflectivity field (ADV); identification, tracking and forecasting of convective structures (CST); and numerical weather prediction (NWP) models using observational data assimilation (radar, satellite, etc.). These precipitation forecasts have different characteristics, lead times and spatial resolutions. The objective of this study is to combine these methods in order to obtain a single, optimized QPF at each lead time. This combination (blending) of the radar forecasts (ADV and CST) and the precipitation forecast from the NWP model is carried out by means of different methodologies according to the prediction horizon. Firstly, in order to take advantage of the rainfall location and intensity from radar observations, a phase correction technique is applied to the NWP output to derive an additional corrected forecast (MCO). To select the best precipitation estimate in the first and second hours (t+1 h and t+2 h), the information from radar advection (ADV) and the corrected model output (MCO) are mixed using weights that vary dynamically according to indices that quantify the quality of these predictions. This procedure integrates the skill in rainfall location and pattern given by the advection of the radar reflectivity field with the capacity of the NWP models to generate new precipitation areas. From the third hour (t+3 h), as radar-based forecasting generally has low skill, only the quantitative precipitation forecast from the model is used. This blending of different sources of prediction is verified for different types of episodes (convective, moderately convective and stratiform) in order to obtain a robust methodology that can be implemented operationally and dynamically.
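A minimal sketch of the lead-time-dependent blending described above is given below, assuming hypothetical quality indices `q_adv` and `q_mco` and a simple normalized weight; the operational SMC weighting scheme is not specified in the abstract.

```python
# Minimal sketch of lead-time-dependent blending of a radar advection
# nowcast (ADV) and a phase-corrected NWP forecast (MCO).  The weighting
# function and quality indices are illustrative assumptions, not the
# SMC operational formulation.
import numpy as np

def blend_qpf(adv, mco, nwp, lead_time_h, q_adv, q_mco):
    """Return a blended precipitation field for one lead time.

    adv, mco, nwp : 2-D precipitation fields (mm) on a common grid
    q_adv, q_mco  : scalar quality indices in [0, 1] for ADV and MCO
    """
    if lead_time_h >= 3:
        return nwp                       # radar-based skill assumed too low
    w = q_adv / (q_adv + q_mco + 1e-9)   # dynamic weight from quality indices
    return w * adv + (1.0 - w) * mco

# Toy usage with random fields
rng = np.random.default_rng(0)
adv = rng.gamma(2.0, 1.0, (100, 100))
mco = rng.gamma(2.0, 1.0, (100, 100))
blended = blend_qpf(adv, mco, nwp=mco, lead_time_h=1, q_adv=0.7, q_mco=0.5)
```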
Abstract:
An epidemic model is formulated by a reaction–diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation, two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
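As an illustration of a conservative finite volume discretization with cross-diffusion, the sketch below advances a 1-D susceptible-infected system with constant diffusion coefficients and explicit Euler time stepping; the nonlinear constitutive laws and the adaptive multiresolution strategy of the paper are not reproduced.

```python
# Conservative finite volume sketch for a 1-D susceptible-infected system
# with cross-diffusion.  Reaction terms, constant diffusion coefficients
# and explicit Euler time stepping are illustrative simplifications.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
dt = 1e-5
beta, gamma = 4.0, 1.0            # infection and recovery rates (assumed)
dS, dI = 1e-3, 1e-3               # self-diffusion coefficients (assumed)
dSI, dIS = 5e-3, 0.0              # cross-diffusion coefficients (assumed)

rng = np.random.default_rng(1)
S = np.ones(nx)
I = 0.01 * rng.random(nx)

def face_flux(u, d):
    """Diffusive flux of u at the nx-1 interior cell faces."""
    return d * (u[1:] - u[:-1]) / dx

for _ in range(20000):
    FS = face_flux(S, dS) + face_flux(I, dSI)   # flux of S: self + cross term
    FI = face_flux(I, dI) + face_flux(S, dIS)
    # zero-flux boundaries: pad face fluxes with 0 and take divergence
    divS = np.diff(np.concatenate(([0.0], FS, [0.0]))) / dx
    divI = np.diff(np.concatenate(([0.0], FI, [0.0]))) / dx
    rS = -beta * S * I                          # local SI dynamics (assumed)
    rI = beta * S * I - gamma * I
    S = S + dt * (rS + divS)
    I = I + dt * (rI + divI)
```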
Abstract:
We are interested in the development, implementation and testing of an orthotropic model for cardiac contraction based on an active strain decomposition. Our model addresses the coupling of a transversely isotropic mechanical description at the cell level with an orthotropic constitutive law for incompressible tissue at the macroscopic level. The main differences from the active stress model are addressed in detail, and a finite element discretization using Taylor-Hood and MINI elements is proposed and illustrated with numerical examples.
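The decomposition itself is not written out in the abstract; a common form of the active strain approach in the cardiac mechanics literature, given here as background rather than as the authors' exact constitutive choice, is:

```latex
% Multiplicative active strain decomposition of the deformation gradient
% (standard form in the active strain literature; not necessarily the
% exact constitutive choice of this paper).
\[
  \mathbf{F} = \mathbf{F}_E\,\mathbf{F}_A, \qquad
  \mathbf{F}_A = \mathbf{I}
    + \gamma_f\,\mathbf{f}_0\otimes\mathbf{f}_0
    + \gamma_s\,\mathbf{s}_0\otimes\mathbf{s}_0
    + \gamma_n\,\mathbf{n}_0\otimes\mathbf{n}_0 ,
\]
where $\mathbf{f}_0,\mathbf{s}_0,\mathbf{n}_0$ are the reference fiber, sheet
and sheet-normal directions, $\gamma_f,\gamma_s,\gamma_n$ are prescribed
activation functions, and only the elastic part $\mathbf{F}_E$ enters the
passive strain energy.
```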
Abstract:
Developing a novel technique for the efficient, noninvasive clinical evaluation of bone microarchitecture remains both crucial and challenging. The trabecular bone score (TBS) is a new gray-level texture measurement that is applicable to dual-energy X-ray absorptiometry (DXA) images. Significant correlations between TBS and standard 3-dimensional (3D) parameters of bone microarchitecture have been obtained using a numerical simulation approach. The main objective of this study was to empirically evaluate such correlations in anteroposterior spine DXA images. Thirty dried human cadaver vertebrae were evaluated. Micro-computed tomography acquisitions of the bone pieces were obtained at an isotropic resolution of 93μm. Standard parameters of bone microarchitecture were evaluated in a defined region within the vertebral body, excluding cortical bone. The bone pieces were measured on a Prodigy DXA system (GE Medical-Lunar, Madison, WI), using a custom-made positioning device and experimental setup. Significant correlations were detected between TBS and 3D parameters of bone microarchitecture, mostly independent of any correlation between TBS and bone mineral density (BMD). The greatest correlation was between TBS and connectivity density, with TBS explaining roughly 67.2% of the variance. Based on multivariate linear regression modeling, we have established a model to allow for the interpretation of the relationship between TBS and 3D bone microarchitecture parameters. This model indicates that TBS adds greater value and power of differentiation between samples with similar BMDs but different bone microarchitectures. It has been shown that it is possible to estimate bone microarchitecture status derived from DXA imaging using TBS.
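As an illustration of the multivariate linear regression and variance-explained reasoning used in the study, the sketch below fits TBS against a few 3D microarchitecture parameters on synthetic data; the variable names and values are assumptions, not the study's measurements.

```python
# Sketch of the kind of multivariate linear regression used to relate TBS
# to 3-D microarchitecture parameters.  The data below are synthetic; only
# the modelling pattern (least squares fit and R^2) is shown.
import numpy as np

rng = np.random.default_rng(2)
n = 30                                   # 30 vertebrae, as in the study
conn_d = rng.normal(3.0, 0.8, n)         # connectivity density (1/mm^3)
tb_n = rng.normal(1.2, 0.2, n)           # trabecular number (1/mm)
tb_sp = rng.normal(0.8, 0.15, n)         # trabecular separation (mm)
tbs = 0.5 + 0.15 * conn_d + 0.2 * tb_n - 0.1 * tb_sp \
      + rng.normal(0, 0.05, n)           # synthetic TBS values

X = np.column_stack([np.ones(n), conn_d, tb_n, tb_sp])
coef, *_ = np.linalg.lstsq(X, tbs, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((tbs - pred) ** 2) / np.sum((tbs - tbs.mean()) ** 2)
print("coefficients:", coef, "R^2:", round(r2, 3))   # R^2 = variance explained
```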
Abstract:
Quantitative assessment of the hazards of, and exposures to, nanomaterials faces many uncertainties that will only be resolved as scientific knowledge of their properties progresses. One consequence of these uncertainties is that the occupational exposure limit values currently defined for dusts are not necessarily relevant to nanomaterials. In the absence of a quantitative reference framework, and at the request of the DGS to inform the work of AFNOR and ISO on the subject, a graduated risk management (control banding) approach was developed within Anses. This development was carried out with the help of a group of rapporteur experts attached to the specialized expert committee on the assessment of risks related to physical agents, new technologies and major infrastructure projects. The implementation of the proposed graduated risk management approach rests on four main steps: 1. Gathering information. This step consists of collecting the available information on the hazards of the manufactured nanomaterial under consideration, as well as on the potential exposure of people at their workstations (field observation, measurements, etc.). 2. Assigning a hazard band. The potential hazard of the manufactured nanomaterial present, whether raw or incorporated into a matrix (liquid or solid), is assessed in this step. The assigned hazard band takes into account the hazardousness of the bulk product or of its analogous substance at the non-nanometric scale, the biopersistence of the material (for fibrous materials), its solubility and its possible reactivity. 3. Assigning an exposure band. The exposure band of the manufactured nanomaterial under consideration, or of a product containing it, is defined by the emission potential of the product. It takes into account its physical form (solid, liquid, powder, aerosol), its dustiness and its volatility. The number of workers, the frequency and duration of exposure and the quantity handled are not taken into account, unlike in a conventional chemical risk assessment. 4. Obtaining a risk control band. Crossing the previously assigned hazard and exposure bands defines the level of risk control, which corresponds to the technical and organizational measures to be implemented to keep the risk as low as possible. An action plan is then defined to guarantee the effectiveness of the prevention recommended by the determined control level. It takes into account the prevention measures already in place and reinforces them if necessary. If the measures indicated by the risk control level are not feasible, for example for technical or budgetary reasons, an in-depth risk assessment will have to be carried out by an expert. Graduated risk management is an alternative method for carrying out a qualitative risk assessment and putting prevention measures in place without resorting to a quantitative risk assessment. Its use seems particularly well suited to the context of manufactured nanomaterials, for which the choice of reference values (occupational exposure limit values) and of appropriate measurement techniques suffers from great uncertainty.
The proposed approach relies on simple criteria that are accessible in the scientific literature or through the technical data for the products used. Nevertheless, its implementation requires minimal competence in the fields of chemical risk prevention (chemistry, toxicology, etc.), nanosciences and nanotechnologies.
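A minimal sketch of the band-crossing step (step 4) is given below; the band labels and the control matrix entries are hypothetical placeholders for illustration, not the Anses tables.

```python
# Illustrative control banding lookup: crossing a hazard band with an
# emission/exposure band to obtain a risk control band.  Band labels and
# matrix entries are hypothetical placeholders, not the Anses tables.
HAZARD_BANDS = ["HB1", "HB2", "HB3", "HB4", "HB5"]       # increasing hazard
EXPOSURE_BANDS = ["EB1", "EB2", "EB3", "EB4"]            # increasing emission

# Control levels CL1 (general ventilation) .. CL5 (full containment plus
# expert review); the assignments below are invented for illustration only.
CONTROL_MATRIX = {
    "HB1": ["CL1", "CL1", "CL2", "CL3"],
    "HB2": ["CL1", "CL2", "CL3", "CL4"],
    "HB3": ["CL2", "CL3", "CL4", "CL4"],
    "HB4": ["CL3", "CL4", "CL4", "CL5"],
    "HB5": ["CL4", "CL5", "CL5", "CL5"],
}

def control_band(hazard: str, exposure: str) -> str:
    """Return the risk control band for a hazard band / exposure band pair."""
    return CONTROL_MATRIX[hazard][EXPOSURE_BANDS.index(exposure)]

print(control_band("HB3", "EB2"))   # -> "CL3"
```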
Abstract:
In this work we analyze how patchy distributions of CO2 and brine within sand reservoirs may lead to significant attenuation and velocity dispersion effects, which in turn may have a profound impact on surface seismic data. The ultimate goal of this paper is to contribute to the understanding of these processes within the framework of the seismic monitoring of CO2 sequestration, a key strategy to mitigate global warming. We first carry out a Monte Carlo analysis to study the statistical behavior of attenuation and velocity dispersion of compressional waves traveling through rocks with properties similar to those of the Utsira Sand, Sleipner field, containing quasi-fractal patchy distributions of CO2 and brine. These results show that the mean patch size and the CO2 saturation play key roles in the observed wave-induced fluid flow effects, which can be remarkably important when CO2 concentrations are low and mean patch sizes are relatively large. To analyze these effects on the corresponding surface seismic data, we perform numerical simulations of wave propagation considering reservoir models and CO2 accumulation patterns similar to those of the CO2 injection site in the Sleipner field. These numerical experiments suggest that wave-induced fluid flow effects may produce changes in the reservoir's seismic response, significantly modifying the main seismic attributes usually employed in the characterization of these environments. Consequently, the determination of the nature of the fluid distributions, as well as the proper modeling of the seismic data, constitute important aspects that should not be ignored in the seismic monitoring of CO2 sequestration.
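A skeleton of the Monte Carlo analysis is sketched below; the sampling ranges are invented and the rock-physics response is reduced to a clearly labeled placeholder function, since a proper patchy-saturation attenuation model is beyond the scope of this illustration.

```python
# Skeleton of a Monte Carlo study of attenuation versus CO2 saturation and
# patch size.  Sampling ranges are invented and `inverse_q` is a placeholder
# for a mesoscopic-flow rock-physics model (e.g. a White-type patchy
# saturation model), which is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)

def inverse_q(saturation_co2, patch_size_m, freq_hz=30.0):
    """Placeholder attenuation model: returns 1/Q for one realization.
    Replace with a proper patchy-saturation model for real work."""
    return 0.05 * saturation_co2 * (1 - saturation_co2) * patch_size_m

n_real = 10000
sat = rng.uniform(0.05, 0.9, n_real)              # CO2 saturation (assumed)
patch = rng.lognormal(np.log(0.2), 0.5, n_real)   # mean patch size (m, assumed)
q_inv = inverse_q(sat, patch)

print("mean 1/Q:", q_inv.mean(), " p95 1/Q:", np.quantile(q_inv, 0.95))
```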
Abstract:
The impact of charcoal production on soil hydraulic properties, runoff response and erosion susceptibility was studied in both field and simulation experiments. Core and composite samples from 12 randomly selected sites within the Kotokosu catchment were taken from the 0-10 cm layer of charcoal site soils (CSS) and adjacent field soils (AFS). These samples were used to determine saturated hydraulic conductivity (Ksat), bulk density, total porosity, soil texture and color. Infiltration, surface albedo and soil surface temperature were also measured in both CSS and AFS. The measured properties were used as inputs to a rainfall-runoff simulation experiment on a smooth (5% slope) 25 x 25 m plot gridded at 10 cm resolution. Typical rainfall intensities of the study watershed (high, moderate and low) were applied to five different combinations of Ksat distributions that could be expected in this landscape. The results showed significantly (p < 0.01) higher flow characteristics of the soil under charcoal kilns (an increase of 88%). Infiltration was enhanced and runoff volume was reduced significantly. The results showed runoff reductions of about 37 and 18%, and runoff coefficients ranging from 0.47-0.75 and 0.04-0.39, for simulations based on high (200 mm h-1) and moderate (100 mm h-1) rainfall events over the CSS and AFS areas, respectively. Other potential impacts of charcoal production on watershed hydrology are described. The results presented, together with watershed measurements, when available, are expected to enhance understanding of the hydrological responses of ecosystems to indiscriminate charcoal production and related activities in this region.
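For reference, the runoff coefficient reported above is simply the ratio of runoff depth to rainfall depth for an event; the sketch below shows the calculation with invented numbers, not the Kotokosu results.

```python
# Simple illustration of how a runoff coefficient is obtained from a
# simulated event: the ratio of runoff depth to rainfall depth.
def runoff_coefficient(rainfall_mm: float, runoff_mm: float) -> float:
    """Fraction of event rainfall that leaves the plot as surface runoff."""
    return runoff_mm / rainfall_mm

# A 1-hour, 200 mm/h event producing 120 mm of runoff (invented values):
rc = runoff_coefficient(rainfall_mm=200.0, runoff_mm=120.0)
print(round(rc, 2))   # 0.6, within the 0.47-0.75 range reported for such events
```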
Abstract:
Cerebral blood flow can be studied in a multislice mode with a recently proposed perfusion sequence using inversion of water spins as an endogenous tracer without magnetization transfer artifacts. The magnetization transfer insensitive labeling technique (TILT) has been used for mapping blood flow changes at a microvascular level under motor activation in a multislice mode. In TILT, perfusion mapping is achieved by subtraction of a perfusion-sensitized image from a control image. Perfusion weighting is accomplished by proximal blood labeling using two 90° radiofrequency excitation pulses. For control preparation the labeling pulses are modified such that they have no net effect on blood water magnetization. The percentage of blood flow change, as well as its spatial extent, has been studied in single-slice and multislice modes with varying delays between labeling and imaging. The average perfusion signal change due to activation was 36.9 ± 9.1% in the single-slice experiments and 38.1 ± 7.9% in the multislice experiments. The volume of activated brain areas amounted to 1.51 ± 0.95 cm3 in the contralateral primary motor (M1) area, 0.90 ± 0.72 cm3 in the ipsilateral M1 area, 1.27 ± 0.39 cm3 in the contralateral and 1.42 ± 0.75 cm3 in the ipsilateral premotor areas, and 0.71 ± 0.19 cm3 in the supplementary motor area.
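The subtraction-based perfusion contrast and percentage signal change described above can be summarized in a few lines; the sketch below uses synthetic image values, not TILT data.

```python
# Sketch of the subtraction-based perfusion contrast described for TILT:
# a perfusion-weighted map is the control image minus the labeled image,
# and activation is quantified as a percentage change of that difference.
# The array shapes and values are synthetic.
import numpy as np

rng = np.random.default_rng(4)
shape = (64, 64)
control = 1000 + 20 * rng.standard_normal(shape)              # control image
label_rest = control - 15 + 5 * rng.standard_normal(shape)    # labeled, rest
label_act = control - 20 + 5 * rng.standard_normal(shape)     # labeled, active

dm_rest = control - label_rest       # perfusion-weighted signal at rest
dm_act = control - label_act         # perfusion-weighted signal, activation

percent_change = 100 * (dm_act.mean() - dm_rest.mean()) / dm_rest.mean()
print(f"perfusion signal change: {percent_change:.1f}%")      # ~33% here
```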
Abstract:
Summary: The possibility of using short-term urine collection to determine urinary pseudouridine excretion in dairy cows
Abstract:
A novel laboratory technique is proposed to investigate wave-induced fluid flow on the mesoscopic scale as a mechanism for seismic attenuation in partially saturated rocks. This technique combines measurements of seismic attenuation in the frequency range from 1 to 100 Hz with measurements of transient fluid pressure in response to a step stress applied on top of the sample. We used a Berea sandstone sample partially saturated with water. The laboratory results suggest that wave-induced fluid flow on the mesoscopic scale is dominant in partially saturated samples. A 3-D numerical model representing the sample was used to verify the experimental results. Biot's equations of consolidation were solved with the finite-element method. Wave-induced fluid flow on the mesoscopic scale was the only attenuation mechanism accounted for in the numerical solution. The numerically calculated transient fluid pressure reproduced the laboratory data. Moreover, the numerically calculated attenuation, superposed on the frequency-independent matrix anelasticity, reproduced the attenuation measured in the laboratory on the partially saturated sample. This experimental-numerical fit demonstrates that wave-induced fluid flow on the mesoscopic scale and matrix anelasticity are the dominant mechanisms for seismic attenuation in partially saturated Berea sandstone.
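The abstract does not state how attenuation is extracted from the forced-oscillation measurements; the standard sub-resonance relation between attenuation, the complex modulus and the stress-strain phase lag, included here only as background, is:

```latex
% Standard sub-resonance definition of attenuation from the complex
% modulus M(omega) (stress/strain ratio) and the stress-strain phase lag.
\[
  \frac{1}{Q(\omega)} \;=\; \frac{\operatorname{Im} M(\omega)}
                                 {\operatorname{Re} M(\omega)}
  \;=\; \tan\varphi(\omega),
\]
where $\varphi(\omega)$ is the phase angle by which the strain lags the
applied sinusoidal stress at angular frequency $\omega$.
```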