Abstract:
This study aimed to use a plantar pressure insole for estimating the three-dimensional ground reaction force (GRF) as well as the frictional torque (T(F)) during walking. Eleven subjects, six healthy and five with ankle disease, participated in the study, wearing pressure insoles during several walking trials on a force-plate. The plantar pressure distribution was analyzed, and 10 principal components of 24 regional pressure values, together with the stance time percentage (STP), were considered for GRF and T(F) estimation. Both linear and non-linear approximators were used for estimating the GRF and T(F), based on two learning strategies using intra-subject and inter-subject data. The RMS error and the correlation coefficient between the approximators and the actual patterns obtained from the force-plate were calculated. Our results showed better performance for the non-linear approximation, especially when the STP was included as an input. The smallest errors were observed for the vertical force (4%) and the anterior-posterior force (7.3%), while the medial-lateral force (11.3%) and the frictional torque (14.7%) had higher errors. The results obtained for the patients showed higher errors; nevertheless, when data from the same patient were used for learning, the results improved and, in general, only slight differences from the healthy subjects were observed. In conclusion, this study showed that an ambulatory pressure insole with data normalization, an optimal choice of inputs and a well-trained nonlinear mapping function can efficiently estimate the three-dimensional ground reaction force and frictional torque over consecutive gait cycles without requiring a force-plate.
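The abstract does not specify the implementation; purely as a hedged illustration, the pipeline it describes (10 principal components of 24 regional pressure values, plus STP, fed to a nonlinear approximator) could be sketched in Python with scikit-learn. The array names, shapes, and network size below are assumptions, not the authors' setup.

```python
# Hypothetical sketch of the estimation pipeline described above.
# Assumes: P is an (n_samples, 24) array of regional pressure values,
# stp an (n_samples,) stance-time-percentage vector, and grf an
# (n_samples, 4) array of force-plate targets (Fx, Fy, Fz, T_F).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_grf_estimator(P, stp, grf):
    # Reduce the 24 regional pressures to 10 principal components.
    pca = PCA(n_components=10).fit(P)
    X = np.column_stack([pca.transform(P), stp])  # add STP as an extra input
    # A small MLP stands in for the paper's nonlinear mapping function.
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000))
    model.fit(X, grf)
    return pca, model

def rms_error(y_true, y_pred):
    # Per-component RMS error between estimates and force-plate patterns.
    return np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
```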
Abstract:
Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider to be particularly suitable for local-scale studies characterized by high-resolution and high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, allow a wide variety of data and constraints of differing resolution and hardness to be accounted for, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way that the larger-scale deterministic information resolved by the hydrogeophysical data is accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
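The abstract does not detail the algorithm; as an illustration of the Monte-Carlo-type conditional simulation family only, a minimal annealing-style loop might condition a porosity field to hard log data while driving it toward a soft larger-scale constraint. Every function, parameter, and tolerance here is an assumption standing in for the paper's actual objective.

```python
# Minimal, hypothetical sketch of Monte-Carlo-type conditional
# simulation: perturb a porosity field, keep hard data fixed, and
# accept perturbations via a Metropolis rule on a misfit functional.
import numpy as np

rng = np.random.default_rng(0)

def misfit(field, target_mean):
    # Stand-in objective: mismatch of the larger-scale (mean) structure
    # with the deterministic information resolved by the geophysical data.
    return (field.mean() - target_mean) ** 2

def conditional_simulation(shape, hard_idx, hard_vals, target_mean,
                           n_iter=10_000, temperature=1e-3):
    field = rng.normal(0.25, 0.05, size=shape)  # initial porosity realization
    field.flat[hard_idx] = hard_vals            # honor porosity-log data exactly
    hard = set(hard_idx)
    cost = misfit(field, target_mean)
    for _ in range(n_iter):
        i = int(rng.integers(field.size))
        if i in hard:                           # never perturb conditioning data
            continue
        old = field.flat[i]
        field.flat[i] = rng.normal(0.25, 0.05)  # propose a local perturbation
        new = misfit(field, target_mean)
        # Metropolis rule: accept improvements, occasionally accept worsenings.
        if new > cost and rng.random() > np.exp((cost - new) / temperature):
            field.flat[i] = old                 # reject the move
        else:
            cost = new                          # accept the move
    return field
```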
Abstract:
Multiscale methods, such as the Multiscale Finite Volume (MsFV) method, couple local fine-scale problems through a global coarse problem. Although these techniques are usually employed for problems in which the fine-scale processes are described by Darcy's law, they can also be applied to pore-scale simulations and used as a mathematical framework for hybrid methods that couple the Darcy and pore scales. In this work, we consider a pore-scale description of fine-scale processes. The Navier-Stokes equations are numerically solved in the pore geometry to compute the velocity field and obtain generalized permeabilities. In the case of two-phase flow, the dynamics of the phase interface is described by the Volume of Fluid method with the continuum surface force model. The MsFV method is employed to construct an algorithm that couples a Darcy macro-scale description with a pore-scale description at the fine scale. The hybrid simulation results presented are in good agreement with the fine-scale reference solutions. As the reconstruction of the fine-scale details can be done adaptively, the presented method offers a flexible framework for hybrid modeling.
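The averaging step that yields the generalized permeabilities is not detailed in the abstract; as an illustrative sketch only (the notation and the exact closure are ours, not the paper's), the Darcy-like coarse relation obtained from the pore-scale velocity field could be written as:

```latex
% Volume average of the pore-scale velocity over a control volume V,
% and the Darcy-like closure defining a generalized permeability
% (notation ours; u = pore-scale velocity, \bar{p} = coarse pressure):
\bar{\mathbf{u}} = \frac{1}{|V|} \int_{V} \mathbf{u}\,\mathrm{d}V,
\qquad
\bar{\mathbf{u}} = -\frac{\mathbf{K}^{\mathrm{eff}}}{\mu}\,\nabla\bar{p}.
```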
Abstract:
The flow of two immiscible fluids through a porous medium depends on the complex interplay between gravity, capillarity, and viscous forces. The interaction between these forces and the geometry of the medium gives rise to a variety of complex flow regimes that are difficult to describe using continuum models. Although a number of pore-scale models have been employed, a careful investigation of the macroscopic effects of pore-scale processes requires methods based on conservation principles in order to reduce the number of modeling assumptions. In this work we perform direct numerical simulations of drainage by solving the Navier-Stokes equations in the pore space and employing the Volume Of Fluid (VOF) method to track the evolution of the fluid-fluid interface. After demonstrating that the method is able to deal with large viscosity contrasts and model the transition from stable flow to viscous fingering, we focus on the macroscopic capillary pressure and compare different definitions of this quantity under quasi-static and dynamic conditions. We show that the difference between the intrinsic phase-average pressures, which is commonly used as the definition of the Darcy-scale capillary pressure, is subject to several limitations and is not accurate in the presence of viscous effects or trapping. In contrast, a definition based on the variation of the total surface energy provides an accurate estimate of the macroscopic capillary pressure. This definition, which links the capillary pressure to its physical origin, allows a better separation of viscous effects and does not depend on the presence of trapped fluid clusters.
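For concreteness, the two definitions being compared can be stated schematically as follows; the notation is ours, not the paper's, with $n$ and $w$ denoting the non-wetting and wetting phases:

```latex
% Difference of intrinsic phase-averaged pressures (the common
% Darcy-scale definition discussed above):
p_c^{\mathrm{avg}} = \langle p_n \rangle^{n} - \langle p_w \rangle^{w}.
% Energy-based definition: variation of the total surface energy E_s
% with the volume V_n of the invading (non-wetting) phase, where
% A_{\alpha\beta} and \sigma_{\alpha\beta} are interfacial areas and tensions:
p_c^{\mathrm{energy}} = \frac{\mathrm{d}E_s}{\mathrm{d}V_n},
\qquad
E_s = \sum_{\alpha\beta} \sigma_{\alpha\beta}\, A_{\alpha\beta}.
```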
Abstract:
The velocity of a liquid slug falling in a capillary tube is lower than predicted for Poiseuille flow due to the presence of menisci, whose shapes are determined by the complex interplay of capillary, viscous, and gravitational forces. Due to the presence of menisci, a capillary pressure proportional to the surface curvature acts on the slug, and streamlines are bent close to the interface, resulting in enhanced viscous dissipation at the wedges. To determine the origin of the drag-force increase relative to Poiseuille flow, we compute the force resultant acting on the slug by integrating the Navier-Stokes equations over the liquid volume. Invoking relationships from differential geometry, we demonstrate that the additional drag is due to viscous forces only and that no capillary drag of hydrodynamic origin exists (i.e., due to hydrodynamic deformation of the interface). Requiring that the force resultant be zero, we derive scaling laws for the steady velocity in the limit of small capillary numbers by estimating the leading-order viscous dissipation in the different regions of the slug (i.e., the unperturbed Poiseuille-like bulk, the static menisci close to the tube axis and the dynamic regions close to the contact lines). Considering both partial and complete wetting, we find that the relationship between dimensionless velocity and weight is, in general, nonlinear. Whereas the relationship obtained for complete-wetting conditions agrees with the experimental data of Bico and Quéré [J. Bico and D. Quéré, J. Colloid Interface Sci. 243, 262 (2001)], the scaling law under partial-wetting conditions is validated by numerical simulations performed with the Volume of Fluid method. The simulated steady velocities agree with the behavior predicted by the theoretical scaling laws both in the presence and in the absence of static contact-angle hysteresis. The numerical simulations suggest that wedge-flow dissipation alone cannot account for the entire additional drag and that the non-Poiseuille dissipation in the static menisci (not considered in previous studies) has to be taken into account for large contact angles.
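Schematically, and purely as an illustration in our own notation (the paper's actual scaling laws are not reproduced here), the steady state expresses a balance between the slug's weight and the viscous contributions of the different regions:

```latex
% Zero force resultant at steady state (notation ours): weight balanced
% by Poiseuille-like bulk drag plus the meniscus/contact-line terms,
\rho g\, V_{\mathrm{slug}} = F_{\mathrm{bulk}}(U) + F_{\mathrm{menisci}}(\mathrm{Ca}),
\qquad
\mathrm{Ca} = \frac{\mu U}{\sigma} \ll 1,
% where the meniscus contribution is nonlinear in Ca, so the
% dimensionless velocity is in general a nonlinear function of weight.
```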
Abstract:
There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes an important seismic attenuation mechanism in porous rocks. As a consequence, centimetre-scale perturbations of the rock physical properties should be taken into account for seismic modelling whenever detailed and accurate responses of specific target structures are desired, which is, however, computationally prohibitive. A convenient way to circumvent this problem is to use an upscaling procedure to replace each of the heterogeneous porous media composing the geological model by a corresponding equivalent visco-elastic solid and to solve the visco-elastic equations of motion for the inferred equivalent model. While the overall qualitative validity of this procedure is well established, there are as yet no quantitative analyses regarding the equivalence of the seismograms resulting from the original poro-elastic and the corresponding upscaled visco-elastic models. To address this issue, we compare poro-elastic and visco-elastic solutions for a range of marine-type models of increasing complexity. We find that, despite the identical dispersion and attenuation behaviour of the heterogeneous poro-elastic and the equivalent visco-elastic media, the seismograms may differ substantially due to differing boundary conditions, for which additional options exist in the poro-elastic case. In particular, we observe that at the fluid/porous-solid interface, the poro- and visco-elastic seismograms agree for closed-pore boundary conditions, but differ significantly for open-pore boundary conditions. This is an important result which has potentially far-reaching implications for wave-equation-based algorithms in exploration geophysics involving fluid/porous-solid interfaces, such as, for example, wavefield decomposition.
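For reference, the open- and closed-pore conditions at a fluid/porous-solid interface can be stated in their standard textbook form; the notation below is ours and is not an excerpt from the paper:

```latex
% Boundary conditions at a fluid/porous-solid interface (notation ours;
% p = pore-fluid pressure, p_f = pressure in the free fluid,
% \mathbf{w} = relative fluid-solid displacement, \mathbf{n} = normal):
p = p_f \quad \text{(open pores: fluid may flow across the interface)},
\qquad
\mathbf{w}\cdot\mathbf{n} = 0 \quad \text{(closed pores: no relative flow across the interface)}.
```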
Abstract:
Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for this task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. This is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANN is a powerful tool, but it also suffers from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, that can deal with small empirical datasets, and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping in a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
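As a hedged illustration only (the paper's own formulation is given in its Section 2), epsilon-SVR with an RBF kernel for two-dimensional spatial mapping might look as follows in Python with scikit-learn; the array names, grid helper, and hyperparameter values are assumptions.

```python
# Hypothetical sketch: epsilon-SVR with an RBF kernel for 2-D spatial
# mapping. Assumes xy is an (n, 2) array of sample coordinates and z an
# (n,) array of measured values (e.g. Cs137 activity).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fit_svr_map(xy, z, C=100.0, epsilon=0.1, gamma="scale"):
    # Input scaling matters for RBF kernels; C and epsilon control the
    # trade-off between smoothness and tolerance to noise/outliers.
    model = make_pipeline(StandardScaler(),
                          SVR(kernel="rbf", C=C, epsilon=epsilon, gamma=gamma))
    model.fit(xy, z)
    return model

def predict_on_grid(model, x_range, y_range, n=200):
    # Evaluate the fitted model on a regular grid for map visualization.
    gx, gy = np.meshgrid(np.linspace(*x_range, n), np.linspace(*y_range, n))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    return model.predict(grid).reshape(n, n)
```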
Abstract:
We present a spatiotemporal adaptive multiscale algorithm based on the Multiscale Finite Volume method. The algorithm offers a very efficient framework for dealing with multiphysics problems and for coupling regions with different spatial resolutions. We employ the method to simulate two-phase flow through porous media. At the fine scale, we consider a pore-scale description of the flow based on the Volume Of Fluid method. In order to construct a global problem that describes the coarse-scale behavior, the equations are averaged numerically with respect to auxiliary control volumes, and a Darcy-like coarse-scale model is obtained. The spatial adaptivity is based on the idea that a fine-scale description is required only in the front region, whereas the resolution can be coarsened elsewhere. Temporal adaptivity relies on the fact that the fine-scale and the coarse-scale problems can be solved with different temporal resolutions (longer time steps can be used at the coarse scale). By simulating drainage under unstable flow conditions, we show that the method is able to capture the coarse-scale behavior outside the front region and to reproduce complex fluid patterns in the front region.
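The adaptivity criterion is not spelled out in the abstract; as a hedged sketch of one plausible front-detection rule (flag cells where the saturation varies sharply, coarsen elsewhere), the spatial adaptivity could be realized as follows. The threshold value and array layout are assumptions.

```python
# Hypothetical front-detection rule for spatial adaptivity: keep the
# fine-scale (pore-scale) description where the saturation gradient is
# steep, and use the coarse Darcy-like model elsewhere.
import numpy as np

def flag_fine_cells(saturation, threshold=0.05):
    """saturation: 2-D array of coarse-cell saturations in [0, 1]."""
    gy, gx = np.gradient(saturation)
    grad_mag = np.hypot(gx, gy)
    fine = grad_mag > threshold   # True -> front region, fine-scale solve
    return fine                   # False -> coarse-scale solve
```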
Abstract:
The main goal of this paper is to propose a convergent finite volume method for a reaction–diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we present two-dimensional numerical examples which exhibit the formation of nonuniform spatial patterns. From the simulations it is also found that the experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
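As a notational illustration only (the scheme's precise form is in the paper; the symbols below are ours), a standard two-point finite volume flux with a positivity-preserving treatment of the coefficients can be written as:

```latex
% Two-point finite volume flux across the edge \sigma between
% neighboring control volumes K and L (notation ours): |\sigma| is the
% edge measure and d_{KL} the distance between cell centers,
F_{K,\sigma}(u) = \tau_\sigma\,(u_K - u_L),
\qquad
\tau_\sigma = \frac{|\sigma|}{d_{KL}},
% with the cross-diffusion coefficients evaluated through a nonlinear,
% positivity-preserving approximation, e.g. via positive parts
% s^{+} = \max(s, 0) of the discrete unknowns.
```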
Abstract:
Intensity Modulated RadioTherapy (IMRT) is a treatment technique that uses beams with modulated fluence. IMRT is now widespread in industrialized countries, owing to its improved dose conformation around the target volume and its ability to lower doses to organs at risk in complex clinical cases. One way to carry out beam modulation is to sum smaller beams (beamlets) with the same incidence. This technique is called step-and-shoot IMRT. In a clinical context, it is necessary to verify treatment plans before the first irradiation; plan verification is still an unresolved issue for this technique.
An independent monitor unit calculation (the monitor units being representative of the weight of each beamlet) cannot be performed for step-and-shoot IMRT, because the beamlet weights are not known a priori but are calculated by inverse planning. Besides, treatment plan verification by comparison with measured data is time consuming and is performed in a simple geometry, usually in a cubic water phantom with all machine angles set to zero. In this work, an independent method for monitor unit calculation for step-and-shoot IMRT is described. This method is based on the Monte Carlo code EGSnrc/BEAMnrc. The Monte Carlo model of the head of the linear accelerator is validated by comparison of simulated and measured dose distributions over a large range of situations. The beamlets of an IMRT treatment plan are calculated individually by Monte Carlo, in the exact geometry of the treatment. Then, the dose distributions of the beamlets are converted into absorbed dose to water per monitor unit. The dose of the whole treatment in each volume element (voxel) can be expressed through a linear matrix equation of the monitor units and the dose per monitor unit of every beamlet. This equation is solved by a Non-Negative Least Squares fit (NNLS) algorithm. However, not every voxel inside the patient volume can be used to solve this equation, because of computational limitations. Several ways of selecting voxels have been tested, and the best choice consists of using the voxels inside the Planning Target Volume (PTV). The method presented in this work was tested on eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to global dose distributions that are clinically equivalent to those of the treatment planning system. Thus, this independent monitor unit calculation method for step-and-shoot IMRT is validated and can be used in clinical routine. It would be possible to consider applying a similar method to other treatment modalities, such as, for instance, tomotherapy or volumetric modulated arc therapy.
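As a hedged sketch only (the array names and shapes are assumptions), the linear system described above, total dose equals the dose-per-MU matrix times the monitor units, can be solved with a non-negative least squares fit in Python:

```python
# Hypothetical sketch of the monitor-unit recovery step described above.
# D_per_mu: (n_voxels, n_beamlets) Monte Carlo dose per monitor unit for
#           each beamlet, restricted to voxels inside the PTV.
# d_target: (n_voxels,) planned total dose in those voxels.
import numpy as np
from scipy.optimize import nnls

def solve_monitor_units(D_per_mu, d_target):
    # Solve min ||D_per_mu @ mu - d_target||_2 subject to mu >= 0,
    # i.e. recover non-negative monitor units for every beamlet.
    mu, residual = nnls(D_per_mu, d_target)
    return mu, residual

# Usage sketch:
#   mu, res = solve_monitor_units(D_ptv, d_ptv)
#   total_dose = D_full @ mu   # recombine over the whole patient volume
```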
Abstract:
Geophysical tomography captures the spatial distribution of the underlying geophysical property at a relatively high resolution, but the tomographic images tend to be blurred representations of reality and generally fail to reproduce sharp interfaces. Such models may cause significant bias when taken as a basis for predictive flow and transport modeling and are unsuitable for uncertainty assessment. We present a methodology in which tomograms are used to condition multiple-point statistics (MPS) simulations. A large set of geologically reasonable facies realizations and their corresponding synthetically calculated cross-hole radar tomograms are used as a training image. The training image is scanned with a direct sampling algorithm for patterns in the conditioning tomogram, while accounting for the spatially varying resolution of the tomograms. In a post-processing step, only those conditional simulations that predict the radar traveltimes within the expected data error levels are accepted. The methodology is demonstrated on a two-facies example featuring channels and on an aquifer analog of alluvial sedimentary structures with five facies. For both cases, the MPS simulations exhibit the sharp interfaces and the geological patterns found in the training image. Compared to unconditioned MPS simulations, the uncertainty in transport predictions is markedly decreased for simulations conditioned to tomograms. As an improvement over other approaches relying on classical smoothness-constrained geophysical tomography, the proposed method allows for: (1) reproduction of sharp interfaces, (2) incorporation of realistic geological constraints and (3) generation of multiple realizations, enabling uncertainty assessment.
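The post-processing acceptance step lends itself to a short hedged sketch; all names, the forward model, and the chi-squared error criterion below are assumptions standing in for the paper's actual acceptance test.

```python
# Hypothetical sketch of the post-processing acceptance step: keep only
# those conditional realizations whose forward-simulated radar
# traveltimes match the observations within the expected error level.
import numpy as np

def accept_realizations(realizations, forward_model, t_obs, sigma,
                        chi2_tol=1.0):
    """realizations: iterable of facies fields;
    forward_model(field) -> (m,) simulated traveltimes;
    t_obs: (m,) observed traveltimes; sigma: expected data error (std)."""
    accepted = []
    for field in realizations:
        t_sim = forward_model(field)
        # Normalized chi-squared misfit; ~1 means fit within data errors.
        chi2 = np.mean(((t_sim - t_obs) / sigma) ** 2)
        if chi2 <= chi2_tol:
            accepted.append(field)
    return accepted
```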