958 results for stochastic numerical methods


Relevance: 30.00%

Abstract:

AIM: To investigate the acute effects of stochastic resonance whole body vibration (SR-WBV) training in order to identify possible explanations for preventive effects against musculoskeletal disorders. METHODS: Twenty-three healthy female students participated in this quasi-experimental pilot study. Acute physiological and psychological effects of SR-WBV training were examined using electromyography of the descending trapezius (TD) muscle, heart rate variability (HRV), different skin parameters (temperature, redness and blood flow) and self-report questionnaires. All subjects performed a sham SR-WBV training at low intensity (2 Hz with noise level 0) and a verum SR-WBV training at higher intensity (6 Hz with noise level 4). They were tested before, during and after the training. Conclusions were drawn on the basis of analysis of variance. RESULTS: Twenty-three healthy female students participated in this study (age = 22.4 ± 2.1 years; body mass index = 21.6 ± 2.2 kg/m²). Muscular activity of the TD and energy expenditure rose during verum SR-WBV compared to baseline and sham SR-WBV (all P < 0.05). Muscular relaxation after verum SR-WBV was higher than at baseline and after sham SR-WBV (all P < 0.05). During verum SR-WBV the levels of HRV were similar to those observed during sham SR-WBV. The same applies to most of the skin parameters, while microcirculation of the skin of the middle back was higher during verum compared to sham SR-WBV (P < 0.001). Skin redness showed significant changes over the three measurement points only in the middle back area (P = 0.022), with a significant rise from baseline to verum SR-WBV (0.86 ± 0.25 perfusion units; P = 0.008). The self-reported chronic pain grade indicators of pain, stiffness, well-being, and muscle relaxation showed a mixed pattern across conditions. Muscle and joint stiffness (P = 0.018) and muscular relaxation (P < 0.001) changed significantly from baseline across the different SR-WBV conditions. Moreover, muscle relaxation after verum SR-WBV was higher than after sham SR-WBV (P < 0.05). CONCLUSION: Verum SR-WBV stimulated musculoskeletal activity in young healthy individuals while cardiovascular activation remained low. Training of musculoskeletal capacity and an immediate increase in musculoskeletal relaxation are potential mediators of pain reduction in preventive trials.

Relevance: 30.00%

Abstract:

Calcium levels in spines play a significant role in determining the sign and magnitude of synaptic plasticity. The magnitude of calcium influx into spines depends strongly on influx through N-methyl-D-aspartate (NMDA) receptors, and therefore on the number of postsynaptic NMDA receptors in each spine. We have previously calculated how the number of postsynaptic NMDA receptors determines the mean and variance of calcium transients in the postsynaptic density, and how this alters the shape of plasticity curves. However, the number of postsynaptic NMDA receptors in the postsynaptic density is not well known. Anatomical methods for estimating the number of NMDA receptors produce estimates that are very different from those produced by physiological techniques. The physiological techniques are based on the statistics of synaptic transmission, and it is difficult to estimate their precision experimentally. In this paper we use stochastic simulations to test the validity of a physiological estimation technique based on failure analysis. We find that the method is likely to underestimate the number of postsynaptic NMDA receptors, explain the source of the error, and re-derive a more precise estimation technique. We also show that the original failure analysis, as well as our improved formulas, is not robust to small estimation errors in key parameters.
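The failure-analysis logic can be made concrete with a toy binomial model (a minimal sketch, not the authors' simulation; the receptor count, opening probability and variability model below are illustrative assumptions): a "failure" is a trial on which no receptor opens, so P(fail) = (1 − p)^N and N can be recovered from the observed failure rate, while trial-to-trial variability in p biases the estimate downward.

```python
import numpy as np

rng = np.random.default_rng(0)

N_true = 10        # hypothetical number of postsynaptic NMDA receptors
p_open = 0.3       # assumed mean opening probability per receptor per trial
n_trials = 20000

# --- failure analysis: P(failure) = (1 - p)^N, so N = log(P_fail) / log(1 - p) ---
def estimate_N(openings, p):
    p_fail = np.mean(openings == 0)
    return np.log(p_fail) / np.log(1.0 - p)

# Case 1: homogeneous opening probability -> the estimator is consistent
openings = rng.binomial(N_true, p_open, size=n_trials)
print("homogeneous p:   N_hat =", round(estimate_N(openings, p_open), 1))

# Case 2: trial-to-trial variability in p (e.g., a variable glutamate transient).
# Plugging the mean p into the formula biases N downward, because
# E[(1-p)^N] > (1 - E[p])^N by Jensen's inequality.
p_trial = np.clip(rng.normal(p_open, 0.15, size=n_trials), 0.01, 0.99)
openings = rng.binomial(N_true, p_trial)
print("heterogeneous p: N_hat =", round(estimate_N(openings, p_open), 1))
```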

Relevance: 30.00%

Abstract:

Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of bone's hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single element tests. An FE simulation of microindentation in lamellar bone was then performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape. The main advantage is that, for material identification, the shape of the yield surface does not have to be anticipated; a minimization yields the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study was performed to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips. The constitutive model was adapted to the quadric criterion, and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves. Their influence on indentation modulus, hardness, their ratio, as well as the elastic to total work ratio, was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the identified material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, allowing a straightforward interpretation, complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties. Indentations were performed in dry human and ovine bone in axial and transverse directions, and their topography was measured by AFM. Statistical shape modeling of the residual imprint allowed a mean shape to be defined and the variability to be described with 21 principal components related to imprint depth, surface curvature and roughness. The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations with variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism must be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption of shear strength being governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces like cement lines. It was possible to obtain a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone, its evolution with age, disease and treatment, and its failure mechanisms on several length scales will help prevent fractures in the elderly in the future.
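For illustration, the quadric formulation mentioned above can be written in Voigt notation as f(σ) = σᵀAσ + bᵀσ − 1, with convexity equivalent to positive semidefiniteness of A. The sketch below uses purely illustrative coefficients, not the thesis's identified values:

```python
import numpy as np

# A general quadric yield criterion in Voigt notation:
#   f(sigma) = sigma^T A sigma + b^T sigma - 1   (yield when f = 0)
# The surface is convex iff A is positive semidefinite, which can be
# checked (or enforced) directly during material identification.

def is_convex_quadric(A, tol=1e-10):
    eigvals = np.linalg.eigvalsh(0.5 * (A + A.T))  # symmetrize first
    return np.all(eigvals >= -tol)

def yield_function(sigma, A, b):
    return sigma @ A @ sigma + b @ sigma - 1.0

# Toy example: an ellipsoidal (Hill-like) surface with tension/compression
# asymmetry introduced through the linear term b. Values are illustrative only.
A = np.diag([1.0, 1.0, 0.6, 2.5, 2.5, 2.0]) * 1e-4   # MPa^-2
b = np.array([2e-3, 2e-3, 1e-3, 0.0, 0.0, 0.0])      # MPa^-1

sigma = np.array([50.0, 0.0, 0.0, 0.0, 0.0, 0.0])    # uniaxial stress state, MPa
print("convex:", is_convex_quadric(A))
print("f(sigma) =", yield_function(sigma, A, b))     # < 0 -> elastic
```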

Relevance: 30.00%

Abstract:

Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging soft tissue. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented that takes both the particle- and wave-like properties of X-rays into account: a split approach combines a Monte Carlo (MC) based sample part with a wave-optics based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations, demonstrating that the physical processes relevant for a deeper understanding of scattering in phase-sensitive imaging are modelled sufficiently accurately.
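The abstract does not spell out the numerical ingredients, but a standard building block for the wave-optics propagation part of such a framework is FFT-based free-space propagation. The following sketch (angular spectrum method, with an assumed toy phase object) illustrates how propagation-based edge enhancement emerges:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2-D wavefield a distance z through free space
    using the angular spectrum method (FFT-based)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # argument clipped at 0 to guard against rounding; at X-ray wavelengths
    # these spatial frequencies are far from the evanescent regime
    kz = k * np.sqrt(np.maximum(0.0, 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Toy example: a weakly phase-shifting object illuminated by a plane wave
wavelength = 0.5e-10   # roughly 25 keV X-rays
dx = 1e-6              # 1 micron pixels
x = np.arange(-256, 256) * dx
X, Y = np.meshgrid(x, x)
phase = -0.05 * np.exp(-(X**2 + Y**2) / (2 * (20 * dx)**2))  # hypothetical object
field = np.exp(1j * phase)

intensity = np.abs(angular_spectrum_propagate(field, wavelength, dx, z=1.0))**2
print("propagation-induced contrast:", intensity.max() - intensity.min())
```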

Relevance: 30.00%

Abstract:

OBJECTIVES: Stochastic resonance whole body vibration (SR-WBV) may reduce and prevent musculoskeletal problems (MSP). The aim of this study was to evaluate how the activities of the lumbar erector spinae (ES) and of the ascending and descending trapezius (TA, TD) change in an upright standing position during SR-WBV. METHODS: Nineteen female subjects completed 12 series of 10 seconds of SR-WBV at six different frequencies (2, 4, 6, 8, 10, 12 Hz) and two types of "noise" application. An assessment at rest was performed beforehand. Muscle activities were measured with EMG and normalized to maximum voluntary contraction (MVC%). For statistical testing, a three-factorial analysis of variance (ANOVA) was applied. RESULTS: The maximum activity of the respective muscles was 14.5 MVC% for the ES, 4.6 MVC% for the TA (both at 12 Hz with "noise"), and 7.4 MVC% for the TD (10 Hz without "noise"). Furthermore, the activity of all muscles differed significantly at 6 Hz and above (p ≤ 0.047) compared to the situation at rest. No significant differences were found between SR-WBV with and without "noise". CONCLUSIONS: In general, muscle activity during SR-WBV is reasonably low and comparable to core strength stability exercises, sensorimotor training and "abdominal hollowing" in water. SR-WBV may be a therapeutic option for the relief of MSP.

Relevance: 30.00%

Abstract:

The numerical simulation of the magnetic properties of extended three-dimensional networks containing M(II) ions with an S = 5/2 ground-state spin has been carried out within the framework of the isotropic Heisenberg model. Analytical expressions fitting the numerical simulations for the primitive cubic, diamond and (10,3) cubic networks have been derived. With these empirical formulas in hand, the interaction between the magnetic ions can be extracted from experimental data for these networks. In the case of the primitive cubic network, the expressions are compared directly with those from the high-temperature expansions of the partition function. A fit of the experimental data for three complexes, namely [N(CH3)4][Mn(N3)3] 1, [Mn(CN4)]n 2, and [FeII(bipy)3][MnII2(ox)3] 3, has been carried out. The best fits were obtained with the following parameters: J = −3.5 cm⁻¹, g = 2.01 (1); J = −8.3 cm⁻¹, g = 1.95 (2); and J = −2.0 cm⁻¹, g = 1.95 (3).
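A sketch of the fitting workflow is given below. It is not the paper's actual expressions: the one-term correction to the Curie law and its coefficient c1 are hypothetical stand-ins for the derived empirical formulas, and the "experimental" data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 0.695  # Boltzmann constant in cm^-1 / K
S = 2.5     # S = 5/2

def chi_model(T, J, g, c1=6.0):
    """Hypothetical one-term high-temperature-style correction to the Curie
    law for an S = 5/2 network; the real fitting expressions contain more
    terms with tabulated coefficients."""
    curie = g**2 * S * (S + 1) / (8.0 * T)    # Curie term, emu K mol^-1 units
    return curie / (1.0 - c1 * J / (kB * T))  # antiferromagnetic J < 0 lowers chi

# T in K, chi in emu mol^-1 (synthetic stand-in for experimental data)
T = np.linspace(50, 300, 40)
chi_exp = chi_model(T, J=-3.5, g=2.01) * (1 + 0.01 * np.random.default_rng(1).normal(size=T.size))

(J_fit, g_fit), _ = curve_fit(lambda T, J, g: chi_model(T, J, g), T, chi_exp, p0=(-1.0, 2.0))
print(f"J = {J_fit:.2f} cm^-1, g = {g_fit:.3f}")
```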

Relevance: 30.00%

Abstract:

In this paper we develop an adaptive procedure for the numerical solution of general semilinear elliptic problems with possible singular perturbations. Our approach combines prediction-type adaptive Newton methods with a linear adaptive finite element discretization (based on a robust a posteriori error analysis), thereby leading to a fully adaptive Newton–Galerkin scheme. Numerical experiments underline the robustness and reliability of the proposed approach for various examples.
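As a rough illustration of the interplay in such a scheme, the 1-D sketch below uses finite differences and uniform refinement as cheap stand-ins for the paper's adaptive FEM with a posteriori-driven local refinement, and stops each Newton iteration once its increment falls below a crude discretization-error estimate, so that neither error source is over-resolved:

```python
import numpy as np

def newton_on_mesh(n, u0, disc_err):
    """Newton for  -u'' + u^3 = 1,  u(0) = u(1) = 0  on n interior points."""
    h = 1.0 / (n + 1)
    A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = u0.copy()
    while True:
        F = A @ u + u**3 - 1.0
        du = np.linalg.solve(A + np.diag(3*u**2), -F)
        u += du
        if np.linalg.norm(du, np.inf) < 0.1 * disc_err:  # balance the two errors
            return u

n, u = 7, np.zeros(7)
for level in range(5):
    disc_err = (1.0 / (n + 1))**2          # crude O(h^2) discretization estimate
    u = newton_on_mesh(n, u, disc_err)
    print(f"n = {n:4d}, u_mid = {u[n // 2]:.6f}")
    # refine and interpolate the current iterate onto the next (uniform) mesh
    n2 = 2 * n + 1
    x = np.linspace(0, 1, n + 2)[1:-1]
    x2 = np.linspace(0, 1, n2 + 2)[1:-1]
    u = np.interp(x2, x, u)
    n = n2
```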

Relevance: 30.00%

Abstract:

This article centers on the computational performance of the continuous and discontinuous Galerkin time stepping schemes for general first-order initial value problems in R^n with continuous nonlinearities. We briefly review a recent existence result for discrete solutions from [6], and provide a numerical comparison of the two time discretization methods.
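For concreteness, the lowest-order members of the two families applied to y' = f(t, y) reduce (up to quadrature) to implicit Euler for dG(0) and the implicit midpoint rule for cG(1). A minimal comparison on the logistic equation, with the per-step nonlinear systems solved by fixed-point iteration:

```python
import numpy as np

def f(y): return y * (1.0 - y)          # logistic right-hand side

def step_dG0(y, k):                      # dG(0): y1 = y0 + k f(y1)
    y1 = y
    for _ in range(50): y1 = y + k * f(y1)
    return y1

def step_cG1(y, k):                      # cG(1): y1 = y0 + k f((y0 + y1)/2)
    y1 = y
    for _ in range(50): y1 = y + k * f(0.5 * (y + y1))
    return y1

T = 5.0
y_exact = 1.0 / (1.0 + 9.0 * np.exp(-T))   # exact solution for y(0) = 0.1
for n in (50, 100, 200):
    k, y_dg, y_cg = T / n, 0.1, 0.1
    for _ in range(n):
        y_dg, y_cg = step_dG0(y_dg, k), step_cG1(y_cg, k)
    print(f"k = {k:.3f}: dG(0) error {abs(y_dg - y_exact):.2e} (O(k)), "
          f"cG(1) error {abs(y_cg - y_exact):.2e} (O(k^2))")
```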

Relevance: 30.00%

Abstract:

Introduction: According to the ecological view, coordination is established by virtue of the social context. Affordances, thought of as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but depend on the constructive perception of the subjects involved. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the person on the ball and to indicate where they would pass the ball in each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group staying in ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being controlled safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed by specifying general linear models for binomially distributed data (Mplus); maximum likelihood with non-normality-robust standard errors was chosen to estimate parameters. Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. Generally, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (β_self-efficacy = 0.133, p < .001, OR = 1.142; β_group-efficacy I = 0.128, p < .001, OR = 1.137; β_offensive convenience = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (β_group-efficacy I = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (β_self-efficacy = 0.135, p < .01; β_offensive convenience = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playfield zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and offers insight into how dimensions of the psychological landscape afford passing behavior. For members of a team, this psychological landscape is constituted not only by probabilities that refer to the potential and consequences of individual behavior, but also by those of the group system of which the individuals are part. Hence, in regulating action decisions in group settings, informers extend to aspects referring to the group level.
References: Lewin, K. (1951). In D. Cartwright (Ed.), Field theory in social sciences: Selected theoretical papers by Kurt Lewin. New York: Harper & Brothers.
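A rough Python analogue of the reported analysis (the study itself used Mplus with robust maximum likelihood; the data, sample layout and effect sizes below are synthetic, chosen to mirror the order of magnitude of the reported coefficients):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 710  # e.g., 71 players x 10 zones (illustrative layout)

self_eff = rng.uniform(1, 10, n)      # zone-wise self-efficacy rating
group_eff = rng.uniform(1, 10, n)     # group-efficacy I rating
offensive = rng.uniform(1, 10, n)     # offensive convenience rating

# synthetic choice probabilities with effect sizes of the reported order
eta = -3.0 + 0.13 * self_eff + 0.13 * group_eff + 0.06 * offensive
chosen = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

# binomial GLM: does a zone get chosen as passing destination?
X = sm.add_constant(np.column_stack([self_eff, group_eff, offensive]))
fit = sm.GLM(chosen, X, family=sm.families.Binomial()).fit()
print(fit.params)          # log-odds coefficients (betas)
print(np.exp(fit.params))  # odds ratios, comparable to the ORs in the abstract
```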

Relevance: 30.00%

Abstract:

This paper provides new sufficient conditions for the existence, computation via successive approximations, and stability of Markovian equilibrium decision processes for a large class of OLG models with stochastic nonclassical production. Our notion of stability is the existence of stationary Markovian equilibrium. With nonclassical production, our economies encompass a large class of OLG models with public policy, valued fiat money, production externalities, and Markov shocks to production. Our approach combines aspects of both topological and order-theoretic fixed point theory, and provides the basis of globally stable numerical iteration procedures for computing extremal Markovian equilibrium objects. In addition to new theoretical results on existence and computation, we provide some monotone comparative statics results on the space of economies.
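The successive-approximation idea can be sketched abstractly: for an order-preserving operator on a complete lattice of candidate policies, iterating from the least and greatest elements converges monotonically to the extremal fixed points (Tarski-style arguments). The operator below is a toy monotone map on a function grid, not the paper's equilibrium operator:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 101)        # state space (e.g., capital)

def T(policy):
    # toy order-preserving map: smooths and damps a candidate policy function;
    # pointwise increasing in `policy`, so iteration is monotone
    shifted = np.interp(np.minimum(grid + 0.1, 1.0), grid, policy)
    return 0.5 * grid + 0.4 * shifted

lo, hi = np.zeros_like(grid), np.ones_like(grid)   # least / greatest elements
for it in range(200):
    lo_new, hi_new = T(lo), T(hi)
    if max(np.max(np.abs(lo_new - lo)), np.max(np.abs(hi_new - hi))) < 1e-12:
        break
    lo, hi = lo_new, hi_new

# If the extremal fixed points coincide, the computed equilibrium is unique.
print("iterations:", it, "gap between extremal fixed points:", np.max(hi - lo))
```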

Relevance: 30.00%

Abstract:

The difficulty of detecting differential gene expression in microarray data has existed for many years. Several correction procedures aim to control the family-wise error rate in the multiple comparison process, including the Bonferroni and Sidak single-step p-value adjustments, Holm's step-down correction method, and Benjamini and Hochberg's false discovery rate (FDR) correction procedure. Each multiple comparison technique has its advantages and weaknesses. We studied each multiple comparison method through numerical simulation studies and applied the methods to real exploratory DNA microarray data aimed at detecting molecular signatures in papillary thyroid cancer (PTC) patients. According to the results of our simulation studies, the Benjamini and Hochberg step-up FDR-controlling procedure performed best among these multiple comparison methods; applying it to the PTC microarray data, we discovered 1277 potential biomarkers among 54675 probe sets.
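The favored procedure is simple to implement; a minimal sketch follows (with synthetic p-values, far fewer than the study's 54675 probe sets), with Bonferroni shown for contrast:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up: sort the m p-values, find the largest k
    with p_(k) <= (k/m) * q, and reject hypotheses 1..k."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest index meeting the bound
        reject[order[: k + 1]] = True
    return reject

rng = np.random.default_rng(7)
# 10000 null p-values plus 300 "differentially expressed" ones (illustrative)
pvals = np.concatenate([rng.uniform(size=10000), rng.beta(0.1, 10.0, size=300)])

print("BH rejections:        ", benjamini_hochberg(pvals, q=0.05).sum())
print("Bonferroni rejections:", (pvals <= 0.05 / pvals.size).sum())
```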

Relevance: 30.00%

Abstract:

In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, reducing the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arclength-type continuation methods [Carrera, 1994], which make it possible to capture the unstable softening branch in brittle ruptures. A major drawback of the IMPL-EX integration scheme is the need to use small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
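The core of the IMPL-EX scheme is the explicit extrapolation of the internal variable within the step, which freezes the damage entering the stress update and keeps the algorithmic tangent positive; the implicit update only advances the stored internal variables. A 1-D sketch with an illustrative linear softening law (toy parameters, not the paper's tissue model):

```python
import numpy as np

E, r0, rf = 1000.0, 0.01, 0.05   # illustrative modulus and damage thresholds

def damage(r):                    # linear softening law (toy choice)
    return np.clip((r - r0) / (rf - r0), 0.0, 1.0)

def implex_step(eps, r_n, r_nm1, dt_n, dt_nm1):
    # explicit extrapolation of the internal variable (the IMPL-EX step):
    # the damage used for the stress is known at the start of the step, so
    # the secant stiffness (1 - d_tilde) E stays constant and positive
    r_tilde = r_n + (dt_n / dt_nm1) * (r_n - r_nm1)
    sigma = (1.0 - damage(r_tilde)) * E * eps
    # standard implicit update, used only to advance the internal variable
    r_next = max(r_n, abs(eps))
    return sigma, r_next

eps_path = np.linspace(0.0, 0.08, 81)                # monotonic stretching
r_nm1 = r_n = r0
for eps in eps_path:
    sigma, r_new = implex_step(eps, r_n, r_nm1, 1.0, 1.0)
    r_nm1, r_n = r_n, r_new
print("final stress (fully damaged):", round(sigma, 4))
```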

Relevance: 30.00%

Abstract:

There are many situations where input feature vectors are incomplete and methods to tackle the problem have been studied for a long time. A commonly used procedure is to replace each missing value with an imputation. This paper presents a method to perform categorical missing data imputation from numerical and categorical variables. The imputations are based on Simpson’s fuzzy min-max neural networks where the input variables for learning and classification are just numerical. The proposed method extends the input to categorical variables by introducing new fuzzy sets, a new operation and a new architecture. The procedure is tested and compared with others using opinion poll data.
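At the heart of fuzzy min-max networks is the hyperbox membership function; the sketch below uses a simplified min-combination in place of Simpson's original per-dimension average, with illustrative hyperbox corners:

```python
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Membership of point x in the hyperbox between min point v and max
    point w; it is 1 inside the box and decays linearly (slope gamma) with
    the distance outside it, combined here by a min over dimensions."""
    outside = np.maximum(v - x, 0.0) + np.maximum(x - w, 0.0)  # per dimension
    return np.min(np.clip(1.0 - gamma * outside, 0.0, 1.0))

# One hyperbox learned for some class over two numeric features
v = np.array([0.2, 0.3])   # min corner
w = np.array([0.5, 0.6])   # max corner

print(membership(np.array([0.3, 0.4]), v, w))   # inside  -> 1.0
print(membership(np.array([0.6, 0.4]), v, w))   # outside -> degraded
# The paper's extension introduces analogous fuzzy sets and a new operation
# so that categorical variables can enter the same membership computation.
```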

Relevance: 30.00%

Abstract:

The recession of coastal cliffs is a widespread phenomenon on rocky shores exposed to the combined incidence of the marine and meteorological processes that act on the shoreline. The phenomenon reveals itself violently and episodically, as gravitational movements of the ground, and can cause material and human losses. Although knowledge of these erosion risks is vital for proper coastal management, the development of cliff erosion predictive models is limited by the complex interactions between environmental processes and material properties over a range of temporal and spatial scales. Published prediction models are scarce and present important drawbacks: a) extrapolation models extend historical records into the future; b) empirical models study, on historical records, the system response to a change in one parameter; c) stochastic models determine the timing and magnitude of future events by extrapolating probability distributions extracted from historical catalogues; d) process-response models, whose stability and error propagation remain unexplored; e) models based on partial differential equations, which are computationally expensive and not very accurate. The first part of this thesis describes the main features of the latest models of each type and, for the most commonly used ones, gives their ranges of application, advantages and disadvantages. Finally, as a synthesis of the most relevant processes covered by the reviewed models, a conceptual diagram of coastal recession is presented. This conceptual model collects the most influential processes that must be taken into account when using or creating a coastal recession model to evaluate the hazard (timing/frequency) of the phenomenon over the short to medium term. A new process-response coastal recession model developed in this thesis incorporates the geomechanical behaviour of coastal cliffs composed of materials whose compressive strength does not exceed 5 MPa. The model simulates the spatio-temporal evolution of a 2-D cliff profile that can consist of heterogeneous materials. To do so, the marine dynamics (mean sea level, waves, tides, seasonal lake-level changes) are coupled with the evolution of the land: erosion, cliff face failure and the associated protective colluvial wedge. In its different variants, the model can include the analysis of the geomechanical stability of the materials, the effect of debris present at the cliff foot, groundwater effects, beach and run-up effects, and changes in the mean sea level or (seasonal or inter-annual) changes in the mean level of the water body (lakes). The discretization error of the model and its propagation in time were studied against the exact solutions for the first two tidal periods, for different numerical approximations in both time and space. The results of this error analysis justify the choices that minimize the error and identify the most suitable approximation methods for subsequent use in the modelling. The model was validated through profile evolution assessment at various locations of coastline retreat on the Holderness Coast, Yorkshire, UK, and on the north coast of Lake Erie, Ontario, Canada. The results represent an important step forward in linking material properties to the processes of cliff recession, in particular in considering the effect of groundwater, the oversteepening of rocky profiles and their response to changing conditions caused by climate change (e.g. mean sea level, changes in lake levels, etc.).
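To make the process-response coupling tangible, here is a deliberately crude single-profile recession loop in the spirit of the model described above (all parameter values are illustrative, not the calibrated thesis values): wave-driven notch erosion is moderated by material strength and debris shielding, and episodic failure resets the notch while recharging the protective wedge.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_c = 3.0          # compressive strength, MPa (the model targets < 5 MPa)
notch, retreat, talus = 0.0, 0.0, 0.0

for week in range(520):                      # ten years of weekly steps
    wave_force = rng.lognormal(mean=0.0, sigma=0.6)      # storminess proxy
    shielding = np.exp(-talus)                           # debris protection
    notch += 0.002 * shielding * wave_force / sigma_c    # notch growth, m
    talus = max(0.0, talus - 0.01)                       # debris removal
    if notch > 0.5:                          # stability threshold exceeded
        retreat += notch                     # cliff face collapses
        talus += notch                       # failed mass protects the toe
        notch = 0.0

print(f"10-year retreat: {retreat:.2f} m")
```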

Relevance: 30.00%

Abstract:

Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be fixed for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases a simple guess of the regions of the computational domain that most affect the accuracy is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than its structured counterpart. However, adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows. Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
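The two-grid idea behind truncation error (τ) estimation can be demonstrated on a model problem: inject the fine-grid solution into the coarse-grid operator and correct the resulting residual for the fine solution's own discretization error. The sketch below uses 1-D Poisson with finite differences as a stand-in for the finite-volume flow equations of the thesis:

```python
import numpy as np

# tau_H ~ ( L_H(I_h^H u_h) - f_H ) / (1 - (h/H)^p),  with p = 2, h = H/2 here;
# the denominator corrects for the fine solution's own O(h^p) error.

def solve_poisson(n):
    """-u'' = f on (0,1) with u(0) = u(1) = 0 and f = pi^2 sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return x, np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))

n_c = 31
x_f, u_f = solve_poisson(2 * n_c + 1)       # fine grid, h = H/2
x_c, u_inj = x_f[1::2], u_f[1::2]           # injection: coarse nodes are fine nodes
H = x_c[1] - x_c[0]

ue = np.concatenate(([0.0], u_inj, [0.0]))  # pad with the homogeneous BCs
L_u = (-ue[:-2] + 2.0 * ue[1:-1] - ue[2:]) / H**2
tau_est = (L_u - np.pi**2 * np.sin(np.pi * x_c)) / (1.0 - 0.25)

tau_exact = -(H**2 / 12.0) * np.pi**4 * np.sin(np.pi * x_c)  # leading term
print("relative error of tau estimate:",
      np.max(np.abs(tau_est - tau_exact)) / np.max(np.abs(tau_exact)))
```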