967 results for "rigorous results in statistical mechanics"
Abstract:
OBJECTIVES To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. METHODS The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with those of similar studies in dentistry using random-effects meta-analyses. RESULTS Of the 2323 included studies, 71% reported statistically significant results, with the proportion of significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds of reporting statistically significant results were 2.20 times as high for in vitro studies (OR: 2.20, 95% CI: 1.66-2.92) and 1.35 times as high for observational studies (OR: 1.35, 95% CI: 1.05-1.73). The probability of statistically significant results from randomized controlled trials was significantly lower than from other study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower than in other dental specialties, but this difference did not reach statistical significance (P>0.05). CONCLUSIONS The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry.
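To make the odds-ratio arithmetic above explicit (a standard identity, not specific to this study):

```latex
% An odds ratio multiplies the baseline odds; the percent increase is OR - 1:
\mathrm{OR} = \frac{p_1/(1-p_1)}{p_0/(1-p_0)}, \qquad
\text{increase in odds} = (\mathrm{OR}-1)\times 100\%
% e.g. OR = 2.20 means the odds are 2.20 times as large (a 120% increase),
% and OR = 1.35 means a 35% increase.
```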
Abstract:
Synopsis: Sport organisations are facing multiple challenges originating from an increasingly complex and dynamic environment in general, and from internal changes in particular. Our study seeks to reveal and analyse the causes of professionalization processes in international sport federations, the forms resulting from them, and the related consequences. Abstract: AIM OF ABSTRACT/PAPER - RESEARCH QUESTION Sport organisations are facing multiple challenges originating from an increasingly complex and dynamic environment in general, and from internal changes in particular. In this context, professionalization seems to have been adopted by sport organisations as an appropriate strategy to respond to pressures such as becoming more "business-like". The ongoing study seeks to reveal and analyse the internal and external causes of professionalization processes in international sport federations, the forms resulting from them (e.g. organisational, managerial, economic), and the related consequences for objectives, values, governance methods, performance management and rationalisation. THEORETICAL BACKGROUND/LITERATURE REVIEW Studies on sport as a specific non-profit sector mainly focus on the "professionalization of individuals" (Thibault, Slack & Hinings, 1991), often within sport clubs (Thiel, Meier & Cachay, 2006) and national sport federations (Seippel, 2002), or on organisational change (Griginov & Sandanski, 2008; Slack & Hinings, 1987, 1992; Slack, 1985, 2001), leaving a broader analysis of governance, management and professionalization in sport organisations an unaccomplished task. To further current research on the above-mentioned topics, we analyse causes, forms and consequences of professionalization processes in international sport federations. The social theory of action (Coleman, 1986; Esser, 1993) serves as the theoretical framework, from which we derive a multi-level framework for the analysis of sport organisations (Nagel, 2007). In this multi-level framework, sport federations are conceptualised as corporative actors whose objectives are defined and implemented with regard to the interests of member organisations (Heinemann, 2004) and/or other pressure groups. To understand the social action and social structures (Giddens, 1984) of sport federations, two levels are the focus of our analysis: the macro level, examining the environment at large (political, social and economic systems, etc.), and the meso level (Esser, 1999), examining the organisational structures, actions and decisions of the federation's headquarters as well as of member organisations. METHODOLOGY, RESEARCH DESIGN AND DATA ANALYSIS The multi-level framework is used to gather and analyse information on causes, forms and consequences of professionalization processes in sport federations. It is applied in a twofold approach: first, an exploratory study based on nine semi-structured interviews with experts from umbrella sport organisations (IOC, WADA, ASOIF, AIOWF, etc.), together with the analysis of related documents, relevant reports (the 2000 IOC report on governance reform, Agenda 2020, etc.) and important moments of change in the Olympic Movement (Olympic revenue share, IOC evaluation criteria, etc.); and secondly, several case studies. Whereas the exploratory study focuses on the causes of professionalization at the external, internal and headquarters levels as depicted in the literature, the case studies focus on forms and consequences.
Applying our conceptual framework, the analysis of forms is built around three dimensions: 1) individuals (persons and positions); 2) processes and structures (formalisation, specialisation); 3) activities (strategic planning). With regard to consequences, we centre our attention on expectations of and relationships with stakeholders (e.g. cooperation with business partners); structure, culture and processes (e.g. governance models, performance); and expectations of and relationships with member organisations (e.g. centralisation vs. regionalisation). For the case studies, a mixed-method approach is applied to collect relevant data: questionnaires for more quantitative data, interviews for more qualitative data, and document analysis and observation. RESULTS, DISCUSSION AND IMPLICATIONS/CONCLUSIONS With regard to causes of professionalization processes, we analyse three different levels: 1. the external level, where the main pressure derives from financial resources (stakeholders, benefactors) and important turning points (scandals, media pressure, IOC requirements for Olympic sports); 2. the internal level, where pressure from member organisations turned out to be less decisive than assumed (little involvement of member organisations in decision-making); 3. the headquarters level, where specific economic models (World Cups, other international circuits, World Championships) and organisational structures (decision-making procedures, values, leadership) trigger or hinder a federation's professionalization process. Based on our first analysis, an outline of an economic model is suggested, distinguishing four categories of IFs: "money-generating IFs", based largely on commercialisation and strategic alliances; "classical Olympic IFs", rather reactive and dependent on Olympic revenue; "classical non-Olympic IFs", rather independent of the Olympic Movement; and "money-receiving IFs", dependent on benefactors and holding strong traditions and values. The results regarding forms and consequences will be outlined in the presentation. The first results from the two pilot studies will allow us to refine our conceptual framework for subsequent case studies, thus extending our data collection and developing fundamental conclusions. References: Bayle, E., & Robinson, L. (2007). A framework for understanding the performance of national governing bodies of sport. European Sport Management Quarterly, 7, 249-268. Chantelat, P. (2001). La professionnalisation des organisations sportives: Nouveaux débats, nouveaux enjeux [Professionalisation of sport organisations]. Paris: L'Harmattan. Dowling, M., Edwards, J., & Washington, M. (2014). Understanding the concept of professionalization in sport management research. Sport Management Review. Advance online publication. doi: 10.1016/j.smr.2014.02.003. Ferkins, L., & Shilbury, D. (2012). Good boards are strategic: What does that mean for sport governance? Journal of Sport Management, 26, 67-80. Thibault, L., Slack, T., & Hinings, B. (1991). Professionalism, structures and systems: The impact of professional staff on voluntary sport organizations. International Review for the Sociology of Sport, 26, 83-97.
Abstract:
BACKGROUND Continuous venovenous hemodialysis (CVVHD) may generate microemboli that cross the pulmonary circulation and reach the brain. The aim of the present study was to quantify (load per time interval) and qualify (gaseous vs. solid) cerebral microemboli (CME), detected as high-intensity transient signals, using transcranial Doppler ultrasound. MATERIALS AND METHODS Twenty intensive care unit (ICU group) patients requiring CVVHD were examined. CME were recorded in both middle cerebral arteries for 30 minutes during CVVHD and during a CVVHD-free interval. Twenty additional patients, hospitalized for orthopedic surgery, served as a non-ICU control group. Statistical analyses were performed using the Mann-Whitney U test or the Wilcoxon matched-pairs signed-rank test, followed by Bonferroni corrections for multiple comparisons. RESULTS In the non-ICU group, a median (range) of 48 (14.5-169.5) gaseous CME was detected. In the ICU group, the 67.5 (14.5-588.5) gaseous CME detected during the CVVHD-free interval increased 5-fold to 344.5 (59-1019) during CVVHD (P<0.001). The number of solid CME was low in all groups (non-ICU group: 2 [0-5.5]; ICU group CVVHD-free interval: 1.5 [0-14.25]; ICU group during CVVHD: 7 [3-27.75]). CONCLUSIONS This observational pilot study shows that CVVHD was associated with a higher gaseous but not solid CME burden in critically ill patients. Although the differentiation between gaseous and solid CME remains challenging, our finding may support the hypothesis of microbubble generation in the CVVHD circuit and its transpulmonary translocation toward the intracranial circulation. Importantly, the impact of gaseous and solid CME generated during CVVHD on brain integrity of critically ill patients currently remains unknown and is highly debated.
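The abstract names its tests but not their implementation; a minimal sketch of the same comparisons, assuming SciPy and invented data arrays (`cme_cvvhd`, `cme_free`, `cme_control` are hypothetical placeholders), might look like this:

```python
# Hedged sketch: paired and unpaired nonparametric tests with a
# Bonferroni correction, as described in the abstract. Data are invented.
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)
cme_cvvhd = rng.poisson(344, 20)    # gaseous CME counts during CVVHD (hypothetical)
cme_free = rng.poisson(67, 20)      # same patients, CVVHD-free interval
cme_control = rng.poisson(48, 20)   # non-ICU orthopedic controls

n_comparisons = 2
# Paired: ICU patients during CVVHD vs. their own CVVHD-free interval
_, p_paired = wilcoxon(cme_cvvhd, cme_free)
# Unpaired: ICU (CVVHD-free interval) vs. non-ICU controls
_, p_unpaired = mannwhitneyu(cme_free, cme_control)

# Bonferroni: multiply each p-value by the number of comparisons (cap at 1)
print("paired   p =", min(1.0, p_paired * n_comparisons))
print("unpaired p =", min(1.0, p_unpaired * n_comparisons))
```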
Abstract:
Internet-based cognitive behavioral self-help treatment (ICBT) for anxiety disorders has shown promising results in several trials, but there is still a lack of studies of ICBT in "real world" primary care settings. In this randomized controlled trial we recruited participants through general practitioners. The aim of the study was to examine whether treatment-as-usual (TAU) in primary care plus ICBT is superior to TAU alone in reducing anxiety symptoms and other outcome measures among individuals meeting diagnostic criteria for at least one of three anxiety disorders (social anxiety disorder, panic disorder with or without agoraphobia, generalized anxiety disorder). 150 adults fulfilling diagnostic criteria for at least one of these anxiety disorders according to a diagnostic interview are randomly assigned to one of two conditions: TAU plus ICBT versus TAU alone. Randomization is stratified by primary disorder, medication (yes/no) and concurrent psychotherapy. ICBT consists of a transdiagnostic, tailored Internet-based self-help program for several anxiety disorders, which also includes cognitive bias modification for interpretation (CBM-I). Primary outcomes are symptoms on disorder-specific anxiety measures and diagnostic status after the intervention (9 weeks). Secondary outcomes include the primary outcomes at 3-month follow-up and measures such as general symptomatology, depression, quality of life, adherence to ICBT and satisfaction with ICBT. The study is currently being completed. Primary results, along with results for specific subgroups (e.g. by primary diagnosis, concurrent medication and/or psychotherapy), will be presented and discussed.
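As a sketch of the stratified randomization described above (the strata follow the abstract, but the block size and 1:1 allocation are assumptions, not the trial's published procedure):

```python
# Hedged sketch of stratified block randomization: participants are
# grouped by (primary disorder, medication, concurrent psychotherapy)
# and assigned within each stratum from shuffled blocks of 4.
import random
from collections import defaultdict

random.seed(42)
blocks = defaultdict(list)  # stratum -> remaining assignments in current block

def assign(primary_disorder: str, medication: bool, psychotherapy: bool) -> str:
    stratum = (primary_disorder, medication, psychotherapy)
    if not blocks[stratum]:
        block = ["TAU+ICBT", "TAU+ICBT", "TAU", "TAU"]
        random.shuffle(block)
        blocks[stratum] = block
    return blocks[stratum].pop()

print(assign("panic disorder", True, False))  # e.g. 'TAU'
print(assign("panic disorder", True, False))  # stays balanced within the stratum
```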
Abstract:
The Financial Accounting Standards Board (FASB) mandated the expensing of stock options with FAS 123(R). As of March 2006, 749 companies had accelerated the vesting of their employee stock options and thereby avoided a reduction in their reported profits that would otherwise have occurred under the new standard. There are many different motives for the acceleration strategy, and the focus of this study is to determine whether shareholders viewed these motives as positive or negative. A favorable return subsequent to an acceleration announcement would signify that shareholders viewed management's motives as positive; an unfavorable return would signify that shareholders viewed them as negative. The evidence from this study suggests that shareholders reacted favorably, on average, to acceleration announcements. However, these results lack statistical significance and are based on a small sample; thus, they should be interpreted with caution.
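The abstract does not detail how announcement returns were measured; a conventional event-study formulation, offered here only as a sketch of the standard approach rather than the authors' exact model, is:

```latex
% Market-model abnormal return for firm i on event day t:
AR_{it} = R_{it} - (\hat{\alpha}_i + \hat{\beta}_i R_{mt})
% Cumulative abnormal return over the event window [t_1, t_2]:
CAR_i(t_1, t_2) = \sum_{t = t_1}^{t_2} AR_{it}
```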
Abstract:
Although Pap screening has decreased morbidity and mortality from cervical cancer, reported statistics indicate that among ethnic groups, Hispanic women are among the least likely to follow screening guidelines. Human papillomavirus (HPV), a major risk factor for cervical cancer, as well as pre-cancerous lesions, may be detected by early Pap screening. With a reported 43% prevalence of HPV infection in college women, regular Pap screening is important. The purpose of this descriptive, cross-sectional survey was to examine self-reported cervical cancer screening rates in a target population of primarily Mexican-American college women, and to discover whether recognized correlates of screening behavior explained differences in screening rates between this and the two other predominant groups on the University of Houston Downtown campus, non-Hispanic white and African-American women. The sample consisted of 613 women recruited from summer 2003 classes. A survey, adapted from an earlier El Paso study and based on constructs of the Health Belief Model (HBM), was administered to women ages 18 and older. Although screening rates were similar across ethnic groups overall, the Hispanic group obtained screening less frequently, though this difference did not reach statistical significance. However, significantly lower screening rates were found among Mexican-American women under age 25. Additionally, among the predicted correlates, the HBM construct of perceived barriers was the most significant predictor of non-screening for the Mexican-American group. For all groups, knowledge about cervical cancer was negatively correlated with ever obtaining Pap screening and with screening within the past year. This implies that if health counseling is given at the time of women's screening visits, both adherence to appropriate screening intervals and risk-factor avoidance may become more likely. Studies such as this are needed to address both screening behaviors and the likelihood of follow-up for abnormal results in populations of multicultural, urban college women.
Abstract:
Background and purpose. Brain lesions in acute ischemic stroke measured by imaging tools provide important clinical information for diagnosis, and final infarct volume has been considered a potential surrogate marker for clinical outcomes. Strong correlations have been found between lesion volume and clinical outcomes in the NINDS t-PA Stroke Trial, but little has been published about lesion location and clinical outcomes. Studies of the National Institute of Neurological Disorders and Stroke (NINDS) t-PA Stroke Trial data found that the direction of the t-PA treatment effect on the decrease in CT lesion volume was consistent with the observed clinical effects at 3 months, but measures of t-PA treatment benefit using CT lesion volumes showed diminished statistical significance compared with clinical scales. Methods. We used a global test to evaluate the hypothesis that lesion locations were strongly associated with clinical outcomes within each treatment group at 3 months after stroke. The anatomic locations on CT scans were used for analysis. We also assessed the effect of t-PA on lesion location using a global statistical test. Results. In the t-PA group, patients with frontal lesions had larger infarct volumes and worse NIHSS scores at 3 months after stroke. The clinical status of patients with frontal lesions in the t-PA group was less likely to be affected by lesion volume at 3 months, compared with those who had no frontal lesions. Within the placebo group, both brain stem and internal capsule locations were significantly associated with lower odds of a favorable outcome at 3 months. Using a global test, we could not detect a significant effect of t-PA treatment on lesion location, although differences between the two treatment groups in the proportion of lesion findings at each location were found. Conclusions. Frontal, brain stem, and internal capsule locations were significantly related to clinical status at 3 months after stroke onset. We detected no significant t-PA effect across all 9 locations, although the proportion of lesion findings differed among locations between the two treatment groups.
Abstract:
The primary Mg/Ca ratio of foraminiferal shells is a potentially valuable paleoproxy for sea surface temperature (SST) reconstructions. However, the reliable extraction of this ratio from sedimentary calcite assumes that we can overcome artifacts related to foraminiferal ecology and partial dissolution, as well as contamination by secondary calcite and clay. The standard batch method for Mg/Ca analysis involves cracking, sonicating, and rinsing the tests to remove clay, followed by chemical cleaning, and finally acid digestion and single-point measurement. This laborious procedure often results in substantial loss of sample (typically 30-60%). We find that even the earliest steps of this procedure can fractionate Mg from Ca, thus biasing the result toward a more variable and often anomalously low Mg/Ca ratio. Moreover, the more rigorous the cleaning, the more calcite is lost, and the more likely it becomes that any residual clay not removed by physical cleaning will increase the ratio. These potentially significant sources of error can be overcome with a flow-through (FT) sequential leaching method that makes time- and labor-intensive pretreatments unnecessary. When combined with time-resolved analysis (FT-TRA), flow-through, performed with a gradually increasing and highly regulated acid strength, produces continuous records of Mg, Sr, Al, and Ca concentrations in the leachate, sorted by the dissolution susceptibility of the reacting material. FT-TRA reliably separates secondary calcite (which is not representative of original life habitats) from the more resistant biogenic calcite (the desired signal) and clay (a contaminant of high Mg/Ca, which also contains Al), and further resolves the biogenic component into primary and more resistant fractions that may reflect habitat or other changes during ontogeny. We find that the most susceptible fraction of biogenic calcite in surface-dwelling foraminifera gives the most accurate value for SST and therefore best represents primary calcite. Sequential dissolution curves can be used to correct the primary Mg/Ca ratio for clay, if necessary. However, the temporal separation of calcite from clay in FT-TRA is so complete that this correction is typically <=2%, even in clay-rich sediments. Unlike hands-on batch methods, which are difficult to reproduce exactly, flow-through lends itself to automation, providing precise replication of treatment for every sample. Our automated flow-through system can process 22 samples, two system blanks, and 48 mixed standards in <12 hours of unattended operation. FT-TRA thus represents a faster, cheaper, and better way to determine Mg/Ca ratios in foraminiferal calcite.
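As an illustration of how FT-TRA leachate records might be reduced to a primary Mg/Ca value (the arrays, curve shapes, and window-selection rule below are invented for the sketch; the authors' actual reduction procedure is not given in the abstract):

```python
# Hedged sketch: sum Mg and Ca over the elution window identified as the
# most easily dissolved (primary) biogenic calcite, then form the molar
# ratio. Leachate concentration series are invented placeholders.
import numpy as np

t = np.linspace(0, 60, 240)                  # elution time (min), hypothetical
ca = np.exp(-((t - 20) / 8) ** 2)            # Ca release curve (arbitrary units)
mg = 0.003 * ca + 0.0005 * np.exp(-((t - 45) / 5) ** 2)  # calcite Mg + late clay peak

# Take the earliest-dissolving flank of the calcite peak as primary calcite
primary = (t > 10) & (t < 20)
mg_ca_mmol_mol = 1000 * mg[primary].sum() / ca[primary].sum()
print(f"primary Mg/Ca ~ {mg_ca_mmol_mol:.2f} mmol/mol")
```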
Abstract:
Parameters in the photosynthesis-irradiance (P-E) relationship of phytoplankton were measured at weekly to bi-weekly intervals for 20 yr at 6 stations on the Rhode River, Maryland (USA). Variability in the light-saturated photosynthetic rate, PBmax, was partitioned into interannual, seasonal, and spatial components. The seasonal component of the variance was greatest, followed by interannual and then spatial. Physiological models of PBmax based on balanced growth or photoacclimation predicted the overall mean and most of the range, but not individual observations, and failed to capture important features of the seasonal and interannual variability. PBmax correlated most strongly with temperature and the concentration of dissolved inorganic carbon (IC), with lesser correlations with chlorophyll a, diffuse attenuation coefficient, and a principal component of the species composition. In statistical models, temperature and IC correlated best with the seasonal pattern, but temperature peaked in late July, out of phase with PBmax, which peaked in September, coincident with the maximum in monthly averaged IC concentration. In contrast with the seasonal pattern, temperature did not contribute to interannual variation, which instead was governed by IC and the additional lesser correlates. Spatial variation was relatively weak and uncorrelated with ancillary measurements. The results demonstrate that both the overall distribution of PBmax and its relationship with environmental correlates may vary from year to year. Coefficients in empirical statistical models became stable after including 7 to 10 yr of data. The main correlates of PBmax are amenable to automated monitoring, so that future estimates of primary production might be made without labor-intensive incubations.
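The abstract does not state which functional form was fitted for the P-E curve; one widely used parameterization (Jassby and Platt's hyperbolic tangent) expresses it in terms of the two standard parameters, the light-saturated rate PBmax and the initial slope α:

```latex
% Hyperbolic-tangent P-E curve: biomass-normalized photosynthesis P^B
% rises with initial slope alpha and saturates at P^B_max as irradiance E grows.
P^B(E) = P^B_{\max} \tanh\!\left(\frac{\alpha E}{P^B_{\max}}\right)
```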
Abstract:
This paper presents a physically cogent model for electrical noise in resistors obtained on thermodynamic grounds. This new model, derived from the works of Johnson and Nyquist, also agrees with the quantum treatment of noisy systems given by Callen and Welton in 1951, thus unifying the two physical viewpoints. It is a complex, 2-D noise model based on an admittance, considering both fluctuation and dissipation of electrical energy, and thereby improves on the real, 1-D model in current use, which considers only dissipation. The new model is presented in the frequency domain through two orthogonal currents linked to a common noise voltage by an admittance function. Its use in the time domain exposes the pitfall behind a paradox of statistical mechanics, that of systems considered energy-conserving and deterministic on the microscale yet dissipative and unpredictable on the macroscale, and also shows how to use the Fluctuation-Dissipation Theorem properly.
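For reference, the Johnson-Nyquist result and the Callen-Welton quantum generalization that the paper's admittance-based model is stated to agree with can be summarized as follows (a textbook statement, not the paper's own derivation):

```latex
% Classical Johnson-Nyquist voltage noise spectral density of a resistance R:
S_V(f) = 4 k_B T R
% Callen-Welton (fluctuation-dissipation) generalization for an impedance Z(f),
% which reduces to the line above when hf << k_B T:
S_V(f) = 2 h f \,\mathrm{Re}\,Z(f) \coth\!\left(\frac{h f}{2 k_B T}\right)
```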
Abstract:
Following the success achieved in previous research projects using non-destructive methods to estimate the physical and mechanical aging of particle and fibre boards, this paper studies the relationships between aging and physical and mechanical changes, using non-destructive measurements of oriented strand board (OSB). 184 pieces of OSB board from a French source were tested to analyze their actual physical and mechanical properties. The same properties were estimated using acoustic non-destructive methods (ultrasound and stress-wave velocity) during a physical laboratory aging test. Propagation wave velocity was recorded with the sensors aligned, edge to edge, and at an angle of 45 degrees with both sensors on the same face of the board, because aligned measurements are not possible on site. The velocity results are consistently higher in the 45-degree measurements. From the results of statistical analysis, it can be concluded that there is a strong relationship between the acoustic measurements and the decline in the physical and mechanical properties of the panels due to aging. The authors propose several models to estimate the physical and mechanical properties of the board, as well as its degree of aging. The best results are obtained using ultrasound, although the difference compared with the stress-wave method is not very significant. A reliable prediction of the degree of deterioration (aging) of the board is presented.
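The fitted regressions themselves are not reproduced in the abstract; the physical relation that usually underpins acoustic estimation of board stiffness (an assumption about the basis of such models, not the authors' exact equations) is the one-dimensional wave relation:

```latex
% Dynamic modulus of elasticity from density and propagation velocity:
E_{dyn} = \rho \, v^2
% so a drop in measured velocity at roughly constant density signals
% a loss of stiffness, i.e., mechanical aging.
```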
Abstract:
This paper studies the relationship between aging, physical changes and the results of non-destructive testing of plywood. 176 pieces of plywood were tested to analyze their actual and estimated density using non-destructive methods (screw withdrawal force and ultrasound wave velocity) during a laboratory aging test. From the results of statistical analysis it can be concluded that there is a strong relationship between the non-destructive measurements carried out and the decline in the physical properties of the panels due to aging. The authors propose several models to estimate board density. The best results are obtained with ultrasound. A reliable prediction of the degree of deterioration (aging) of the board is presented.
Abstract:
Breeder blanket materials have to produce tritium from lithium while fulfilling several strict conditions. In particular, when dealing with materials to be applied in fusion reactors, one of the key questions is the study of light-ion retention, since such ions can be produced by transmutation reactions and/or introduced by interaction with the plasma. In ceramic breeders, understanding the behaviour of hydrogen isotopes, and especially the diffusion of tritium to the surface, is crucial. Moreover, the evolution of the microstructure during irradiation with energetic ions, neutrons and electrons is complex because of the interaction of a large number of processes.
Abstract:
Ontologies and taxonomies are widely used to organize concepts, providing the basis for activities such as indexing and serving as background knowledge for NLP tasks. As such, translating these resources would prove useful for adapting such systems to new languages. However, we show that the nature of these resources differs significantly from the "free-text" paradigm used to train most statistical machine translation systems. In particular, we see significant differences in the linguistic nature of these resources, and the resources carry rich additional semantics. We demonstrate that, as a result of these linguistic differences, standard SMT methods, and in particular evaluation metrics, can perform poorly. We then turn to the task of leveraging these semantics for translation, which we approach in three ways: by adapting the translation system to the domain of the resource; by examining whether semantics can help to predict the syntactic structure used in translation; and by evaluating whether existing translated taxonomies can be used to disambiguate translations. We present early results from these experiments, which shed light on the degree of success we may expect from each approach.
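As a toy illustration of why standard SMT evaluation metrics behave poorly on taxonomy labels (assuming NLTK is available; the example strings are invented):

```python
# Hedged sketch: sentence-level BLEU assumes sentence-length n-gram
# statistics; on one- or two-word taxonomy labels most higher-order
# n-grams simply do not exist, so scores collapse even for adequate
# translations unless smoothing is applied.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1
reference = [["body", "of", "water"]]   # gold label translation (invented)
hypothesis = ["water", "body"]          # adequate variant, different order

print(sentence_bleu(reference, hypothesis, smoothing_function=smooth))
```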
Abstract:
The Universidad Politécnica de Madrid (UPM) includes schools and faculties that formerly offered degrees in engineering, architecture and computer science, and that are now undergoing a rapid metamorphosis under the EHEA Bologna Plan into degree, master and doctorate structures. They focus on action involving machines, constructions and enterprises, which are subject to risks created by machines, humans and the environment. These risks arise in such contexts as service loads, wind, snow, waves, flows, earthquakes, forces and effects in machines, vehicle behavior, chemical effects, and other environmental factors, including effects involving crops, cattle and wild animals, forests, and varied essential economic and social disturbances. The authors' emphasis in this session is mainly on risks of natural origin, such as hail, wind, snow or waves, which are not exactly known a priori but are often described with statistical distributions giving expected extreme values for convenient return periods. These distributions are known from measurements over time, the statistics of extremes, and models of hazard scenarios and of the responses of man-made constructions or devices. In each engineering field, theories have been built about hazard scenarios and how to cover important risks. Engineers must ensure that the systems they handle, such as vehicles, machines, firms, agricultural land or forests, achieve production with sufficient safety for persons and with decent economic results in spite of risks. Risks must therefore be considered in planning, realization and operation, and safety margins must be adopted, but at a reasonable cost. A small level of risk will thus often remain, due to cost limitations or to rare hazards, and it may be covered by insurance, as in transport with cars, ships or aircraft, in agriculture against hail, or against fire in houses or forests. These and other decisions about quality, safety for people, or business financial risks are sometimes treated with decision-theory models, often using tools from statistics or operational research. The authors have carried out, and are continuing, field surveys on how risk is treated in the UPM curricula, making a deep analysis of study plans in light of the new degree structures of the EHEA Bologna Plan, and they have considered the risk structures offered by diverse schools of decision theory. This yields a picture of needs and uses, and recommendations for improving the teaching of risk, which may include special subjects oriented to each career, school or faculty, recommended for inclusion in the curricula, with an elaboration and presentation format using a multi-criteria decision model.
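As a concrete instance of the "extreme values for convenient return periods" mentioned above (a generic textbook formulation, not tied to any particular UPM course):

```latex
% Gumbel CDF for annual maxima (location mu, scale beta):
F(x) = \exp\!\left(-e^{-(x-\mu)/\beta}\right)
% Design value exceeded on average once every T years, from F(x_T) = 1 - 1/T:
x_T = \mu - \beta \ln\!\left(-\ln\!\left(1 - \tfrac{1}{T}\right)\right)
```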
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually lack the accuracy needed to simulate real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be fixed for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases a simple guess of the regions of the computational domain that most affect accuracy is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The present work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec.
1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predicting them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, together with a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, in which τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
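A compact sketch of the τ-estimation idea at the core of this thesis (a generic statement of the technique, not the document's exact formulation): since the fine-grid solution annihilates the fine-grid residual, evaluating the coarse-grid residual on the restricted fine solution estimates the truncation error, which can then drive adaptation or be reinserted as a source term (τ-extrapolation):

```latex
% Truncation error estimated from the coarse-grid residual of the
% restricted fine-grid solution (I_H^h restricts from grid h to grid H):
R_h(u_h) = 0, \qquad \tau_H \approx R_H\!\left(I_H^{h} u_h\right)
% tau-extrapolation: re-solving with the estimate as a source term
% raises the accuracy of the corrected solution:
R_H(\tilde{u}_H) = \tau_H
```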