30 results for DISCREPANCIES
at Universidad Politécnica de Madrid
Study of rapid ionisation for simulation of soft X-ray lasers with the 2D hydro-radiative code ARWEN
Abstract:
We present our fast ionisation routine used to study transient soft X-ray lasers with ARWEN, a two-dimensional hydrodynamic code incorporating adaptive mesh refinement (AMR) and radiative transport. We compute global rates between ion stages assuming an effective temperature between singly excited levels of each ion. A two-step method is used to obtain in a straightforward manner the variation of ion populations over long hydrodynamic time steps. We compare our model with existing theoretical results, both stationary and transient, finding that the discrepancies are moderate except at high densities. We simulate an existing molybdenum Ni-like transient soft X-ray laser with ARWEN. Use of the fast ionisation routine leads to a larger increase in temperature and a larger gain zone than when LTE data tables are used.
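To make the two-step idea concrete, here is a minimal, self-contained sketch of advancing ion-stage populations over one long time step: first solve for the stationary populations of a global rate matrix, then relax the current populations toward them. The rate values, the single relaxation time and all names are illustrative assumptions, not the actual ARWEN routine.

```python
# Minimal sketch of evolving ion-stage populations with global
# ionisation/recombination rates over one long hydro time step.
# NOT the ARWEN routine: rates, relaxation ansatz and names are assumed.
import numpy as np

def step_populations(n, ionis, recomb, dt):
    """Advance ion-stage populations n (sums to 1) by dt [s].

    ionis[k]  : rate ion stage k -> k+1  [1/s]
    recomb[k] : rate ion stage k+1 -> k  [1/s]
    """
    m = len(n)
    R = np.zeros((m, m))
    for k in range(m - 1):
        R[k, k]         -= ionis[k]   # loss of stage k by ionisation
        R[k + 1, k]     += ionis[k]   # gain of stage k+1
        R[k + 1, k + 1] -= recomb[k]  # loss of stage k+1 by recombination
        R[k, k + 1]     += recomb[k]  # gain of stage k
    # Step 1: stationary populations n_ss, i.e. the null vector of R.
    w, v = np.linalg.eig(R)
    n_ss = np.real(v[:, np.argmin(np.abs(w))])
    n_ss /= n_ss.sum()
    # Step 2: relax toward n_ss with the slowest nonzero rate, a cheap
    # surrogate for integrating the stiff rate equations exactly.
    rates = np.abs(np.real(w))
    tau = 1.0 / rates[rates > 1e-30].min()
    return n_ss + (n - n_ss) * np.exp(-dt / tau)

n = np.array([1.0, 0.0, 0.0])                  # start fully in stage 0
n = step_populations(n, ionis=[2e9, 5e8], recomb=[1e8, 3e7], dt=1e-9)
print(n)
```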
Abstract:
In this Comment we explain the discrepancies mentioned by the authors between their results and ours concerning the influence of the gravitational quadrupole moment in the perturbative calculation of corrections to the precession of the periastron of quasi-elliptical Keplerian equatorial orbits around a point mass. The discrepancy appears to be a consequence of two different calculations of the angular momentum of the orbits.
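For orientation only (a standard textbook relation, not a formula quoted from the Comment): the leading quadrupole correction to the periastron advance of an equatorial orbit depends on the semi-latus rectum p, and hence on the orbital angular momentum L, so two different calculations of L propagate directly into the predicted precession.

```latex
% Standard first-order result for an oblate central body of radius R
% and quadrupole moment J_2; p is the semi-latus rectum of the orbit.
\Delta\varpi \simeq 3\pi J_2 \left(\frac{R}{p}\right)^{2},
\qquad p = \frac{L^{2}}{G M m^{2}}
```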
Abstract:
SUNRISE is a solar telescope that was successfully flown in June 2009 on a long-duration balloon from the Swedish Space Corporation Esrange launch site. The design of the thermal control of SUNRISE was quite critical because of the sensitivity to temperature of the optomechanical devices and the electronics, problems compounded by the size and high power dissipation of the system. A detailed thermal mathematical model of SUNRISE was set up to predict temperatures. In this communication the thermal behaviour of SUNRISE during flight is presented. Flight temperatures of some devices are presented and analysed, and the measured data are compared with the predictions given by the thermal mathematical models. The main discrepancies between flight data and the temperatures predicted by the models have been identified, allowing thermal engineers to improve their knowledge of the thermal behaviour of the system for future missions.
Abstract:
A proper allocation of resources targeted to solve hunger is essential to optimize the efficacy of actions and maximize results. This requires an adequate measurement and formulation of the problem since, paraphrasing Einstein, the formulation of a problem is essential to reaching a solution. Different measurement methods have been designed to count, score, classify and compare hunger at the local level and to allow comparisons between different places. However, the alternative methods reach significantly different results, and these discrepancies make decisions on the targeting of resource allocations difficult. To assist decision makers, a new method is proposed that takes into account the dimension of hunger and the coping capacities of countries, enabling both geographical and sectoral priorities to be established for the allocation of resources.
Abstract:
Culverts are very common in recent railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behaviour has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the point of view of safety and savings. In this paper a complete study of a culvert, including on-site measurements as well as numerical modelling, is presented. The structure belongs to the high-speed railway line linking Segovia and Valladolid, in Spain, opened to traffic in 2004. Its dimensions (3 x 3 m) are the most frequent along the line. Other factors, such as reduced overburden (0.6 m) and an almost right angle with the track axis, make it an interesting example from which to extract generalized conclusions. On-site measurements were performed, recording the dynamic response at selected points of the structure during the passage of high-speed trains at speeds ranging between 200 and 300 km/h. The measurements by themselves provide a good insight into the main features of the dynamic behaviour of the structure. A 3D finite element model of the structure, representing its key features, was also studied, as it allows further understanding of the dynamic response to the train loads. In the paper the discrepancies between predicted and measured vibration levels are analysed and some recommendations on numerical modelling are proposed.
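As an illustration of the kind of processing such recordings invite (this is not the authors' actual processing chain), the sketch below extracts the dominant frequencies of an acceleration signal with an FFT; the sampling rate and the synthetic signal are made up for the example.

```python
# Illustrative sketch: dominant frequencies of a recorded acceleration
# signal during a train passage, via FFT. All numbers are assumed.
import numpy as np

fs = 1000.0                                     # sampling rate [Hz], assumed
t = np.arange(0, 5.0, 1.0 / fs)
acc = (0.05 * np.sin(2 * np.pi * 12.0 * t)      # synthetic structural mode
       + 0.02 * np.sin(2 * np.pi * 47.0 * t)    # synthetic axle-passage line
       + 0.005 * np.random.randn(t.size))       # measurement noise

spec = np.abs(np.fft.rfft(acc)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peaks = freqs[np.argsort(spec)[-2:]]            # two strongest spectral lines
print("dominant frequencies [Hz]:", np.sort(peaks))
```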
Abstract:
A proper allocation of resources targeted to solve hunger is essential to optimize the efficacy of actions and maximize results. This requires an adequate measurement and formulation of the problem since, paraphrasing Einstein, the formulation of a problem is essential to reaching a solution. Different measurement methods have been designed to count, score, classify and compare hunger at the local level and to allow comparisons between different places. However, the alternative methods reach significantly different results, and these discrepancies make decisions on the targeting of resource allocations difficult. To assist decision makers, a new method is proposed that takes into account the dimension of hunger and the coping capacities of countries, enabling both geographical and sectoral priorities to be established for the allocation of resources.
Abstract:
Background. Over the past two decades, a striking increase in the number of people with metabolic syndrome (MetS) has taken place worldwide. Many studies compare prevalences using different criteria and metabolic risk estimation formulas, and perhaps their main achievement is to reinforce the need for a standardized international definition. Despite these discrepancies, there is no doubt that MetS constitutes a public health problem. Strategies to prevent and manage this emerging global epidemic are urgently needed, and special consideration should be given to behaviour and lifestyle, mainly diet and exercise. However, there is still controversy about the most effective combination of exercise type and diet for achieving health improvements. Objectives. To study the metabolic risk scores used in the literature and the diet and exercise therapies for the treatment of MetS factors in overweight adults. Research design. The data used in the analysis were collected first in a pilot study and later as part of the “Programas de Nutrición y Actividad Física para el tratamiento de la obesidad” (PRONAF) study. PRONAF is a clinical research project on nutrition and physical activity programmes for overweight and obesity carried out in Spain (2008-2011). It was designed, in part, to match the volume and intensity of endurance, strength and combined training protocols in order to evaluate their impact on risk factors and MetS prevalence in overweight and obese people. The design and protocol comprised three exercise modes (endurance, strength and combined training) plus diet restriction, in a randomized controlled trial concerning diverse health status variables. The main variables under investigation were habitual physical activity, markers of body fat, fasting serum levels of insulin, glucose, triglycerides, total, LDL and HDL cholesterol, blood pressure, and diet and exercise parameters. Main outcomes. A) The metabolic risk scores studied presented contradictory results regarding the metabolic risk of an individual, depending on the mathematical method used and the variables included, both in healthy women and in overweight adults. B) The combination of strength and endurance training with a balanced diet proposed in this study was the optimal strategy for improving MetS risk in overweight adults. C) The supervised endurance, strength and combined training protocols with diet restriction did not achieve further improvements in lipid profile, beyond the changes obtained with a habitual clinical practice protocol of dietary advice and standard physical activity recommendations, in overweight adults.
Abstract:
As the number of data sources publishing their data on the Web of Data is growing, we are experiencing an immense growth of the Linked Open Data cloud. The lack of control on the published sources, which could be untrustworthy or unreliable, along with their dynamic nature that often invalidates links and causes conflicts or other discrepancies, could lead to poor quality data. In order to judge data quality, a number of quality indicators have been proposed, coupled with quality metrics that quantify the “quality level” of a dataset. In addition to the above, some approaches address how to improve the quality of the datasets through a repair process that focuses on how to correct invalidities caused by constraint violations by either removing or adding triples. In this paper we argue that provenance is a critical factor that should be taken into account during repairs to ensure that the most reliable data is kept. Based on this idea, we propose quality metrics that take into account provenance and evaluate their applicability as repair guidelines in a particular data fusion setting.
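A minimal sketch of the core idea, under invented trust scores and triples: when a repair must remove one of two conflicting triples, keep the one whose provenance is more trusted.

```python
# Sketch of a provenance-aware repair choice. The trust scores, source
# URIs and triples are invented for illustration.
from typing import NamedTuple

class Triple(NamedTuple):
    s: str
    p: str
    o: str
    source: str

TRUST = {"http://example.org/srcA": 0.9,   # assumed provenance scores
         "http://example.org/srcB": 0.4}

def repair_conflict(t1: Triple, t2: Triple) -> Triple:
    """Keep the triple coming from the more trusted source."""
    return t1 if TRUST.get(t1.source, 0.0) >= TRUST.get(t2.source, 0.0) else t2

a = Triple("ex:Madrid", "ex:population", "3.2M", "http://example.org/srcA")
b = Triple("ex:Madrid", "ex:population", "6.6M", "http://example.org/srcB")
print(repair_conflict(a, b))   # the srcA triple survives the repair
```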
Abstract:
This thesis has two main objectives: first, to analyse the use of endosperm proteins and SSRs to rationalize wheat collections, and second, to study the influence of endosperm proteins, year of cultivation and nitrogen fertilization on quality in a group of Spanish landraces. For the first objective, we studied the genetic diversity of the collection of Triticum monococcum L. (cultivated einkorn) and of a sample of the collection of Triticum turgidum L. (durum wheat) maintained at the CRF-INIA, using 2 and 6 gliadin loci, and 6 and 24 SSRs, for einkorn and durum wheat, respectively. Both collections showed high genetic diversity, with large differentiation between varieties and little within them. The gliadin loci were highly variable, the Gli-2 loci being the most useful for distinguishing varieties. In einkorn, gliadins showed higher discrimination power than SSRs, although in durum wheat SSRs identified more genotypes. A large number of alleles was found: 24 and 38 in gliadins, and 29 and 203 in SSRs, for einkorn and durum wheat, respectively. In durum wheat, 17 new gliadin alleles were identified, which shows that Spanish durum wheat germplasm is rather unique. In both species, associations were detected between prolamin allelic variation and the geographical and phylogenetic origin of the varieties. The value of endosperm proteins (6 gliadin loci, 2 glutenin loci and total protein) and SSRs (24 loci) for verifying duplicates and monitoring intra-accession variability was studied in 23 potential duplicates of durum wheat. The results indicated that biotypes and duplicated accessions showed the same gliadin genotype, few or no differences in HMW glutenin subunits and total protein, and differences at fewer than three SSR loci. The same result was obtained for the biotypes of the T. monococcum collection. However, the discrepancies observed in some cases between proteins and SSRs demonstrated the value of using both marker systems together. Both proteins and SSRs showed good concordance with agro-morphological traits, especially when differences between genotypes were large; however, agro-morphological traits were less discriminating than molecular markers. For the second objective, we analysed the allelic variation at seven prolamin loci involved in durum wheat quality: Glu-A1 and Glu-B1 of HMW glutenins, Glu-A3, Glu-B3 and Glu-B2 of B-LMW glutenins, and Gli-A1 and Gli-B1 of gliadins. The subsample analysed included landraces from all the Spanish provinces where durum wheat was traditionally cultivated. All loci except Glu-B2 showed high genetic variability, the Glu-3 loci being the most polymorphic. In total, 65 alleles were identified, 29 of them new, which represent an important source of genetic variability for quality improvement. Differences in prolamin composition were detected between the convar. turgidum and the northern zone, and the convar. durum and the southern zone; the genotype Glu-B3new-1 - Gli-B1new-1 was very common in the convar. turgidum, while Glu-B3a - Gli-B1c, associated with better quality, was more frequent in the convar. durum. Higher variability was observed in the convar. turgidum than in the convar. durum, mainly at the Glu-B1 and Glu-B3 loci, indicating that this convariety could be a valuable source of new glutenin alleles. The subsample was evaluated for quality (protein content, P, and sedimentation test, SDSS) with two doses of nitrogen fertiliser (N) in two different years. No Variety x Year or Variety x N interactions on quality were detected. For P, environmental effects (year and N) were larger than the variety effect, P being generally higher with the high N dose. Variety had a strong influence on the SDSS test, which was not affected by year or N. Increases in protein content did not significantly influence gluten strength as estimated by the SDSS. Regarding the influence of prolamins on gluten strength, the superiority of Glu-B3a was confirmed, although a large positive effect of the new alleles Glu-A3new-1, Glu-B3new-6 and Glu-B3new-9 was also detected. The lack of correlation between yield (evaluated in previous research) and P in the landraces adapted to low N allowed the selection of four landraces with high yield and good gluten strength for low-N production.
Abstract:
Many cities in Europe have difficulties meeting the air quality standards set by European legislation, most particularly the annual mean limit value for NO2. Road transport is often the main source of air pollution in urban areas, and therefore there is an increasing need to estimate current and future traffic emissions as accurately as possible. As a consequence, a number of specific emission models and emission factor databases have been developed recently. They present important methodological differences, may produce largely diverging emission figures and thus may lead to alternative policy recommendations. This study compares two approaches to estimating road traffic emissions in Madrid (Spain): the COmputer Programme to calculate Emissions from Road Transport (COPERT4 v.8.1) and the Handbook Emission Factors for Road Transport (HBEFA v.3.1), representative of the ‘average-speed’ and ‘traffic situation’ model types, respectively. The input information (e.g. fleet composition, vehicle kilometres travelled, traffic intensity, road type, etc.) was provided by the traffic model developed by the Madrid City Council, along with observations from field campaigns. Hourly emissions were computed for nearly 15 000 road segments distributed in 9 management areas covering the city of Madrid and its surroundings. Total annual NOX emissions predicted by HBEFA were 21% higher than those of COPERT. The discrepancies for NO2 were lower (13%) since the resulting average NO2/NOX ratios are lower for HBEFA. The largest differences are related to diesel vehicle emissions under “stop & go” traffic conditions, very common on distributor/secondary roads of the Madrid metropolitan area. In order to understand the representativeness of these results, the resulting emissions were integrated in an urban-scale inventory used to drive mesoscale air quality simulations with the Community Multiscale Air Quality (CMAQ) modelling system (1 km2 resolution). Modelled NO2 concentrations were compared with observations through a series of statistics. Although there are no remarkable differences between the two model runs, the results suggest that HBEFA may overestimate traffic emissions. However, the results are strongly influenced by methodological issues and limitations of the traffic model. This study was useful to provide a first alternative estimate to the official emission inventory in Madrid and to identify the main features of the traffic model that should be improved to support the application of an emission system based on “real world” emission factors.
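To illustrate the ‘average-speed’ model type, segment emissions can be obtained as traffic volume x segment length x a speed-dependent emission factor. The factor curve below is a toy function, not an official COPERT4 or HBEFA curve, and all traffic numbers are invented.

```python
# Hedged sketch of an 'average-speed' emission calculation in the
# spirit of COPERT. The emission factor curve is illustrative only.
def nox_emission_factor(v_kmh: float) -> float:
    """Illustrative NOx emission factor [g/km] vs average speed."""
    return 1.8 - 0.02 * v_kmh + 0.00015 * v_kmh ** 2   # toy U-shaped curve

segments = [
    # (vehicles/hour, segment length [km], average speed [km/h])
    (1200, 0.8, 25.0),   # congested distributor road
    (3000, 2.5, 85.0),   # free-flowing ring road
]

total_g_per_h = sum(q * L * nox_emission_factor(v) for q, L, v in segments)
print(f"NOx: {total_g_per_h / 1000.0:.1f} kg/h")
```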
Abstract:
The SMS (Simultaneous Multiple Surfaces) design method was born for Nonimaging Optics applications and is now being applied to Imaging Optics as well. In this paper the wave aberration function of a selected SMS design is studied. It has been found that the SMS aberrations can be analyzed with a small set of parameters, sometimes only two. The connection of this model with the conventional aberration expansion is also presented. To verify this mathematical model, two SMS designs were ray-traced and the data were analyzed with classical statistical methods: the plot of discrepancies and the quadratic average (RMS) error. Both tests show very good agreement with the model for our systems.
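The verification procedure can be sketched as follows: fit a low-order aberration model to ray-trace data, then examine the residuals (the plot of discrepancies) and their quadratic average. The two-parameter model and the synthetic "ray-trace" samples below are illustrative assumptions, not the paper's actual designs.

```python
# Sketch: fit W(r) = a2 r^2 + a4 r^4 to ray-trace data by least squares,
# then compute the discrepancies and their RMS. All data are synthetic.
import numpy as np

r = np.linspace(0.0, 1.0, 50)                   # normalized pupil radius
w_raytrace = 0.30 * r**2 + 0.08 * r**4 + 0.002 * np.random.randn(r.size)

A = np.column_stack([r**2, r**4])               # two-parameter model
coef, *_ = np.linalg.lstsq(A, w_raytrace, rcond=None)

residuals = w_raytrace - A @ coef               # the 'plot of discrepancies'
rms = np.sqrt(np.mean(residuals**2))            # quadratic average error
print("fitted (a2, a4):", coef, " RMS discrepancy:", rms)
```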
Abstract:
The use of computational fluid dynamics (CFD) methods to predict the power production of entire wind farms in flat and complex terrain is presented in this paper. Two full 3D Navier–Stokes solvers for incompressible flow are employed, incorporating the k–ε and k–ω turbulence models respectively. The wind turbines (W/Ts) are modelled as momentum absorbers by means of their thrust coefficient using the actuator disk approach. The W/T thrust is estimated using the wind speed one diameter upstream of the rotor at hub height. An alternative method that employs an induction-factor-based concept is also tested. This method has the advantage of not using the wind speed at a specific distance from the rotor disk, which is a doubtful approximation when a W/T is located in the wake of another and/or the terrain is complex. To account for the underestimation of the near-wake deficit, a correction is introduced to the turbulence model: the turbulence time scale is bounded using the general “realizability” constraint for the turbulent velocities. The method is applied to two wind farms, a five-machine one located in flat terrain and a 43-machine one located in complex terrain. In the flat terrain case, the combination of the induction factor method with the turbulence correction provides satisfactory results. In the complex terrain case, there are some significant discrepancies with the measurements, which are discussed; here the induction factor method does not provide satisfactory results.
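The induction-factor idea can be sketched with 1D momentum theory: instead of sampling the wind one diameter upstream, recover the free-stream speed from the velocity at the actuator disk itself via U_disk = U_inf(1 - a), with Ct = 4a(1 - a). The numbers, the Ct value and the helper below are assumptions for illustration, not the solvers' implementation.

```python
# Sketch of an induction-factor based thrust estimate (1D momentum
# theory). Rotor size, density and Ct are illustrative values.
import math

def thrust_from_disk_velocity(u_disk, ct, rho=1.225, diameter=80.0):
    """Return (U_inf, thrust [N]) from the disk-averaged velocity."""
    a = 0.5 * (1.0 - math.sqrt(1.0 - ct))   # axial induction, Ct = 4a(1-a)
    u_inf = u_disk / (1.0 - a)              # undisturbed reference speed
    area = math.pi * (diameter / 2.0) ** 2
    return u_inf, 0.5 * rho * area * ct * u_inf ** 2

u_inf, thrust = thrust_from_disk_velocity(u_disk=7.0, ct=0.75)
print(f"U_inf = {u_inf:.2f} m/s, thrust = {thrust / 1000.0:.1f} kN")
```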
Abstract:
Minor actinides (MAs) transmutation is a main design objective of advanced nuclear systems such as Generation IV Sodium Fast Reactors (SFRs). In advanced fuel cycles, MA contents in final high-level waste packages are main contributors to short-term heat production as well as to long-term radiotoxicity. Therefore, MA transmutation would have an impact on repository designs and would reduce the environmental burden of nuclear energy. In order to predict such consequences, Monte Carlo (MC) transport codes are used in reactor design tasks, and they are important complements and references for routinely used deterministic computational tools. In this paper two promising Monte Carlo transport-coupled depletion codes, EVOLCODE and SERPENT, are used to examine the impact of MA burning strategies in a 3600 MWth SFR core. The core concept proposal for MA loading in two configurations is the result of an optimization effort upon a preliminary reference design to reduce the reactivity insertion caused by sodium voiding, one of the main concerns of this technology. The objective of this paper is twofold. Firstly, the efficiencies of the two core configurations for MA transmutation are addressed and evaluated in terms of actinide mass changes and reactivity coefficients; results are compared with those without MA loading. Secondly, a comparison of the two codes is provided, and the discrepancies in the results are quantified and discussed.
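One simple way such a code-to-code comparison can be quantified is the relative discrepancy of end-of-cycle actinide masses. The masses below are placeholders, not EVOLCODE or SERPENT output.

```python
# Minimal sketch: relative discrepancy of actinide masses between two
# depletion codes. All mass values are invented placeholders.
masses_code_a = {"Am241": 102.3, "Cm244": 41.7, "Np237": 88.0}   # kg, assumed
masses_code_b = {"Am241": 100.9, "Cm244": 43.2, "Np237": 87.1}   # kg, assumed

for nuclide in masses_code_a:
    a, b = masses_code_a[nuclide], masses_code_b[nuclide]
    print(f"{nuclide}: {200.0 * (a - b) / (a + b):+.2f} % discrepancy")
```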
Abstract:
The Mohorovičić discontinuity, better known simply as the “Moho”, is the boundary between the less dense rocky materials of the crust and the denser rocky materials of the mantle, these layers being assumed to have constant densities of about 2.67 and 3.27 g/cm3, respectively. It is a basic contour for any geophysical study of the Earth's crust. The seismic and gravimetric studies carried out show that the Moho depth is of the order of 30-40 km beneath the Iberian Peninsula and 5-15 km under the marine areas, and the different existing techniques show good correlation in their results. Assuming that the gravity field of the Iberian Peninsula (as happens for 90% of the Earth) is isostatically compensated by the variable Moho depth, supposing a constant density contrast between crust and mantle, and following the isostatic model of Vening Meinesz (1931), the inverse isostatic problem is formulated to obtain that depth from the Bouguer gravity anomaly computed from the gravity observed at the Earth's surface. The distinctive feature of this model is the regional isostatic compensation on which the theory is based, which matches reality more closely than other existing models, such as Airy-Heiskanen (pure local compensation), historically the most used in similar works. Moreover, its solution is related to the global gravity field of the whole Earth, so current gravitational models, mostly derived from satellite observations, should be important sources of information for our solution. The aim of this thesis is to study in detail this method, developed by Helmut Moritz in 1990, which has since seen little development and few followers, and has never been put into practice in the Iberian Peninsula. After treating its theory, development and computational aspects, we are in a position to obtain a digital Moho model for this zone, to be used for studying the distribution of masses beneath the Earth's surface. A comparison is then made against Moho depths obtained by alternative methods. The accuracy of none of these methods is extremely high (about ±5 km). Nevertheless, zones with a significant discrepancy between data sets would indicate uncompensated areas, with possible tectonic movements or a high degree of seismic risk, which gives this study an added value that could also be used in initially unrelated fields, such as density discrepancies or natural-disaster contingency plans.
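A first-order sketch of the inverse isostatic idea (a local Bouguer-plate approximation, not Moritz's full Vening Meinesz regional solution): a Bouguer anomaly maps to an undulation of the crust-mantle interface via dg = 2*pi*G*drho*dt. The reference depth and the anomaly value below are example numbers.

```python
# First-order inverse isostatic estimate of Moho depth from a Bouguer
# anomaly (Bouguer-plate model). T0 and the input anomaly are assumed.
import math

G = 6.674e-11                      # gravitational constant [m^3/(kg s^2)]
DRHO = (3.27 - 2.67) * 1000.0      # crust-mantle density contrast [kg/m^3]
T0 = 32_000.0                      # reference Moho depth [m], assumed

def moho_depth(bouguer_mgal: float) -> float:
    """Moho depth [km] from a Bouguer anomaly [mGal]."""
    dg = bouguer_mgal * 1e-5                   # mGal -> m/s^2
    dt = dg / (2.0 * math.pi * G * DRHO)       # interface undulation [m]
    return (T0 - dt) / 1000.0                  # negative anomaly -> deeper Moho

print(moho_depth(-120.0), "km")    # strongly negative anomaly: thick crust
```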
Abstract:
Today, designing informatics curricula is a major problem. As a technology, informatics is experiencing a dramatic evolution, with rapidly expanding areas of application and an ever-increasing impact on society. At first glance, it seems that we might need several different curricula to face very different practical educational situations, not just one or two. The current deep discrepancies among some of our most prestigious computer scientists regarding the focus of informatics education are in fact a form of recognition of this necessity and, at the same time, proof that we are at a turning point.