853 results for Compactness Compensated


Relevance: 10.00%

Abstract:

The present study investigated the combined effects of ocean acidification, temperature, and salinity on growth and test degradation of Ammonia aomoriensis. This species is one of the dominant benthic foraminifera in near-coastal habitats of the southwestern Baltic Sea that can be particularly sensitive to changes in seawater carbonate chemistry. To assess potential responses to ocean acidification and climate change, we performed a fully crossed experiment involving three temperatures (8, 13, and 18°C), three salinities (15, 20, and 25) and four pCO2 levels (566, 1195, 2108, and 3843 µatm) for six weeks. Our results highlight a sensitive response of A. aomoriensis to seawater undersaturated with respect to calcite. The specimens continued to grow and increase their test diameter in treatments with pCO2 < 1200 µatm, when Ω_calc > 1. Growth rates declined when pCO2 exceeded 1200 µatm (Ω_calc < 1). A significant reduction in test diameter and number of tests due to dissolution was observed below a critical Ω_calc of 0.5. Elevated temperature (18°C) led to increased Ω_calc, larger test diameter, and lower test degradation. Maximal growth was observed at 18°C. No significant relationship was observed between salinity and test growth. Lowered and undersaturated Ω_calc, which results from increasing pCO2 in bottom waters, may cause a significant future decline of the population density of A. aomoriensis in its natural environment. At the same time, this effect might be partially compensated by temperature rise due to global warming.
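
For reference, Ω_calc is the calcite saturation state of seawater; the abstract uses it without spelling out the standard definition:

```latex
\Omega_{\mathrm{calc}} \;=\; \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K^{*}_{\mathrm{sp}}}
```

where K*_sp is the stoichiometric solubility product of calcite. Ω_calc > 1 means calcite precipitation is thermodynamically favoured, while Ω_calc < 1 favours dissolution, which is why growth stalls near Ω_calc = 1 and tests dissolve well below it.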

Relevance: 10.00%

Abstract:

Ocean acidification is altering the oceanic carbonate saturation state and threatening the survival of marine calcifying organisms. Production of their calcium carbonate exoskeletons depends not only on the environmental seawater carbonate chemistry but also on the ability to produce biominerals through proteins. We present shell growth and structural responses of the economically important marine calcifier Mytilus edulis to ocean acidification scenarios (380, 550, 750, 1000 µatm pCO2). After six months of incubation at 750 µatm pCO2, reduced carbonic anhydrase protein activity and shell growth occur in M. edulis. Beyond that, at 1000 µatm pCO2, biomineralisation continued but with compensated protein metabolism and increased calcite growth. Mussel growth occurs at a cost to the structural integrity of the shell, owing to structural disorientation of the calcite crystals. This loss of structural integrity could impact mussel shell strength and reduce protection from predators and changing environments.

Relevance: 10.00%

Abstract:

Snow height was measured by the Snow Depth Buoy 2014S17, an autonomous platform drifting on Antarctic sea ice, deployed during POLARSTERN cruise ANT-XXX/2 (PS89). The resulting time series describes the evolution of snow depth as a function of place and time between 2014-12-20 and 2015-02-01 at a sample interval of 1 hour. The Snow Depth Buoy carries four independent sonar sensors representing the area (approx. 10 m²) around the buoy. The buoy was installed on first-year ice. In addition to snow depth, geographic position (GPS), barometric pressure, air temperature, and ice surface temperature were measured. Negative values of snow depth occur if surface ablation continues into the sea ice; these measurements therefore describe the position of the sea ice surface relative to the original snow-ice interface. Differences between single sensors indicate small-scale variability of the snow pack around the buoy. The data set has been processed, including the removal of obvious inconsistencies (missing values). Diurnal variations occur in the data, although the sonic readings were compensated for temperature changes. Records without any snow depth may still be used for sea ice drift analyses.
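
A minimal sketch of how such a record might be summarised, assuming hypothetical column names ("time", "snow_1" … "snow_4") rather than the actual file layout of the published data set:

```python
# Sketch only: average the four sonar readings of a Snow Depth Buoy record and
# flag surface ablation into the sea ice. Column names are hypothetical placeholders.
import pandas as pd

def summarise_buoy(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path, parse_dates=["time"])
    sonar_cols = ["snow_1", "snow_2", "snow_3", "snow_4"]   # four independent sonars (~10 m^2 footprint)
    df["snow_mean"] = df[sonar_cols].mean(axis=1)           # mean snow depth around the buoy
    df["snow_spread"] = df[sonar_cols].std(axis=1)          # small-scale variability of the snow pack
    # Negative depths mean the surface has ablated below the original snow-ice interface.
    df["ablation_into_ice"] = df["snow_mean"] < 0
    return df
```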

Relevance: 10.00%

Abstract:

To tackle global climate change, it is desirable to reduce CO2 emissions associated with household consumption, in particular in developed countries, which tend to have much higher per capita household carbon footprints than less developed countries. Our results show that the carbon intensity of different consumption categories in the U.S. varies significantly. The carbon footprint tends to increase with increasing income, but at a decreasing rate, because additional income is spent on less carbon-intensive consumption items. This general tendency is frequently compensated by a higher frequency of international trips and higher housing-related carbon emissions (larger houses and more space for consumption items). Our results also show that more than 30% of CO2 emissions associated with household consumption in the U.S. occur outside of the U.S. Given these facts, the design of carbon mitigation policies should take changing household consumption patterns and international trade into account.
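
The underlying arithmetic is a consumption-based footprint: total emissions are the sum over consumption categories of spending times that category's carbon intensity. A toy sketch with entirely hypothetical intensities and budgets (not figures from the study) illustrates why the spending mix matters as much as the spending level:

```python
# Hypothetical category intensities in kg CO2 per USD and annual budgets in USD.
intensity = {"air_travel": 1.1, "housing_energy": 0.9, "food": 0.5, "services": 0.2}

def footprint(spending: dict) -> float:
    """Consumption-based footprint: sum of spending x carbon intensity per category."""
    return sum(spending[c] * intensity[c] for c in spending)

low_income  = {"air_travel": 200,  "housing_energy": 1500, "food": 3000, "services": 2000}
high_income = {"air_travel": 4000, "housing_energy": 4000, "food": 6000, "services": 20000}
# The footprint grows with income, but less than proportionally when the extra income
# goes mostly to low-intensity services rather than to flights or housing energy.
print(footprint(low_income), footprint(high_income))
```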

Relevance: 10.00%

Abstract:

The present work summarizes research related to the definition of nutrient recommendations for feeds used in the intensive production of rabbit meat. Fibre is the main chemical constituent of rabbit diets, which typically contain 320 to 360 and 50 to 90 g/kg of insoluble and soluble fibre, respectively. In contrast, the dietary contents of cereal grains (∼120 to 160 g/kg), fat (15 to 25 g/kg) and protein concentrates (150 to 180 g/kg) are low compared with other intensively reared monogastric animals. Cell wall constituents are not well digested by rabbits, but this effect is compensated by their stimulation of gut motility, which increases the rate of passage of digesta and allows an elevated dry matter intake to be achieved. A high feed consumption and an adequate balance of essential nutrients are required to sustain the elevated needs of highly productive rabbits, measured either as reproductive yield, milk production or growth rate in the fattening period. Around weaning, pathologies occur in a context of incomplete development of the digestive physiology of young rabbits. The supply of balanced diets has also been related to the prevention of disorders by means of three mechanisms: (i) promoting a shorter retention time of the digesta in the digestive tract through feeding fibre sources with optimal chemical and physical characteristics, (ii) restricting feed intake after weaning, or (iii) causing a lower flow of easily available substrates into the fermentative area by modifying feed composition (e.g. by lowering protein and starch contents, increasing their digestibility or partially substituting insoluble with soluble fibre), or by delaying the age at weaning. The alteration of gut microbiota composition has been postulated as the possible primary cause of these pathologies.

Relevance: 10.00%

Abstract:

The dissolution and gettering of iron is studied during the final fabrication step of multicrystalline silicon solar cells, the co-firing step, through simulations and experiments. The post-processed interstitial iron concentration is simulated according to the as-grown concentration and distribution of iron within a silicon wafer, both in the presence and absence of the phosphorus emitter, and applying different time-temperature profiles for the firing step. The competing effects of dissolution and gettering during this short annealing process are found to be strongly dependent on the as-grown material quality. Furthermore, increasing the temperature of the firing process leads to a higher dissolution of iron, which is hardly compensated by the higher diffusivity of the impurities. A new defect engineering tool is introduced, the extended co-firing, which could allow an enhanced gettering effect within a small additional process time.
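
A back-of-the-envelope sketch (not from this paper) of why a short firing spike limits gettering even at higher temperature: the iron diffusion length L = sqrt(D·t) stays small during a few-second dwell. The Arrhenius parameters below are commonly cited literature values for interstitial Fe in Si and are an assumption here, not data from the study:

```python
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
D0, EA = 1.3e-3, 0.68   # cm^2/s and eV, assumed literature values for Fe_i in Si

def fe_diffusion_length_um(temp_c: float, dwell_s: float) -> float:
    """Fe diffusion length during a firing dwell, in micrometres."""
    d = D0 * math.exp(-EA / (K_B * (temp_c + 273.15)))  # diffusivity at the firing temperature
    return math.sqrt(d * dwell_s) * 1e4                 # cm -> micrometres

for temp in (700, 800, 900):                             # typical firing peak temperatures, degC
    print(temp, round(fe_diffusion_length_um(temp, dwell_s=3.0), 1))
```

Over a ~3 s spike the diffusion length only grows from roughly 10 µm to about 20 µm between 700 °C and 900 °C, while the equilibrium solubility (and hence dissolution of precipitated iron) rises much more steeply with temperature, which is consistent with the trend the abstract describes.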

Relevance: 10.00%

Abstract:

The Fractal Image Informatics toolbox (Oleschko et al., 2008a; Torres-Argüelles et al., 2010) was applied to extract, classify and model the topological structure and dynamics of surface roughness in two highly eroded catchments of Mexico. Both areas are affected by gully erosion (Sidorchuk, 2005) and characterized by avalanche-like matter transport. Five contrasting morphological patterns were distinguished across the slope of the bare eroded surface of a Phaeozem (Queretaro State), while only one roughness pattern (apparently independent of the slope) was documented for an Andosol (Michoacan State). We called these patterns "roughness clusters" and compared them in terms of metrizability, continuity, compactness, topological connectedness (global and local) and invariance, separability, and degree of ramification (Weyl, 1937). All of the mentioned topological measurands were correlated with the variance, skewness and kurtosis of the gray-level distribution of digital images. The morphology and spatial dynamics of the roughness clusters were measured and mapped with high precision in terms of fractal descriptors. The Hurst exponent was especially suitable for distinguishing between the structure of the "turtle shell" and "ramification" patterns (sediment-producing zone A of the slope), as well as the "honeycomb" (sediment transport zone B) and the "dinosaur steps" and "corals" (sediment deposition zone C) roughness clusters. Some other structural attributes of the studied patterns were also statistically different and correlated with the variance, skewness and kurtosis of the gray-level distribution of multiscale digital images. The scale invariance of the classified roughness patterns was documented within a range of five image resolutions. We conjecture that the geometrization of erosion patterns in terms of roughness clustering might benefit most semi-quantitative models developed for erosion and sediment yield assessments (de Vente and Poesen, 2005).
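
As a minimal illustration of the kind of fractal descriptor mentioned above, the Hurst exponent of a gray-level profile can be estimated from the scaling of increment variability with lag (the structure-function/variogram method). This sketch is not the cited toolbox, only the general idea:

```python
import numpy as np

def hurst_exponent(profile: np.ndarray, max_lag: int = 64) -> float:
    """Estimate H from the scaling of increment standard deviations with lag."""
    lags = np.arange(2, max_lag)
    sigma = [np.std(profile[lag:] - profile[:-lag]) for lag in lags]
    # For a self-affine (fBm-like) profile, sigma(lag) ~ lag**H, so H is the log-log slope.
    slope, _ = np.polyfit(np.log(lags), np.log(sigma), 1)
    return slope

# Usage idea: take one row of gray levels from a surface image as the profile, e.g.
#   image = imageio.v3.imread("roughness.png"); print(hurst_exponent(image[100, :].astype(float)))
```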

Relevance: 10.00%

Abstract:

It is easy to get frustrated at spoken conversational agents (SCAs), perhaps because they seem to be callous. By and large, the quality of human-computer interaction is affected by the inability of SCAs to recognise and adapt to the user's emotional state. With the mass appeal of artificially-mediated communication, there is an increasing need for SCAs to be socially and emotionally intelligent, that is, to infer and adapt to their human interlocutors' emotions on the fly, in order to achieve an affective, empathetic and naturalistic interaction. An enhanced quality of interaction would reduce users' frustration and consequently increase their satisfaction. These reasons have motivated the development of SCAs that include socio-emotional elements, turning them into affective and socially-sensitive interfaces. One barrier to the creation of such interfaces has been the lack of methods for modelling emotions in a task-independent environment. Most emotion models for spoken dialog systems are task-dependent and thus cannot be used "as-is" in different applications. This Thesis focuses on improving this situation; it concerns the computational modelling of emotion, personality and their interrelationship for task-independent autonomous SCAs. The generation of emotion is driven by needs, inspired by human motivational systems. The work in this Thesis is organised in three stages, each with its own contribution. The first stage involved defining, integrating and quantifying the psychologically grounded motivational and emotional models on which the work is based. These were then transformed into a computational model by implementing them as software entities. The computational model was incorporated into, and put to the test with, an existing SCA host, a HiFi-control agent. The second stage concerned the automatic prediction of affect, which has been the main challenge towards the greater aim of infusing social intelligence into the HiFi agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, in which emotions are subtler and therefore harder to perceive; this difficulty shows up in tasks such as labelling and machine prediction. In this stage, we addressed part of this challenge by considering user satisfaction ratings and conversational/dialog features as the respective target and predictors in discriminating contentment and frustration, two emotions known to be prevalent within spoken human-computer interaction. The final stage concerned the evaluation of the emotional model through the HiFi agent. A series of user studies with 70 subjects was conducted in a real-time environment, each in a different phase and with its own conditions. All the studies involved comparisons between the baseline, non-modified agent and the modified agent. The findings have gone some way towards enhancing our understanding of the utility of emotion in spoken dialog systems. First, an SCA should not express its emotions blindly, even positive ones; rather, it should adapt its emotions to user states. Second, low performance in an SCA may be compensated by the exploitation of emotion. Third, expressing emotion through prosody could improve users' perceptions of an SCA more than expressing it through lexical content alone.
Taken together, these findings not only support the success of the emotional model, but also provide substantial evidence of the benefits of adding emotion to an SCA, especially in mitigating users' frustration and ultimately improving their satisfaction.
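
The second-stage idea, using dialog features as predictors and satisfaction ratings (binarised into contentment vs. frustration) as the target, can be sketched as a simple supervised classifier. This is an illustration, not the thesis code; the feature set, the classifier choice and the synthetic values are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row is one dialog; hypothetical features: turns, re-prompts/errors,
# mean response delay (s), barge-ins. Values are synthetic.
X = np.array([[4, 0, 0.8, 0], [12, 3, 2.5, 2], [5, 1, 1.0, 0], [15, 4, 3.0, 3],
              [6, 0, 0.9, 1], [11, 2, 2.2, 2], [3, 0, 0.7, 0], [14, 5, 2.8, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = contentment, 0 = frustration, from satisfaction ratings

clf = make_pipeline(StandardScaler(), LogisticRegression())
print(cross_val_score(clf, X, y, cv=4).mean())   # rough estimate of how separable the two states are
```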

Relevance: 10.00%

Abstract:

Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasiperiodic motion pattern that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this movement by combining independent component analysis (ICA) and image registration: First, we use ICA and a time-frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components except the one related to the motion. Therefore, the resulting image series does not exhibit motion, and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of 13 patient data sets (39 distinct slices). We compared linear, non-linear, and combined ICA-based registration approaches and previously published motion compensation schemes. Considering run-time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, achieving registration of the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves the Pearson correlation coefficient between manually and automatically obtained time-intensity curves from 0.84 ± 0.19 before registration to 0.96 ± 0.06 after registration.
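
A conceptual sketch of the ICA step described above, heavily simplified and not the authors' implementation: decompose the perfusion series into independent components, suppress the one identified as breathing motion, and recombine to obtain synthetic motion-free reference frames for registration.

```python
import numpy as np
from sklearn.decomposition import FastICA

def synthetic_references(frames: np.ndarray, motion_component: int, n_components: int = 5) -> np.ndarray:
    """frames: (n_frames, height, width) perfusion series; returns a reference series of the same shape."""
    n_frames, h, w = frames.shape
    X = frames.reshape(n_frames, h * w).astype(float)     # one row per frame
    ica = FastICA(n_components=n_components, random_state=0)
    weights = ica.fit_transform(X)                        # per-frame weight of each component
    weights[:, motion_component] = 0.0                    # suppress the quasiperiodic motion component
    X_ref = ica.inverse_transform(weights)                # recombine the remaining components
    return X_ref.reshape(n_frames, h, w)

# In the paper the motion-related component is identified by a time-frequency analysis
# (breathing is quasiperiodic); here its index is simply passed in by hand.
```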

Relevance: 10.00%

Abstract:

A review of existing studies on the LCA of PV systems has been carried out. The data from this review have been completed with our own figures in order to calculate the Energy Payback Time (EPBT) of double-axis tracking, horizontal-axis tracking and fixed systems. The results of this metric span from 2 to 5 years for the latitude and global irradiation ranges of the geographical area comprised between −10° and 10° of longitude and 30° and 45° of latitude. With the caution due to the uncertainty of the sources of information, these results mean that a GCPVS is able to produce back the energy required for its existence from 6 to 15 times during a life cycle of 30 years. When comparing tracking and fixed systems, the great importance of the PV generator makes it advisable to dedicate more energy to some components of the system in order to increase productivity and to obtain a higher performance from the component with the highest energy requirement. Both double-axis and horizontal-axis trackers follow this approach, requiring more energy for the metallic structure, foundations and wiring, but this higher contribution is widely compensated by the improved productivity of the system.
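
The 6-to-15 figure follows directly from the reported EPBT range and the assumed 30-year life cycle:

```latex
\mathrm{EPBT}=\frac{E_{\mathrm{embodied}}}{E_{\mathrm{produced\ per\ year}}},\qquad
N_{\mathrm{paybacks}}=\frac{T_{\mathrm{life}}}{\mathrm{EPBT}}
\;\Rightarrow\; \frac{30\ \mathrm{yr}}{5\ \mathrm{yr}}=6
\quad\text{and}\quad \frac{30\ \mathrm{yr}}{2\ \mathrm{yr}}=15.
```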

Relevance: 10.00%

Abstract:

Adaptive hardware requires some reconfiguration capabilities. FPGAs with native dynamic partial reconfiguration (DPR) support pose a dilemma for system designers: whether to use native DPR or to build a virtual reconfigurable circuit (VRC) on top of the FPGA, which allows alternative functions to be selected by a multiplexing scheme. The latter solution allows much faster reconfiguration, but with a higher resource overhead. This paper discusses the advantages of both implementations for a 2D image processing matrix. Results show that a higher operating frequency is obtained for the matrix using DPR. However, this is compensated in the VRC during evolution by the comparatively negligible reconfiguration time. Regarding area, the DPR implementation consumes slightly more resources due to the reconfiguration engine, but adds further capabilities to the system.
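
The trade-off during evolution reduces to simple arithmetic: the time per evaluated candidate is the reconfiguration time plus the evaluation cycles divided by the clock frequency. The numbers below are purely hypothetical, chosen only to show how a near-instant VRC reconfiguration can outweigh a lower clock:

```python
def time_per_candidate(reconfig_s: float, cycles: float, freq_hz: float) -> float:
    """Evolution cost model: reconfigure the candidate, then evaluate it cycle by cycle."""
    return reconfig_s + cycles / freq_hz

cycles = 16_384   # e.g. streaming a small test image through the processing matrix
dpr = time_per_candidate(reconfig_s=1e-3, cycles=cycles, freq_hz=200e6)  # native DPR: faster clock, ms-scale reconfig
vrc = time_per_candidate(reconfig_s=1e-6, cycles=cycles, freq_hz=100e6)  # VRC: slower clock, ~instant mux switch
print(f"DPR: {dpr*1e3:.2f} ms  VRC: {vrc*1e3:.2f} ms per evaluated candidate")
```

With these (assumed) figures the VRC evaluates candidates several times faster despite running at half the frequency, because the DPR cost is dominated by reconfiguration rather than evaluation.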

Relevance: 10.00%

Abstract:

It is impossible to dissociate the evolution of the architecture of Enric Miralles from the development of his own system of representation. Starting from a position inherited from his training at the Barcelona School of Architecture and his practice at the office of Viaplana-Piñón, where he acquired a liking for precision in drafting and a graphic style based exclusively on lines of the same thickness, Miralles soon moved towards a method defined by a customized use of the dihedral system, connected to a fragmented conception of the floorplan and of space itself. Breaking up the floorplan into multiple fragments, Miralles designed an architecture in which each fragment has a unique shape and geometry, developing its section and spatial qualities with a certain degree of autonomy within the whole, through separate plans and models. Many of the projects he designed with Carme Pinós, individually or with Benedetta Tagliabue, consist of collections of heterogeneous pieces, heirs of the original floorplan fragments, which do not fit together according to classical principles of subordinate or hierarchical integration, but through relative positions of juxtaposition or superposition that lead to a lack of compactness in the overall scheme. This system of representation is thus based on the use of geometry as a way of differentiating architectural pieces, on the fragmentation of the dihedral system derived from the fragmentation of the floorplan, and on a lack of compactness as a device of separative thinking. The system is defined as the "Miralles plan", a term that includes all the techniques of representation used by the architect, from plans to models, and that emphasizes the particular importance of the floorplan as the guiding force of the design process. The first three chapters of the thesis are structured as a corollary of these categories, explaining, in chronological order through Miralles' projects, the evolution of the geometry, the customization of the dihedral system, and the impact of the lack of compactness on the built work. While these three chapters are global, in that they refer to the overall evolution of this system, the fourth and last one is a case study of its application to a particular project, the Utrecht Town Hall, through Miralles' original drawings. Both in the global explanation and in the detailed study of this system of representation, the thesis highlights its instrumentality in the process of thinking this architecture, arguing that the architecture could not have been designed without the system's parallel development. The relationship between thinking and representation is therefore a key issue in explaining this architecture. However, to date, references to it in the available literature have not evolved beyond a collection of scattered opinions, unable to build by themselves a structured and coherent body of knowledge. Great emphasis has been placed on the critical contextualization of this architecture through the analysis of the projects themselves, but little on the study of the design technique used to think them and carry them out. Results have been prioritized over creative processes, leaving an inexplicable theoretical void on an issue of great importance. This void is the framework in which the need for this doctoral thesis is inserted. This research explains the origin and evolution of Enric Miralles' system of representation, from his time as a student at the Barcelona School of Architecture to the last projects he designed with Benedetta Tagliabue, together with a study of its impact on the built work. It concludes that the development of this system ran parallel to that of the architecture it served, making explicit their indissolubility and mutual interdependence.

Relevance: 10.00%

Abstract:

The Mohorovičić discontinuity, better known simply as the "Moho", is the surface separating the less dense rocky materials of the crust from the denser rocky materials of the mantle, these layers being assumed to have constant densities of about 2.67 and 3.27 g/cm³ respectively, and it is a basic contour for any geophysical study of the Earth's crust. The seismic and previous gravimetric observations carried out in the study area show that the Moho depth is of the order of 30-40 km beneath the Iberian Peninsula and 5-15 km under the marine areas. Moreover, the different existing techniques show good correlation in their results. Assuming that the gravity field of the Iberian Peninsula (as happens for 90% of the Earth) is isostatically compensated according to the variable Moho depth, supposing a constant density contrast between crust and mantle, and following the isostatic Vening Meinesz model (1931), the inverse isostatic problem can be formulated to obtain that depth from the Bouguer gravity anomaly computed from the gravity observed at the Earth's surface. The particularity of this model is the regional isostatic compensation on which the theory is based, which is closer to reality than other existing models, such as that of Airy-Heiskanen (purely local compensation, historically the most used in works of this kind). Moreover, its solution is related to the global gravity field of the whole Earth, so current gravitational models, mostly derived from satellite observations, should be important data sources for the solution. The aim of this thesis is the detailed study of this method, developed by Helmut Moritz in 1990, which has since seen little development and few followers, and which has never been put into practice in the Iberian Peninsula. After treating its theory, development and computational aspects, we are in a position to obtain a digital Moho model for this area, to be used for the study of the mass distribution beneath the Earth's surface. A comparison is made with Moho depths obtained by alternative methods. The accuracy of none of these methods is extremely high (approximately ±5 km). Nevertheless, areas showing a significant discrepancy between the data sets would indicate a disturbance of the compensation, i.e. unbalanced areas with possible tectonic movements or a high degree of seismic risk. This gives the study an added value that could also be used in initially unrelated fields, such as density-discrepancy studies or contingency plans for natural disasters.
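
As a rough orientation (a standard first-order linearisation, not the full Vening Meinesz-Moritz solution developed in the thesis, and with sign conventions that vary between authors), the Bouguer anomaly and the deviation of the Moho from a reference depth are related through the Bouguer-plate term:

```latex
\Delta g_{B} \;\approx\; -\,2\pi G\,\Delta\rho\,\delta t
\quad\Longrightarrow\quad
\delta t \;\approx\; -\,\frac{\Delta g_{B}}{2\pi G\,\Delta\rho},
```

where Δρ ≈ 3.27 − 2.67 = 0.60 g/cm³ is the crust-mantle density contrast and δt is counted positive downwards: a deeper Moho (thicker crustal root) produces a mass deficit and hence a more negative Bouguer anomaly. With this contrast, 2πGΔρ amounts to roughly 25 mGal per kilometre of Moho relief. The Vening Meinesz model then replaces this pointwise (Airy-type) relation with a regionally smoothed response.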