972 results for minimal occlusive volume technique


Relevance:

30.00%

Publisher:

Abstract:

RESUMEN La dispersión del amoniaco (NH3) emitido por fuentes agrícolas a distancias cortas y medias, y su posterior deposición en el suelo y la vegetación, pueden llevar a la degradación de ecosistemas vulnerables y a la acidificación de los suelos. La deposición de NH3 suele ser mayor junto a la fuente emisora, por lo que los impactos negativos de dichas emisiones son generalmente mayores en esas zonas. Bajo la legislación comunitaria, varios estados miembros emplean modelos de dispersión atmosférica de corto alcance para estimar los impactos de las emisiones en las proximidades de las zonas naturales de especial conservación. Una revisión reciente de métodos para evaluar impactos de NH3 a distancias medias recomendaba la comparación de diferentes modelos para identificar diferencias importantes entre los métodos empleados por los distintos países de la UE. Con base en esta recomendación, esta tesis doctoral compara y evalúa las predicciones de las concentraciones atmosféricas de NH3 de varios modelos bajo condiciones tanto reales como hipotéticas que plantean un potencial impacto sobre ecosistemas (incluidos aquellos bajo condiciones de clima mediterráneo). Además, se procedió a la comparación y evaluación de varias técnicas de modelización inversa para inferir emisiones de NH3. Finalmente, se ha desarrollado un modelo matemático simple para calcular las concentraciones de NH3 y la tasa de deposición seca de NH3 en ecosistemas vulnerables cercanos a una fuente emisora. La comparativa de modelos supuso la evaluación de cuatro modelos de dispersión (ADMS 4.1; AERMOD v07026; OPS-st v3.0.3 y LADD v2010) en un amplio rango de casos hipotéticos (dispersión de NH3 procedente de distintos tipos de fuentes agrícolas de emisión). La menor diferencia entre las concentraciones medias estimadas por los distintos modelos se obtuvo para los escenarios simples con fuentes superficiales y volumétricas. La convergencia entre las predicciones de los modelos fue mínima para el escenario relativo a la dispersión de NH3 procedente de un establo ventilado mecánicamente. En este caso, el modelo ADMS predijo concentraciones significativamente menores que los otros modelos. La explicación de estas diferencias parece estar en la interacción de las distintas parametrizaciones de sobreelevación del penacho y de capa límite. Los cuatro modelos de dispersión fueron empleados en dos casos reales de dispersión de NH3: una granja de cerdos en Falster (Dinamarca) y otra en Carolina del Norte (EEUU). Las concentraciones medias anuales estimadas por los modelos fueron similares para el caso americano (emisión de naves ventiladas de forma natural y de una balsa de purines). La comparación de las predicciones de los modelos con las concentraciones medias anuales medidas in situ, así como la aplicación de los criterios establecidos para la aceptación estadística de los modelos, permitió concluir que los cuatro modelos se comportaron aceptablemente en este escenario. No ocurrió lo mismo en el caso danés (nave ventilada mecánicamente), en el que el modelo LADD no dio buenos resultados debido a la ausencia de procesos de sobreelevación del penacho (plume rise) en el modelo. Los modelos de dispersión dan a menudo malos resultados en condiciones de baja velocidad de viento, debido a que la teoría de dispersión en la que se basan no es aplicable en estas condiciones.
En situaciones con frecuentes periodos de baja velocidad del viento, la guía actual de modelización recomienda usar un modelo que maneje adecuadamente dichas condiciones, máxime cuando la evaluación tenga fines regulatorios. Esto puede no ser siempre posible debido a la insuficiencia de datos meteorológicos, en cuyo caso la única opción sería utilizar un modelo regulatorio más común, como los modelos gaussianos avanzados ADMS o AERMOD. Con el objetivo de evaluar la idoneidad de estos modelos para condiciones de baja velocidad de viento, ambos fueron aplicados a un caso con condiciones mediterráneas, que incluía numerosos periodos de baja velocidad del viento. El estudio se centró en la dispersión de NH3 procedente de una granja de cerdos en Segovia (España central). Para ello, la concentración media mensual de NH3 fue medida en 21 localizaciones en torno a la granja. Se realizaron también medidas de concentración de alta resolución temporal en una única localización durante una campaña de una semana. Se evaluaron dos estrategias para mejorar la respuesta del modelo ante bajas velocidades del viento: "no zero wind" (NZW), que sustituye los periodos de calma por la velocidad de viento mínima umbral del modelo, y "accumulated calm emissions" (ACE), que obliga al modelo a emitir el total de las emisiones de un periodo de calma durante la primera hora de no calma posterior. Debido a las importantes incertidumbres en los datos de entrada del modelo (tasa de emisión de NH3, velocidad de salida de la fuente, parámetros de la capa límite, etc.), se utilizó el mismo caso para evaluar la incertidumbre en la predicción del modelo y valorar cómo dicha incertidumbre puede ser considerada en las evaluaciones del modelo. Un modelo dinámico de emisión, modificado para el clima mediterráneo, fue empleado para estimar la variabilidad temporal de las emisiones de NH3, y se compararon las simulaciones con emisiones dinámicas y con una tasa de emisión constante. La incertidumbre de la predicción asociada a la incertidumbre de los datos de entrada fue del 67-98% del valor medio para el modelo ADMS y del 53-83% del valor medio para AERMOD. La mayor parte de esta incertidumbre se debió a la incertidumbre de la tasa de emisión en la fuente (~50%), seguida por la de las condiciones meteorológicas (~10-20%) y la asociada a las velocidades de salida (~5-10%). El modelo AERMOD predijo mayores concentraciones que ADMS y un mayor número de simulaciones cumplió los criterios de aceptabilidad al comparar las predicciones con las concentraciones medias anuales medidas. Sin embargo, las predicciones del modelo ADMS se correlacionaron espacialmente mejor con las mediciones. El uso de las emisiones dinámicas estimadas mejoró el comportamiento de ADMS pero empeoró el de AERMOD, y la aplicación de las estrategias destinadas a mejorar el comportamiento a bajas velocidades de viento tuvo efectos igualmente contradictorios. Con el objeto de comparar distintas técnicas de modelización inversa, varios modelos (ADMS, LADD y WindTrax) fueron aplicados a un caso no agrícola: una colonia de pingüinos en la Antártida. Este caso fue elegido porque ofrecía la oportunidad de obtener el primer factor de emisión experimental para una colonia de pingüinos antárticos y porque presentaba concentraciones de fondo (background) prácticamente despreciables.
Tras el trabajo de modelización existió una concordancia suficiente entre las estimaciones obtenidas por los tres modelos, de modo que se pudo definir un factor de emisión para la colonia de 1.23 g NH3 por pareja criadora por día (con un rango de incertidumbre de 0.8-2.54 g NH3 por pareja criadora por día). Posteriores aplicaciones de las técnicas de modelización inversa a casos agrícolas mostraron también una buena concordancia entre las emisiones estimadas por los distintos modelos. Con todo ello, es posible concluir que la modelización inversa es una técnica robusta para estimar tasas de emisión de NH3. Los modelos de cribado (screening) permiten obtener una estimación rápida y aproximada de los impactos medioambientales y son una herramienta útil para las evaluaciones de impacto, ya que permiten descartar los casos con bajo riesgo potencial de daño. De esta forma, los recursos pueden destinarse a los casos en los que la posibilidad de daño es mayor. El modelo de Cálculo Simple de los Límites de Impacto de Amoniaco (SCAIL) se desarrolló para obtener una estimación de la concentración media de NH3 y de la tasa de deposición seca asociadas a una fuente agrícola. Esta técnica de cribado, basada en el modelo LADD, fue evaluada y calibrada con diferentes bases de datos y, finalmente, validada utilizando medidas independientes de concentración realizadas cerca de las fuentes. En general, SCAIL dio buenos resultados de acuerdo con los criterios estadísticos establecidos. Este trabajo ha permitido definir situaciones en las que las concentraciones predichas por los modelos de dispersión son similares, frente a otras en las que las predicciones difieren notablemente entre modelos. Algunos modelos no están diseñados para simular determinados escenarios, ya que no incluyen los procesos relevantes o quedan fuera de sus límites de aplicabilidad. Un ejemplo es el modelo LADD, que no es aplicable a fuentes con velocidad de salida significativa porque no incluye una parametrización de la sobreelevación del penacho. La evaluación de un esquema simple que combina la sobreelevación del penacho con un aumento de la turbulencia en la fuente mejoró el comportamiento del modelo, aunque se necesitan más pruebas en este sentido. Incluso los modelos que son aplicables y que incluyen los procesos relevantes no siempre dan predicciones similares, y las razones de ello aún se desconocen. Por ejemplo, AERMOD predice mayores concentraciones que ADMS para la dispersión de NH3 procedente de naves de ganado ventiladas mecánicamente. Existe evidencia que sugiere que el modelo ADMS infraestima las concentraciones en estas situaciones debido a un umbral de velocidad de viento elevado. Por el contrario, existen evidencias de que AERMOD sobreestima las concentraciones debido a una sobreestimación a bajas velocidades de viento. Sin embargo, una modificación simple del preprocesador meteorológico parece mejorar notablemente el comportamiento del modelo. Es de gran importancia que estas diferencias entre las predicciones de los modelos sean consideradas en las evaluaciones regulatorias. Esto puede hacerse aplicando el modelo más adecuado para cada caso o, mejor aún, empleando modelos múltiples o híbridos.
ABSTRACT Short-range atmospheric dispersion of ammonia (NH3) emitted by agricultural sources and its subsequent deposition to soil and vegetation can lead to the degradation of sensitive ecosystems and acidification of the soil. Atmospheric concentrations and dry deposition rates of NH3 are generally highest near the emission source and so environmental impacts on sensitive ecosystems are often largest at these locations. Under European legislation, several member states use short-range atmospheric dispersion models to estimate the impact of ammonia emissions on nearby designated nature conservation sites. A recent review of assessment methods for short-range impacts of NH3 recommended an intercomparison of the different models to identify whether there are notable differences between the assessment approaches used in different European countries. Based on this recommendation, this thesis compares and evaluates the atmospheric concentration predictions of several models used in these impact assessments for various real and hypothetical scenarios, including Mediterranean meteorological conditions. In addition, various inverse dispersion modelling techniques for the estimation of NH3 emission rates are also compared and evaluated, and a simple screening model to calculate the NH3 concentration and dry deposition rate at a sensitive ecosystem located close to an NH3 source was developed. The model intercomparison evaluated four atmospheric dispersion models (ADMS 4.1; AERMOD v07026; OPS-st v3.0.3 and LADD v2010) for a range of hypothetical case studies representing the atmospheric dispersion from several agricultural NH3 source types. The best agreement between the mean annual concentration predictions of the models was found for simple scenarios with area and volume sources. The agreement between the predictions of the models was worst for the scenario representing the dispersion from a mechanically ventilated livestock house, for which ADMS predicted significantly smaller concentrations than the other models. These differences appear to be due to the interaction of different plume-rise and boundary layer parameterisations. All four dispersion models were applied to two real case studies of dispersion of NH3 from pig farms in Falster (Denmark) and North Carolina (USA). The mean annual concentration predictions of the models were similar for the USA case study (emissions from naturally ventilated pig houses and a slurry lagoon). The comparison of model predictions with mean annual measured concentrations and the application of established statistical model acceptability criteria showed that all four models performed acceptably for this case study. This was not the case for the Danish case study (mechanically ventilated pig house), for which the LADD model did not perform acceptably due to the lack of plume-rise processes in the model. Regulatory dispersion models often perform poorly in low wind speed conditions because the dispersion theory they are based on is inapplicable at low wind speeds. For situations with frequent low wind speed periods, current modelling guidance for regulatory assessments is to use a model that can handle these conditions in an acceptable way. This may not always be possible due to insufficient meteorological data and so the only option may be to carry out the assessment using a more common regulatory model, such as the advanced Gaussian models ADMS or AERMOD.
In order to assess the suitability of these models for low wind conditions, they were applied to a Mediterranean case study that included many periods of low wind speed. The case study was the dispersion of NH3 emitted by a pig farm in Segovia, Central Spain, for which mean monthly atmospheric NH3 concentration measurements were made at 21 locations surrounding the farm, as well as high-temporal-resolution concentration measurements at one location during a one-week campaign. Two strategies to improve the model performance for low wind speed conditions were tested. These were 'no zero wind' (NZW), which replaced calm periods with the minimum threshold wind speed of the model, and 'accumulated calm emissions' (ACE), which forced the model to emit the total emissions of a calm period during the first subsequent non-calm hour. Due to large uncertainties in the model input data (NH3 emission rates, source exit velocities, boundary layer parameters), the case study was also used to assess model prediction uncertainty and to assess how this uncertainty can be taken into account in model evaluations. A dynamic emission model modified for the Mediterranean climate was used to estimate the temporal variability in NH3 emission rates, and a comparison was made between the simulations using the dynamic emissions and a constant emission rate. Prediction uncertainty due to model input uncertainty was 67-98% of the mean value for ADMS and between 53-83% of the mean value for AERMOD. Most of this uncertainty was due to source emission rate uncertainty (~50%), followed by uncertainty in the meteorological conditions (~10-20%) and uncertainty in exit velocities (~5-10%). AERMOD predicted higher concentrations than ADMS, and more of the simulations met the model acceptability criteria when compared with the annual mean measured concentrations. However, the ADMS predictions were better correlated spatially with the measurements. The use of dynamic emission estimates improved the performance of ADMS but worsened the performance of AERMOD, and the application of strategies to improve model performance had similarly contradictory effects. In order to compare different inverse modelling techniques, several models (ADMS, LADD and WindTrax) were applied to a non-agricultural case study of a penguin colony in Antarctica. This case study was used since it gave the opportunity to provide the first experimentally derived emission factor for an Antarctic penguin colony and also had the advantage of negligible background concentrations. There was sufficient agreement between the emission estimates obtained from the three models to define an emission factor for the penguin colony (1.23 g NH3 per breeding pair per day, with an uncertainty range of 0.8-2.54 g NH3 per breeding pair per day). This emission estimate compared favourably with the value of 0.98 g ammonia per breeding pair per day (95% confidence interval: 0.2-2.4 g ammonia per breeding pair per day) obtained using a simple micrometeorological technique (aerodynamic gradient). Further application of the inverse modelling techniques to a range of agricultural case studies also demonstrated good agreement between the emission estimates. It is concluded, therefore, that inverse dispersion modelling is a robust technique for estimating NH3 emission rates.
Screening models that can provide a quick and approximate estimate of environmental impacts are a useful tool for impact assessments because they can be used to filter out cases that potentially have a minimal environmental impact, allowing resources to be focussed on more potentially damaging cases. The Simple Calculation of Ammonia Impact Limits (SCAIL) model was developed as a screening model to provide an estimate of the mean NH3 concentration and dry deposition rate downwind of an agricultural source. This screening tool, based on the LADD model, was evaluated and calibrated with several experimental datasets and then validated using independent concentration measurements made near sources. Overall, SCAIL performed acceptably according to established statistical criteria. This work has identified situations where the concentration predictions of dispersion models are similar and other situations where the predictions are significantly different. Some models are simply not designed to simulate certain scenarios since they do not include the relevant processes or are beyond the limits of their applicability. An example is the LADD model, which is not applicable to sources with significant exit velocity since the model does not include a plume-rise parameterisation. The testing of a simple scheme combining a momentum-driven plume rise and increased turbulence at the source improved model performance, but more testing is required. Even models that are applicable and include the relevant processes do not always give similar predictions, and the reasons for this need to be investigated. AERMOD, for example, predicts higher concentrations than ADMS for dispersion from mechanically ventilated livestock housing. There is evidence to suggest that ADMS underestimates concentrations in these situations due to a high wind speed threshold. Conversely, there is also evidence that AERMOD overestimates concentrations in these situations due to overestimation at low wind speeds. However, a simple modification to the meteorological pre-processor appears to improve the performance of the model. It is important that these differences between the predictions of these models are taken into account in regulatory assessments. This can be done by applying the most suitable model for the assessment in question or, better still, using multiple or hybrid models.
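The NZW and ACE strategies described in this abstract are simple pre-processing rules applied to the hourly wind-speed and emission series before the dispersion model is run. The following minimal Python sketch illustrates one possible reading of the two rules under assumed conventions (hourly lists, an assumed calm threshold of 0.75 m/s); it is not the thesis code.

def apply_nzw(wind_speeds, threshold=0.75):
    # No-zero-wind: raise calm-hour wind speeds to the model's minimum threshold.
    return [max(ws, threshold) for ws in wind_speeds]

def apply_ace(wind_speeds, emissions, threshold=0.75):
    # Accumulated-calm-emissions: move emissions from calm hours to the first
    # subsequent non-calm hour so no emitted mass is lost in hours the model skips.
    # Any emissions still carried at the end of the series are discarded here.
    adjusted = []
    carried = 0.0
    for ws, em in zip(wind_speeds, emissions):
        if ws < threshold:            # calm hour: store the emission, emit nothing
            carried += em
            adjusted.append(0.0)
        else:                         # first non-calm hour receives the backlog
            adjusted.append(em + carried)
            carried = 0.0
    return adjusted

if __name__ == "__main__":
    ws = [0.3, 0.2, 1.8, 2.5]         # m/s, hourly (invented)
    em = [10.0, 10.0, 10.0, 10.0]     # g NH3/h, hourly (invented)
    print(apply_nzw(ws))              # [0.75, 0.75, 1.8, 2.5]
    print(apply_ace(ws, em))          # [0.0, 0.0, 30.0, 10.0]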

Relevance:

30.00%

Publisher:

Abstract:

Palatal clicks are of particular interest for human echolocation. Moreover, these sounds are suitable for other acoustic applications due to their regular mathematical properties and reproducibility. Simple and nondestructive techniques can be developed, bioinspired by synthesized pulses whose form reproduces the best features of palatal clicks. The use of synthetic palatal pulses also allows detailed studies of the real possibilities of acoustic human echolocation without the problems associated with subjective individual differences. These techniques are being applied to the study of wood. As an example, a comparison of the performance of both natural and synthetic human echolocation in identifying three different species of wood is presented. The results show that human echolocation has vast potential.
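The abstract does not specify its pulse model, so the sketch below is only an assumption of how a reproducible, click-like synthetic pulse and its echo delay might be handled with NumPy; the sample rate, centre frequency and decay constant are invented for illustration.

import numpy as np

fs = 48000                        # sample rate in Hz (assumed)
f0 = 3500.0                       # dominant click frequency in Hz (assumed)
tau = 0.0015                      # exponential decay constant in s (assumed)
t = np.arange(0, 0.01, 1.0 / fs)
click = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)   # stand-in synthetic click

def echo_delay(emitted, recorded, fs):
    # Cross-correlate the emitted pulse with the recorded return; the lag of the
    # correlation peak gives the echo delay, the basic echolocation observable.
    corr = np.correlate(recorded, emitted, mode="full")
    lag = corr.argmax() - (len(emitted) - 1)
    return lag / fs               # delay in seconds

# Invented example: the click returns after 960 samples (0.02 s round trip).
recorded = np.concatenate([np.zeros(960), click])
print(echo_delay(click, recorded, fs))   # ~0.02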

Relevance:

30.00%

Publisher:

Abstract:

Tabled evaluation has proved to be an effective method for improving several aspects of goal-oriented query evaluation, including termination and complexity. Several "native" implementations of tabled evaluation have been developed which offer good performance, but many of them require significant changes to the underlying Prolog implementation. More portable approaches, generally based on program transformation, have been proposed, but they often result in lower efficiency. We explore some techniques aimed at combining the best of both worlds, i.e., developing a portable and extensible implementation, with minimal modifications at the abstract machine level, and with reasonably good performance. Our preliminary results are promising.
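As a rough illustration of the idea behind tabling (an answer table per subgoal plus fixpoint iteration, which restores termination for recursive queries), here is a minimal Python sketch for graph reachability. It is not the transformation proposed in the paper and makes no attempt to model Prolog's abstract machine.

def reachable_all(edges, start):
    # Answer table: node -> set of nodes currently known to be reachable from it.
    table = {start: set()}
    changed = True
    while changed:                      # iterate to a fixpoint, as a tabling engine would
        changed = False
        for node in list(table):
            for (a, b) in edges:
                if a == node:
                    new = {b} | table.get(b, set())
                    if not new <= table[node]:
                        table[node] |= new
                        changed = True
                    if b not in table:  # open a table entry for the new subgoal
                        table[b] = set()
                        changed = True
    return table[start]

# Example: a cyclic graph where naive SLD resolution would loop forever.
print(reachable_all([(1, 2), (2, 1), (2, 3)], 1))   # {1, 2, 3}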

Relevance:

30.00%

Publisher:

Abstract:

Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization, such as granularity analysis and selection among different algorithms or control rules whose performance may depend on such sizes. These sizes are difficult to even approximate at compile time and are thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible, our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and applications of our technique and present some performance results.
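A minimal sketch of the underlying idea, transposed to Python for readability: a procedure that already traverses a data structure is transformed so that it also returns the structure's size, making a later granularity-control test free of any additional traversal. The function names and the threshold are illustrative assumptions, not part of the original technique.

def double_all(xs):
    # Original procedure: traverses the list once.
    return [2 * x for x in xs]

def double_all_with_size(xs):
    # Transformed procedure: same traversal, but the size is computed "on the fly".
    out = []
    size = 0
    for x in xs:
        out.append(2 * x)
        size += 1
    return out, size

def process(xs, threshold=1000):
    ys, n = double_all_with_size(xs)   # the size comes for free: no second traversal of xs
    if n > threshold:                  # e.g. a granularity-control decision
        print("large input: run the parallel version")
    return ys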

Relevance:

30.00%

Publisher:

Abstract:

El objetivo final de las investigaciones recogidas en esta tesis doctoral es la estimación del volumen de hielo total de los más de 1600 glaciares de Svalbard, en el Ártico, y, con ello, su contribución potencial a la subida del nivel medio del mar en un escenario de calentamiento global. Los cálculos más exactos del volumen de un glaciar se efectúan a partir de medidas del espesor de hielo obtenidas con georradar. Sin embargo, estas medidas no son viables para conjuntos grandes de glaciares, debido al coste, dificultades logísticas y tiempo requerido por ellas, especialmente en las regiones polares o de montaña. Frente a ello, la determinación de áreas de glaciares a partir de imágenes de satélite sí es viable a escalas global y regional, por lo que las relaciones de escala volumen-área constituyen el mecanismo más adecuado para las estimaciones de volúmenes globales y regionales, como las realizadas para Svalbard en esta tesis. Como parte del trabajo de tesis, hemos elaborado un inventario de los glaciares de Svalbard en los que se han efectuado radioecosondeos, y hemos realizado los cálculos del volumen de hielo de más de 80 cuencas glaciares de Svalbard a partir de datos de georradar. Estos volúmenes han sido utilizados para calibrar las relaciones volumen-área desarrolladas en la tesis. Los datos de georradar han sido obtenidos en diversas campañas llevadas a cabo por grupos de investigación internacionales, gran parte de ellas lideradas por el Grupo de Simulación Numérica en Ciencias e Ingeniería de la Universidad Politécnica de Madrid, del que forman parte la doctoranda y los directores de tesis. Además, se ha desarrollado una metodología para la estimación del error en el cálculo de volumen, que aporta una novedosa técnica de cálculo del error de interpolación para conjuntos de datos del tipo de los obtenidos con perfiles de georradar, que presentan distribuciones espaciales con unos patrones muy característicos pero con una densidad de datos muy irregular. Hemos obtenido en este trabajo de tesis relaciones de escala específicas para los glaciares de Svalbard, explorando la sensibilidad de los parámetros a diferentes morfologías glaciares, e incorporando nuevas variables. En particular, hemos efectuado experimentos orientados a verificar si las relaciones de escala obtenidas caracterizando los glaciares individuales por su tamaño, pendiente o forma implican diferencias significativas en el volumen total estimado para los glaciares de Svalbard, y si esta partición implica algún patrón significativo en los parámetros de las relaciones de escala. Nuestros resultados indican que, para un valor constante del factor multiplicativo de la relación de escala, el exponente que afecta al área en la relación volumen-área decrece según aumentan la pendiente y el factor de forma, mientras que las clasificaciones basadas en tamaño no muestran un patrón significativo. Esto significa que los glaciares con mayores pendientes y de tipo circo son menos sensibles a los cambios de área. Además, los volúmenes de la población total de los glaciares de Svalbard calculados con fraccionamiento en grupos por tamaño y pendiente son un 1-4% menores que los obtenidos usando la totalidad de glaciares sin fraccionamiento en grupos, mientras que los volúmenes calculados fraccionando por forma son un 3-5% mayores. También realizamos experimentos multivariable para obtener estimaciones óptimas del volumen total mediante una combinación de distintos predictores.
Nuestros resultados muestran que un modelo potencial simple volumen-área explica el 98.6% de la varianza. Sólo el predictor longitud del glaciar proporciona significación estadística cuando se usa además del área del glaciar, aunque el coeficiente de determinación disminuye en comparación con el modelo más simple V-A. El predictor intervalo de altitud no proporciona información adicional cuando se usa además del área del glaciar. Nuestras estimaciones del volumen de la totalidad de glaciares de Svalbard usando las diferentes relaciones de escala obtenidas en esta tesis oscilan entre 6890 y 8106 km3, con errores relativos del orden de 6.6-8.1%. El valor medio de nuestras estimaciones, que puede ser considerado como nuestra mejor estimación del volumen, es de 7504 km3. En términos de equivalente en nivel del mar (SLE), nuestras estimaciones corresponden a una subida potencial del nivel del mar de 17-20 mm SLE, promediando 19 ± 2 mm SLE, donde el error corresponde al error en volumen antes indicado. En comparación, las estimaciones usando las relaciones V-A de otros autores son de 13-26 mm SLE, promediando 20 ± 2 mm SLE, donde el error representa la desviación estándar de las distintas estimaciones. ABSTRACT The final aim of the research involved in this doctoral thesis is the estimation of the total ice volume of the more than 1600 glaciers of Svalbard, in the Arctic region, and thus their potential contribution to sea-level rise under a global warming scenario. The most accurate calculations of glacier volumes are those based on ice thicknesses measured by ground-penetrating radar (GPR). However, such measurements are not viable for very large sets of glaciers, due to their cost, logistic difficulties and time requirements, especially in polar or mountain regions. On the contrary, the calculation of glacier areas from satellite images is perfectly viable at global and regional scales, so the volume-area scaling relationships are the most useful tool to determine glacier volumes at global and regional scales, as done for Svalbard in this PhD thesis. As part of the PhD work, we have compiled an inventory of the radio-echo sounded glaciers in Svalbard, and we have performed the volume calculations for more than 80 glacier basins in Svalbard from GPR data. These volumes have been used to calibrate the volume-area relationships derived in this dissertation. Such GPR data have been obtained during fieldwork campaigns carried out by international teams, often led by the Group of Numerical Simulation in Science and Engineering of the Technical University of Madrid, to which the PhD candidate and her supervisors belong. Furthermore, we have developed a methodology to estimate the error in the volume calculation, which includes a novel technique to calculate the interpolation error for data sets of the type produced by GPR profiling, which show very characteristic data distribution patterns but with very irregular data density. We have derived in this dissertation scaling relationships specific for Svalbard glaciers, exploring the sensitivity of the scaling parameters to different glacier morphologies and adding new variables. In particular, we did experiments aimed to verify whether scaling relationships obtained through characterization of individual glacier shape, slope and size imply significant differences in the estimated volume of the total population of Svalbard glaciers, and whether this partitioning implies any noticeable pattern in the scaling relationship parameters.
Our results indicate that, for a fixed value of the factor in the scaling relationship, the exponent of the area in the volume-area relationship decreases as slope and shape factor increase, whereas size-based classifications do not reveal any clear trend. This means that steep slopes and cirque-type glaciers are less sensitive to changes in glacier area. Moreover, the volumes of the total population of Svalbard glaciers calculated according to partitioning in subgroups by size and slope are smaller (by 1-4%) than that obtained considering all glaciers without partitioning into subgroups, whereas the volumes calculated according to partitioning in subgroups by shape are 3-5% larger. We also did multivariate experiments attempting to optimally predict the volume of Svalbard glaciers from a combination of different predictors. Our results show that a simple power-type V-A model explains 98.6% of the variance. Only the predictor glacier length provides statistical significance when used in addition to the predictor glacier area, though the coefficient of determination decreases as compared with the simpler V-A model. The predictor elevation range did not provide any additional information when used in addition to glacier area. Our estimates of the volume of the entire population of Svalbard glaciers using the different scaling relationships that we have derived along this thesis range within 6890-8106 km3, with estimated relative errors in total volume of the order of 6.6-8.1%. The average value of all of our estimates, which could be used as a best estimate for the volume, is 7,504 km3. In terms of sea-level equivalent (SLE), our volume estimates correspond to a potential contribution to sea-level rise within 17-20 mm SLE, averaging 19 ± 2 mm SLE, where the quoted error corresponds to our estimated relative error in volume. For comparison, the estimates using the V-A scaling relations found in the literature range within 13-26 mm SLE, averaging 20 ± 2 mm SLE, where the quoted error represents the standard deviation of the different estimates.
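For readers unfamiliar with volume-area scaling, the sketch below shows the form of the calculation described above, V = c * A**gamma, together with the conversion of a total ice volume to sea-level equivalent. The coefficient and exponent are generic placeholders, not the Svalbard-specific parameters calibrated in the thesis.

OCEAN_AREA_M2 = 3.62e14           # approximate ocean surface area
ICE_DENSITY = 900.0               # kg/m3
WATER_DENSITY = 1000.0            # kg/m3

def glacier_volume_km3(area_km2, c=0.03, gamma=1.36):
    # Volume-area scaling for a single glacier; c and gamma are placeholder values.
    return c * area_km2 ** gamma

def sea_level_equivalent_mm(total_volume_km3):
    # Convert a total ice volume to its sea-level equivalent in mm of global ocean rise.
    volume_m3 = total_volume_km3 * 1e9
    water_m3 = volume_m3 * ICE_DENSITY / WATER_DENSITY
    return water_m3 / OCEAN_AREA_M2 * 1000.0

# The best-estimate volume quoted above (~7504 km3) gives roughly 18-19 mm SLE,
# consistent with the 17-20 mm SLE range reported.
print(sea_level_equivalent_mm(7504.0))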

Relevance:

30.00%

Publisher:

Abstract:

In this paper the influence of an axial microgravity on the minimum volume stability limit of axisymmetric liquid bridges between unequal disks is analyzed both theoretically and experimentally. The results here presented extend the knowledge of the static behaviour of liquid bridges to fluid configurations different from those studied up to now (almost equal disks). Experimental results, obtained by simulating microgravity conditions by the neutral buoyancy technique, are also presented and are shown to be in complete agreement with theoretical ones.

Relevance:

30.00%

Publisher:

Abstract:

Esta tesis se ha desarrollado en el contexto del proyecto Cajal Blue Brain, una iniciativa europea dedicada al estudio del cerebro. Uno de los objetivos de esta iniciativa es desarrollar nuevos métodos y nuevas tecnologías que simplifiquen el análisis de datos en el campo neurocientífico. El presente trabajo se ha centrado en diseñar herramientas que combinen información proveniente de distintos canales sensoriales con el fin de acelerar la interacción y el análisis de imágenes neurocientíficas. En concreto, se estudia la posibilidad de combinar información visual con información háptica. Las espinas dendríticas son pequeñas protuberancias que recubren la superficie dendrítica de muchas neuronas del cerebro. A día de hoy, se cree que tienen un papel clave en la transmisión de señales neuronales, motivo por el cual el interés de la comunidad científica por estas estructuras ha ido en aumento a medida que las técnicas de adquisición de imágenes mejoraban, hasta alcanzar una calidad suficiente para analizar dichas estructuras. A menudo, los neurocientíficos utilizan técnicas de microscopía con luz para obtener los datos que les permitan analizar estructuras neuronales tales como neuronas, dendritas y espinas dendríticas. Aunque estas técnicas ofrecen ciertas ventajas frente a su equivalente electrónico, las técnicas basadas en luz alcanzan una menor resolución. En particular, estructuras pequeñas como las espinas dendríticas pueden capturarse de forma incorrecta en las imágenes obtenidas, impidiendo su análisis. En este trabajo se presenta una nueva técnica que permite editar imágenes volumétricas mediante un dispositivo háptico, con el fin de reconstruir los cuellos de las espinas dendríticas. Con este objetivo, en un primer momento se desarrolló un algoritmo que proporciona retroalimentación háptica en datos volumétricos, completando la información que proviene del canal visual. Dicho algoritmo de renderizado háptico permite a los usuarios tocar y percibir una isosuperficie en el volumen de datos. El algoritmo asegura un renderizado robusto y eficiente. Se utiliza un método basado en las técnicas de "marching tetrahedra" para la extracción local de una isosuperficie continua, lineal y definida por intervalos. La robustez deriva tanto de una etapa de detección de colisiones continua de la isosuperficie extraída como del uso de técnicas eficientes de renderizado basadas en un proxy puntual. El método de "marching tetrahedra" propuesto garantiza que la topología de la isosuperficie extraída coincida con la topología de una isosuperficie equivalente determinada utilizando una interpolación trilineal. Además, con el objetivo de mejorar la coherencia entre la información háptica y la información visual, el algoritmo de renderizado háptico calcula un segundo proxy en la isosuperficie pintada en la pantalla. Se demuestran experimentalmente las mejoras en, primero, la etapa de extracción de la isosuperficie; segundo, la robustez a la hora de mantener el proxy en la isosuperficie deseada; y, finalmente, la eficiencia del algoritmo. En segundo lugar, a partir del algoritmo de renderizado háptico propuesto, se desarrolló un procedimiento en cuatro etapas para la reconstrucción de espinas dendríticas. Este procedimiento se puede integrar en los flujos de segmentación automática y semiautomática existentes como una etapa previa de preproceso.
El procedimiento está diseñado para que tanto la navegación como el proceso de edición en sí mismo estén controlados utilizando un dispositivo háptico. Se han diseñado dos experimentos para evaluar esta técnica. El primero evalúa la aportación de la retroalimentación háptica y el segundo se centra en evaluar la idoneidad del dispositivo háptico como dispositivo de entrada. En ambos casos, los resultados demuestran que nuestro procedimiento mejora la precisión de la reconstrucción. En este trabajo se describen también dos casos de uso de nuestro procedimiento en el ámbito de la neurociencia: el primero aplicado a neuronas situadas en la corteza cerebral humana y el segundo aplicado a espinas dendríticas situadas a lo largo de neuronas piramidales de la corteza del cerebro de una rata. Por último, presentamos el programa Neuro Haptic Editor, desarrollado a lo largo de esta tesis junto con los diferentes algoritmos ya mencionados. ABSTRACT This thesis took place within the Cajal Blue Brain project, a European initiative dedicated to the study of the brain. One of the main goals of this project is the development of new methods and technologies simplifying data analysis in neuroscience. This thesis focused on the development of tools combining information originating from distinct sensory channels with the aim of accelerating both the interaction with neuroscience images and their analysis. In concrete terms, the objective is to study the possibility of combining visual information with haptic information. Dendritic spines are thin protrusions that cover the dendritic surface of numerous neurons in the brain and seem to play a key role in neural circuits. The interest of the neuroscience community in these structures kept increasing as acquisition methods improved, eventually to the point that the resulting datasets enabled their analysis. Quite often, neuroscientists use light microscopy techniques to produce the dataset that will allow them to analyse neuronal structures such as neurons, dendrites and dendritic spines. While offering some advantages compared to their electronic counterpart, light microscopy techniques achieve lower resolutions. In particular, small structures such as dendritic spines might suffer from a very low level of fluorescence in the final dataset, preventing further analysis. This thesis introduces a new technique enabling the editing of volumetric datasets in order to recreate dendritic spine necks using a haptic device. In order to fulfil this objective, we first presented an algorithm to provide haptic feedback directly from volumetric datasets, as an aid to regular visualization. The haptic rendering algorithm lets users perceive isosurfaces in volumetric datasets, and it relies on several design features that ensure a robust and efficient rendering. A marching tetrahedra approach enables the dynamic extraction of a piecewise linear continuous isosurface. Robustness derives from a continuous collision detection step coupled with established proxy-based rendering methods over the extracted isosurface. The introduced marching tetrahedra approach guarantees that the extracted isosurface will match the topology of an equivalent isosurface computed using trilinear interpolation. The proposed haptic rendering algorithm improves the coherence between haptic and visual cues by computing a second proxy on the isosurface displayed on screen.
Three experiments demonstrate the improvements in the isosurface extraction stage as well as the robustness and the efficiency of the complete algorithm. We then introduce our four-step procedure for the complete reconstruction of dendritic spines. Based on our haptic rendering algorithm, this procedure is intended to work as an image processing stage before the automatic segmentation step that gives the final representation of the dendritic spines. The procedure is designed to allow both the navigation and the volume image editing to be carried out using a haptic device. We evaluated our procedure through two experiments. The first experiment concerns the benefits of the force feedback and the second assesses the suitability of a haptic device as an input device. In both cases, the results show that the procedure improves the editing accuracy. We also report two concrete cases where our procedure was employed in the neuroscience field, the first one concerning dendritic spines in the human cortex, the second one referring to an ongoing experiment studying dendritic spines along dendrites of mouse cortical pyramidal neurons. Finally, we present the software program Neuro Haptic Editor, built alongside the different algorithms implemented during this thesis, which neuroscientists use to apply our procedure.
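The core geometric step of a marching-tetrahedra extraction like the one described above is locating, by linear interpolation, where the isosurface crosses each tetrahedron edge. The sketch below shows that standard construction in Python; it is not the thesis implementation and it omits the topology-preserving case handling and the haptic proxy update the abstract refers to.

def edge_crossing(p0, p1, v0, v1, iso):
    # Point on segment p0-p1 where the scalar field equals iso, assuming the
    # two vertex samples v0 and v1 bracket the isovalue (v0 != v1).
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def tetra_isosurface(verts, vals, iso):
    # Collect the crossing points on the six edges of one tetrahedron; three points
    # give a triangle, four give a quad (split into two triangles by the full
    # algorithm). Degenerate cases where iso equals a vertex sample are ignored here.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    points = []
    for i, j in edges:
        if (vals[i] - iso) * (vals[j] - iso) < 0:    # edge straddles the isosurface
            points.append(edge_crossing(verts[i], verts[j], vals[i], vals[j], iso))
    return points

# Example: unit tetrahedron with the isosurface at value 0.5.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
vals = [0.0, 1.0, 1.0, 1.0]
print(tetra_isosurface(verts, vals, 0.5))            # three mid-edge points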

Relevance:

30.00%

Publisher:

Abstract:

The lower stability limit for axisymmetric floating zones at rest between equal coaxial disks has been experimentally verified for several disk-separation/disk-diameter ratios by using the neutral buoyancy technique. Results show a close agreement with theory in the case of bridge disruption and a wide scatter in the case of bridge detachment.

Relevance:

30.00%

Publisher:

Abstract:

Methylation of cytosines in the dinucleotide CpG has been shown to suppress transcription of a number of tissue-specific genes, yet the precise mechanism is not fully understood. The vertebrate globin genes were among the first examples in which an inverse correlation was shown between CpG methylation and transcription. We studied the methylation pattern of the 235-bp ρ-globin gene promoter in genomic DNA from primary chicken erythroid cells using the sodium bisulfite conversion technique and found all CpGs in the promoter to be methylated in erythroid cells from adult chickens in which the ρ-globin gene is silent but unmethylated in 5-day (primitive) embryonic red cells in which the gene is transcribed. To elucidate further the mechanism of methylation-induced silencing, an expression construct consisting of 235 bp of 5′ promoter sequence of the ρ-globin gene along with a strong 5′ erythroid enhancer driving a chloramphenicol acetyltransferase reporter gene, ρ-CAT, was transfected into primary avian erythroid cells derived from 5-day embryos. Methylation of just the 235-bp ρ-globin gene promoter fragment at every CpG resulted in a 20- to 30-fold inhibition of transcription, and this effect was not overridden by the presence of potent erythroid-specific enhancers. The ability of the 235-bp ρ-globin gene promoter to bind to a DNA Methyl Cytosine binding Protein Complex (MeCPC) was tested in electrophoretic mobility shift assays utilizing primary avian erythroid cell nuclear extract. The results were that fully methylated but not unmethylated 235-bp ρ-globin gene promoter fragment could compete efficiently for MeCPC binding. These results are a direct demonstration that site-specific methylation of a globin gene promoter at the exact CpGs that are methylated in vivo can silence transcription in homologous primary erythroid cells. Further, these data implicate binding of MeCPC to the promoter in the mechanism of silencing.

Relevance:

30.00%

Publisher:

Abstract:

The transformation-associated recombination (TAR) cloning technique allows selective and accurate isolation of chromosomal regions and genes from complex genomes. The technique is based on in vivo recombination between genomic DNA and a linearized vector containing homologous sequences, or hooks, to the gene of interest. The recombination occurs during transformation of yeast spheroplasts that results in the generation of a yeast artificial chromosome (YAC) containing the gene of interest. To further enhance and refine the TAR cloning technology, we determined the minimal size of a specific hook required for gene isolation utilizing the Tg.AC mouse transgene as a targeted region. For this purpose a set of vectors containing a B1 repeat hook and a Tg.AC-specific hook of variable sizes (from 20 to 800 bp) was constructed and checked for efficiency of transgene isolation by a radial TAR cloning. When vectors with a specific hook that was ≥60 bp were utilized, ∼2% of transformants contained circular YACs with the Tg.AC transgene sequences. Efficiency of cloning dramatically decreased when the TAR vector contained a hook of 40 bp or less. Thus, the minimal length of a unique sequence required for gene isolation by TAR is ∼60 bp. No transgene-positive YAC clones were detected when an ARS element was incorporated into a vector, demonstrating that the absence of a yeast origin of replication in a vector is a prerequisite for efficient gene isolation by TAR cloning.

Relevance:

30.00%

Publisher:

Abstract:

Constant pressure and temperature molecular dynamics techniques have been employed to investigate the changes in structure and volumes of two globular proteins, superoxide dismutase and lysozyme, under pressure. Compression (the relative changes in the proteins' volumes), computed with the Voronoi technique, is closely related to the so-called protein intrinsic compressibility, estimated by sound velocity measurements. In particular, compression computed with Voronoi volumes predicts, in agreement with experimental estimates, a negative bound-water contribution to the apparent protein compression. While the use of van der Waals and molecular volumes underestimates the intrinsic compressibilities of proteins, Voronoi volumes produce results closer to experimental estimates. Remarkably, for two globular proteins of very different secondary structures, we compute identical (within statistical error) protein intrinsic compressions, as predicted by recent experimental studies. Changes in the protein interatomic distances under compression are also investigated. It is found that, on average, short distances compress less than longer ones. This nonuniform contraction underlines the peculiar nature of the structural changes due to pressure, in contrast with temperature effects, which instead produce spatially uniform changes in proteins. The structural effects observed in the simulations at high pressure can explain protein compressibility measurements carried out by fluorimetric and hole-burning techniques. Finally, the calculation of the proteins' static structure factor shows significant shifts in the peaks at short wavenumber as pressure changes. These effects might provide an alternative way to obtain information concerning compressibilities of selected protein regions.
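As a reading aid, the sketch below shows the quantity being discussed: a finite-difference estimate of the compressibility, -(1/V)*dV/dP, from volumes computed at two pressures. The numerical values are invented placeholders, not results from the simulations.

def compressibility(v_low, v_high, p_low_mpa, p_high_mpa):
    # Finite-difference estimate of -(1/V) dV/dP from volumes at two pressures.
    dv = v_high - v_low
    dp = (p_high_mpa - p_low_mpa) * 1e6        # MPa -> Pa
    v_mean = 0.5 * (v_low + v_high)
    return -dv / (v_mean * dp)                 # in 1/Pa

# Placeholder example: a ~1.5% Voronoi-volume decrease over a 300 MPa pressure step.
v0, v1 = 16000.0, 15760.0                      # volumes in A^3 (invented)
print(compressibility(v0, v1, 0.1, 300.0))     # ~5e-11 1/Pa, i.e. ~0.05 GPa^-1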

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Fetal growth restriction (FGR) is one of the main complications of pregnancy and is associated with high rates of perinatal morbidity and mortality. The frequency of adverse neonatal outcomes is directly related to the severity of FGR, with the worst outcomes occurring in cases with weight below the 3rd percentile. The mechanism of fetal growth is not fully understood, but it results from the interaction between the genetic growth potential and placental, maternal and environmental factors. Among the aetiological factors, abnormal placental development and reduced uteroplacental perfusion are the main causes of FGR. This study aimed to evaluate placental volume and vascularisation indices by three-dimensional ultrasound (3D US) in pregnancies with severe FGR, and the correlations of the placental parameters with normal reference values and with maternal-fetal Doppler velocimetry. METHODS: Twenty-seven pregnant women whose fetuses had an estimated weight below the 3rd percentile for gestational age were evaluated. Using 3D US with the VOCAL technique, placental volume (PV) and the vascular indices, namely the vascularisation index (VI), flow index (FI) and vascularisation-flow index (VFI), were measured. The data were compared with the normality curve for gestational age and fetal weight described by De Paula et al. (2008, 2009). Since placental volumes vary during pregnancy, the observed values were compared with the values expected for gestational age and fetal weight, creating the indices observed/expected volume for gestational age (Vo/e GA) and observed/expected placental volume for fetal weight (Vo/e FW). The placental parameters were correlated with the mean pulsatility index (PI) of the uterine arteries (UtA) and with the PI of the umbilical artery (UA), and were assessed according to the presence of bilateral protodiastolic notching in the UtA. RESULTS: Compared with the normality curve, placentas from pregnancies with severe FGR showed significantly lower PV, VI, FI and VFI (p < 0.0001 for all parameters). There was a statistically significant inverse correlation of the mean UtA PI with Vo/e GA (r = -0.461, p = 0.018), VI (r = -0.401, p = 0.042) and VFI (r = -0.421, p = 0.048). In the group of women with bilateral protodiastolic notching of the uterine arteries, Vo/e GA (p = 0.014), Vo/e FW (p = 0.02) and VI (p = 0.044) were significantly lower. None of the placental parameters showed a significant correlation with the UA PI. CONCLUSIONS: Placental volume and vascularisation indices are reduced in fetuses with severe FGR. The mean UtA PI correlates negatively with Vo/e GA, VI and VFI, and Vo/e GA, Vo/e FW and VI were reduced in cases with bilateral notching. There was no significant correlation between the placental parameters and the UA PI.
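Because placental volume grows with gestation, the study normalises the measured (VOCAL) volume by the value expected for gestational age or fetal weight. The sketch below only illustrates that observed/expected ratio with invented numbers; the expected-value curve itself (De Paula et al.) is not reproduced here.

def observed_to_expected(observed_cm3, expected_cm3):
    # Vo/e index: measured placental volume divided by the expected value
    # for the same gestational age (or fetal weight).
    return observed_cm3 / expected_cm3

# Invented example: 180 cm3 observed against 300 cm3 expected gives Vo/e = 0.6,
# i.e. a placenta markedly smaller than expected for that gestational age.
print(observed_to_expected(180.0, 300.0))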

Relevance:

30.00%

Publisher:

Abstract:

The Greater Himalayan leucogranites are a discontinuous suite of intrusions emplaced in a thickened crust during the Miocene southward ductile extrusion of the Himalayan metamorphic core. Melt-induced weakening is thought to have played a critical role in the strain localization that facilitated the extrusion. Recent advancements in centrifuge analogue modelling techniques allow for the replication of a broader range of crustal deformation behaviours, enhancing our understanding of large hot orogens. Polydimethylsiloxane (PDMS) is commonly used in centrifuge experiments to model weak melt zones. Difficulties in handling PDMS had, until now, limited its emplacement in models prior to any deformation. A new modelling technique has been developed in which PDMS is emplaced into models that have already been subjected to some shortening. This technique aims to better constrain the effects of melt on strain localization and potential decoupling between structural levels within an evolving orogenic system. Models are subjected to an early stage of shortening, followed by the introduction of PDMS, and then a final stage of shortening. Theoretical percentages of partial melt and their effect on rock strength are considered when adding a specific percentage of PDMS to each model. Due to the limited size of the models, only PDMS sheets of 3 mm thickness were used, which varied in length and width. Within undeformed packages, minimal surface and internal deformation occurred when PDMS was emplaced in the lower layer of the model, showing a vertical volume increase of ~20% within the package, whereas the emplacement of PDMS into the middle layer showed internal dragging of the middle laminations into the lower layer and a vertical volume increase of ~30%. Emplacement of PDMS results in ~7% shortening for undeformed and deformed models. Deformed models undergo ~20% additional shortening after two rounds of deformation. Strain localization and decoupling between units occur in deformed models, where the degree of deformation changes based on the amount of partial melt present. Surface deformation, visible as the formation of a bulge, mode 1 extension cracks and varying surface strain ellipses, varies depending on whether PDMS is present. Better control during emplacement is achieved when PDMS is added to cooler models, resulting in reduced internal deformation within the middle layer.

Relevance:

30.00%

Publisher:

Abstract:

Through the processes of the biological pump, carbon is exported to the deep ocean in the form of dissolved and particulate organic matter. There are several ways by which downward export fluxes can be estimated. The great attraction of the 234Th technique is that it allows a downward flux rate to be determined from a single water column profile of thorium coupled with an estimate of the POC/234Th ratio in sinking matter. We present a database of 723 estimates of organic carbon export from the surface ocean derived from the 234Th technique. Data were collected from tables in papers published between 1985 and 2013 only. We also present sampling dates, publication dates and sampling areas. Most of the open ocean Longhurst provinces are represented by several measurements. However, the Western Pacific, the Atlantic Arctic, the South Pacific and the South Indian Ocean are not well represented. There is a variety of integration depths, ranging from the surface to 220 m. Globally, the fluxes ranged from -22 to 125 mmol C/m2/d. We believe that this database is important for providing a new global estimate of the magnitude of the biological carbon pump.
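The flux estimate behind each database entry follows the standard steady-state 234Th balance: the export flux equals the 234Th decay constant times the depth-integrated 234Th deficit relative to 238U, converted to carbon with the POC/234Th ratio of sinking particles. The sketch below illustrates this with typical, invented numbers; it is not the exact procedure of any particular study in the compilation.

LAMBDA_TH234 = 0.0288            # 234Th decay constant, 1/day (ln 2 / 24.1 d)

def th234_flux(u238_dpm_m3, th234_dpm_m3, depth_m):
    # Steady-state 234Th export flux (dpm/m2/day) for a single layer with a
    # uniform mean deficit; real profiles integrate the deficit over depth.
    deficit = u238_dpm_m3 - th234_dpm_m3      # dpm/m3
    return LAMBDA_TH234 * deficit * depth_m

def carbon_export(th_flux_dpm_m2_d, poc_to_th_umol_dpm):
    # Carbon export in mmol C/m2/day from the POC/234Th ratio of sinking particles.
    return th_flux_dpm_m2_d * poc_to_th_umol_dpm / 1000.0   # umol -> mmol

# Invented example: a 100 m layer with a mean deficit of 0.4 dpm/L (400 dpm/m3)
# and a POC/234Th ratio of 5 umol/dpm gives ~5.8 mmol C/m2/day.
flux = th234_flux(2400.0, 2000.0, 100.0)      # ~1152 dpm/m2/day
print(carbon_export(flux, 5.0))               # ~5.76 mmol C/m2/day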

Relevance:

30.00%

Publisher:

Abstract:

Includes section "Revue des livres."