940 results for MOST PROBABLE NUMBER


Relevance:

90.00%

Abstract:

We determined the numbers of free-living and associated (aggregated or bonded to particles) bacteria in the coastal water of King George Island at an offshore (St. 1) and a nearshore station (St. 2) as a function of physico-chemical parameters. Water samples were collected between March and October at St. 1 and between April and October at St. 2. Direct counts of total bacteria varied from 0.53 × 10^8 to 5.02 × 10^8 cells/l. Associated microorganisms accounted for 5 to 20% of the total number of bacteria. Strong Spearman and Pearson correlations (R = 0.82; P = 0.001) were observed between the numbers of free-living and associated bacteria at St. 1. These two groups of bacteria were nearly evenly distributed in horizontal transects from inshore to offshore waters at depths of 1-10 m in Ardley Cove, and there were no substantial differences in the numbers of either free-living or associated bacteria in vertical transects either. Their numbers at St. 1, but not at St. 2, correlated significantly with all tested environmental parameters (salinity, temperature, solar radiation, and nitrate, phosphate and chlorophyll a concentrations), except the nitrite concentration in water. The most probable reason for these correlations is that most of the tested parameters share a common seasonal trend over the March-October period.
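(As an aside, both correlation measures used here are one-liners to compute; a minimal sketch follows, with invented placeholder counts rather than the station data.)

# Hypothetical free-living vs. associated bacterial counts (cells/l);
# illustrative values only, not the St. 1 measurements.
from scipy.stats import pearsonr, spearmanr

free_living = [0.6e8, 1.2e8, 2.0e8, 3.1e8, 4.5e8, 5.0e8]
associated  = [0.4e7, 0.9e7, 1.8e7, 2.2e7, 3.9e7, 4.6e7]

r_p, p_p = pearsonr(free_living, associated)    # linear correlation
r_s, p_s = spearmanr(free_living, associated)   # rank correlation
print(f"Pearson  R = {r_p:.2f} (P = {p_p:.3f})")
print(f"Spearman R = {r_s:.2f} (P = {p_s:.3f})")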

Relevance:

90.00%

Abstract:

Pack ice around Svalbard was sampled during expedition ARK XIX/1 of RV "Polarstern" (March-April 2003) in order to determine the environmental conditions, species composition and abundances of sea-ice algae and heterotrophic protists during late winter. Compared to other seasons, the species diversity of algae (40 taxa in total) was not low, but abundances (5,000-448,000 cells/l) were lower by one to two orders of magnitude. Layers of high algal abundance were observed both at the bottom and in the interior of the ice. Inorganic nutrient concentrations (NO2, NO3, PO4, Si(OH)4) within the ice were mostly higher than during other seasons, and enriched compared to seawater by enrichment indices of 1.6-24.6 (corrected for losses through the desalination process). Thus, the survival of algae in Arctic pack ice was not limited by nutrients at the beginning of the productive season. Based on less-detailed physical data, light was considered the most probable factor controlling the onset of the spring ice-algal bloom in the lower part of the ice, while low temperatures and salinities inhibited algal growth in the upper part of the ice at the end of winter. Incorporation of ice algae probably took place during the entire freezing period. Possible overwintering strategies during the dark period, such as facultative heterotrophy, energy reserves, and resting spores, are discussed.
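For context, a commonly used salinity-normalized definition of the enrichment index is consistent with the "corrected for desalination" remark above; the paper's exact formula is not given here, so the following is an assumed standard form:

% If a nutrient followed the salt conservatively during freezing and brine
% drainage, its bulk-ice concentration would be C_sw * S_ice / S_sw.
% The enrichment index compares the measured concentration with that:
\begin{equation}
  EI = \frac{C_{ice}/S_{ice}}{C_{sw}/S_{sw}},
\end{equation}
% so EI > 1 (here 1.6-24.6) indicates accumulation in the ice beyond what
% desalination alone would allow.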

Relevance:

90.00%

Abstract:

A Tithonian sequence of shallow-water limestones, intercalated with siliciclastics and overlain by dolomite, was recovered during drilling at ODP Site 639 on the edge of a tilted fault block. The carbonates were strongly affected by fracturing, dolomitization, dedolomitization, and compaction. The chronology and nature of the fractures, the fracture infilling, and the diagenesis of the host rock are established and correlated for both the limestone and the dolomite. A first phase of dolomitization affected limestone that was already, at least partially, indurated. In the limestone unit, fractures were filled by calcite and dolomite; most of the dolomite was recrystallized into calcite, except in the upper part. In the dolomitic unit, the first-formed dolomite was progressively recrystallized into saddle dolomite as fractures were simultaneously activated. The dolomitic textures become less magnesian (the molar ratio mMg/mCa goes from 1.04-0.98 to 0.80), and the d18O (PDB) ranges from -10 per mil to -8 per mil. The various pores and fissures are either cemented by a calcic saddle dolomite (mMg/mCa ranging from 0.95 to 0.80) or filled with diverse internal sediments of detrital calcic dolomite, consisting of detrital dolomite silt (d18O from -9 per mil to -7 per mil) and a laminated yellow filling (with different d18O values that range from -4 per mil to +3 per mil). These internal sediments clearly contain elements of the host rock and fragments of saddle crystals. They are covered by marls with calpionellids of early Valanginian age, which permits dating most of the diagenetic phases as pre-Valanginian. The dolomitization appears to be related to fracturing resulting from extensional tectonics; it is also partially related to an erosional episode. Two models of dolomitization can be proposed from the petrographic characteristics and isotopic data. Early replacement of aragonite bioclasts by sparite, dissolution linked to dolomitization, and the negative d18O values of the dolomite suggest a freshwater influence and a 'mixing zone' model. On the other hand, the significant presence of saddle dolomite and the repeated negative d18O values suggest a temperature effect; because deep burial can be dismissed, hydrothermal formation of the dolomite would be the most probable model. Under both hypotheses, the vadose filling of cavities and fractures by silt suggests emersion, and the different, and even positive, d18O values of the last-formed yellow internal sediment could indicate dolomitization of the top of the sequence under saline to hypersaline conditions. Fracturing that reopened the porosity and drained the dolomitizing fluids was linked to extensional tectonics prior to the tilting of the block. These features indicate an earlier beginning of the rifting of the Iberian margin than previously known. Dolomitization, emersion, and erosion correspond to a eustatic sea-level lowering at the Berriasian/Valanginian boundary. Diagenesis, rather than sedimentation, seems to mark this global event and to provide a record of the regional tectonic history.

Relevance:

90.00%

Abstract:

Sediment cores were recovered from the New Ireland Basin, east of Papua New Guinea, in order to investigate the late Quaternary eruptive history of the Tabar-Lihir-Tanga-Feni (TLTF) volcanic chain. Foraminiferal d18O profiles were matched to the low-latitude oxygen isotope record to date the cores, which extend back to the early part of d18O Stage 9 (333 ka). Sedimentation rates decrease from >10 cm/1000 yr in cores near New Ireland to ~2 cm/1000 yr further offshore. The cores contain 36 discrete ash beds, mostly 1-8 cm thick and interpreted as either fallout or distal turbidite deposits. Most beds have compositionally homogeneous glass shard populations, indicating that they represent single volcanic events. Shards from all ash beds show the subduction-related pattern of strong enrichment in the large-ion lithophile elements relative to MORB, but three distinct compositional groups are apparent: Group A beds are shoshonitic and characterised by >1300 ppm Sr, high Ce/Yb and high Nb/Yb relative to MORB; Group B beds form a high-K series with MORB-like Nb/Yb but high Ce/Yb and well-developed negative Eu anomalies; and Group C beds are transitional between the low-K and medium-K series and characterised by flat chondrite-normalised REE patterns with low Nb/Yb relative to MORB. A comparison with published data from the TLTF chain, the New Britain volcanic arc and backarc (including Rabaul), and Bagana on Bougainville demonstrates that only Group A beds share the distinctive phenocryst assemblage and shoshonitic geochemistry of the TLTF lavas. The crystal- and lithic-rich character of the Group A beds points to a nearby source, and their high Sr, Ce/Yb and Nb/Yb match those of Tanga and Feni lavas. A youthful stratocone on the eastern side of Babase Island in the Feni group is the most probable source. Group A beds younger than 20 ka are more fractionated than the older Group A beds, and record the progressive development of a shallow-level magma chamber beneath the cone. In contrast, Group B beds represent glass-rich fallout from voluminous eruptions at Rabaul, whereas Group C beds represent distal glass-rich fallout from elsewhere along the volcanic front of the New Britain arc.

Relevance:

90.00%

Abstract:

This thesis focuses on the application of statistical mechanics to the study of static and jammed packings of soft granular media. This approach lies between the traditional macro- and micromechanical ones: it tries to establish the expected macroscopic properties of a granular system from an analysis of the properties of the particles and of the interactions between them, while taking the macroscopic constraints on the system into account. To do so, statistical theory is used together with some principles, concepts and definitions of continuum mechanics (stress and strain fields, elastic potential energy, etc.) and some homogenization techniques. The interaction between the particles is analysed using contact theory and the theory of capillary forces (produced by liquid menisci when the medium is wet).

The basic idea of statistical mechanics is that among all the solutions of a physical problem (such as the assembly of the particles of a granular medium in static equilibrium) there is a set that is compatible with our macroscopic knowledge of the system (for example, its volume, the stress it is subjected to, or the elastic potential energy it stores). This set still contains an enormous number of solutions. In the absence of further information, there is no reason to consider any of these solutions more probable than the others, so it is natural to assign all of them the same statistical weight and to construct a compatible distribution function. Proceeding in this way yields the most probable distribution function of some quantities associated with the solutions, for which it is essential to ensure that all of them are equally accessible by the assembly procedure, or protocol. This approach was originally developed for the study of ideal gases, but it can be extended to non-thermal systems such as those analysed in this thesis. The first such attempt, made a little over twenty years ago, was the volume ensemble. Since then it has been used and improved by many researchers around the world, while other ensembles have emerged, such as the energy and the force-moment (stress multiplied by volume) ensembles. Each ensemble describes sets of solutions characterized by different macroscopic constraints, but all of them lead to Maxwell-Boltzmann-type statistical distributions controlled by those constraints.

Building on this previous work, this thesis adapts the classical statistical mechanics approach to the case of soft granular media. A general framework for studying these ensembles is proposed, based on the comparison of all possible solutions in a phase space defined by the components of the force-moment, and on density-of-states functions. The theoretical development is complemented by results from molecular dynamics (MD, or DEM) simulations of the cyclic compression of two-dimensional granular systems. The simulations consider a linear, damped elastic mechanical interaction to which, in some cases, the cohesive force produced by water menisci is added; they were run on single and parallel processors. The results not only show that the distribution functions of the force-moment components of a system subjected to a specific protocol appear to be universal, but also reveal that many computational issues can determine which solutions are actually accessible.
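A minimal sketch of the entropy-maximization argument behind these ensembles, written for the force-moment ensemble in our own notation (the tensorial multiplier alpha-hat, often called angoricity in this literature, is part of the sketch, not a quantity taken from the thesis):

% Equal a priori weights over packings, plus the constraint that the mean
% force-moment <Sigma> is fixed, yield a Boltzmann-like law:
\begin{equation}
  P(\Sigma) = \frac{g(\Sigma)\, e^{-\hat{\alpha}:\Sigma}}{Z(\hat{\alpha})},
  \qquad
  Z(\hat{\alpha}) = \int g(\Sigma)\, e^{-\hat{\alpha}:\Sigma}\,\mathrm{d}\Sigma,
  \qquad
  \langle \Sigma \rangle = -\,\frac{\partial \ln Z}{\partial \hat{\alpha}}.
\end{equation}
% Sigma is the force-moment (stress times volume), g(Sigma) the density of
% states counting packings with that force-moment, and the tensorial
% Lagrange multiplier alpha-hat acts as an inverse-temperature analogue.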

Relevance:

90.00%

Abstract:

The comparison of the different bids submitted in the tender for a project, under the traditional contract system of open measurement and closed unit prices, requires analysis tools capable of discriminating between proposals that, while similar in overall amount, may have a very different economic impact during the execution of the works. One situation not easily detected by traditional methods is the behaviour of the actual cost in the face of deviations of the quantities actually executed on site from those estimated in the project. This paper proposes to address this situation through a quantitative risk analysis technique, the Monte Carlo method. This procedure, as is well known, consists of letting the input data that define the problem vary within defined probability functions, generating a large number of test cases, and treating the results statistically to obtain the most probable final values, together with the parameters needed to measure the reliability of the estimate. A model for the comparison of bids is presented, designed so that it can be applied to real cases by imposing on the known data variation conditions that are easy to establish by the professionals who carry out these tasks.
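A minimal sketch of the procedure just described, with invented bill-of-quantities data (the items, unit rates and the +/-20% variation are placeholders, not values from the paper):

# Monte Carlo comparison of two bids under re-measurement: executed
# quantities vary around the estimate; unit rates stay as bid.
import numpy as np

rng = np.random.default_rng(0)

# Estimated quantities and each bid's unit rates for three items.
estimated_qty = np.array([1000.0, 500.0, 200.0])   # e.g. m3, m2, units
rates_bid_a   = np.array([10.0, 25.0, 80.0])       # currency per unit
rates_bid_b   = np.array([12.0, 22.0, 75.0])

N = 100_000
# Let executed quantities vary, here +/-20% uniform (any probability
# function agreed by the practitioners would do).
qty = estimated_qty * rng.uniform(0.8, 1.2, size=(N, 3))

cost_a = qty @ rates_bid_a
cost_b = qty @ rates_bid_b

for name, cost in (("A", cost_a), ("B", cost_b)):
    print(f"Bid {name}: mean {cost.mean():,.0f}, "
          f"5-95% range {np.percentile(cost, 5):,.0f}-{np.percentile(cost, 95):,.0f}")
# Probability that bid A ends up cheaper over the simulated scenarios:
print("P(A < B) =", (cost_a < cost_b).mean())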

Relevance:

90.00%

Abstract:

The exhaustion, the absence or, simply, the uncertainty about the size of fossil-fuel reserves, added to the variability of prices and the increasing instability of the supply chain, create strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse-gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier will depend, a priori, on the control of the risks associated with its handling and storage. Among these, an undeniable explosion hazard appears as the main drawback of this alternative fuel.

This thesis investigates the numerical modelling of explosions in large volumes, focusing on the simulation of turbulent combustion in large computational domains in which the achievable resolution is strongly limited. The introduction gives a general description of explosion processes and concludes that the resolution restrictions make it necessary to model both the turbulence and the combustion. A critical review of the methodologies available for each follows, pointing out their strengths, deficiencies and suitability. The review concludes that, given the existing limitations, the only viable strategy for modelling the combustion is to use an expression for the turbulent burning velocity as a function of several parameters; models of this type are called turbulent flame speed models, and they close a balance equation for the combustion progress variable. It also concludes that the most adequate treatment of the turbulence is to use different methodologies, LES or RANS, depending on the geometry and on the resolution restrictions of each particular problem.

Based on these findings, a combustion model is developed within the turbulent flame speed framework that is able to overcome the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a new correlation that accounts for the simultaneous influence of the equivalence ratio, the temperature, the pressure and the dilution with steam; it is valid over a wider range of temperature, pressure and steam dilution than any previously available formulation. The turbulent burning velocity, in turn, can be obtained from correlations published in the literature; a comparison of several such expressions against experimental results shows that the formulation due to Schmidt is the most adequate for the conditions of this study.

The role of flame instabilities in the propagation of combustion fronts is assessed next. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are common in accidents at nuclear power plants. A model is therefore developed to estimate the effect of the instabilities, and specifically of the acoustic-parametric instability, on the flame propagation speed. The modelling includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the burning-velocity enhancement due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation; the results are combined into a model of the acoustic-parametric instability.

The model was then applied to several problems of importance for industrial safety, and the results were analysed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. Finally, an in-depth analysis of the Fukushima-Daiichi catastrophe was carried out, aimed at determining the amount of hydrogen that exploded in reactor one, in contrast with other studies on the subject, which have focused on the amount of hydrogen generated during the accident. The investigation determined that the most probable amount of hydrogen consumed in the explosion was 130 kg. That the combustion of such a relatively small quantity of hydrogen can cause such significant damage is remarkable, and an indication of the importance of this type of research. The branches of industry for which the model will be of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a particular impact on the transport sector and on nuclear energy, for both fission and fusion technologies.
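A compact statement of the progress-variable closure that turbulent flame speed models use, in the Zimont-type form mentioned above; the Favre-averaged notation and the turbulent diffusivity D_t are standard conventions assumed here, not equations quoted from the thesis:

% Balance equation for the Favre-averaged progress variable c
% (c = 0 unburnt, c = 1 burnt), closed by a turbulent flame speed S_T:
\begin{equation}
  \frac{\partial \bar{\rho}\tilde{c}}{\partial t}
  + \nabla \cdot \left( \bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c} \right)
  = \nabla \cdot \left( \bar{\rho} D_t \nabla \tilde{c} \right)
  + \rho_u S_T \left| \nabla \tilde{c} \right|,
\end{equation}
% where rho_u is the unburnt density. All the combustion physics is carried
% by the correlation chosen for S_T (e.g. a function of the laminar speed
% S_L, the turbulence intensity u', and the relevant length scales).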

Relevance:

90.00%

Abstract:

Since wall-bounded turbulence was first recognized more than a century ago, its near-wall region (the buffer layer) has been studied extensively and has become relatively well understood, owing to its low local Reynolds number and narrow scale separation. The region far from the wall (the outer layer) is also less problematic, since the statistics there scale well with the outer units. The intermediate region (the logarithmic layer), however, has been receiving increasing attention because of its self-similar properties. Flores et al. (2007b) and Flores & Jiménez (2010) showed that the logarithmic layer is largely independent of the other layers, implying that it might be inspected in isolation, which would significantly reduce the computational cost of simulating wall-bounded turbulence. Attempts in this direction were made by Mizuno & Jiménez (2013), who simulated the logarithmic layer without the near-wall region and obtained statistics that agree reasonably well with those of full simulations. Moreover, the logarithmic layer might be mimicked by other, simpler shear-driven turbulence. For example, Pumir (1996) found that statistically stationary homogeneous shear turbulence (SS-HST) also bursts, in a manner strikingly similar to the self-sustaining process of wall-bounded turbulence. Based on these considerations, this thesis tries to reveal to what extent the logarithmic layer of channels is similar to the simplest shear-driven turbulence, SS-HST, by comparing both the kinematics and the dynamics of the coherent structures in the two flows. Results for the channel are those of Lozano-Durán et al. (2012) and Lozano-Durán & Jiménez (2014b). The roadmap of this task is divided into three stages.

First, SS-HST is investigated by means of a new direct numerical simulation code, spectral in the two horizontal directions and compact-finite-difference in the direction of the shear, with no remeshing used to impose the shear-periodic boundary condition. The influence of the geometry of the computational box is explored. Since HST has no characteristic outer length scale and tends to fill the computational domain, long-term simulations of HST are 'minimal' in the sense of containing on average only a few large-scale structures. It is found that the main limit is the spanwise box width, Lz, which sets the length and velocity scales of the turbulence, and that the two other box dimensions should be sufficiently large (Lx > 2Lz, Ly > Lz) to prevent the other directions from being constrained as well. It is also found that very long boxes, Lx > 2Ly, couple with the passing period of the shear-periodic boundary condition and develop strong unphysical linearized bursts. Within those limits, the flow shows interesting similarities to, and differences from, other shear flows, and in particular the logarithmic layer of wall-bounded turbulence; they are explored in some detail. They include a self-sustaining process for large-scale streaks and quasi-periodic bursting. The bursting time scale is approximately universal, ~20 S^-1 (where S is the mean shear rate), and the availability of two different bursting systems allows the growth of the bursts to be related with some confidence to the shearing of initially isotropic turbulence. It is concluded that SS-HST, conducted within the proper computational parameters, is a very promising system for studying shear turbulence in general.

Second, the same coherent structures as in the channels studied by Lozano-Durán et al. (2012), namely three-dimensional vortex clusters (strong dissipation) and Qs (strong tangential Reynolds stress, -uv), are studied by direct numerical simulation of SS-HST with acceptable box aspect ratios and Reynolds numbers up to Re_λ ~ 250 (based on the Taylor microscale). The influence of intermittency on a time-independent threshold is discussed. These structures have streamwise elongations similar to those of the detached families in channels until they become comparable in size to the box. Their fractal dimensions and their inner and outer lengths as functions of volume agree well with their counterparts in channels. The study of their spatial organization finds that Qs of the same type are aligned roughly in the direction of the velocity vector of the quadrant they belong to, while Qs of different types are constrained by the requirement that there should be no velocity clash, which leaves Q2s (ejections, u < 0, v > 0) and Q4s (sweeps, u > 0, v < 0) paired in the spanwise direction. This is verified by inspecting velocity structures of other quadrants, such as u-w and v-w, in SS-HST, as well as the detached families in the channel. The streamwise alignment of attached Qs of the same type in channels is due to the modulation of the wall. The average flow field conditioned to Q2-Q4 pairs shows that vortex clusters lie between the two members of the pair, but prefer the two shear layers lodged at the top and bottom of the Q2s and Q4s respectively, so that the spanwise vorticity inside the vortex clusters does not cancel. The wall amplifies the difference between the sizes of the low- and high-speed streaks associated with attached Q2-Q4 pairs as the pairs approach the wall, which is verified by the correlation of the streamwise velocity conditioned to attached Q2s and Q4s of different heights. Vortex clusters in SS-HST associated with Q2s or Q4s are also flanked in the spanwise direction by counter-rotating streamwise vortices, as in the channel. The long conical 'wake' originating from tall attached vortex clusters, found by del Álamo et al. (2006) and Flores et al. (2007b) and absent in SS-HST, turns out to be present only for tall attached vortices associated with Q2s, not for those associated with Q4s, whose averaged flow field is actually quite similar to that of SS-HST.

Third, the temporal evolutions of Qs and vortex clusters are studied using the method of Lozano-Durán & Jiménez (2014b). Structures are sorted into branches, which are further organized into graphs. Both the spatial and the temporal resolution are chosen so as to capture the most probable pointwise Kolmogorov length and time at the most extreme moment. Because of the minimal-box effect, there is only one main graph, consisting of almost all the branches, whose instantaneous volume and number of structures follow the intermittent kinetic energy and enstrophy. The lifetime of branches, which is most meaningful for primary branches, loses its significance in SS-HST because the contributions of primary branches to the total Reynolds stress or enstrophy are almost negligible; the same is true in the outer layer of channels. Instead, the lifetimes of graphs in channels are compared with the bursting time in SS-HST. Vortex clusters are associated with almost the same quadrant, in terms of their mean velocities, throughout their lifetimes, especially those related to ejections and sweeps. As in channels, ejections in SS-HST move upwards with an average vertical velocity uτ (the friction velocity), while the opposite is true for sweeps; vortex clusters, on the other hand, are almost still in the vertical direction. In the streamwise direction, the structures are advected by the local mean velocity and thus deformed by the mean velocity difference: sweeps and ejections move respectively faster and slower than the mean velocity, both by 1.5 uτ, while vortex clusters move with the mean velocity. It is verified that the incoherence of the structures near the wall is due to the wall itself rather than to their small size. The results strongly suggest that the coherent structures in channels are not particularly associated with the wall, or even with a given shear profile.
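For background, the shear-periodic boundary condition referred to above is commonly written as follows; this is the standard Rogallo-type formulation for a mean flow U = S y in the streamwise direction x, stated here as an assumption of the sketch rather than quoted from the thesis:

% Fluctuations are periodic in x and z; across the shear direction y the
% upper boundary maps onto the lower one shifted by the accumulated mean
% shear displacement:
\begin{equation}
  \mathbf{u}'(x,\; y + L_y,\; z,\; t)
  = \mathbf{u}'\big(x - S L_y t \;(\mathrm{mod}\; L_x),\; y,\; z,\; t\big).
\end{equation}
% The streamwise offset S*Ly*t returns to zero with period T = Lx/(S*Ly),
% the 'passing period' with which very long boxes (Lx > 2Ly) can couple.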

Relevance:

90.00%

Abstract:

Two variables define the topological state of closed double-stranded DNA: the knot type, K, and ΔLk, the linking number difference from relaxed DNA. The equilibrium distribution of probabilities of these states, P(ΔLk, K), is related to two conditional distributions, P(ΔLk|K), the distribution of ΔLk for a particular K, and P(K|ΔLk), the distribution of knot types for a particular ΔLk, and also to two simple distributions: P(ΔLk), the distribution of ΔLk irrespective of K, and P(K), the distribution of knot types irrespective of ΔLk. We explored the relationships between these distributions. P(ΔLk, K), P(ΔLk), and P(K|ΔLk) were calculated from the simulated distributions of P(ΔLk|K) and of P(K). The calculated distributions agreed with previous experimental and theoretical results and extended considerably beyond them. Our major focus was on P(K|ΔLk), the distribution of knot types for a particular value of ΔLk, which had not been evaluated previously. We found that beyond small values of ΔLk, unknotted circular DNA is no longer the most probable state. Highly chiral knotted DNA has a lower free energy because it has less torsional deformation. Surprisingly, even at |ΔLk| > 12, only one or two knot types dominate the P(K|ΔLk) distribution, despite the huge number of knots of comparable complexity. A large fraction of the knots found belong to the small family of torus knots. The relationship between supercoiling and knotting in vivo is discussed.
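The relationships among these distributions are just the product rule and Bayes' theorem; spelled out in the notation of the abstract:

\begin{align}
  P(\Delta Lk, K) &= P(\Delta Lk \mid K)\, P(K)
                   = P(K \mid \Delta Lk)\, P(\Delta Lk), \\
  P(\Delta Lk)    &= \sum_{K} P(\Delta Lk \mid K)\, P(K), \\
  P(K \mid \Delta Lk) &= \frac{P(\Delta Lk \mid K)\, P(K)}
                              {\sum_{K'} P(\Delta Lk \mid K')\, P(K')},
\end{align}

% which is how P(ΔLk, K), P(ΔLk) and P(K|ΔLk) follow from the two
% simulated inputs, P(ΔLk|K) and P(K).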

Relevance:

90.00%

Abstract:

A multi-proxy study including sedimentological, mineralogical, biogeochemical and micropaleontological methods was conducted on sediment core PS69/849-2, retrieved from Burton Basin, MacRobertson Shelf, East Antarctica. The goal of this study was to depict the deglacial and Holocene environmental history of the MacRobertson Land-Prydz Bay region. A special focus was placed on the timing of ice-sheet retreat and on the variability of bottom-water formation due to sea-ice formation through the Holocene. Results from site PS69/849-2 provide the first paleo-environmental record of Holocene variations in bottom-water production, probably associated with the Cape Darnley polynya, the second largest polynya in the Antarctic. Methods included end-member modeling of laser-derived high-resolution grain-size data to reconstruct the depositional regimes and bottom-water activity. The provenance of current-derived and ice-transported material was reconstructed using clay-mineral and heavy-mineral analysis. Conclusions on biogenic production were drawn from determinations of biogenic opal and total organic carbon. It was found that the ice-shelf front started to retreat from the site around 12.8 ka BP. This coincides with results from other records in Prydz Bay and suggests warming during the early Holocene optimum, together with global sea-level rise, as the main trigger. Ice-rafted debris was then supplied to the site until 5.5 cal. ka BP, when Holocene global sea-level rise stabilized and glacial isostatic rebound on MacRobertson Land commenced. Throughout the Holocene, three episodes of enhanced bottom-water activity, probably due to elevated brine rejection in the Cape Darnley polynya, occurred between 11.5 and 9 cal. ka BP, between 5.6 and 4.5 cal. ka BP, and since 1.5 cal. ka BP. These periods are related to shifts from warmer to cooler conditions at the end of Holocene warm periods, in particular the early Holocene optimum, the mid-Holocene warm period, and the beginning of the neoglacial. In contrast, between 7.7 and 6.7 cal. ka BP brine rejection shut down, possibly owing to warm conditions and pronounced open-water intervals.

Relevance:

90.00%

Abstract:

Past sea-level records provide invaluable information about the response of ice sheets to climate forcing. Some such records suggest that the last deglaciation was punctuated by a dramatic period of sea-level rise, of about 20 metres, in less than 500 years. Controversy about the amplitude and timing of this meltwater pulse (MWP-1A) has, however, led to uncertainty about the source of the melt water and its temporal and causal relationships with the abrupt climate changes of the deglaciation. Here we show that MWP-1A started no earlier than 14,650 years ago and ended before 14,310 years ago, making it coeval with the Bølling warming. Our results, based on corals drilled offshore from Tahiti during Integrated Ocean Drilling Program Expedition 310, reveal that the increase in sea level at Tahiti was between 12 and 22 metres, with a most probable value between 14 and 18 metres, establishing a significant meltwater contribution from the Southern Hemisphere. This implies that the rate of eustatic sea-level rise exceeded 40 millimetres per year during MWP-1A.
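The quoted rate follows directly from the numbers above: taking the most probable rise and the longest duration permitted by the coral ages,

\begin{equation}
  \Delta t < 14{,}650 - 14{,}310 = 340\ \mathrm{yr}, \qquad
  \frac{14{,}000\ \mathrm{mm}}{340\ \mathrm{yr}} \approx 41\ \mathrm{mm\,yr^{-1}},
\end{equation}

% so even the lower bound of the most probable rise (14 m) implies a
% eustatic rate above 40 mm per year during MWP-1A.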

Relevance:

90.00%

Abstract:

Based on a qualitative and quantitative evaluation of Recent sediment samples (the top 3 cm of cores as well as Petersen grab samples) from the Drake Passage, between South America and Antarctica, the distribution of planktonic foraminifera and its relation to oceanographic conditions were investigated. The Antarctic Convergence, the northern limit of the cold Antarctic Surface Water, is shown to be of major importance in controlling the distributional pattern of planktonic species as well as their total numbers. South of the convergence, Globigerina pachyderma is usually the only species found in the sediment. It occurs with abundances no greater than 6000 per gram of dry sediment, and at most stations fewer than 100 specimens per gram of dry sediment were recovered. At a number of deep-sea stations below approximately 3700 m depth, no planktonic foraminifera were found at all. It is most probable that at least some of these stations are located below the limit of CaCO3 dissolution. North of the Antarctic Convergence, planktonic foraminiferal numbers are much higher and range from 1800 to 120,000 per gram of dry sediment. Eight species are the major constituents of the population: Globigerina pachyderma, Globigerina bulloides, Globigerina quinqueloba, Globigerina inflata, Globorotalia truncatulinoides, Globorotalia scitula, Globigerinita glutinata and Globigerinita uvula. The widespread occurrence of Globorotalia truncatulinoides, which in the northern hemisphere is usually a subtropical form, is especially noteworthy. Another Globigerina, morphologically similar to G. pachyderma, has frequently been recognized north of the Antarctic Convergence. Globigerina megastoma, which has its type area in the Drake Passage, has been found only rarely. Orbulina universa occurs in samples from the areas of higher water temperature around the South American continent. Globigerina pachyderma is predominantly sinistrally coiled throughout the area investigated, but a slight increase in the percentage of dextrally coiled specimens may be noticed with increasing water temperature, i.e. from south to north.

Relevance:

90.00%

Abstract:

Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description-length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fitted by the model. Not concerned with finding optimal descriptions, the cost function prefers to form minimum descriptions in a naive way for computational convenience; it is therefore called the Naive Description Length cost function. Finding minimum-description models is shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data. This is done by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, and on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time-series problems. Description-length principles are used in a similar fashion to derive a regulariser to control network complexity.
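An illustrative sketch of the mean-versus-mode distinction drawn above; this is not the Naive Description Length cost function itself, only a demonstration of why mode-seeking matters for multi-valued data:

# For multi-valued targets, the sum-of-squares minimiser is the mean,
# which can lie where there is no data; a mode estimate instead picks
# the most probable branch.
import numpy as np

rng = np.random.default_rng(1)

# Two-branch targets for the same input (as in an inverse kinematics
# problem with two solutions): y is near +1 or near -1.
branch = rng.integers(0, 2, 2000)
y = np.where(branch == 1, 1.0, -1.0) + 0.1 * rng.normal(size=branch.size)

# Sum-of-squares: the optimal constant prediction is the mean, ~0,
# a region containing almost no data.
print("sum-of-squares minimiser (mean):", round(y.mean(), 3))

# Mode estimate from a histogram: lands on one of the actual branches.
counts, edges = np.histogram(y, bins=60)
k = np.argmax(counts)
print("mode estimate:", round(0.5 * (edges[k] + edges[k + 1]), 3))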

Relevance:

90.00%

Abstract:

Long-term foetal surveillance is often to be recommended. Hence, fully non-invasive acoustic recording through the maternal abdomen represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded heart sound signal is heavily loaded with noise, so the determination of the foetal heart rate raises serious signal processing issues. In this paper, we present a new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings. Filtering is employed as the first step of the algorithm to reduce the background noise. A block for enhancing the first heart sounds is then used to further reduce other components of the foetal heart sound signal. A logic block, guided by a number of rules concerning foetal heart beat regularity, is applied next to detect the most probable first heart sounds from several candidates. A final block provides the exact timing of the first heart sounds and, in turn, the foetal heart rate estimate. The filtering and enhancing blocks are implemented by means of different techniques, so that different processing paths are proposed. Furthermore, a reliability index is introduced to quantify the consistency of the estimated foetal heart rate and, based on statistical parameters, a software quality index is designed to indicate the most reliable analysis procedure (that is, the combination of processing path and first-heart-sound time mark that provides the lowest estimation errors). The algorithm's performance was tested on phonocardiographic signals recorded in a local gynaecology private practice from a sample group of about 50 pregnant women. The phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by our algorithm and the one provided by the cardiotocographic device). Our results show that the proposed algorithm, in particular some of the analysis procedures, provides reliable foetal heart rate signals, very close to the reference cardiotocographic recordings.
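A minimal sketch of a pipeline of this kind (band-pass filtering, first-heart-sound enhancement via an envelope, regularity-constrained peak picking, rate estimation); the filter band, thresholds and rates are illustrative assumptions, not the authors' settings:

# Generic foetal-PCG heart rate estimator, for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def estimate_fhr(pcg, fs):
    # 1) Band-pass around typical first-heart-sound energy (assumed 20-70 Hz).
    b, a = butter(4, [20 / (fs / 2), 70 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)
    # 2) Enhance S1 sounds via the smoothed Hilbert envelope.
    win = int(0.05 * fs)
    env = np.convolve(np.abs(hilbert(filtered)), np.ones(win) / win, "same")
    # 3) Candidate S1 detection: peaks separated by a physiological
    #    minimum interval (FHR <= 240 bpm -> >= 0.25 s between beats).
    peaks, _ = find_peaks(env, distance=int(0.25 * fs),
                          height=env.mean() + 0.5 * env.std())
    # 4) FHR from the median inter-beat interval, in beats per minute.
    ibi = np.diff(peaks) / fs
    return 60.0 / np.median(ibi) if ibi.size else np.nan

# Synthetic test: a 140 bpm train of short 40 Hz "beats" buried in noise.
fs, bpm = 1000, 140
t = np.arange(0, 10, 1 / fs)
pcg = 0.3 * np.random.randn(t.size)
for bt in np.arange(0, 10, 60 / bpm):
    idx = (t > bt) & (t < bt + 0.05)
    pcg[idx] += np.sin(2 * np.pi * 40 * (t[idx] - bt))
print(f"estimated FHR: {estimate_fhr(pcg, fs):.0f} bpm")  # ~140 expected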

Relevance:

90.00%

Abstract:

Bioturbation in marine sediments has basically two aspects of interest for palaeo-environmental studies. First, the traces left by the burrowing organisms reflect the prevailing environmental conditions at the seafloor and can thus be used to reconstruct the ecological and palaeoceanographic situation; traces have the advantage over other proxies of practically always being preserved in situ. Secondly, for high-resolution stratigraphy, bioturbation is a nuisance because the stirring and mixing processes destroy the stratigraphic record. In order to evaluate the applicability of biogenic traces as palaeoenvironmental indicators, a number of gravity cores from the Portuguese continental slope, covering the period from the last glacial to the present, were investigated through X-ray radiographs. In addition, physical and chemical parameters were determined to define the environmental niche in each core interval. A number of traces could be recognized, the most important being Thalassinoides, Planolites, Zoophycos, Chondrites, Scolicia, Palaeophycus, Phycosiphon and the generally pyritized traces Trichichnus and Mycellia. The shifts between the different ichnofabrics agree strikingly well with the variations in ocean circulation caused by the changing climate. On the upper and middle slope, variations in the current intensity and oxygenation of the Mediterranean Outflow Water were responsible for shifts in the ichnofabric: larger traces such as Planolites and Thalassinoides dominated in coarse, well-oxygenated intervals, while small traces such as Chondrites and Trichichnus dominated in fine-grained, poorly oxygenated intervals. In contrast, on the lower slope, where calm, steady sedimentation conditions prevail, changes in sedimentation rate and nutrient flux controlled the distribution of larger traces such as Planolites, Thalassinoides, and Palaeophycus. Additionally, distinct layers of abundant Chondrites correspond to Heinrich events 1, 2, and 4, and are interpreted as a response to incursions of nutrient-rich, oxygen-depleted Antarctic waters during phases of reduced thermohaline circulation. The results clearly show that not one single factor but a combination of several factors is necessary to explain the changes in ichnofabric. Furthermore, the large variations in the extent and type of bioturbation and tiering between different settings clearly show that a more detailed knowledge of the factors governing bioturbation is necessary if we are to fully comprehend how proxy records are disturbed. A first attempt to automate part of the recognition and quantification of the ichnofabric was made using the DIAna image analysis program on digitized X-ray radiographs. The results show that an enhanced abundance of pyritized microburrows appears to be coupled to organic-rich sediments deposited under dysoxic conditions, while coarse-grained sediments inhibit the formation of pyritized burrows. However, even small changes in the program settings controlling the grey-scale threshold and the sensitivity resulted in large shifts in the number of detected burrows; this method can therefore only be considered semi-quantitative. Through AMS-14C dating of sample pairs from Zoophycos spreiten and the surrounding host sediment, age reversals of up to 3,320 years could be demonstrated for the first time. The spreiten material is always several thousand years younger than the surrounding host sediment. Together with detailed X-ray radiograph studies, this shows that the trace maker collects the material at the seafloor and transports it downwards, to more than one metre in places, into the underlying sediment, where it is deposited in the distinct structures termed spreiten. Age reversals of several thousand years can therefore be expected whenever Zoophycos is unknowingly sampled. These results also render the hitherto proposed ethological models for Zoophycos largely implausible. A combination of detritus feeding, short-term caching, and hibernation, possibly combined with gardening, is suggested here instead as an explanation for this complicated burrow.