863 results for 380206 Language in Time and Space (incl. Historical Linguistics, Dialectology)
Abstract:
Earth's largest reactive carbon pool, marine sedimentary organic matter, becomes increasingly recalcitrant during burial, making it almost inaccessible as a substrate for microorganisms and thereby limiting metabolic activity in the deep biosphere. Because elevated temperature acting over geological time leads to the massive thermal breakdown of the organic matter into volatiles, including petroleum, the question arises whether microorganisms can directly utilize these maturation products as a substrate. While migrated thermogenic fluids are known to sustain microbial consortia in shallow sediments, an in situ coupling of abiotic generation and microbial utilization has not been demonstrated. Here we show, using a combination of basin modelling, kinetic modelling, geomicrobiology and biogeochemistry, that microorganisms inhabit the active generation zone in the Nankai Trough, offshore Japan. Three sites from ODP Leg 190 were evaluated, namely 1173, 1174 and 1177, drilled in nearly undeformed Quaternary and Tertiary sedimentary sequences seaward of the Nankai Trough itself. Paleotemperatures were reconstructed from subsidence profiles, compaction modelling, present-day heat flow, downhole temperature measurements and organic maturity parameters. Today's heat flow distribution can be considered mainly conductive, and is extremely high in places, reaching 180 mW/m². The kinetic parameters describing total hydrocarbon generation, determined by laboratory pyrolysis experiments, were used in the model to predict the timing of generation in time and space. The model predicts that the onset of present-day generation lies between 300 and 500 m below the sea floor (5100-5300 m below mean sea level), depending on well location. At Site 1174, 5-10% conversion has taken place at a present-day temperature of ca. 85 °C. The predictions were largely validated by on-site hydrocarbon gas measurements. The presence of viable organisms in the same depth range has been demonstrated using 14C-radiolabelled substrates for methanogenesis, bacterial cell counts and intact phospholipids. Altogether, these results indicate that abiotic thermal degradation reactions proceed in the same part of the sedimentary column in which a deep biosphere exists. The organic matter preserved in Nankai Trough sediments is of the type that generates putative feedstocks for microbial activity, namely oxygenated compounds and hydrocarbons. Furthermore, the rates of thermal degradation calculated from the kinetic model closely resemble rates of respiration and electron donor consumption independently measured in other deep biosphere environments. We deduce that abiotically driven degradation reactions have provided substrates for microbial activity in deep sediments at this convergent continental margin.
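The kinetic description of hydrocarbon generation used in this kind of basin modelling is typically a set of parallel first-order reactions with a distribution of activation energies, integrated over the reconstructed temperature history. The abstract does not give the calibrated parameters, so the sketch below uses purely illustrative values; the frequency factor, activation-energy spectrum and heating path are assumptions, not the Nankai Trough calibration.

    # Minimal sketch of bulk petroleum-generation kinetics: parallel first-order
    # reactions with a discrete activation-energy distribution, integrated along a
    # temperature history. All parameter values are illustrative placeholders.
    import numpy as np

    R = 8.314e-3                                   # gas constant, kJ/(mol K)
    A = 1.0e14 * 3.1536e13                         # frequency factor: 1e14 1/s expressed in 1/Myr
    energies = np.arange(190.0, 251.0, 10.0)       # activation energies, kJ/mol
    weights = np.exp(-((energies - 215.0) / 15.0) ** 2)
    weights /= weights.sum()                       # fraction of generative potential per energy

    def conversion(temps_k, dt_myr):
        """Cumulative fractional conversion for a temperature history (K) sampled every dt_myr."""
        x = np.zeros_like(energies)                # conversion of each parallel reaction
        total = []
        for t_k in temps_k:
            k = A * np.exp(-energies / (R * t_k))  # first-order rate constants, 1/Myr
            x = 1.0 - (1.0 - x) * np.exp(-k * dt_myr)
            total.append(float(np.dot(weights, x)))
        return np.array(total)

    dt = 0.01                                      # time step, Myr
    time = np.arange(0.0, 2.0 + dt, dt)            # 2 Myr of rapid burial
    temps = 273.15 + 15.0 + 40.0 * time            # illustrative heating path, 40 degC/Myr
    frac = conversion(temps, dt)
    print(f"conversion at {temps[-1] - 273.15:.0f} degC after {time[-1]:.1f} Myr: {100 * frac[-1]:.1f}%")

Running such a scheme down a well's temperature-depth profile is what locates the predicted onset of generation within the sediment column.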
Abstract:
This study investigates landscape evolution and soil development in the loess area near Regensburg, Eastern Bavaria, between approximately 6000 and 2000 yr BP (radiocarbon years). The focus is on the question of how man and climate influenced landscape evolution and what their relative significance was. The theoretical background concerning the factors that controlled prehistoric soil erosion in Central Europe is summarized with respect to rainfall intensity and distribution, pedogenesis, Pleistocene relief, and prehistoric farming. Colluvial deposits, flood loams, and soils were studied at ten different and representative sites that served as archives of their respective palaeoenvironments. Geomorphological, sedimentological, and pedological methods were applied. According to the findings presented here, there was a high asynchrony of landscape evolution in the investigation area, which was due to prehistoric land-use patterns. Prehistoric land use and settlement caused highly differentiated phases of morphodynamic activity and stability in time and space. These are documented in the individual catenas of each site. In general, the Pleistocene relief was substantially lowered. At the same time, smaller landforms such as dells and minor asymmetric valleys were filled up and strongly transformed. However, there were short phases at many sites during which short-lived linear erosion features ('Runsen') formed as a result of exceptional rainfalls. These forms are the results of single events and show no regional trends. Generally, the onset of the sedimentation of colluvial deposits took place much earlier (usually 3500 yr BP (radiocarbon) and younger) than the formation of flood loams. Thus, the deposition of flood loams in the Kleine Laaber river valley started, mainly as a consequence of Iron Age farming, only at around 2500 yr BP (radiocarbon). A cascade system explains the different ages of colluvial deposits and flood loams: as a result of prehistoric land use, dells and other minor Pleistocene landforms were filled with colluvial sediments. After the filling of these primary sediment traps, eroded material was transported into the flood plains, thus forming flood loams. At present, however, we cannot quantify the extent of prehistoric soil erosion in the investigation area. The three factors that controlled prehistoric landscape evolution in the loess area near Regensburg are as follows: 1. The transformation from a natural to a prehistoric cultural landscape was the most important factor: a landscape with stable relief was changed into a highly morphodynamic one, with soil erosion as the dominant process of this change. 2. The sediment traps of the pre-anthropogenic relief determined where the material eroded from the soils was deposited: either sedimentation took place on the slopes, or the already filled sediment traps of the slopes rendered flood loam formation possible. 3. Climatic influence of any importance can only be documented as the result of land use in connection with singular and/or statistical events of heavy rainfall. Without human impact, no significant change in the Holocene landscape would have been possible.
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges and rafted ice; ice compression is also considered. To develop the models, two datasets were utilized. First, data from the Automatic Identification System on the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions, which varied in time and space. The obtained results show good predictive power of the models: on average 80% accuracy for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters, where ship performance is reflected in the objective function.
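As a rough illustration of the Bayesian learning step described above, the sketch below fits a naive Bayes classifier that maps ice features to a discretized ship-speed class. This is a simplified stand-in, not the models of the paper: the feature set, the synthetic data-generating rule and the class boundaries are all hypothetical.

    # Simplified stand-in for a probabilistic speed-prediction model: a naive Bayes
    # classifier mapping ice features to one of four ship-speed classes.
    # Feature names, the data-generating rule and the thresholds are hypothetical.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.uniform(0.0, 0.8, n),   # level-ice thickness, m
        rng.uniform(0.0, 1.0, n),   # ice concentration, 0-1
        rng.uniform(0.0, 0.3, n),   # ridged/rafted-ice fraction
        rng.uniform(0.0, 1.0, n),   # ice-compression index
    ])
    # Hypothetical severity score translated into speed classes
    # (0 = beset or very slow ... 3 = near open-water speed).
    severity = 2.0 * X[:, 0] + X[:, 1] + 3.0 * X[:, 2] + X[:, 3] + rng.normal(0.0, 0.2, n)
    y = 3 - np.digitize(severity, [1.5, 2.5, 3.5])

    model = GaussianNB().fit(X[:800], y[:800])
    print("held-out accuracy:", round(model.score(X[800:], y[800:]), 2))
    print("P(speed class | ice features):", model.predict_proba(X[800:801]).round(2))

A full Bayesian-network formulation would additionally encode dependencies among the ice variables rather than assuming conditional independence, at the cost of a more involved structure-learning step.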
Abstract:
To properly understand and model animal embryogenesis, it is crucial to obtain detailed measurements, both in time and space, of gene expression domains and cell dynamics. This challenge has been confronted in recent years by a surge of atlases that integrate a statistically relevant number of different individuals to obtain robust, complete information about the spatiotemporal locations of gene expression patterns. This paper discusses the fundamental image analysis strategies required to build such models and the most common problems found along the way. We also discuss the main challenges and future goals in the field.
Abstract:
In order to properly understand and model the gene regulatory networks at work in animal development, it is crucial to obtain detailed measurements, both in time and space, of gene expression domains. In this paper, we propose a complete computational framework to fulfill this task and to create a 3D Atlas of early zebrafish embryogenesis annotated with both the cellular localizations and the expression levels of different genes at different developmental stages. The strategy used to construct such an Atlas is illustrated here with the expression patterns of five different genes at 6 hours of development post fertilization.
Abstract:
Digital atlases of animal development provide a quantitative description of morphogenesis, opening the path toward process modeling. Prototypic atlases offer a data integration framework in which to gather information from cohorts of individuals with phenotypic variability. Relevant information for further theoretical reconstruction includes measurements in time and space of cell behaviors and gene expression. The latter, as well as data integration into a prototypic model, relies on image processing strategies. Developing the tools to integrate and analyze biological multidimensional data is highly relevant for assessing chemical toxicity or performing preclinical drug testing. This article surveys some of the most prominent efforts to assemble these prototypes, categorizes them according to salient criteria, and discusses the key questions in the field and the future challenges toward the reconstruction of multiscale dynamics in model organisms.
Abstract:
The terahertz region of the electromagnetic spectrum (100 GHz-10 THz) hosts a wide range of applications in fields as diverse as radio astronomy, molecular spectroscopy, medicine, security and radar, among others. The main obstacles to the development of these applications are the high production cost of systems working at these frequencies, their costly maintenance, large volume and low reliability. Among the different THz technologies, Schottky diode technology plays an important role due to its maturity and the inherent simplicity of these devices. Besides, Schottky diodes can operate at both room and cryogenic temperatures, with high efficiency when used as multipliers and moderate noise temperatures in mixers. This PhD thesis is mainly concerned with the analysis of the physical processes responsible for the electrical and noise characteristics of Schottky diodes, as well as with the analysis and design of frequency multipliers and mixers at millimeter and submillimeter wavelengths. The first part of the thesis deals with the analysis of the physical phenomena limiting the electrical performance of GaAs and GaN Schottky diodes and of their noise spectra. To carry out this analysis, a Monte Carlo model of the diode has been used as a reference because of its high accuracy and reliability at millimeter and submillimeter wavelengths. Besides, the Monte Carlo model provides a direct description of the noise spectra of the devices without the need for any additional analytical or empirical model. Physical phenomena such as velocity saturation, carrier inertia, the dependence of the electron mobility on the epilayer length, plasma resonance, and nonlocal, non-stationary effects in time and space have been analysed. A complete analysis of the current noise spectra of GaAs and GaN Schottky diodes operating under both static and time-varying conditions is also presented in this part of the thesis. The results obtained provide a better understanding of the electrical and noise responses of Schottky diodes under high-frequency and/or high-electric-field conditions. These results have also helped to determine the limitations of the numerical and analytical models used in the analysis of the electrical and noise responses of these devices. The second part of the thesis is devoted to the analysis of frequency multipliers and mixers by means of an in-house circuit simulation tool based on the harmonic balance technique. Different lumped equivalent-circuit, drift-diffusion and Monte Carlo models have been considered in this analysis. The Monte Carlo model coupled to the harmonic balance technique has been used as a reference to evaluate the limitations and range of validity of the lumped equivalent-circuit and drift-diffusion models for the design of frequency multipliers and mixers. A remarkable feature of this reference simulation tool is that it enables the design of Schottky circuits from both electrical and noise considerations. The simulation results presented in this part of the thesis, for both multipliers and mixers, have been compared with measured results available in the literature. In addition, the simulator that integrates the Monte Carlo model with the harmonic balance technique allows the analysis and design of circuits above 1 THz.
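For context, the lumped equivalent-circuit description of a Schottky diode that feeds a harmonic-balance simulator reduces essentially to two bias-dependent nonlinearities: the thermionic-emission current and the depletion (junction) capacitance. The sketch below evaluates both with illustrative parameter values; the saturation current, ideality factor, zero-bias capacitance and built-in voltage are assumptions, not the parameters of the GaAs and GaN devices studied in the thesis.

    # Illustrative Schottky-diode nonlinearities used in lumped equivalent-circuit
    # models: thermionic-emission I-V law and bias-dependent junction capacitance.
    # Parameter values are placeholders, not fitted device data.
    import numpy as np

    Q = 1.602e-19      # elementary charge, C
    KB = 1.381e-23     # Boltzmann constant, J/K

    def schottky_current(v, i_sat=1e-15, eta=1.15, temp=295.0):
        """Thermionic-emission current (A) at junction voltage v (V)."""
        vt = KB * temp / Q                          # thermal voltage, V
        return i_sat * (np.exp(v / (eta * vt)) - 1.0)

    def junction_capacitance(v, c_j0=1e-15, v_bi=0.85, gamma=0.5):
        """Depletion capacitance (F); valid for v below the built-in voltage v_bi."""
        return c_j0 / (1.0 - v / v_bi) ** gamma

    for vj in np.linspace(-2.0, 0.7, 7):
        print(f"V = {vj:+.2f} V   I = {schottky_current(vj):.3e} A   Cj = {junction_capacitance(vj):.3e} F")

In a harmonic-balance loop these nonlinearities are evaluated in the time domain and the embedding linear network in the frequency domain, iterating until the spectra at the diode terminals are consistent.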
Abstract:
Wheat (Triticum aestivum L.), rice (Oryza sativa L.), and maize (Zea mays L.) provide about two-thirds of all energy in human diets, and the four major cropping systems in which these cereals are grown represent the foundation of the human food supply. Yield per unit of time and land has increased markedly during the past 30 years in these systems, a result of intensified crop management involving improved germplasm, greater inputs of fertilizer, production of two or more crops per year on the same piece of land, and irrigation. Meeting future food demand while minimizing expansion of cultivated area will depend primarily on continued intensification of these same four systems. The manner in which further intensification is achieved, however, will differ markedly from the past because the exploitable gap between average farm yields and genetic yield potential is closing. At present, the rate of increase in yield potential is much less than the expected increase in demand. Hence, average farm yields must reach 70–80% of the yield potential ceiling within 30 years in each of these major cereal systems. Achieving consistent production at these high levels without causing environmental damage requires improvements in soil quality and precise management of all production factors in time and space. The scope of the scientific challenge related to these objectives is discussed. It is concluded that major scientific breakthroughs must occur in basic plant physiology, ecophysiology, agroecology, and soil science to achieve the ecological intensification that is needed to meet the expected increase in food demand.
Abstract:
Rapid progress in effective methods to image brain functions has revolutionized neuroscience. It is now possible to study noninvasively, in humans, neural processes that were previously accessible only in experimental animals and in brain-injured patients. In this endeavor, positron emission tomography has been the leader, but superconducting quantum interference device-based magnetoencephalography (MEG) is gaining a firm role as well. With the advent of instruments covering the whole scalp, MEG, typically with 5-mm spatial and 1-ms temporal resolution, allows neuroscientists to track cortical functions accurately in time and space. We present five representative examples of recent MEG studies in our laboratory that demonstrate the usefulness of whole-head magnetoencephalography in investigations of the spatiotemporal dynamics of cortical signal processing.
Abstract:
The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether the model earthquakes could be predicted in the real world using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space, and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip appears to be below the detection limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact, the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with the negative logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake is going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely route to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks.
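The constitutive equations referred to above are not written out in the abstract; laboratory rock-friction behavior of this kind is commonly expressed in the Dieterich-Ruina rate-and-state form, reproduced below for reference together with the 'aging' evolution law. The choice of evolution law and the parameter symbols are conventional, not taken from this paper.

    \mu(V, \theta) = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0 \theta}{D_c},
    \qquad
    \frac{d\theta}{dt} = 1 - \frac{V \theta}{D_c}

Here V is the sliding velocity, \theta a state variable whose steady-state value is D_c / V, \mu_0 the reference friction coefficient at velocity V_0, and a, b and D_c are laboratory-derived constants; velocity-weakening behavior (b > a) is what allows the modeled fault segment to produce stick-slip events.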
Abstract:
As a measure of dynamical structure, short-term fluctuations of coherence between 0.3 and 100 Hz in the electroencephalogram (EEG) of humans were studied from recordings made by chronic subdural macroelectrodes 5-10 mm apart, on the temporal, frontal, and parietal lobes, and from intracranial probes deep in the temporal lobe, including the hippocampus, during sleep, alert, and seizure states. The time series of coherence between adjacent sites, calculated every second or less often, varies widely in stability over time; sometimes it is stable for half a minute or more. Within 2-min samples, coherence commonly fluctuates by a factor of up to 2-3, in all bands, on the time scale of seconds to tens of seconds. The power spectrum of the time series of these fluctuations is broad, extending to 0.02 Hz or slower, and is weighted toward the slower frequencies; little power is faster than 0.5 Hz. Some records show conspicuous swings with a preferred duration of 5-15 s, either irregularly or quasirhythmically, with a broad peak around 0.1 Hz. Periodicity is not statistically significant in most records. In our sampling, we have not found a consistent difference between lobes of the brain, between subdural and depth electrodes, or between sleeping and waking states. Seizures generally raise the mean coherence in all frequencies and may reduce the fluctuations through a ceiling effect. The coherence time series of different bands are positively correlated (0.45 overall); significant nonindependence extends over at least two octaves. Coherence fluctuations are quite local; the time series of adjacent electrode pairs is correlated with that of the nearest-neighbor pairs (10 mm) with a coefficient averaging approximately 0.4, falling to approximately 0.2 for neighbors-but-one (20 mm) and to < 0.1 for neighbors-but-two (30 mm). The evidence indicates fine structure in time and space, and a dynamic and local determination of this measure of cooperativity. The tendency of widely separated frequencies to fluctuate together excludes independent oscillators as the general or usual basis of the EEG, although a few rhythms are well known under special conditions. Broad-band events may be the more usual generators. Loci only a few millimeters apart can fluctuate widely within seconds, either in parallel or independently. Scalp EEG coherence cannot be predicted from subdural or deep recordings, or vice versa, and intracortical microelectrodes show still greater coherence fluctuation in space and time. Widely used computations of chaos and dimensionality made upon data from scalp or even subdural or depth electrodes, even when reproducible in successive samples, cannot be considered representative of the brain or of the given structure or brain state, but only of the scale or view (receptive field) of the electrodes used. Relevant to the evolution of more complex brains, an outstanding fact of animal evolution, we believe that measures of cooperativity are likely to be among the dynamic features by which major evolutionary grades of brains differ.
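As a concrete illustration of the measure being tracked, the sketch below recomputes magnitude-squared coherence between two channels once per second and averages it within a single frequency band. The sampling rate, window length, band and synthetic signals are illustrative assumptions, not the clinical recordings of this study.

    # Short-window coherence time series between two channels: one magnitude-squared
    # coherence estimate per second, averaged over a frequency band.
    # Sampling rate, window length, band and signals are illustrative.
    import numpy as np
    from scipy.signal import coherence

    fs = 256                                    # sampling rate, Hz
    t = np.arange(0, 120, 1.0 / fs)             # 2 minutes of signal
    shared = np.sin(2 * np.pi * 10 * t)         # common 10 Hz component
    rng = np.random.default_rng(1)
    x = shared + rng.normal(0.0, 1.0, t.size)         # channel 1
    y = 0.7 * shared + rng.normal(0.0, 1.0, t.size)   # channel 2

    band = (8.0, 13.0)                          # band to average over, Hz
    coh_series = []
    for second in range(120):                   # one coherence estimate per second
        sl = slice(second * fs, (second + 1) * fs)
        f, cxy = coherence(x[sl], y[sl], fs=fs, nperseg=fs // 4)
        in_band = (f >= band[0]) & (f <= band[1])
        coh_series.append(cxy[in_band].mean())
    coh_series = np.array(coh_series)
    print("mean band coherence:", coh_series.mean().round(2),
          "| range of 1-s estimates:", coh_series.min().round(2), "-", coh_series.max().round(2))

The factor-of-two-to-three swings described above would show up directly as spread in such a per-second series.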
Abstract:
The chemical and mineralogical composition of pelagic sediments from the East Pacific Ocean has been determined with the aim of defining the ultimate sources and the mechanisms of formation of the solid phases. The distribution of elements between sea-water, the pore solution and the various solid components of the sediments permits interpretation of the variations in time and space of the gross chemical composition of pelagic clays. For example, manganese, present in sea-water in a divalent form, is apparently oxidized at the sediment-water interface to tetravalent species which subsequently become part of the group of ferromanganese oxide minerals found in the marine environment. It is suggested that the rate of manganese accumulation in sediments is some function of the length of time the sediment surface is in contact with sea-water. The contribution of chemical species from the different geospheres is considered. The quantitative importance of pelagic clays in the major sedimentary cycle is studied on the basis of the distribution of weathered igneous rock products between continental deposits, pelagic deposits and sea-water. These analyses of a wide variety of pelagic clays allow a reformulation of the geochemical balance, and it is concluded that pelagic clays account for approximately 13 per cent of the total mass of sediments produced over geologic time.