959 results for time domain windowing
Abstract:
The marine laboratories in Plymouth have sampled at two principal sites in the Western English Channel for over a century, in open-shelf (station E1; 50° 02'N, 4° 22'W) and coastal (station L4; 50° 15'N, 4° 13'W) waters. These stations are seasonally stratified from late April until September, and the variable biological response is regulated by subtle variations in temperature, light, nutrients and meteorology. Station L4 is characterized by summer nutrient depletion, although intense summer precipitation, increasing riverine input to the system, results in pulses of increased nitrate concentration and surface freshening. The winter nutrient concentrations at E1 are consistent with an open-shelf site. Both stations have a spring and an autumn phytoplankton bloom; at station E1, the autumn bloom tends to dominate in terms of chlorophyll concentration. The last two decades have seen a warming of around 0.6°C per decade, and this is superimposed on several periods of warming and cooling over the past century. In general, over the Western English Channel domain, the end of the 20th century was around 0.5°C warmer than the first half of the century. The warming magnitude and trend are consistent with other stations across the north-west European Shelf and occurred during a period of reduced wind stress and increased levels of insolation (+20%); both are correlated with the larger-scale climatic forcing of the North Atlantic Oscillation.
Abstract:
The evolution of water content in a sandy soil during the summer 2010 sprinkler irrigation campaign of a sugar beet field located at Valladolid (Spain) is assessed with a capacitive FDR (Frequency Domain Reflectometry) EnviroScan probe. This field is one of the experimental sites of the Spanish research centre for sugar beet development (AIMCRA). The work focuses on monitoring the evolution of soil water content over consecutive irrigations during the second half of July (from the 12th to the 28th). These measurements are then used to simulate water movement by means of Hydrus-2D. The probe logged water content readings (m3/m3) at 10, 20, 40 and 60 cm depth every 30 minutes, and was placed between two rows within the typical 12 x 15 m sprinkler irrigation framework. A texture analysis of the soil profile was also conducted. The irrigation frequency on this farm was set by the farmer's own criteria: aiming to minimize electricity pumping costs, he used to irrigate at night and during the weekend, i.e. at longer intervals between irrigations than expected. However, the high evapotranspiration rates and the weekly sugar beet water consumption (up to 50 mm/week) clearly indicated the need to shorten this interval. Moreover, the farmer used to irrigate for five or six hours, whilst the EnviroScan readings showed the soil profile reaching saturation after the first three hours. It must be noted that AIMCRA provides its members with an SMS service reporting the weekly sugar beet water requirement; based on different meteorological stations and evapotranspiration pans, farmers thus have an estimate of the weekly irrigation needs. Nevertheless, how to irrigate remains the farmer's decision. Thus, in order to minimize water stress and pumping costs, a suitable irrigation time and irrigation frequency were modeled with Hydrus-2D. Results for the aforementioned period showed water content values ranging from 35 and 30 (m3/m3) at 10 and 20 cm depth (two hours after irrigation) down to minima of 14 and 13 (m3/m3) (two hours before irrigation). At 40 and 60 cm depth, water content varied steadily across the dates: the greater the root activity, the greater the water content variation. According to the EnviroScan results and the Hydrus-2D modeling, shorter irrigation intervals and irrigation times are suggested.
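As an aside, the scheduling argument can be made concrete with a back-of-envelope check in Python, using only the figures quoted in the abstract (50 mm/week peak consumption; profile saturated after roughly three hours of a five-to-six-hour set). The sprinkler application rate below is an assumed, illustrative value, not a figure from the study.

    # Back-of-envelope irrigation scheduling check. The application rate
    # is an ASSUMED illustrative value; the real one depends on the
    # 12 x 15 m sprinkler framework and nozzle characteristics.
    weekly_need_mm = 50.0        # peak sugar beet consumption (abstract)
    application_rate_mm_h = 5.0  # assumed sprinkler application rate
    saturation_after_h = 3.0     # profile saturated after ~3 h (EnviroScan)

    # Water applied beyond saturation is lost to deep percolation, so
    # only the first hours of each set are useful.
    useful_event_mm = application_rate_mm_h * saturation_after_h

    events_per_week = weekly_need_mm / useful_event_mm
    print(f"~{events_per_week:.1f} sets/week of {saturation_after_h:.0f} h each")
    # ~3.3 shorter, more frequent sets per week, in line with the
    # abstract's suggestion of shorter irrigation intervals and times.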
Abstract:
Traditional schemes for abstract interpretation-based global analysis of logic programs generally focus on obtaining procedure argument mode and type information. Variable sharing information is often given only the attention needed to preserve the correctness of the analysis. However, such sharing information can be very useful. In particular, it can be used for predicting runtime goal independence, which can eliminate costly run-time checks in and-parallel execution. In this paper, a new algorithm for doing abstract interpretation in logic programs is described which concentrates on inferring the dependencies of the terms bound to program variables with increased precision and at all points in the execution of the program, rather than just at a procedure level. Algorithms are presented for computing abstract entry and success substitutions which extensively keep track of variable aliasing and term dependence information. In addition, a new, abstract domain independent fixpoint algorithm is presented and described in detail. The algorithms are illustrated with examples. Finally, results from an implementation of the abstract interpreter are presented.
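To make the sharing information discussed in this abstract concrete, here is a minimal Python sketch of the abstract unification step of a set-sharing domain in the Jacobs–Langen style, where an abstract substitution is a set of sets of program variables that may be bound to terms with a common variable. This is an illustrative rendition of the general technique, not the paper's exact algorithm, which additionally tracks freeness and finer term dependence information.

    from itertools import combinations

    def star(sh):
        # Closure of a set of sharing groups under pairwise union.
        closure = {frozenset(g) for g in sh}
        changed = True
        while changed:
            changed = False
            for a, b in combinations(list(closure), 2):
                u = a | b
                if u not in closure:
                    closure.add(u)
                    changed = True
        return closure

    def amgu(x, t_vars, sh):
        # Abstract unification x = t over a set-sharing abstraction sh:
        # x is a variable, t_vars the variables of t, sh a set of frozensets.
        rel_x = {g for g in sh if x in g}
        rel_t = {g for g in sh if g & t_vars}
        irrelevant = sh - (rel_x | rel_t)
        # After x = t, anything that shared with x may share with
        # anything that shared with t, through any union of such groups.
        return irrelevant | {a | b for a in star(rel_x) for b in star(rel_t)}

    # Initially each variable shares only with itself.
    sh0 = {frozenset({'X'}), frozenset({'Y'}), frozenset({'Z'})}
    print(sorted(map(sorted, amgu('X', {'Y'}, sh0))))
    # -> [['X', 'Y'], ['Z']] : after X = Y the pair may alias; Z stays independent.

Two goals can then be proven independent at run time if no sharing group contains variables from both, which is what eliminates the run-time checks in and-parallel execution.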
Abstract:
This paper discusses some issues which arise in the dataflow analysis of constraint logic programming (CLP) languages. The basic technique applied is that of abstract interpretation. First, some types of optimizations possible in a number of CLP systems (including efficient parallelization) are presented, and the information that has to be obtained at compile-time in order to implement such optimizations is considered. Two approaches are then proposed and discussed for obtaining this information for a CLP program: one based on an analysis of a CLP metainterpreter using standard Prolog analysis tools, and a second one based on direct analysis of the CLP program. For the second approach, an abstract domain which approximates groundness (also referred to as "definiteness") information (i.e., being constrained to a single value) and the related abstraction functions are presented.
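As a toy companion to the groundness ("definiteness") domain mentioned above, the Python sketch below propagates definiteness through CLP-style constraints to a fixpoint: a variable becomes definite once a constraint determines it from variables that are already definite. The constraint encoding is invented for illustration; the paper defines proper abstraction functions over an abstract domain instead.

    # Each entry (var, deps) reads: var is uniquely determined once all
    # of deps are definite. E.g. X = Y + Z yields three such entries.
    # This encoding is an illustrative simplification.
    constraints = [
        ('X', ['Y', 'Z']), ('Y', ['X', 'Z']), ('Z', ['X', 'Y']),  # X = Y + Z
        ('W', ['X']), ('X', ['W']),                               # W = 2 * X
    ]

    def definite_vars(initially_definite, constraints):
        # Least fixpoint of definiteness propagation.
        definite = set(initially_definite)
        changed = True
        while changed:
            changed = False
            for var, deps in constraints:
                if var not in definite and all(d in definite for d in deps):
                    definite.add(var)
                    changed = True
        return definite

    print(sorted(definite_vars({'Y', 'Z'}, constraints)))
    # -> ['W', 'X', 'Y', 'Z'] : Y and Z definite make X definite, then W.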
Abstract:
We study the first passage statistics to absorbing boundaries of a Brownian motion in bounded two-dimensional domains of different shapes and configurations of the absorbing and reflecting boundaries. From extensive numerical analysis we obtain the probability distribution P(ω) of the random variable ω = τ1/(τ1 + τ2), which measures how similar the first passage times τ1 and τ2 of two independent realizations of a Brownian walk starting at the same location are. We construct a chart for each domain, determining whether P(ω) has a unimodal, bell-shaped form, or a bimodal, M-shaped behavior. While in the former case the mean first passage time (MFPT) is a valid characteristic of the first passage behavior, in the latter case it is an insufficient measure of the process. Strikingly, we find a distinct turnover between the two modes of P(ω), characteristic of the domain shape and the respective location of the absorbing and reflecting boundaries. Our results demonstrate that large fluctuations of the first passage times may occur frequently in two-dimensional domains, calling into question the general use of the MFPT as a robust measure of the actual behavior even in bounded domains, in which all moments of the first passage distribution exist.
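The ω statistic is easy to reproduce numerically. Below is a minimal Monte Carlo sketch in Python for one of the simplest configurations, a disk whose entire boundary is absorbing, with the walker started from a fixed interior point; the domain, starting point, and step size are illustrative choices, not the paper's exact setups.

    import numpy as np

    rng = np.random.default_rng(0)

    def first_passage_time(x0, y0, R=1.0, dt=1e-3, max_steps=10**7):
        # Time for a 2D Brownian walker (D = 1/2) started at (x0, y0)
        # to reach the absorbing circle of radius R.
        x, y, s = x0, y0, np.sqrt(dt)
        for step in range(1, max_steps + 1):
            x += s * rng.standard_normal()
            y += s * rng.standard_normal()
            if x * x + y * y >= R * R:
                return step * dt
        raise RuntimeError("not absorbed; increase max_steps")

    # omega = tau1 / (tau1 + tau2) for pairs of independent realizations
    # started at the same (illustrative) point r0 = 0.5.
    taus = np.array([first_passage_time(0.5, 0.0) for _ in range(1000)])
    omega = taus[::2] / (taus[::2] + taus[1::2])

    hist, _ = np.histogram(omega, bins=10, range=(0, 1), density=True)
    print(np.round(hist, 2))
    # A single peak at omega = 1/2 means the two passage times are
    # typically similar (MFPT meaningful); peaks near 0 and 1 signal the
    # bimodal, M-shaped regime in which the MFPT is a poor summary.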
Abstract:
The design of nuclear power plants has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular accidents, that are considered plausible have been taken into account, and that the monitoring systems and engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extensive use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the CSN (Consejo de Seguridad Nuclear) branch of Modelling and Simulation (MOSI). This methodology attempts to extend classical PSA by including accident dynamics analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamics analysis support through simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS relies on intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As that complexity is concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work. This document presents work on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. These techniques have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task; because of time limitations, the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited to an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case is described and the results of applying the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
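One way to see why reducing the number of simulations matters so much is elementary Monte Carlo statistics. The Python sketch below (illustrative numbers only, not taken from the TSD/ISA formulation) shows how the naive sample size needed to estimate a small damage probability explodes as the probability decreases, when each "sample" is a full coupled accident simulation.

    # For the Bernoulli estimator p_hat = k/N, Var(p_hat) = p(1-p)/N,
    # so the relative standard error is eps = sqrt((1-p)/(p*N)) and
    # N = (1-p)/(p*eps**2). Illustrative only; not the ISA/SCAIS method.
    def runs_needed(p, eps=0.1):
        return (1.0 - p) / (p * eps**2)

    for p in (1e-2, 1e-4, 1e-6):
        print(f"p = {p:.0e}: ~{runs_needed(p):.1e} runs for 10% relative error")
    # p = 1e-02: ~9.9e+03 runs; p = 1e-06: ~1.0e+08 runs, which is
    # unaffordable when each run is a thermal-hydraulic simulation,
    # hence the interest in more efficient estimation techniques.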
Abstract:
Partitioning is a common approach to developing mixed-criticality systems, where partitions are isolated from each other in both the temporal and the spatial domain in order to prevent low-criticality subsystems from compromising subsystems with a high level of criticality in case of misbehaviour. The advent of many-core processors, on the other hand, opens the way to highly parallel systems in which all partitions can be allocated to dedicated processor cores. This trend will simplify processor scheduling, although other issues such as mutual interference in the temporal domain may arise as a consequence of memory and device sharing. The paper describes an architecture for multi-core partitioned systems including critical subsystems built with the Ada Ravenscar profile. Some implementation issues are discussed, and experience on implementing the ORK kernel on the XtratuM partitioning hypervisor is presented.
Abstract:
Knowledge resource reuse has become a popular approach within the ontology engineering field, mainly because it can speed up the ontology development process, saving time and money and promoting the application of good practices. The NeOn Methodology provides guidelines for reuse. These guidelines include the selection of the most appropriate knowledge resources for reuse in ontology development. This is a complex decision-making problem where different conflicting objectives, like the reuse cost, understandability, integration workload and reliability, have to be taken into account simultaneously. GMAA is a PC-based decision support system based on an additive multi-attribute utility model that is intended to allay the operational difficulties involved in the Decision Analysis methodology. The paper illustrates how it can be applied to select multimedia ontologies for reuse to develop a new ontology in the multimedia domain. It also demonstrates that the sensitivity analyses provided by GMAA are useful tools for making a final recommendation.
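The additive multi-attribute utility model behind GMAA is simple to state: each alternative a scores u(a) = Σ_i w_i·u_i(a) over the criteria named above. The Python sketch below uses invented weights and component utilities (GMAA elicits these from the decision maker) to rank two hypothetical candidate ontologies and run a crude one-way sensitivity check on one weight.

    # Additive multi-attribute utility: u(a) = sum_i w_i * u_i(a).
    # Criteria follow the abstract; all numbers are invented for
    # illustration and would be elicited from the decision maker.
    weights = {'reuse_cost': 0.3, 'understandability': 0.2,
               'integration_workload': 0.2, 'reliability': 0.3}

    candidates = {  # component utilities u_i(a) in [0, 1], higher is better
        'OntologyA': {'reuse_cost': 0.8, 'understandability': 0.6,
                      'integration_workload': 0.5, 'reliability': 0.7},
        'OntologyB': {'reuse_cost': 0.4, 'understandability': 0.9,
                      'integration_workload': 0.8, 'reliability': 0.6},
    }

    def utility(a, w):
        return sum(w[c] * candidates[a][c] for c in w)

    print(sorted(candidates, key=lambda a: utility(a, weights), reverse=True))

    # One-way sensitivity analysis: sweep the reliability weight,
    # renormalizing the rest, and see whether the recommendation flips.
    for w_rel in (0.1, 0.3, 0.5):
        scale = (1.0 - w_rel) / 0.7
        w = {c: (w_rel if c == 'reliability' else weights[c] * scale)
             for c in weights}
        print(f"w_reliability={w_rel:.1f} -> best:",
              max(candidates, key=lambda a: utility(a, w)))

If the best alternative is stable across such sweeps, the recommendation is robust; if it flips, the weight in question deserves more careful elicitation, which is the role the sensitivity analyses play in GMAA.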
Abstract:
We present a framework specially designed to deal with structurally complex data, where all individuals have the same structure, as is the case in many medical domains. A structurally complex individual may be composed of any type of single-valued or multi-valued attributes, including time series, for example. These attributes are structured according to domain-dependent hierarchies. Our aim is to generate reference models of population groups. These models represent the population archetype and are very useful for supporting such important tasks as diagnosis, detecting fraud, analyzing patient evolution, identifying control groups, etc.
Abstract:
This article describes a knowledge-based application in the domain of road traffic management that we have developed following a knowledge modeling approach and the notion of the problem-solving method. The article first presents a domain-independent model for real-time decision support as a structured collection of problem-solving methods. It then describes how this general model is used to develop an operational version for the traffic management domain. For this purpose, a particular knowledge modeling tool, called KSM (Knowledge Structure Manager), was applied. Finally, the article presents an application developed for a traffic network in the city of Madrid and compares it with a second application developed for a different traffic area in the city of Barcelona.
Abstract:
The purpose of the present thesis is to study the dynamics of the logarithmic layer of wall-bounded turbulent flows. Specifically, to propose a new structural model based on four different coherent structures: sweeps, ejections, clusters of vortices and velocity streaks. The tool used is the direct numerical simulation of time-resolved turbulent channels. Since the first work by Theodorsen (1952), coherent structures have played an important role in the understanding of turbulence organization and its dynamics. Nowadays, data from individual snapshots of direct numerical simulations make it possible to study the three-dimensional statistical properties of those objects, but their dynamics can only be fully understood by tracking them in time. Although the temporal evolution has already been studied for small structures at moderate Reynolds numbers, e.g., Robinson (1991), a temporal analysis of three-dimensional structures spanning from the smallest to the largest scales across the logarithmic layer has yet to be performed, and is the goal of the present thesis. The most interesting problems lie in the logarithmic region, which is the seat of cascades of vorticity, energy, and momentum. Different models involving coherent structures have been proposed to represent the organization of wall-bounded turbulent flows in the logarithmic layer. One of the most extended ones was conceived by Adrian et al. (2000) and built on packets of hairpins that grow from the wall and work cooperatively to generate low-momentum ramps. A different view was presented by del Álamo & Jiménez (2006), who extracted coherent vortical structures from DNSs and proposed a less organized scenario. Although the two models are kinematically fairly similar, they have important dynamical differences, mostly regarding the relevance of the wall.
Another open question is whether such a model can be used to explain the cascade process proposed by Kolmogorov (1941b) in terms of coherent structures. The challenge would be to identify coherent structures undergoing a turbulent cascade that can be quantified. To gain a better insight into the previous questions, we have developed a novel method to track coherent structures in time, and used it to characterize the temporal evolutions of eddies in turbulent channels with Reynolds numbers high enough to include a non-trivial range of length scales, and computational domains sufficiently long and wide to reproduce correctly the dynamics of the logarithmic layer. Our efforts have followed four steps. First, we have conducted a campaign of direct numerical simulations of turbulent channels at different Reynolds numbers and box sizes, and assessed the effect of the computational domain on the one-point statistics and spectra. From the results, we have concluded that computational domains with streamwise and spanwise sizes 2π and π times the half-height of the channel, respectively, are large enough to accurately capture the dynamical interactions between structures in the logarithmic layer and the rest of the scales. These simulations are used in the subsequent chapters. Second, the three-dimensional structures of intense tangential Reynolds stress in plane turbulent channels (Qs) have been studied by extending the classical quadrant analysis to three dimensions, with emphasis on the logarithmic and outer layers. The eddies are identified as connected regions of intense tangential Reynolds stress. Qs are then classified according to their streamwise and wall-normal fluctuating velocities as inward interactions, outward interactions, sweeps and ejections. It is found that wall-detached Qs are isotropically oriented background stress fluctuations, common to most turbulent flows, and do not contribute to the mean stress. Most of the stress is carried by a self-similar family of larger wall-attached Qs, increasingly complex away from the wall, with fractal dimensions ≈ 2. They have shapes similar to ‘sponges of flakes’, while vortex clusters resemble ‘sponges of strings’. Although their number decays away from the wall, the fraction of the stress that they carry is independent of their heights, and a substantial part resides in a few objects extending beyond the centerline, reminiscent of the very large scale motions of several authors. The predominant logarithmic-layer structures are side-by-side pairs of sweeps and ejections, with an associated vortex cluster, and dimensions and stresses similar to Townsend’s conjectured wall-attached eddies. Third, the temporal evolution of Qs and vortex clusters is studied using time-resolved DNS data up to Reτ = 4200 (friction Reynolds number). The eddies are identified following the procedure presented above, and then tracked in time. From the geometric intersection of structures in consecutive fields, we have built temporal connection graphs of all the objects, and defined main and secondary branches in such a way that each branch represents the temporal evolution of one coherent structure. Once these evolutions are properly organized, they provide the necessary information to characterize eddies from birth to death. The results show that the eddies are born at all distances from the wall, although with higher probability near it, where the shear is strongest. Most of them stay small and do not last for long times.
However, there is a family of eddies that become large enough to attach to the wall while they reach into the logarithmic layer, and become the wall-attached structures previously observed in instantaneous flow fields. They are geometrically self-similar, with sizes and lifetimes proportional to their distance from the wall. Most of them achieve lengths well above the Corrsin scale, and hence their dynamics are controlled by the mean shear. Eddies associated with ejections move away from the wall with an average velocity uτ (friction velocity), and their base attaches very fast at the beginning of their lives. Conversely, sweeps move towards the wall at −uτ, and attach later. In both cases, they remain attached for 2/3 of their lives. In the streamwise direction, eddies are advected and deformed by the local mean velocity. Finally, we interpret the turbulent cascade not only as a way to conceptualize the flow, but as an actual physical process in which coherent structures merge and split. The volume of an eddy can change either smoothly, when it is not merging or splitting, or through sudden changes. The processes of merging and splitting can be thought of as a direct (when splitting) or an inverse (when merging) cascade, following the ideas envisioned by Richardson (1920) and Obukhov (1941). It is observed that there is a minimum length of 30η (Kolmogorov units) above which mergers and splits begin to be important. Moreover, all eddies above 100η split and merge at least once in their lives. In those cases, the total volume gained and lost is a substantial fraction of the average volume of the structure involved, with slightly more splits (direct cascade) than mergers. Most branch interactions are found to be the shedding or absorption of Kolmogorov-scale fragments by larger structures, but more balanced splits and mergers spanning a wide range of scales are also found to be important. The results show that splits are more probable at the end of the life of the eddy, while mergers take place at the beginning. Although the results for the direct and the inverse cascades are not identical, they are found to be very symmetric, which suggests a high degree of reversibility of the cascade process.
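The tracking step — connecting structures in consecutive snapshots whenever their points intersect, then reading branches off the resulting graph — can be sketched compactly. The Python fragment below is a 2D toy with synthetic data (the thesis works with 3D Reynolds-stress structures in DNS fields); it uses scipy.ndimage.label for the connected components.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)

    # Two consecutive 'snapshots': binary masks of intense regions.
    # Synthetic data for illustration; in the thesis these are masks of
    # points where the tangential Reynolds stress exceeds a threshold.
    field_t0 = rng.random((64, 64)) > 0.7
    field_t1 = np.roll(field_t0, shift=2, axis=1)  # crude 'advection'

    labels_t0, n0 = ndimage.label(field_t0)  # connected components
    labels_t1, n1 = ndimage.label(field_t1)

    # Temporal connection graph: edge (i, j) whenever structure i at t0
    # geometrically intersects structure j at t1.
    both = (labels_t0 > 0) & (labels_t1 > 0)
    pairs = np.unique(np.stack([labels_t0[both], labels_t1[both]]), axis=1)
    edges = list(zip(pairs[0], pairs[1]))
    print(f"{n0} structures at t0, {n1} at t1, {len(edges)} connections")
    # A t0 structure with several outgoing edges has split; a t1
    # structure with several incoming edges is a merger. Chains of
    # one-to-one connections are the primary branches described above.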
Abstract:
A protease-resistant core domain of the neuronal SNARE complex consists of an α-helical bundle similar to the proposed fusogenic core of viral fusion proteins [Skehel, J. J. & Wiley, D. C. (1998) Cell 95, 871–874]. We find that the isolated core of a SNARE complex efficiently fuses artificial bilayers and does so faster than full-length SNAREs. Unexpectedly, a dramatic increase in speed results from removal of the N-terminal domain of the t-SNARE syntaxin, which does not affect the rate of assembly of v-t SNAREs. In the absence of this negative regulatory domain, the half-time for fusion of an entire population of lipid vesicles by isolated SNARE cores (≈10 min) is compatible with the kinetics of fusion in many cell types.
Abstract:
A cell’s ability to effectively communicate with a neighboring cell is essential for tissue function and ultimately for the organism to which it belongs. One important mode of intercellular communication is the release of soluble cyto- and chemokines. Once secreted, these signaling molecules diffuse through the surrounding medium and eventually bind to a neighboring cell’s receptors, whereby the signal is received. This mode of communication is governed both by physicochemical transport processes and by cellular secretion rates, which in turn are determined by genetic and biochemical processes. The characteristics of the transport processes have been known for some time, and information on the genetic and biochemical determinants of cellular function is rapidly growing. Simultaneous quantitative analysis of the two is required to systematically evaluate the nature and limitations of intercellular signaling. The present study uses a solitary cell model to estimate effective communication distances over which a single cell can meaningfully propagate a soluble signal. The analysis reveals that: (i) this process is governed by a single, key, dimensionless group that is a ratio of biological parameters and physicochemical determinants; (ii) this ratio has a maximal value; (iii) for realistic values of the parameters contained in this dimensionless group, it is estimated that the domain within which a single cell can effectively communicate is ≈250 μm in size; and (iv) the communication within this domain takes place in 10–30 minutes. These results have fundamental implications for the interpretation of organ physiology and for engineering tissue function ex vivo.
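The scales in conclusions (iii) and (iv) can be sanity-checked with a bare diffusion estimate. The abstract does not spell out the dimensionless group, which couples secretion rate and receptor binding to transport, so the Python sketch below is only an order-of-magnitude check under an assumed diffusivity for a small secreted protein.

    # Order-of-magnitude check assuming pure diffusion. D ~ 1e-6 cm^2/s
    # (= 100 um^2/s) is an assumed value for a small protein in medium;
    # the paper's full model also involves secretion and binding rates.
    D = 100.0  # um^2/s, assumed
    L = 250.0  # um, communication distance from conclusion (iii)

    t = L**2 / (4.0 * D)  # 2D diffusion time from <r^2> = 4*D*t
    print(f"diffusion time over {L:.0f} um: ~{t / 60:.1f} min")
    # ~2.6 min: the same order as the 10-30 min of conclusion (iv); the
    # additional time in the full model reflects the build-up of a
    # concentration high enough to engage the neighbor's receptors.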