928 results for uniform strong consistency
Abstract:
We review the different sources of uncertainty affecting the oxygen isotopic composition of planktonic foraminifera and present a global planktonic foraminifera oxygen isotope data set assembled within the MARGO project for the Late Holocene time slice. The data set consists of over 2100 data points from recent sediments with thorough age control, which have been checked for internal consistency. We further examine how the oxygen isotopic composition of fossil foraminifera is related to hydrological conditions, based on published results on living foraminifera from plankton tows and cultures. Oxygen isotopic values (δ18O) of MARGO recent fossil foraminifera are 0.2-0.8 per mil higher than those of living foraminifera. Our results show that this discrepancy is related to the stratification of the upper water mass and generally increases at low latitudes. Therefore, as the stratification of surface waters and seasonality depend on climatic conditions, the relationship between temperature and δ18O established on fossil foraminifera from recent sediments must be used with caution in paleoceanographic studies. Until models predicting the seasonal flux, abundance, and δ18O composition of a foraminiferal population in the sediment become available, we recommend studying relative changes in the isotopic composition of fossil planktonic foraminifera. These changes primarily record variations in temperature and in the oxygen isotopic composition of sea water, although part of the changes might reflect modifications of planktonic foraminifera seasonality or depth habitat.
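For context, the temperature relationship referred to above is commonly expressed as a quadratic paleotemperature equation; one widely used form (Shackleton's recalibration of the O'Neil et al. calcite equation, given here for illustration and not necessarily the calibration adopted by MARGO) is

\[
T = 16.9 - 4.38\,(\delta_c - \delta_w) + 0.10\,(\delta_c - \delta_w)^2,
\]

where T is the calcification temperature in °C, δc the δ18O of the foraminiferal calcite, and δw the δ18O of the ambient sea water.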
Abstract:
The spatial and temporal dynamics of seagrasses have been well studied at leaf to patch scales; however, the link to landscape- and population-scale dynamics over large spatial extents remains unresolved in seagrass ecology. Traditional remote sensing approaches have lacked the temporal resolution and consistency to appropriately address this issue. This study uses two high temporal resolution time-series of thematic seagrass cover maps to examine the spatial and temporal dynamics of seagrass at both inter- and intra-annual time scales, one of the first studies globally to do so at this scale. Previous work by the authors developed an object-based approach to map seagrass cover level distribution from a long-term archive of Landsat TM and ETM+ images on the Eastern Banks (~200 km²), Moreton Bay, Australia. In this work a range of trend and time-series analysis methods are demonstrated for a time-series of 23 annual maps from 1988 to 2010 and a time-series of 16 monthly maps during 2008-2010. Significant new insight is presented regarding the inter- and intra-annual dynamics of seagrass persistence over time, seagrass cover level variability, seagrass cover level trajectory, and change in area of seagrass and cover levels over time. Overall we found that there was no significant decline in total seagrass area on the Eastern Banks, but there was a significant decline in seagrass cover level condition. A case study of two smaller communities within the Eastern Banks that experienced a decline in both overall seagrass area and condition is examined in detail, highlighting possible differences in environmental and process drivers. We demonstrate how trend and time-series analysis enabled seagrass distribution to be appropriately assessed in the context of its spatial and temporal history, and provides the ability not only to quantify change, but also to describe the type of change. We also demonstrate the potential use of time-series analysis products to investigate seagrass growth and decline, as well as the processes that drive them. This study demonstrates clear benefits over traditional seagrass mapping and monitoring approaches, and provides a proof of concept for the use of trend and time-series analysis of remotely sensed seagrass products to benefit current endeavours in seagrass ecology.
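As an illustrative sketch of what per-pixel trend analysis on such a map time-series can look like (our illustration; the authors' specific methods are not reproduced here), the following computes a least-squares slope of cover level over time for every pixel in a stack of annual maps:

```python
import numpy as np

# Illustrative sketch, not the authors' code: least-squares trend of
# seagrass cover per pixel, over a stack of annual cover maps with
# shape (n_years, n_rows, n_cols).
def per_pixel_trend(stack, years):
    t = np.asarray(list(years), dtype=float)
    t = t - t.mean()                        # centre the time axis
    y = stack.reshape(len(t), -1).astype(float)
    y = y - y.mean(axis=0)                  # centre each pixel's series
    slope = (t[:, None] * y).sum(axis=0) / (t ** 2).sum()
    return slope.reshape(stack.shape[1:])   # cover-level change per year

# Example: 23 annual maps (1988-2010) of a hypothetical 100 x 100 pixel area.
maps = np.random.randint(0, 5, size=(23, 100, 100))
trend = per_pixel_trend(maps, range(1988, 2011))
```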
Abstract:
IPOD Leg 49 recovered basalts from 9 holes at 7 sites along 3 transects across the Mid-Atlantic Ridge: 63°N (Reykjanes), 45°N and 36°N (FAMOUS area). This has provided further information on the nature of mantle heterogeneity in the North Atlantic by enabling studies to be made of the variation of basalt composition with depth and with time near critical areas (Iceland and the Azores) where deep mantle plumes are thought to exist. Over 150 samples have been analysed for up to 40 major and trace elements, and the results used to place constraints on the petrogenesis of the erupted basalts and hence on the geochemical nature of their source regions. It is apparent that few of the recovered basalts have the geochemical characteristics of typical "depleted" mid-ocean ridge basalts (MORB). An unusually wide range of basalt compositions may be erupted at a single site: the range of rare earth patterns within the short section cored at Site 413, for instance, encompasses the total variation of REE patterns previously reported from the FAMOUS area. Nevertheless, it is possible to account for most of the compositional variation at a single site by partial melting processes (including dynamic melting) and fractional crystallization. Partial melting mechanisms seem to be the dominant processes relating basalt compositions, particularly at 36°N and 45°N, suggesting that long-lived sub-axial magma chambers may not be a consistent feature of the slow-spreading Mid-Atlantic Ridge. Comparisons of basalts erupted at the same ridge segment over periods of the order of 35 m.y. (now lying along the same mantle flow line) do show some significant inter-site differences in Rb/Sr, Ce/Yb, 87Sr/86Sr, etc., which cannot be accounted for by fractionation mechanisms and which must reflect heterogeneities in the mantle source. However, when hygromagmatophile (HYG) trace element levels and ratios are considered, it is the constancy of these HYG ratios which is the more remarkable, implying that the mantle source feeding a particular ridge segment was uniform with respect to these elements for periods of the order of 35 m.y., and probably since the opening of the Atlantic. Yet these HYG element ratios at 63°N are very different from those at 45°N and 36°N, and significantly different from the values at 22°N and in "MORB". The observed variations are difficult to reconcile with current concepts of mantle plumes and binary mixing models. The mantle is certainly heterogeneous, but there is not simply an "enriched" and a "depleted" source; rather, there is a range of sources heterogeneous on different scales for different elements, to an extent and volume depending on previous depletion/enrichment events. HYG element ratios offer the best method of defining compositionally different mantle segments, since they are little modified by the fractionation processes associated with basalt generation.
Abstract:
This study is a synthesis of paleomagnetic and mineral magnetic results for Sites 819 through 823 of Ocean Drilling Program (ODP) Leg 133, which lie on a transect from the outer edge of the Great Barrier Reef (GBR) down the continental slope to the bottom of the Queensland Trough. Because of viscous remagnetization and pervasive overprinting, few reversal boundaries can be identified in these extremely high-resolution Quaternary sequences. Some of the magnetic instability, and the differences in the quality of the paleomagnetic signal among sites, can be explained in terms of the dissolution of primary iron oxides in the high near-surface geochemical gradients. Well-defined changes in magnetic properties, notably susceptibility, reflect responses to glacio-eustatic sea-level fluctuations and changes in slope sedimentation processes resulting from formation of the GBR. Susceptibility can be used to correlate between adjacent holes at a given site to an accuracy of about 20 cm. Among-site correlation of susceptibility is also possible for certain parts of the sequences and permits tentative extension of the reversal chronology. The reversal boundaries that can be identified are generally compatible with the calcareous nannofossil biostratigraphy and demonstrate a high level of biostratigraphic consistency among sites. A revised chronology based on an optimum match with the susceptibility stratigraphy is presented. Throughout most of the sequences there is a strong inverse correlation both between magnetic susceptibility and calcium carbonate content, and between susceptibility and δ18O. In the upper, post-GBR sections a more complicated type of magnetic response occurs during glacial maxima and subsequent transgressions, resulting in a positive correlation between susceptibility and δ18O. Prior to and during formation of the outer-reef barrier, the sediments have relatively uniform magnetic properties showing multidomain behavior and displaying cyclic variations in susceptibility related to sea-level change. The susceptibility oscillations are controlled more by carbonate dilution than by variation in terrigenous influx. Establishment of the outer reef between 1.01 and 0.76 Ma restricted the supply of sediment to the slope, causing a four-fold reduction in sedimentation rates and a transition from prograding to aggrading seismic geometries (see other chapters in this volume). The Brunhes/Matuyama boundary and the end of the transition period mark a change to lower and more subdued susceptibility oscillations with higher carbonate contents. The major change in magnetic properties comes at about 0.4 Ma in the aggrading sequence, which contains prominent sharp susceptibility peaks associated with glacial cycles, with distinctive single-domain magnetite and mixed single-domain/superparamagnetic characteristics. Bacterial magnetite has been found in the sediments, particularly where there are high susceptibility peaks, but its importance has not yet been assessed. A possible explanation for the characteristic pattern of magnetic properties in the post-GBR glacial cycles can be found in terms of fluvio-deltaic processes and inter-reefal lagoonal reservoirs that develop when the shelf becomes exposed at low sea level.
Abstract:
A two-dimensional finite element model of current flow in the front surface of a PV cell is presented. To validate this model, an experimental test is performed. Particular attention is then paid to the effects of non-uniform illumination in the finger direction, which is typical of linear concentrator systems. Fill factor, open circuit voltage, and efficiency are shown to decrease with an increasing degree of non-uniform illumination. It is shown that these detrimental effects can be mitigated significantly by reoptimizing the number of front surface metallization fingers to suit the degree of non-uniformity. The behavior of current flow in the front surface of a cell operating at open circuit under non-uniform illumination is discussed in detail.
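The reoptimization mentioned above can be illustrated with the classic gridline trade-off (a toy model of our own, not the paper's finite element analysis): shading loss grows roughly linearly with the number of fingers n, while lateral resistive loss in the emitter falls roughly as 1/n², so an optimal n balances the two.

```python
# Toy sketch of finger-count optimization, not the paper's FEM model.
# Fractional power losses: shading ~ a*n, lateral resistive loss ~ b/n**2.
# The coefficients a and b are placeholders; stronger non-uniformity under
# concentration effectively raises b, pushing the optimum to more fingers.

def total_loss(n, a, b):
    """Combined fractional power loss for n metallization fingers."""
    return a * n + b / n**2

def optimal_fingers(a, b):
    """Minimize a*n + b/n**2: d/dn = a - 2*b/n**3 = 0  =>  n = (2b/a)**(1/3)."""
    return (2 * b / a) ** (1 / 3)

# Example: doubling the resistive coefficient b raises the optimal
# finger count by a factor of 2**(1/3), i.e. about 26%.
print(optimal_fingers(a=1e-3, b=0.05), optimal_fingers(a=1e-3, b=0.10))
```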
Abstract:
The thermal annealing of amorphous tracks of nanometer-size diameter generated in lithium niobate (LiNbO3) by bromine ions at 45 MeV, i.e., in the electronic stopping regime, has been investigated by RBS/C spectrometry in the temperature range from 250°C to 350°C. Relatively low fluences have been used (<10^12 cm^-2) to produce isolated tracks. However, the possible effect of track overlapping has been investigated by varying the fluence between 3×10^11 cm^-2 and 10^12 cm^-2. The annealing process follows two-step kinetics. In the first stage (I) the track radius decreases linearly with the annealing time; the shrinkage rate obeys an Arrhenius-type dependence on annealing temperature with an activation energy of around 1.5 eV. The second stage (II) operates after the track radius has decreased to around 2.5 nm and shows a much lower radial velocity. The data for stage I appear consistent with a solid-phase epitaxial process that yields a constant recrystallization rate at the amorphous-crystalline boundary. HRTEM has been used to monitor the existence and size of the annealed isolated tracks in the second stage. In addition, the thermal annealing of homogeneous (buried) amorphous layers has been investigated within the same temperature range, on samples irradiated with fluorine ions at 20 MeV and fluences of ~10^14 cm^-2. Optical techniques are very suitable for this case and have been used to monitor the recrystallization of the layers. The annealing process induces a displacement of the crystalline-amorphous boundary that is also linear with annealing time, and the recrystallization rates are consistent with those measured for tracks. The comparison of these data with those previously obtained for the heavily damaged (amorphous) layers produced by elastic nuclear collisions is briefly discussed.
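A minimal numerical sketch of the stage-I kinetics described above (our illustration: only the ~1.5 eV activation energy and the linear shrinkage come from the abstract; the rate prefactor is a made-up placeholder):

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def shrink_rate_nm_per_min(temp_c, v0=1e12, ea_ev=1.5):
    """Arrhenius-type radial recrystallization velocity.
    v0 (nm/min) is a hypothetical prefactor, not a measured value."""
    return v0 * math.exp(-ea_ev / (KB_EV * (temp_c + 273.15)))

def track_radius_nm(r0, temp_c, t_min):
    """Stage-I radius after t_min minutes: linear shrinkage at constant rate.
    Stage II (slower kinetics below ~2.5 nm) is not modelled here."""
    return max(r0 - shrink_rate_nm_per_min(temp_c) * t_min, 0.0)

# With Ea = 1.5 eV the rate increases roughly 3.5-fold between 300°C and 325°C.
print(shrink_rate_nm_per_min(300), shrink_rate_nm_per_min(325))
```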
Abstract:
The theoretical formulation of the smoothed particle hydrodynamics (SPH) method deserves great care because of some inconsistencies that arise when considering free-surface inviscid flows. In SPH formulations one usually assumes that (i) surface integral terms on the boundary of the interpolation kernel support can be neglected, and (ii) free-surface conditions are implicitly verified. These assumptions are studied in detail in the present work for free-surface Newtonian viscous flow. The consistency of classical viscous weakly compressible SPH formulations is investigated. In particular, the principle of virtual work is used to study the verification of the free-surface boundary conditions in a weak sense. The latter can be related to the global energy dissipation induced by the viscous term formulations and their consistency. Numerical verification of this theoretical analysis is provided for three free-surface test cases, including a standing wave, with the three viscous term formulations investigated.
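For context on assumption (i), here is the standard integration-by-parts identity behind the SPH gradient approximation (a textbook identity, not specific to this paper's formulations). For a kernel $W$ with smoothing length $h$ and support truncated by the fluid boundary $\partial\Omega$,

\[
\int_{\Omega} W(\mathbf{r}-\mathbf{r}',h)\,\nabla' f(\mathbf{r}')\,d\mathbf{r}'
= \oint_{\partial\Omega} f(\mathbf{r}')\,W(\mathbf{r}-\mathbf{r}',h)\,\mathbf{n}'\,dS'
- \int_{\Omega} f(\mathbf{r}')\,\nabla' W(\mathbf{r}-\mathbf{r}',h)\,d\mathbf{r}'.
\]

Classical SPH keeps only the volume term; dropping the surface integral is exact only when the kernel support lies entirely inside the fluid, which is precisely what fails near a free surface.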
Abstract:
We present a concurrent semantics (i.e. a semantics where concurrency is explicitly represented) for CC programs with atomic tells. This allows one to derive concurrency, dependency, and nondeterminism information for such languages. The ability to treat failure information also brings CLP programs within the range of applicability of our semantics: although such programs are not concurrent, the concurrency information derived in the semantics may be interpreted as possible parallelism, thus allowing those computation steps which appear to be concurrent in the net to be safely parallelized. Dually, the dependency information may be interpreted as necessary sequentialization, which may be exploited to schedule CC programs. The fact that the semantic structure contains dependency information suggests a new tell operation, which checks for consistency only against the constraints it depends on, achieving a reasonable trade-off between efficiency and atomicity.
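As a toy illustration of the dependency-guided tell (our sketch, not the paper's formalism), consistency can be checked only against constraints that share variables with the told constraint, approximating "the constraints it depends on":

```python
# Toy sketch of a dependency-guided tell, not the paper's formalism.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Constraint:
    variables: frozenset   # variables the constraint mentions
    label: str             # human-readable description

@dataclass
class Store:
    is_consistent: callable          # user-supplied check on a list of constraints
    constraints: list = field(default_factory=list)

    def dependencies(self, c):
        """Constraints already in the store that share a variable with c."""
        return [d for d in self.constraints if c.variables & d.variables]

    def tell(self, c):
        """Add c if it is consistent with its dependencies only.
        Cheaper than checking the whole store, weaker than a fully
        atomic global tell: the efficiency/atomicity trade-off above."""
        if not self.is_consistent(self.dependencies(c) + [c]):
            return False
        self.constraints.append(c)
        return True
```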
Abstract:
Electric propulsion is today a very competitive technology with great future prospects. Among the various existing plasma thrusters, the Hall effect thruster has acquired considerable maturity and constitutes an ideal means of propulsion for a wide range of missions. In the present Thesis only Hall thrusters with conventional geometry and dielectric walls are studied. The complex interaction between the multiple physical phenomena involved makes plasma simulation in these thrusters difficult. Hybrid models represent the best compromise between precision and computational cost. They use a fluid model for the electrons and Particle-In-Cell (PIC) algorithms for the ions and neutrals. The hypothesis of plasma quasineutrality is invoked, which requires solving separately the sheaths formed around the chamber walls. Starting from an existing hybrid code, called HPHall-2, the aim of this doctoral Thesis has been to develop an advanced hybrid code that better simulates the plasma discharge in a Hall effect thruster. The updates and improvements to the code address both theoretical and numerical issues. The extensive revision of the algorithms has reduced the accuracy errors by one order of magnitude, and the consistency and robustness of the code have been notably increased, allowing the simulation of the thruster over a wide range of conditions.
The most relevant achievements in the particle subcode are: the implementation of a new weighting algorithm that determines the plasma flux magnitudes more accurately; the implementation of a new population control algorithm, which ensures a sufficient number of particles near the chamber walls, where the gradients are strongest and the computational conditions are most critical; improvements in the mass and energy balances; and a new algorithm to compute the electric field on a non-uniform mesh. The fulfilment of the Bohm condition at the sheath edge deserves special attention: in hybrid codes it is a boundary condition necessary to match the solution consistently with the plasma-wall interaction model, and it had remained unsatisfactorily resolved in HPHall-2. In this Thesis, the kinetic Bohm criterion has been implemented for an ion population with different electric charges and a large dispersion of velocities. In the code, the fulfilment of the kinetic Bohm condition is accomplished by an algorithm that introduces a thin collisionless acceleration layer adjacent to the sheath and measures the particle fluxes properly in space and time. The improvements made in the electron subcode increase the simulation capabilities of the code, especially in the region downstream of the thruster, where the neutralization of the plasma jet is simulated by means of a volumetric cathode model. Without addressing a detailed study of plasma turbulence, simple models for a parametric adjustment of the anomalous Bohm diffusion are implemented; they reproduce the experimental values of the plasma potential and the electron temperature, as well as the discharge current of the thruster. Regarding the theoretical issues, special emphasis has been placed on the plasma-wall interaction and on the dynamics of free secondary electrons within the plasma, questions that remain open problems in the simulation of Hall thrusters. The newly developed models aim for a more faithful picture of reality: a partial thermalization sheath model is implemented, which assumes a non-Maxwellian distribution function for the primary electrons and computes energy losses at the walls more realistically. Regarding the secondary electrons, a simplified kinetic study evaluates their degree of confinement within the plasma, and a collisionless fluid model determines the densities and energies of the free secondary electrons, as well as their possible effect on ionization. Simulations show that secondary electrons are quickly lost at the walls, so their effect in the bulk of the plasma is negligible, but they do determine the potential fall at the sheaths. Finally, the theoretical and numerical simulation work is complemented by experimental work carried out at the Princeton Plasma Physics Laboratory, devoted to analyzing the interesting transient regime experienced by the thruster during the startup process. The study concludes that residual gases adhered to the thruster walls play a relevant role in this transient and, as a general recommendation, a complete purge of the thruster before its normal mode of operation is suggested. The final result of the research conducted in this Thesis shows that the developed hybrid code is a good tool for the simulation of Hall thrusters: it properly reproduces the physics of the thruster, provides results similar to the experimental ones, and proves to be a good numerical laboratory for studying the plasma inside the thruster.
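For reference, the classical kinetic (Harrison-Thompson) form of the Bohm criterion for a single, singly charged ion species with velocity distribution $f_i(v)$ at the sheath edge reads (a standard result quoted here for illustration; the Thesis generalizes it to several charge states):

\[
\frac{1}{n_i}\int \frac{f_i(v)}{v^2}\,dv \;\le\; \frac{m_i}{k_B T_e},
\]

which reduces to the familiar fluid condition $v \ge c_s = \sqrt{k_B T_e/m_i}$ for a mono-energetic ion beam.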
Abstract:
The analytical solution to the one-dimensional absorption–conduction heat transfer problem inside a single glass pane is presented, which correctly takes into account all the relevant physical phenomena: the appearance of multiple reflections, the spectral distribution of solar radiation, the spectral dependence of optical properties, the presence of possible coatings, the non-uniform nature of radiation absorption, and the diffusion of heat by conduction across the glass pane. In addition to the well-established direct absorptance αe, the derived solution introduces a new spectral quantity called the direct absorptance moment βe, which indicates where in the glass pane the absorption of radiation actually takes place. The theoretical and numerical comparison of the derived solution with existing approximate thermal models for the absorption–conduction problem reveals that the latter work best for low-absorbing uncoated single glass panes, something not necessarily fulfilled by modern glazings.
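To make the idea of an absorptance moment concrete, here is one plausible formalization (our assumption for illustration; the paper's exact definition may differ). With $a(x)$ the locally absorbed solar flux per unit depth at position $x$ in a pane of thickness $L$, and $I_0$ the incident flux,

\[
\alpha_e = \frac{1}{I_0}\int_0^L a(x)\,dx, \qquad
\beta_e = \frac{\int_0^L x\,a(x)\,dx}{L\int_0^L a(x)\,dx},
\]

so that $\beta_e \to 0$ when absorption is concentrated at the outer face, $\beta_e = 1/2$ for uniform absorption, and $\beta_e \to 1$ when it is concentrated at the inner face.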
Abstract:
Mersenne Twister (MT) uniform random number generators are key cores for the hardware acceleration of Monte Carlo simulations. In this work, two different architectures are studied: besides the classical table-based architecture, an architecture based on a circular buffer, targeting FPGAs in particular, is proposed. A 30% performance improvement has been obtained compared to the fastest previous work. The applicability of the proposed MT architectures has been proven in a high performance Gaussian RNG.
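As a software illustration of the circular-buffer idea (our sketch; the paper's FPGA datapath is not reproduced here), MT19937 can update one state word per output using modular indexing into a fixed 624-word buffer, instead of regenerating the whole table in bulk; the two formulations produce the same output sequence:

```python
# Software sketch of a circular-buffer MT19937: standard MT constants,
# one in-place state update per output word.
N, M = 624, 397
MATRIX_A, UPPER, LOWER = 0x9908B0DF, 0x80000000, 0x7FFFFFFF

class MT19937Circular:
    def __init__(self, seed=5489):
        self.mt = [0] * N
        self.mt[0] = seed
        for i in range(1, N):  # standard MT19937 seeding recurrence
            self.mt[i] = (1812433253 * (self.mt[i-1] ^ (self.mt[i-1] >> 30)) + i) & 0xFFFFFFFF
        self.i = 0  # circular pointer into the state buffer

    def next_u32(self):
        i = self.i
        # Twist one word in place; neighbours are read modulo N, which is
        # exactly the access pattern a circular buffer in FPGA BRAM provides.
        y = (self.mt[i] & UPPER) | (self.mt[(i + 1) % N] & LOWER)
        x = self.mt[(i + M) % N] ^ (y >> 1) ^ (MATRIX_A if y & 1 else 0)
        self.mt[i] = x
        self.i = (i + 1) % N
        # Standard MT19937 tempering.
        x ^= x >> 11
        x ^= (x << 7) & 0x9D2C5680
        x ^= (x << 15) & 0xEFC60000
        x ^= x >> 18
        return x

rng = MT19937Circular()
print(rng.next_u32())  # 3499211612 for the reference seed 5489
```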
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, allowing it to elastically scale up or down the number of replicas involved in read operations to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
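A minimal sketch of the adaptive loop described above (our reconstruction from the abstract; the staleness model shown is a crude placeholder, not Harmony's estimation model):

```python
# Sketch of adaptively choosing the read quorum size from an estimated
# stale-read rate; placeholder model, not Harmony's actual estimator.

def estimated_stale_rate(write_rate_hz, repl_lag_s, replicas_read, replicas_total):
    """Chance a read misses the latest write: a recent write may not have
    propagated to a replica yet, and reading fewer replicas gives fewer
    chances to see it. Crude placeholder for Harmony's estimation model."""
    p_lagging = min(1.0, write_rate_hz * repl_lag_s)
    return p_lagging * (1.0 - replicas_read / replicas_total)

def choose_read_quorum(tolerance, write_rate_hz, repl_lag_s, replicas_total):
    """Smallest number of replicas to read so the estimate stays tolerable."""
    for r in range(1, replicas_total + 1):
        if estimated_stale_rate(write_rate_hz, repl_lag_s, r, replicas_total) <= tolerance:
            return r
    return replicas_total  # effectively strong consistency

# Example: 3 replicas, 50 writes/s, 20 ms replication lag, 5% tolerance.
print(choose_read_quorum(0.05, 50.0, 0.02, 3))
```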
Abstract:
We consider here uniform distributed pushdown automata systems (UDPAS), namely distributed pushdown automata systems in which all components are identical pushdown automata. We consider a single protocol for activating/deactivating components, namely a component stays active as long as it can perform moves, as well as two ways of accepting the input word: by empty stacks (all components have empty stacks) or by final states (all components are in final states) when the input word is completely read. We mainly investigate the computational power of UDPAS accepting by empty stacks, along with a few decidability and closure properties of the families of languages they define. Some directions for further work and open problems are also discussed.
Abstract:
Comment on "Another look at the uniform rope sliding over the edge of a smooth table"