907 results for Branching Processes in Varying Environments


Abstract:

These lecture notes are devoted to presenting several uses of large deviation asymptotics in branching processes.
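
To make the flavor of such asymptotics concrete (an illustrative textbook statement, not one quoted from these notes): for a Galton-Watson process (Z_n) with i.i.d. offspring counts ξ_i of mean m and finite exponential moments, conditioning on the current generation size turns the one-step growth ratio into an empirical mean, so Cramér's theorem gives an exponential decay rate:

    \[
    P\!\left(\frac{Z_{n+1}}{Z_n} \ge a \,\middle|\, Z_n = k\right)
      = P\!\left(\frac{1}{k}\sum_{i=1}^{k}\xi_i \ge a\right)
      \approx e^{-k\,I(a)}, \qquad a > m,
    \]
    \[
    I(a) = \sup_{\theta \ge 0}\bigl(\theta a - \log E\,e^{\theta \xi_1}\bigr).
    \]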

Abstract:

2000 Mathematics Subject Classification: 60J80, 62M05.

Abstract:

2000 Mathematics Subject Classification: primary 60J80; secondary 60J85, 92C37.

Abstract:

2000 Mathematics Subject Classification: 60J80.

Abstract:

2000 Mathematics Subject Classification: 60J80, 60J10.

Abstract:

2000 Mathematics Subject Classification: 60J80, 60F05.

Abstract:

2010 Mathematics Subject Classification: 60J80.

Abstract:

2010 Mathematics Subject Classification: 60J80.

Abstract:

2000 Mathematics Subject Classification: Primary 60J80, Secondary 60G99.

Abstract:

Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal; however, memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system by using memory caches, sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level acts as the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest.

The most important decision about managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, which, if not managed carefully, translates to wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. Finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer-cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted by contention. Finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
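
As a hedged sketch of the kind of decades-old replacement policy the abstract alludes to (the class and names below are illustrative, not from the dissertation), a minimal LRU cache evicts the least recently used entry when it runs out of space:

    from collections import OrderedDict

    class LRUCache:
        """Minimal least-recently-used cache: a sketch of the classic
        policy family that has not changed in principle for decades."""

        def __init__(self, capacity):
            self.capacity = capacity
            self._store = OrderedDict()  # insertion order tracks recency

        def get(self, key):
            if key not in self._store:
                return None  # miss: caller fetches from the slower level below
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]

        def put(self, key, value):
            if key in self._store:
                self._store.move_to_end(key)
            self._store[key] = value
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)  # evict least recently used

    cache = LRUCache(capacity=2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")         # "a" becomes most recently used
    cache.put("c", 3)      # evicts "b", the least recently used
    assert cache.get("b") is None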

Abstract:

This study shows that light exposure of flocculent material (floc) from the Florida Coastal Everglades (FCE) results in significant dissolved organic matter (DOM) generation through photo-dissolution processes. Floc was collected at two sites along the Shark River Slough (SRS) and irradiated with artificial sunlight. The DOM generated was characterized using elemental analysis and excitation emission matrix fluorescence coupled with parallel factor analysis. To investigate the seasonal variations of DOM photo-generation from floc, this experiment was performed in typical dry (April) and wet (October) seasons for the FCE. Our results show that the dissolved organic carbon (DOC) for samples incubated under dark conditions displayed a relatively small increase, suggesting that microbial processes and/or leaching might be minor processes in comparison to photo-dissolution for the generation of DOM from floc. On the other hand, DOC increased substantially (as much as 259 mgC gC−1) for samples exposed to artificial sunlight, indicating the release of DOM through photo-induced alterations of floc. The fluorescence intensity of both humic-like and protein-like components also increased with light exposure. Terrestrial humic-like components were found to be the main contributors (up to 70%) to the chromophoric DOM (CDOM) pool, while protein-like components comprised a relatively small percentage (up to 16%) of the total CDOM. Simultaneously with the generation of DOC, both total dissolved nitrogen and soluble reactive phosphorus increased substantially during the photo-incubation period. Thus, the photo-dissolution of floc can be an important source of DOM to the FCE environment, with the potential to influence nutrient dynamics in this system.
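
A hedged sketch of the analysis style mentioned above (the synthetic data, rank, and tensorly usage are illustrative assumptions, not the authors' pipeline): EEM fluorescence measurements form a three-way array (sample x emission x excitation) that PARAFAC decomposes into trilinear components, each a candidate fluorophore such as a humic-like or protein-like signal:

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Illustrative stand-in data: 20 samples x 50 emission x 30 excitation
    # wavelengths (real EEMs would come from the fluorometer).
    eems = tl.tensor(np.random.rand(20, 50, 30))

    # Decompose the three-way array into 4 trilinear components.
    weights, factors = parafac(eems, rank=4)

    scores, emission_loadings, excitation_loadings = factors
    print(scores.shape)  # (20, 4): per-sample intensity of each component

In practice, non-negativity constraints and split-half validation are commonly applied when modeling fluorescence data, which this minimal sketch omits.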

Abstract:

Iron oxides and arsenic are prevalent in the environment. With the increasing interest in the use of iron oxide nanoparticles (IONPs) for contaminant remediation, and given the high toxicity of arsenic, it is crucial to evaluate the interactions between IONPs and arsenic. The goal was to understand the environmental behavior of IONPs with regard to their particle size, aggregation, and stability, and to determine how this behavior influences IONP-arsenic interactions.

A variety of dispersion techniques were investigated to disperse bare commercial IONPs. Vortexing dispersed commercial hematite nanoparticles into unstable dispersions with particles in the micrometer size range, while probe ultrasonication produced stable dispersions in the nanometer size range that persisted for prolonged periods. Using probe ultrasonication and vortexing to prepare IONP suspensions of different particle sizes, the adsorption of arsenite and arsenate to bare hematite nanoparticles and hematite aggregates was investigated. To understand the differences in adsorptive behavior, adsorption kinetics and isotherm parameters were determined. Both arsenite and arsenate were capable of adsorbing to hematite nanoparticles and hematite aggregates, but the rate and capacity of adsorption depend on the hematite particle size, the stability of the dispersion, and the type of sorbed arsenic species. Once arsenic is adsorbed onto the hematite surface, both iron and arsenic can undergo redox transformation, microbially as well as photochemically, and these processes can be intertwined. Arsenic speciation studies in the presence of hematite particles were performed, and the effect of light on the redox process was preliminarily quantified. The redox behavior of arsenite and arsenate differed depending on the hematite particle size, the stability of the suspension, and the presence of environmental factors such as microbes and light. These results have significant environmental implications, as arsenic mobility and bioavailability can be affected by its adsorption to hematite particles and by its surface-mediated redox transformation. Moreover, this study furthers our understanding of how particle size influences the interactions between IONPs and arsenic, thereby clarifying the role of IONPs in the biogeochemical cycling of arsenic.
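
As a hedged illustration of how the isotherm parameters mentioned above are typically extracted (the model choice, units, and data points here are assumptions, not the study's measured values): fitting the Langmuir isotherm q = q_max*K*C/(1 + K*C) to equilibrium data yields the adsorption capacity q_max and affinity constant K:

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c_eq, q_max, k_l):
        """Langmuir isotherm: adsorbed amount vs. equilibrium concentration."""
        return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

    # Illustrative stand-in data (mg/L and mg/g); real values would come
    # from the batch adsorption experiments described in the abstract.
    c_eq  = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
    q_ads = np.array([0.8, 2.9, 4.6, 6.5, 8.4, 9.3])

    (q_max, k_l), _ = curve_fit(langmuir, c_eq, q_ads, p0=[10.0, 1.0])
    print(f"q_max = {q_max:.2f} mg/g, K_L = {k_l:.2f} L/mg")

Comparing fitted q_max and K across nanoparticle and aggregate suspensions is one way the size dependence of adsorption capacity can be quantified.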

Abstract:

An awareness of mercury (Hg) contamination in various aquatic environments around the world has increased over the past decade, mostly due to its ability to concentrate in the biota. Because the presence and distribution of Hg in aquatic systems depend on many factors (e.g., pe, pH, salinity, temperature, organic and inorganic ligands, and sorbents), it is crucial to understand its fate and transport in the presence of complexing constituents and natural sorbents under those different factors. An improved understanding of the subject will support the selection of monitoring, remediation, and restoration technologies. The coupling of equilibrium chemical reactions with transport processes in the model PHREEQC offers an advantage in simulating and predicting the fate and transport of aqueous chemical species of interest. Thus, a great variety of reactive transport problems can be addressed in aquatic systems with boundary conditions of specific interest. Nevertheless, PHREEQC lacks a comprehensive thermodynamic database for Hg. Therefore, in order to use PHREEQC to address the fate and transport of Hg in aquatic environments, it is necessary to expand its thermodynamic database, confirm it, and then evaluate it in applications where potential exists for its calibration and continued validation. The objectives of this study were twofold: 1) to develop, expand, and confirm the Hg database of the hydrogeochemical model PHREEQC, enhancing its capability to simulate the fate of Hg species in the presence of complexing constituents and natural sorbents under different conditions of pH, redox, salinity, and temperature; and 2) to apply and evaluate the new database in flow and transport scenarios at two field test beds, Oak Ridge Reservation, Oak Ridge, TN, and Everglades National Park, FL, where Hg is present and of great concern. Overall, this research enhanced the capability of the PHREEQC model to simulate Hg reactions coupled with transport conditions. It also demonstrated its usefulness when applied to field situations.
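
As a hedged sketch of what extending a PHREEQC thermodynamic database involves (the species chosen and the log_k values below are illustrative placeholders, not the vetted constants developed in this work), new aqueous Hg complexation reactions are written as SOLUTION_SPECIES entries; here the fragment is assembled as a Python string and written to a file that could be merged into a database:

    # Illustrative PHREEQC database fragment; log_k values are
    # placeholders, not the vetted constants from this study.
    hg_database_fragment = """
    SOLUTION_MASTER_SPECIES
    Hg          Hg+2      0.0     Hg        200.59

    SOLUTION_SPECIES
    Hg+2 + Cl- = HgCl+
        log_k   7.3       # placeholder equilibrium constant
    Hg+2 + 2Cl- = HgCl2
        log_k   14.0      # placeholder equilibrium constant
    """

    with open("hg_fragment.dat", "w") as f:
        f.write(hg_database_fragment)
    # The fragment could then be merged into a PHREEQC database file and
    # referenced from an input script for speciation and transport runs.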

Abstract:

The power-law size distributions obtained experimentally for neuronal avalanches are important evidence of criticality in the brain. This evidence is supported by the fact that a critical branching process exhibits the same exponent, τ = 3/2. Models at criticality have been employed to mimic avalanche propagation and explain the statistics observed experimentally. However, a crucial aspect of neuronal recordings has been almost completely neglected in the models: undersampling. While a typical multielectrode array records hundreds of neurons, the same area of neuronal tissue contains tens of thousands. Here we investigate the consequences of undersampling in models with three different topologies (two-dimensional, small-world, and random network) and three different dynamical regimes (subcritical, critical, and supercritical). We found that undersampling modifies avalanche size distributions, extinguishing the power laws observed in critical systems. Distributions from subcritical systems are also modified, but the shape of the undersampled distributions is more similar to that of a fully sampled system. Undersampled supercritical systems can recover the general characteristics of the fully sampled version, provided that enough neurons are measured. Undersampling in two-dimensional and small-world networks leads to similar effects, while the random network is insensitive to sampling density due to the lack of a well-defined neighborhood. We conjecture that neuronal avalanches recorded from local field potentials avoid undersampling effects due to the nature of this signal, but the same does not hold for spike avalanches. We conclude that undersampled branching-process-like models in these topologies fail to reproduce the statistics of spike avalanches.
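
As a hedged sketch of the mechanism at stake (a minimal, topology-free branching process, whereas the paper studies 2D, small-world, and random networks; all parameters here are illustrative): at the critical branching ratio σ = 1, avalanche sizes follow P(s) ~ s^(−3/2), and undersampling can be mimicked by observing each activation only with probability p:

    import numpy as np

    rng = np.random.default_rng(0)

    def avalanche_size(sigma, max_size=100_000):
        """Size of one avalanche of a branching process with branching
        ratio sigma: each active unit spawns Poisson(sigma) offspring."""
        active, size = 1, 1
        while active > 0 and size < max_size:
            active = rng.poisson(sigma * active)  # sum of Poissons is Poisson
            size += active
        return size

    def undersample(size, p):
        """Keep each activation with probability p, mimicking electrodes
        that record only a small fraction of the tissue."""
        return rng.binomial(size, p)

    # Critical regime (sigma = 1): full sizes follow P(s) ~ s^(-3/2);
    # heavy undersampling (p = 0.01) distorts the observed distribution.
    full = [avalanche_size(sigma=1.0) for _ in range(10_000)]
    seen = np.array([undersample(s, p=0.01) for s in full])
    print("mean full avalanche size:", np.mean(full))
    print("fraction of avalanches rendered invisible:", np.mean(seen == 0))

Real undersampling also fragments single avalanches into several apparent ones over time, an effect this size-thinning sketch does not capture.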