923 results for Large Size


Relevance:

30.00%

Publisher:

Abstract:

Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size, and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems.

First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with large-scale network models.

Second, this work deals with reducing the memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments.

Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but, as real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
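
The memory-reduction idea lends itself to a small illustration. The sketch below shows the general flyweight-style principle of sharing one immutable description among many structurally identical model instances; it is only a hedged illustration of the concept, not the dissertation's actual implementation, and all class and attribute names are hypothetical:

# Minimal sketch of structural sharing: many structurally identical subnets
# reference one shared, read-only template instead of copying it.
class SharedSubnetTemplate:
    """Immutable description of a subnet that many instances can share."""
    def __init__(self, n_hosts, link_bandwidth_bps, link_delay_s):
        self.n_hosts = n_hosts
        self.link_bandwidth_bps = link_bandwidth_bps
        self.link_delay_s = link_delay_s

class SubnetInstance:
    """Per-instance state only; the static structure is referenced, not copied."""
    __slots__ = ("template", "instance_id", "queue_lengths")
    def __init__(self, template, instance_id):
        self.template = template          # shared, read-only structure
        self.instance_id = instance_id    # unique per instance
        self.queue_lengths = [0] * template.n_hosts  # mutable per-instance state

# One template reused by thousands of structurally identical subnets:
template = SharedSubnetTemplate(n_hosts=256, link_bandwidth_bps=1e9, link_delay_s=1e-3)
subnets = [SubnetInstance(template, i) for i in range(10_000)]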

Relevance:

30.00%

Publisher:

Abstract:

The influence of large predators on lower trophic levels in oligotrophic, structurally complex, and frequently disturbed aquatic environments is generally thought to be limited. We looked for effects of large predators in two semi-permanent, spikerush-dominated marshes by excluding large fish (>12 mm body depth) and similarly sized herpetofauna from 1 m² cages (exclosures) for 2 weeks. The exclosures allowed for colonization by intermediate (in size and trophic position) consumers, such as small fish, shrimp, and crayfish. Exclosures were compared to control cages that allowed large fish to move freely in and out. At the end of the experiment, intermediate-consumer densities were higher in exclosures than in controls at both sites. Decapod crustaceans, especially the riverine grass shrimp (Palaemonetes paludosus), accounted for the majority of the response. Effects of large fish on shrimp were generally consistent across sites, but per capita effects were sensitive to estimates of predator density. Densities of intermediate consumers in our exclosures were similar to marsh densities, while the open controls had lower densities. This suggests that these animals avoided our experimental controls because they were risky relative to the surrounding environment, while the exclosures were neither avoided nor preferred. Although illuminating about the dynamics of open-cage experiments, this finding does not influence the main results of the study. Small primary consumers (mostly small snails, amphipods, and midges) living on floating periphyton mats and in flocculent detritus (“floc”) were less abundant in the exclosures, indicative of a trophic cascade. Periphyton mat characteristics (i.e., biomass, chlorophyll a, total phosphorus [TP]) were not clearly or consistently affected by the exclosure, but TP in the floc was lower in exclosures. The collective cascading effects of large predators were consistent at both sites despite differences in drought frequency, stem density, and productivity.

Relevance:

30.00%

Publisher:

Abstract:

Synthesizing data from multiple studies generates hypotheses about factors that affect the distribution and abundance of species among ecosystems. Snails are dominant herbivores in many freshwater ecosystems, but there is no comprehensive review of snail density, standing stock, or body size among freshwater ecosystems. We compile data on snail density and standing stock, estimate body size as their quotient, and discuss the major pattern that emerges. We report data from 215 freshwater ecosystems taken from 88 studies that we placed into nine categories. Sixty-five studies reported density, seven reported standing stock, and 16 reported both. Despite the breadth of studies, spatial and temporal sampling scales were limited. Researchers used 25 different sampling devices ranging in area from 0.0015 to 2.5 m². Most ecosystem categories had similar snail densities, standing stocks, and body sizes, suggesting that snails share a similar function among ecosystems. Caribbean karst wetlands were a striking exception, with much lower density and standing stock but large body size. The disparity in body size results from the presence of ampullariids in Caribbean karst wetlands, suggesting that biogeography affects the distribution of taxa, and in this case size, among aquatic ecosystems. We propose that resource quality explains the disparity in density and standing stock between Caribbean karst wetlands and other categories. Periphyton in Caribbean karst wetlands has high carbon-to-phosphorus ratios and defensive characteristics that inhibit grazers. Unlike many freshwater ecosystems where snails are key grazers, we hypothesize that a microbial loop captures much of the primary production in Caribbean karst wetlands.
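
As a purely illustrative aside, the quotient mentioned above (standing stock divided by density) reduces to a one-line calculation; the values below are invented and only show the arithmetic, not the review's data:

# Illustrative arithmetic for mean individual body size as the quotient of
# standing stock and density; the numbers are hypothetical.
density_per_m2 = 120.0          # snails per square metre
standing_stock_g_per_m2 = 3.6   # grams of biomass per square metre

mean_body_size_g = standing_stock_g_per_m2 / density_per_m2
print(f"Mean individual body size: {mean_body_size_g:.3f} g")  # -> 0.030 g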

Relevance:

30.00%

Publisher:

Abstract:

Hotel management has usually been viewed as a single labor market that allows considerable movement between properties of different sizes and service levels. The authors question this assumption and support the hypothesis that general managers in one type of hotel will have spent a large majority of their careers in hotels of the same type.

Relevance:

30.00%

Publisher:

Abstract:

Stable isotopes are important tools for understanding the trophic roles of elasmobranchs. However, whether different tissues provide consistent stable isotope values within an individual is largely unknown. To address this, the relationships among carbon and nitrogen isotope values were quantified for blood, muscle, and fin from juvenile bull sharks (Carcharhinus leucas) and blood and fin from large tiger sharks (Galeocerdo cuvier) collected in two different ecosystems. We also investigated the relationship between shark size and the magnitude of differences in isotopic values between tissues. Isotope values were significantly positively correlated for all paired tissue comparisons, but R² values were much higher for δ13C than for δ15N. Paired differences between isotopic values of tissues were relatively small but varied significantly with shark total length, suggesting that shark size can be an important factor influencing the magnitude of differences in isotope values of different tissues. For studies of juvenile sharks, care should be taken in using slow-turnover tissues like muscle and fin, because they may retain a maternal signature for an extended time. Although correlations were relatively strong, the results suggest that correction factors should be generated for the desired study species and may only allow coarse-scale comparisons between studies using different tissue types.
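
To make the idea of tissue-to-tissue correction factors concrete, the hedged sketch below fits an ordinary least-squares line between paired δ13C values of two tissues and reports the paired differences; the numbers are invented for illustration and are not the study's data:

# Hedged sketch: deriving a tissue-to-tissue conversion for delta-13C values
# by ordinary least-squares regression, plus the paired differences that can
# be related to shark total length. Values are made up for illustration.
import numpy as np
from scipy.stats import linregress

d13c_blood = np.array([-14.2, -13.5, -12.8, -15.0, -13.1])
d13c_fin   = np.array([-13.6, -12.9, -12.3, -14.2, -12.6])

fit = linregress(d13c_blood, d13c_fin)
print(f"fin = {fit.slope:.2f} * blood + {fit.intercept:.2f}, R^2 = {fit.rvalue**2:.2f}")

# Paired differences between tissues (fin - blood):
paired_diff = d13c_fin - d13c_blood
print("Mean paired difference:", paired_diff.mean().round(2))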

Relevance:

30.00%

Publisher:

Abstract:

Network simulation is an indispensable tool for studying Internet-scale networks due to the heterogeneous structure, immense size and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. With network simulation, we can make a distinction between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic that is generated by other applications and does not require significant accuracy. The background traffic has a significant impact on the foreground traffic, since it competes with the foreground traffic for network resources and therefore can drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution to meaningfully generate background traffic in three aspects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of the simulation studies. This work starts from enhancing an existing fluid background traffic model by removing its two unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed from measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which originally used analytical models to represent TCP congestion control behavior. This model outperforms other existing traffic models in that it can correctly capture the overall TCP behavior and achieve a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on how to generate traffic on one single link, which cannot be extended easily to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for the evaluation work in network studies.
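
As a generic illustration of what analytical, rate-based TCP modeling means (and explicitly not the dissertation's RTCP model), the sketch below evaluates the classic steady-state throughput approximation of Mathis et al., in which throughput scales as MSS/(RTT·sqrt(p)):

# Classic rate-based approximation of steady-state TCP throughput
# (Mathis et al.); shown only as a generic example of analytical modelling.
import math

def tcp_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput in bits per second."""
    c = math.sqrt(3.0 / 2.0)  # constant for periodic losses, delayed ACKs off
    return (mss_bytes * 8.0 / rtt_s) * (c / math.sqrt(loss_rate))

# Example: 1460-byte segments, 50 ms RTT, 1% packet loss -> roughly 2.9 Mbit/s
print(f"{tcp_throughput_bps(1460, 0.05, 0.01) / 1e6:.2f} Mbit/s")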

Relevance:

30.00%

Publisher:

Abstract:

The development of the ecosystem approach and of models for the management of ocean marine resources requires easy access to standard, validated datasets of historical catch data for the main exploited species. They are used to measure the impact of biomass removal by fisheries and to evaluate model skill, while the use of a standard dataset facilitates model inter-comparison. North Atlantic albacore tuna is exploited all year round by longline and, in summer and autumn, by surface fisheries, and the fishery statistics are compiled by the International Commission for the Conservation of Atlantic Tunas (ICCAT). Catch and effort with geographical coordinates at a monthly resolution of 1° or 5° squares were extracted for this species, with a careful definition of fisheries and data screening. In total, thirteen fisheries were defined for the period 1956-2010, covering the gears longline, troll, mid-water trawl, and bait fishing. However, the spatialized catch-and-effort data available in the ICCAT database represent only a fraction of the total catch. Length frequencies of the catch were also extracted according to the definition of fisheries above for the period 1956-2010, with a quarterly temporal resolution and spatial resolutions varying from 1° x 1° to 10° x 20°. The resolution used to measure the fish also varies, with size bins of 1, 2, or 5 cm (fork length). The screening of the data revealed inconsistencies: a relatively large number of samples were larger than 150 cm, whereas all studies on the growth of albacore suggest that the fish rarely grow beyond 130 cm. Therefore, a threshold value of 130 cm was arbitrarily fixed and all length frequency data above this value were removed from the original dataset.
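
A minimal sketch of the kind of screening step described above follows, assuming a hypothetical table of length-frequency records with a fork-length column; only the 130 cm threshold comes from the text, everything else is invented:

# Hedged sketch: discarding length-frequency records above a 130 cm
# fork-length threshold. Column names and values are hypothetical.
import pandas as pd

length_freq = pd.DataFrame({
    "year": [1960, 1960, 1975, 1975, 1990],
    "fork_length_cm": [85, 162, 110, 128, 155],
    "n_fish": [40, 2, 55, 30, 1],
})

THRESHOLD_CM = 130
screened = length_freq[length_freq["fork_length_cm"] <= THRESHOLD_CM].copy()
removed = len(length_freq) - len(screened)
print(f"Removed {removed} records above {THRESHOLD_CM} cm")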

Relevance:

30.00%

Publisher:

Abstract:

The terrigenous sediment proportion of the deep-sea sediments off Northwest Africa has been studied in order to distinguish between aeolian and fluvial sediment supply. The present and fossil Saharan dust trajectories were recognized from the distribution patterns of the aeolian sediment. The following time slices have been investigated: Present, 6,000, 12,000 and 18,000 y. B. P. Furthermore, the quantity of dust deposited off the Saharan coast has been estimated. For this purpose, 80 surface sediment samples and 34 sediment cores have been analysed. The stratigraphy of the cores has been established from oxygen isotope curves, 14C dating, foraminiferal transfer temperatures, and carbonate contents. Silt-sized biogenic opal generally accounts for less than 2 % of the total insoluble sediment proportion. Only under productive upwelling waters and off river mouths does the opal proportion exceed 2 % significantly. The modern terrigenous sediment off the Saharan coast is generally characterized by intensely stained quartz grains. They indicate an origin from southern Saharan and Sahelian laterites and a zonal aeolian transport at midtropospheric levels, between 1.5 and 5.5 km, by 'Harmattan' winds. The dust particles follow large outbreaks of Saharan air across the African coast between 15° and 21° N. Their trajectories are centered at about 18° N and continue further into a clockwise gyre situated south of the Canary Islands. This course is indicated by a sickle-shaped tongue of coarser grain sizes in the deep-sea sediment. Such loess-sized terrigenous particles only settle within a zone extending to 700 km offshore. Fine silt- and clay-sized particles, with grain sizes smaller than 10-15 µm, drift still further west and can be traced to more than 4,000 km distance from their source areas. Additional terrigenous silt that is poor in stained quartz occurs only within a narrow zone off the western Sahara between 20° and 27° N. It depicts the present dust supply by the trade winds close to the surface. The dust load originates from the northwestern Sahara, the Atlas Mountains and coastal areas, which contain a particularly low amount of stained quartz. The distribution pattern of these pale-quartz sediments reveals a SSW dispersal of dust, consistent with the present trade wind direction from the NNE. In comparison with the sediments off the Sahara and the deeper subtropical Atlantic, the sediments off river mouths, in particular off the Senegal River, are characterized by an additional input of fine-grained terrigenous particles (< 6 µm). This is due to fluvial suspension load. The fluvial discharge leads to a relative excess of fine-grained particles and is observed in a correlation diagram of the modal grain sizes of terrigenous silt with the proportion of the fine fraction (< 6 µm). The aeolian sediment contribution by the Harmattan winds strongly decreased during the Climatic Optimum at 6,000 y. B. P. The dust discharge of the trade winds is hardly detectable in the deep-sea sediments. This probably indicates a weakened atmospheric circulation. In contrast, the fluvial sediment supply reached a maximum and can be traced to beyond Cape Blanc. Thus, the Saharan climate was more humid at 6,000 y. B. P. A latitudinal shift of the Harmattan-driven dust outbreaks cannot be observed. Also during the Glacial, 18,000 y. B. P., Harmattan dust transport crossed the African coast at latitudes of 15°-20° N. Its sediment load increased strongly, and markedly coarser grains spread further into the Atlantic Ocean. An expanded zone of pale-quartz sediments indicates an enhanced dust supply by the trade winds blowing from the NE. No synglacial fluvial sediment contribution can be recognized between 12° and 30° N. This indicates a dry glacial climate and a strengthened atmospheric circulation over the Sahelian and Saharan region. The climatic transition phase at 12,000 y. B. P., between the last Glacial and the Interglacial, which is comparable to the Allerød in Europe, is characterized by an intermediate supply of terrigenous particles. The Harmattan dust transport was weaker than during the Glacial. The northeasterly trade winds were still intensive. River supply reached a first postglacial maximum seaward of the Senegal River mouth. This indicates increasing humidity over the southern Sahara and a weaker atmospheric circulation compared to the Glacial. The accumulation rates of the terrigenous silt proportion (> 6 µm) decrease exponentially with increasing distance from the Saharan coast. Those of the terrigenous fine fraction (< 6 µm) follow the same trend and show very similar gradients. Accordingly, the terrigenous fine fraction is also believed to result predominantly from aeolian transport. In the Atlantic deep-sea sediments, the annual terrigenous sediment accumulation has fluctuated from about 60 million tons p. a. during the Late Glacial (13,500-18,000 y. B. P., aeolian supply only), to about 33 million tons p. a. during the Holocene Climatic Optimum (6,000-9,000 y. B. P., mainly fluvial supply), when the river supply reached a maximum, and to about 45 million tons p. a. during the last 4,000 years B. P. (fluvial supply only, south of 18° N).

Relevance:

30.00%

Publisher:

Abstract:

The Standard Cosmological Model is generally accepted by the scientific community, yet a number of issues remain unresolved. From the observable characteristics of the structures in the Universe, it should be possible to impose constraints on the cosmological parameters. Cosmic voids (CV) are a major component of the large-scale structure (LSS) and have been shown to possess great potential for constraining dark energy (DE) and testing theories of gravity. But a gap between CV observations and theory still persists. A theoretical model for the statistical distribution of voids as a function of size exists (SvdW). However, the SvdW model has been unsuccessful in reproducing the results obtained from cosmological simulations. This undermines the possibility of using voids as cosmological probes. The goal of our thesis work is to close the gap between theoretical predictions and measured distributions of cosmic voids. We develop an algorithm to identify voids in simulations consistently with theory, inspecting the possibilities offered by a recently proposed refinement of the SvdW model (the Vdn model, Jennings et al., 2013). Comparing void catalogues to theory, we validate the Vdn model, finding that it is reliable over a large range of radii, at all the redshifts considered and for all the cosmological models inspected. We have then searched for a size function model for voids identified in a distribution of biased tracers. We find that naively applying the same procedure used for unbiased tracers to a halo mock distribution does not provide successful results, suggesting that the Vdn model has to be reconsidered when dealing with biased samples. We therefore test two alternative extensions of the model and find that two scaling relations exist: both the dark matter void radii and the underlying dark matter density contrast scale with the halo-defined void radii. We use these findings to develop a semi-analytical model which gives promising results.
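
For concreteness, the hedged sketch below shows one common way to measure a void size function from a catalogue of void radii (the quantity that models such as SvdW or Vdn predict): counts per logarithmic radius bin divided by the simulation volume. The radii and box size are synthetic and purely illustrative.

# Hedged sketch: void size function dn/dlnR from a catalogue of void radii.
# The radii are drawn from a lognormal just to have something to bin.
import numpy as np

radii_mpc_h = np.random.default_rng(0).lognormal(mean=2.0, sigma=0.4, size=5000)
box_size_mpc_h = 1000.0
volume = box_size_mpc_h ** 3

bins = np.logspace(np.log10(radii_mpc_h.min()), np.log10(radii_mpc_h.max()), 20)
counts, edges = np.histogram(radii_mpc_h, bins=bins)
dlnr = np.diff(np.log(edges))

# dn/dlnR: comoving number density of voids per unit logarithmic radius
size_function = counts / (volume * dlnr)
centers = np.sqrt(edges[:-1] * edges[1:])
for r, n in zip(centers, size_function):
    print(f"R = {r:6.1f} Mpc/h   dn/dlnR = {n:.3e} (h/Mpc)^3")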

Relevance:

30.00%

Publisher:

Abstract:

A 100-cm-long sediment sequence was recovered from Beaver Lake in Amery Oasis, East Antarctica, using gravity and piston corers. Sedimentological and mineralogical analyses and the absence of micro- and macrofossils indicate that the sediments at the base of the sequence formed under glacial conditions, probably prior to c. 12 500 cal. yr BP. The sediments between c. 81 and 31 cm depth probably formed under subaerial conditions, indicating that isostatic uplift since deglaciation has been substantially less than eustatic sea-level rise and that large areas of the present-day floor of Beaver Lake must have been subaerially exposed following deglaciation. The upper 31 cm of the sediment sequence were deposited under glaciomarine conditions similar to those of today, supporting geomorphic observations that the Holocene was a period of relative sea-level highstand in Amery Oasis.

Relevance:

30.00%

Publisher:

Abstract:

New results of geomorphological, seismoacoustic, and lithological investigations on the upper continental slope off the Arkhipo-Osipovka Settlement are presented. Here, a large submarine slump was discovered by a seismic survey in 1998. The assumed slump body, up to 200 m thick, rises 50-60 m above the floor of the valley that cuts the slope. Recent semiliquid mud, which overlies laminated slope sediments with possible slump deformations, flows down the valley thalweg. A radiocarbon age inversion recorded in a Holocene sediment section of shelf facies recovered from the upper slope points to gravity-driven dislocation of the sediments.

Relevance:

30.00%

Publisher:

Abstract:

LAPMv2 is a research software solution developed specifically to allow marine scientists to produce geo-referenced visual maps of the seafloor, known as mosaics, from a set of underwater images and navigation data. LAPMv2 has a graphical user interface that guides the user through the different steps of the mosaicking workflow. LAPMv2 runs on 64-bit Windows, MacOS X and Linux operating systems. There are two versions for each operating system: (1) the WEB installers (lightweight, but they require an internet connection during installation) and (2) the MCR installers (large files, but they can be installed on a computer without an internet connection). The user manual explains how to install and start the program on the different operating systems. Go to http://www.lapm.eu.com for further information about the latest versions of LAPMv2.

Relevance:

30.00%

Publisher:

Abstract:

It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size limit extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
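
To make the zero-temperature, maximum-flow side concrete, the hedged sketch below computes an exact ground state of a small ferromagnetic random-field Ising model via the standard mapping to a minimum s-t cut (the Picard-Ratliff construction), using networkx. It is a generic illustration under that standard mapping, not the authors' production code.

# Ground state of a small 2D random-field Ising model via min-cut/max-flow.
# Energy: E = -J sum_<ij> s_i s_j - sum_i h_i s_i, spins on the source side
# of the minimum cut are +1, spins on the sink side are -1.
import networkx as nx
import numpy as np

L, J = 4, 1.0                             # small lattice for illustration
rng = np.random.default_rng(1)
h = rng.normal(0.0, 1.0, size=(L, L))     # Gaussian random fields

G = nx.DiGraph()
source, sink = "s", "t"

def node(i, j):
    return (i % L, j % L)                 # periodic boundaries

for i in range(L):
    for j in range(L):
        u = node(i, j)
        # Field term: positive fields pull the spin toward the source (+1) side.
        if h[i, j] > 0:
            G.add_edge(source, u, capacity=2.0 * h[i, j])
        else:
            G.add_edge(u, sink, capacity=2.0 * abs(h[i, j]))
        # Ferromagnetic bonds to right and down neighbours (both directions).
        for v in (node(i, j + 1), node(i + 1, j)):
            G.add_edge(u, v, capacity=2.0 * J)
            G.add_edge(v, u, capacity=2.0 * J)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, source, sink)
spins = {u: (+1 if u in source_side else -1) for u in G if u not in (source, sink)}
print("Ground-state magnetisation per spin:", sum(spins.values()) / L**2)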

Relevance:

30.00%

Publisher:

Abstract:

Current interest in measuring quality of life has spurred the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; then administering each subtest to a different sample; and finally running the estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternatives for the configuration of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) the balance of common and unique items per subtest, (c) the characteristics of the common items, and (d) the criteria for the distribution of unique items across subtests. The results of this study indicate that maximizing accuracy in item parameter recovery requires subtests with the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for the distribution of unique items do not affect accuracy.
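
The hedged sketch below illustrates the structure of a concurrent anchor-item design as described above: an item pool split into subtests that share a small anchor set, with each sample answering only its own subtest, so the pooled data array is missing by design. All sizes and the placeholder responses are invented; a real calibration study would simulate responses from an IRT model such as the graded response model.

# Hedged sketch of a concurrent anchor-item calibration design (sizes invented):
# every subtest contains the same anchor items plus its own block of unique
# items, and each simulated sample only sees its own subtest.
import numpy as np

n_items, n_common, n_subtests = 60, 10, 5
persons_per_subtest = 200

items = np.arange(n_items)
common_items = items[:n_common]                          # anchors shared by all subtests
unique_items = items[n_common:].reshape(n_subtests, -1)  # disjoint unique blocks

n_persons = persons_per_subtest * n_subtests
responses = np.full((n_persons, n_items), np.nan)        # NaN = not administered
rng = np.random.default_rng(0)

for s in range(n_subtests):
    rows = np.arange(s * persons_per_subtest, (s + 1) * persons_per_subtest)
    administered = np.concatenate([common_items, unique_items[s]])
    # Placeholder Likert responses (1-5); a real study would draw them from an IRT model.
    responses[np.ix_(rows, administered)] = rng.integers(
        1, 6, size=(rows.size, administered.size)
    )

print(f"{np.isnan(responses).mean():.0%} of the pooled data array is missing by design")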