904 results for Lipschitzian bounds


Relevance: 10.00%

Abstract:

A central question in community ecology is how the number of trophic links relates to community species richness. For simple dynamical food-web models, link density (the ratio of links to species) is bounded from above as the number of species increases; but empirical data suggest that it increases without bound. We found a new empirical upper bound on link density in large marine communities with emphasis on fish and squid, using novel methods that avoid known sources of bias in traditional approaches. Bounds are expressed in terms of the diet-partitioning function (DPF): the average number of resources contributing more than a fraction f to a consumer's diet, as a function of f. All observed DPFs follow a functional form closely related to a power law, with power-law exponents independent of species richness to within measurement accuracy. These results imply universal upper bounds on link density across the oceans. However, the inherently scale-free nature of power-law diet partitioning suggests that the DPF itself is a better-defined characterization of network structure than link density.
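As a concrete reading of the definition, the sketch below (Python, with made-up diets; dpf and the fitted exponent nu are illustrative names, not from the paper) computes the DPF from per-consumer diet fractions and estimates a power-law exponent from its log-log slope.

import numpy as np

def dpf(diet_fractions, f_values):
    """Average number of resources contributing more than a fraction f
    to a consumer's diet, evaluated at each f in f_values."""
    return np.array([
        np.mean([np.sum(np.asarray(d) > f) for d in diet_fractions])
        for f in f_values
    ])

# Illustrative diets for three consumers (fractions per resource).
diets = [[0.6, 0.3, 0.1], [0.5, 0.25, 0.15, 0.1], [0.9, 0.1]]
f = np.logspace(-2, 0, 20)
curve = dpf(diets, f)
# A power law DPF(f) ~ f**(-nu) is a straight line in log-log space;
# the negated slope below estimates the power-law exponent.
mask = curve > 0
nu = -np.polyfit(np.log(f[mask]), np.log(curve[mask]), 1)[0]
print(nu)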

Relevance: 10.00%

Abstract:

Simultaneous multithreading (SMT) processors dynamically share processor resources between multiple threads. In general, shared SMT resources may be managed explicitly, for instance by dynamically setting queue occupation bounds for each thread, as in the DCRA and Hill-Climbing policies. Alternatively, resources may be managed implicitly; that is, resource usage is controlled by placing the desired instruction mix in the resources. In this case, the main resource management tool is the instruction fetch policy, which must predict the behavior of each thread (branch mispredictions, long-latency loads, etc.) as it fetches instructions.
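To make the explicit/implicit distinction concrete, here is a minimal toy sketch (my illustration; the actual DCRA and Hill-Climbing policies are considerably more involved): one fetch chooser enforces per-thread queue occupation bounds, the other steers resource usage purely through an ICOUNT-like fetch heuristic.

def fetch_explicit(in_flight, bound):
    # Explicit management: fetch only from threads still under their
    # per-thread queue occupation bound.
    eligible = [t for t, n in enumerate(in_flight) if n < bound[t]]
    return min(eligible, key=lambda t: in_flight[t]) if eligible else None

def fetch_implicit(in_flight):
    # Implicit management, ICOUNT-like: the thread occupying the fewest
    # resources wins, so usage is shaped by what gets fetched, not by caps.
    return min(range(len(in_flight)), key=lambda t: in_flight[t])

in_flight = [12, 3, 7]   # instructions each thread currently holds in queues
print(fetch_explicit(in_flight, bound=[8, 8, 8]))   # -> 1
print(fetch_implicit(in_flight))                    # -> 1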

Relevance: 10.00%

Abstract:

Morphometric study of modern ice masses is useful because many reconstructions of glaciers traditionally draw on their shape for guidance. Here we analyse data derived from the surface profiles of 200 modern ice masses (valley glaciers, icefields, ice caps, and ice sheets with length scales from 10⁰ to 10³ km) from different parts of the world. Four profile attributes are investigated: relief, span, and two parameters, C* and C, that result from using Nye's (1952) theoretical parabola as a profile descriptor. C* and C respectively measure each profile's aspect ratio and steepness, and are found to decrease in size and variability with span. This dependence quantifies the competing influences of unconstrained spreading behaviour of ice flow and bed topography on the profile shape of ice masses, which becomes more parabolic as span increases (with C* and C tending to low values of 2.5-3.3 m^(1/2)). The same data reveal coherent minimum bounds in C* and C for modern ice masses that we develop into two new methods of palaeo-glacier reconstruction. In the first method, glacial limits are known from moraines, and the bounds are used to constrain the lowest palaeo ice surface consistent with modern profiles. We give an example of applying this method over a three-dimensional glacial landscape in Kamchatka. In the second method, we test the plausibility of existing reconstructions by comparing their C* and C against the modern minimum bounds. Of the 86 published palaeo ice masses that we put to this test, 88% are found to be plausible. The search for other morphometric constraints will help us formalise glacier reconstructions and reduce their uncertainty and subjectiveness.
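As an illustration of using Nye's parabola as a profile descriptor, the sketch below (Python, synthetic data; fit_nye_c is an illustrative name, not from the paper) recovers a steepness parameter C in h = C·sqrt(x) by least squares, assuming x is distance from the ice margin.

import numpy as np

def fit_nye_c(x, h):
    """Least-squares estimate of C in the Nye profile h = C * sqrt(x)."""
    s = np.sqrt(x)
    return float(np.dot(s, h) / np.dot(s, s))

x = np.linspace(1.0, 5000.0, 200)                       # distance from margin (m)
h = 3.0 * np.sqrt(x) + np.random.normal(0, 5, x.size)   # noisy surface profile (m)
print(f"C ~ {fit_nye_c(x, h):.2f} m^(1/2)")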

Relevance: 10.00%

Abstract:

Local computation in join trees or acyclic hypertrees has been shown to be linked to a particular algebraic structure, called a valuation algebra. There are many models of this algebraic structure, ranging from probability theory to numerical analysis, relational databases, and various classical and non-classical logics. It turns out that many interesting models of valuation algebras may be derived from semiring-valued mappings. In this paper we study how valuation algebras are induced by semirings and how the structure of the valuation algebra is related to the algebraic structure of the semiring. In particular, c-semirings with idempotent multiplication induce idempotent valuation algebras and therefore permit particularly efficient architectures for local computation. Also important are semirings whose multiplicative semigroup is embedded in a union of groups. They induce valuation algebras with a partially defined division. For these valuation algebras, the well-known architectures for Bayesian networks apply. We also extend the general computational framework to allow derivation of bounds and approximations for when exact computation is not feasible.
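A minimal sketch of the semiring-to-valuation-algebra construction may help (my illustration, assuming binary variables and table-based valuations, not the paper's formalism): combination multiplies pointwise in the semiring over the joined domain, and marginalization sums out variables with semiring addition.

from itertools import product

class Semiring:
    def __init__(self, plus, times, zero, one):
        self.plus, self.times, self.zero, self.one = plus, times, zero, one

# Two familiar instances: probability and the tropical (min, +) semiring.
prob = Semiring(lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)
tropical = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)

class Valuation:
    def __init__(self, sr, variables, table):
        self.sr, self.vars, self.table = sr, tuple(variables), table

    def combine(self, other):
        # Combination: semiring multiplication on the union of domains.
        out_vars = self.vars + tuple(v for v in other.vars if v not in self.vars)
        table = {}
        for cfg in product([0, 1], repeat=len(out_vars)):   # binary variables
            a = self.table[tuple(cfg[out_vars.index(v)] for v in self.vars)]
            b = other.table[tuple(cfg[out_vars.index(v)] for v in other.vars)]
            table[cfg] = self.sr.times(a, b)
        return Valuation(self.sr, out_vars, table)

    def marginalize(self, keep):
        # Marginalization: semiring addition over eliminated variables.
        keep = tuple(v for v in self.vars if v in keep)
        table = {}
        for cfg, val in self.table.items():
            key = tuple(cfg[self.vars.index(v)] for v in keep)
            table[key] = self.sr.plus(table.get(key, self.sr.zero), val)
        return Valuation(self.sr, keep, table)

# Usage: combine two tropical "cost potentials" and eliminate variable y.
f = Valuation(tropical, ("x", "y"), {(0, 0): 1, (0, 1): 3, (1, 0): 0, (1, 1): 2})
g = Valuation(tropical, ("y",), {(0,): 5, (1,): 1})
print(f.combine(g).marginalize(("x",)).table)   # min cost over y, per x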

Relevance: 10.00%

Abstract:

An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for flows in each pathway. An important first step in this process is to provide reliable estimates of the slower-responding groundwater pathways and subsequently the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach that applies different hydrograph separation techniques, supplemented by lumped hydrological modelling, to calculate the Baseflow Index (BFI) and develop an integrated approach to hydrograph separation. A semi-distributed, lumped, deterministic rainfall-runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km²). While this modelling approach is useful as a validation method, NAM itself is also an important tool for investigation. The separation techniques produce a large variation in BFI: a difference of 0.741 is predicted for one catchment when the less reliable fixed-interval, sliding-interval, and local-minimum turning-point methods are included. This variation is reduced to 0.167 with these methods omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. It is observed that, while the different separation techniques give varying BFI values for each of the catchments, a recharge coefficient approach developed in Ireland, when applied in conjunction with the Master Recession Curve Tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments' geology. These two separation methods, in conjunction with the NAM model, were selected to form an integrated approach to assessing BFI in catchments.
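For reference, here is a minimal sketch of one of the named techniques, the Eckhardt recursive digital filter; the recession constant a, BFImax, and the flow series are illustrative values, not taken from the study.

def eckhardt_baseflow(q, a=0.98, bfi_max=0.80):
    """Return the baseflow series for streamflow series q using the
    Eckhardt two-parameter recursive digital filter."""
    b = [q[0] * bfi_max]   # initialize baseflow as a fraction of first flow
    for t in range(1, len(q)):
        bt = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * q[t]) \
             / (1 - a * bfi_max)
        b.append(min(bt, q[t]))   # baseflow cannot exceed total streamflow
    return b

q = [5.0, 9.0, 20.0, 14.0, 8.0, 6.0, 5.5, 5.2]   # daily flows (m^3/s)
b = eckhardt_baseflow(q)
bfi = sum(b) / sum(q)   # Baseflow Index: baseflow share of total flow
print(f"BFI ~ {bfi:.2f}")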

Relevance: 10.00%

Abstract:

A web service is a remote computational facility which is made available for general use by means of the internet. An orchestration is a multi-threaded computation which invokes remote services. In this paper, game theory is used to analyse the behaviour of orchestration evaluations when the underlying web services are unreliable. Uncertainty profiles are proposed as a means of defining bounds on the number of service failures that can be expected during an orchestration evaluation. An uncertainty profile describes a strategic situation that can be analysed using a zero-sum angel-daemon game with two competing players: an angel a, whose objective is to minimize damage to an orchestration, and a daemon d, who acts in a destructive fashion. An uncertainty profile is assessed using the value of its angel-daemon game. It is shown that uncertainty profiles form a partial order which is monotonic with respect to assessment.
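To illustrate the assessment step, the sketch below (Python with SciPy; the payoffs are made-up damage scores, not from the paper) computes the value of a small zero-sum angel-daemon game via the standard minimax linear program.

import numpy as np
from scipy.optimize import linprog

def game_value(payoff):
    """Value of a zero-sum matrix game (row player maximizes payoff)."""
    m, n = payoff.shape
    # Variables: the row mixed strategy x (m entries) and the value v.
    c = np.zeros(m + 1); c[-1] = -1.0                 # minimize -v == maximize v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])    # v - x^T A_j <= 0, each column j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0    # x sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)])
    return -res.fun

# Rows: angel strategies; columns: daemon strategies; entries: damage.
# Negating makes the maximizing row player the damage-limiting angel.
damage = np.array([[2.0, 5.0], [4.0, 1.0]])
value = game_value(-damage)
print(-value)   # expected damage under optimal play: 3.0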

Relevance: 10.00%

Abstract:

We present pollen records from three sites in south Westland, New Zealand, that document past vegetation and inferred climate change between approximately 30,000 and 15,000 cal. yr BP. Detailed radiocarbon dating of the enclosing sediments at one of those sites, Galway tarn, provides a more robust chronology for the structure and timing of climate-induced vegetation change than has previously been possible in this region. The Kawakawa/Oruanui tephra, a key isochronous marker, affords a precise stratigraphic link across all three pollen records, while other tie points are provided by key pollen-stratigraphic changes which appear to be synchronous across all three sites. Collectively, the records show three episodes in which grassland, interpreted as indicating mostly cold subalpine to alpine conditions, was prevalent in lowland south Westland, separated by phases dominated by subalpine shrubs and montane-lowland trees, indicating milder interstadial conditions. Dating, expressed as a Bayesian-estimated single 'best' age followed in parentheses by the younger/older bounds of the 95% confidence modelled age range, indicates that a cold stadial episode, whose onset was marked by replacement of woodland by grassland, occurred between 28,730 (29,390-28,500) and 25,470 (26,090-25,270) cal. yr BP (years before AD 1950), prior to the deposition of the Kawakawa/Oruanui tephra. Milder interstadial conditions prevailed between 25,470 (26,090-25,270) and 24,400 (24,840-24,120) cal. yr BP and between 22,630 (22,930-22,340) and 21,980 (22,210-21,580) cal. yr BP, separated by a return to cold stadial conditions between 24,400 and 22,630 cal. yr BP. A final episode of grass-dominated vegetation, indicating cold stadial conditions, occurred from 21,980 (22,210-21,580) to 18,490 (18,670-17,950) cal. yr BP. The decline in grass pollen, indicating progressive climate amelioration, was well advanced by 17,370 (17,730-17,110) cal. yr BP, indicating that the onset of the termination in south Westland occurred sometime between ca 18,490 and ca 17,370 cal. yr BP. A similar general pattern of stadials and interstadials is seen, to varying degrees of resolution but generally with lesser chronological control, in many other paleoclimate proxy records from the New Zealand region. This highly resolved chronology of vegetation changes from southwestern New Zealand contributes to the examination of past climate variations in the southwest Pacific region. The stadial and interstadial episodes defined by south Westland pollen records represent notable climate variability during the latter part of the Last Glaciation. Similar climatic patterns recorded farther afield, for example from Antarctica and the Southern Ocean, imply that climate variations during the latter part of the Last Glaciation and the transition to the Holocene interglacial were inter-regionally extensive in the Southern Hemisphere and thus important to understand in detail and to place into a global context.

Relevance: 10.00%

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the elected leader is. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
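For intuition about the implicit task itself, here is a textbook flooding sketch (my illustration, not the paper's O(m)-message algorithm): each node draws a random rank, maxima are flooded for D rounds, and only the node whose own rank survives learns it is the leader.

import random

def elect(adj, diameter):
    """Implicit leader election by flooding random ranks for `diameter`
    synchronous rounds; this uses O(m * D) messages, far from the O(m)
    bound discussed above, but shows the task: only the winner learns
    that it is the leader."""
    rank = {v: (random.getrandbits(64), v) for v in adj}   # ties broken by id
    best = dict(rank)
    for _ in range(diameter):
        nxt = dict(best)
        for v in adj:
            for u in adj[v]:              # one message per edge direction
                if best[v] > nxt[u]:
                    nxt[u] = best[v]
        best = nxt
    # A node is the (implicit) leader iff its own rank is still the best it knows.
    return [v for v in adj if best[v] == rank[v]]

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # 4-cycle, diameter 2
print(elect(adj, diameter=2))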

Relevance: 10.00%

Abstract:

We consider the problem of self-healing in reconfigurable networks, e.g., peer-to-peer and wireless mesh networks. For such networks under repeated attack by an omniscient adversary, we propose a fully distributed algorithm, Xheal, that maintains good expansion and spectral properties of the network, while keeping the network connected. Moreover, Xheal does this while allowing only low stretch and degree increase per node. The algorithm heals global properties like expansion and stretch while only making local changes and using only local information. We also provide bounds on the second smallest eigenvalue of the Laplacian, which captures key properties such as mixing time, conductance, and congestion in routing. Xheal has low amortized latency and bandwidth requirements. Our work improves over the self-healing algorithms Forgiving tree [PODC 2008] and Forgiving graph [PODC 2009] in that we are able to give guarantees on degree and stretch, while at the same time preserving the expansion and spectral properties of the network.
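For readers unfamiliar with the spectral quantity involved, the sketch below (Python/numpy, illustrative graphs) computes the second smallest Laplacian eigenvalue, the algebraic connectivity that the bounds above refer to.

import numpy as np

def lambda2(adj):
    """Second smallest Laplacian eigenvalue of an undirected graph given
    as a dense 0/1 adjacency matrix."""
    a = np.asarray(adj, dtype=float)
    lap = np.diag(a.sum(axis=1)) - a
    return np.sort(np.linalg.eigvalsh(lap))[1]

path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]       # path on 3 nodes
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # adding an edge raises lambda2
print(lambda2(path), lambda2(triangle))        # 1.0, 3.0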

Relevance: 10.00%

Abstract:

We consider the problem of self-healing in networks that are reconfigurable in the sense that they can change their topology during an attack. Our goal is to maintain connectivity in these networks, even in the presence of repeated adversarial node deletion, by carefully adding edges after each attack. We present a new algorithm, DASH, that provably ensures that: 1) the network stays connected even if an adversary deletes up to all nodes in the network; and 2) no node ever increases its degree by more than 2 log n, where n is the number of nodes initially in the network. DASH is fully distributed; adds new edges only among neighbors of deleted nodes; and has average latency and bandwidth costs that are at most logarithmic in n. DASH has these properties irrespective of the topology of the initial network, and is thus orthogonal and complementary to traditional topology-based approaches to defending against attack. We also prove lower bounds showing that DASH is asymptotically optimal in terms of minimizing maximum degree increase over multiple attacks. Finally, we present empirical results on power-law graphs that show that DASH performs well in practice, and that it significantly outperforms naive algorithms in reducing maximum degree increase.
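As a caricature of the repair step (this is the naive baseline idea, not the published DASH algorithm, whose contribution is keeping the cumulative increase to 2 log n over repeated attacks), the sketch below reconnects a deleted node's surviving neighbors in a cycle, preserving connectivity at a cost of at most 2 extra degrees per neighbor per attack.

def heal_delete(adj, victim):
    """Delete `victim` from the graph (dict of node -> set of neighbors)
    and join its former neighbors in a cycle."""
    nbrs = sorted(adj.pop(victim))
    for v in adj:
        adj[v].discard(victim)
    for i, v in enumerate(nbrs):           # cycle over the orphaned neighbors
        u = nbrs[(i + 1) % len(nbrs)]
        if u != v:
            adj[v].add(u); adj[u].add(v)
    return adj

star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(heal_delete(star, 0))   # leaves a 4-cycle: still connected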

Relevance: 10.00%

Abstract:

Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
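The origin of such a ceiling can be seen on a single impaired link (a simplified illustration with an aggregate impairment level κ; the paper's dual-hop expressions combine the per-hop levels analogously): the distortion power scales with the signal power, so the SNDR saturates,

    \mathrm{SNDR} = \frac{P\,|h|^{2}}{\kappa^{2} P\,|h|^{2} + \sigma^{2}} \;\longrightarrow\; \frac{1}{\kappa^{2}} \quad \text{as } P/\sigma^{2} \to \infty,

which is why the ceiling is inversely proportional to the (squared) level of impairments, whereas with ideal hardware (κ = 0) the SNDR grows without bound.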

Relevance: 10.00%

Abstract:

A new strategy for remote reconfiguration of an antenna array's far-field radiation pattern is described. The scheme uses a pilot tone co-transmitted with a carrier signal from a location distant from that of a receive antenna array whose far-field pattern is to be reconfigured. By mixing the co-transmitted signals locally at each antenna element in the array, an IF signal is formed which defines an equivalent array spacing that can be made variable by tuning the frequency of the pilot tone with respect to the RF carrier. This makes the antenna array factor, and hence the far-field spatial characteristic, reconfigurable on receive. For a 10 x 1 microstrip patch element array, we show that the receive pattern's half-power beamwidth can be made to vary from 35 to 10 degrees as the difference frequency between the pilot and the 2.45 GHz carrier varies between 10 MHz and 500 MHz.
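A short worked relation may clarify why increasing the pilot-carrier offset Δf narrows the beam (my reading of the scheme, using the standard uniform broadside-array beamwidth approximation; the symbols are illustrative). Mixing the pilot at f_c + Δf with the carrier at f_c at element n, spaced d from its neighbor, leaves an IF phase gradient equal to that of an array operating at Δf:

    \varphi_{n} = \frac{2\pi\, n\, d \sin\theta\, \Delta f}{c}, \qquad \mathrm{HPBW} \approx 0.886\, \frac{c}{N\, d\, \Delta f} \;\text{(radians)},

so the half-power beamwidth shrinks roughly as 1/Δf, consistent with the reported 35-to-10-degree variation.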

Relevance: 10.00%

Abstract:

The Ziegler Reservoir fossil site near Snowmass Village, Colorado, provides a unique opportunity to reconstruct high-altitude paleoenvironmental conditions in the Rocky Mountains during the last interglacial period. We used four different techniques to establish a chronological framework for the site. Radiocarbon dating of lake organics, bone collagen, and shell carbonate, and in situ cosmogenic ¹⁰Be and ²⁶Al ages on a boulder on the crest of a moraine that impounded the lake, suggest that the ages of the sediments that hosted the fossils are between ~140 ka and >45 ka. Uranium-series ages of vertebrate remains generally fall within these bounds, but extremely low uranium concentrations and evidence of open-system behavior limit their utility. Optically stimulated luminescence (OSL) ages (n = 18) obtained from fine-grained quartz maintain stratigraphic order, were replicable, and provide reliable ages for the lake sediments. Analysis of the equivalent dose (De) dispersion of the OSL samples showed that the sediments were fully bleached prior to deposition, and low scatter suggests that eolian processes were likely the dominant transport mechanism for fine-grained sediments into the lake. The resulting ages show that the fossil-bearing sediments span the latest part of marine isotope stage (MIS) 6, all of MIS 5 and MIS 4, and the earliest part of MIS 3.

Relevance: 10.00%

Abstract:

Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule - a synchronous approach. At present, a widely used modeling assumption is perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model - leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as the step between the mean-delay and the peak-delay constrained capacities. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links does not need to be significantly larger than the drift velocity of the information link in order to achieve the variance-constrained capacity with perfect synchronization.
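For a feel of the model class, the sketch below (Python/numpy; all parameters are illustrative and the setup is simplified relative to the paper) simulates a drift-diffusion timing channel: travel times are inverse-Gaussian first-passage times for a molecule drifting distance d at velocity v with diffusion coefficient D, and release timestamps are jittered to mimic synchronization error.

import numpy as np

rng = np.random.default_rng(0)

def arrival_times(release, d=1.0, v=2.0, D=0.5, clock_jitter=0.05):
    """Observed arrival times: jittered release time plus inverse-Gaussian
    travel time with mean d/v and shape d^2/(2*D)."""
    release = np.asarray(release, dtype=float)
    travel = rng.wald(mean=d / v, scale=d**2 / (2 * D), size=release.shape)
    sync_error = rng.normal(0.0, clock_jitter, size=release.shape)
    return release + sync_error + travel

release = np.array([0.0, 1.0, 2.5])   # information encoded in the delays
print(arrival_times(release))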

Relevance: 10.00%

Abstract:

We present results from SEPPCoN, an ongoing Survey of the Ensemble Physical Properties of Cometary Nuclei. In this report we discuss mid-infrared measurements of the thermal emission from 89 nuclei of Jupiter-family comets (JFCs). All data were obtained in 2006 and 2007 using the imaging capabilities of the Spitzer Space Telescope. The comets were typically 4-5 AU from the Sun when observed, and most showed only a point source with little or no extended emission from dust. For those comets showing dust, we used image processing to photometrically extract the nuclei. For all 89 comets we present new effective radii, and for 57 comets we present beaming parameters. Thus our survey provides the largest compilation of radiometrically derived physical properties of nuclei to date. We have six main conclusions: (a) The average beaming parameter of the JFC population is 1.03 ± 0.11, consistent with unity; coupled with the large distance of the nuclei from the Sun, this indicates that most nuclei have Tempel 1-like thermal inertia. Only two of the 57 nuclei had outlying values (in a statistical sense) of infrared beaming. (b) The known JFC population is not complete even at 3 km radius, even for comets that approach to ~2 AU from the Sun and so ought to be more discoverable. Several recently discovered comets in our survey have small perihelia and large (above ~2 km) radii. (c) With our radii, we derive an independent estimate of the JFC nuclear cumulative size distribution (CSD), and we find that it has a power-law slope of around -1.9, with the exact value depending on the bounds in radius. (d) This power law is close to that derived by others from visible-wavelength observations that assume a fixed geometric albedo, suggesting that there is no strong dependence of geometric albedo on radius. (e) The observed CSD shows a hint of structure, with an excess of comets with radii of 3-6 km. (f) Our CSD is consistent with the idea that the intrinsic size distribution of the JFC population is not a simple power law and lacks many sub-kilometer objects.
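As an illustration of point (c), the sketch below (Python, synthetic radii, not SEPPCoN data; csd_slope is an illustrative name) fits the log-log slope of the cumulative size distribution N(>R) between chosen radius bounds, echoing the caveat that the recovered slope depends on those bounds.

import numpy as np

def csd_slope(radii, r_min, r_max):
    """Log-log slope of the cumulative size distribution N(>R) fitted
    between r_min and r_max."""
    r = np.sort(np.asarray(radii, dtype=float))
    n_gt = len(r) - np.arange(len(r))        # N(>= R_i), counting ties loosely
    mask = (r >= r_min) & (r <= r_max)
    return np.polyfit(np.log(r[mask]), np.log(n_gt[mask]), 1)[0]

rng = np.random.default_rng(1)
radii = 1.0 * (rng.pareto(1.9, 500) + 1.0)   # synthetic power-law-like radii
print(csd_slope(radii, 1.5, 10.0))           # slope near -1.9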