956 results for bounded lattices
Abstract:
The deposition of ultrasonic energy in tissue can cause tissue damage due to local heating. For pressures above a critical threshold, cavitation will occur in tissue and bubbles will be created. These oscillating bubbles can induce a much larger thermal energy deposition in the local region. Traditionally, clinicians and researchers have not exploited this bubble-enhanced heating since cavitation behavior is erratic and very difficult to control. The present work is an attempt to control and utilize this bubble-enhanced heating. First, by applying appropriate bubble dynamic models, limits on the asymptotic bubble size distribution are obtained for different driving pressures at 1 MHz. The size distributions are bounded by two thresholds: the bubble shape instability threshold and the rectified diffusion threshold. The growth rate of bubbles in this region is also given, and the resulting time evolution of the heating in a given insonation scenario is modeled. In addition, some experimental results have been obtained to investigate the bubble-enhanced heating in an agar- and graphite-based tissue-mimicking material. Heating as a function of dissolved gas concentrations in the tissue phantom is investigated. Bubble-based contrast agents are introduced to investigate the effect on the bubble-enhanced heating, and to control the initial bubble size distribution. The mechanisms of cavitation-related bubble heating are investigated, and a heating model is established using our understanding of the bubble dynamics. By fitting appropriate bubble densities in the ultrasound field, the peak temperature changes are simulated. The results for required bubble density are given. Finally, a simple bubbly liquid model is presented to estimate the shielding effects which may be important even for low void fraction during high intensity focused ultrasound (HIFU) treatment.
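The abstract does not reproduce the bubble dynamic model itself. As generic background only, a commonly used single-bubble model of this kind is the Rayleigh-Plesset equation for the radius R(t) of a spherical bubble driven by an acoustic pressure field; the specific model and parameter choices used in this work are not stated above, so the following is merely an orienting sketch.

```latex
% Rayleigh-Plesset equation: a standard single-bubble dynamic model,
% shown as generic background (the work's actual model may differ).
% R(t): bubble radius, rho: liquid density, p_B(t): pressure inside the bubble,
% p_inf(t): driving acoustic pressure far from the bubble,
% sigma: surface tension, mu: liquid viscosity.
\[
  R\ddot{R} + \tfrac{3}{2}\dot{R}^{2}
  = \frac{1}{\rho}\left( p_{B}(t) - p_{\infty}(t)
    - \frac{2\sigma}{R} - \frac{4\mu\dot{R}}{R} \right)
\]
```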
Abstract:
We investigate the efficient learnability of unions of k rectangles in the discrete plane {1,...,n}^2 with equivalence and membership queries. We exhibit a learning algorithm that learns any union of k rectangles with O(k^3 log n) queries, while the time complexity of this algorithm is bounded by O(k^5 log n). We design our learning algorithm by finding "corners" and "edges" for rectangles contained in the target concept and then constructing the target concept from those "corners" and "edges". Our result provides a first approach to on-line learning of nontrivial subclasses of unions of intersections of halfspaces with equivalence and membership queries.
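As a rough illustration of the query model referred to above (not the authors' corner- and edge-finding algorithm), the sketch below defines membership and equivalence oracles for a hidden union of axis-aligned rectangles over {1,...,n}^2; all names and the brute-force counterexample search are hypothetical.

```python
import itertools

# A hidden target concept: a union of k axis-aligned rectangles in {1,...,n}^2.
# This sketch only models the two oracle types such a learner relies on;
# it does not implement the corner/edge-finding learner itself.

def member(rects, point):
    """Membership query: is `point` inside some rectangle of the target?"""
    x, y = point
    return any(x1 <= x <= x2 and y1 <= y <= y2 for (x1, y1, x2, y2) in rects)

def equivalent(target, hypothesis, n):
    """Equivalence query: return None if the hypothesis agrees with the target
    on all of {1,...,n}^2, otherwise return a counterexample point."""
    for p in itertools.product(range(1, n + 1), repeat=2):
        if member(target, p) != member(hypothesis, p):
            return p
    return None

# Tiny usage example with n = 8 and k = 2 hidden rectangles.
n = 8
target = [(1, 1, 3, 2), (5, 4, 8, 8)]
print(member(target, (2, 2)))        # True
print(equivalent(target, [], n))     # a counterexample, e.g. (1, 1)
```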
Abstract:
This paper presents a lower-bound result on the computational power of a genetic algorithm in the context of combinatorial optimization. We describe a new genetic algorithm, the merged genetic algorithm, and prove that for the class of monotonic functions, the algorithm finds the optimal solution, and does so with an exponential convergence rate. The analysis pertains to the ideal behavior of the algorithm where the main task reduces to showing convergence of probability distributions over the search space of combinatorial structures to the optimal one. We take exponential convergence to be indicative of efficient solvability for the sample-bounded algorithm, although a sampling theory is needed to better relate the limit behavior to actual behavior. The paper concludes with a discussion of some immediate problems that lie ahead.
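For readers unfamiliar with the setting, the sketch below is a completely generic genetic-algorithm loop maximizing a monotone function of a bit string; it only fixes terminology and is not the merged genetic algorithm analysed in the paper, whose construction the abstract does not give.

```python
import random

def generic_ga(fitness, n_bits=20, pop_size=30, generations=100, p_mut=0.05):
    """A plain genetic algorithm (tournament selection, one-point crossover,
    bit-flip mutation). Generic baseline only, not the merged genetic algorithm."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            a, b = random.sample(pop, 2)        # tournament of size 2
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Example: a monotone fitness function (number of ones in the bit string).
best = generic_ga(lambda bits: sum(bits))
print(sum(best))
```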
Abstract:
This paper describes an algorithm for scheduling packets in real-time multimedia data streams. Common to these classes of data streams are service constraints in terms of bandwidth and delay. However, it is typical for real-time multimedia streams to tolerate bounded delay variations and, in some cases, finite losses of packets. We have therefore developed a scheduling algorithm that assumes streams have window-constraints on groups of consecutive packet deadlines. A window-constraint defines the number of packet deadlines that can be missed in a window of deadlines for consecutive packets in a stream. Our algorithm, called Dynamic Window-Constrained Scheduling (DWCS), attempts to guarantee no more than x out of a window of y deadlines are missed for consecutive packets in real-time and multimedia streams. Using DWCS, the delay of service to real-time streams is bounded even when the scheduler is overloaded. Moreover, DWCS is capable of ensuring independent delay bounds on streams, while at the same time guaranteeing minimum bandwidth utilizations over tunable and finite windows of time. We show the conditions under which the total demand for link bandwidth by a set of real-time (i.e., window-constrained) streams can exceed 100% and still ensure all window-constraints are met. In fact, we show how it is possible to guarantee worst-case per-stream bandwidth and delay constraints while utilizing all available link capacity. Finally, we show how best-effort packets can be serviced with fast response time, in the presence of window-constrained traffic.
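The window-constraint itself is easy to express in code. The sketch below (hypothetical names, not the DWCS implementation) tracks, for a single stream, whether at most x deadlines are missed in each non-overlapping window of y consecutive packet deadlines, which is the guarantee described above.

```python
class WindowConstraint:
    """Track an x-out-of-y window-constraint for one stream:
    at most `x` deadlines may be missed in each non-overlapping window
    of `y` consecutive packet deadlines. Illustrative only; not DWCS itself."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.missed = 0   # deadlines missed in the current window
        self.seen = 0     # deadlines accounted for in the current window

    def record(self, deadline_met):
        self.seen += 1
        if not deadline_met:
            self.missed += 1
        violated = self.missed > self.x
        if self.seen == self.y:          # window complete: reset counters
            self.missed, self.seen = 0, 0
        return not violated

# Example: tolerate at most 1 miss per window of 4 deadlines.
wc = WindowConstraint(x=1, y=4)
outcomes = [True, False, True, True,    # window 1: 1 miss  -> constraint held
            False, False, True, True]   # window 2: 2 misses -> violated
print([wc.record(ok) for ok in outcomes])
```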
Abstract:
Mitchell defined and axiomatized a subtyping relationship (also known as containment, coercibility, or subsumption) over the types of System F (with "→" and "∀"). This subtyping relationship is quite simple and does not involve bounded quantification. Tiuryn and Urzyczyn quite recently proved this subtyping relationship to be undecidable. This paper supplies a new undecidability proof for this subtyping relationship. First, a new syntax-directed axiomatization of the subtyping relationship is defined. Then, this axiomatization is used to prove a reduction from the undecidable problem of semi-unification to subtyping. The undecidability of subtyping implies the undecidability of type checking for System F extended with Mitchell's subtyping, also known as "F plus eta".
Abstract:
Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales (self-similarity). In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment, i.e., one with bounded resources and coupling among traffic sources competing for resources, the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degradation of performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
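As a concrete example of the heavy-tailed file-size distributions discussed above, a Pareto distribution with shape parameter alpha <= 2 is a standard choice; the distribution and parameter values below are illustrative assumptions, not those of the paper's simulations.

```python
import random

def pareto_file_size(alpha=1.2, x_min=1000.0):
    """Sample a file size from a Pareto distribution:
    P[X > x] = (x_min / x)^alpha for x >= x_min.
    alpha <= 2 gives infinite variance, the heavy-tailed regime associated
    with self-similar traffic. Parameters here are illustrative only."""
    u = 1.0 - random.random()          # uniform in (0, 1]
    return x_min / (u ** (1.0 / alpha))

sizes = sorted(pareto_file_size() for _ in range(100_000))
print("median:", sizes[len(sizes) // 2])
print("99.9th percentile:", sizes[int(0.999 * len(sizes))])
# The extreme upper quantiles dwarf the median: a few huge transfers
# dominate the workload, which is what induces long-range dependence.
```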
Abstract:
System F is the well-known polymorphically-typed λ-calculus with universal quantifiers ("∀"). F+η is System F extended with the eta rule, which says that if term M can be given type τ and M η-reduces to N, then N can also be given the type τ. Adding the eta rule to System F is equivalent to adding the subsumption rule using the subtyping ("containment") relation that Mitchell defined and axiomatized [Mit88]. The subsumption rule says that if M can be given type τ and τ is a subtype of type σ, then M can be given type σ. Mitchell's subtyping relation involves no extensions to the syntax of types, i.e., no bounded polymorphism and no supertype of all types, and is thus unrelated to the system F≤("F-sub"). Typability for F+η is the problem of determining for any term M whether there is any type τ that can be given to it using the type inference rules of F+η. Typability has been proven undecidable for System F [Wel94] (without the eta rule), but the decidability of typability has been an open problem for F+η. Mitchell's subtyping relation has recently been proven undecidable [TU95, Wel95b], implying the undecidability of "type checking" for F+η. This paper reduces the problem of subtyping to the problem of typability for F+η, thus proving the undecidability of typability. The proof methods are similar in outline to those used to prove the undecidability of typability for System F, but the fine details differ greatly.
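Written as inference rules, the two extensions described above read as follows (a direct transcription of the prose, with the standard typing-judgement notation assumed):

```latex
% The eta rule and the subsumption rule for F+eta, as described in the text.
% Gamma |- M : tau is the usual typing judgement; <= is Mitchell's containment.
\[
  \frac{\Gamma \vdash M : \tau \qquad M \to_{\eta} N}
       {\Gamma \vdash N : \tau}\ (\textsc{Eta})
  \qquad
  \frac{\Gamma \vdash M : \tau \qquad \tau \le \sigma}
       {\Gamma \vdash M : \sigma}\ (\textsc{Sub})
\]
```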
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
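As a loose illustration of the two-tiered idea only (not the authors' algorithm, which uses three per-stream parameters and gives analytical drop bounds): reserve a base rate for each stream and size a single shared reservation from a high quantile of the summed excesses over those base rates, so that peaks are multiplexed statistically. All names and parameters below are hypothetical.

```python
import statistics

def two_tier_reservation(traces, quantile=0.999):
    """Toy two-tiered bandwidth allocation sketch.
    traces: per-stream lists of per-interval bit rates.
    Per-stream tier: the mean rate of each stream.
    Shared tier: a high quantile of the total excess over those means.
    Illustrative only; the paper's scheme differs in its parameters and bounds."""
    base = [statistics.mean(t) for t in traces]
    horizon = min(len(t) for t in traces)
    excess = [sum(max(t[i] - b, 0.0) for t, b in zip(traces, base))
              for i in range(horizon)]
    excess.sort()
    shared = excess[int(quantile * (len(excess) - 1))]
    return base, shared

# Example with three small synthetic VBR-like traces (units arbitrary).
traces = [[4, 9, 4, 4, 12, 4], [6, 6, 14, 6, 6, 6], [3, 3, 3, 10, 3, 3]]
per_stream, shared = two_tier_reservation(traces, quantile=0.9)
print(per_stream, shared)
```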
Abstract:
We propose a new technique for efficiently delivering popular content from information repositories with bounded file caches. Our strategy relies on the use of fast erasure codes (a.k.a. forward error correcting codes) to generate encodings of popular files, of which only a small sliding window is cached at any time instant, even to satisfy an unbounded number of asynchronous requests for the file. Our approach capitalizes on concurrency to maximize sharing of state across different request threads while minimizing cache memory utilization. Additional reduction in resource requirements arises from providing for a lightweight version of the network stack. In this paper, we describe the design and implementation of our Cyclone server as a Linux kernel subsystem.
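A toy sketch of the caching idea (hypothetical structure, not the Cyclone implementation): encoded blocks of a popular file are produced cyclically, only a small sliding window of them is resident at any instant, and every outstanding request simply consumes whatever the window currently holds until it has gathered enough distinct blocks to reconstruct the file.

```python
from collections import deque
from itertools import count, islice

def encoded_blocks(file_id, k):
    """Stand-in for an erasure encoder: yields an unbounded stream of encoded
    block IDs for `file_id`, cycling through 4*k distinct blocks. A real server
    would produce coded payloads; this sketch only tracks block identities."""
    for i in count():
        yield (file_id, i % (4 * k))

def serve(file_id, k=8, window_size=4, n_requests=3):
    """Keep only `window_size` encoded blocks cached at any instant; every
    active request gathers distinct block IDs from that shared window until it
    holds k of them (enough, for an ideal code, to reconstruct the file)."""
    stream = encoded_blocks(file_id, k)
    window = deque(islice(stream, window_size), maxlen=window_size)
    collected = [set() for _ in range(n_requests)]
    rounds = 0
    while not all(len(c) >= k for c in collected):
        for c in collected:
            for block in window:
                if len(c) >= k:
                    break
                c.add(block)
        window.append(next(stream))   # slide the cached window by one block
        rounds += 1
    return rounds

print(serve("popular.iso"))   # number of window slides until all requests finish
```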
Abstract:
Since Wireless Sensor Networks (WSNs) are subject to failures, fault-tolerance becomes an important requirement for many WSN applications. Fault-tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, which can switch from energy-efficient operation in normal monitoring to reliable and fast delivery for emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet deliveries from all sensor nodes. Topology design supports fault-tolerance by ensuring that there are alternative acceptable routes to data sinks when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes – those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays with minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must have length-bounded alternative paths to sinks when an arbitrary sensor node fails. We then evaluate the fault-tolerance of each topology in data gathering simulations using ER-MAC.
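A small sketch of the 'double-covered' condition used by the sink-placement problems above (an illustrative check only, not Counting-Paths or the GRASP heuristics): a sensor node counts as double-covered here if at least two distinct sinks lie within a hop bound of it, which a breadth-first search per node can verify.

```python
from collections import deque

def hops_within(graph, source, max_len):
    """Return the set of nodes reachable from `source` in at most `max_len` hops.
    `graph` is an adjacency dict: node -> iterable of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == max_len:
            continue
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def double_covered(graph, sensors, sinks, max_len):
    """A sensor is 'double-covered' here if at least two distinct sinks lie
    within `max_len` hops of it. Illustrative only: the thesis additionally
    considers alternative paths that survive node failures."""
    return {s: len(hops_within(graph, s, max_len) & set(sinks)) >= 2
            for s in sensors}

# Tiny example topology (undirected edges listed in both directions).
graph = {"a": ["b", "c"], "b": ["a", "t1"], "c": ["a", "t2"],
         "t1": ["b"], "t2": ["c"]}
print(double_covered(graph, sensors=["a", "b"], sinks=["t1", "t2"], max_len=2))
```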
Abstract:
Cultural Marxist Theory, commonly known as theory, enjoyed a moment of extraordinary success in the 1970s, when the works of leading post-war French philosophers were published in English. After relocating to Anglophone academia, however, theory disavowed its original concerns and lost its ambition to understand the world as a whole, becoming the play of heterogeneities associated with postcolonialism, multiculturalism and identity politics, commonly referred to as postmodern theory. This turn, which took place during a period that seemed to have spelt the death of Marxism, the 1990s, induced many of its supporters to engage in an ongoing funeral wake, recounting the merits of theory and dreaming of its resurgence. According to them, had theory been resurrected in historical circumstances completely different from those which had led to its rise, it would never have reacquired the significance that originally connoted it. This thesis demonstrates how theory has survived its demise and entirely regained its prominence in our socio-political context, marked by the effects of the latest crisis of capitalism and by the global threat of terrorisms rooted in messianic eschatologies. In its current form theory no longer needs to show allegiance to certain intellectual stances or political groupings in order to produce important reformulations of the projects it once gave life to. Though less overtly radical and epistemologically bounded, theory remains a necessary form of enquiry, justified by the political commitment which originated it in the first place. Its voice continues to speak to us about justice ‘where it is not yet, not yet there, where it is no longer’ (Derrida, 1993, XVIII).
A mathematical theory of stochastic microlensing. II. Random images, shear, and the Kac-Rice formula
Abstract:
Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge on the order of the number of stars. © 2009 American Institute of Physics.
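For context, the generic form of the Kac-Rice formula referred to above expresses the expected number of solutions of a random equation as an integral; the paper's precise hypotheses and lensing-specific specializations are not reproduced here.

```latex
% Generic Kac-Rice formula for the expected number of solutions x in D of
% f(x) = y, where f is a sufficiently regular random field from R^2 to R^2
% (e.g. a lensing map) and p_{f(x)} is the density of f(x).
\[
  \mathbb{E}\!\left[\#\{x \in D : f(x) = y\}\right]
  = \int_{D} \mathbb{E}\!\left[\,\left|\det \nabla f(x)\right|
      \,\middle|\, f(x) = y\,\right] p_{f(x)}(y)\, dx
\]
```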
Abstract:
Marine protected areas (MPAs) are often implemented to conserve or restore species, fisheries, habitats, ecosystems, and ecological functions and services; buffer against the ecological effects of climate change; and alleviate poverty in coastal communities. Scientific research provides valuable insights into the social and ecological impacts of MPAs, as well as the factors that shape these impacts, providing useful guidance or "rules of thumb" for science-based MPA policy. Both ecological and social factors foster effective MPAs, including substantial coverage of representative habitats and oceanographic conditions; diverse size and spacing; protection of habitat bottlenecks; participatory decisionmaking arrangements; bounded and contextually appropriate resource use rights; active and accountable monitoring and enforcement systems; and accessible conflict resolution mechanisms. For MPAs to realize their full potential as a tool for ocean governance, further advances in policy-relevant MPA science are required. These research frontiers include MPA impacts on nontarget and wide-ranging species and habitats; impacts beyond MPA boundaries, on ecosystem services, and on resource-dependent human populations, as well as potential scale mismatches of ecosystem service flows. Explicitly treating MPAs as "policy experiments" and employing the tools of impact evaluation holds particular promise as a way for policy-relevant science to inform and advance science-based MPA policy. © 2011 Wiley Periodicals, Inc.
Abstract:
During the summer of 1994, Archaeology in Annapolis conducted archaeological investigations of the city block bounded by Franklin, South and Cathedral Streets in the city of Annapolis. This Phase III excavation was conducted as a means to identify subsurface cultural resources in the impact area associated with the proposed construction of the Anne Arundel County Courthouse addition. This impact area included both the upper and lower parking lots used by Courthouse employees. Investigations were conducted in the form of mechanical trenching and hand excavated units. Excavations in the upper lot area yielded significant information concerning the interior area of the block. Known as Bellis Court, this series of rowhouses was constructed in the late nineteenth century and was used as rental properties by African-Americans. The dwellings remained until the middle of the twentieth century when they were demolished in preparation for the construction of a Courthouse addition. Portions of the foundation of a house owned by William H. Bellis in the 1870s were also exposed in this area. Construction of this house was begun by William Nicholson around 1730 and completed by Daniel Dulany in 1732/33. It was demolished in 1896 by James Munroe, a Trustee for Bellis. Excavations in the upper lot also revealed the remains of a late seventeenth/early eighteenth century wood-lined cellar, believed to be part of the earliest known structure on Lot 58. After an initially rapid deposition of fill around 1828, this cellar was gradually covered with soil throughout the remainder of the nineteenth century. The fill deposit in the cellar feature yielded a mixed assemblage of artifacts that included sherds of early materials such as North Devon gravel-tempered earthenware, North Devon sgraffito and Northern Italian slipware, along with creamware, pearlware and whiteware. In the lower parking lot, numerous artifacts were recovered from yard scatter associated with the houses that at one time fronted along Cathedral Street and were occupied by African-Americans. An assemblage of late seventeenth century/early eighteenth century materials and several slag deposits from an early forge were recovered from this second area of study. The materials associated with the forge, including portions of a crucible, provided evidence of some of the earliest industry in Annapolis. Investigations in both the upper and lower parking lots added to the knowledge of the changing landscape within the project area, including a prevalence of open space in early periods, a surprising survival of impermanent structures, and a gradual regrading and filling of the block with houses and interior courts. Excavations at the Anne Arundel County Courthouse proved this to be a multi-component site, rich in cultural resources from Annapolis' Early Settlement Period through its Modern Period (as specified by Maryland's Comprehensive Historic Preservation Plan (Weissman 1986)). This report provides detailed interpretations of the archaeological findings of these Phase III investigations.