921 results for Spectrally bounded


Relevance: 10.00%

Abstract:

System F is the well-known polymorphically-typed λ-calculus with universal quantifiers ("∀"). F+η is System F extended with the eta rule, which says that if term M can be given type τ and M η-reduces to N, then N can also be given the type τ. Adding the eta rule to System F is equivalent to adding the subsumption rule using the subtyping ("containment") relation that Mitchell defined and axiomatized [Mit88]. The subsumption rule says that if M can be given type τ and τ is a subtype of type σ, then M can be given type σ. Mitchell's subtyping relation involves no extensions to the syntax of types, i.e., no bounded polymorphism and no supertype of all types, and is thus unrelated to the system F≤ ("F-sub"). Typability for F+η is the problem of determining for any term M whether there is any type τ that can be given to it using the type inference rules of F+η. Typability has been proven undecidable for System F [Wel94] (without the eta rule), but the decidability of typability has been an open problem for F+η. Mitchell's subtyping relation has recently been proven undecidable [TU95, Wel95b], implying the undecidability of "type checking" for F+η. This paper reduces the problem of subtyping to the problem of typability for F+η, thus proving the undecidability of typability. The proof methods are similar in outline to those used to prove the undecidability of typability for System F, but the fine details differ greatly.
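As a sketch in standard natural-deduction notation (not Mitchell's exact presentation), the two rules in question read:

```latex
% Eta rule: typing is preserved under eta-reduction
\frac{\Gamma \vdash M : \tau \qquad M \to_{\eta} N}{\Gamma \vdash N : \tau}
\qquad\qquad
% Subsumption via the containment relation
\frac{\Gamma \vdash M : \tau \qquad \tau \subseteq \sigma}{\Gamma \vdash M : \sigma}
```

where a single eta step rewrites λx. M x to M, provided x does not occur free in M.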


Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
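To illustrate the shape of the argument (with hypothetical component names and a hypothetical reduction, not the paper's case studies): if every composition longer than some cutoff provably reduces to a shorter, causally indistinguishable one, then model checking only the irreducible compositions up to the cutoff verifies all compositions.

```python
from itertools import product

# Hypothetical setup: two component types, and one assumed-proven reduction
# saying two caches in sequence behave like one.
COMPONENTS = ["proxy", "cache"]
REDUCTIONS = {("cache", "cache"): ("cache",)}
CUTOFF = 3

def normalize(seq):
    """Apply reductions until a fixed point (a normal form) is reached."""
    seq = tuple(seq)
    changed = True
    while changed:
        changed = False
        for pat, rep in REDUCTIONS.items():
            for i in range(len(seq) - len(pat) + 1):
                if seq[i:i + len(pat)] == pat:
                    seq = seq[:i] + rep + seq[i + len(pat):]
                    changed = True
                    break
            if changed:
                break
    return seq

def verify_all(check):
    """Model-check only irreducible compositions of length <= CUTOFF.
    `check` stands in for a call to a finite-state model checker."""
    for n in range(1, CUTOFF + 1):
        for comp in product(COMPONENTS, repeat=n):
            if normalize(comp) == comp:  # only normal forms need checking
                if not check(comp):
                    return False
    return True
```

The induction is carried by the reductions: any longer composition normalizes into one of the finitely many checked configurations.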


The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
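A deliberately simplified sketch of the two-tier idea (mean-rate per-stream reservations plus a single shared slack sized for the worst aggregate excess; the paper's actual scheme uses three per-stream parameters and analytic loss bounds):

```python
# Illustrative two-tier bandwidth allocation, not the paper's algorithm:
# each stream reserves its mean rate, and one shared reservation absorbs
# the peaks that exceed the per-stream reservations.

def two_tier_allocation(traces):
    """traces: list of per-stream bandwidth traces (equal-length rate lists)."""
    per_stream = [sum(t) / len(t) for t in traces]  # mean-rate reservations
    horizon = min(len(t) for t in traces)
    # aggregate excess over the per-stream reservations at each instant
    excess = [
        sum(t[i] - r for t, r in zip(traces, per_stream))
        for i in range(horizon)
    ]
    shared = max(0.0, max(excess))  # shared slack covers the worst excess
    return per_stream, shared

# Two hypothetical VBR traces whose peaks do not coincide.
streams = [[4, 8, 4, 4], [5, 5, 9, 5]]
per_stream, shared = two_tier_allocation(streams)
total = sum(per_stream) + shared
```

Because the peaks of the two streams are statistically multiplexed in the shared tier, the slack needed is far smaller than the sum of the individual peak excesses.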


We propose a new technique for efficiently delivering popular content from information repositories with bounded file caches. Our strategy relies on the use of fast erasure codes (a.k.a. forward error correcting codes) to generate encodings of popular files, of which only a small sliding window is cached at any time instant, even to satisfy an unbounded number of asynchronous requests for the file. Our approach capitalizes on concurrency to maximize sharing of state across different request threads while minimizing cache memory utilization. Additional reduction in resource requirements arises from providing for a lightweight version of the network stack. In this paper, we describe the design and implementation of our Cyclone server as a Linux kernel subsystem.
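As a toy stand-in for the fast erasure codes used (random linear combinations over GF(2) rather than a production code), the following sketch shows why clients that gather any spanning set of coded blocks can decode, so only a small window of the encoding stream need ever be cached:

```python
import random

# Toy stand-in for a fast erasure code: each coded block is a random
# XOR combination of the file's blocks over GF(2). The server can cache
# just a sliding window of this unbounded stream, since any client
# holding a spanning set of coded blocks can decode the whole file.

def encode_stream(blocks, seed=0):
    """Yield an unbounded stream of (coefficients, payload) coded blocks."""
    rng = random.Random(seed)
    n = len(blocks)
    while True:
        coeffs = [rng.randint(0, 1) for _ in range(n)]
        if not any(coeffs):
            continue  # skip the useless all-zero combination
        payload = 0
        for c, b in zip(coeffs, blocks):
            if c:
                payload ^= b
        yield coeffs, payload

def decode(coded):
    """Recover the original blocks by Gaussian elimination over GF(2).
    Assumes the coefficient vectors of `coded` span GF(2)^n."""
    rows = [(list(c), p) for c, p in coded]
    n = len(rows[0][0])
    for col in range(n):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                ci, pi = rows[i]
                cc, pc = rows[col]
                rows[i] = ([a ^ b for a, b in zip(ci, cc)], pi ^ pc)
    return [p for _, p in rows[:n]]
```

For example, decoding the coded blocks ([1,0,0], 3), ([1,1,0], 6), ([1,1,1], 15) recovers the file blocks [3, 5, 9].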


Since Wireless Sensor Networks (WSNs) are subject to failures, fault-tolerance becomes an important requirement for many WSN applications. Fault-tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, a MAC protocol that can switch from energy-efficient operation in normal monitoring to reliable and fast delivery for emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet delivery from all sensor nodes. Topology design supports fault-tolerance by ensuring that there are alternative acceptable routes to data sinks when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes – those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays with minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must have length-bounded alternative paths to sinks when an arbitrary sensor node fails.
We then evaluate the fault-tolerance of each topology in data gathering simulations using ER-MAC.
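A greedy sketch of the path-counting subproblem (a lower bound on the number of vertex-disjoint length-bounded paths, not necessarily the thesis's Counting-Paths algorithm): repeatedly take a shortest path within the length budget l, then remove its internal vertices.

```python
from collections import deque

def shortest_path(adj, src, dst, max_len, banned):
    """BFS for a path from src to dst with at most max_len edges,
    avoiding vertices in `banned`. Returns the path or None."""
    prev = {src: None}
    q = deque([(src, 0)])
    while q:
        u, d = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        if d == max_len:
            continue  # length budget exhausted on this branch
        for v in adj.get(u, ()):
            if v not in prev and v not in banned:
                prev[v] = u
                q.append((v, d + 1))
    return None

def count_disjoint_paths(adj, src, dst, max_len):
    """Greedily count vertex-disjoint paths of length <= max_len."""
    banned, count = set(), 0
    while True:
        path = shortest_path(adj, src, dst, max_len, banned)
        if path is None:
            return count
        banned.update(path[1:-1])  # internal vertices become unavailable
        count += 1
```

On the diamond graph s→{a,b}→t, two disjoint length-2 paths exist, and none within a budget of one hop.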


The demand for optical bandwidth continues to increase year on year and is being driven primarily by entertainment services and video streaming to the home. Current photonic systems are coping with this demand by increasing data rates through faster modulation techniques, spectrally efficient transmission systems and by increasing the number of modulated optical channels per fibre strand. Such photonic systems are large and power hungry due to the high number of discrete components required in their operation. Photonic integration offers excellent potential for combining otherwise discrete system components together on a single device to provide robust, power efficient and cost effective solutions. In particular, the design of optical modulators has been an area of immense interest in recent times. Not only has research been aimed at developing modulators with faster data rates, but there has also been a push towards making modulators as compact as possible. Mach-Zehnder modulators (MZM) have proven to be highly successful in many optical communication applications. However, due to the relatively weak electro-optic effect on which they are based, they remain large with typical device lengths of 4 to 7 mm while requiring a travelling wave structure for high-speed operation. Nested MZMs have been extensively used in the generation of advanced modulation formats, where multi-symbol transmission can be used to increase data rates at a given modulation frequency. Such nested structures have high losses and require both complex fabrication and packaging. In recent times, it has been shown that electro-absorption modulators (EAMs) can be used in a specific arrangement to generate Quadrature Phase Shift Keying (QPSK) modulation. EAM based QPSK modulators have increased potential for integration and can be made significantly more compact than MZM based modulators. Such modulator designs suffer from losses in excess of 40 dB, which limits their use in practical applications.
The work in this thesis has focused on how these losses can be reduced by using photonic integration. In particular, the integration of multiple lasers with the modulator structure was considered as an excellent means of reducing fibre coupling losses while maximising the optical power on chip. A significant difficulty when using multiple integrated lasers in such an arrangement was to ensure coherence between the integrated lasers. The work investigated in this thesis demonstrates for the first time how optical injection locking between discrete lasers on a single photonic integrated circuit (PIC) can be used in the generation of coherent optical signals. This was done by first considering the monolithic integration of lasers and optical couplers to form an on chip optical power splitter, before then examining the behaviour of a mutually coupled system of integrated lasers. By operating the system in a highly asymmetric coupling regime, a stable phase locking region was found between the integrated lasers. It was then shown that in this stable phase locked region the optical outputs of each laser were coherent with each other and phase locked to a common master laser.


Cultural Marxist Theory, commonly known as theory, enjoyed a moment of extraordinary success in the 1970s, when the works of leading post-war French philosophers were published in English. After relocating to Anglophone academia, however, theory disavowed its original concerns and lost its ambition to understand the world as a whole, becoming the play of heterogeneities associated with postcolonialism, multiculturalism and identity politics, commonly referred to as postmodern theory. This turn, which took place during a period that seemed to have spelt the death of Marxism, the 1990s, induced many of its supporters to engage in an ongoing funeral wake designating the merits of theory and dreaming of its resurgence. According to them, had theory been resurrected in historical circumstances completely different from those which had led to its rise, it would have never reacquired the significance that had originally connoted it. This thesis demonstrates how theory has survived its demise and entirely regained its prominence in our socio-political context marked by the effects of the latest crisis of capitalism and by the global threat of terrorisms rooted in messianic eschatologies. In its current form theory no longer needs to show allegiance to certain intellectual stances or political groupings in order to produce important reformulations of the projects it once gave life to. Though less overtly radical and epistemologically bounded, theory remains a necessary form of enquiry justified by the political commitment which originated it in the first place. Its voice continues to speak to us about justice ‘where it is not yet, not yet there, where it is no longer’ (Derrida, 1993, XVIII).


Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.
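For reference, a shifted Cauchy density with location $x_0$ and scale $\gamma$ (in the paper, both limiting parameters are determined by the optical depth and the stellar parameters) has the form:

```latex
f(x) \;=\; \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^2 + \gamma^2},
\qquad
f(x) \sim \frac{\gamma}{\pi x^2} \ \text{as } |x| \to \infty,
```

whose quadratic tail decay is what makes the shear components heavy-tailed: a Cauchy law has no finite mean or variance.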


We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints for data cube estimation from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by combining a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a ten bead type fluorescence scene and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.
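The flavour of the reconstruction can be sketched with a 1D toy (subgradient descent on a TV-regularized least-squares objective; the paper solves a constrained problem on a full spectral data cube with a dedicated algorithm):

```python
import numpy as np

# Toy TV-regularized least squares: minimize ||A x - y||^2 + lam * TV(x)
# by subgradient descent, recovering a piecewise-constant signal from
# fewer measurements than unknowns.

def tv_reconstruct(A, y, lam=0.1, step=1e-3, iters=5000):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y)        # gradient of the data term
        d = np.diff(x)
        tv_sub = np.zeros_like(x)
        tv_sub[:-1] -= np.sign(d)           # subgradient of sum |x[i+1]-x[i]|
        tv_sub[1:] += np.sign(d)
        x -= step * (grad + lam * tv_sub)
    return x

rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(10), np.ones(10)])  # piecewise-constant
A = rng.standard_normal((12, 20))                     # underdetermined system
y = A @ x_true
x_hat = tv_reconstruct(A, y, lam=0.05)
```

The TV term rewards piecewise-constant solutions, which is what makes the underdetermined inversion well posed for such signals.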


Marine protected areas (MPAs) are often implemented to conserve or restore species, fisheries, habitats, ecosystems, and ecological functions and services; buffer against the ecological effects of climate change; and alleviate poverty in coastal communities. Scientific research provides valuable insights into the social and ecological impacts of MPAs, as well as the factors that shape these impacts, providing useful guidance or "rules of thumb" for science-based MPA policy. Both ecological and social factors foster effective MPAs, including substantial coverage of representative habitats and oceanographic conditions; diverse size and spacing; protection of habitat bottlenecks; participatory decision-making arrangements; bounded and contextually appropriate resource use rights; active and accountable monitoring and enforcement systems; and accessible conflict resolution mechanisms. For MPAs to realize their full potential as a tool for ocean governance, further advances in policy-relevant MPA science are required. These research frontiers include MPA impacts on nontarget and wide-ranging species and habitats; impacts beyond MPA boundaries, on ecosystem services, and on resource-dependent human populations, as well as potential scale mismatches of ecosystem service flows. Explicitly treating MPAs as "policy experiments" and employing the tools of impact evaluation holds particular promise as a way for policy-relevant science to inform and advance science-based MPA policy. © 2011 Wiley Periodicals, Inc.


During the summer of 1994, Archaeology in Annapolis conducted archaeological investigations of the city block bounded by Franklin, South and Cathedral Streets in the city of Annapolis. This Phase III excavation was conducted as a means to identify subsurface cultural resources in the impact area associated with the proposed construction of the Anne Arundel County Courthouse addition. This impact area included both the upper and lower parking lots used by Courthouse employees. Investigations were conducted in the form of mechanical trenching and hand excavated units. Excavations in the upper lot area yielded significant information concerning the interior area of the block. Known as Bellis Court, this series of rowhouses was constructed in the late nineteenth century and was used as rental properties by African-Americans. The dwellings remained until the middle of the twentieth century when they were demolished in preparation for the construction of a Courthouse addition. Portions of the foundation of a house owned by William H. Bellis in the 1870s were also exposed in this area. Construction of this house was begun by William Nicholson around 1730 and completed by Daniel Dulany in 1732/33. It was demolished in 1896 by James Munroe, a Trustee for Bellis. Excavations in the upper lot also revealed the remains of a late seventeenth/early eighteenth century wood-lined cellar, believed to be part of the earliest known structure on Lot 58. After an initially rapid deposition of fill around 1828, this cellar was gradually covered with soil throughout the remainder of the nineteenth century. The fill deposit in the cellar feature yielded a mixed assemblage of artifacts that included sherds of early materials such as North Devon gravel-tempered earthenware, North Devon sgraffito and Northern Italian slipware, along with creamware, pearlware and whiteware.
In the lower parking lot, numerous artifacts were recovered from yard scatter associated with the houses that at one time fronted along Cathedral Street and were occupied by African-Americans. An assemblage of late seventeenth century/early eighteenth century materials and several slag deposits from an early forge were recovered from this second area of study. The materials associated with the forge, including portions of a crucible, provided evidence of some of the earliest industry in Annapolis. Investigations in both the upper and lower parking lots added to the knowledge of the changing landscape within the project area, including a prevalence of open space in early periods, a surprising survival of impermanent structures, and a gradual regrading and filling of the block with houses and interior courts. Excavations at the Anne Arundel County Courthouse proved this to be a multi-component site, rich in cultural resources from Annapolis' Early Settlement Period through its Modern Period (as specified by Maryland's Comprehensive Historic Preservation Plan (Weissman 1986)). This report provides detailed interpretations of the archaeological findings of these Phase III investigations.


© 2016 The Author(s). Mid-ocean ridges display tectonic segmentation defined by discontinuities of the axial zone, and geophysical and geochemical observations suggest segmentation of the underlying magmatic plumbing system. Here, observations of tectonic and magmatic segmentation at ridges spreading from fast to ultraslow rates are reviewed in light of influential concepts of ridge segmentation, including the notion of hierarchical segmentation, spreading cells and centralized v. multiple supply of mantle melts. The observations support the concept of quasi-regularly spaced principal magmatic segments, which are 30-50 km long on average at fast- to slow-spreading ridges and fed by melt accumulations in the shallow asthenosphere. Changes in ridge properties approaching or crossing transform faults are often comparable with those observed at smaller offsets, and even very small discontinuities can be major boundaries in ridge properties. Thus, hierarchical segmentation models that suggest large-scale transform fault-bounded segmentation arises from deeper level processes in the asthenosphere than the finer-scale segmentation are not generally supported. The boundaries between some but not all principal magmatic segments defined by ridge axis geophysical properties coincide with geochemical boundaries reflecting changes in source composition or melting processes. Where geochemical boundaries occur, they can coincide with discontinuities of a wide range of scales.


Understanding tumor vascular dynamics through parameters such as blood flow and oxygenation can yield insight into tumor biology and therapeutic response. Hyperspectral microscopy enables optical detection of hemoglobin saturation or blood velocity by either acquiring multiple images that are spectrally distinct or by rapid acquisition at a single wavelength over time. However, the serial acquisition of spectral images over time prevents the ability to monitor rapid changes in vascular dynamics and cannot monitor concurrent changes in oxygenation and flow rate. Here, we introduce snapshot multispectral imaging (SS-MSI) for use in imaging the microvasculature in mouse dorsal-window chambers. By spatially multiplexing spectral information into a single-image capture, simultaneous acquisition of dynamic hemoglobin saturation and blood flow over time is achieved down to the capillary level and provides an improved optical tool for monitoring rapid in vivo vascular dynamics.


Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; we design algorithms that strive to balance the energy consumption and performance.
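For concreteness, the flow-time of a job is its completion time minus its release time; a tiny single-machine example (with hypothetical jobs) shows how the processing order alone changes average flow-time:

```python
# Flow-time (delay) of a job = completion time - release time.
# Single-machine illustration comparing FIFO with shortest-job-first
# (SRPT degenerates to this when all jobs are released at time 0).

def average_flow_time(jobs, order):
    """jobs: dict name -> (release, size); order: processing sequence.
    Assumes all releases are 0, so the machine is never idle."""
    t, total = 0, 0
    for name in order:
        release, size = jobs[name]
        t += size                 # job completes at time t
        total += t - release      # its flow-time
    return total / len(jobs)

jobs = {"a": (0, 10), "b": (0, 1), "c": (0, 2)}
fifo = average_flow_time(jobs, ["a", "b", "c"])  # completions 10, 11, 13
srpt = average_flow_time(jobs, ["b", "c", "a"])  # completions 1, 3, 13
```

Running the short jobs first shrinks the average flow-time from 34/3 to 17/3; on unrelated machines, where each job's size depends on the machine, such ordering questions become far harder.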

The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis can be placed in one of the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique for the offline flow-time optimization, and gives the first framework to analyze non-clairvoyant algorithms for unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP problem generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP problem and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and queuing-theoretic notions of stability and resource augmentation analysis.

3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with small price of anarchy. Our work gives the first linear/convex programming duality based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.


This article presents our most recent advances in synchronous fluorescence (SF) methodology for biomedical diagnostics. The SF method is characterized by simultaneously scanning both the excitation and emission wavelengths while keeping a constant wavelength interval between them. Compared to conventional fluorescence spectroscopy, the SF method simplifies the emission spectrum while enabling greater selectivity, and has been successfully used to detect subtle differences in the fluorescence emission signatures of biochemical species in cells and tissues. The SF method can be used in imaging to analyze dysplastic cells in vitro and tissue in vivo. Based on the SF method, here we demonstrate the feasibility of a time-resolved synchronous fluorescence (TRSF) method, which incorporates the intrinsic fluorescent decay characteristics of the fluorophores. Our prototype TRSF system has clearly shown its advantage in spectro-temporal separation of the fluorophores that were otherwise difficult to spectrally separate in SF spectroscopy. We envision that our previously-tested SF imaging and the newly-developed TRSF methods will combine their proven diagnostic potentials in cancer diagnosis to further improve the efficacy of SF-based biomedical diagnostics.
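Computationally, a synchronous spectrum is a fixed-offset diagonal slice of the excitation-emission matrix (EEM): the intensity at excitation wavelength λ and emission wavelength λ + Δλ. A minimal sketch on a hypothetical uniform wavelength grid:

```python
import numpy as np

# Synchronous fluorescence: scan excitation and emission together with a
# constant wavelength interval delta, i.e. read the EEM along a diagonal.

def synchronous_spectrum(eem, wavelengths, delta):
    """eem[i, j]: intensity at excitation wavelengths[i], emission
    wavelengths[j]. Assumes a uniform wavelength grid."""
    step = wavelengths[1] - wavelengths[0]
    shift = int(round(delta / step))
    n = len(wavelengths)
    # S(lam) = EEM(lam_ex = lam, lam_em = lam + delta)
    return np.array([eem[i, i + shift] for i in range(n - shift)])

wl = np.arange(300, 400, 10)           # hypothetical 10 nm grid
eem = np.arange(100).reshape(10, 10)   # dummy EEM for illustration
s = synchronous_spectrum(eem, wl, delta=20)
```

Choosing the offset Δλ selects which fluorophore features survive in the simplified spectrum, which is the source of the method's selectivity.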