Two new notions of reduction for terms of the λ-calculus are introduced and the question of whether a λ-term is beta-strongly normalizing is reduced to the question of whether a λ-term is merely normalizing under one of the new notions of reduction. This leads to a new way to prove beta-strong normalization for typed λ-calculi. Instead of the usual semantic proof style based on Girard's "candidats de réductibilité", termination can be proved using a decreasing metric over a well-founded ordering in a style more common in the field of term rewriting. This new proof method is applied to the simply-typed λ-calculus and the system of intersection types.
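As an illustration of the objects involved, here is a minimal, runnable Python sketch of beta reduction and leftmost-outermost normalization using de Bruijn indices. It does not reproduce the report's two new reduction notions; all names (`Var`, `Lam`, `App`, `beta_step`, `normalize`) are illustrative choices, not the report's.

```python
# Untyped lambda-calculus terms with de Bruijn indices (illustrative sketch).
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    n: int            # de Bruijn index

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    f: object
    a: object

def shift(t, d, cutoff=0):
    """Shift free variables of t (indices >= cutoff) by d."""
    if isinstance(t, Var):
        return Var(t.n + d) if t.n >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.f, d, cutoff), shift(t.a, d, cutoff))

def subst(t, s, j=0):
    """Substitute s for variable j in t, decrementing indices above j."""
    if isinstance(t, Var):
        return s if t.n == j else Var(t.n - 1 if t.n > j else t.n)
    if isinstance(t, Lam):
        return Lam(subst(t.body, shift(s, 1), j + 1))
    return App(subst(t.f, s, j), subst(t.a, s, j))

def beta_step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if isinstance(t, App) and isinstance(t.f, Lam):
        return subst(t.f.body, t.a)
    if isinstance(t, App):
        r = beta_step(t.f)
        if r is not None:
            return App(r, t.a)
        r = beta_step(t.a)
        return App(t.f, r) if r is not None else None
    if isinstance(t, Lam):
        r = beta_step(t.body)
        return Lam(r) if r is not None else None
    return None

def normalize(t, fuel=1000):
    """Iterate beta steps until a normal form is reached (or fuel runs out)."""
    while fuel > 0:
        r = beta_step(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    raise RuntimeError("no normal form found within fuel limit")

# (\x. x) (\y. y) reduces to \y. y
identity = Lam(Var(0))
assert normalize(App(identity, identity)) == identity
```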
Abstract:
This is an addendum to our technical report BUCS TR-94-014 of December 19, 1994. It clarifies some statements, adds information on some related research, includes a comparison with research by de Groote, and fixes two minor mistakes in a proof.
Abstract:
There are several proofs now for the stability of Toom's example of a two-dimensional stable cellular automaton and its application to fault-tolerant computation. Simon and Berman simplified and strengthened Toom's original proof; the present report is a simplified exposition of their proof.
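For readers unfamiliar with the automaton in question, Toom's rule can be stated in a few lines: each cell synchronously takes the majority vote of itself and its northern and eastern neighbours. A small illustrative sketch (finite grid, fixed zero boundary; not the report's proof machinery):

```python
# Toom's north-east-center (NEC) majority rule on a finite grid with a fixed
# all-zero boundary. Illustrative only; the report concerns the stability proof,
# not this simulation.

def toom_step(grid):
    """One synchronous update: each cell becomes the majority of itself,
    its northern neighbour, and its eastern neighbour (0 outside the grid)."""
    n = len(grid)
    def cell(r, c):
        return grid[r][c] if 0 <= r < n and 0 <= c < n else 0
    return [
        [1 if cell(r, c) + cell(r - 1, c) + cell(r, c + 1) >= 2 else 0
         for c in range(n)]
        for r in range(n)
    ]

# A finite island of 1-errors in a sea of 0s is eroded from its north-east corner.
grid = [[0] * 6 for _ in range(6)]
for r in range(2, 5):
    for c in range(2, 5):
        grid[r][c] = 1
for _ in range(10):
    grid = toom_step(grid)
assert all(v == 0 for row in grid for v in row)
```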
Abstract:
The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management. Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed. This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. 
This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
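A heavily simplified sketch of the client-side logging idea may help: each client appends redo records to its own local log, and recovery replays every log independently against the on-disk page version. The per-slot sequence-number scheme and all names below are illustrative assumptions, not the thesis's actual algorithms.

```python
# Illustrative sketch (not the thesis's algorithms): each client keeps its own
# local redo log. Because clients update disjoint slots of a page, and each slot
# carries its own sequence number, recovery can replay the logs in any order
# without ever merging them.

class ClientLog:
    def __init__(self):
        self.records = []          # (slot, new value, per-slot sequence number)

    def log_update(self, page, slot, value):
        """Log a redo record locally, then apply the update in memory."""
        seq = page["seq"][slot] + 1
        self.records.append((slot, value, seq))
        page["data"][slot] = value
        page["seq"][slot] = seq

def recover(disk_page, client_logs):
    """Redo every client's log against the on-disk page version. A record is
    applied only if the slot's on-disk sequence number is older."""
    page = {"data": list(disk_page["data"]), "seq": list(disk_page["seq"])}
    for log in client_logs:                      # cross-client order irrelevant:
        for slot, value, seq in log.records:     # clients touch disjoint slots
            if seq > page["seq"][slot]:
                page["data"][slot] = value
                page["seq"][slot] = seq
    return page

# Two clients update different slots of the same page; the disk version missed
# both updates, yet recovery restores them without merging the logs.
page = {"data": ["a", "b"], "seq": [0, 0]}
disk_snapshot = {"data": ["a", "b"], "seq": [0, 0]}   # stale on-disk copy
log1, log2 = ClientLog(), ClientLog()
log1.log_update(page, 0, "a'")
log2.log_update(page, 1, "b'")
recovered = recover(disk_snapshot, [log2, log1])      # any log order works
assert recovered["data"] == ["a'", "b'"]
```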
Abstract:
The popularity of TCP/IP, coupled with the promise of high-speed communication using Asynchronous Transfer Mode (ATM) technology, has prompted the network research community to propose a number of techniques to adapt TCP/IP to ATM network environments. ATM offers Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services for best-effort traffic, such as conventional file transfer. However, recent studies have shown that TCP/IP, when implemented using ABR or UBR, leads to serious performance degradations, especially when the utilization of network resources (such as switch buffers) is high. Proposed techniques (switch-level enhancements, for example) that attempt to patch up TCP/IP over ATMs have had limited success in alleviating this problem. The major reason for TCP/IP's poor performance over ATMs has been consistently attributed to packet fragmentation, which is the result of ATM's 53-byte cell-oriented switching architecture. In this paper, we present a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. At the core of TCP Boston is the Adaptive Information Dispersal Algorithm (AIDA), an efficient encoding technique that allows for dynamic redundancy control. AIDA makes TCP/IP's performance less sensitive to cell losses, thus ensuring a graceful degradation of TCP/IP's performance when faced with congested resources. In this paper, we introduce AIDA and overview the main features of TCP Boston. We present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time).
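The dispersal idea underlying AIDA can be illustrated with Rabin's classic information dispersal: encode a message into n fragments so that any m of them suffice to reconstruct it. The sketch below fixes m and n (AIDA's point, per the abstract, is adapting the redundancy dynamically) and uses GF(257) arithmetic for simplicity; all names are illustrative.

```python
# Sketch of the information-dispersal idea behind AIDA (after Rabin's IDA):
# any m of the n fragments reconstruct the message. Fragments 1..m carry the
# data itself (systematic code); the rest are polynomial extrapolations.

P = 257  # a prime slightly larger than the byte range

def lagrange_eval(points, x):
    """Evaluate the unique degree < len(points) polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # pow(., P-2, P) = inverse
    return total

def disperse(data, m, n):
    """Encode `data` (bytes, length divisible by m) into fragments numbered 1..n."""
    frags = {x: [] for x in range(1, n + 1)}
    for off in range(0, len(data), m):
        # chunk bytes are the polynomial's values at x = 1..m
        pts = [(i + 1, data[off + i]) for i in range(m)]
        for x in range(1, n + 1):
            frags[x].append(lagrange_eval(pts, x))
    return frags

def reconstruct(available, m):
    """Rebuild the message from any m fragments, given as {fragment number: symbols}."""
    chosen = list(available.items())[:m]
    out = bytearray()
    for k in range(len(chosen[0][1])):
        pts = [(x, frag[k]) for x, frag in chosen]
        out.extend(lagrange_eval(pts, i + 1) for i in range(m))
    return bytes(out)

frags = disperse(b"TCPBoston!", m=2, n=5)           # 2.5x bandwidth, tolerates 3 losses
assert reconstruct({3: frags[3], 5: frags[5]}, m=2) == b"TCPBoston!"
```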
Abstract:
While ATM bandwidth-reservation techniques are able to offer the guarantees necessary for the delivery of real-time streams in many applications (e.g. live audio and video), they suffer from many disadvantages that make them unattractive (or impractical) for many others. These limitations, coupled with the flexibility and popularity of TCP/IP as a best-effort transport protocol, have prompted the network research community to propose and implement a number of techniques that adapt TCP/IP to the Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services in ATM network environments. This allows these environments to smoothly integrate (and make use of) currently available TCP-based applications and services without much (if any) modification. However, recent studies have shown that TCP/IP, when implemented over ATM networks, is susceptible to serious performance limitations. In a recently completed study, we have unveiled a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. In this paper, we demonstrate the real-time features of TCP Boston that allow communication bandwidth to be traded off for timeliness. We start with an overview of the protocol. Next, we analytically characterize the dynamic redundancy control features of TCP Boston. We then present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput and percent of missed deadlines) and real-time application-centric metrics (e.g., response time and jitter).
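The bandwidth-for-timeliness trade-off can be made concrete with a back-of-the-envelope calculation: sending n fragments when any m reconstruct costs n/m times the bandwidth but raises the probability that enough fragments arrive without retransmission. The independent-loss model and the numbers below are assumptions for illustration, not the paper's analysis.

```python
# Illustrative trade-off: fragments are lost independently with probability
# p_loss, and delivery succeeds without retransmission when at least m of the
# n fragments sent arrive.

from math import comb

def p_deliverable(n, m, p_loss):
    """Probability that at least m of n independently sent fragments arrive."""
    return sum(comb(n, k) * (1 - p_loss) ** k * p_loss ** (n - k)
               for k in range(m, n + 1))

m, p_loss = 8, 0.1
for n in range(m, m + 4):
    print(f"n={n}: bandwidth overhead {n / m:.2f}x, "
          f"P(no retransmission) = {p_deliverable(n, m, p_loss):.3f}")
```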
Abstract:
We present a technique to derive depth lower bounds for quantum circuits. The technique is based on the observation that in circuits without ancillae, only a few input states can set all the control qubits of a Toffoli gate to 1. This can be used to selectively remove large Toffoli gates from a quantum circuit while keeping the cumulative error low. We use the technique to give another proof that parity cannot be computed by constant-depth quantum circuits without ancillae.
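The central observation is easy to check directly: a Toffoli gate with k control qubits acts non-trivially on only one of the 2^k control settings. A small basis-state simulation (an illustration, not the paper's proof technique; all names are mine):

```python
# A Toffoli gate with k controls flips its target on exactly one of the 2**k
# control settings; on every other basis input it acts as the identity.

from itertools import product

def toffoli(state, controls, target):
    """Apply a (multi-controlled) Toffoli gate to a computational basis state,
    represented as a tuple of bits."""
    bits = list(state)
    if all(bits[c] == 1 for c in controls):
        bits[target] ^= 1
    return tuple(bits)

k = 3
controls, target = list(range(k)), k
flipped = sum(
    toffoli(ctrl + (0,), controls, target) != ctrl + (0,)
    for ctrl in product([0, 1], repeat=k)
)
assert flipped == 1   # only the all-ones control setting is affected
```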
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for precise acts. Each context of action may require a different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain.
Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain.
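The hierarchy-inference step can be illustrated in miniature: if every object labeled A is also labeled B, then A plausibly lies below B in the hidden hierarchy. The function below is an illustrative toy, not the ARTMAP network itself.

```python
# Toy version of the hierarchy-inference idea: sources are assumed reliable but
# differently scoped, so subset relations among co-occurring labels reveal the
# hidden class hierarchy. Function name and data are illustrative.

def infer_hierarchy(labelings):
    """From one label set per object, infer 'A lies under B' whenever every
    object labeled A is also labeled B."""
    labels = set().union(*labelings)
    under = {}
    for a in labels:
        holders = [s for s in labelings if a in s]
        under[a] = {b for b in labels - {a} if all(b in s for s in holders)}
    return under

observations = [
    {"car", "vehicle", "man-made"},
    {"truck", "vehicle", "man-made"},
    {"car", "vehicle", "man-made"},
    {"building", "man-made"},
]
h = infer_hierarchy(observations)
assert h["car"] == {"vehicle", "man-made"}   # car lies under vehicle and man-made
assert h["vehicle"] == {"man-made"}
assert h["man-made"] == set()
```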
Abstract:
When tragedy strikes a group, only some group members characteristically rush to the aid of the victims. What motivates the altruism of these exceptional individuals? Here, we provide one set of answers based on data collected before and shortly after the 15 April 2013, Boston Marathon bombings. The results of three studies indicated that Americans who were strongly “fused” with their country were especially inclined to provide various forms of support to the bombing victims. Moreover, the degree to which participants reported perceiving fellow Americans as psychological kin statistically mediated links between fusion and pro-group outcomes. Together, these findings shed new light on relationships between personal and group identity, cognitive representations of group members, and personally costly, pro-group actions.
Abstract:
A committee of the Mars Exploration Program Analysis Group (MEPAG) has reviewed and updated the description of Special Regions on Mars as places where terrestrial organisms might replicate (per the COSPAR Planetary Protection Policy). This review and update was conducted by an international team (SR-SAG2) drawn from both the biological science and Mars exploration communities, focused on understanding when and where Special Regions could occur. The study applied recently available data about martian environments and about terrestrial organisms, building on a previous analysis of Mars Special Regions (2006) undertaken by a similar team. Since then, a new body of highly relevant information has been generated from the Mars Reconnaissance Orbiter (launched in 2005) and Phoenix (2007) and data from Mars Express and the twin Mars Exploration Rovers (all 2003). Results have also been gleaned from the Mars Science Laboratory (launched in 2011). In addition to Mars data, there is a considerable body of new data regarding the known environmental limits to life on Earth—including the potential for terrestrial microbial life to survive and replicate under martian environmental conditions. The SR-SAG2 analysis has included an examination of new Mars models relevant to natural environmental variation in water activity and temperature; a review and reconsideration of the current parameters used to define Special Regions; and updated maps and descriptions of the martian environments recommended for treatment as “Uncertain” or “Special” as natural features or those potentially formed by the influence of future landed spacecraft. Significant changes in our knowledge of the capabilities of terrestrial organisms and the existence of possibly habitable martian environments have led to a new appreciation of where Mars Special Regions may be identified and protected. 
The SR-SAG2 also considered the impact of Special Regions on potential future human missions to Mars, both as locations of potential resources and as places that should not be inadvertently contaminated by human activity. Key Words: Martian environments—Mars astrobiology—Extreme environment microbiology—Planetary protection—Exploration resources. Astrobiology 14, 887–968.
Abstract:
Despite recent advances in the understanding of the interplay between a dynamic physical environment and phylogeography in Europe, the origins of contemporary Irish biota remain uncertain. Current thinking is that Ireland was colonized post-glacially from southern European refugia, following the end of the last glacial maximum (LGM), some 20 000 years BP. The Leisler's bat (Nyctalus leisleri), one of the few native Irish mammal species, is widely distributed throughout Europe but, with the exception of Ireland, is generally rare and considered vulnerable. We investigate the origins and phylogeographic relationships of Irish populations in relation to those across Europe, including the closely related species N. azoreum. We use a combination of approaches, including mitochondrial and nuclear DNA markers, in addition to approximate Bayesian computation and palaeo-climatic species distribution modelling. Molecular analyses revealed two distinct and diverse European mitochondrial DNA lineages, which probably diverged in separate glacial refugia. A western lineage, restricted to Ireland, Britain and the Azores, comprises Irish and British N. leisleri and N. azoreum specimens; an eastern lineage is distributed throughout mainland Europe. Palaeo-climatic projections indicate suitable habitats during the LGM, including known glacial refugia, in addition to potential novel cryptic refugia along the western fringe of Europe. These results may be applicable to populations of many species.