904 results for 291605 Processor Architectures
Abstract:
Recent years have seen an increasing number of academics attempt to write more process-oriented and 'non-representational' accounts of landscape. Drawing upon this literature, I discuss a number of the movements, materialities, and practices entailed in constructing England's M1 motorway in the late 1950s. The performances, movements and durability of a diverse range of things (including earth-moving machines, public relations brochures, maps, helicopters, senior engineers, aggregate and labourers) are shown to be important to the construction and ordering of the motorway and the spaces of the construction company in different times and spaces, with people's experiences or understandings of construction, both now and in the past, emerging through memories, talk and embodied encounters with architectures, texts and artefacts which are assembled, circulated and/or archived. Aerial perspectives assumed a prominent role in depictions of construction, while journalists and engineers frequently drew upon a military vocabulary and alluded to the military nature of the project when discussing the motorway. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
These notes have been issued on a small scale in 1983 and 1987 and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons, with some little assistance from a CRAY machine, cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures motivated by the open-ended promise of parallelism are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties, while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point.
The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted and to suggest additional material associated with this subject.
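The factorisation work described above typically begins with trial division over the special form of Mersenne factors: any prime factor q of Mp (p an odd prime) satisfies q = 2kp + 1 and q ≡ ±1 (mod 8). A minimal illustrative sketch in Python (the k_max bound is an arbitrary cut-off, not anything from the notes):

```python
def small_factor(p, k_max=100000):
    """Trial-factor Mp = 2**p - 1, using the fact that any prime factor q
    of Mp (p an odd prime) has the form q = 2*k*p + 1 with q % 8 in (1, 7)."""
    m = (1 << p) - 1
    for k in range(1, k_max + 1):
        q = 2 * k * p + 1
        if q * q > m:
            return None            # no factor exists below sqrt(Mp)
        # pow(2, p, q) == 1 holds exactly when q divides 2**p - 1
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
    return None

# M11 = 2047 = 23 * 89, and 23 = 2*1*11 + 1 is found at the first candidate.
```

Only candidates passing the cheap congruence filter incur the modular exponentiation, which is what makes this an effective first pass before the heavier sieve methods.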
Abstract:
Reports the factor-filtering and primality-testing of Mersenne Numbers Mp for p < 100000, the latter using the ICL 'DAP' Distributed Array Processor.
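The abstract does not name the method run on the DAP, but the standard algorithm for Mp primality testing is the Lucas-Lehmer test: with s_0 = 4 and s_(k+1) = s_k**2 - 2, Mp is prime exactly when s_(p-2) ≡ 0 (mod Mp). A generic sketch:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for the Mersenne number Mp = 2**p - 1 (p an odd prime).
    Returns True iff Mp is prime."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# M7 = 127 is prime; M11 = 2047 = 23 * 89 is not.
```

Each of the p - 2 steps is a single modular squaring, which is why the test parallelises well on array processors of the DAP kind.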
Abstract:
For those few readers who do not know, CAFS is a system developed by ICL to search through data at speeds of several million characters per second. Its full name is Content Addressable File Store Information Search Processor, CAFS-ISP or CAFS for short. It is an intelligent hardware-based searching engine, currently available with both ICL's 2966 family of computers and the recently announced Series 39, operating within the VME environment. It uses content addressing techniques to perform fast searches of data or text stored on discs: almost all fields are equally accessible as search keys. Software in the mainframe generates a search task; the CAFS hardware performs the search, and returns the hit records to the mainframe. Because special hardware is used, the searching process is very much more efficient than searching performed by any software method. Various software interfaces are available which allow CAFS to be used in many different situations. CAFS can be used with existing systems without significant change. It can be used to make online enquiries of mainframe files or databases, or directly from user-written high-level language programs. These interfaces are outlined in the body of the report.
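CAFS itself is hardware, but the search cycle the report describes (the mainframe supplies a search task, the engine streams every record past it and returns only the hits) can be sketched as a software analogue; the record layout and names here are purely illustrative:

```python
def cafs_search(records, predicate):
    """Stream all records past a selection predicate and return the hits,
    mimicking a content-addressed scan: no index, any field is a search key."""
    return [rec for rec in records if predicate(rec)]

# Hypothetical personnel file.
staff = [
    {"name": "Hartree", "dept": "research", "grade": 7},
    {"name": "Wilkes",  "dept": "research", "grade": 9},
    {"name": "Turing",  "dept": "support",  "grade": 8},
]

# Any field is equally usable as a search key.
hits = cafs_search(staff, lambda r: r["dept"] == "research" and r["grade"] > 8)
# hits == [{"name": "Wilkes", "dept": "research", "grade": 9}]
```

The point of the hardware is that this full scan runs at disc transfer speed, so the absence of indexes costs nothing.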
Abstract:
We apply a new X-ray scattering approach to the study of melt-spun filaments of tri-block and random terpolymers prepared from lactide, caprolactone and glycolide. Both terpolymers contain random sequences; in both cases the overall fraction of lactide units is similar to 0.7, and C-13 and H-1 NMR show the lactide sequence length to be similar to 9-10. A novel representation of the X-ray fibre pattern as a series of spherical harmonic functions considerably facilitates the comparison of the scattering from the minority crystalline phase with hot-drawn fibres prepared from the poly(L-lactide) homopolymer. Although the fibres exhibit rather disordered structures, we show that the crystal structure is equivalent to that displayed by poly(L-lactide) for both the block and random terpolymers. There are variations in the development of a two-phase structure which reflect the differences in the chain architectures. There is evidence that the random terpolymer includes non-lactide units into the crystal interfaces to achieve a well-defined two-phase structure. (c) 2005 Published by Elsevier Ltd.
Abstract:
The concept of “working” memory is traceable back to nineteenth century theorists (Baldwin, 1894; James, 1890) but the term itself was not used until the mid-twentieth century (Miller, Galanter & Pribram, 1960). A variety of different explanatory constructs have since evolved which all make use of the working memory label (Miyake & Shah, 1999). This history is briefly reviewed and alternative formulations of working memory (as language-processor, executive attention, and global workspace) are considered as potential mechanisms for cognitive change within and between individuals and between species. A means, derived from the literature on human problem-solving (Newell & Simon, 1972), of tracing memory and computational demands across a single task is described and applied to two specific examples of tool-use by chimpanzees and early hominids. The examples show how specific proposals for necessary and/or sufficient computational and memory requirements can be more rigorously assessed on a task-by-task basis. General difficulties in connecting cognitive theories (arising from the observed capabilities of individuals deprived of material support) with archaeological data (primarily remnants of material culture) are discussed.
Abstract:
An information processor for rendering input data compatible with standard video recording and/or display equipment, comprising means for digitizing the input data over periods which are synchronous with the fields of a standard video signal, a store adapted to store the digitized data and release stored digitized data in correspondence with the line scan of a standard video monitor, the store having two halves which correspond to the interlaced fields of a standard video signal and being so arranged that one half is filled while the other is emptied, and means for converting the released stored digitized data into video luminance signals. The input signals may be in digital or analogue form. A second stage which reconstitutes the recorded data is also described.
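The two-half store described above is a classic ping-pong (double) buffer: one half is filled with digitised samples for the current field while the other is drained in line-scan order, and the halves swap roles at each field boundary. A minimal sketch, with sizes and sample values purely illustrative:

```python
class PingPongStore:
    """Two-half store: one half fills while the other empties, swapping at
    each field sync, as in the interlaced-field scheme described above."""
    def __init__(self, half_size):
        self.halves = [[None] * half_size, [None] * half_size]
        self.fill = 0                      # index of the half being filled

    def write(self, samples):
        self.halves[self.fill][:len(samples)] = samples

    def field_sync(self):
        self.fill ^= 1                     # swap roles at the field boundary

    def read(self):
        return list(self.halves[self.fill ^ 1])   # drain the other half

store = PingPongStore(4)
store.write([1, 2, 3, 4])    # digitise field 1 into half A
store.field_sync()           # interlace boundary: swap
store.write([5, 6, 7, 8])    # digitise field 2 into half B while A is read out
# store.read() -> [1, 2, 3, 4]
```

The swap guarantees the read-out side always sees a complete, stable field, which is what keeps the output synchronous with the video line scan.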
Abstract:
The principles of operation of an experimental prototype instrument known as J-SCAN are described, along with the derivation of formulae for the rapid calculation of normalized impedances; the structure of the instrument; relevant probe design parameters; digital quantization errors; and approaches for the optimization of single-frequency operation. An eddy current probe is used as the inductance element of a passive tuned circuit which is repeatedly excited with short impulses. Each impulse excites an oscillation which is subject to decay dependent upon the values of the tuned-circuit components: resistance, inductance and capacitance. Changing conditions under the probe that affect the resistance and inductance of this circuit will thus be detected through changes in the transient response. These changes in transient response, oscillation frequency and rate of decay, are digitized, and normalized values for probe resistance and inductance changes are then calculated immediately in a microprocessor. This approach, coupled with a minimum of analogue processing and a maximum of digital processing, has advantages compared with conventional approaches to eddy current instruments: in particular, the absence of an out-of-balance condition, and the flexibility and stability of digital data processing.
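The abstract does not give J-SCAN's formulae, but for a standard series-RLC transient the decay rate is alpha = R/(2L) and the damped frequency satisfies omega_d**2 = 1/(L*C) - alpha**2, so measuring frequency and decay (with the tuning capacitance known) recovers L and R. A sketch of that recovery step under those textbook assumptions:

```python
import math

def probe_from_transient(f_d, alpha, C):
    """Recover probe inductance L and resistance R from the measured damped
    frequency f_d (Hz) and decay rate alpha (1/s), given the known tuning
    capacitance C (F), using alpha = R/(2L), omega_d^2 = 1/(L*C) - alpha^2."""
    omega_d = 2 * math.pi * f_d
    omega0_sq = omega_d ** 2 + alpha ** 2   # undamped resonance, squared
    L = 1.0 / (C * omega0_sq)
    R = 2.0 * L * alpha
    return L, R

# Round trip with an illustrative 1 mH / 10 ohm probe tuned by 10 nF:
L, R, C = 1e-3, 10.0, 10e-9
alpha = R / (2 * L)
omega_d = math.sqrt(1 / (L * C) - alpha ** 2)
L2, R2 = probe_from_transient(omega_d / (2 * math.pi), alpha, C)
# L2 ~= 1e-3, R2 ~= 10.0
```

Since both measured quantities are available digitally from the sampled transient, the two-line calculation above is all the microprocessor needs per impulse.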
Abstract:
Recently, two approaches have been introduced that distribute the molecular fragment mining problem. The first approach applies a master/worker topology; the second approach, a completely distributed peer-to-peer system, solves the scalability problem due to the bottleneck at the master node. However, in many real-world scenarios the participating computing nodes cannot communicate directly due to administrative policies such as security restrictions. Thus, potential computing power is not accessible to accelerate the mining run. To solve this shortcoming, this work introduces a hierarchical topology of computing resources, which distributes the management over several levels and adapts to the natural structure of those multi-domain architectures. The most important aspect is the load balancing scheme, which has been designed and optimized for the hierarchical structure. The approach allows dynamic aggregation of heterogeneous computing resources and is applied to wide area network scenarios.
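The paper's load balancing scheme is not specified in the abstract; one plausible shape for hierarchical distribution, sketched here with an entirely hypothetical tree structure, is for each manager to split a batch of mining tasks among its children in proportion to the total computing capacity of each child's subtree:

```python
def capacity(node):
    """Total computing capacity of a node's subtree."""
    return node["power"] + sum(capacity(c) for c in node.get("children", []))

def distribute(node, tasks):
    """Assign `tasks` units of work across the subtree, proportionally to
    capacity, level by level (each manager only talks to its own children)."""
    kids = node.get("children", [])
    node["load"] = round(tasks * node["power"] / capacity(node))
    remaining = tasks - node["load"]
    caps = [capacity(c) for c in kids]
    for child, cap in zip(kids, caps):
        distribute(child, round(remaining * cap / sum(caps)))

# Illustrative two-level domain: a manager (power 10) with two workers.
root = {"power": 10, "children": [{"power": 20}, {"power": 30}]}
distribute(root, 60)
# root["load"] == 10; children get 20 and 30 tasks respectively.
```

Because each node only communicates with its direct children, the scheme respects the multi-domain restriction that nodes in different domains cannot talk to each other directly.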
Abstract:
This paper focuses on active network applications and in particular on the possible interactions among these applications. Active networking is a very promising research field which has been developed recently, and which poses several interesting challenges to network designers. A number of proposals for efficient active network architectures are already to be found in the literature. However, how two or more active network applications may interact has not been investigated so far. In this work, we consider a number of applications that have been designed to exploit the main features of active networks and we discuss the main benefits that these applications may derive from them. Then, we introduce some forms of interaction, including interference and communications among applications, and identify the components of an active network architecture that are needed to support these forms of interaction. We conclude by presenting a brief example of an active network application exploiting the concept of interaction.
Abstract:
New conceptual ideas on network architectures have been proposed in the recent past. Current store-and-forward routers are replaced by active intermediate systems, which are able to perform computations on transient packets, in a way that is very helpful for developing and deploying new protocols in a short time. This paper introduces a new routing algorithm, based on a congestion metric and inspired by the behavior of ants in nature. The use of the Active Networks paradigm associated with a cooperative learning environment produces a robust, decentralized algorithm capable of adapting quickly to changing conditions.
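The core mechanism of ant-inspired routing with a congestion metric can be sketched deterministically (link names, congestion values and the 1/(1 + congestion) deposit rule are illustrative, not taken from the paper): every link's pheromone trail evaporates each round, ants deposit trail in inverse proportion to the congestion met, and next hops are chosen in proportion to trail strength:

```python
def update_trails(pheromone, congestion, rho=0.1):
    """One round of trail maintenance: evaporate every trail by factor rho,
    deposit 1/(1 + congestion) on each link, and return the normalised
    next-hop probabilities derived from the resulting trail strengths."""
    for link in pheromone:
        pheromone[link] = (1 - rho) * pheromone[link] + 1.0 / (1.0 + congestion[link])
    total = sum(pheromone.values())
    return {link: w / total for link, w in pheromone.items()}

pheromone  = {("A", "B"): 1.0, ("A", "C"): 1.0}   # equal trails to start
congestion = {("A", "B"): 0.9, ("A", "C"): 0.1}   # link A-B is heavily loaded

for _ in range(100):
    probs = update_trails(pheromone, congestion)

# The lightly loaded route A-C ends up with the higher routing probability.
```

In a real ant-based scheme each forward ant samples its next hop stochastically from these probabilities, which is what makes the algorithm decentralized and quick to adapt when congestion shifts.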
Abstract:
Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this ‘climate model structural uncertainty’ is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately address this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. Here, we outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk to water resource stress will increase with climate change.
Abstract:
BACKGROUND: The bacterial biothreat agents Burkholderia mallei and Burkholderia pseudomallei are the cause of glanders and melioidosis, respectively. Genomic and epidemiological studies have shown that B. mallei is a recently emerged, host-restricted clone of B. pseudomallei. RESULTS: Using bacteriophage-mediated immunoscreening we identified genes expressed in vivo during experimental equine glanders infection. A family of immunodominant antigens was identified whose members share protein domain architectures with hemagglutinins and invasins. These have been designated Burkholderia Hep_Hag autotransporter (BuHA) proteins. A total of 110/207 positive clones (53%) of a B. mallei expression library screened with sera from two infected horses belonged to this family. This contrasted with 6/189 positive clones (3%) of a B. pseudomallei expression library screened with serum from 21 patients with culture-proven melioidosis. CONCLUSION: Members of the BuHA proteins are found in other Gram-negative bacteria and have been shown to have important roles related to virulence. Compared with other bacterial species, the genomes of both B. mallei and B. pseudomallei contain a relative abundance of this family of proteins. The domain structures of these proteins suggest that they function as multimeric surface proteins that modulate interactions of the cell with the host and environment. Their effect on the cellular immune response to B. mallei and their potential as diagnostics for glanders require further study.
Abstract:
Dendrimers and hyperbranched polymers are a relatively new class of materials with unique molecular architectures and dimensions in comparison to traditional linear polymers. This review details recent notable advances in the application of these new polymers in terms of the development of new polymeric delivery systems. Although comparatively young, the developing field of hyperbranched drug delivery devices is a rapidly maturing area and the key discoveries in drug-conjugate systems amongst others are highlighted. As a consequence of their ideal hyperbranched architectures, the utilisation of host-guest chemistries in dendrimers has been included within the scope of this review. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
Helices and sheets are ubiquitous in nature. However, there are also some examples of self-assembling molecules forming supramolecular helices and sheets in unnatural systems. Unlike supramolecular sheets, there are very few examples of peptide sub-units that can be used to construct supramolecular helical architectures using the backbone hydrogen-bonding functionalities of peptides. In this report we describe the design and synthesis of two single turn/bend-forming peptides (Boc-Phe-Aib-Ile-OMe 1 and Boc-Ala-Leu-Aib-OMe 2) (Aib: alpha-aminoisobutyric acid) and a series of double-turn-forming peptides (Boc-Phe-Aib-Ile-Aib-OMe 3, Boc-Leu-Aib-Gly-Aib-OMe 4 and Boc-gamma-Abu-Aib-Leu-Aib-OMe 5) (gamma-Abu: gamma-aminobutyric acid). It has been found that, in crystals, on self-assembly, single turn/bend-forming peptides form either a supramolecular sheet (peptide 1) or a supramolecular helix (peptide 2), unlike self-associating double-turn-forming peptides, which have only the option of forming supramolecular helical assemblages. (c) 2005 Elsevier Ltd. All rights reserved.