927 results for graphics processor
Abstract:
Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
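The mapping from map/LiDAR object class and vegetation height to a spatially-distributed friction coefficient can be sketched as follows. The Manning's n values and the height-dependent rule are illustrative assumptions for a sketch, not the paper's calibrated coefficients:

```python
def friction_coefficient(object_class: str, veg_height_m: float = 0.0) -> float:
    """Assign a Manning's n friction coefficient to a DTM cell from its
    object class. All numeric values are illustrative assumptions."""
    base_n = {"road": 0.02, "building": 0.03, "bare_ground": 0.035}
    if object_class == "vegetation":
        # Hypothetical height-dependent rule that also covers the short
        # (<1 m) vegetation the abstract highlights.
        return min(0.10, 0.03 + 0.05 * veg_height_m)
    return base_n.get(object_class, 0.035)
```

A friction parameter map would then simply apply this per cell, with the mesh decomposed so that features sharing a class (buildings, roads, hedges) share a coefficient.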
Abstract:
Reports the factor-filtering and primality-testing of Mersenne Numbers Mp for p < 100000, the latter using the ICL 'DAP' Distributed Array Processor.
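The abstract does not name the primality test, but Mersenne numbers M_p are conventionally checked with the Lucas-Lehmer iteration; a minimal sequential sketch (the DAP would have run a parallel large-integer implementation, not this):

```python
def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test: for odd prime p, M_p = 2**p - 1 is prime
    iff s(p-2) == 0, where s(0) = 4 and s(i) = s(i-1)**2 - 2 (mod M_p)."""
    if p == 2:
        return True  # M_2 = 3 is prime; the iteration below needs p > 2
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

Factor-filtering would precede this: cheap trial division by candidate factors of the form 2kp + 1 removes most composite M_p before the expensive test is run.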
Abstract:
For those few readers who do not know, CAFS is a system developed by ICL to search through data at speeds of several million characters per second. Its full name is Content Addressable File Store Information Search Processor, CAFS-ISP or CAFS for short. It is an intelligent hardware-based searching engine, currently available with both ICL's 2966 family of computers and the recently announced Series 39, operating within the VME environment. It uses content addressing techniques to perform fast searches of data or text stored on discs: almost all fields are equally accessible as search keys. Software in the mainframe generates a search task; the CAFS hardware performs the search and returns the hit records to the mainframe. Because special hardware is used, the searching process is very much more efficient than searching performed by any software method. Various software interfaces are available which allow CAFS to be used in many different situations. CAFS can be used with existing systems without significant change. It can be used to make online enquiries of mainframe files or databases, or be driven directly from user-written high-level language programs. These interfaces are outlined in the body of the report.
Abstract:
The concept of “working” memory is traceable back to nineteenth-century theorists (Baldwin, 1894; James, 1890) but the term itself was not used until the mid-twentieth century (Miller, Galanter & Pribram, 1960). A variety of different explanatory constructs have since evolved which all make use of the working memory label (Miyake & Shah, 1999). This history is briefly reviewed and alternative formulations of working memory (as language processor, executive attention, and global workspace) are considered as potential mechanisms for cognitive change within and between individuals and between species. A means, derived from the literature on human problem-solving (Newell & Simon, 1972), of tracing memory and computational demands across a single task is described and applied to two specific examples of tool use by chimpanzees and early hominids. The examples show how specific proposals for necessary and/or sufficient computational and memory requirements can be more rigorously assessed on a task-by-task basis. General difficulties in connecting cognitive theories (arising from the observed capabilities of individuals deprived of material support) with archaeological data (primarily remnants of material culture) are discussed.
Abstract:
An information processor for rendering input data compatible with standard video recording and/or display equipment, comprising means for digitizing the input data over periods which are synchronous with the fields of a standard video signal, a store adapted to store the digitized data and release stored digitized data in correspondence with the line scan of a standard video monitor, the store having two halves which correspond to the interlaced fields of a standard video signal and being so arranged that one half is filled while the other is emptied, and means for converting the released stored digitized data into video luminance signals. The input signals may be in digital or analogue form. A second stage which reconstitutes the recorded data is also described.
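The two-half store described above is a ping-pong (double) buffer: one half fills while the other empties, with roles swapping at each field boundary. The class below is an illustrative reconstruction of that scheme, with hypothetical names and sizes:

```python
class PingPongStore:
    """Sketch of the patent's two-half store: one half is filled with
    digitized samples while the other is read out in step with the video
    line scan. Names and sizes are hypothetical."""

    def __init__(self, half_size: int):
        self.halves = [[0] * half_size, [0] * half_size]
        self.fill_half = 0  # index of the half currently being filled

    def swap(self):
        """Exchange the roles of the two halves at each field boundary."""
        self.fill_half ^= 1

    @property
    def read_half(self) -> int:
        """Index of the half currently being emptied to the video monitor."""
        return self.fill_half ^ 1
```

Because the two halves map onto the two interlaced fields, the swap naturally happens once per field of the standard video signal.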
Abstract:
The principles of operation of an experimental prototype instrument known as J-SCAN are described, along with the derivation of formulae for the rapid calculation of normalized impedances; the structure of the instrument; relevant probe design parameters; digital quantization errors; and approaches for the optimization of single-frequency operation. An eddy current probe is used as the inductance element of a passive tuned circuit which is repeatedly excited with short impulses. Each impulse excites an oscillation which is subject to decay dependent upon the values of the tuned-circuit components: resistance, inductance and capacitance. Changing conditions under the probe that affect the resistance and inductance of this circuit will thus be detected through changes in the transient response. These changes in transient response, oscillation frequency and rate of decay, are digitized, and normalized values for probe resistance and inductance changes are then calculated immediately in a microprocessor. This approach, coupled with a minimum of analogue processing and a maximum of digital processing, has advantages over conventional approaches to eddy current instruments: in particular, the absence of an out-of-balance condition, and the flexibility and stability of digital data processing.
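The paper's formulae are not reproduced in the abstract, but for a series RLC tuned circuit the standard relations alpha = R/(2L) and omega0^2 = 1/(LC) = omega^2 + alpha^2 allow R and L to be recovered from the measured decay rate and damped oscillation frequency once C is known; a sketch under that series-RLC assumption:

```python
def rl_from_transient(alpha: float, omega: float, C: float):
    """Recover R and L of a tuned circuit from its transient response,
    assuming a series RLC model.
    alpha: measured decay rate (1/s)
    omega: measured damped angular frequency (rad/s)
    C:     known capacitance (F)
    Uses omega0**2 = 1/(L*C) = omega**2 + alpha**2 and alpha = R/(2*L)."""
    omega0_sq = omega**2 + alpha**2   # undamped resonance, squared
    L = 1.0 / (omega0_sq * C)
    R = 2.0 * alpha * L
    return R, L
```

This mirrors the instrument's principle: digitize alpha and omega from the decaying oscillation, then compute normalized resistance and inductance changes immediately in the microprocessor.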
Abstract:
Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this ‘climate model structural uncertainty’ is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately address this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. Here, we outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk of water resource stress will increase with climate change.
Abstract:
This paper describes and analyses the experience of designing, installing and evaluating a farmer-usable touch screen information kiosk on cattle health in a veterinary institution in Pondicherry. The contents of the kiosk were prepared based on identified demands for information on cattle health, arrived at through various stakeholder meetings. Information on the cattle diseases and conditions affecting the livelihoods of the poor was provided through graphics, text and audio back-up, keeping in mind the needs of landless and illiterate poor cattle owners. A methodology for kiosk evaluation based on feedback obtained from the kiosk facilitator, critical group reflection and individual users was formulated. The formative evaluation reveals the potential strength this ICT has in transferring information to cattle owners in a service delivery centre. Such information is vital in preventing diseases and helps cattle owners to present and treat their animals at an early stage of a disease condition. This in turn helps prevent direct and indirect losses to the cattle owners. The study reveals how an information kiosk installed at a government institution as a freely accessible source of information to all farmers, irrespective of their class and caste, can help in the transfer of information among poor cattle owners, provided periodic updating, interactivity and communication variability are taken care of. Being in the veterinary centre, the kiosk helps stimulate dialogue and facilitates demand for services based on the information provided by the kiosk screens.
Abstract:
Time resolved studies of silylene, SiH2, generated by the 193 nm laser flash photolysis of phenylsilane, have been carried out to obtain rate coefficients for its bimolecular reactions with methyl-, dimethyl- and trimethyl-silanes in the gas phase. The reactions were studied over the pressure range 3-100 Torr with SF6 as bath gas and at five temperatures in the range 300-625 K. Only slight pressure dependences were found for SiH2 + MeSiH3 (485 and 602 K) and for SiH2 + Me2SiH2 (600 K). The high pressure rate constants gave Arrhenius parameters [table not reproduced in this abstract] consistent with fast, near to collision-controlled, association processes. RRKM modelling calculations are consistent with the observed pressure dependences (and also the lack of them for SiH2 + Me3SiH). Ab initio calculations at both second order perturbation theory (MP2) and coupled cluster (CCSD(T)) levels showed the presence of weakly-bound complexes along the reaction pathways. In the case of SiH2 + MeSiH3 two complexes, with different geometries, were obtained, consistent with earlier studies of SiH2 + SiH4. These complexes were stabilised by methyl substitution in the substrate silane, but all had exceedingly low barriers to rearrangement to product disilanes. Although methyl groups in the substrate silane enhance the intrinsic SiH2 insertion rates, it is doubtful whether the intermediate complexes have a significant effect on the kinetics. A further calculation on the reaction MeSiH + SiH4 shows that methyl substitution in the silylene should have a much more significant kinetic effect (as observed in other studies).
Abstract:
The synthesis of unsaturated beta-linked C-disaccharides by the Lewis acid-mediated reaction of 3-O-acetylated glycals with monosaccharide-derived alkenes is described. Deprotection and selective hydrogenation of an exocyclic carbon-carbon double bond, in the presence of an endocyclic double bond, is also illustrated for representative targets.
Abstract:
The combined effect of pressure and temperature on the rate of gelatinisation of starch present in Thai glutinous rice was investigated. Pressure was found to initiate gelatinisation when its value exceeded 200 MPa at ambient temperature. On the other hand, complete gelatinisation was observed at 500 and 600 MPa at 70 degrees C, when the rice was soaked in water under these conditions for 120 min. A first-order kinetic model describing the rate of gelatinisation was developed to estimate the values of the rate constants as a function of pressure and temperature in the range 0.1-600 MPa and 20-70 degrees C. The model, based on the well-known Arrhenius and Eyring equations, assumed the form k = k0 exp(-Ea/RT) exp(-Delta V (P - P0)/RT). The constants k0, Ea and Delta V were found to take the values 31.19 s^-1, 37.89 kJ mol^-1 and -9.98 cm^3 mol^-1, respectively. It was further noted that the extent of gelatinisation occurring at any time, temperature and pressure could be exclusively correlated with the grain moisture content. (c) 2006 Elsevier Ltd. All rights reserved.
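A hedged numerical sketch of such a combined Arrhenius-Eyring model, assuming the usual product of temperature and pressure exponentials and a reference pressure of 0.1 MPa (the paper's exact functional form and reference pressure are not reproduced in the abstract):

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def gelatinisation_rate(T_K: float, P_MPa: float,
                        k0: float = 31.19,        # s^-1 (from the abstract)
                        Ea: float = 37.89e3,      # J mol^-1 (37.89 kJ mol^-1)
                        dV: float = -9.98e-6,     # m^3 mol^-1 (-9.98 cm^3 mol^-1)
                        P_ref_MPa: float = 0.1):  # assumed reference pressure
    """Assumed model: k = k0 * exp(-Ea/(R*T)) * exp(-dV*(P - P_ref)/(R*T))."""
    dP_Pa = (P_MPa - P_ref_MPa) * 1e6  # MPa -> Pa, relative to reference
    return k0 * math.exp(-Ea / (R_GAS * T_K)) * math.exp(-dV * dP_Pa / (R_GAS * T_K))
```

With the negative activation volume reported, the model reproduces the qualitative finding: the rate constant increases with both temperature and pressure.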
Abstract:
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone. The methods are also limited to optical see-through HMDs. Building on our existing HMD calibration method [1], we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in various positions. The locations of image features on the calibration object are then re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the display’s intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner in both see-through and non-see-through modes and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors and involves no error-prone human measurements.
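The intrinsic parameters named above define a pinhole projection, which is the step behind any reprojection-error measurement; a minimal sketch (not the authors' code, and the explicit principal point (cx, cy) is an assumption of the sketch):

```python
def reproject(f_px: float, cx: float, cy: float,
              X: float, Y: float, Z: float) -> tuple:
    """Pinhole projection of a 3-D point (camera coordinates, Z > 0)
    to pixel coordinates, given focal length f_px (pixels) and
    principal point (cx, cy)."""
    return (f_px * X / Z + cx, f_px * Y / Z + cy)
```

Reprojection error is then simply the pixel distance between such a projected feature and the feature actually detected in the captured image.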
Abstract:
Frequency recognition is an important task in many engineering fields such as audio signal processing and telecommunications engineering, for example in applications like Dual-Tone Multi-Frequency (DTMF) detection or the recognition of the carrier frequency of a Global Positioning System (GPS) signal. This paper presents results of investigations on several common Fourier Transform-based frequency recognition algorithms implemented in real time on a Texas Instruments (TI) TMS320C6713 Digital Signal Processor (DSP) core. In addition, suitable metrics are evaluated in order to ascertain which of the selected algorithms is appropriate for audio signal processing.
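The abstract does not list the algorithms compared, but the Goertzel algorithm is the usual Fourier-based choice when only a handful of known tones (such as the eight DTMF frequencies) must be detected, since it evaluates a single DFT bin far more cheaply than a full FFT. A sketch in Python (the paper's implementation targets a C6713 DSP, presumably in C):

```python
import math

def goertzel_power(samples, target_freq: float, sample_rate: float) -> float:
    """Goertzel algorithm: squared magnitude of the DFT bin nearest
    target_freq, computed with one multiply-accumulate per sample."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest integer bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
```

DTMF detection then amounts to running this filter once per candidate frequency and picking the row/column pair with the largest powers; N = 205 at 8 kHz is a common block size because it places the DTMF tones near bin centres.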
Abstract:
Ever since man invented writing he has used text to store and distribute his thoughts. With the advent of computers and the Internet, the delivery of these messages has become almost instant. Textual conversations can now be held regardless of location or distance. Advances in computational power for 3D graphics are enabling Virtual Environments (VEs) within which users can become increasingly immersed. By opening these environments to other users, initially by sharing these text conversation channels, we aim to extend the immersed experience into an online virtual community. This paper examines work that brings textual communications into the VE, enabling interaction between the real and virtual worlds.