18 results for NARROW BANDWIDTH
in CentAUR: Central Archive University of Reading - UK
Abstract:
Current force feedback, haptic interface devices are generally limited to the display of low frequency, high amplitude spatial data. A typical device consists of a low impedance framework of one or more degrees-of-freedom (dof), allowing a user to explore a pre-defined workspace via an end effector such as a handle, thimble, probe or stylus. The movement of the device is then constrained using high gain positional feedback, thus reducing the apparent dof of the device and conveying the illusion of hard contact to the user. Such devices are, however, limited to a narrow bandwidth of frequencies, typically below 30 Hz, and are not well suited to the display of surface properties, such as object texture. This paper details a device to augment an existing force feedback haptic display with a vibrotactile display, thus providing a means of conveying low amplitude, high frequency spatial information about object surface properties.

1. Haptics and Haptic Interfaces

Haptics is the study of human touch and interaction with the external environment via touch. Information from the human sense of touch can be classified into two categories: cutaneous and kinesthetic. Cutaneous information is provided via the mechanoreceptive nerve endings in the glabrous skin of the human hand. It is primarily a means of relaying information regarding small-scale details in the form of skin stretch, compression and vibration.
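As a rough illustration of the control principle described above, the sketch below renders a "hard" virtual wall with a stiff spring-damper law driven by high gain positional feedback, and superimposes a low amplitude, high frequency vibrotactile ripple of the kind the proposed augmentation would display. This is not taken from the paper: the gains, the texture frequency and the velocity-scaled texture model are all invented for illustration.

```python
# Minimal sketch (not from the paper): a stiff virtual wall via high-gain
# positional feedback, plus a high-frequency vibrotactile overlay for texture.
# All gains and the texture model are illustrative assumptions.
import math

K_WALL = 2000.0       # N/m, stiffness gain; higher gain -> "harder" contact
B_WALL = 5.0          # N*s/m, damping to stabilise the contact
TEXTURE_FREQ = 250.0  # Hz, well above the ~30 Hz force-feedback bandwidth
TEXTURE_AMP = 0.3     # N, low-amplitude vibration

def wall_force(x, v, t, wall_x=0.0):
    """Force command for a wall at wall_x; x < wall_x means penetration."""
    if x >= wall_x:
        return 0.0                              # free motion: no constraint force
    penetration = wall_x - x
    f = K_WALL * penetration - B_WALL * v       # stiff spring-damper wall
    # Vibrotactile texture: small high-frequency ripple on top of the
    # low-frequency constraint force, scaled by sliding speed.
    f += TEXTURE_AMP * abs(v) * math.sin(2 * math.pi * TEXTURE_FREQ * t)
    return max(f, 0.0)                          # the wall only pushes outward
```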
Abstract:
Our latest research indicates that narrow bandpass filters of ~0.6% bandwidth (or any larger chosen width), with good performance at low temperature and in tilted and focused illumination, can be realized by using multiple cavities and multiple materials.
Abstract:
Faced by the realities of a changing climate, decision makers in a wide variety of organisations are increasingly seeking quantitative predictions of regional and local climate. An important issue for these decision makers, and for the organisations that fund climate research, is: what is the potential for climate science to deliver improvements, especially reductions in uncertainty, in such predictions? Uncertainty in climate predictions arises from three distinct sources: internal variability, model uncertainty and scenario uncertainty. Using data from a suite of climate models, we separate and quantify these sources. For predictions of changes in surface air temperature on decadal timescales and regional spatial scales, we show that uncertainty for the next few decades is dominated by sources (model uncertainty and internal variability) that are potentially reducible through progress in climate science. Furthermore, we find that model uncertainty is of greater importance than internal variability. Our findings have implications for managing adaptation to a changing climate. Because the costs of adaptation are very large, and greater uncertainty about future climate is likely to be associated with more expensive adaptation, reducing uncertainty in climate predictions is potentially of enormous economic value. We highlight the need for much more work to compare: a) the cost of various degrees of adaptation, given current levels of uncertainty; and b) the cost of new investments in climate science to reduce current levels of uncertainty. Our study also highlights the importance of targeting climate science investments on the most promising opportunities to reduce prediction uncertainty.
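A minimal sketch of the kind of variance partitioning this abstract describes, assuming projections are organised as proj[scenario][model] = list of realisations; the layout, names and numbers are hypothetical, not the paper's exact method.

```python
# Minimal sketch (assumptions, not the paper's exact method): partition
# ensemble spread at one lead time into scenario uncertainty, model
# uncertainty and internal variability.
import numpy as np

def partition_uncertainty(proj):
    scen_means, model_vars, internal = [], [], []
    for scenario, models in proj.items():
        model_means = [np.mean(runs) for runs in models.values()]
        scen_means.append(np.mean(model_means))
        model_vars.append(np.var(model_means))            # spread across models
        internal += [np.var(runs) for runs in models.values()]
    return {
        "scenario": np.var(scen_means),   # spread across emission pathways
        "model": np.mean(model_vars),     # mean spread across models
        "internal": np.mean(internal),    # mean spread across realisations
    }

# Hypothetical temperature-change projections (K), two scenarios x two models.
proj = {"A2": {"m1": [1.2, 1.4], "m2": [0.9, 1.1]},
        "B1": {"m1": [0.8, 1.0], "m2": [0.6, 0.7]}}
print(partition_uncertainty(proj))
```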
Abstract:
We separate and quantify the sources of uncertainty in projections of regional (~2,500 km) precipitation changes for the twenty-first century using the CMIP3 multi-model ensemble, allowing a direct comparison with a similar analysis for regional temperature changes. For decadal means of seasonal mean precipitation, internal variability is the dominant uncertainty for predictions of the first decade everywhere, and for many regions until the third decade ahead. Model uncertainty is generally the dominant source of uncertainty for longer lead times. Scenario uncertainty is found to be small or negligible for all regions and lead times, apart from close to the poles at the end of the century. For the global mean, model uncertainty dominates at all lead times. The signal-to-noise ratio (S/N) of the precipitation projections is highest at the poles but less than 1 almost everywhere else, and is far lower than for temperature projections. In particular, the tropics have the highest S/N for temperature, but the lowest for precipitation. We also estimate a ‘potential S/N’ by assuming that model uncertainty could be reduced to zero, and show that, for regional precipitation, the gains in S/N are fairly modest, especially for predictions of the next few decades. This finding suggests that adaptation decisions will need to be made in the context of high uncertainty concerning regional changes in precipitation. The potential to narrow uncertainty in regional temperature projections is far greater. These conclusions on S/N are for the current generation of models; the real signal may be larger or smaller than the CMIP3 multi-model mean. Also note that the S/N for extreme precipitation, which is more relevant for many climate impacts, may be larger than for the seasonal mean precipitation considered here.
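The 'potential S/N' idea above can be made concrete with a short sketch, assuming S/N is the projected signal divided by the square root of the total variance; the function name and all numbers below are illustrative, not the paper's data.

```python
# Minimal sketch under stated assumptions: S/N of a projection, and the
# 'potential S/N' obtained by setting model uncertainty to zero while
# keeping internal variability and scenario uncertainty.
import numpy as np

def signal_to_noise(signal, var_model, var_internal, var_scenario):
    total = var_model + var_internal + var_scenario
    sn = signal / np.sqrt(total)
    potential = signal / np.sqrt(var_internal + var_scenario)  # var_model -> 0
    return sn, potential

# Illustrative numbers only: a 5% precipitation change against three
# variance components (in %^2).
sn, potential = signal_to_noise(5.0, var_model=30.0, var_internal=20.0,
                                var_scenario=2.0)
print(f"S/N = {sn:.2f}, potential S/N = {potential:.2f}")
```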
Abstract:
The principles of operation of an experimental prototype instrument known as J-SCAN are described, along with the derivation of formulae for the rapid calculation of normalized impedances; the structure of the instrument; relevant probe design parameters; digital quantization errors; and approaches for the optimization of single frequency operation. An eddy current probe is used as the inductance element of a passive tuned circuit which is repeatedly excited with short impulses. Each impulse excites an oscillation which is subject to decay dependent upon the values of the tuned-circuit components: resistance, inductance and capacitance. Changing conditions under the probe that affect the resistance and inductance of this circuit will thus be detected through changes in the transient response. These changes in transient response, oscillation frequency and rate of decay, are digitized, and normalized values for probe resistance and inductance changes are then calculated immediately in a microprocessor. This approach, coupled with a minimum of analogue processing and a maximum of digital processing, has advantages over conventional eddy current instruments: in particular, the absence of an out-of-balance condition, and the flexibility and stability of digital data processing.
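The calculation the abstract alludes to can be sketched as follows, assuming the probe and capacitor form a series RLC circuit whose ring-down decays as exp(-a*t) at damped frequency f_d, with a = R/(2L) and w0^2 = 1/(L*C) = w_d^2 + a^2. This is a textbook reconstruction, not the J-SCAN firmware; all values are illustrative.

```python
# Minimal sketch (not the J-SCAN implementation): recover probe resistance R
# and inductance L from the digitised transient of an impulse-excited tuned
# circuit with known capacitance C.
import math

def probe_rl(f_d, decay_rate, C):
    """f_d: measured oscillation frequency (Hz); decay_rate: a (1/s); C: farads."""
    w_d = 2 * math.pi * f_d
    w0_sq = w_d ** 2 + decay_rate ** 2   # w0^2 = w_d^2 + a^2 = 1/(L*C)
    L = 1.0 / (C * w0_sq)
    R = 2.0 * decay_rate * L             # a = R/(2L)
    return R, L

# Illustrative values: 100 kHz ring-down decaying at 2e4 1/s, C = 10 nF.
R, L = probe_rl(f_d=100e3, decay_rate=2e4, C=10e-9)
print(f"R = {R:.2f} ohm, L = {L * 1e6:.1f} uH")
```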
Abstract:
Future stratospheric ozone concentrations will be determined both by changes in the concentration of ozone depleting substances (ODSs) and by changes in stratospheric and tropospheric climate, including those caused by changes in anthropogenic greenhouse gases (GHGs). Since future economic development pathways and the resultant emissions of GHGs are uncertain, anthropogenic climate change could be a significant source of uncertainty for future projections of stratospheric ozone. In this pilot study, using an "ensemble of opportunity" of chemistry-climate model (CCM) simulations, the contribution of scenario uncertainty from different plausible emissions pathways for ODSs and GHGs to future ozone projections is quantified relative to the contribution from model uncertainty and internal variability of the chemistry-climate system. For both the global, annual mean ozone concentration and for ozone in specific geographical regions, differences between CCMs are the dominant source of uncertainty for the first two-thirds of the 21st century, up to and beyond the time when ozone concentrations return to 1980 values. In the last third of the 21st century, depending upon the set of greenhouse gas scenarios used, scenario uncertainty can be the dominant contributor. This result suggests that investment in chemistry-climate modelling is likely to continue to refine projections of stratospheric ozone and estimates of the return of stratospheric ozone concentrations to pre-1980 levels.
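The "dominant contributor" comparison above amounts to asking, at each lead time, which variance component is largest; a minimal sketch is given below. The per-decade variance series are invented for illustration and are not the ensemble's values.

```python
# Minimal sketch with invented numbers: find the first decade in which
# scenario uncertainty becomes the dominant variance component, as the
# abstract describes for the late 21st century.
import numpy as np

years = np.arange(2010, 2100, 10)
var_model    = np.linspace(9.0, 4.0, len(years))   # shrinks as models converge
var_internal = np.full(len(years), 1.0)            # roughly constant
var_scenario = np.linspace(0.1, 8.0, len(years))   # pathways diverge late century

dominant = np.where((var_scenario > var_model) &
                    (var_scenario > var_internal))[0]
if dominant.size:
    print("Scenario uncertainty dominant from", years[dominant[0]])
```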
Abstract:
It is well understood that, for haptic interaction, free motion performance and closed-loop constrained motion performance have conflicting requirements. The difficulties of both conditions are compounded when an increased workspace is required, as most solutions result in a reduction of achievable impedance and bandwidth. A method of chaining devices together to increase workspace without adverse effect on performance is described and analysed. The method is then applied to a prototype, colloquially known as 'The Flying Phantom', and shown to provide high-bandwidth, low-impedance interaction over the full range of horizontal movement across the front of a human user.
Abstract:
This paper describes the design and manufacture of a set of precision cooled (210 K) narrow-bandpass filters for the infrared imager and sounder on the Indian Space Research Organisation (ISRO) INSAT-3D meteorological satellite. We discuss the basis for the choice of multilayer coating designs and materials for 21 differing filter channels, together with their temperature dependence, thin film deposition technologies, substrate metrology, and environmental durability performance. © 2008 Optical Society of America.
Abstract:
With continually increasing demands for improvements to atmospheric and planetary remote-sensing instrumentation, for both high optical system performance and extended operational lifetimes, an investigation to assess the effects of prolonged exposure to the space environment on a series of infrared interference filters and optical materials was carried out on the NASA LDEF mission. The NASA Long Duration Exposure Facility (LDEF) was launched by the Space Shuttle to transport various science and technology experiments both to and from space, providing investigators with the opportunity to study the effects of the space environment on materials and systems used in space-flight applications. Preliminary results to be discussed consist of transmission measurements obtained and processed from an infrared spectrophotometer both before (1983) and after (1990) exposure, compared with unexposed control specimens, together with the results of detailed microscopic and general visual examinations performed on the experiment. The principal lead telluride (PbTe) and zinc sulphide (ZnS) based multilayer filters selected for this preliminary investigation consist of: an 8-12 µm low pass edge filter, a 10.6 µm 2.5% half bandwidth (HBW) double half-wave narrow bandpass filter, and a 10% HBW triple half-wave wide bandpass filter at 15 µm. Optical substrates of MgF2 and KRS-5 (TlBrI) will also be discussed.
Abstract:
By using simulation methods, we studied the adsorption of binary CO2-CH4 mixtures on various CH4-preadsorbed carbonaceous materials (e.g., triply periodic carbon minimal surfaces, slit-shaped carbon micropores, and Harris's virtual porous carbons) at 293 K. Regardless of the different micropore geometries, a two-stage mechanism of CH4 displacement from carbon nanospaces by coadsorbed CO2 is proposed. In the first stage, coadsorbed CO2 molecules enhance the adsorbed amount of CH4. In the second stage, the stronger affinity of CO2 to flat/curved graphitic surfaces, as well as CO2-CO2 interactions, causes the displacement of CH4 molecules from the carbonaceous materials. The operating conditions for CO2-induced cleaning of the adsorbed phase of its CH4 component depend strongly on the size of the carbon micropores but, in general, the enhanced adsorption field in narrow carbon ultramicropores facilitates the nonreactive displacement of CH4 by coadsorbed CO2. This is because in narrow carbon ultramicropores the equilibrium CO2/CH4 selectivity (i.e., preferential adsorption toward CO2) increases significantly. The adsorption field in wider micropores (i.e., the overall surface energy) is very similar for both CO2 and CH4, which decreases the preferential adsorption of CO2. This suppresses the displacement of CH4 by coadsorbed CO2 and assists further adsorption of CH4 from the bulk mixture (i.e., CO2/CH4 mixing in the adsorbed phase).
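The equilibrium selectivity invoked above has a standard definition, the ratio of adsorbed-phase to bulk-phase composition ratios; the sketch below computes it under that assumption, with mole fractions invented for illustration.

```python
# Minimal sketch under stated assumptions: equilibrium CO2/CH4 selectivity
# from adsorbed- and bulk-phase mole fractions, the quantity the abstract
# says rises sharply in narrow ultramicropores. Values are illustrative.
def co2_ch4_selectivity(x_co2, x_ch4, y_co2, y_ch4):
    """x: adsorbed-phase mole fractions; y: bulk-phase mole fractions."""
    return (x_co2 / x_ch4) / (y_co2 / y_ch4)

# Equimolar bulk mixture; adsorbed phase strongly enriched in CO2, as would
# be expected in a narrow ultramicropore.
print(co2_ch4_selectivity(x_co2=0.8, x_ch4=0.2, y_co2=0.5, y_ch4=0.5))  # 4.0
```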
Abstract:
Discrete Fourier transform spread OFDM (DFTS-OFDM) based single-carrier frequency division multiple access (SC-FDMA) has been widely adopted due to the lower peak-to-average power ratio (PAPR) of its transmit signals compared with OFDM. However, offset modulation, which has lower PAPR than general modulation, cannot be directly applied to the existing SC-FDMA. When pulse-shaping filters are employed to further reduce the envelope fluctuation of SC-FDMA transmit signals, the spectral efficiency degrades as well. In order to overcome these limitations of conventional SC-FDMA, this paper for the first time investigates cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) based SC-FDMA transmission with adjustable user bandwidth and space-time coding. First, we propose CP-OQAM-OFDM transmission with unequally-spaced subbands. We then apply it to SC-FDMA transmission and propose an SC-FDMA scheme with the following features: a) the transmit signal of each user is offset-modulated single-carrier with frequency-domain pulse-shaping; b) the bandwidth of each user is adjustable; c) the spectral efficiency does not decrease with increasing roll-off factors. To combat both inter-symbol interference and multiple access interference in frequency-selective fading channels, a joint linear minimum mean square error frequency domain equalization using a priori information with low complexity is developed. Subsequently, we construct space-time codes for the proposed SC-FDMA. Simulation results confirm that the proposed CP-OQAM-OFDM scheme is effective yet of low complexity.
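For readers unfamiliar with the baseline, the sketch below shows a plain DFTS-OFDM (SC-FDMA) transmit chain and a PAPR measurement; it illustrates why DFT spreading lowers PAPR relative to OFDM, but it is not the paper's CP-OQAM-OFDM scheme, and all sizes are illustrative.

```python
# Minimal sketch (baseline DFTS-OFDM, not the proposed CP-OQAM-OFDM):
# DFT-spread QPSK symbols, localised subcarrier mapping, IFFT, cyclic
# prefix, then a PAPR measurement of the resulting block.
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256                     # user symbols per block, total subcarriers

# Random QPSK symbols for one user.
qpsk = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)

spread = np.fft.fft(qpsk) / np.sqrt(M)   # M-point DFT spreading
mapped = np.zeros(N, dtype=complex)
mapped[:M] = spread                      # localised mapping onto M subcarriers
x = np.fft.ifft(mapped) * np.sqrt(N)     # N-point IFFT to the time domain
x = np.concatenate([x[-16:], x])         # 16-sample cyclic prefix

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR = {papr_db:.1f} dB")
```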