958 results for Eguchi-Hanson Metric
Abstract:
Single-carrier frequency division multiple access (SC-FDMA) has emerged as a promising technique for high-data-rate uplink communications. Aimed at SC-FDMA applications, a cyclic prefixed version of the offset quadrature amplitude modulation based OFDM (OQAM-OFDM) is first proposed in this paper. We show that cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) can be realized within the framework of the standard OFDM system, and the perfect recovery condition in the ideal channel is derived. We then apply CP-OQAM-OFDM to SC-FDMA transmission in frequency-selective fading channels. A signal model and joint widely linear minimum mean square error (WLMMSE) equalization using a priori information with low complexity are developed. Compared with the existing DFTS-OFDM based SC-FDMA, the proposed SC-FDMA can significantly reduce the envelope fluctuation (EF) of the transmitted signal while maintaining bandwidth efficiency. The inherent structure of CP-OQAM-OFDM enables low-complexity joint equalization in the frequency domain to combat both the multiple access interference and the intersymbol interference. The joint WLMMSE equalization using a priori information guarantees optimal MMSE performance and supports a Turbo receiver for improved bit error rate (BER) performance. Simulation results confirm the effectiveness of the proposed SC-FDMA in terms of EF (including peak-to-average power ratio, instantaneous-to-average power ratio and cubic metric) and BER performance.
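The envelope-fluctuation metrics named in the abstract can be computed directly from complex baseband samples. As an illustrative sketch (not the paper's code; the signal construction below is a generic assumption), the peak-to-average power ratio of a transmitted block is:

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    """Peak-to-average power ratio of complex baseband samples, in dB."""
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# Hypothetical example: a random 16-QAM, 64-subcarrier OFDM-like symbol.
rng = np.random.default_rng(0)
qam = rng.choice([-3, -1, 1, 3], 64) + 1j * rng.choice([-3, -1, 1, 3], 64)
time_signal = np.fft.ifft(qam)
print(f"PAPR = {papr_db(time_signal):.2f} dB")
```

A constant-envelope signal gives 0 dB by this measure; multicarrier signals give strictly positive values, which is the fluctuation the proposed scheme aims to reduce.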
Abstract:
The emergence of high-density wireless local area network (WLAN) deployments in recent years is a testament to the insatiable demand for wireless broadband services. The increased density of WLAN deployments brings with it the potential of increased capacity, extended coverage, and exciting new applications. However, the corresponding increase in contention and interference can significantly degrade throughputs, unless new challenges in channel assignment are effectively addressed. In this paper, a client-assisted channel assignment scheme that can provide enhanced throughput is proposed. A study on the impact of interference on throughput with multiple access points (APs) is first undertaken using a novel approach that determines the possibility of parallel transmissions. A metric with a good correlation to throughput, i.e., the number of conflict pairs, is used in the client-assisted minimum conflict pairs (MICPA) scheme. In this scheme, measurements from clients are used to assist the AP in determining the channel with the minimum number of conflict pairs to maximize its expected throughput. Simulation results show that the client-assisted MICPA scheme can provide meaningful throughput improvements over other schemes that only utilize the AP’s measurements.
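The conflict-pair metric can be illustrated with a small sketch (the interference model and all names below are our assumptions, not the paper's implementation): for a candidate channel assignment, count AP pairs that share a channel and mutually interfere, i.e., cannot transmit in parallel.

```python
from itertools import combinations

def conflict_pairs(channel_of: dict, interferes: set) -> int:
    """Count pairs of APs assigned the same channel that mutually interfere.

    channel_of: AP -> assigned channel
    interferes: set of frozensets {ap1, ap2} that interfere when co-channel
    """
    return sum(
        1
        for a, b in combinations(channel_of, 2)
        if channel_of[a] == channel_of[b] and frozenset((a, b)) in interferes
    )

# Hypothetical topology: AP1-AP2 and AP2-AP3 interfere if co-channel.
interference = {frozenset(("AP1", "AP2")), frozenset(("AP2", "AP3"))}
assignment = {"AP1": 1, "AP2": 6, "AP3": 1}
print(conflict_pairs(assignment, interference))  # 0: no co-channel interfering pair
```

In the MICPA spirit, an AP would evaluate this count for each candidate channel (using client-reported measurements to build the interference set) and pick the channel with the minimum.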
Abstract:
This work investigated personal exposure to indoor particulate matter using the intake fraction metric and provided a possible way to trace particles inhaled from an indoor particle source. A turbulence model, validated against particle measurements in a room with an underfloor air distribution (UFAD) system, was used to predict indoor particle concentrations. The inhalation intake fraction of indoor particles was defined and evaluated in two rooms equipped with UFAD, i.e., the experimental room and a small office. According to the exposure characteristics and a typical respiratory rate, the intake fraction was determined in the two rooms with a continuous and an episodic (human cough) source of particles, respectively. The findings showed that the well-mixing assumption of indoor air failed to give an accurate estimation of inhalation exposure, and that the average concentration at the return outlet or within the overall room could not reliably relate the intake fraction to the amount of particles emitted from an indoor source.
Abstract:
Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e., data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship with a stronger fit for the spatially aggregated NDVI products (R2 = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R2 = 0.5064). This relationship serves as a reference for evaluation of the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions.
The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution, at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
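The logarithmic decay of spatial variability with pixel size can be sketched as a simple model fit. The numbers below are placeholders, not the study's data; the sketch only shows the form of the relationship and how an R2 value like those quoted above would be obtained:

```python
import numpy as np

# Hypothetical (pixel size in m, spatial variability) pairs.
pixel_size = np.array([250.0, 500.0, 1000.0, 8000.0])
variability = np.array([0.040, 0.033, 0.026, 0.005])

# Fit variability = a * ln(pixel_size) + b and report the goodness of fit.
a, b = np.polyfit(np.log(pixel_size), variability, 1)
pred = a * np.log(pixel_size) + b
ss_res = np.sum((variability - pred) ** 2)
ss_tot = np.sum((variability - variability.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"slope = {a:.4g}, R^2 = {r2:.3f}")
```

A negative slope corresponds to the decay of spatial information content as pixels coarsen; comparing R2 across product families is what distinguishes the aggregated from the native-resolution case in the abstract.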
Abstract:
The goal was to quantitatively estimate and compare the fidelity of images acquired with a digital imaging system (ADAR 5500) and generated through scanning of color infrared aerial photographs (SCIRAP) using image-based metrics. Images were collected nearly simultaneously in two repetitive flights to generate multi-temporal datasets. Spatial fidelity of ADAR was lower than that of SCIRAP images. Radiometric noise was higher for SCIRAP than for ADAR images, even though noise from misregistration effects was lower. These results suggest that with careful control of film scanning, the overall fidelity of SCIRAP imagery can be comparable to that of digital multispectral camera data. Therefore, SCIRAP images can likely be used in conjunction with digital metric camera imagery in long-term land-cover change analyses.
Abstract:
As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or of small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics, such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error, according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters and discuss the effect of the permutation restriction.
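A minimal sketch of the restricted-permutation idea (our own illustrative implementation, not the paper's method: we allow each forecast point to be matched only within a window of ±w time steps and solve the resulting assignment problem):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_mae(forecast, actual, w=1):
    """MAE after optimally re-matching forecast points within +/- w steps."""
    n = len(forecast)
    big = 1e9  # effectively forbids matches outside the window
    cost = np.full((n, n), big)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        cost[i, lo:hi] = np.abs(forecast[i] - np.asarray(actual[lo:hi]))
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# A demand peak forecast one step late: point-wise MAE penalises it twice
# (missed peak plus false peak), the adjusted measure does not.
actual = np.array([0.0, 5.0, 0.0, 0.0])
forecast = np.array([0.0, 0.0, 5.0, 0.0])
print(np.abs(forecast - actual).mean())     # 2.5
print(adjusted_mae(forecast, actual, w=1))  # 0.0
```

The window width w plays the role of the permutation restriction discussed in the abstract: w = 0 recovers the ordinary point-wise MAE, while larger w forgives larger temporal displacements.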
Abstract:
The psychometric properties of scores from the Achievement Goal Questionnaire were examined in samples of Japanese (N = 326) and Canadian (N = 307) postsecondary students. Previous research found evidence of a four-factor structure of achievement goals in U.S. samples. Using confirmatory factor-analytic techniques, the authors found strong evidence for the four-factor structure of achievement goals in both the Canadian and Japanese populations. Subsequent multigroup structural equation modeling indicated the metric invariance of this four-factor structure across the two populations.
Abstract:
Consider the massless Dirac operator on a 3-torus equipped with Euclidean metric and standard spin structure. It is known that the eigenvalues can be calculated explicitly: the spectrum is symmetric about zero and zero itself is a double eigenvalue. The aim of the paper is to develop a perturbation theory for the eigenvalue with smallest modulus with respect to perturbations of the metric. Here the application of perturbation techniques is hindered by the fact that eigenvalues of the massless Dirac operator have even multiplicity, which is a consequence of this operator commuting with the antilinear operator of charge conjugation (a peculiar feature of dimension 3). We derive an asymptotic formula for the eigenvalue with smallest modulus for arbitrary perturbations of the metric and present two particular families of Riemannian metrics for which the eigenvalue with smallest modulus can be evaluated explicitly. We also establish a relation between our asymptotic formula and the eta invariant.
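For the standard flat metric the stated spectrum can be made explicit. With the conventional normalisation $\mathbb{T}^3 = \mathbb{R}^3/(2\pi\mathbb{Z})^3$ and standard spin structure (a common choice; the paper's normalisation may differ), the massless Dirac operator acts on Fourier modes as

$$ W\left(u\,e^{\mathrm{i}\,m\cdot x}\right) = (\sigma^\alpha m_\alpha)\,u\,e^{\mathrm{i}\,m\cdot x}, \qquad m \in \mathbb{Z}^3, $$

and since $(\sigma^\alpha m_\alpha)^2 = |m|^2 I$, each mode contributes the eigenvalue pair $\pm|m|$. The mode $m = 0$ yields the double zero eigenvalue, and the spectrum is symmetric about zero, as stated.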
Abstract:
Over the last decade, due to the Gravity Recovery And Climate Experiment (GRACE) mission and, more recently, the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, our ability to measure the ocean’s mean dynamic topography (MDT) from space has improved dramatically. Here we use GOCE to measure surface current speeds in the North Atlantic and compare our results with a range of independent estimates that use drifter data to improve small scales. We find that, with filtering, GOCE can recover 70% of the Gulf Stream strength relative to the best drifter-based estimates. In the subpolar gyre the boundary currents obtained from GOCE are close to the drifter-based estimates. Crucial to this result is careful filtering, which is required to remove small-scale errors, or noise, in the computed surface. We show that our heuristic noise metric, used to determine the degree of filtering, compares well with the quadratic sum of mean sea surface and formal geoid errors obtained from the error variance–covariance matrix associated with the GOCE gravity model. At a resolution of 100 km the North Atlantic mean GOCE MDT error before filtering is 5 cm, with almost all of this coming from the GOCE gravity model.
Abstract:
We present here a straightforward method which can be used to obtain a quantitative indication of an individual research output for an academic. Different versions, selections and options are presented to enable a user to easily calculate values both for stand-alone papers and overall for the collection of outputs for a person. The procedure is particularly useful as a metric to give a quantitative indication of the research output of a person over a time window. Examples are included to show how the method works in practice and how it compares to alternative techniques.
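The abstract does not spell out the method itself, but single-number research-output metrics of this kind are typically computed from a sorted citation list. As a familiar point of comparison only (an assumption on our part, not the paper's method), the widely used h-index is computed as:

```python
def h_index(citations: list) -> int:
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Any alternative technique the paper compares against would consume the same input (per-paper output counts over a time window) and reduce it to a single quantitative indicator.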
Abstract:
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. This modelling chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry–climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. 
Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to demonstrate the plausibility of the climate cost functions.
Abstract:
Persistent contrails are an important climate impact of aviation which could potentially be reduced by re-routing aircraft to avoid contrailing; however, this generally increases both the flight length and its corresponding CO2 emissions. Here, we provide a simple framework to assess the trade-off between the climate impact of CO2 emissions and contrails for a single flight, in terms of the absolute global warming potential and absolute global temperature potential metrics for time horizons of 20, 50 and 100 years. We use the framework to illustrate the maximum extra distance (with no altitude changes) that can be added to a flight and still reduce its overall climate impact. Small aircraft can fly up to four times further to avoid contrailing than large aircraft. The results have a strong dependence on the applied metric and time horizon. Applying a conservative estimate of the uncertainty in the contrail radiative forcing and climate efficacy leads to a factor of 20 difference in the maximum extra distance that could be flown to avoid a contrail. The impact of re-routing on other climatically important aviation emissions could also be considered in this framework.
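The trade-off reduces to a break-even calculation: re-routing is worthwhile while the extra CO2 impact from added distance stays below the avoided contrail impact. The sketch below illustrates only this structure; all parameter names and values are hypothetical placeholders, not results from the paper.

```python
def max_extra_distance_km(
    contrail_impact_avoided: float,  # impact of the avoided contrail, in the chosen metric (e.g. AGWP)
    co2_impact_per_km: float,        # impact of the CO2 emitted per extra km flown, same metric
) -> float:
    """Break-even extra distance: beyond this, re-routing increases net impact."""
    return contrail_impact_avoided / co2_impact_per_km

# Hypothetical numbers: a small aircraft emits less CO2 per km, so for the
# same avoided contrail it can fly several times further than a large one.
print(max_extra_distance_km(contrail_impact_avoided=100.0, co2_impact_per_km=0.5))  # 200.0
print(max_extra_distance_km(contrail_impact_avoided=100.0, co2_impact_per_km=2.0))  # 50.0
```

The strong dependence on metric and time horizon noted in the abstract enters through both inputs: AGWP and AGTP at 20, 50 or 100 years assign very different relative weights to the short-lived contrail and the long-lived CO2.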
Abstract:
Data assimilation methods which avoid the assumption of Gaussian error statistics are being developed for geoscience applications. We investigate how the relaxation of the Gaussian assumption affects the impact observations have within the assimilation process. The effect of non-Gaussian observation error (described by the likelihood) is compared to previously published work studying the effect of a non-Gaussian prior. The observation impact is measured in three ways: the sensitivity of the analysis to the observations, the mutual information, and the relative entropy. These three measures have all been studied in the case of Gaussian data assimilation and, in this case, have a known analytical form. It is shown that the analysis sensitivity can also be derived analytically when at least one of the prior or likelihood is Gaussian. This derivation shows an interesting asymmetry in the relationship between analysis sensitivity and analysis error covariance when the two different sources of non-Gaussian structure are considered (likelihood vs. prior). This is illustrated for a simple scalar case and used to infer the effect of the non-Gaussian structure on mutual information and relative entropy, which are more natural choices of metric in non-Gaussian data assimilation. It is concluded that approximating non-Gaussian error distributions as Gaussian can give significantly erroneous estimates of observation impact. The degree of the error depends not only on the nature of the non-Gaussian structure, but also on the metric used to measure the observation impact and the source of the non-Gaussian structure.
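For the fully Gaussian scalar case mentioned in the abstract, the impact measures have standard closed forms. A minimal sketch (textbook formulas, with variable names of our choosing; relative entropy is omitted because it additionally depends on the realized analysis increment):

```python
import math

def gaussian_scalar_impact(prior_var: float, obs_var: float):
    """Observation impact measures for a scalar Gaussian assimilation step."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)  # analysis error variance
    sensitivity = post_var / obs_var                    # d(analysis)/d(observation), the gain
    mutual_info = 0.5 * math.log(prior_var / post_var)  # information gained, in nats
    return post_var, sensitivity, mutual_info

post_var, s, mi = gaussian_scalar_impact(prior_var=4.0, obs_var=1.0)
print(post_var)  # 0.8
print(s)         # 0.8, equals the Kalman gain 4/(4+1)
print(mi)        # 0.5 * ln(5)
```

In the Gaussian case all three quantities are tied to the same variance ratio; the asymmetry described in the abstract appears precisely when either the prior or the likelihood stops being Gaussian and these relationships decouple.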