856 results for Error-location numbers


Relevance:

20.00%

Publisher:

Abstract:

In this paper, multiple-input multiple-output (MIMO) transmit beamforming (TB) systems are investigated in the presence of nonlinear high-power amplifiers (HPAs). The optimal beamforming scheme, comprising the optimal beamforming weight vector and combining vector, is proposed for MIMO systems with HPA nonlinearity. The performance of the proposed MIMO beamforming scheme in the presence of HPA nonlinearity is evaluated in terms of average symbol error probability (SEP), outage probability and system capacity, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects of several system parameters, namely the parameters of the nonlinear HPA, the numbers of transmit and receive antennas, and the modulation order of phase-shift keying (PSK), on performance.
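
The abstract does not reproduce the proposed scheme, but a minimal Monte Carlo sketch of the general setup is easy to write down: dominant-singular-vector transmit and combining weights (optimal in the linear case) followed by an assumed soft-envelope-limiter HPA. The limiter model, the saturation level `A_sat` and all parameter values below are illustrative assumptions, not the paper's proposal.

```python
import numpy as np

# Monte Carlo sketch: MIMO TB with a soft-envelope-limiter HPA (assumed model).
rng = np.random.default_rng(0)
Nt, Nr, M = 4, 2, 8                  # transmit/receive antennas, PSK order
snr_db, n_trials = 15.0, 20000
A_sat = 0.5                          # assumed HPA saturation amplitude

const = np.exp(2j * np.pi * np.arange(M) / M)    # unit-energy M-PSK
sigma = 10 ** (-snr_db / 20)
errors = 0
for _ in range(n_trials):
    H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    U, s, Vh = np.linalg.svd(H)
    w, c = Vh[0].conj(), U[:, 0]     # linear-optimal beamforming/combining vectors
    sym = const[rng.integers(M)]
    x = w * sym                      # per-antenna transmit signal
    a = np.abs(x)                    # soft limiter: clip the envelope, keep the phase
    x = np.where(a > A_sat, A_sat * x / np.maximum(a, 1e-12), x)
    n = sigma * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)
    r = np.conj(c) @ (H @ x + n)     # combiner output (= s[0]*sym in the linear case)
    errors += np.argmin(np.abs(const - r / s[0])) != np.argmin(np.abs(const - sym))
print("estimated SEP:", errors / n_trials)
```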

Relevance:

20.00%

Publisher:

Abstract:

In practice, all I/Q signal processing receivers face the problem of I/Q imbalance. In this paper, we investigate the effect of I/Q imbalance on the performance of MIMO maximal ratio combining (MRC) systems that perform the combining at the radio frequency (RF) level, thereby requiring only one RF chain. Based on a system model that takes the I/Q imbalance into account, we evaluate the performance in terms of average symbol error probability (SEP), outage probability and system capacity, which are derived considering transmission over uncorrelated Rayleigh fading channels. Numerical results are provided to illustrate the effects of system parameters, such as the image-leakage ratio, the numbers of transmit and receive antennas, and the modulation order of quadrature amplitude modulation (QAM), on the system performance.
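
For readers unfamiliar with the impairment: a common baseband model for receiver I/Q imbalance is z = μx + νx*, with the image-leakage ratio defined as |ν|²/|μ|². The parameterisation below (amplitude imbalance g, phase imbalance φ) is one standard convention and is not necessarily the one used in the paper.

```python
import numpy as np

def iq_imbalance(x, g=1.05, phi_deg=3.0):
    """Apply z = mu*x + nu*conj(x), one common receiver I/Q-imbalance model.
    g: amplitude imbalance (1.0 ideal); phi_deg: phase imbalance (0.0 ideal).
    Returns the impaired signal and the image-leakage ratio |nu|^2/|mu|^2."""
    phi = np.deg2rad(phi_deg)
    mu = (1 + g * np.exp(-1j * phi)) / 2
    nu = (1 - g * np.exp(1j * phi)) / 2
    return mu * x + nu * np.conj(x), np.abs(nu) ** 2 / np.abs(mu) ** 2

# QPSK example: small imbalances already leak the image signal
x = (np.array([1, -1, 1, -1]) + 1j * np.array([1, 1, -1, -1])) / np.sqrt(2)
z, ilr = iq_imbalance(x)
print("image-leakage ratio:", ilr)
```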

Relevance:

20.00%

Publisher:

Abstract:

The nonlinearity of the high-power amplifier (HPA) plays a crucial role in the performance of multiple-input multiple-output (MIMO) systems. In this paper, we investigate the performance of MIMO orthogonal space-time block coding (STBC) systems in the presence of a nonlinear HPA. Specifically, we assess the impact of HPA nonlinearity on the average symbol error probability (SEP), total degradation (TD) and system capacity of orthogonal STBC in uncorrelated Nakagami-m fading channels. Numerical results are provided and show the effects of several system parameters, such as the output back-off (OBO) of the nonlinear HPA, the numbers of transmit and receive antennas, and the modulation order of quadrature amplitude modulation (QAM), on performance.
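
A hedged sketch of the kind of experiment the abstract describes: Alamouti (2×1 orthogonal STBC) transmission where each antenna stream passes through an assumed soft-envelope limiter sized for a given OBO. Rayleigh fading (Nakagami-m with m = 1) is used for brevity; the HPA model and all numbers are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
snr_db, n_blocks, obo_db = 18.0, 20000, 1.0
pts = np.array([-3.0, -1.0, 1.0, 3.0])
const = (pts[:, None] + 1j * pts[None, :]).ravel() / np.sqrt(10)   # unit-energy 16-QAM

def sel_hpa(x, obo_db):
    # soft-envelope limiter sized for the requested OBO (assumed HPA model)
    a_sat = np.sqrt(10 ** (obo_db / 10))       # mean output power ~ 1 by construction
    a = np.abs(x)
    return np.where(a > a_sat, a_sat * x / np.maximum(a, 1e-12), x)

sigma = 10 ** (-snr_db / 20)
err = 0
for _ in range(n_blocks):
    s1, s2 = const[rng.integers(16, size=2)]
    h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    x1 = sel_hpa(np.array([s1, -np.conj(s2)]), obo_db)   # antenna 1, slots 1-2
    x2 = sel_hpa(np.array([s2,  np.conj(s1)]), obo_db)   # antenna 2, slots 1-2
    n = sigma * (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    y = h[0] * x1 + h[1] * x2 + n
    g = np.sum(np.abs(h) ** 2)
    s1_hat = (np.conj(h[0]) * y[0] + h[1] * np.conj(y[1])) / g   # Alamouti combining
    s2_hat = (np.conj(h[1]) * y[0] - h[0] * np.conj(y[1])) / g
    err += np.argmin(np.abs(const - s1_hat)) != np.argmin(np.abs(const - s1))
    err += np.argmin(np.abs(const - s2_hat)) != np.argmin(np.abs(const - s2))
print("estimated SEP:", err / (2 * n_blocks))
```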

Relevance:

20.00%

Publisher:

Abstract:

As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single household-level, or of small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect incurred by forecasts whose features are displaced in space or time, relative to traditional pointwise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the pointwise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters and discuss the effect of the permutation restriction.
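
The proposed measure rests on a restricted permutation that minimises the pointwise error. A minimal sketch of that idea, assuming an absolute-error metric and a displacement window of w time steps, casts it as an assignment problem; the window mechanics and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_mae(actual, forecast, w=2):
    """Permuted MAE: forecast values may be moved up to w time steps before
    being compared pointwise, reducing the double-penalty effect for
    displaced peaks (a sketch of the paper's idea, not its exact measure)."""
    n = len(actual)
    cost = np.abs(actual[:, None] - forecast[None, :])
    # forbid moves beyond the window by making them prohibitively expensive
    cost[np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) > w] = 1e12
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

actual   = np.array([0., 0., 5., 0., 0., 0.])
forecast = np.array([0., 0., 0., 5., 0., 0.])   # peak displaced by one step
print(adjusted_mae(actual, forecast, w=1))      # 0.0: displacement forgiven
print(np.abs(actual - forecast).mean())         # plain MAE double-penalises
```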

Relevance:

20.00%

Publisher:

Abstract:

The ability of six scanning cloud radar scan strategies to reconstruct cumulus cloud fields for radiation study is assessed. Utilizing snapshots of clean and polluted cloud fields from large eddy simulations, an analysis is undertaken of error in both the liquid water path and monochromatic downwelling surface irradiance at 870 nm of the reconstructed cloud fields. Error introduced by radar sensitivity, choice of radar scan strategy, retrieval of liquid water content (LWC), and reconstruction scheme is explored. Given an infinitely sensitive radar and perfect LWC retrieval, domain-average surface irradiance biases are typically less than 3 W m⁻² μm⁻¹, corresponding to 5–10% of the cloud radiative effect (CRE). However, when using a realistic radar sensitivity of −37.5 dBZ at 1 km, optically thin areas and edges of clouds are difficult to detect due to their low radar reflectivity; in clean conditions, overestimates are of order 10 W m⁻² μm⁻¹ (~20% of the CRE), but in polluted conditions, where the droplets are smaller, this increases to 10–26 W m⁻² μm⁻¹ (~40–100% of the CRE). Drizzle drops are also problematic; if treated as cloud droplets, reconstructions are poor, leading to large underestimates of 20–46 W m⁻² μm⁻¹ in domain-average surface irradiance (~40–80% of the CRE). Nevertheless, a synergistic retrieval approach combining the detailed cloud structure obtained from scanning radar with the droplet-size information and location of cloud base gained from other instruments would potentially make accurate solar radiative transfer calculations in broken cloud possible for the first time.
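
As a back-of-envelope illustration of why a −37.5 dBZ sensitivity misses optically thin cloud, one can invert an assumed Z–LWC power law to find the smallest detectable liquid water content; the coefficients below are purely illustrative and are not the retrieval used in the study.

```python
import numpy as np

def lwc_from_dbz(dbz, a=0.048, b=2.0):
    """Invert an assumed power law Z = a * LWC**b (Z in mm^6 m^-3, LWC in
    g m^-3); coefficients are illustrative, not the paper's retrieval."""
    z = 10 ** (dbz / 10.0)            # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)

# minimum detectable LWC for a sensitivity of -37.5 dBZ at 1 km
print("min detectable LWC (g m^-3):", round(lwc_from_dbz(-37.5), 3))
```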

Relevance:

20.00%

Publisher:

Abstract:

Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables defining the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and of a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
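
The paper's pipeline is implemented as an R function; purely to illustrate the flavour of the normalisation and variability estimates, here is a minimal Python sketch (log2 transform plus per-array median centring on synthetic replicate data). It is not the published pipeline.

```python
import numpy as np

def normalise(signal):
    """Log2-transform raw intensities and median-centre each array so that
    replicate arrays become comparable (a sketch, not the published R code)."""
    log_sig = np.log2(signal)
    return log_sig - np.median(log_sig, axis=0)     # per-array centring

# synthetic replicate data: genes x arrays
raw = np.random.default_rng(2).lognormal(mean=8.0, sigma=1.0, size=(1000, 4))
norm = normalise(raw)
print("mean per-gene SD across arrays (log2 units):", norm.std(axis=1).mean().round(2))
```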

Relevance:

20.00%

Publisher:

Abstract:

It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
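
A toy version of the landmark-location stage can be sketched as follows, assuming Gaussian noise on bearing observations from two viewpoints and a grid-based likelihood map; the paper's photogrammetric model is considerably richer, and all values here are illustrative.

```python
import numpy as np

# Likelihood map for a landmark's 2D location given noisy bearing
# observations from two viewpoints (assumed Gaussian angular noise).
views = np.array([[0.0, 0.0], [4.0, 0.0]])       # observer positions
true_lm = np.array([2.0, 3.0])                   # true landmark location
sigma = np.deg2rad(2.0)                          # bearing noise SD (assumption)
rng = np.random.default_rng(3)

bearings = [np.arctan2(*(true_lm - v)[::-1]) + sigma * rng.standard_normal()
            for v in views]

xs, ys = np.meshgrid(np.linspace(-1, 5, 121), np.linspace(0, 6, 121))
loglik = np.zeros_like(xs)
for v, b in zip(views, bearings):
    pred = np.arctan2(ys - v[1], xs - v[0])      # predicted bearing per grid cell
    diff = np.angle(np.exp(1j * (pred - b)))     # wrap difference to [-pi, pi]
    loglik += -0.5 * (diff / sigma) ** 2

best = np.unravel_index(np.argmax(loglik), loglik.shape)
print("ML landmark estimate:", xs[best], ys[best])
```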

Relevance:

20.00%

Publisher:

Abstract:

We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies, such as intercept corrections or differencing, when location shifts occur as in the recent financial crisis.
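
A minimal sketch of direct h-step factor forecasting (PCA factors extracted from a standardised predictor panel, plus the lagged target) is shown below; the synthetic data, variable names and the omission of Autometrics model selection and impulse-indicator saturation are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, k, h = 200, 40, 3, 4               # sample size, predictors, factors, horizon
X = rng.standard_normal((T, N))          # predictor panel (illustrative)
y = rng.standard_normal(T)               # target series (e.g. GDP growth)

Xs = (X - X.mean(0)) / X.std(0)          # standardise predictors
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
F = Xs @ Vt[:k].T                        # first k principal components

Z = np.column_stack([np.ones(T - h), F[:-h], y[:-h]])   # regressors dated t
beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)        # target dated t+h
z_T = np.concatenate([[1.0], F[-1], [y[-1]]])
print("direct h-step forecast:", z_T @ beta)
```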

Relevance:

20.00%

Publisher:

Abstract:

The optimal utilisation of hyper-spectral satellite observations in numerical weather prediction is often inhibited by incorrectly assuming independent interchannel observation errors. However, in order to represent these observation-error covariance structures, an accurate knowledge of the true variances and correlations is needed. This structure is likely to vary with observation type and assimilation system. The work in this article presents the initial results for the estimation of IASI interchannel observation-error correlations when the data are processed in the Met Office one-dimensional (1D-Var) and four-dimensional (4D-Var) variational assimilation systems. The method used to calculate the observation errors is a post-analysis diagnostic which utilises the background and analysis departures from the two systems. The results show significant differences in the source and structure of the observation errors when processed in the two different assimilation systems, but also highlight some common features. When the observations are processed in 1D-Var, the diagnosed error variances are approximately half the size of the error variances used in the current operational system and are very close in size to the instrument noise, suggesting that this is the main source of error. The errors contain no consistent correlations, with the exception of a handful of spectrally close channels. When the observations are processed in 4D-Var, we again find that the observation errors are being overestimated operationally, but the overestimation is significantly larger for many channels. In contrast to 1D-Var, the diagnosed error variances are often larger than the instrument noise in 4D-Var. It is postulated that horizontal errors of representation, not seen in 1D-Var, are a significant contributor to the overall error here. Finally, observation errors diagnosed from 4D-Var are found to contain strong, consistent correlation structures for channels sensitive to water vapour and surface properties.
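
The post-analysis diagnostic described here, built on background departures d_b = y − H(x_b) and analysis departures d_a = y − H(x_a), can be sketched as a Desroziers-style estimate R ≈ E[d_a d_bᵀ]; the array layout and the symmetrisation step below are implementation choices, not details taken from the article.

```python
import numpy as np

def diagnosed_R(d_b, d_a):
    """Observation-error covariance from departures (samples x channels):
    R ~ E[d_a d_b^T], with d_b = y - H(x_b) and d_a = y - H(x_a).
    Returns the symmetrised sample estimate."""
    d_b = d_b - d_b.mean(axis=0)
    d_a = d_a - d_a.mean(axis=0)
    R = (d_a.T @ d_b) / d_b.shape[0]
    return 0.5 * (R + R.T)

def to_correlation(R):
    """Convert the covariance estimate to a correlation matrix."""
    s = np.sqrt(np.diag(R))
    return R / np.outer(s, s)

# toy demo with synthetic departures for 3 channels
rng = np.random.default_rng(6)
d_b = rng.standard_normal((5000, 3))
d_a = 0.5 * d_b + 0.1 * rng.standard_normal((5000, 3))
print(to_correlation(diagnosed_R(d_b, d_a)).round(2))
```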

Relevance:

20.00%

Publisher:

Abstract:

The observation-error covariance matrix used in data assimilation contains contributions from instrument errors, representativity errors and errors introduced by the approximated observation operator. Forward model errors arise when the observation operator does not correctly model the observations or when observations can resolve spatial scales that the model cannot. Previous work to estimate the observation-error covariance matrix for particular observing instruments has shown that it contains significant correlations. In particular, correlations for humidity data are more significant than those for temperature. However, it is not known what proportion of these correlations can be attributed to the representativity errors. In this article we apply an existing method for calculating representativity error, previously applied to an idealised system, to NWP data. We calculate horizontal errors of representativity for temperature and humidity using data from the Met Office high-resolution UK variable resolution model. Our results show that errors of representativity are correlated and more significant for specific humidity than for temperature. We also find that representativity error varies with height. This suggests that the assimilation scheme may be improved if these errors are explicitly included in a data assimilation scheme. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
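
The specific published method applied to the Met Office data is not reproduced here; the following toy sketch only illustrates the underlying idea that representativity error, i.e. the scales observations resolve but the model cannot, is itself spatially correlated.

```python
import numpy as np

# Toy illustration: take a high-resolution 1-D "truth" field, coarsen it to
# a model-like resolution, and treat the residual as representativity error.
rng = np.random.default_rng(5)
hi = np.cumsum(rng.standard_normal(512))          # high-resolution field
coarse = np.repeat(hi.reshape(-1, 8).mean(1), 8)  # model-resolution average
rep_err = hi - coarse                             # scales the model cannot resolve

print("representativity-error SD:", round(rep_err.std(), 2))
# neighbouring locations share unresolved structure, so errors correlate
print("lag-1 correlation:", round(np.corrcoef(rep_err[:-1], rep_err[1:])[0, 1], 2))
```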

Relevance:

20.00%

Publisher:

Abstract:

The rising share of intangibles in economies worldwide highlights the crucial role of knowledge-intensive and creative industries in current and future wealth generation. The recognition of this trend has led to intense competition in these industries. At the micro-level, firms from both advanced and emerging economies are globally dispersing their value chains to control costs and leverage capabilities. The geography of innovation is the outcome of a dynamic process whereby firms from emerging economies strive to catch up with advanced-economy competitors, creating strong pressures for continued innovation. However, two distinct strategies can be discerned with regard to the control of the value chain. A vertical integration strategy emphasizes taking advantage of ‘linkage economies’ whereby controlling multiple value chain activities enhances the efficiency and effectiveness of each one of them. In contrast, a specialization strategy focuses on identifying and controlling the creative heart of the value chain, while outsourcing all other activities. The global mobile handset industry is used as the template to illustrate the theory.