118 results for Space-time block coding (STBC)
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which case a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using the Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
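The Dinkelbach-type procedure named above solves a fractional program max N(x)/D(x) by repeatedly solving the easier parametric problem max N(x) − λD(x) and updating λ to the achieved ratio. A minimal sketch on an invented scalar objective (the functions, grid, and tolerances are illustrative, not taken from the paper):

```python
import numpy as np

def dinkelbach(num, den, candidates, tol=1e-9, max_iter=100):
    """Maximize num(x)/den(x) over a finite candidate set.

    Dinkelbach iteration: given lambda_k, solve the parametric problem
    max_x num(x) - lambda_k * den(x); then update
    lambda_{k+1} = num(x*)/den(x*) until the parametric optimum is ~0,
    at which point lambda equals the optimal ratio.
    """
    lam = 0.0
    for _ in range(max_iter):
        vals = num(candidates) - lam * den(candidates)
        x_star = candidates[np.argmax(vals)]
        f_star = num(x_star) - lam * den(x_star)
        if abs(f_star) < tol:
            return x_star, lam
        lam = num(x_star) / den(x_star)
    return x_star, lam

# Toy example: maximize (x + 1) / (x**2 + 1) over a fine grid.
xs = np.linspace(0.0, 2.0, 20001)
x_opt, ratio = dinkelbach(lambda x: x + 1.0, lambda x: x**2 + 1.0, xs)
```

For this toy objective the maximizer is x = √2 − 1 ≈ 0.414 with optimal ratio (√2 + 1)/2, which the iteration recovers to grid accuracy.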
Abstract:
The anthropogenic heat emissions generated by human activities in London are analysed in detail for 2005–2008 and considered in the context of long-term past and future trends (1970–2025). Emissions from buildings, road traffic and human metabolism are finely resolved in space (200 × 200 m²) and time (30 min). Software to compute and visualize the results is provided. The annual mean anthropogenic heat flux for Greater London is 10.9 W m⁻² for 2005–2008, with the highest peaks in the central activities zone (CAZ) associated with extensive service industry activities. Towards the outskirts of the city, emissions from the domestic sector and road traffic dominate. Anthropogenic heat is mostly emitted as sensible heat, with a latent heat fraction of 7.3% and a heat-to-wastewater fraction of 12%; the implications related to the use of evaporative cooling towers are briefly addressed. Projections indicate a further increase of heat emissions within the CAZ in the next two decades related to further intensification of activities within this area.
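The unit conversion behind such a gridded inventory is straightforward: the heat released in each sector during an accounting interval, divided by cell area and interval length, gives a flux in W m⁻². A sketch with invented per-cell energy totals (the sector values below are not data from the study):

```python
# Convert half-hourly sector energy emissions on a 200 m x 200 m grid
# to an anthropogenic heat flux Q_F in W m^-2 (illustrative numbers only).
CELL_AREA_M2 = 200.0 * 200.0      # one grid cell
INTERVAL_S = 30.0 * 60.0          # 30-minute accounting interval

def heat_flux_wm2(buildings_j, traffic_j, metabolism_j):
    """Q_F = total heat released in the interval / (area * time)."""
    total_j = buildings_j + traffic_j + metabolism_j
    return total_j / (CELL_AREA_M2 * INTERVAL_S)

# e.g. 6.0e8 J buildings + 1.5e8 J traffic + 0.3e8 J metabolism in 30 min
q_f = heat_flux_wm2(6.0e8, 1.5e8, 0.3e8)
```

Averaging such per-cell fluxes over all cells and intervals of a year would yield a city-wide annual mean like the one quoted above.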
Abstract:
This paper considers the dynamics of deposition around and across the causewayed enclosure at Etton, Cambridgeshire. As a result of detailed re-analysis (particularly refitting) of the pottery and flint assemblages from the site, it proved possible to shed new light both on the temporality of occupation and the character of deposition there. Certain aspects of our work challenge previous interpretations of the site, and of causewayed enclosures in general; but, just as importantly, others confirm materially what has previously been suggested. The quantities of material deposited at Etton reveal that the enclosure was occupied only very intermittently and certainly less regularly than other contemporary sites in the region. The spatial distribution of material suggests that the enclosure ditch lay open for the entirety of the monument's life, but that acts of deposition generally focused on a specific part of the monument at any one time. As well as enhancing our knowledge of one particular causewayed enclosure, it is hoped that this paper – in combination with our earlier analysis of the pit site at Kilverstone – makes clear the potential that detailed material analysis has to offer in relation to our understanding of the temporality of occupation on prehistoric sites in general.
Abstract:
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Elements Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. In complicated spatial domains as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM easier than domain methods such as finite elements and finite differences, conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) as well as requiring smaller amounts of memory to solve the same problem. To do this we will combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h
Abstract:
Channel estimation is a key issue in MIMO systems. In recent years, many papers on subspace (SS)-based blind channel estimation have been published. In this paper, combining the SS method with a space-time coding scheme, we propose a novel blind channel estimation method for MIMO systems. Simulation results demonstrate the effectiveness of the method.
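The subspace idea can be illustrated generically: with white transmitted symbols, the top eigenvectors of the received sample covariance span (approximately) the column space of the channel matrix, from which the channel can be recovered up to an ambiguity matrix. A hedged sketch with invented dimensions and noise level, omitting the space-time-coding structure the paper actually exploits:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, T = 2, 4, 5000          # transmit antennas, receive antennas, samples

# Unknown flat-fading MIMO channel, to be identified up to an ambiguity.
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# White transmitted symbols and received data with small additive noise.
S = (rng.standard_normal((Nt, T)) + 1j * rng.standard_normal((Nt, T))) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal((Nr, T)) + 1j * rng.standard_normal((Nr, T)))
Y = H @ S + noise

# Sample covariance of the received data; its top-Nt eigenvectors span
# (approximately) the same subspace as the columns of H.
R = Y @ Y.conj().T / T
eigvals, eigvecs = np.linalg.eigh(R)
Us = eigvecs[:, -Nt:]                      # estimated signal subspace

# Relative residual of projecting H onto the estimated signal subspace.
residual = np.linalg.norm(H - Us @ (Us.conj().T @ H)) / np.linalg.norm(H)
```

A small residual confirms the estimated subspace contains the true channel columns; resolving the remaining ambiguity is where the space-time code structure comes in.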
Abstract:
This paper introduces perspex algebra, which is being developed as a common representation of geometrical knowledge. A perspex can currently be interpreted in one of four ways. First, the algebraic perspex is a generalization of matrices; it provides the most general representation for all of the interpretations of a perspex. The algebraic perspex can be used to describe arbitrary sets of coordinates. The remaining three interpretations of the perspex are all related to square matrices and operate in a Euclidean model of projective space-time, called perspex space. Perspex space differs from the usual Euclidean model of projective space in that it contains the point at nullity. It is argued that the point at nullity is necessary for a consistent account of perspective in top-down vision. Second, the geometric perspex is a simplex in perspex space. It can be used as a primitive building block for shapes, or as a way of recording landmarks on shapes. Third, the transformational perspex describes linear transformations in perspex space that provide the affine and perspective transformations in space-time. It can be used to match a prototype shape to an image, even in so-called 'accidental' views where the depth of an object disappears from view, or an object stays in the same place across time. Fourth, the parametric perspex describes the geometric and transformational perspexes in terms of parameters that are related to everyday English descriptions. The parametric perspex can be used to obtain both continuous and categorical perception of objects. The paper ends with a discussion of issues related to using a perspex to describe logic.
Abstract:
Many numerical models for weather prediction and climate studies are run at resolutions that are too coarse to resolve convection explicitly, but too fine to justify the local equilibrium assumed by conventional convective parameterizations. The Plant-Craig (PC) stochastic convective parameterization scheme, developed in this paper, solves this problem by removing the assumption that a given grid-scale situation must always produce the same sub-grid-scale convective response. Instead, for each timestep and gridpoint, one of the many possible convective responses consistent with the large-scale situation is randomly selected. The scheme requires as input the large-scale state as opposed to the instantaneous grid-scale state, but must nonetheless be able to account for genuine variations in the large-scale situation. Here we investigate the behaviour of the PC scheme in three-dimensional simulations of radiative-convective equilibrium, demonstrating in particular that the necessary space-time averaging required to produce a good representation of the input large-scale state is not in conflict with the requirement to capture large-scale variations. The resulting equilibrium profiles agree well with those obtained from established deterministic schemes, and with corresponding cloud-resolving model simulations. Unlike the conventional schemes, the statistics for mass flux and rainfall variability from the PC scheme also agree well with relevant theory and vary appropriately with spatial scale. The scheme is further shown to adapt automatically to changes in grid length and in forcing strength.
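The random selection described above can be sketched under one common reading of PC-type schemes, in which the number of convective plumes in a grid box is Poisson-distributed and individual plume mass fluxes are exponential, so that the ensemble mean matches the large-scale flux while individual draws fluctuate. The distributional choices and numbers below are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def sample_convective_response(large_scale_flux, mean_plume_flux, rng):
    """Draw one stochastic realization of the grid-box convective mass flux.

    Assumed model: the plume count is Poisson with mean <M>/<m>, and each
    plume's mass flux is exponential with mean <m>; the expected total
    therefore equals the large-scale flux <M>, but any single draw differs.
    """
    n_plumes = rng.poisson(large_scale_flux / mean_plume_flux)
    plume_fluxes = rng.exponential(mean_plume_flux, size=n_plumes)
    return plume_fluxes.sum()

rng = np.random.default_rng(42)
draws = [sample_convective_response(0.1, 2e-3, rng) for _ in range(20000)]
mean_flux = float(np.mean(draws))   # converges to the large-scale flux 0.1
```

The grid-length adaptivity noted in the abstract follows naturally here: a smaller box implies a smaller expected plume count and hence relatively larger fluctuations.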
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong colinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcing rather than a few highly replicated ensembles, as is more common in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. It was also found that an interaction between these two anthropogenic effects in the atmosphere-only GCM exists. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model.
For the global mean, this shows that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
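The factorial design described above can be analysed with an ordinary least-squares linear model whose columns are forcing on/off indicators plus pairwise interaction terms. A toy sketch with two invented forcings and invented response values (not the study's forcings or data):

```python
import numpy as np

# Full 2x2 factorial design for two hypothetical forcings A and B:
# columns are intercept, A on/off, B on/off, and the A x B interaction.
runs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
X = np.column_stack([np.ones(4), runs[:, 0], runs[:, 1], runs[:, 0] * runs[:, 1]])

# Synthetic ensemble-mean temperature responses (invented numbers) in which
# A adds 0.5 K, B adds 0.3 K, and the A x B interaction removes 0.1 K.
y = X @ np.array([13.0, 0.5, 0.3, -0.1])

# Least-squares estimation recovers the individual and interaction effects.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Running every forcing combination, as the design prescribes, is what makes the interaction column estimable; with only single-forcing ensembles the A × B term would be confounded.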
Abstract:
In this paper we consider boundary integral methods applied to boundary value problems for the positive definite Helmholtz-type problem −ΔU + α²U = 0 in a bounded or unbounded domain, with the parameter α real and possibly large. Applications arise in the implementation of space-time boundary integral methods for the heat equation, where α is proportional to 1/√Δt, and Δt is the time step. The corresponding layer potentials arising from this problem depend nonlinearly on the parameter α and have kernels which become highly peaked as α → ∞, causing standard discretization schemes to fail. We propose a new collocation method with a robust convergence rate as α → ∞. Numerical experiments on a model problem verify the theoretical results.
Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary part of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. 
These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansätze to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
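The central spectral relation stated in the abstract can be written compactly; this is a paraphrase in generic notation (σ² the noise variance, χ the linear susceptibility for observable A and forcing pattern X), not a verbatim equation from the paper:

```latex
% Power spectrum of observable A under white-noise forcing with spatial
% pattern X and variance \sigma^2, versus the unperturbed spectrum:
S_A^{\sigma}(\omega) \;=\; S_A^{0}(\omega)
  \;+\; \sigma^{2}\,\bigl|\chi_{A,X}(\omega)\bigr|^{2},
% which implies S_A^{\sigma}(\omega) \ge S_A^{0}(\omega) at every
% frequency, the positive-definiteness result quoted above.
```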
Abstract:
We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that by using a general functional decomposition for space-time dependent forcings, we can define elementary susceptibilities that allow us to construct the response of the system to general perturbations. Starting from the definition of the SRB measure, we then study the consequences of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows us to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analyzing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick.
Abstract:
The separate effects of ozone depleting substances (ODSs) and greenhouse gases (GHGs) on forcing circulation changes in the Southern Hemisphere extratropical troposphere are investigated using a version of the Canadian Middle Atmosphere Model (CMAM) that is coupled to an ocean. Circulation-related diagnostics include zonal wind, tropopause pressure, Hadley cell width, jet location, annular mode index, precipitation, wave drag, and eddy fluxes of momentum and heat. As expected, the tropospheric response to the ODS forcing occurs primarily in austral summer, with past (1960-99) and future (2000-99) trends of opposite sign, while the GHG forcing produces more seasonally uniform trends with the same sign in the past and future. In summer the ODS forcing dominates past trends in all diagnostics, while the two forcings contribute nearly equally but oppositely to future trends. The ODS forcing produces a past surface temperature response consisting of cooling over eastern Antarctica, and is the dominant driver of past summertime surface temperature changes when the model is constrained by observed sea surface temperatures. For all diagnostics, the response to the ODS and GHG forcings is additive: that is, the linear trend computed from the simulations using the combined forcings equals (within statistical uncertainty) the sum of the linear trends from the simulations using the two separate forcings. Space-time spectra of eddy fluxes and the spatial distribution of transient wave drag are examined to assess the viability of several recently proposed mechanisms for the observed poleward shift in the tropospheric jet.
The Asian summer monsoon: an intercomparison of CMIP5 vs. CMIP3 simulations of the late 20th century
Abstract:
The boreal summer Asian monsoon has been evaluated in 25 Coupled Model Intercomparison Project-5 (CMIP5) and 22 CMIP3 GCM simulations of the late 20th century. Diagnostics and skill metrics have been calculated to assess the time-mean, climatological annual cycle, interannual variability, and intraseasonal variability. Progress has been made in modeling these aspects of the monsoon, though there is no single model that best represents all of them. The CMIP5 multi-model mean (MMM) is more skillful than the CMIP3 MMM for all diagnostics, as measured by pattern correlations with observations. Additionally, for rainfall/convection the MMM outperforms the individual models for the time mean, the interannual variability of the East Asian monsoon, and intraseasonal variability. The pattern correlation of the time (pentad) of monsoon peak and withdrawal is better simulated than that of monsoon onset. The onset of the monsoon over India is typically too late in the models. The extension of the monsoon over eastern China, Korea, and Japan is underestimated, while it is overestimated over the subtropical western/central Pacific Ocean. The anti-correlation between anomalies of all-India rainfall and Niño-3.4 sea surface temperature is overly strong in CMIP3 and typically too weak in CMIP5. For both the ENSO-monsoon teleconnection and the East Asian zonal wind-rainfall teleconnection, the MMM interannual rainfall anomalies are weak compared to observations. Though simulation of intraseasonal variability remains problematic, several models show improved skill at representing the northward propagation of convection and the development of the tilted band of convection that extends from India to the equatorial west Pacific. The MMM also represents the space-time evolution of intraseasonal outgoing longwave radiation anomalies well.
Caution is necessary when using GPCP and CMAP rainfall to validate (1) the time-mean rainfall, as there are systematic differences over ocean and land between these two data sets, and (2) the timing of monsoon withdrawal over India, where the smooth southward progression seen in India Meteorological Department data is better realized in CMAP data compared to GPCP data.
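The pattern-correlation skill metric used throughout such intercomparisons is simply the centered (optionally area-weighted) correlation between a model field and an observed field on a common grid. A minimal sketch, with invented fields and an assumed weighting convention:

```python
import numpy as np

def pattern_correlation(model, obs, weights=None):
    """Centered spatial pattern correlation between two 2-D fields.

    weights: optional array of area weights (e.g. cos(latitude)) with the
    same shape as the fields; uniform weighting if omitted.
    """
    w = np.ones_like(model) if weights is None else weights
    m = model - np.average(model, weights=w)
    o = obs - np.average(obs, weights=w)
    cov = np.average(m * o, weights=w)
    return cov / np.sqrt(np.average(m * m, weights=w) * np.average(o * o, weights=w))

# Sanity check: a field correlates perfectly with a scaled, shifted copy.
rng = np.random.default_rng(1)
field = rng.standard_normal((10, 20))
r = pattern_correlation(2.0 * field + 5.0, field)
```

Because the metric is centered and normalized, it rewards agreement in spatial structure while ignoring overall bias and amplitude, which is why it is paired with other diagnostics in the assessment above.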
Abstract:
Drought characterisation is an intrinsically spatio-temporal problem. A limitation of previous approaches to characterisation is that they discard much of the spatio-temporal information by reducing events to a lower-order subspace. To address this, an explicit 3-dimensional (longitude, latitude, time) structure-based method is described in which drought events are defined by a spatially and temporally coherent set of points displaying standardised precipitation below a given threshold. Geometric methods can then be used to measure similarity between individual drought structures. Groupings of these similarities provide an alternative to traditional methods for extracting recurrent space-time signals from geophysical data. The explicit consideration of structure encourages the construction of summary statistics which relate to the event geometry. Example measures considered are the event volume, centroid, and aspect ratio. The utility of a 3-dimensional approach is demonstrated by application to the analysis of European droughts (15°W to 35°E, and 35°N to 70°N) for the period 1901–2006. Large-scale structure is found to be abundant, with 75 events identified lasting for more than 3 months and spanning at least 0.5 × 10⁶ km². Near-complete dissimilarity is seen between the individual drought structures, and little or no regularity is found in the time evolution of even the most spatially similar drought events. The spatial distribution of the event centroids and the time evolution of the geographic cross-sectional areas strongly suggest that large area, sustained droughts result from the combination of multiple small area (~10⁶ km²) short duration (~3 months) events. The small events are not found to occur independently in space. This leads to the hypothesis that local water feedbacks play an important role in the aggregation process.
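A structure-based event definition of this kind amounts to connected-component labelling of the 3-D boolean array "standardised precipitation below threshold", followed by per-event geometry. A self-contained sketch using 6-connectivity and a toy SPI field (the threshold, grid, and connectivity choice are assumptions for illustration):

```python
import numpy as np
from collections import deque

def label_events(mask):
    """Label 6-connected components of a 3-D boolean array
    (longitude, latitude, time): contiguous cells below the threshold
    form one drought event."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:               # breadth-first flood fill
            i, j, k = queue.popleft()
            for di, dj, dk in offsets:
                n = (i + di, j + dj, k + dk)
                if all(0 <= n[d] < mask.shape[d] for d in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels, current

# Toy SPI field: two separate dry blobs below a -1 threshold.
spi = np.zeros((6, 6, 6))
spi[0:2, 0:2, 0:2] = -1.5          # event 1
spi[4:6, 4:6, 3:6] = -2.0          # event 2
labels, n_events = label_events(spi < -1.0)

# Per-event summary statistics: volume (cell count) and centroid.
volumes = [int((labels == e).sum()) for e in range(1, n_events + 1)]
centroids = [tuple(c.mean() for c in np.nonzero(labels == e))
             for e in range(1, n_events + 1)]
```

The event volume and centroid computed here correspond directly to the summary measures named in the abstract; aspect ratios follow from per-axis extents of each labelled component.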
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. This damages some of the key properties of space-time codes and can lead to substantial performance degradation. In this paper, we study the design of linear dispersion codes (LDCs) for such asynchronous cooperative communication networks. First, the concept of conventional LDCs is extended to a delay-tolerant version and new design criteria are discussed. Then we propose a new design method that yields delay-tolerant LDCs reaching the optimal Jensen's upper bound on ergodic capacity as well as minimum average pairwise error probability. The proposed design employs a stochastic gradient algorithm to approach a local optimum. Moreover, it is improved by using simulated annealing-type optimization to increase the likelihood of reaching the global optimum. The proposed method allows for a flexible number of nodes, receive antennas and modulated symbols, and a flexible codeword length. Simulation results confirm the performance of the newly proposed delay-tolerant LDCs.
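The simulated-annealing refinement mentioned above can be illustrated generically: perturb the current solution, always accept improvements, and accept worse moves with a temperature-dependent probability so the search can escape local optima. The objective here is a stand-in quadratic, not the paper's PEP/capacity criterion, and all parameters are invented:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=4000, rng=None):
    """Minimize `objective` over a real vector by simulated annealing.

    Worse moves are accepted with probability exp(-delta / T); as the
    temperature T cools, the search becomes effectively greedy.
    """
    rng = rng or random.Random(0)
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling
    return best_x, best_f

# Stand-in objective: squared distance to a known optimum at (1, -2).
obj = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
x_best, f_best = simulated_annealing(obj, [5.0, 5.0])
```

In the paper's setting, the candidate move would instead perturb the LDC dispersion matrices and the objective would combine the ergodic-capacity bound and average PEP.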