959 results for Orthogonal projections
Abstract:
We study the empirical measure L(A(n)) of the eigenvalues of nonnormal square matrices of the form A(n) = U(n)T(n)V(n), with U(n), V(n) independent Haar distributed on the unitary group and T(n) diagonal. We show that when the empirical measure of the eigenvalues of T(n) converges, and T(n) satisfies some technical conditions, L(A(n)) converges towards a rotationally invariant measure mu on the complex plane whose support is a single ring. In particular, we provide a complete proof of the Feinberg-Zee single ring theorem [6]. We also consider the case where U(n), V(n) are independently Haar distributed on the orthogonal group.
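As a point of reference for the objects named in this abstract, here is a minimal LaTeX sketch of the empirical eigenvalue measure and of the annular ("single ring") support of the limit; the radii a and b below are illustrative labels, not values taken from the paper.

```latex
% Empirical eigenvalue measure of A_n = U_n T_n V_n and the annular
% ("single ring") support of the limiting measure mu.
\[
  L_{A_n} \;=\; \frac{1}{n}\sum_{i=1}^{n} \delta_{\lambda_i(A_n)},
  \qquad
  \operatorname{supp}(\mu) \;=\; \{\, z \in \mathbb{C} : a \le |z| \le b \,\},
\]
% where \lambda_1(A_n), ..., \lambda_n(A_n) are the eigenvalues of A_n and
% 0 <= a <= b are inner and outer radii determined by the limiting
% distribution of the diagonal entries of T_n.
```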
Abstract:
Distributed space-time block codes (DSTBCs) from complex orthogonal designs (CODs) (both square and nonsquare), coordinate interleaved orthogonal designs (CIODs), and Clifford unitary weight designs (CUWDs) are known to lose their single-symbol ML decodable (SSD) property when used in two-hop wireless relay networks using the amplify-and-forward protocol. For such networks, in this paper, three new classes of high-rate, training-symbol-embedded (TSE) SSD DSTBCs are constructed: TSE-CODs, TSE-CIODs, and TSE-CUWDs. The proposed codes include the training symbols inside the structure of the code, which is shown to be the key to obtaining the SSD property along with the channel estimation capability. TSE-CODs are shown to offer full diversity for arbitrary complex constellations, and the constellations for which TSE-CIODs and TSE-CUWDs offer full diversity are characterized. It is shown that DSTBCs from nonsquare TSE-CODs provide better rates (in symbols per channel use) than the known SSD DSTBCs for relay networks. Importantly from a practical point of view, the proposed DSTBCs do not contain any zeros in their codewords; as a result, the antennas of the relay nodes do not undergo a sequence of on/off switching transitions within every codeword and thus avoid the antenna switching problem.
Abstract:
To a reasonable approximation, a secondary structure of RNA is determined by Watson-Crick pairing without pseudo-knots in such a way as to minimise the number of unpaired bases. We show that this minimal number is determined by the maximal conjugacy-invariant pseudo-norm on the free group on two generators subject to bounds on the generators. This allows us to construct lower bounds on the minimal number of unpaired bases by constructing conjugacy-invariant pseudo-norms. We show that one such construction, based on isometric actions on metric spaces, gives a sharp lower bound. A major goal here is to formulate a purely mathematical question, based on considering orthogonal representations, which we believe is of some interest independent of its biological roots.
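For readers unfamiliar with the terminology, this is a sketch of the standard axioms of a conjugacy-invariant pseudo-norm, written here for the free group on two generators; the bound constants c_1, c_2 are illustrative placeholders for the "bounds on the generators" mentioned above, not the paper's notation.

```latex
% A conjugacy-invariant pseudo-norm \ell on a group G (here the free
% group F_2 on generators a, b) is a function \ell : G -> [0, \infty) with
\[
  \ell(e) = 0, \qquad
  \ell(g^{-1}) = \ell(g), \qquad
  \ell(gh) \le \ell(g) + \ell(h), \qquad
  \ell(h g h^{-1}) = \ell(g)
  \quad \text{for all } g, h \in G.
\]
% "Subject to bounds on the generators" then means additionally imposing
% constraints such as \ell(a) \le c_1 and \ell(b) \le c_2 for prescribed
% constants c_1, c_2.
```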
Abstract:
A Space-Time Block Code (STBC) in K symbols (variables) is called a g-group decodable STBC if its maximum-likelihood decoding metric can be written as a sum of g terms such that each term is a function of a subset of the K variables and each variable appears in only one term. In this paper we provide a general structure of the weight matrices of multi-group decodable codes using Clifford algebras. Without assuming the number of variables in each group to be the same, a method of explicitly constructing the weight matrices of full-diversity, delay-optimal g-group decodable codes is presented for an arbitrary number of antennas. For the special case of Nt = 2^a we construct two subclasses of codes: (i) a class of 2a-group decodable codes with rate a/2^(a-1), which is, equivalently, a class of Single-Symbol Decodable (SSD) codes, and (ii) a class of (2a-2)-group decodable codes with rate (a-1)/2^(a-2), i.e., a class of Double-Symbol Decodable (DSD) codes. Simulation results show that the DSD codes of this paper perform better than previously known Quasi-Orthogonal Designs.
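To make the decodability notion concrete, here is a sketch of what g-group decodability amounts to in terms of weight matrices, using the common linear-dispersion notation X = sum_k x_k A_k (the notation, not a quotation from the paper); the condition below is the usual sufficient condition in the multigroup-decodable STBC literature.

```latex
% Linear-dispersion form of the codeword and the usual sufficient
% condition on the weight matrices for the ML metric to split into g
% independent terms, one per group of symbols:
\[
  X = \sum_{k=1}^{K} x_k A_k, \qquad
  A_k^{H} A_l + A_l^{H} A_k = 0
  \;\text{ whenever } x_k, x_l \text{ belong to different groups},
\]
\[
  \big\lVert Y - XH \big\rVert_F^{2}
  \;=\; \sum_{i=1}^{g} f_i\big(\{x_k : k \in \text{group } i\}\big)
        \;+\; \text{(terms independent of the symbols)} .
\]
```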
Abstract:
An overview of space-time code construction based on cyclic division algebras (CDA) is presented. Applications of such space-time codes to the construction of codes optimal under the diversity-multiplexing gain (D-MG) tradeoff, to the construction of the so-called perfect space-time codes, to the construction of optimal space-time codes for the ARQ channel, as well as to the construction of codes optimal for the cooperative relay network channel, are discussed. We also present a construction of optimal codes based on CDA for a class of orthogonal amplify-and-forward (OAF) protocols for the cooperative relay network.
Abstract:
The cyclic difference sets constructed by Singer are also examples of perfect distinct difference sets (DDS). The Bose construction of distinct difference sets leads to a relative difference set. In this paper we introduce the concept of a partial relative DDS and prove that an optical orthogonal code (OOC) construction due to Moreno et al. is a partial relative DDS. We generalize the concept of ideal matrices previously introduced by Kumar and relate it to the concepts of this paper. Another variation of ideal matrices is introduced in this paper: Welch ideal matrices of dimension n by (n - 1). We prove that Welch ideal matrices exist only for n prime. Finally, we recast an old conjecture of Golomb on the Welch construction of Costas arrays using the concepts of this paper. This connection suggests that our construction of partial relative difference sets is, in a sense, unique.
Abstract:
In this work, we construct a unified family of cooperative diversity coding schemes for implementing the orthogonal amplify-and-forward and the orthogonal selection-decode-and-forward strategies in cooperative wireless networks. We show that, as the number of users increases, these schemes meet the corresponding optimal high-SNR outage region, and do so with minimal order of signaling complexity. This is an improvement over all known outage-optimal schemes, which impose exponential increases in signaling complexity for every new network user. Our schemes, which are based on commutative algebras of normal matrices, satisfy the outage-related information-theoretic criteria and the duplex-related coding criteria, and maintain reduced signaling, encoding, and decoding complexities.
Abstract:
With the introduction of 2D flat-panel X-ray detectors, 3D image reconstruction using helical cone-beam tomography is fast replacing conventional 2D reconstruction techniques. In 3D image reconstruction, the source orbit or scanning geometry should satisfy the data sufficiency or completeness condition for exact reconstruction. The helical scan geometry satisfies this condition and hence can give exact reconstruction. The theoretically exact helical cone-beam reconstruction algorithm proposed by Katsevich is a breakthrough and has attracted interest in 3D reconstruction using helical cone-beam computed tomography. In many practical situations, the available projection data is incomplete. One such case is where the detector plane does not completely cover the full lateral extent of the object being imaged, resulting in truncated projections. This results in artifacts that mask small features near the periphery of the ROI when reconstruction is performed using the convolution back projection (CBP) method under the assumption that the projection data is complete. A number of techniques exist which deal with completion of the missing data followed by CBP reconstruction. In 2D, linear prediction (LP) extrapolation has been shown to be efficient for data completion, involving minimal assumptions on the nature of the data and producing smooth extensions of the missing projection data. In this paper, we propose to extend the LP approach to extrapolating helical cone-beam truncated data. In the truncated-data situation, the projection on the multi-row flat-panel detector has missing columns towards either end in the lateral direction. The available data from each detector row is modeled using a linear predictor, the missing data is extrapolated, and the completed projection data is backprojected using the Katsevich algorithm. Simulation results show the efficacy of the proposed method.
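The row-wise linear-prediction step described above can be illustrated with a minimal sketch, assuming an autoregressive model fitted by least squares; the function name, model order, and number of missing samples are hypothetical, and the Katsevich backprojection itself is not shown.

```python
import numpy as np

def lp_extrapolate(row, order=10, n_missing=32):
    """Extend a truncated detector row by linear prediction.

    A length-`order` autoregressive model is fitted to the known samples
    by least squares and then run forward recursively to generate
    `n_missing` extrapolated samples beyond the detector edge.
    """
    row = np.asarray(row, dtype=float)
    n = len(row)
    # Least-squares system: each known sample is predicted from the
    # `order` samples preceding it.
    X = np.array([row[i - order:i] for i in range(order, n)])
    y = row[order:n]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Recursively predict the missing samples.
    extended = list(row)
    for _ in range(n_missing):
        extended.append(np.dot(coeffs, extended[-order:]))
    return np.array(extended)

# Example: complete one truncated row before handing the projection to
# the (separately implemented) Katsevich backprojection step.
truncated_row = np.sin(np.linspace(0, 3 * np.pi, 128))
completed_row = lp_extrapolate(truncated_row, order=8, n_missing=16)
```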
Abstract:
Many downscaling techniques have been developed in the past few years for projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs), in order to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN), and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, belonging to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool useful for its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied for downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to the reproduction of various statistics, such as dry and wet spell length distributions, daily rainfall distribution, and intersite correlations, is examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian GCM (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for the projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation.
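The analogue-type KNN resampling step can be sketched as follows; the 1/rank neighbour weighting and the synthetic data are assumptions made for illustration, not the article's exact kernel or predictors.

```python
import numpy as np

def knn_downscale(train_X, train_y, query, k=10, rng=None):
    """Analogue-type KNN downscaling step.

    Finds the k training days closest to the query feature vector and
    samples one of their observed precipitation values, weighting nearer
    neighbours more heavily (here with 1/rank weights).
    """
    rng = rng or np.random.default_rng()
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / np.arange(1, k + 1)      # rank-based kernel
    weights /= weights.sum()
    chosen = rng.choice(nearest, p=weights)
    return train_y[chosen]

# Example with synthetic predictors (e.g. GCM fields reduced to a
# feature vector) and observed station precipitation.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))      # training-day feature vectors
y = rng.gamma(2.0, 3.0, size=500)  # observed daily precipitation (mm)
future_day = rng.normal(size=5)
print(knn_downscale(X, y, future_day, k=10, rng=rng))
```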
Abstract:
In this work, using 3-D device simulation, we perform an extensive gate-to-source/drain underlap optimization for the recently proposed hybrid transistor, the HFinFET, to show that the underlap lengths can be suitably tuned to improve the ON-OFF ratio as well as the subthreshold characteristics in an ultrashort-channel n-type device without significant ON-performance degradation. We also show that the underlap knob can be tuned to mitigate the device quality degradation in the presence of interface traps. The obtained results are shown to be promising when compared against the ITRS 2009 performance projections, as well as published state-of-the-art planar and nonplanar silicon MOSFET data of comparable gate lengths, using standard benchmarking techniques.
Abstract:
The Effective Exponential SNR Mapping (EESM) is an indispensable tool for analyzing and simulating next generation orthogonal frequency division multiplexing (OFDM) based wireless systems. It converts the different gains of multiple subchannels, over which a codeword is transmitted, into a single effective flat-fading gain with the same codeword error rate. It facilitates link adaptation by helping each user to compute an accurate channel quality indicator (CQI), which is fed back to the base station to enable downlink rate adaptation and scheduling. However, the highly non-linear nature of EESM makes a performance analysis of adaptation and scheduling difficult; even the probability distribution of EESM is not known in closed form. This paper shows that EESM can be accurately modeled as a lognormal random variable when the subchannel gains are Rayleigh distributed. The model is also valid when the subchannel gains are correlated in frequency or space. With some simplifying assumptions, the paper then develops a novel analysis of the performance of LTE's two CQI feedback schemes that use EESM to generate CQI. The comprehensive model and analysis quantify the joint effect of several critical components such as scheduler, multiple antenna mode, CQI feedback scheme, and EESM-based feedback averaging on the overall system throughput.
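The EESM compression itself is a one-line formula; below is a minimal sketch, assuming exponentially distributed (Rayleigh-fading) subchannel SNRs and an illustrative calibration constant beta.

```python
import numpy as np

def eesm(subchannel_snrs, beta):
    """Effective Exponential SNR Mapping.

    Collapses the per-subcarrier SNRs of a codeword into one effective
    flat-fading SNR: gamma_eff = -beta * ln( mean( exp(-gamma_i / beta) ) ).
    beta is a link-level calibration constant that depends on the
    modulation and coding scheme.
    """
    g = np.asarray(subchannel_snrs, dtype=float)
    return -beta * np.log(np.mean(np.exp(-g / beta)))

# Example: Rayleigh-faded subchannels have exponentially distributed
# SNRs; 48 subcarriers with mean SNR 10 are mapped to one effective SNR.
rng = np.random.default_rng(1)
snrs = rng.exponential(scale=10.0, size=48)
print(eesm(snrs, beta=1.5))
```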
Abstract:
Frequency-domain scheduling and rate adaptation enable next-generation orthogonal frequency-division multiple access (OFDMA) cellular systems such as Long-Term Evolution (LTE) to achieve significantly higher spectral efficiencies. LTE uses a pragmatic combination of several techniques to reduce the channel-state feedback that is required by a frequency-domain scheduler. In the subband-level feedback and user-selected subband feedback schemes specified in LTE, the user reduces feedback by reporting only the channel quality averaged over groups of resource blocks called subbands. This approach leads to an occasional incorrect determination of rate by the scheduler for some resource blocks. In this paper, we develop closed-form expressions for the throughput achieved by the feedback schemes of LTE. The analysis quantifies the joint effects of three critical components (the scheduler, the multiple-antenna mode, and the feedback scheme) on the overall system throughput and brings out their dependence on system parameters such as the number of resource blocks per subband and the rate adaptation thresholds. The effect of the coarse subband-level frequency granularity of feedback is captured. The analysis provides an independent theoretical reference and a quick system parameter optimization tool to an LTE system designer and theoretically helps in understanding the behavior of OFDMA feedback reduction techniques when operated under practical system constraints.
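A rough sketch of the subband-level averaging that underlies the coarse feedback granularity discussed above; the dB-domain averaging, the CQI thresholds, and the subband size are illustrative assumptions rather than the LTE-specified values.

```python
import numpy as np

def subband_cqi(rb_effective_snrs, rbs_per_subband, thresholds):
    """Subband-level CQI from per-resource-block effective SNRs.

    Per-RB effective SNRs (e.g. from an EESM-style mapping) are averaged
    over each subband in dB and mapped to a CQI index by comparing
    against rate-adaptation thresholds, mimicking the coarse
    subband-granular feedback described in the abstract.
    """
    snr_db = 10 * np.log10(np.asarray(rb_effective_snrs, dtype=float))
    n_sb = len(snr_db) // rbs_per_subband
    cqi = []
    for s in range(n_sb):
        avg = snr_db[s * rbs_per_subband:(s + 1) * rbs_per_subband].mean()
        cqi.append(int(np.searchsorted(thresholds, avg)))
    return cqi

rng = np.random.default_rng(2)
rb_snrs = rng.exponential(scale=8.0, size=24)          # 24 resource blocks
print(subband_cqi(rb_snrs, rbs_per_subband=4,
                  thresholds=np.arange(-6, 20, 2.0)))  # hypothetical thresholds
```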
Abstract:
A technique is proposed for classifying respiratory volume waveforms (RVWs) into normal and abnormal categories of respiratory pathways. The proposed method transforms the temporal sequence into the frequency domain using an orthogonal transform, namely the discrete cosine transform (DCT), and the transformed signal is pole-zero modelled. A Bayes classifier using the model pole angles as the feature vector performed satisfactorily when a limited number of RVWs recorded under the deep and rapid (DR) manoeuvre were classified.
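A minimal sketch of the feature-extraction pipeline (DCT followed by an all-pole fit, with pole angles as features); the least-squares AR fit stands in for the pole-zero modelling of the abstract, and the model order and synthetic waveform are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def pole_angle_features(rvw, model_order=6):
    """Feature extraction for respiratory volume waveform classification.

    The waveform is DCT-transformed, an all-pole (AR) model is fitted to
    the transformed sequence by least squares, and the angles of the
    model poles are returned as the feature vector for a Bayes classifier.
    """
    x = dct(np.asarray(rvw, dtype=float), norm='ortho')
    # Least-squares AR fit: x[n] ~ sum_k a_k * x[n-k]
    X = np.array([x[i - model_order:i] for i in range(model_order, len(x))])
    a, *_ = np.linalg.lstsq(X, x[model_order:], rcond=None)
    # Poles are the roots of 1 - c_1 z^-1 - ... - c_p z^-p.
    poles = np.roots(np.concatenate(([1.0], -a[::-1])))
    return np.sort(np.abs(np.angle(poles)))

# Example on a synthetic breathing-like waveform.
t = np.linspace(0, 30, 600)
rvw = np.sin(2 * np.pi * 0.3 * t) + \
      0.05 * np.random.default_rng(3).normal(size=t.size)
print(pole_angle_features(rvw))
```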
Abstract:
This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight from truncated projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Due to the incomplete projection data, the reconstructed image contains truncation artifacts which extend into the region of interest (ROI), making the reconstructed image unsuitable for further use. Data completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion and then use the fan-beam FBP algorithm with no backprojection weight for 2-D image reconstruction. We evaluate the quality of the image reconstructed using the fan-beam FBP algorithm with no backprojection weight after WLP completion.
Abstract:
It is well known that the space-time block codes (STBCs) from complex orthogonal designs (CODs) are single-symbol decodable/symbol-by-symbol decodable (SSD). The weight matrices of the square CODs are all unitary and obtainable from the unitary matrix representations of Clifford Algebras when the number of transmit antennas n is a power of 2. The rate of the square CODs for n = 2^a has been shown to be (a+1)/2^a complex symbols per channel use. However, SSD codes having unitary-weight matrices need not be CODs, an example being the minimum-decoding-complexity STBCs from quasi-orthogonal designs. In this paper, an achievable upper bound on the rate of any unitary-weight SSD code is derived to be a/2^(a-1) complex symbols per channel use for 2^a antennas, and this upper bound is larger than that of the CODs. By way of code construction, the interrelationship between the weight matrices of unitary-weight SSD codes is studied. Also, the coding gain of all unitary-weight SSD codes is proved to be the same for QAM constellations and conditions that are necessary for unitary-weight SSD codes to achieve full transmit diversity and optimum coding gain are presented.
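A worked instance of the rate comparison stated above, for a = 3 (eight transmit antennas):

```latex
% With a = 3, i.e. n = 2^3 = 8 transmit antennas:
\[
  \text{square COD rate} \;=\; \frac{a+1}{2^{a}} \;=\; \frac{4}{8} \;=\; \frac{1}{2},
  \qquad
  \text{unitary-weight SSD upper bound} \;=\; \frac{a}{2^{a-1}} \;=\; \frac{3}{4}
\]
% complex symbols per channel use, so the bound indeed exceeds the COD rate.
```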