978 results for Optimal Linear Codes
Abstract:
PURPOSE: Currently, many pre-conditions are regarded as relative or absolute contraindications for lumbar total disc replacement (TDR), and radiculopathy is one of them. In Switzerland, the decision when to operate is left to the surgeon's discretion, provided a list of pre-defined indications is adhered to; contraindications, however, are less clearly specified. We hypothesized that the extent of pre-operative radiculopathy results in different benefits for patients treated with mono-segmental lumbar TDR. We used patient-perceived leg pain and its correlation with physician-recorded radiculopathy to create the patient groups to be compared. METHODS: The present study is based on the dataset of SWISSspine, a government-mandated health technology assessment registry. Between March 2005 and April 2009, 577 patients underwent either mono- or bi-segmental lumbar TDR, documented in a prospective observational multicenter mode. A total of 416 cases with a mono-segmental procedure were included in the study. The data collection consisted of pre-operative and follow-up data (physician based) and clinical outcomes (NASS form, EQ-5D). A receiver operating characteristic (ROC) analysis was conducted with patients' self-reported leg pain and the surgeon-based diagnosis "radiculopathy", as marked on the case report forms. As a result, patients were divided into two groups according to the severity of leg pain. The two groups were compared with regard to pre-operative patient characteristics, pre- and post-operative pain on the Visual Analogue Scale (VAS), and quality of life, using general linear modeling. RESULTS: The optimal ROC model revealed a leg pain threshold of 40 on the VAS: VAS < 40 indicating the absence and VAS ≥ 40 the presence of "radiculopathy". Demographics in the resulting two groups were well comparable. Applying this threshold, the mean pre-operative leg pain level was 16.5 points in group 1 and 68.1 points in group 2 (p < 0.001). Back pain levels differed less, with 63.6 points in group 1 and 72.6 in group 2 (p < 0.001). Pre-operative quality of life showed considerable differences, with an EQ-5D score of 0.44 in group 1 and 0.29 in group 2 (p < 0.001; possible score range -0.6 to 1). At a mean follow-up time of 8 months, group 1 showed a mean leg pain improvement of 3.6 points and group 2 of 41.1 points (p < 0.001). Back pain relief was 35.6 and 39.1 points, respectively (p = 0.27). EQ-5D score improvement was 0.27 in group 1 and 0.41 in group 2 (p = 0.11). CONCLUSIONS: Patients labeled as having radiculopathy (group 2) mostly have pre-operative leg pain levels ≥ 40. Applying this threshold, patients with pre-operative leg pain also have more severe back pain and a considerably lower quality of life. Their net benefit from lumbar TDR is higher, and they reach post-operative back and leg pain levels and quality of life similar to those of patients without pre-operative leg pain. Although randomized controlled trials are required for confirmation, these findings put leg pain and radiculopathy into perspective as absolute contraindications for TDR.
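As a hedged illustration of the dichotomization step, the sketch below picks a VAS cutoff from a ROC curve by maximizing Youden's J on synthetic data; the abstract does not specify the registry's actual criterion, and all variable names and effect sizes here are invented.

```python
# Sketch: ROC-based choice of a VAS leg-pain cutoff separating patients
# with and without the surgeon-recorded "radiculopathy" diagnosis.
# Synthetic data for illustration only; not derived from SWISSspine.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 416
diagnosis = rng.integers(0, 2, size=n)          # 1 = "radiculopathy" ticked
# Self-reported leg pain on a 0-100 VAS, higher when the diagnosis is present
vas = np.clip(rng.normal(20 + 45 * diagnosis, 15), 0, 100)

fpr, tpr, thr = roc_curve(diagnosis, vas)
cutoff = thr[np.argmax(tpr - fpr)]              # maximize Youden's J = TPR - FPR
print(f"VAS cutoff: {cutoff:.0f}")              # roughly 40 with these synthetic parameters
```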
Abstract:
Optimal Usage of De-Icing Chemicals when Scraping Ice, Final Report of Project HR 391
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from this pool as sequences of nonoverlapping, frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest-scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. For additive gene-scoring functions, currently available algorithms to determine such a highest-scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. The quadratic-time algorithms rely on the fact that, while scanning the set of predicted exons, the highest-scoring gene ending in a given exon can be obtained by appending the exon to the highest-scoring among the genes ending at each compatible preceding exon. The algorithm presented here relies on the simple fact that this highest-scoring gene can be stored and updated, which requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model: the definition of valid gene structures is given externally in the so-called Gene Model, which simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem; in particular, it allows for multiple-gene, two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
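A minimal sketch of the linear-time sweep described above, under an additive scoring function; frame compatibility, strands, and the full Gene Model are deliberately reduced here to a single non-overlap constraint, so this illustrates the idea rather than the published algorithm. The sweep itself is linear once the two sorted orders are available (the two sorts cost O(n log n) if the exons do not arrive pre-sorted).

```python
# Sketch: assemble the highest-scoring chain of non-overlapping exons in
# one sweep, keeping a running "best gene ending before this point".
from dataclasses import dataclass

@dataclass
class Exon:
    acceptor: int        # start coordinate
    donor: int           # end coordinate
    score: float
    best: float = 0.0    # best gene score ending with this exon

def best_gene_score(exons):
    by_acceptor = sorted(exons, key=lambda e: e.acceptor)
    by_donor = sorted(exons, key=lambda e: e.donor)
    best_prefix = 0.0    # best gene ending strictly before the current acceptor
    j = 0
    for e in by_acceptor:
        # Every exon whose donor precedes this acceptor is now a valid
        # predecessor; fold it into the running maximum exactly once.
        while j < len(by_donor) and by_donor[j].donor < e.acceptor:
            best_prefix = max(best_prefix, by_donor[j].best)
            j += 1
        e.best = best_prefix + e.score
    return max(e.best for e in exons)

exons = [Exon(10, 50, 2.0), Exon(60, 90, 1.5), Exon(40, 80, 3.0)]
print(best_gene_score(exons))   # 3.5: the chain (10,50) + (60,90)
```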
Abstract:
In the context of fading channels it is well established that, with a constrained transmit power, the bit rates achievable by signals that are not peaky vanish as the bandwidth grows without bound. Stepping back from the limit, we characterize the highest bit rate achievable by such non-peaky signals and the approximate bandwidth where that apex occurs. As it turns out, the gap between the highest rate achievable without peakedness and the infinite-bandwidth capacity (with unconstrained peakedness) is small for virtually all settings of interest to wireless communications. Thus, although strictly achieving capacity in wideband fading channels does require signal peakedness, bit rates not far from capacity can be achieved with conventional signaling formats that do not exhibit the serious practical drawbacks associated with peakedness. In addition, we show that the asymptotic decay of bit rate in the absence of peakedness usually takes hold at bandwidths so large that wideband fading models are called into question. Rather, ultrawideband models ought to be used.
Abstract:
We focus on full-rate, fast-decodable space-time block codes (STBCs) for 2×2 and 4×2 multiple-input multiple-output (MIMO) transmission. We first derive conditions and design criteria for reduced-complexity maximum-likelihood (ML) decodable 2×2 STBCs, and we apply them to two families of recently discovered codes. Next, we derive a novel reduced-complexity 4×2 STBC and show that it outperforms all previously known codes with certain constellations.
Abstract:
The 2×2 MIMO profiles included in Mobile WiMAX specifications are Alamouti's space-time code (STC) for transmit diversity and spatial multiplexing (SM). The former has full diversity and the latter has full rate, but neither has both of these desired features. An alternative 2×2 STC, which is both full rate and full diversity, is the Golden code. It is the best known 2×2 STC, but it has a high decoding complexity. Recently, attention has turned to decoder complexity; this issue was included in the STC design criteria, and different STCs were proposed. In this paper, we first present a full-rate, full-diversity 2×2 STC design leading to substantially lower complexity of the optimum detector compared to the Golden code, with only a slight performance loss. We provide the general optimized form of this STC and show that this scheme achieves the diversity-multiplexing frontier for square QAM signal constellations. Then, we present a variant of the proposed STC, which provides a further decrease in the detection complexity at the cost of a 25% rate reduction, and show that this provides an interesting trade-off between the Alamouti scheme and SM.
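As background for the complexity discussion, the sketch below shows the textbook Alamouti scheme: the codeword's orthogonality lets the receiver decouple the two symbols, so ML detection is per-symbol. This is standard material, not the new STC proposed in the paper; for a size-M constellation, per-symbol detection costs on the order of 2M metric evaluations, while naive joint ML over the four symbols of a full-rate 2×2 code such as the Golden code grows like M^4, which is the gap that reduced-complexity designs target.

```python
# Sketch: Alamouti combining with one receive antenna (noise omitted).
# Orthogonality turns 2-symbol joint detection into two independent
# per-symbol decisions.
import numpy as np

rng = np.random.default_rng(1)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
s1, s2 = (1 + 1j) / np.sqrt(2), (-1 + 1j) / np.sqrt(2)   # example QPSK symbols

# Codeword: rows = transmit antennas, columns = time slots
X = np.array([[s1, -np.conj(s2)],
              [s2,  np.conj(s1)]])
y = h @ X                                   # two received samples

# Matched-filter combining; effective channel becomes (|h1|^2 + |h2|^2) * I
z1 = np.conj(h[0]) * y[0] + h[1] * np.conj(y[1])
z2 = np.conj(h[1]) * y[0] - h[0] * np.conj(y[1])
g = np.sum(np.abs(h) ** 2)
print(np.allclose(z1 / g, s1), np.allclose(z2 / g, s2))  # True True
```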
Abstract:
We show how to build full-diversity product codes under both iterative encoding and decoding over non-ergodic channels, in the presence of block erasure and block fading. The concept of a rootcheck, or root subcode, is introduced by generalizing the same principle recently invented for low-density parity-check codes. We also describe some channel-related graphical properties of the new family of product codes, a family referred to as root product codes.
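As a hedged illustration of the rootcheck principle borrowed from the LDPC setting the abstract cites: a parity check whose "root" bit is the only bit on one fading block can recover that bit in a single step from the other block, which is what yields diversity on a two-block channel. The bit values below are arbitrary.

```python
# Sketch: a rootcheck protecting bit b, which lives on fading block 1.
# All other bits of the check live on block 2. If block 1 is erased but
# block 2 is received, parity recovers b in one decoding step.
bits_on_block2 = [1, 0, 1]       # received cleanly on block 2
b = sum(bits_on_block2) % 2      # check parity is zero, so b = XOR of the rest
print(b)                         # 0
```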
Abstract:
Multiple-input multiple-output (MIMO) techniques have become an essential part of broadband wireless communications systems. For example, the recently developed IEEE 802.16e specifications for broadband wireless access include three MIMO profiles employing 2×2 space-time codes (STCs), and two of these MIMO schemes are mandatory on the downlink of Mobile WiMAX systems. One of these has full rate, and the other has full diversity, but neither has both of the desired features. The third profile, namely Matrix C, which is not mandatory, is both a full-rate and a full-diversity code, but it has a high decoder complexity. Recently, attention has turned to the decoder complexity issue and, with this included in the design criteria, several full-rate STCs have been proposed as alternatives to Matrix C. In this paper, we review these different alternatives and compare them to Matrix C in terms of performance and the corresponding receiver complexities.
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detections are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm that processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms shows very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
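A hedged sketch of lattice-reduction-aided linear detection: the textbook sequential LLL below stands in for ASLR (whose all-swap schedule, swapping all eligible basis-vector pairs per iteration to suit the array's parallelism, is not detailed in the abstract), followed by zero-forcing in the reduced basis. A real-valued channel and integer data symbols are assumed for brevity.

```python
# Sketch: LLL reduction of the channel matrix, then zero-forcing detection
# in the reduced basis (lattice-reduction-aided linear detection).
import numpy as np

def lll_reduce(B, delta=0.75):
    """Sequential LLL on the columns of B; returns (reduced basis, unimodular T)."""
    B = B.astype(float).copy()
    n = B.shape[1]
    T = np.eye(n)                       # B_reduced = B_original @ T
    k = 1
    while k < n:
        _, R = np.linalg.qr(B)
        for j in range(k - 1, -1, -1):  # size-reduce column k
            mu = round(R[j, k] / R[j, j])
            if mu:
                B[:, k] -= mu * B[:, j]
                T[:, k] -= mu * T[:, j]
                _, R = np.linalg.qr(B)
        if R[k, k] ** 2 >= (delta - (R[k - 1, k] / R[k - 1, k - 1]) ** 2) * R[k - 1, k - 1] ** 2:
            k += 1                      # Lovász condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]   # swap columns and step back
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B, T

def lr_zf_detect(H, y):
    Hr, T = lll_reduce(H)
    z = np.rint(np.linalg.pinv(Hr) @ y)   # quantize in the reduced domain
    return T @ z                          # map back to the original lattice

rng = np.random.default_rng(2)
H = rng.standard_normal((4, 4))           # 4x4 real-valued MIMO channel
x = rng.integers(-2, 3, size=4).astype(float)
y = H @ x + 0.05 * rng.standard_normal(4)
print(lr_zf_detect(H, y), x)              # detected vs. transmitted
```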
Abstract:
This paper derives approximations allowing the estimation of outage probability for standard irregular LDPC codes and full-diversity Root-LDPC codes used over nonergodic block-fading channels. Two separate approaches are discussed: a numerical approximation, obtained by curve fitting, for both code ensembles, and an analytical approximation for Root-LDPC codes, obtained under the assumption that the slope of the iterative threshold curve of a given code ensemble matches the slope of the outage capacity curve in the high-SNR regime.
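For reference, the nonergodic outage probability being approximated can be estimated directly by Monte Carlo. The sketch below uses Gaussian-input mutual information over F independent Rayleigh blocks; the paper's ensembles are binary codes, so its outage capacity curve would use the corresponding constrained-input mutual information instead.

```python
# Monte Carlo estimate of outage probability for rate R over a
# block-fading channel with F independent unit-mean Rayleigh power gains.
import numpy as np

def outage_probability(snr_db, rate, n_blocks=2, n_trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    gains = rng.exponential(1.0, size=(n_trials, n_blocks))
    mi = np.mean(np.log2(1.0 + snr * gains), axis=1)   # per-codeword mutual info
    return float(np.mean(mi < rate))

for snr_db in (5, 10, 15, 20):
    print(snr_db, "dB:", outage_probability(snr_db, rate=1.0))
```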
Abstract:
This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, recently proposed and shown to be able to attain the full transmission diversity. We study the iterative threshold performance of these codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold of Root-LDPC codes is proportional to (α1 α2)^{-1}, where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled "Advances in GLMs/GAMs modeling: from species distribution to environmental management", held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, and provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of the related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression (an alternative to stepwise selection of predictors) and methods for the identification of interactions through a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance the application of these methods to ecological modeling.
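As a minimal, hedged example of the GLM half of this toolkit in a species-distribution setting: a binomial GLM (logit link) fit to synthetic presence/absence data with `statsmodels`; the predictors, coefficients, and data are invented for illustration. A GAM would replace the linear terms with smoothers.

```python
# Sketch: binomial GLM (logit link) for presence/absence of a species
# against two synthetic environmental predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
temperature = rng.normal(10, 3, n)          # degrees C (synthetic)
elevation = rng.normal(1200, 300, n)        # meters (synthetic)
logit = -8 + 0.6 * temperature + 0.001 * elevation
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([temperature, elevation]))
fit = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(fit.summary())   # coefficients, deviance, AIC: inputs to model evaluation
```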
Abstract:
We study optimal public rationing of an indivisible good and private-sector price responses. Consumers differ in their wealth and costs of provision. Due to a limited budget, some consumers must be rationed. Public rationing determines the characteristics of consumers who seek supply from the private sector, where a firm sets prices based on consumers' cost information and in response to the rationing rule. We consider two information regimes. In the first, the public supplier rations consumers according to their wealth information. In equilibrium, the public supplier must ration both rich and poor consumers: supplying all poor consumers would leave only rich consumers in the private market, and the firm would react by setting a high price. Rationing some poor consumers is optimal and implements a price reduction in the private market. In the second information regime, the public supplier rations consumers according to both their wealth and cost information. In equilibrium, consumers are allocated the good if and only if their costs are below a threshold; wealth information is not used. Rationing based on cost results in higher equilibrium total consumer surplus than rationing based on wealth.