2 results for Procedural Programming

in DigitalCommons@University of Nebraska - Lincoln


Relevance:

20.00%

Publisher:

Abstract:

The September 11th Victim Compensation Fund (the Fund) was created in response to the terrorist attacks of September 11, 2001. Much has been written about the Fund, both pro and con, in both popular media and scholarly literature. Perhaps the most widely used term in referring to the Fund is "unprecedented." The Fund is intriguing for many reasons, particularly for its public policy implications and its impact on the claimants themselves. The federal government had never before provided compensation to victims of terrorism through a special master who had virtually unlimited discretion in determining awards. Consequently, this formal allocation of money by a representative of the federal government to its citizens has provided an opportunity to test theories of procedural and distributive justice in a novel context. This article tests these theories by analyzing the results of a study of the Fund's claimants. Part I provides general background, summarizes existing commentary on the Fund, and discusses prior research on social justice that is relevant to the 9/11 claimants' experiences with the Fund. Part II of this article describes the methodology behind the study, in which seventy-one individuals who filed claims with the Fund completed surveys about their experiences with and perceptions of the Fund. Part III discusses the survey results. We found that participants were reasonably satisfied with the procedural aspects of the Fund, such as representatives' impartiality and respectful treatment. Participants were less satisfied, however, with the distributive aspects of the Fund, such as the unequal distribution of compensation and the reduction in compensation if claimants received compensation from other sources (e.g., life insurance). Part IV of this article addresses the implications of the study results for public policy and for theories of social justice.

Relevance:

20.00%

Publisher:

Abstract:

Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
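As a rough sketch of the relaxation the abstract refers to (the notation below is not taken from the dissertation and is only the standard formulation): for a binary linear code $C \subseteq \{0,1\}^n$ used over a binary-input memoryless channel with received word $y$, maximum-likelihood decoding can be written as minimizing a linear cost over the codewords,
$$\hat{x}_{\mathrm{ML}} \;=\; \arg\min_{x \in C} \sum_{i=1}^{n} \lambda_i x_i, \qquad \lambda_i \;=\; \log\frac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)},$$
which is equivalent to minimizing the same objective over the codeword polytope $\operatorname{conv}(C)$. The linear programming decoder instead minimizes over a relaxed polytope $\mathcal{P} \supseteq \operatorname{conv}(C)$ obtained by intersecting the local constraints of the individual parity checks,
$$\hat{x}_{\mathrm{LP}} \;=\; \arg\min_{x \in \mathcal{P}} \sum_{i=1}^{n} \lambda_i x_i.$$
Vertices of $\mathcal{P}$ with fractional coordinates are exactly the non-codeword outputs the decoder can produce, i.e., the pseudocodewords studied in the dissertation.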