988 results for Delsarte-McEliece Theorem


Relevance:

10.00%

Publisher:

Abstract:

One very useful idea in AI research has been the notion of an explicit model of a problem situation. Procedural deduction languages, such as PLANNER, have been valuable tools for building these models. But PLANNER and its relatives are very limited in their ability to describe situations which are only partially specified. This thesis explores methods of increasing the ability of procedural deduction systems to deal with incomplete knowledge. The thesis examines in detail problems involving negation, implication, disjunction, quantification, and equality. Control structure issues and the problem of modelling change under incomplete knowledge are also considered. Extensive comparisons are made with systems for mechanical theorem proving.
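
A minimal sketch of the core difficulty (illustrative only, not drawn from the thesis; the facts and names are invented): PLANNER-style procedural deduction treats whatever is unprovable as false, which is unsound when the world model is only partially specified.

```python
# Hypothetical PLANNER-style negation-as-failure over a partial world model.
known_facts = {("block", "a"), ("block", "b")}  # nothing is recorded about "c"

def thnot(fact):
    """PLANNER's THNOT: succeed whenever `fact` cannot be proved."""
    return fact not in known_facts

# The system concludes 'c is not a block' merely because it cannot prove
# that it is -- but under incomplete knowledge the honest answer is "unknown".
print(thnot(("block", "c")))  # True
```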

Relevance:

10.00%

Publisher:

Abstract:

The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction: the connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular for relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described; complete implementations are presented, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
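
As a minimal sketch of this style of computation (illustrative only; not one of the thesis's languages): a single adder 'device' relating a + b = c, attached to three wires, which computes in whichever direction values happen to arrive.

```python
class Wire:
    """Carries at most one value and notifies locally attached devices."""
    def __init__(self):
        self.value = None
        self.devices = []

    def set(self, value):
        if self.value is not None and self.value != value:
            raise ValueError("contradiction on wire")
        self.value = value
        for d in self.devices:   # local propagation only
            d.update()

class Adder:
    """A device enforcing a + b = c, with no designated inputs or outputs."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for w in (a, b, c):
            w.devices.append(self)

    def update(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None and c is None:
            self.c.set(a + b)
        elif a is not None and c is not None and b is None:
            self.b.set(c - a)
        elif b is not None and c is not None and a is None:
            self.a.set(c - b)

a, b, c = Wire(), Wire(), Wire()
Adder(a, b, c)
c.set(10)
a.set(4)
print(b.value)  # 6 -- the same relation, run 'backwards'
```

Because a device fires only when a value appears on a wire it is attached to, deduction stays local; this is the restriction that forestalls the combinatorial explosion a general theorem prover would face.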

Relevance:

10.00%

Publisher:

Abstract:

The influence of laser-field parameters, such as intensity and pulse width, on the population of a molecular excited state is investigated using the time-dependent wavepacket method. For a two-state system in intense laser fields, the populations in the upper and lower states are obtained from the wavefunctions computed by solving the Schrödinger equation with the split-operator scheme. The calculation shows that both the laser intensity and the pulse width have a strong effect on the population of the molecular excited state, and that, as a common feature of light-matter interaction (LMI), the periodic variation of the population in each state over the evolution time can be interpreted in terms of Rabi oscillation and the area theorem. The results illustrate that, by controlling these two parameters, the desired population in the excited state of interest can be obtained, which provides a foundation for light manipulation of molecular processes. (C) 2005 Elsevier B.V. All rights reserved.
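
As an illustrative aside (not the paper's wavepacket calculation, which solves the time-dependent Schrödinger equation numerically): the textbook Rabi formula already shows how the field, through the Rabi frequency, drives the periodic population transfer described above. All parameter values below are assumed, in arbitrary units.

```python
import numpy as np

def excited_population(t, omega, delta=0.0):
    """Two-state Rabi formula: P_e(t) = (omega^2 / W^2) * sin^2(W t / 2),
    where W = sqrt(omega^2 + delta^2) is the generalized Rabi frequency,
    omega scales with the field amplitude, and delta is the detuning."""
    w = np.sqrt(omega**2 + delta**2)
    return (omega**2 / w**2) * np.sin(w * t / 2.0)**2

t = np.linspace(0.0, 4.0 * np.pi, 9)
print(excited_population(t, omega=1.0))             # resonant: peaks at 1
print(excited_population(t, omega=1.0, delta=1.0))  # detuned: capped at 1/2
```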

Relevance:

10.00%

Publisher:

Abstract:

The digital divide has been, at least until very recently, a major theme in policy and in interdisciplinary academic circles across the world, as well as at the collective global level, as attested by the World Summit on the Information Society. Numerous research papers and volumes have attempted to conceptualise the digital divide and to offer reasoned prescriptive and normative responses. What has been lacking in many of these studies, it is submitted, is a rigorous engagement with moral and political philosophy, the result being a failure to situate the digital divide - or rather, more widely, information imbalances - in a holistic understanding of social structures of power and wealth. In practice, prescriptive offerings have been little more than philanthropic in tendency, whether private or corporate philanthropy. Instead, a theory of distributive justice is required, one that recovers the tradition of emancipatory, democratic struggle. This much has been said before. What is new here, however, is that the paper suggests a specific formula, the Rawls-Tawney theorem, as a solution at the level of analytical moral-political philosophy. Building on the work of John Rawls and R. H. Tawney, this avoids both the Charybdis of Marxism and the Scylla of liberalism. It delineates some of the details of the meaning of social justice in the information age. Promulgating a conception of isonomia, which, while egalitarian, eschews arithmetic equality (the equality of misery), the paper hopes to contribute to the emerging ideal of communicative justice in the media-saturated, post-industrial epoch.

Relevance:

10.00%

Publisher:

Abstract:

Ridoux, O. and Ferré, S. (2004). Introduction to logical information systems. Information Processing & Management, 40 (3), 383-419. Elsevier.

Relevance:

10.00%

Publisher:

Abstract:

Shen, Q., Zhao, R. and Tang, W. (2008). Modelling random fuzzy renewal reward processes. IEEE Transactions on Fuzzy Systems, 16 (5), 1379-1385.

Relevance:

10.00%

Publisher:

Abstract:

Douglas, R.; Cullen, M.J.P.; Roulston, I.; Sewell, M.J. (2005). Generalized semi-geostrophic theory on a sphere. Journal of Fluid Mechanics, 531, 123-157.

Relevance:

10.00%

Publisher:

Abstract:

Faculty of Mathematics and Computer Science

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this paper is to construct an example which, first, illustrates Starrett's Spatial Impossibility Theorem when agents have free mobility; and, second, allows us to obtain a competitive equilibrium with transportation when agents move only if there is a noticeable difference in utilities that justifies the change of location.

Relevance:

10.00%

Publisher:

Abstract:

Predictability - the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements - is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems - possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing - cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems - not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the CLEOPATRA programming language. CLEOPATRA features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. CLEOPATRA is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of CLEOPATRA has been in use as a specification and simulation language for embedded time-critical robotic processes.
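
As a loose, assumed illustration of the event-driven flavour sketched above (this is not CLEOPATRA syntax, and it merely checks a time bound at run time, whereas the TRA methodology verifies such bounds before deployment):

```python
import time

class TimedReactor:
    """Toy event-driven object: every reaction carries an explicit time bound."""
    def __init__(self):
        self.handlers = {}  # event name -> (handler, deadline in seconds)

    def on(self, event, deadline):
        def register(fn):
            self.handlers[event] = (fn, deadline)
            return fn
        return register

    def fire(self, event, payload):
        fn, deadline = self.handlers[event]
        start = time.monotonic()
        result = fn(payload)
        elapsed = time.monotonic() - start
        if elapsed > deadline:  # the timeliness requirement was violated
            raise RuntimeError(f"{event}: {elapsed:.4f}s exceeded {deadline}s")
        return result

robot = TimedReactor()

@robot.on("sensor_reading", deadline=0.01)
def adjust(value):
    return max(0.0, min(1.0, value))  # clamp an actuator command

print(robot.fire("sensor_reading", 1.7))  # 1.0, well within its bound
```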

Relevance:

10.00%

Publisher:

Abstract:

We give an explicit and easy-to-verify characterization of when subsets of finite total orders (in general, infinitely many of them) are uniformly definable by a single first-order formula. From this characterization we derive immediately that Beth's definability theorem does not hold in any class of finite total orders, as well as that McColm's first conjecture is true for all classes of finite total orders. Another consequence is a natural 0-1 law for definable subsets of finite total orders, expressed as a statement about the possible densities of first-order definable subsets.


Relevance:

10.00%

Publisher:

Abstract:

We generalize the well-known pebble game to infinite dags, and we use this generalization to give new and shorter proofs of results in different areas of computer science (as diverse as "logic of programs" and "formal language theory"). Our applications here include a proof of a theorem due to Salomaa, asserting the existence of a context-free language with infinite index, and a proof of a theorem due to Tiuryn and Erimbetov, asserting that unbounded memory increases the power of logics of programs. The original proofs by Salomaa, Tiuryn, and Erimbetov are fairly technical. The proofs by Tiuryn and Erimbetov also involve advanced techniques of model theory, namely, back-and-forth constructions based on a variant of Ehrenfeucht-Fraïssé games. By contrast, our proofs are not only shorter but also elementary. All we need is essentially finite induction and, in the case of the Tiuryn-Erimbetov result, the compactness and completeness of first-order logic.
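
For readers unfamiliar with the game, a minimal sketch of the ordinary black pebble game on a finite dag (illustrative only; the paper's contribution is the generalization to infinite dags): a pebble may be placed on a node once all its predecessors carry pebbles, any pebble may be removed at any time, and the cost of a strategy is the peak number of pebbles in use at once.

```python
def peak_pebbles(dag, moves):
    """dag: node -> list of predecessor nodes.
    moves: sequence of ('place' | 'remove', node) pairs.
    Returns the maximum number of pebbles simultaneously on the dag."""
    pebbled, peak = set(), 0
    for op, node in moves:
        if op == "place":
            assert all(p in pebbled for p in dag[node]), f"cannot pebble {node}"
            pebbled.add(node)
            peak = max(peak, len(pebbled))
        else:
            pebbled.discard(node)
    return peak

# A tiny dag in which c depends on both a and b.
dag = {"a": [], "b": [], "c": ["a", "b"]}
strategy = [("place", "a"), ("place", "b"), ("place", "c"),
            ("remove", "a"), ("remove", "b")]
print(peak_pebbles(dag, strategy))  # 3
```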

Relevance:

10.00%

Publisher:

Abstract:

The analysis of energy detector systems is a well-studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in a closed form and so their evaluation requires the use of numerical methods. In general, these are computationally expensive, which can cause difficulties in certain scenarios, such as in the optimisation of device parameters on low-cost hardware. The problem becomes acute in situations where the signal-to-noise ratio is small and reliable detection is to be ensured, or where the number of samples of the received signal is large. Furthermore, due to the analytic complexity of the models, further insight into the behaviour of various system parameters of interest is not readily apparent. In this thesis, an approximation-based approach is taken towards the analysis of such systems. By focusing on the situations where exact analyses become complicated, and making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from the use of the approximations. In addition, it is demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature. Consequently, the regions of applicability of the approximations directly complement the regions of applicability of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal-to-noise ratio and diversity gain, are also derived. In the course of the analysis, a novel theorem describing the convergence of the chi-square, noncentral chi-square and gamma distributions towards the normal distribution is derived. The theorem describes a tight upper bound on the error resulting from the application of the central limit theorem to random variables of the aforementioned distributions and gives a much better description of the resulting error than existing Berry-Esseen type bounds. A second novel theorem, providing an upper bound on the maximum error resulting from the use of the central limit theorem to approximate the noncentral chi-square distribution where the noncentrality parameter is a multiple of the number of degrees of freedom, is also derived.
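
A small sketch in the spirit of the thesis (the setting is assumed: real-valued, unit-variance Gaussian noise samples): under noise only, the energy detector's statistic over N samples is chi-square distributed with N degrees of freedom, and the central limit theorem gives a cheap normal approximation whose error shrinks as N grows, the kind of error the derived bounds quantify.

```python
import numpy as np
from scipy import stats

def false_alarm_exact(threshold, n):
    """P(T > threshold) under noise only, with T ~ chi-square(n)."""
    return stats.chi2.sf(threshold, df=n)

def false_alarm_normal(threshold, n):
    """CLT approximation: chi-square(n) is roughly Normal(mean=n, var=2n)."""
    return stats.norm.sf(threshold, loc=n, scale=np.sqrt(2.0 * n))

for n in (10, 100, 1000):
    thr = n + 2.0 * np.sqrt(2.0 * n)  # two standard deviations above the mean
    exact = false_alarm_exact(thr, n)
    approx = false_alarm_normal(thr, n)
    print(f"N={n}: exact={exact:.5f} normal={approx:.5f} "
          f"error={abs(exact - approx):.5f}")
```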

Relevance:

10.00%

Publisher:

Abstract:

This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains. © Institute of Mathematical Statistics, 2010.
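
A small numerical sketch of the automatic correction being analysed (notation assumed; this is not the paper's code): putting a uniform prior on the common inclusion probability p makes the prior mass of each individual model fall combinatorially as the number of candidate variables m grows, even though each variable's marginal prior inclusion probability stays at 1/2 by symmetry.

```python
from fractions import Fraction
from math import comb

def model_prior_mass(k, m):
    """Prior mass of one particular model using k of m variables, with
    p ~ Uniform(0, 1): integral of p^k (1-p)^(m-k) dp = 1/((m+1) C(m, k))."""
    return Fraction(1, (m + 1) * comb(m, k))

for m in (5, 50):
    print(m, model_prior_mass(1, m))  # mass of any single-variable model
# Output: '5 1/30' then '50 1/2550' -- adding candidate variables automatically
# penalizes each individual model, which is the multiplicity correction.
```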