918 results for symbolic computation


Relevance: 20.00%

Abstract:

Roberts, M. (2004). Sickles and Scythes revisited: harvest work, wages and symbolic meanings. In P. Lane; N. Raven; and K. Snell (Eds.), Women, Work and Wages in England, 1600-1850 (pp.68-101). Woodbridge: Boydell and Brewer. RAE2008

Relevance: 20.00%

Abstract:

Rendle, Matthew, 'The Symbolic Revolution: The Russian Nobility and February 1917', Revolutionary Russia (2005) 18(1) pp.23-46 RAE2008

Relevance: 20.00%

Abstract:

Gohm, Rolf; Kummerer, B.; Lang, T. (2006) 'Non-commutative symbolic coding', Ergodic Theory and Dynamical Systems 26(5) pp. 1521-1548 RAE2008

Relevance: 20.00%

Abstract:

Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can be modeled and completely verified by performing formal verification on only a finite set of compositions, provided that sufficient conditions are proven concerning properties of small compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other, shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
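
The finite-set argument lends itself to a small illustration. The sketch below is hypothetical (the component names, the reduction rule, and the model_check stand-in are assumptions, not the paper's tool): if a reduction states that two adjacent proxies are causally indistinguishable from one, then chains differing only in the length of their proxy runs share a canonical form, and model checking the bounded-length canonical forms covers those longer variants.

from itertools import product

COMPONENTS = ["client", "proxy", "cache", "server"]   # hypothetical component types

def normalize(chain):
    """Apply the assumed reduction 'proxy proxy -> proxy' until a fixpoint."""
    chain = list(chain)
    i = 0
    while i < len(chain) - 1:
        if chain[i] == "proxy" and chain[i + 1] == "proxy":
            del chain[i + 1]          # collapse the causally indistinguishable pair
        else:
            i += 1
    return tuple(chain)

def model_check(chain):
    """Stand-in for one finite-state model-checker run on a single composition."""
    return True                        # placeholder verdict

# Any chain that differs from another only in the length of its proxy runs
# normalizes to the same canonical form, so checking the canonical forms of
# bounded length also covers those arbitrarily long variants.
assert normalize(("client", "proxy", "proxy", "proxy", "server")) == ("client", "proxy", "server")
canonical = {normalize(c) for n in range(1, 5) for c in product(COMPONENTS, repeat=n)}
print(len(canonical), "canonical compositions verified:", all(map(model_check, canonical)))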

Relevance: 20.00%

Abstract:

Attributing a dollar value to a keyword is an essential part of running any profitable search engine advertising campaign. When an advertiser has complete control over the interaction with and monetization of each user arriving on a given keyword, the value of that term can be accurately tracked. However, in many instances, the advertiser may monetize arrivals indirectly through one or more third parties. In such cases, it is typical for the third party to provide only coarse-grained reporting: rather than report each monetization event, users are aggregated into larger channels and the third party reports aggregate information such as total daily revenue for each channel. Examples of third parties that use channels include Amazon and Google AdSense. In such scenarios, the number of channels is generally much smaller than the number of keywords whose value per click (VPC) we wish to learn. However, the advertiser has flexibility as to how to assign keywords to channels over time. We introduce the channelization problem: how do we adaptively assign keywords to channels over the course of multiple days to quickly obtain accurate VPC estimates of all keywords? We relate this problem to classical results in weighing design, devise new adaptive algorithms for this problem, and quantify the performance of these algorithms experimentally. Our results demonstrate that adaptive weighing designs that exploit statistics of term frequency, variability in VPCs across keywords, and flexible channel assignments over time provide the best estimators of keyword VPCs.
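
To make the estimation problem concrete, here is a small hypothetical sketch (keyword values, click counts, and channel assignments are invented, and ordinary least squares stands in for the adaptive weighing designs studied in the paper): each channel's daily revenue report is one linear equation in the unknown VPCs, so varying the keyword-to-channel assignment across days yields a solvable system.

import numpy as np

true_vpc = np.array([0.40, 1.10, 0.25, 0.80])          # unknown in practice
days = [
    {"assign": [0, 0, 1, 1], "clicks": [120, 30, 200, 15]},
    {"assign": [0, 1, 0, 1], "clicks": [100, 25, 180, 20]},
    {"assign": [1, 0, 0, 1], "clicks": [110, 40, 210, 10]},
    {"assign": [0, 1, 1, 0], "clicks": [130, 35, 190, 25]},
]

rows, obs = [], []
for day in days:
    for ch in (0, 1):
        # coefficient of keyword k is its clicks if it sits in this channel today
        row = [c if a == ch else 0.0 for a, c in zip(day["assign"], day["clicks"])]
        rows.append(row)
        obs.append(float(np.dot(row, true_vpc)))        # the channel's reported revenue

A, y = np.array(rows), np.array(obs)
vpc_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(vpc_hat, 3))   # recovers true_vpc when the assignments give a full-rank design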

Relevance: 20.00%

Abstract:

It is shown that determining whether a quantum computation has a non-zero probability of accepting is at least as hard as the polynomial time hierarchy. This hardness result also applies to determining in general whether a given quantum basis state appears with nonzero amplitude in a superposition, or whether a given quantum bit has positive expectation value at the end of a quantum computation.

Relevance: 20.00%

Abstract:

It is shown that determining whether a quantum computation has a non-zero probability of accepting is at least as hard as the polynomial time hierarchy. This hardness result also applies to determining in general whether a given quantum basis state appears with nonzero amplitude in a superposition, or whether a given quantum bit has positive expectation value at the end of a quantum computation. This result is achieved by showing that the complexity class NQP of Adleman, Demarrais, and Huang, a quantum analog of NP, is equal to the counting class coC=P.
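
For orientation, the result can be written compactly; the GapP characterization of coC=P used below is the standard one and is an assumption of this restatement rather than a quotation from the paper:

\[
\mathrm{NQP} \;=\; \mathrm{coC_{=}P}:
\qquad
x \in L
\;\Longleftrightarrow\;
\Pr[\,Q_L \text{ accepts } x\,] > 0
\;\Longleftrightarrow\;
f_L(x) \neq 0,
\]
for a polynomial-time quantum machine $Q_L$ witnessing $L \in \mathrm{NQP}$ and an associated GapP function $f_L$.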

Relevance: 20.00%

Abstract:

British Petroleum (89A-1204); Defense Advanced Research Projects Agency (N00014-92-J-4015); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (F49620-92-J-0225)

Relevance: 20.00%

Abstract:

We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user. © 2010 The American Physical Society.
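
The distance referred to above is the usual trace distance; writing it out (a standard definition, not a formula taken from the paper) makes the stopping criterion explicit:

\[
D\bigl(\rho_{\mathrm{real}}(t),\,\rho_{\mathrm{ideal}}(t)\bigr)
\;=\; \tfrac{1}{2}\,\bigl\lVert \rho_{\mathrm{real}}(t) - \rho_{\mathrm{ideal}}(t) \bigr\rVert_{1},
\qquad
t_{\max} \;=\; \sup\{\, t : D(t) \le \epsilon \,\},
\]
with $\epsilon$ the user-set tolerance; per the abstract, $t_{\max}$ is finite for Ohmic and sub-Ohmic baths, while for a super-Ohmic bath $D(t)$ saturates.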

Relevance: 20.00%

Abstract:

This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies, which generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software designs can lead to vast speed-ups and, critically, enable statistical analyses that would otherwise not be performed because of compute-time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
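
As a toy illustration of why mixture-model fitting maps well onto GPUs (generic NumPy/SciPy, not the authors' software, with invented numbers): the per-observation responsibility calculation in each fitting iteration is independent across observations, which is exactly the fine-grained parallelism a GPU kernel exploits.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)                     # observations (1-D for brevity)
weights = np.array([0.3, 0.5, 0.2])              # hypothetical mixture weights
means   = np.array([-2.0, 0.0, 3.0])
sds     = np.array([0.8, 1.0, 1.2])

# Shape (n, K): each row is handled independently, so the work is embarrassingly
# parallel; a GPU implementation assigns observations (or blocks of them) to threads.
dens = weights * norm.pdf(x[:, None], loc=means, scale=sds)
resp = dens / dens.sum(axis=1, keepdims=True)    # normalized responsibilities
print(resp[:3].round(3))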

Relevance: 20.00%

Abstract:

The case is made that very accurate dependence analysis is required to underpin software tools that aid the generation of efficient parallel implementations of scalar code. The current status of dependence analysis is shown to be inadequate for the generation of efficient parallel code, causing too many conservative assumptions to be made. This paper summarises the limitations of conventional dependence analysis techniques and then describes a series of extensions which enable the production of a much more accurate dependence graph. The extensions include analysis of symbolic variables; the development of a symbolic inequality disproof algorithm and its exploitation in a symbolic Banerjee inequality test; the use of inference engine proofs; the exploitation of exact dependence and dependence pre-domination attributes; interprocedural array analysis; conditional variable definition tracing; and integer array tracing and division calculations. Case studies of typical numerical codes show that the total dependencies estimated by conventional analysis are reduced by up to 50%. The techniques described in this paper have been embedded within a suite of tools, CAPTools, which combines analysis with user knowledge to produce efficient parallel implementations of numerical mesh-based codes.
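
A hedged illustration of the kind of inequality reasoning involved (a textbook-style range test in the spirit of the Banerjee inequality, not CAPTools code; coefficients and bounds are invented): a dependence between an access A[a*i + c0] and an access A[b*i + c1] inside a loop i = L..U is disproved if the required constant difference lies outside the attainable range of a*i1 - b*i2.

def banerjee_independent(a, c0, b, c1, L, U):
    """Return True if a*i1 + c0 == b*i2 + c1 has no solution with i1, i2 in
    [L, U], i.e. the two accesses are provably independent by the range test."""
    diff = c1 - c0                              # need a*i1 - b*i2 == diff
    lo = min(a * L, a * U) - max(b * L, b * U)  # minimum of a*i1 - b*i2
    hi = max(a * L, a * U) - min(b * L, b * U)  # maximum of a*i1 - b*i2
    return not (lo <= diff <= hi)

# A[i] versus A[i + 200] over i = 0..99: the offsets can never coincide,
# so the range test disproves the dependence and the loop can be parallelized.
print(banerjee_independent(1, 0, 1, 200, 0, 99))   # True  -> dependence disproved
# A[2*i] versus A[2*i + 1] over i = 0..99: the range test alone cannot disprove
# this one (a GCD test would, since 2*i1 - 2*i2 is always even and never 1).
print(banerjee_independent(2, 0, 2, 1, 0, 99))     # False -> not disproved here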

Relevance: 20.00%

Abstract:

FUELCON is an expert system in nuclear engineering. Its task is optimized refueling design, which is crucial to keeping down operation costs at a plant. FUELCON proposes sets of alternative configurations of fuel allocation; the fuel is positioned in a grid representing the core of a reactor. The practitioner of in-core fuel management uses FUELCON to generate a reasonably good configuration for the situation at hand. The domain expert, on the other hand, resorts to the system to test heuristics and discover new ones for the task described above. Expert use involves a manual phase of revising the ruleset, based on performance during previous iterations in the same session. This paper is concerned with a new phase: the design of a neural component to carry out the revision automatically. Such automated revision considers the previous performance of the system and uses it for adaptation and the learning of better rules. The neural component is based on a particular schema for a symbolic-to-recurrent-analogue bridge, called NIPPL, and on reinforcement learning of neural networks for the adaptation.

Relevance: 20.00%

Abstract:

The paper describes the design of an efficient and robust genetic algorithm for the nuclear fuel loading problem (i.e., refuellings: the in-core fuel management problem), a complex combinatorial, multimodal optimisation problem. Evolutionary computation as performed by FUELGEN replaces heuristic search of the kind performed by the FUELCON expert system (CAI 12/4) to solve the same problem. In contrast to the traditional genetic algorithm, which makes strong requirements on the representation used and its parameter setting in order to be efficient, recent research on new, robust genetic algorithms shows that representations unsuitable for the traditional genetic algorithm can still be used to good effect with little parameter adjustment. The representation presented here is a simple symbolic one with no linkage attributes, making the genetic algorithm particularly easy to apply to fuel loading problems with differing core structures and assembly inventories. A nonlinear fitness function has been constructed to direct the search efficiently in the presence of the many local optima that result from the constraint on solutions.
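
A minimal sketch of the ingredients named above, with everything hypothetical (inventory size, objective, and penalty are invented and not FUELGEN's): a loading is a plain permutation of assemblies onto core positions (a simple symbolic representation with no linkage attributes), mutation is a swap that preserves validity, and a nonlinear penalty term stands in for constraint handling in the fitness function.

import random

ASSEMBLIES = list(range(16))            # hypothetical inventory of 16 assemblies

def fitness(loading):
    # Placeholder objective: reward placing high-index ("fresh") assemblies at
    # high-index positions, with a nonlinear penalty on the assembly placed in
    # position 0, standing in for power-peaking constraints.
    objective = sum(a * p for p, a in enumerate(loading))
    penalty = loading[0] ** 2
    return objective - penalty

def mutate(loading):
    child = loading[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]      # a swap keeps the permutation valid
    return child

population = [random.sample(ASSEMBLIES, len(ASSEMBLIES)) for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:25]                    # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(25)]

print("best loading:", max(population, key=fitness))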

Relevance: 20.00%

Abstract:

For the numerical solution of the linearized Euler equations, an optimized computational scheme is considered. It is based on fully staggered (in space and time) regular meshes and on a simple mirroring procedure at the stepwise solid walls. There is no need to define ghost points inside the solid objects that reflect the sound waves. Test results demonstrate the accuracy of the method, which may be used for aeroacoustic problems with complex geometries.
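
For a sense of what "fully staggered in space and time" means, here is a hedged 1-D illustration of such an update for the linearized Euler (acoustic) equations; the paper's mirroring procedure at stepwise walls is not reproduced, only the standard rigid-wall condition on a staggered mesh:

\[
u_{i+\frac12}^{\,n+\frac12} \;=\; u_{i+\frac12}^{\,n-\frac12}
 \;-\; \frac{\Delta t}{\rho_0\,\Delta x}\,\bigl(p_{i+1}^{\,n} - p_{i}^{\,n}\bigr),
\qquad
p_{i}^{\,n+1} \;=\; p_{i}^{\,n}
 \;-\; \frac{\rho_0\,c^{2}\,\Delta t}{\Delta x}\,\bigl(u_{i+\frac12}^{\,n+\frac12} - u_{i-\frac12}^{\,n+\frac12}\bigr),
\]
with velocities stored at cell faces, pressures at cell centres, and a rigid wall imposed simply as $u = 0$ on wall faces (no ghost points inside the solid).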

Relevance: 20.00%

Abstract:

Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required of the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime. Results representative of "the state of the practice" for a scaled sequence of structured-grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the CRAY T3E.