54 results for polynomial superinvariant
Abstract:
We present a novel approach to goal recognition based on a two-stage paradigm of graph construction and analysis. First, a graph structure called a Goal Graph is constructed to represent the observed actions, the state of the world, and the achieved goals, as well as various connections between these nodes at consecutive time steps. Then, the Goal Graph is analysed at each time step to recognise those partially or fully achieved goals that are consistent with the actions observed so far. The Goal Graph analysis also reveals valid plans for the recognised goals or parts of these goals. Our approach to goal recognition does not need a plan library. It therefore does not suffer from the problems of acquiring and hand-coding large plan libraries, nor from the problems of searching a plan space of exponential size. We describe two algorithms for Goal Graph construction and analysis in this paradigm. Both algorithms are provably sound, polynomial-time, and polynomial-space. The number of goals recognised by our algorithms is usually very small after a sequence of observed actions has been processed, so the sequence of observed actions is well explained by the recognised goals with little ambiguity. We have evaluated these algorithms in the UNIX domain, where excellent performance has been achieved in terms of accuracy, efficiency, and scalability.
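As a toy illustration of the two-stage paradigm, the sketch below grows a layered record of observed actions and their effects, then reports the goals that are partially or fully supported so far. All names (GoalGraph, add_observation, consistent_goals) and the UNIX-flavoured example are hypothetical stand-ins, not the paper's actual data structures or API.

```python
# Hypothetical sketch only: names and the UNIX-style example are invented.
class GoalGraph:
    def __init__(self, goals):
        self.goals = goals        # goal name -> set of required facts
        self.achieved = set()     # facts achieved by observed actions
        self.layers = []          # one (action, effects) layer per time step

    def add_observation(self, action, effects):
        """Stage 1: record an observed action and the facts it achieves."""
        self.layers.append((action, frozenset(effects)))
        self.achieved |= set(effects)

    def consistent_goals(self):
        """Stage 2: map each supported goal to its achieved sub-goals."""
        return {g: req & self.achieved
                for g, req in self.goals.items() if req & self.achieved}

gg = GoalGraph({"backup-file": {"copied", "compressed"}})
gg.add_observation("cp paper.tex /backup/", {"copied"})
print(gg.consistent_goals())   # {'backup-file': {'copied'}}
```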
Thickness-induced stabilization of ferroelectricity in SrRuO3/Ba0.5Sr0.5TiO3/Au thin film capacitors
Abstract:
Pulsed-laser deposition has been used to fabricate Au/Ba0.5Sr0.5TiO3/SrRuO3/MgO thin film capacitor structures. Crystallographic and microstructural investigations indicated that the Ba0.5Sr0.5TiO3 (BST) had grown epitaxially onto the SrRuO3 lower electrode, inducing in-plane compressive and out-of-plane tensile strain in the BST. The magnitude of the strain developed increased systematically as film thickness decreased. At room temperature this composition of BST is paraelectric in bulk. However, polarization measurements suggested that strain had stabilized the ferroelectric state, and that the decrease in film thickness caused an increase in remanent polarization. An increase in the paraelectric-ferroelectric transition temperature upon a decrease in thickness was confirmed by dielectric measurements. Polarization loops were fitted to a Landau-Ginzburg-Devonshire (LGD) polynomial expansion, from which a second-order paraelectric-ferroelectric transition in the films was suggested at a thickness of approximately 500 nm. Further, the LGD analysis showed that the observed changes in room-temperature polarization were entirely consistent with strain coupling in the system. (C) 2002 American Institute of Physics.
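To make the LGD step concrete: with a free energy expansion F = aP² + bP⁴ + cP⁶ − EP, equilibrium gives E(P) = 2aP + 4bP³ + 6cP⁵, an odd polynomial that can be least-squares fitted to a measured loop branch. The sketch below uses synthetic data; the coefficients and units are illustrative, not the paper's values.

```python
import numpy as np

P = np.linspace(-20, 20, 81)                      # polarization (illustrative)
E = 2e-3 * P + 4e-7 * P**3 + 6e-11 * P**5         # synthetic "measured" field
A = np.column_stack([P, P**3, P**5])              # odd-power design matrix
coef, *_ = np.linalg.lstsq(A, E, rcond=None)
a, b, c = coef[0] / 2, coef[1] / 4, coef[2] / 6   # LGD coefficients
print(a, b, c)   # a < 0 would signal a strain-stabilized ferroelectric state
```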
Abstract:
This paper examines the determinants of unemployment duration in a competing risks framework with two destination states: inactivity and employment. The innovation is the recognition of defective risks. A polynomial hazard function is used to differentiate between two possible sources of infinite durations. The first is produced by a random process of unlucky draws, the second by workers rejecting a destination state. The evidence favors the mover-stayer model over the search model. Refinement of the former approach, using a more flexible baseline hazard function, produces a robust and more convincing explanation for positive and zero transition rates out of unemployment.
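A minimal sketch of the kind of defective-risk ("mover-stayer") likelihood described here, under stated assumptions: a quadratic hazard h(t) = a0 + a1·t + a2·t² (whose cumulative hazard integrates in closed form) and a stayer mass π that makes the risk defective, S(t) = π + (1 − π)·exp(−H(t)). The data and parameterisation are illustrative, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = rng.exponential(1.0, 200)          # unemployment durations (illustrative)
exited = rng.random(200) < 0.8         # True = exit observed, False = censored

def neg_loglik(params):
    a0, a1, a2, logit_pi = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))           # stayer probability in (0, 1)
    h = a0 + a1 * t + a2 * t**2                    # polynomial hazard (assumed > 0)
    H = a0 * t + a1 * t**2 / 2 + a2 * t**3 / 3     # closed-form cumulative hazard
    ll = np.where(exited,
                  np.log1p(-pi) + np.log(np.clip(h, 1e-12, None)) - H,
                  np.log(pi + (1 - pi) * np.exp(-H)))
    return -ll.sum()

fit = minimize(neg_loglik, x0=[0.5, 0.0, 0.0, 0.0], method="Nelder-Mead")
print(fit.x)   # estimated hazard coefficients and stayer logit
```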
Abstract:
Slurries with high penetrability for the production of self-consolidating Slurry Infiltrated Fiber Concrete (SIFCON) were investigated in this study. A factorial experimental design was adopted to assess the combined effects of five independent variables on the mini-slump test, plate cohesion meter, induced bleeding test, J-fiber penetration test, and compressive strength at 7 and 28 days. The independent variables investigated were the proportions of limestone powder (LSP) and sand, the dosages of superplasticiser (SP) and viscosity agent (VA), and the water-to-binder ratio (w/b). A two-level fractional factorial statistical method was used to model the influence of these key parameters on the properties of the fresh cement slurry and on compressive strength. The models are valid for mixes with 10 to 50% LSP as replacement of cement, 0.02 to 0.06% VA and 0.6 to 1.2% SP by mass of cement, 50 to 150% sand (% mass of binder), and 0.42 to 0.48 w/b. The influences of LSP, SP, VA, sand, and w/b were characterised and analysed using polynomial regression, which identifies the primary factors and their interactions affecting the measured properties. Polynomial models were developed for mini-slump, plate cohesion meter, J-fiber penetration test, induced bleeding, and compressive strength as functions of LSP, SP, VA, sand, and w/b. The estimated results for mini-slump, induced bleeding, and compressive strength from the derived models are compared with results obtained from previously proposed models developed for cement paste. The proposed response models of the self-consolidating SIFCON offer useful information regarding mix optimization to secure a highly penetrable slurry with low compressive strength.
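The polynomial response models described here are, in essence, least-squares fits over coded factor levels with main effects and two-factor interactions. The sketch below shows this generic computation; the design and response values are invented for illustration, and the helper name factorial_model is not from the paper.

```python
import numpy as np

def factorial_model(X, y):
    """Fit intercept, main effects, and two-factor interactions
    over a coded (-1/+1) two-level factorial design."""
    n, k = X.shape
    cols, names = [np.ones(n)], ["1"]
    for i in range(k):
        cols.append(X[:, i]); names.append(f"x{i}")
    for i in range(k):
        for j in range(i + 1, k):
            cols.append(X[:, i] * X[:, j]); names.append(f"x{i}*x{j}")
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return dict(zip(names, coef))

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(32, 5))        # coded LSP, SP, VA, sand, w/b
y = (5 + 2 * X[:, 0] - 1.5 * X[:, 4]             # fake response, e.g. mini-slump
     + 0.8 * X[:, 0] * X[:, 4] + rng.normal(0, 0.1, 32))
print(factorial_model(X, y))
```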
Abstract:
In this paper the parameters of cement grout affecting rheological behaviour and compressive strength are investigated. A factorial experimental design was adopted to assess the combined effects of the following factors on fluidity, rheological properties, induced bleeding, and compressive strength: the water/binder ratio (W/B), the dosage of superplasticiser (SP), the dosage of viscosity agent (VA), and the proportion of limestone powder as replacement of cement (LSP). The mini-slump test, Marsh cone, Lombardi plate cohesion meter, induced bleeding test, and a coaxial rotating cylinder viscometer were used to evaluate the rheology of the cement grout, and the compressive strengths at 7 and 28 days were measured. A two-level fractional factorial statistical model was used to model the influence of these key parameters on the fluidity, rheology, and compressive strength. The models are valid for mixes with 0.35-0.42 W/B, 0.3-1.2% SP, 0.02-0.7% VA (percentage of binder), and 12-45% LSP as replacement of cement. The influences of W/B, SP, VA, and LSP were characterised and analysed using polynomial regression, which can identify the primary factors and their interactions affecting the measured properties. Polynomial models were developed for mini-slump, plate cohesion meter, induced bleeding, yield value, plastic viscosity, and compressive strength as functions of W/B, SP, VA, and the proportion of LSP. The statistical approach used highlighted the effects of limestone powder and of the dosages of SP and VA on the various rheological characteristics of the cement grout.
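For the viscometer-derived quantities, yield value and plastic viscosity are conventionally obtained from a Bingham fit, τ = τ0 + μ·γ̇, over flow-curve readings; the sketch below assumes that convention, and the readings are made up for illustration.

```python
import numpy as np

gamma_dot = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # shear rate, 1/s (made up)
tau = np.array([12.1, 14.0, 17.8, 25.9, 41.7])        # shear stress, Pa (made up)
mu, tau0 = np.polyfit(gamma_dot, tau, 1)              # tau = tau0 + mu * gamma_dot
print(f"yield value ~ {tau0:.1f} Pa, plastic viscosity ~ {mu:.2f} Pa.s")
```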
Abstract:
The non-ideal behaviour of 1-butyl-3-methylimidazolium hexafluorophosphate [bmim][PF6] in ethylene glycol monomethyl ether, CH3OCH2CH2OH (EGMME); ethylene glycol dimethyl ether, CH3OCH2CH2OCH3 (EGDME); and diethylene glycol dimethyl ether, CH3(OCH2CH2)2OCH3 (DEGDME), has been investigated over the whole composition range at T = (298.15 to 318.15) K. To gain insight into the mixing behaviour, the results of density measurements were used to estimate excess molar volumes $V_m^E$, apparent molar volumes $V_{\phi,i}$, partial molar volumes $\bar{V}_i$, excess partial molar volumes $\bar{V}_i^E$, and their limiting values at infinite dilution, $V_{\phi,i}^{\infty}$, $\bar{V}_i^{\infty}$, and $\bar{V}_i^{E,\infty}$, respectively. The volumetric results have been analyzed in the light of the Prigogine-Flory-Patterson (PFP) statistical mechanical theory. Measurements of refractive indices n were also performed for all the binary mixtures over the whole composition range at T = 298.15 K. Deviations in refractive indices, $\Delta_{\phi} n$, and deviations in molar refraction, $\Delta_{x} R$, have been calculated from the experimental data. The refractive index results have been correlated with the volumetric results and interpreted in terms of molecular interactions. The excess properties are fitted to the Redlich-Kister polynomial equation to obtain the binary coefficients and the standard errors.
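The Redlich-Kister step has a standard closed form, $V^E = x_1 x_2 \sum_k A_k (x_1 - x_2)^k$, which is linear in the coefficients $A_k$. The sketch below fits them by least squares and reports a standard error; the data points are illustrative, not the measured values.

```python
import numpy as np

x1 = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])   # illustrative
VE = np.array([-0.08, -0.15, -0.20, -0.23, -0.24, -0.22, -0.18, -0.12, -0.06])

degree = 3                                          # highest k in the expansion
basis = np.column_stack([(2 * x1 - 1) ** k for k in range(degree + 1)])
design = basis * (x1 * (1 - x1))[:, None]           # VE = x1*x2 * sum A_k (x1-x2)^k
A, res, *_ = np.linalg.lstsq(design, VE, rcond=None)
sigma = np.sqrt(res[0] / (len(VE) - degree - 1)) if res.size else 0.0
print("A_k:", A, " standard error:", sigma)
```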
Abstract:
In this paper a novel scalable public-key processor architecture is presented that supports modular exponentiation and elliptic curve cryptography over both prime GF(p) and binary GF(2^m) extension fields. This is achieved by a high-performance instruction set that provides a comprehensive range of integer and polynomial-basis field arithmetic. The instruction set and associated hardware are generic in nature and do not specifically support any particular cryptographic algorithm or protocol. Firmware within the device is used to efficiently implement complex and data-intensive arithmetic. A firmware library has been developed to demonstrate support for numerous exponentiation and ECC approaches, such as different coordinate systems and integer recoding methods. The processor has been developed as a high-performance asymmetric cryptography platform in the form of a scalable Verilog RTL core. Various features of the processor may be scaled, such as the pipeline width and the local memory subsystem, in order to suit area, speed, and power requirements. The processor is evaluated and compares favourably with previous work in terms of performance while offering an unparalleled degree of flexibility. © 2006 IEEE.
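The paper's instruction set is not reproduced here, but the core polynomial-basis primitive such an instruction set would expose looks like the following shift-and-add multiplication in GF(2^m), shown for the NIST B-163 reduction polynomial as one assumed example.

```python
# Assumed example field: GF(2^163) with the NIST B-163 reduction polynomial.
M = 163
R = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a: int, b: int) -> int:
    """Shift-and-add (carry-less) multiply in GF(2^m), polynomial basis."""
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        b >>= 1
        a <<= 1
        if a >> M:          # reduce as soon as the degree reaches m
            a ^= R
    return acc

print(hex(gf2m_mul(0b1011, 0b110)))   # small operands: plain GF(2)[x] product
```

Modular exponentiation and ECC scalar multiplication are then composed from such field primitives, matching the paper's division of labour between generic hardware and firmware.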
Abstract:
The design and implementation of a programmable cyclic redundancy check (CRC) computation circuit architecture, suitable for deployment in network-related system-on-chips (SoCs), is presented. The architecture has been designed to be field-reprogrammable so that it is fully flexible in terms of the polynomial deployed and the input port width. The circuit includes an embedded configuration controller with a low reconfiguration time and hardware cost. The circuit has been synthesised and mapped to 130-nm UMC standard cell application-specific integrated circuit (ASIC) technology and is capable of supporting line speeds of 5 Gb/s. © 2006 IEEE.
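A software model of what a programmable CRC datapath computes: a bit-serial CRC whose generator polynomial and register width are runtime parameters, mirroring the circuit's flexibility in polynomial and port width. Framing details (input reflection, initial value, final XOR) vary by CRC standard and are left out of this sketch.

```python
def crc(data: bytes, poly: int, width: int, init: int = 0) -> int:
    """Bit-serial CRC with runtime-selectable generator and register width
    (MSB-first, no reflection or final XOR).  `poly` omits the top bit,
    e.g. 0x04C11DB7 with width=32 for the CRC-32 generator."""
    mask = (1 << width) - 1
    reg = init
    for byte in data:
        for i in range(7, -1, -1):
            fb = ((reg >> (width - 1)) ^ (byte >> i)) & 1   # feedback bit
            reg = ((reg << 1) & mask) ^ (poly if fb else 0)
    return reg

print(hex(crc(b"123456789", poly=0x04C11DB7, width=32)))
```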
Abstract:
We propose a simple and flexible framework for forecasting the joint density of asset returns. The multinormal distribution is augmented with a polynomial in (time-varying) non-central co-moments of the assets. We estimate the coefficients of the polynomial via the method of moments for a carefully selected set of co-moments. In an extensive empirical study, we compare the proposed model with a range of other models widely used in the literature. Employing both recently proposed and standard techniques to evaluate multivariate forecasts, we conclude that the augmented joint density provides highly accurate forecasts of the "negative tail" of the joint distribution.
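A loose sketch of the general idea, augmenting a bivariate normal density with a low-order polynomial correction (Gram-Charlier style): the cubic Hermite term and its loadings below are hypothetical stand-ins for the moment-matched polynomial in the abstract, and the corrected density is not renormalised here.

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.zeros(2)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
c_skew = np.array([-0.05, -0.08])      # hypothetical co-skewness loadings

def augmented_pdf(x):
    base = multivariate_normal(mu, cov).pdf(x)
    z = np.linalg.solve(np.linalg.cholesky(cov), x - mu)   # standardized returns
    poly = 1.0 + c_skew @ (z**3 - 3 * z)                   # Hermite He3 correction
    return base * max(poly, 0.0)                           # crude positivity guard

# deep joint negative tail made heavier by the negative loadings:
print(augmented_pdf(np.array([-2.5, -2.5])))
```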
Abstract:
In this paper we define the structural information content of graphs as their corresponding graph entropy. This definition is based on local vertex functionals obtained by calculating j-spheres via Dijkstra's algorithm. We prove that the graph entropy and, hence, the local vertex functionals can be computed with polynomial time complexity, enabling the application of our measure to large graphs. We present numerical results for the graph entropy of chemical graphs and discuss the resulting properties. (C) 2007 Elsevier Ltd. All rights reserved.
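Concretely, the j-sphere S_j(v) is the set of vertices at shortest-path distance exactly j from v; a local vertex functional weights the sphere cardinalities, and the entropy is taken over the normalised functional values. The sketch below uses BFS for unweighted graphs (for weighted graphs, nx.single_source_dijkstra_path_length applies) and illustrative weights c_j; it is a generic reading of the construction, not the paper's exact functional.

```python
import math
import networkx as nx

def graph_entropy(G, c):
    """Entropy over local vertex functionals f(v) = sum_j c_j * |S_j(v)|."""
    f = {}
    for v in G:
        dist = nx.single_source_shortest_path_length(G, v)   # BFS j-spheres
        spheres = {}
        for u, d in dist.items():
            if d > 0:
                spheres[d] = spheres.get(d, 0) + 1
        f[v] = sum(c[j - 1] * n for j, n in spheres.items() if j <= len(c))
    total = sum(f.values())
    return -sum(p * math.log2(p)
                for p in (fv / total for fv in f.values()) if p)

print(graph_entropy(nx.path_graph(5), c=[3.0, 2.0, 1.0, 1.0]))
```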
Abstract:
The classification of protein structures is an important and still outstanding problem. The purpose of this paper is threefold. First, we utilize a relation between the Tutte and HOMFLY polynomials to show that the Alexander-Conway polynomial can be algorithmically computed for a given planar graph. Second, as special cases of planar graphs, we use polymer graphs of protein structures. More precisely, we use three building blocks of three-dimensional protein structure (alpha-helix, antiparallel beta-sheet, and parallel beta-sheet) and calculate the Tutte polynomials of their corresponding polymer graphs analytically, by providing recurrence equations for all three secondary structure elements. Third, we present numerical results comparing our analytical calculations with the output of our algorithm, not only to test consistency but also to demonstrate that the assigned polynomials are unique labels of the secondary structure elements. This paves the way for an automatic classification of protein structures.
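For orientation, the generic recurrence behind such calculations is deletion-contraction: T(G) = y·T(G − e) for a loop e, x·T(G/e) for a bridge, and T(G − e) + T(G/e) otherwise. The sketch below is this textbook recursion (exponential in general, fine for small polymer graphs), not the paper's analytic recurrences.

```python
import sympy

x, y = sympy.symbols("x y")

def connected(edges, s, t):
    """Search over an edge list: is t reachable from s?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {s}, [s]
    while stack:
        n = stack.pop()
        if n == t:
            return True
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False

def tutte(edges):
    """T(G; x, y) for a multigraph given as a list of (u, v) pairs."""
    if not edges:
        return sympy.Integer(1)
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop
        return y * tutte(rest)
    merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if not connected(rest, u, v):                # bridge
        return x * tutte(merged)
    return tutte(rest) + tutte(merged)           # delete + contract

print(sympy.expand(tutte([(0, 1), (1, 2), (2, 0)])))   # K3: x**2 + x + y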
Abstract:
We present and analyze an algorithm to measure the structural similarity of generalized trees, a new graph class which includes rooted trees. For this, we represent structural properties of graphs as strings and define the similarity of two graphs as the optimal alignment of the corresponding property strings. We prove that the obtained graph similarity measures are so-called backward similarity measures. From this we find that the time complexity of our algorithm is polynomial and, hence, significantly better than the time complexity of classical graph similarity methods based on isomorphic relations. (c) 2006 Elsevier Inc. All rights reserved.
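The alignment step is standard dynamic programming; a minimal global-alignment (Needleman-Wunsch) scorer over two property strings is sketched below, with illustrative match/mismatch/gap scores. Its O(mn) cost is what keeps the overall similarity measure polynomial.

```python
def align_score(s, t, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment score of two property strings."""
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * gap
    for j in range(1, n + 1):
        D[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,
                          D[i - 1][j] + gap,
                          D[i][j - 1] + gap)
    return D[m][n]

# e.g. out-degree sequences of two trees, serialized as strings:
print(align_score("2102000", "2101000"))   # 5: one mismatch in 7 positions
```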
Abstract:
Measuring the structural similarity of graphs is a challenging and outstanding problem. Most classical approaches of the so-called exact graph matching methods are based on graph or subgraph isomorphic relations of the underlying graphs. In contrast to these methods, in this paper we introduce a novel approach to measuring the structural similarity of directed and undirected graphs that is mainly based on margins of feature vectors representing graphs. We introduce novel graph similarity and dissimilarity measures, provide some of their properties, and analyze their algorithmic complexity. We find that the computational complexity of our measures is polynomial in the graph size and, hence, significantly better than that of classical methods, e.g. exact graph matching, which are NP-complete. Numerically, we provide some examples of our measure and compare the results with the well-known graph edit distance. (c) 2006 Elsevier Inc. All rights reserved.
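As a stand-in for the margin-based feature vectors (which the abstract does not spell out), the sketch below represents each graph by a fixed-length degree histogram and compares graphs by cosine similarity; computing it is plainly polynomial in the graph size.

```python
import numpy as np
import networkx as nx

def degree_histogram_vector(G, bins=16):
    """Fixed-length, normalised degree histogram as a graph feature vector."""
    h = np.zeros(bins)
    for _, d in G.degree():
        h[min(d, bins - 1)] += 1
    return h / max(h.sum(), 1)

def cosine_similarity(G1, G2):
    a, b = degree_histogram_vector(G1), degree_histogram_vector(G2)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(nx.cycle_graph(10), nx.path_graph(10)))   # ~0.97
```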
Abstract:
Suppose C is a bounded chain complex of finitely generated free modules over the Laurent polynomial ring $L = R[x, x^{-1}]$. Then C is R-finitely dominated, i.e. homotopy equivalent over R to a bounded chain complex of finitely generated projective R-modules, if and only if the two chain complexes $C \otimes_L R((x))$ and $C \otimes_L R((x^{-1}))$ are acyclic, as has been proved by Ranicki (A. Ranicki, Finite domination and Novikov rings, Topology 34(3) (1995), 619-632). Here $R((x)) = R[[x]][x^{-1}]$ and $R((x^{-1})) = R[[x^{-1}]][x]$ are rings of formal Laurent series, also known as Novikov rings. In this paper, we prove a generalisation of this criterion which allows us to detect finite domination of bounded below chain complexes of projective modules over Laurent rings in several indeterminates.
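For readability, the quoted criterion can be set in display form (acyclic meaning vanishing homology):

```latex
C \text{ is } R\text{-finitely dominated}
\iff
H_*\bigl(C \otimes_L R((x))\bigr) = 0
\ \text{and}\
H_*\bigl(C \otimes_L R((x^{-1}))\bigr) = 0,
\qquad
\text{where } R((x)) = R[[x]][x^{-1}],\quad
R((x^{-1})) = R[[x^{-1}]][x].
```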
Abstract:
A new type of advanced encryption standard (AES) implementation using a normal basis is presented. The method is based on a lookup technique that makes use of inversion and shift registers, and leads to a smaller lookup size for the S-box than corresponding implementations. The reduction in lookup size comes from grouping inverses into conjugate sets, which in turn reduces the number of lookup values. The technique is implemented in a regular AES architecture using register files, which requires less interconnect and area and is suitable for security applications. The results of the implementation are competitive in throughput and area with corresponding solutions in a polynomial basis.
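To see where the reduction comes from: inversion commutes with the Frobenius map c ↦ c², so one stored inverse per conjugate set {c, c², c⁴, ...} suffices, and in a normal basis each squaring is just a cyclic shift of the bit pattern. The sketch below counts the conjugate sets of GF(2^8), using the AES reduction polynomial purely as a concrete representation: 36 sets instead of 256 table entries.

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1 (0x11B)."""
    acc = 0
    for _ in range(8):
        if b & 1:
            acc ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return acc

classes = set()
for c in range(256):
    orbit = {c}
    t = gf_mul(c, c)              # Frobenius: repeated squaring
    while t not in orbit:
        orbit.add(t)
        t = gf_mul(t, t)
    classes.add(frozenset(orbit))
print(len(classes))   # 36 conjugate sets instead of 256 inverse-table entries
```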