952 results for Chebyshev polynomial


Relevance: 10.00%

Publisher:

Abstract:

Proof nets provide abstract counterparts to sequent proofs modulo rule permutations; the idea being that if two proofs have the same underlying proof-net, they are in essence the same proof. Providing a convincing proof-net counterpart to proofs in the classical sequent calculus is thus an important step in understanding classical sequent calculus proofs. By convincing, we mean that (a) there should be a canonical function from sequent proofs to proof nets, (b) it should be possible to check the correctness of a net in polynomial time, (c) every correct net should be obtainable from a sequent calculus proof, and (d) there should be a cut-elimination procedure which preserves correctness. Previous attempts to give proof-net-like objects for propositional classical logic have failed at least one of the above conditions. In Richard McKinley (2010) [22], the author presented a calculus of proof nets (expansion nets) satisfying (a) and (b); the paper defined a sequent calculus corresponding to expansion nets but gave no explicit demonstration of (c). That sequent calculus, called LK∗ in this paper, is a novel one-sided sequent calculus with both additively and multiplicatively formulated disjunction rules. In this paper (a self-contained extended version of Richard McKinley (2010) [22]), we give a full proof of (c) for expansion nets with respect to LK∗, and in addition give a cut-elimination procedure internal to expansion nets – this makes expansion nets the first notion of proof-net for classical logic satisfying all four criteria.


We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
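The closed-form step described above, an SVD of the noisy data matrix followed by thresholding of its singular values, can be sketched as follows. Since the paper's exact polynomial thresholding operator is not reproduced here, the familiar soft-thresholding (shrinkage) operator it is contrasted with stands in; all data and names are illustrative.

```python
import numpy as np

def svd_soft_threshold(X, tau):
    """Shrink the singular values of X by tau (soft-thresholding).

    The paper's polynomial thresholding operator also acts on singular
    values but with minimal shrinkage; soft-thresholding is used here
    only as the familiar baseline it is compared against.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # zero out values below tau
    return U @ np.diag(s_shrunk) @ Vt

# Synthetic example: a rank-3 matrix plus small noise
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
X = L + 0.01 * rng.standard_normal((50, 40))
A = svd_soft_threshold(X, tau=1.0)
print(np.linalg.matrix_rank(A))  # low rank: the noise directions are removed
```

With the noise singular values well below the threshold, the recovered matrix is low-rank while the dominant subspace structure survives.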


OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as being included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
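PageRank, one of the two most effective strategies above, can be sketched in a few lines of power iteration. The citation graph below is a made-up toy, not data from the study.

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on a citation adjacency matrix.

    adj[i, j] = 1 means article i cites article j, so score flows
    from citing articles to the articles they cite.
    """
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Column-stochastic transition matrix; dangling nodes spread evenly.
    M = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)
    return r

# Toy citation graph: articles 0-3; every other article cites article 3.
adj = np.array([[0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
scores = pagerank(adj)
print(scores.argmax())  # article 3 (the most-cited one) ranks highest
```

Simple citation count would rank this toy graph the same way; the two measures diverge once the *sources* of citations differ in importance.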


An integrated approach for multi-spectral segmentation of MR images is presented. The method is based on fuzzy c-means (FCM) clustering and incorporates bias-field correction, contextual constraints on the spatial intensity distribution, and the non-spherical shape of clusters in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost functions. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure of the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance to account for the non-spherical shape of the clusters. These algorithms are quantitatively evaluated on MR brain images using similarity measures.
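The baseline FCM iteration that this method extends can be sketched as follows. The sketch deliberately omits the paper's additions (bias-field correction, contextual regularization, and the G-K distance) and runs on synthetic 1-D "intensity" data.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate membership and centroid updates.

    This is only the FCM core; the paper adds bias-field correction
    and spatial regularization terms on top of this scheme.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m                             # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated synthetic "tissue" intensity clusters
X = np.concatenate([np.full(20, 1.0), np.full(20, 5.0)])[:, None]
U, centers = fcm(X, c=2)
print(np.sort(centers.ravel()))  # centers near the two intensity levels
```

The Euclidean distance in `d` is exactly what the G-K variant replaces with a cluster-specific Mahalanobis-like metric.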


Expressing emotions has social functions; it provides information, affects social interactions, and shapes relationships with others. Expressing positive emotions could be a strategic tool for improving goal attainment during social interactions at work. Such effects have been found in research on social contagion, impression management, and emotion work. However, expressing emotions one does not feel entails the risk of being perceived as inauthentic. This risk may well be worth taking when the emotions felt are negative, as expressing negative emotions usually has negative effects. When experiencing positive emotions, however, expressing them authentically promises benefits, and the advantage of amplifying them is not so obvious. We postulated that expressing, and amplifying, positive emotions would foster goal attainment in social interactions at work, particularly when dealing with superiors. Analyses are based on 494 interactions involving the pursuit of a goal by 113 employees. Multilevel analyses, including polynomial analyses, show that the authentic display of positive emotions supported goal attainment throughout. However, amplifying felt positive emotions promoted goal attainment only in interactions with superiors, not with colleagues. Results are discussed with regard to the importance of hierarchy for detecting, and interpreting, signs of strategic display of positive emotions.


Intensity non-uniformity (bias field) correction, contextual constraints over the spatial intensity distribution, and the non-spherical shape of clusters in the feature space are incorporated into the fuzzy c-means (FCM) algorithm for the segmentation of three-dimensional multi-spectral MR images. The bias field is modeled by a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of either intensity or membership are added to the FCM cost functions. Since the feature space is not isotropic, distance measures other than the Euclidean distance are used to account for the shape and volumetric effects of clusters in the feature space. The performance of segmentation is improved by combining the adaptive FCM scheme with the criteria used in the Gustafson-Kessel (G-K) and Gath-Geva (G-G) algorithms through the inclusion of a cluster scatter measure. The performance of this integrated approach is quantitatively evaluated on normal MR brain images using similarity measures. The improvement in the quality of segmentation obtained with our method is also demonstrated by comparing our results with those produced by FSL (FMRIB Software Library), a software package that is commonly used for tissue classification.


Virtual colonoscopy (VC) is a minimally invasive means for identifying colorectal polyps and colorectal lesions by insufflating a patient’s bowel, applying contrast agent via rectal catheter, and performing multi-detector computed tomography (MDCT) scans. The technique is recommended for colonic health screening by the American Cancer Society but not funded by the Centers for Medicare and Medicaid Services (CMS), partially because of potential risks from radiation exposure. To date, no in-vivo organ dose measurements have been performed for MDCT scans; thus, the accuracy of any current dose estimates is unknown. In this study, two TLDs were affixed to the inner lumen of standard rectal catheters used in VC, and in-vivo rectal dose measurements were obtained within 6 VC patients. In order to calculate rectal dose, TLD-100 powder response was characterized at diagnostic doses such that appropriate correction factors could be determined for VC. A third-order polynomial regression with a goodness of fit of R2=0.992 was constructed from this data. Rectal dose measurements were acquired with TLDs during simulated VC within a modified anthropomorphic phantom configured to represent three sizes of patients undergoing VC. The measured rectal doses decreased in an exponential manner with increasing phantom effective diameter, with R2=0.993 for the exponential regression model and a maximum percent coefficient of variation (%CoV) of 4.33%. In-vivo measurements yielded rectal doses that decreased exponentially with increasing patient effective diameter, in a manner that was also favorably predicted by the size-specific dose estimate (SSDE) model for all VC patients of similar age, body composition, and TLD placement.
The measured rectal dose within a younger patient was favorably predicted by the anthropomorphic phantom dose regression model due to similarities in the percentages of highly attenuating material at the respective measurement locations and in the placement of the TLDs. The in-vivo TLD response did not increase in %CoV with decreasing dose, and the largest %CoV was 10.0%.
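The third-order polynomial calibration described above can be illustrated with `numpy.polyfit`. The calibration points below are invented stand-ins, not the study's TLD data.

```python
import numpy as np

# Hypothetical TLD calibration points (detector reading vs. delivered
# dose); the study's actual measurements are not reproduced here.
dose = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])        # mGy (illustrative)
reading = np.array([0.52, 1.01, 2.05, 5.3, 11.2, 23.5])  # reading (illustrative)

# Third-order polynomial calibration curve, as in the study design:
# dose is modeled as a cubic function of the TLD reading.
coeffs = np.polyfit(reading, dose, deg=3)
fit = np.polyval(coeffs, reading)

# Coefficient of determination R^2 for the fit
ss_res = np.sum((dose - fit) ** 2)
ss_tot = np.sum((dose - dose.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 4))
```

With smooth, monotone calibration data, a cubic fit yields R2 values close to 1, which is the role the R2=0.992 regression plays in the study.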


The objects of study in this thesis are knots. More precisely, positive braid knots, which include algebraic knots and torus knots. In the first part of this thesis, we compare two classical knot invariants - the genus g and the signature σ - for positive braid knots. Our main result on positive braid knots establishes a linear lower bound for the signature in terms of the genus. In the second part of the thesis, a positive braid approach is applied to the study of the local behavior of polynomial functions from the complex affine plane to the complex numbers. After endowing polynomial function germs with a suitable topology, the adjacency problem arises: for a fixed germ f, what classes of germs g can be found arbitrarily close to f? We introduce two purely topological notions of adjacency for knots and discuss connections to algebraic notions of adjacency and the adjacency problem.


We define an applicative theory of truth TPT which proves totality exactly for the polynomial time computable functions. TPT has natural and simple axioms, since nearly all its truth axioms are standard for truth theories over an applicative framework. The only exception is the axiom dealing with the word predicate: the truth predicate can only reflect elementhood in the words for terms that have smaller length than a given word. This makes it possible to achieve a very low proof-theoretic strength. Truth induction can be allowed without any constraints. For these reasons, the system TPT has the high expressive power one expects from truth theories. It allows embeddings of feasible systems of explicit mathematics and bounded arithmetic. The proof that the theory TPT is feasible is not easy: it is not possible to apply a standard realisation approach. For this reason we develop a new realisation approach whose realisation functions work on directed acyclic graphs. In this way, we can express and manipulate realisation information more efficiently.


The sensitivity of the gas flow field to changes in different initial conditions has been studied for the case of a highly simplified cometary nucleus model. The nucleus model simulated a homogeneously outgassing sphere with a more active ring around an axis of symmetry. The varied initial conditions were the number density of the homogeneous region, the surface temperature, and the composition of the flow (varying amounts of H2O and CO2) from the active ring. The sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) method. Direct Simulation Monte Carlo (DSMC) was used for the flow, thereby allowing strong deviations from local thermal equilibrium. The PCE approach can be used to produce a sensitivity analysis with only four runs per modified input parameter and allows one to study and quantify non-linear responses of measurable parameters to linear changes in the input over a wide range. Hence the PCE allows one to obtain a functional relationship between the flow field properties at every point in the inner coma and the input conditions. It is for example shown that the velocity and the temperature of the background gas are not simply linear functions of the initial number density at the source. As probably expected, the main influence on the resulting flow field parameter is the corresponding initial parameter (i.e. the initial number density determines the background number density, the temperature of the surface determines the flow field temperature, etc.). However, the velocity of the flow field is also influenced by the surface temperature while the number density is not sensitive to the surface temperature at all in our model set-up. Another example is the change in the composition of the flow over the active area. Such changes can be seen in the velocity but again not in the number density. 
Although this study uses only a simple test case, we suggest that the approach, when applied to a real case in 3D, should assist in identifying the sensitivity of gas parameters measured in situ by, for example, the Rosetta spacecraft to the surface boundary conditions and vice versa.
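The core of the PCE idea, building a polynomial surrogate of a flow quantity from only a handful of runs per input parameter, can be sketched as follows. The "model" here is a stand-in analytic formula, not a DSMC run, and the specific nodes and degree are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

# Stand-in "simulation": a nonlinear response of some flow quantity to
# one scaled input parameter (a real run would be DSMC, not a formula).
def model(x):
    return 2.0 + 0.5 * x + 0.3 * x**2

# Four runs over the scaled input range [-1, 1], matching the abstract's
# "four runs per modified input parameter".
nodes = np.array([-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0])
runs = model(nodes)

# Degree-3 Legendre expansion fitted to the four runs: the surrogate.
coeffs = legendre.legfit(nodes, runs, deg=3)
surrogate = legendre.legval(0.5, coeffs)
print(abs(surrogate - model(0.5)) < 1e-8)  # True: surrogate reproduces the model
```

The size of the higher-order coefficients is what quantifies non-linear response: here the degree-2 Legendre coefficient is non-zero (it carries the 0.3x² term), which in the cometary case flags quantities such as velocity and temperature that do not respond linearly to the source density.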


In this paper we continue Feferman’s unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996) which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them, which are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those being computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in (Eberhard and Strahm, Bull Symb Log 18(3):474–475, 2012).


We present applicative theories of words corresponding to weak, and especially logarithmic, complexity classes. The theories for the logarithmic hierarchy and alternating logarithmic time formalise function algebras with concatenation recursion as the main principle. We present two theories for logarithmic space: the first formalises a new two-sorted algebra which is very similar to Cook and Bellantoni's famous two-sorted algebra B for polynomial time [4]; the second describes logarithmic space by formalising concatenation- and sharply bounded recursion. All theories contain the predicates W, representing words, and V, representing temporarily inaccessible words. They are inspired by Cantini's theories [6] formalising B.


Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). In this context, both the correct associations among the observations and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S, where S stands for the number of ’fences’ used in the problem; each fence consists of a set of observations that all originate from different targets. For a dimension of S > 2, the MTT problem becomes NP-hard. As of now, no algorithm exists that can solve an NP-hard problem in an optimal manner within a reasonable (polynomial) computation time. However, there are algorithms that can approximate the solution with a realistic computational effort. To this end, an Elitist Genetic Algorithm is implemented to approximately solve the S > 2 MTT problem in an efficient manner. Its complexity is studied, and it is found that an approximate solution can be obtained in polynomial time. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
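A minimal elitist genetic algorithm, the kind of approximate solver described above, can be sketched on a toy bit-string objective; in the real problem the fitness would score candidate associations of observations across fences, which is not reproduced here.

```python
import random

def elitist_ga(fitness, genome_len, pop=30, gens=60, elite=2, seed=1):
    """Minimal elitist GA over bit-strings: the best `elite` genomes
    survive each generation unchanged (the 'elitist' part)."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        nxt = [g[:] for g in P[:elite]]          # elitism: keep the best as-is
        while len(nxt) < pop:
            a, b = rng.sample(P[:pop // 2], 2)   # parents from the fitter half
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(genome_len)] ^= 1   # single-bit mutation
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)

# Toy objective: the all-ones genome is optimal (a stand-in for an
# association score between observation fences).
best = elitist_ga(fitness=sum, genome_len=20)
print(sum(best))  # should reach or approach 20
```

Elitism guarantees the best solution found so far is never lost, which keeps the fitness trajectory monotone even though crossover and mutation are random.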


We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L2-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.


Steiner’s tube formula states that the volume of an ϵ-neighborhood of a smooth regular domain in R^n is a polynomial of degree n in the variable ϵ whose coefficients are curvature integrals (also called quermassintegrals). We prove a similar result in the sub-Riemannian setting of the first Heisenberg group. In contrast to the Euclidean setting, we find that the volume of an ϵ-neighborhood with respect to the Heisenberg metric is an analytic function of ϵ that is generally not a polynomial. The coefficients of the series expansion can be explicitly written in terms of integrals of iteratively defined canonical polynomials of just five curvature terms.
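The Euclidean formula referenced above can be written, in the standard quermassintegral notation, as:

```latex
% Steiner's tube formula in R^n: the volume of the epsilon-neighborhood
% Omega_epsilon of a smooth regular (e.g. convex) domain Omega is a
% polynomial in epsilon whose coefficients are the quermassintegrals W_k.
\[
  \operatorname{vol}\bigl(\Omega_\epsilon\bigr)
    = \sum_{k=0}^{n} \binom{n}{k}\, W_k(\Omega)\, \epsilon^{k},
  \qquad W_0(\Omega) = \operatorname{vol}(\Omega).
\]
```

It is exactly this polynomial dependence on ϵ that fails in the Heisenberg setting, where the volume is analytic in ϵ but generally not polynomial.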