953 results for Polynomial Automorphisms
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be represented by a continuous analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all of the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may produce significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
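The abstract does not spell out its integral-conserving scheme, so purely as a loose illustration of why Hermite-type interpolation is attractive here, the sketch below contrasts a shape-preserving Hermite (PCHIP) resampling, which cannot overshoot and therefore keeps non-negative data non-negative, with an ordinary cubic interpolant that can ring near sharp transitions. The grid and profile are invented for the example; this is not the paper's algorithm.

```python
# Illustration only: shape-preserving Hermite (PCHIP) resampling vs. an
# ordinary cubic spline on a non-negative, step-like profile.  This is not
# the paper's integral-conserving algorithm; grids and values are made up.
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

x_coarse = np.arange(11.0)                                       # original grid
values = np.array([0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1.])  # non-negative data

x_fine = np.linspace(0.0, 10.0, 201)                             # requested grid

hermite = PchipInterpolator(x_coarse, values)                    # monotone Hermite
spline = CubicSpline(x_coarse, values)                           # plain cubic spline

print("min of Hermite resampling:", hermite(x_fine).min())       # stays at 0
print("min of cubic resampling:  ", spline(x_fine).min())        # rings below 0
```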
Abstract:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind-deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions. Each set consists of 1000 simulated, turbulence-degraded images. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each for low, moderate, and severe turbulence-induced image degradation. The comparison shows that speckle-imaging techniques reduce the MSE by 46 percent, 42 percent, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent, and 27 percent are obtained using speckle imaging methods with 25 input frames, and of 38 percent, 34 percent, and 33 percent, respectively, for the MFBD method with 150 input frames. The MFBD estimator is also applied to three sets of field data and the results are presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and smaller variance in the estimate under all three simulated turbulence conditions.
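As a rough sketch of the Gaussian-noise model mentioned above (not the dissertation's actual estimator), the data-fidelity term an MFBD optimizer would minimize is a sum over frames of squared residuals between each observed frame and the current object estimate convolved with that frame's PSF estimate. The arrays below are synthetic placeholders chosen only so the snippet runs.

```python
# A minimal sketch of the Gaussian-noise data-fidelity term used in
# multi-frame blind deconvolution (MFBD): sum_k || d_k - h_k * o ||^2,
# where o is the object estimate and h_k the per-frame PSF estimate.
# Frames, PSFs and sizes are synthetic placeholders, not the simulation set-up.
import numpy as np

def mfbd_gaussian_cost(obj, psfs, frames):
    """Sum of squared residuals over all frames (circular convolution via FFT)."""
    obj_ft = np.fft.fft2(obj)
    cost = 0.0
    for psf, frame in zip(psfs, frames):
        model = np.real(np.fft.ifft2(obj_ft * np.fft.fft2(psf)))
        cost += np.sum((frame - model) ** 2)
    return cost

rng = np.random.default_rng(0)
obj = rng.random((64, 64))                    # placeholder object estimate
psfs = rng.random((15, 64, 64))               # placeholder per-frame PSF estimates
psfs /= psfs.sum(axis=(1, 2), keepdims=True)  # normalise each PSF to unit sum
frames = rng.random((15, 64, 64))             # placeholder observed frames

print("data-fidelity cost:", mfbd_gaussian_cost(obj, psfs, frames))
```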
Abstract:
We derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. An important example covered within this framework is testing for local monotonicity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. For multiscale testing, we consider a calibration motivated by the modulus of continuity of Brownian motion. We investigate the performance of our method from both a theoretical and a simulation-based point of view. A major consequence of our work is that the detection of qualitative features of a density in a deconvolution problem is a doable task, although the minimax rates for pointwise estimation are very slow.
Abstract:
We construct holomorphic families of proper holomorphic embeddings of $\mathbb{C}^k$ into $\mathbb{C}^n$ ($0 < k < n-1$), so that for any two different parameters in the family, no holomorphic automorphism of $\mathbb{C}^n$ can map the image of the corresponding two embeddings onto each other. As an application to the study of the group of holomorphic automorphisms of $\mathbb{C}^n$, we derive the existence of families of holomorphic $\mathbb{C}^*$-actions on $\mathbb{C}^n$ ($n \ge 5$) so that different actions in the family are not conjugate. This result is surprising in view of the long-standing holomorphic linearization problem, which, in particular, asked whether there would be more than one conjugacy class of $\mathbb{C}^*$-actions on $\mathbb{C}^n$ (with prescribed linear part at a fixed point).
Abstract:
In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝ^d, d = 2, 3. For the a posteriori analysis, computable upper and lower bounds on the error are derived in terms of a natural energy norm; these bounds are explicit in the local mesh size and the local polynomial degree of the approximating finite element method. A series of numerical experiments illustrates the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
Abstract:
In this note we survey recent results on automorphisms of affine algebraic varieties, infinitely transitive group actions and flexibility. We present related constructions and examples, and discuss geometric applications and open problems.
Abstract:
We introduce and analyze hp-version discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems in three-dimensional polyhedral domains. To resolve possible corner-, edge- and corner-edge singularities, we consider hexahedral meshes that are geometrically and anisotropically refined toward the corresponding neighborhoods. Similarly, the local polynomial degrees are increased linearly and possibly anisotropically away from singularities. We design interior penalty hp-dG methods and prove that they are well-defined for problems with singular solutions and stable under the proposed hp-refinements. We establish (abstract) error bounds that will allow us to prove exponential rates of convergence in the second part of this work.
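Purely as an illustration of the two refinement ingredients just described, geometric mesh grading toward a singularity and linearly increasing polynomial degrees away from it, the following sketch builds a one-dimensional graded mesh and a matching degree vector. The grading factor σ, the slope, and the number of layers are arbitrary choices for the example, not values from the paper.

```python
# Sketch of hp-refinement toward a corner at x = 0 on the unit interval:
# layers graded geometrically with ratio sigma, degrees growing linearly
# with slope s away from the corner.  All parameter values are illustrative.
import numpy as np

def geometric_mesh(sigma=0.5, n_layers=6):
    """Elements on (0, 1]: innermost (0, sigma^n], then (sigma^{k+1}, sigma^k]."""
    pts = np.concatenate(([0.0], sigma ** np.arange(n_layers, -1, -1.0)))
    return list(zip(pts[:-1], pts[1:]))          # n_layers + 1 elements

def linear_degree_vector(n_layers=6, slope=1.0, p0=1):
    """Degree p0 on the element touching the corner, growing linearly outward."""
    return [int(np.ceil(p0 + slope * k)) for k in range(n_layers + 1)]

for (a, b), p in zip(geometric_mesh(), linear_degree_vector()):
    print(f"element ({a:.4f}, {b:.4f}]  ->  degree {p}")
```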
Abstract:
The goal of this paper is to establish exponential convergence of $hp$-version interior penalty (IP) discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems with homogeneous Dirichlet boundary conditions and piecewise analytic data in three-dimensional polyhedral domains. More precisely, we shall analyze the convergence of the $hp$-IP dG methods considered in [D. Schötzau, C. Schwab, T. P. Wihler, SIAM J. Numer. Anal., 51 (2013), pp. 1610--1633] based on axiparallel $\sigma$-geometric anisotropic meshes and $\bm{s}$-linear anisotropic polynomial degree distributions.
Abstract:
A new anisotropic elastic-viscoplastic damage constitutive model for bone is proposed using an eccentric elliptical yield criterion and nonlinear isotropic hardening. A micromechanics-based multiscale homogenization scheme proposed by Reisinger et al. is used to obtain the effective elastic properties of lamellar bone. The dissipative process in bone is modeled as viscoplastic deformation coupled to damage. The model is based on an orthotropic eccentric elliptical criterion in stress space. In order to simplify material identification, an eccentric elliptical isotropic yield surface was defined in strain space, which is transformed to a stress-based criterion by means of the damaged compliance tensor. Viscoplasticity is implemented by means of the continuous Perzyna formulation. Damage is modeled by a scalar function of the accumulated plastic strain, D(κ), reducing all elements of the stiffness matrix. A polynomial flow rule is proposed in order to capture the rate-dependent post-yield behavior of lamellar bone. A numerical algorithm to perform the back projection onto the rate-dependent yield surface has been developed and implemented in the commercial finite element solver Abaqus/Standard as a user subroutine UMAT. A consistent tangent operator has been derived and implemented in order to ensure quadratic convergence. Correct implementation of the algorithm, convergence, and accuracy of the tangent operator were tested by means of strain- and stress-based single-element tests. A finite element simulation of nanoindentation in lamellar bone was finally performed in order to demonstrate the abilities of the newly developed constitutive model.
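As a loose sketch of two ingredients named above, a scalar damage variable D(κ) that scales the stiffness and a Perzyna-type overstress rule for the viscoplastic multiplier, the snippet below uses a placeholder damage evolution law, viscosity, and exponent. It is not the calibrated bone model or the UMAT implementation.

```python
# Minimal sketch: scalar damage D(kappa) uniformly reducing the elastic
# stiffness, and a Perzyna-type overstress rule for the plastic multiplier.
# Damage law, viscosity eta and exponent m are illustrative placeholders.
import numpy as np

def damaged_stiffness(C0, kappa, D_max=0.9, k=5.0):
    """Scale every entry of the undamaged stiffness C0 by (1 - D(kappa))."""
    D = D_max * (1.0 - np.exp(-k * kappa))       # placeholder damage evolution
    return (1.0 - D) * C0

def perzyna_multiplier_rate(yield_value, eta=1.0e3, m=1.0):
    """Perzyna overstress: multiplier rate = <f>^m / eta (zero if f <= 0)."""
    return max(yield_value, 0.0) ** m / eta

C0 = np.eye(6) * 10.0e3                          # placeholder isotropic stiffness (MPa)
print(damaged_stiffness(C0, kappa=0.02)[0, 0])   # reduced leading stiffness entry
print(perzyna_multiplier_rate(yield_value=3.5))  # nonzero only above yield
```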
Abstract:
Proof nets provide abstract counterparts to sequent proofs modulo rule permutations; the idea being that if two proofs have the same underlying proof-net, they are in essence the same proof. Providing a convincing proof-net counterpart to proofs in the classical sequent calculus is thus an important step in understanding classical sequent calculus proofs. By convincing, we mean that (a) there should be a canonical function from sequent proofs to proof nets, (b) it should be possible to check the correctness of a net in polynomial time, (c) every correct net should be obtainable from a sequent calculus proof, and (d) there should be a cut-elimination procedure which preserves correctness. Previous attempts to give proof-net-like objects for propositional classical logic have failed at least one of the above conditions. In Richard McKinley (2010) [22], the author presented a calculus of proof nets (expansion nets) satisfying (a) and (b); the paper defined a sequent calculus corresponding to expansion nets but gave no explicit demonstration of (c). That sequent calculus, called LK∗ in this paper, is a novel one-sided sequent calculus with both additively and multiplicatively formulated disjunction rules. In this paper (a self-contained extended version of Richard McKinley (2010) [22]), we give a full proof of (c) for expansion nets with respect to LK∗, and in addition give a cut-elimination procedure internal to expansion nets – this makes expansion nets the first notion of proof-net for classical logic satisfying all four criteria.
Abstract:
We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
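To illustrate just the final step mentioned above, building a data affinity from the low-rank self-expressive coefficients and clustering it spectrally, here is a minimal sketch. The coefficient matrix is random and the polynomial thresholding operator that would produce it in the paper is omitted.

```python
# Sketch of the clustering step: symmetrise the self-expressive coefficient
# matrix C into an affinity and apply spectral clustering.  C is random here,
# purely for illustration of the pipeline shape.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
C = rng.random((100, 100))                   # placeholder low-rank coefficients

affinity = np.abs(C) + np.abs(C.T)           # symmetrised data affinity
labels = SpectralClustering(
    n_clusters=3, affinity="precomputed", random_state=0
).fit_predict(affinity)

print(labels[:10])                           # cluster assignment of first points
```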
Abstract:
OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as those included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
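As a toy illustration of the citation-based ranking compared in the study (not the authors' evaluation code), a minimal PageRank power iteration over a small, made-up citation graph looks like this:

```python
# Minimal PageRank power iteration on a toy citation graph.  The adjacency
# matrix and damping factor are placeholders for illustration only.
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """adj[i, j] = 1 if article i cites article j; returns a score per article."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Articles with no outgoing citations distribute their weight uniformly.
    transition = np.where(out_deg[:, None] > 0,
                          adj / np.maximum(out_deg, 1)[:, None],
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * transition.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

citations = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [0, 0, 0]])            # article 2 is cited by both others
print(pagerank(citations))                   # highest score for article 2
```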
Abstract:
An integrated approach for multi-spectral segmentation of MR images is presented. The method is based on fuzzy c-means (FCM) clustering; it includes bias field correction and contextual constraints on the spatial intensity distribution, and it accounts for the non-spherical shape of clusters in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost functions. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure adopted in the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance, to account for the non-spherical shape of the clusters in the feature space. These algorithms are quantitatively evaluated on MR brain images using similarity measures.
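For orientation, a bare-bones fuzzy c-means iteration (the clustering core that the segmentation method extends) is sketched below; the bias-field model, contextual regularization, and Gustafson-Kessel distance described above are omitted, and the data, cluster count, and fuzzifier m are placeholder choices.

```python
# Bare-bones fuzzy c-means (FCM): alternate weighted centroid updates and
# membership updates.  Toy 2-D data stand in for multi-spectral intensities.
import numpy as np

def fcm(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                    # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))                # standard FCM update
        U /= U.sum(axis=1, keepdims=True)                # re-normalise rows
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, (50, 2)) for loc in (0.0, 2.0, 4.0)])
centers, memberships = fcm(X)
print(centers)                                           # one centroid per cluster
```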
Abstract:
Expressing emotions has social functions; it provides information, affects social interactions, and shapes relationships with others. Expressing positive emotions could be a strategic tool for improving goal attainment during social interactions at work. Such effects have been found in research on social contagion, impression management, and emotion work. However, expressing emotions one does not feel entails the risk of being perceived as inauthentic. This risk may well be worth taking when the emotions felt are negative, as expressing negative emotions usually has negative effects. When experiencing positive emotions, however, expressing them authentically promises benefits, and the advantage of amplifying them is not so obvious. We postulated that expressing, and amplifying, positive emotions would foster goal attainment in social interactions at work, particularly when dealing with superiors. Analyses are based on 494 interactions involving the pursuit of a goal by 113 employees. Multilevel analyses, including polynomial analyses, show that authentic display of positive emotions supported goal attainment throughout. However, amplifying felt positive emotions promoted goal attainment only in interactions with superiors, not with colleagues. Results are discussed with regard to the importance of hierarchy for detecting, and interpreting, signs of strategic display of positive emotions.
Abstract:
Intensity non-uniformity (bias field) correction, contextual constraints on the spatial intensity distribution, and the non-spherical shape of clusters in the feature space are incorporated into the fuzzy c-means (FCM) framework for segmentation of three-dimensional multi-spectral MR images. The bias field is modeled by a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of either intensity or membership are added to the FCM cost functions. Since the feature space is not isotropic, distance measures other than the Euclidean distance are used to account for the shape and volumetric effects of clusters in the feature space. The performance of segmentation is improved by combining the adaptive FCM scheme with the criteria used in the Gustafson-Kessel (G-K) and Gath-Geva (G-G) algorithms through the inclusion of the cluster scatter measure. The performance of this integrated approach is quantitatively evaluated on normal MR brain images using similarity measures. The improvement in the quality of segmentation obtained with our method is also demonstrated by comparing our results with those produced by FSL (FMRIB Software Library), a software package that is commonly used for tissue classification.