953 results for Polynomial Automorphisms
Abstract:
Virtual colonoscopy (VC) is a minimally invasive means of identifying colorectal polyps and colorectal lesions by insufflating a patient’s bowel, applying contrast agent via rectal catheter, and performing multi-detector computed tomography (MDCT) scans. The technique is recommended for colonic health screening by the American Cancer Society but is not funded by the Centers for Medicare and Medicaid Services (CMS), partially because of potential risks from radiation exposure. To date, no in-vivo organ dose measurements have been performed for MDCT scans; thus, the accuracy of current dose estimates is unknown. In this study, two TLDs were affixed to the inner lumen of standard rectal catheters used in VC, and in-vivo rectal dose measurements were obtained in six VC patients. In order to calculate rectal dose, TLD-100 powder response was characterized at diagnostic doses such that appropriate correction factors could be determined for VC. A third-order polynomial regression with a goodness of fit of R2 = 0.992 was constructed from these data. Rectal dose measurements were acquired with TLDs during simulated VC within a modified anthropomorphic phantom configured to represent three sizes of patients undergoing VC. The measured rectal doses decreased exponentially with increasing phantom effective diameter, with R2 = 0.993 for the exponential regression model and a maximum percent coefficient of variation (%CoV) of 4.33%. In-vivo measurements yielded rectal doses that decreased exponentially with increasing patient effective diameter, in a manner that was also well predicted by the size-specific dose estimate (SSDE) model for all VC patients of similar age, body composition, and TLD placement. The measured rectal dose in a younger patient was well predicted by the anthropomorphic phantom dose regression model, owing to similarities in the percentages of highly attenuating material at the respective measurement locations and in the placement of the TLDs. The in-vivo TLD response did not increase in %CoV with decreasing dose, and the largest %CoV was 10.0%.
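A minimal sketch in Python of the two regressions described above: a third-order polynomial calibration of TLD response against delivered dose, and an exponential fit of rectal dose against effective diameter. All numbers, array names, and the fit setup are hypothetical illustrations, not the study's data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical TLD calibration data: delivered dose (mGy) vs. TLD reading (nC).
dose = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
reading = np.array([0.9, 2.4, 5.1, 10.8, 23.0, 49.5])

# Third-order polynomial calibration curve (reading -> dose), as described above.
coeffs = np.polyfit(reading, dose, deg=3)
calib = np.poly1d(coeffs)
r2_poly = 1 - np.sum((dose - calib(reading))**2) / np.sum((dose - dose.mean())**2)

# Hypothetical phantom data: effective diameter (cm) vs. measured rectal dose (mGy).
diameter = np.array([22.0, 28.0, 34.0])
rectal_dose = np.array([14.0, 9.5, 6.4])

# Exponential regression: dose = a * exp(-b * diameter).
expo = lambda d, a, b: a * np.exp(-b * d)
(a, b), _ = curve_fit(expo, diameter, rectal_dose, p0=(50.0, 0.05))
r2_exp = 1 - np.sum((rectal_dose - expo(diameter, a, b))**2) \
           / np.sum((rectal_dose - rectal_dose.mean())**2)

print(f"polynomial R^2 = {r2_poly:.3f}, exponential R^2 = {r2_exp:.3f}")
```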
Abstract:
The objects of study in this thesis are knots. More precisely, positive braid knots, which include algebraic knots and torus knots. In the first part of this thesis, we compare two classical knot invariants - the genus g and the signature σ - for positive braid knots. Our main result on positive braid knots establishes a linear lower bound for the signature in terms of the genus. In the second part of the thesis, a positive braid approach is applied to the study of the local behavior of polynomial functions from the complex affine plane to the complex numbers. After endowing polynomial function germs with a suitable topology, the adjacency problem arises: for a fixed germ f, what classes of germs g can be found arbitrarily close to f? We introduce two purely topological notions of adjacency for knots and discuss connections to algebraic notions of adjacency and the adjacency problem.
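As background for the genus-signature comparison (a standard fact, not a result of this thesis): Seifert's algorithm applied to a positive braid word realizes the genus of its closure, which gives the genus g directly from the braid data.

```latex
% Closure of a positive braid on n strands with c crossings (assumed to be a knot):
% Seifert's algorithm produces a surface from n disks and c bands, so
\chi = n - c, \qquad g = \frac{1 - \chi}{2} = \frac{c - n + 1}{2}.
% Example: the (p,q)-torus knot, as a positive braid on p strands with (p-1)q crossings,
% has genus g = (p-1)(q-1)/2.
```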
Abstract:
We define an applicative theory of truth TPT which proves totality exactly for the polynomial-time computable functions. TPT has natural and simple axioms, since nearly all of its truth axioms are standard for truth theories over an applicative framework. The only exception is the axiom dealing with the word predicate: the truth predicate can only reflect elementhood in the words for terms that are shorter than a given word. This is what makes it possible to achieve such low proof-theoretic strength. Truth induction can be allowed without any constraints. For these reasons the system TPT has the high expressive power one expects from truth theories; it allows embeddings of feasible systems of explicit mathematics and bounded arithmetic. The proof that the theory TPT is feasible is not easy, and it is not possible to apply a standard realisation approach. For this reason we develop a new realisation approach whose realisation functions work on directed acyclic graphs. In this way, we can express and manipulate realisation information more efficiently.
Abstract:
The sensitivity of the gas flow field to changes in different initial conditions has been studied for the case of a highly simplified cometary nucleus model. The nucleus model simulated a homogeneously outgassing sphere with a more active ring around an axis of symmetry. The varied initial conditions were the number density of the homogeneous region, the surface temperature, and the composition of the flow (varying amounts of H2O and CO2) from the active ring. The sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) method. Direct Simulation Monte Carlo (DSMC) was used for the flow, thereby allowing strong deviations from local thermal equilibrium. The PCE approach can be used to produce a sensitivity analysis with only four runs per modified input parameter and allows one to study and quantify non-linear responses of measurable parameters to linear changes in the input over a wide range. Hence the PCE allows one to obtain a functional relationship between the flow field properties at every point in the inner coma and the input conditions. It is for example shown that the velocity and the temperature of the background gas are not simply linear functions of the initial number density at the source. As probably expected, the main influence on the resulting flow field parameter is the corresponding initial parameter (i.e. the initial number density determines the background number density, the temperature of the surface determines the flow field temperature, etc.). However, the velocity of the flow field is also influenced by the surface temperature while the number density is not sensitive to the surface temperature at all in our model set-up. Another example is the change in the composition of the flow over the active area. Such changes can be seen in the velocity but again not in the number density. Although this study uses only a simple test case, we suggest that the approach, when applied to a real case in 3D, should assist in identifying the sensitivity of gas parameters measured in situ by, for example, the Rosetta spacecraft to the surface boundary conditions and vice versa.
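A minimal non-intrusive PCE sketch in Python, using a toy analytic stand-in for one flow-field quantity rather than a DSMC run; the function `toy_model`, the input spread, and the four collocation points are all hypothetical. The response is projected onto probabilists' Hermite polynomials of the standardized input by least squares, and the higher-order coefficients quantify the non-linear part of the sensitivity.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def toy_model(n0):
    """Hypothetical stand-in for a flow-field quantity (e.g. gas velocity) as a
    function of the initial number density n0; the real case would run DSMC per sample."""
    return 550.0 + 40.0 * np.log(n0 / 1e17)

# Uncertain input: initial number density, standardized as n0 = mean + sigma * xi.
mean_n0, sigma = 1e17, 0.3e17
xi = np.array([-1.5, -0.5, 0.5, 1.5])     # four runs per modified input parameter
samples = mean_n0 + sigma * xi
response = np.array([toy_model(n) for n in samples])

# Degree-3 Hermite (probabilists') polynomial chaos expansion, fitted by least squares.
V = He.hermevander(xi, 3)                 # design matrix of He_0..He_3 evaluated at xi
coeffs, *_ = np.linalg.lstsq(V, response, rcond=None)

# coeffs[0] ~ mean response; coefficients of He_2, He_3 measure non-linear sensitivity,
# weighted by Var(He_n) = n! for a standard normal input.
print("PCE coefficients:", np.round(coeffs, 3))
print("variance from nonlinear terms:",
      np.round(np.sum(coeffs[2:]**2 * np.array([2.0, 6.0])), 3))
```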
Abstract:
In this paper we continue Feferman’s unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996) which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them, which are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those being computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in (Eberhard and Strahm, Bull Symb Log 18(3):474–475, 2012).
Abstract:
We present applicative theories of words corresponding to weak, and especially logarithmic, complexity classes. The theories for the logarithmic hierarchy and alternating logarithmic time formalise function algebras with concatenation recursion as their main principle. We present two theories for logarithmic space: the first formalises a new two-sorted algebra which is very similar to Cook and Bellantoni's famous two-sorted algebra B for polynomial time [4]; the second describes logarithmic space by formalising concatenation recursion and sharply bounded recursion. All theories contain the predicate W, representing words, and the predicate V, representing temporarily inaccessible words. They are inspired by Cantini's theories [6] formalising B.
Abstract:
In the last decades, affine algebraic varieties and Stein manifolds with big (infinite-dimensional) automorphism groups have been intensively studied. Several notions expressing that the automorphism group is big have been proposed, and all of them imply that the manifold in question is an Oka–Forstnerič manifold. This important notion has also recently emerged from the intensive studies around the homotopy principle in Complex Analysis. This homotopy principle, which goes back to the 1930s, has had an enormous impact on the development of the area of Several Complex Variables, and the number of its applications is constantly growing. In this overview chapter we present three classes of properties: (1) density property, (2) flexibility, and (3) Oka–Forstnerič. For each class we give the relevant definitions and its most significant features, and explain the known implications between all these properties. Many difficult mathematical problems could be solved by applying the developed theory; we indicate some of the most spectacular ones.
Abstract:
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). In this context both the correct associations among the observations, and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S. Where S stands for the number of ’fences’ used in the problem, each fence consists of a set of observations that all originate from dierent targets. For a dimension of S ˃ the MTT problem becomes NP-hard. As of now no algorithm exists that can solve an NP-hard problem in an optimal manner within a reasonable (polynomial) computation time. However, there are algorithms that can approximate the solution with a realistic computational e ort. To this end an Elitist Genetic Algorithm is implemented to approximately solve the S ˃ MTT problem in an e cient manner. Its complexity is studied and it is found that an approximate solution can be obtained in a polynomial time. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to e ciently process large data sets with minimal manual intervention.
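A minimal elitist genetic algorithm skeleton in Python, shown on a toy permutation objective rather than the observation-association problem of the thesis; the function `elitist_ga`, its parameters, and the toy fitness are hypothetical. The defining feature is that the best individuals are copied unchanged into the next generation, while tournament selection, crossover, and mutation produce the rest.

```python
import random

def elitist_ga(fitness, genome_len, pop_size=50, generations=200,
               elite_frac=0.1, mutation_rate=0.05):
    """Maximize `fitness` over permutations of range(genome_len) (a toy stand-in
    for associating observations across fences)."""
    pop = [random.sample(range(genome_len), genome_len) for _ in range(pop_size)]
    n_elite = max(1, int(elite_frac * pop_size))

    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        next_pop = ranked[:n_elite]                      # elitism: keep the best as-is
        while len(next_pop) < pop_size:
            # Tournament selection of two parents.
            a, b = (max(random.sample(ranked, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, genome_len)        # order-preserving crossover
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if random.random() < mutation_rate:          # swap mutation
                i, j = random.sample(range(genome_len), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy usage: reward permutations close to the identity association.
best = elitist_ga(lambda p: -sum(abs(i - g) for i, g in enumerate(p)), genome_len=8)
print(best)
```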
Abstract:
We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L^2-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.
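For orientation, exponential convergence results of this kind in three space dimensions are typically stated as a bound in the total number of degrees of freedom N; the sketch below shows only the usual shape of such a bound, with the precise norm, constants, and exponent being those of the paper itself.

```latex
% Typical form of an exponential convergence bound for hp-dG on sigma-geometric meshes:
\| u - u_{\mathrm{DG}} \|_{\mathrm{DG}} \;\le\; C \exp\!\bigl(-b\,N^{1/5}\bigr),
% with constants C, b > 0 independent of N, the number of degrees of freedom.
```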
Abstract:
Steiner’s tube formula states that the volume of an ϵ-neighborhood of a smooth regular domain in R^n is a polynomial of degree n in the variable ϵ whose coefficients are curvature integrals (also called quermassintegrals). We prove a similar result in the sub-Riemannian setting of the first Heisenberg group. In contrast to the Euclidean setting, we find that the volume of an ϵ-neighborhood with respect to the Heisenberg metric is an analytic function of ϵ that is generally not a polynomial. The coefficients of the series expansion can be explicitly written in terms of integrals of iteratively defined canonical polynomials of just five curvature terms.
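For comparison, the classical Euclidean statement referred to above can be written with the quermassintegrals W_j (in one common normalization; conventions vary by a binomial factor):

```latex
% Classical Steiner formula for a convex (or smooth regular) domain K in R^n:
\mathrm{Vol}\bigl(K_\epsilon\bigr) \;=\; \sum_{j=0}^{n} \binom{n}{j}\, W_j(K)\,\epsilon^{\,j},
% a polynomial of degree n in \epsilon whose coefficients W_j(K) are the quermassintegrals;
% in the Heisenberg setting this polynomial is replaced by a power series in \epsilon.
```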
Abstract:
A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less to be comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to determine the infinity focal length at two lens settings (24mm and 35mm) with a precision of 0.1mm. A known focal length is required for creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at distances on a test range allowed estimates of focal lengths of 25.1mm and 35.4mm for the 24mm and 35mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion found at the 24mm setting and virtually no distortion at the 35mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints were created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location. Post-mission processing (rectification) of imagery for removal of lens distortion effects, correction of imagery for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates was performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded a median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
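A minimal sketch in Python of the second-order polynomial warping used in the rectification step: image coordinates are mapped to ground coordinates by fitting two quadratic bivariate polynomials by least squares against control points. The function names, control-point coordinates, and ground coordinates are hypothetical, not the thesis data.

```python
import numpy as np

def fit_poly2_warp(img_xy, ground_xy):
    """Fit x' = f(x, y), y' = g(x, y) with f, g second-order bivariate polynomials.
    Needs at least six well-distributed control points."""
    x, y = img_xy[:, 0], img_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    cx, *_ = np.linalg.lstsq(A, ground_xy[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, ground_xy[:, 1], rcond=None)
    return cx, cy

def apply_poly2_warp(cx, cy, img_xy):
    x, y = img_xy[:, 0], img_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return np.column_stack([A @ cx, A @ cy])

# Hypothetical control points: pixel coordinates and surveyed ground coordinates (m).
img = np.array([[10, 20], [500, 40], [980, 30], [40, 380], [960, 400],
                [60, 700], [520, 690], [950, 720]], dtype=float)
gnd = np.array([[1002.6, 2005.1], [1125.3, 2009.8], [1244.9, 2007.9],
                [1010.2, 2095.4], [1240.1, 2100.6],
                [1015.7, 2175.3], [1130.4, 2172.8], [1237.8, 2180.9]])

cx, cy = fit_poly2_warp(img, gnd)
residuals = apply_poly2_warp(cx, cy, img) - gnd
print("RMS residual at control points (m):", np.sqrt((residuals**2).mean()))
```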
Abstract:
An introduction to Legendre polynomials as a precursor to studying angular momentum in quantum chemistry.
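A short Python illustration (not part of the original note) evaluating Legendre polynomials via Bonnet's recurrence, which is the usual starting point for the angular-momentum discussion; the function name is mine.

```python
def legendre(n, x):
    """Evaluate P_n(x) using Bonnet's recurrence:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    p_prev, p_curr = 1.0, x          # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

# P_2(0.5) = (3*0.25 - 1)/2 = -0.125
print(legendre(2, 0.5))
```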
Abstract:
The Frobenius (power-series) solution to the differential equations associated with the quantum-mechanical harmonic oscillator is carried out in detail.
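The central step of that treatment, written out here as standard textbook material (notation is mine): substituting a series ansatz into the dimensionless oscillator equation yields a two-term recurrence whose termination condition quantizes the energy.

```latex
% Dimensionless oscillator equation and series (Frobenius) substitution:
\psi'' + (\varepsilon - \xi^2)\,\psi = 0, \qquad
\psi(\xi) = h(\xi)\, e^{-\xi^2/2}, \qquad h(\xi) = \sum_{k \ge 0} a_k \xi^k,
% which gives h'' - 2\xi h' + (\varepsilon - 1) h = 0 and the recurrence
a_{k+2} \;=\; \frac{2k + 1 - \varepsilon}{(k+1)(k+2)}\; a_k .
% Requiring the series to terminate (a Hermite polynomial) forces
% \varepsilon = 2n + 1, i.e. E_n = (n + 1/2)\hbar\omega.
```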
Abstract:
A single-issue spatial election is a voter preference profile derived from an arrangement of candidates and voters on a line, with each voter preferring the nearer of each pair of candidates. We provide a polynomial-time algorithm that determines whether a given preference profile is a single-issue spatial election and, if so, constructs such an election. This result also has preference representation and mechanism design applications.
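A minimal Python sketch of the forward direction only, deriving the preference profile from a given spatial arrangement as in the definition above; the recognition algorithm itself, which is the paper's contribution, is not reproduced here. The function name, positions, and tie-breaking rule are hypothetical.

```python
def spatial_profile(candidate_pos, voter_pos):
    """Given positions on a line, return each voter's ranking of candidates,
    nearest first (ties broken by candidate position, then name, for determinism)."""
    return {
        voter: sorted(candidate_pos,
                      key=lambda c: (abs(candidate_pos[c] - x), candidate_pos[c], c))
        for voter, x in voter_pos.items()
    }

candidates = {"a": 0.0, "b": 4.0, "c": 9.0}
voters = {"v1": 1.0, "v2": 5.0, "v3": 8.0}
print(spatial_profile(candidates, voters))
# v1 -> ['a', 'b', 'c'], v2 -> ['b', 'c', 'a'], v3 -> ['c', 'b', 'a']
```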
Abstract:
The radial part of the Schrödinger equation for the H-atom's electron involves Laguerre polynomials, hence this introduction.
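For reference (standard textbook material, not quoted from the note), the bound-state radial solutions are associated Laguerre polynomials multiplied by a power and a decaying exponential:

```latex
% Radial part of the hydrogen-atom wavefunction (bound states), with \rho = 2r/(n a_0):
R_{n\ell}(r) \;=\; N_{n\ell}\, \rho^{\ell}\, e^{-\rho/2}\, L_{n-\ell-1}^{2\ell+1}(\rho),
% where L_{n-\ell-1}^{2\ell+1} is an associated Laguerre polynomial, N_{n\ell} is a
% normalization constant, n = 1, 2, \dots and \ell = 0, \dots, n-1.
```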