996 results for Trigonometric moment problem
Abstract:
We derive convolution-multiplication properties of the discrete cosine transform II (DCT-II) starting from equivalent discrete Fourier transform (DFT) representations. Using these expressions, we present a method for implementing linear filtering through block convolution in the DCT-II domain. For a nonsymmetric impulse response, an additional discrete sine transform II (DST-II) is required to implement the filter in the DCT-II domain, whereas for a symmetric impulse response no additional transform is needed. Comparison with a recently proposed circular convolution technique in the DCT-II domain shows that the proposed method is computationally more efficient.
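The paper's DCT-II scheme is derived from equivalent DFT representations; the generic DFT-domain mechanism it builds on, linear filtering through block convolution (overlap-add), can be sketched as follows. The block length and test signals below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def overlap_add_filter(x, h, block_len=64):
    """Linear filtering through block convolution (overlap-add) in the
    DFT domain: each block is convolved with h by pointwise multiplication
    of zero-padded FFTs, and the overlapping tails are summed."""
    m = len(h)
    n_fft = block_len + m - 1              # linear-convolution length of one block
    H = np.fft.rfft(h, n_fft)              # filter spectrum, computed once
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        seg = np.fft.irfft(np.fft.rfft(block, n_fft) * H, n_fft)
        end = min(start + n_fft, len(y))
        y[start:end] += seg[:end - start]  # add the overlapping tail
    return y

# Usage: agrees with direct linear convolution.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(31)
```

The DCT-II method of the paper replaces the DFT pair here with DCT-II (and, for nonsymmetric filters, DST-II) transforms of the blocks.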
Abstract:
Evidence-based policy is a means of ensuring that policy is informed by more than ideology or expedience. However, what constitutes robust evidence is highly contested. In this paper, we argue that policy must draw on both quantitative and qualitative data. We do this in relation to a long-entrenched problem in Australian early childhood education and care (ECEC) workforce policy. A critical shortage of qualified staff threatens the attainment of broader child and family policy objectives linked to the provision of ECEC, and it has not been successfully addressed by initiatives to date. We establish some of the limitations of existing quantitative data sets and consider the potential of qualitative studies to inform ECEC workforce policy. The adoption of both quantitative and qualitative methods is needed to illuminate the complex nature of the work undertaken by early childhood educators, as well as the environmental factors that sustain job satisfaction in a demanding and poorly understood working environment.
Abstract:
This paper describes an approach based on Zernike moments and Delaunay triangulation for localizing handwritten text in machine-printed text documents. The Zernike moments of the image are first evaluated, and the text is classified as handwritten using a nearest-neighbor classifier. These features are invariant to size, slant, orientation, translation and other variations in handwritten text. We then use Delaunay triangulation to reclassify the misclassified text regions: imposing a Delaunay triangulation on the centroid points of the connected components, we extract features based on the triangles and reclassify the text. Noise components in the document are removed as a preprocessing step, so the method works well on noisy documents. The success rate of the method is found to be 86%; for specific handwritten elements such as signatures, the accuracy is even higher, at 93%.
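A minimal sketch of the triangulation step may help. The feature choice (triangle area) and the sample points below are illustrative assumptions, not the paper's actual feature set:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_features(centroids):
    """Delaunay-triangulate connected-component centroids and return
    the triangles plus one simple per-triangle feature (area)."""
    tri = Delaunay(centroids)
    areas = []
    for simplex in tri.simplices:          # each simplex is 3 point indices
        a, b, c = centroids[simplex]
        u, v = b - a, c - a
        # area from the 2D cross product of two edge vectors
        areas.append(0.5 * abs(u[0] * v[1] - u[1] * v[0]))
    return tri.simplices, np.array(areas)

# Usage: four hypothetical centroids forming a near-square quadrilateral
# triangulate into two triangles whose areas sum to the quadrilateral's area.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.1, 1.0]])
simplices, areas = triangle_features(pts)
```

In the paper's pipeline, such per-triangle features are then fed to the classifier to reclassify text regions.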
Abstract:
We discuss a technique for solving the Landau-Zener (LZ) problem of finding the probability of excitation in a two-level system. The idea of time reversal for the Schrödinger equation is employed to obtain the state reached at the final time and hence the excitation probability. Using this method, which reproduces the well-known expression for the LZ transition probability, we solve a variant of the LZ problem in which the system waits at the minimum gap for a time t_w; we find an exact expression for the excitation probability as a function of t_w. We provide numerical results to support our analytical expressions. We then discuss the problem of waiting at the quantum critical point of a many-body system and calculate the residual energy generated by the time-dependent Hamiltonian. Finally, we discuss possible experimental realizations of this work.
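The well-known LZ expression can be checked by direct integration. The sketch below assumes the standard conventions H(t) = (vt/2)σ_z + (Δ/2)σ_x with ħ = 1 (not the paper's time-reversal method, and without the waiting-time variant), and compares the final diabatic-state population with P = exp(−πΔ²/(2v)):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lz_numeric(delta, v, t_max=40.0):
    """Integrate the two-level Schrodinger equation through the avoided
    crossing and return the excitation (diabatic-survival) probability."""
    def rhs(t, psi):
        # H(t) = (v t / 2) sigma_z + (delta / 2) sigma_x, hbar = 1
        H = np.array([[v * t / 2, delta / 2],
                      [delta / 2, -v * t / 2]])
        return -1j * (H @ psi)

    psi0 = np.array([1.0 + 0j, 0.0 + 0j])   # ground state at t = -t_max
    sol = solve_ivp(rhs, [-t_max, t_max], psi0, rtol=1e-9, atol=1e-9)
    return abs(sol.y[0, -1]) ** 2            # probability of staying diabatic

delta, v = 0.5, 1.0
p_numeric = lz_numeric(delta, v)
p_analytic = np.exp(-np.pi * delta**2 / (2 * v))
```

For these parameters the two values agree to within the finite-sweep-window error of the simulation.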
Abstract:
Let G = (V, E) be a simple, finite, undirected graph. For S ⊆ V, let δ(S, G) = {(u, v) ∈ E : u ∈ S and v ∈ V − S} and φ(S, G) = {v ∈ V − S : there exists u ∈ S such that (u, v) ∈ E} be the edge and vertex boundary of S, respectively. Given an integer i, 1 ≤ i ≤ |V|, the edge and vertex isoperimetric values at i are defined as b_e(i, G) = min_{S ⊆ V, |S| = i} |δ(S, G)| and b_v(i, G) = min_{S ⊆ V, |S| = i} |φ(S, G)|, respectively. The edge (vertex) isoperimetric problem is to determine b_e(i, G) (respectively b_v(i, G)) for each i, 1 ≤ i ≤ |V|. If we add the restriction that S must induce a connected subgraph of G, the corresponding variation is known as the connected isoperimetric problem, and the connected edge (vertex) isoperimetric values are defined analogously. It turns out that the connected edge and connected vertex isoperimetric values are equal at each i, 1 ≤ i ≤ |V|, if G is a tree; we therefore write b_c(i, T) for the connected edge (vertex) isoperimetric value of a tree T at i. Hofstadter introduced the interesting concept of meta-Fibonacci sequences in his famous book "Gödel, Escher, Bach: An Eternal Golden Braid". The sequence he introduced is known as the Hofstadter sequence, and most of the problems he raised regarding it are still open. Since then, mathematicians have studied many other closely related meta-Fibonacci sequences, such as the Tanny, Conway and Conolly sequences. Let T_2 be the infinite complete binary tree. In this paper we relate the connected isoperimetric problem on T_2 to the Tanny sequence, which is defined by the recurrence a(i) = a(i − 1 − a(i − 1)) + a(i − 2 − a(i − 2)), a(0) = a(1) = a(2) = 1. In particular, we show that b_c(i, T_2) = i + 2 − 2a(i) for each i ≥ 1.
We also propose efficient polynomial-time algorithms to find the vertex isoperimetric values at each i for graphs of bounded pathwidth and bounded treewidth.
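The Tanny recurrence and the closed form for b_c(i, T_2) quoted above are easy to tabulate; a short sketch (the function names are ours, not the paper's):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tanny(i):
    """Tanny sequence: a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2)),
    with a(0) = a(1) = a(2) = 1."""
    if i <= 2:
        return 1
    return tanny(i - 1 - tanny(i - 1)) + tanny(i - 2 - tanny(i - 2))

def b_c(i):
    """Connected isoperimetric value of the infinite complete binary tree
    T_2 at i, via the paper's formula b_c(i, T_2) = i + 2 - 2 a(i)."""
    return i + 2 - 2 * tanny(i)

# First few terms of the Tanny sequence: 1, 1, 1, 2, 2, 2, 3, ...
print([tanny(i) for i in range(7)])
```

Memoization (`lru_cache`) keeps the nested recursion from recomputing earlier terms.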
Abstract:
A finite element method (FEM) based forward solver is developed for the forward problem of 2D electrical impedance tomography (EIT). The method of weighted residuals with a Galerkin approach is used for the FEM formulation of the EIT forward problem. The algorithm is written in MATLAB 7.0, and the forward problem is studied with a practical biological phantom developed for the purpose. The EIT governing equation is solved numerically to calculate the surface potentials at the phantom boundary for a uniform conductivity. An EIT phantom is developed with an array of 16 electrodes placed on the inner surface of a phantom tank filled with KCl solution. A sinusoidal current is injected through the current electrodes, and the differential potentials across the voltage electrodes are measured. The measured data are compared with the differential potentials calculated for the known current and solution conductivity; from this comparison, sources of error are identified so that data quality can be improved for better image reconstruction.
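A minimal sketch of the Galerkin FEM machinery involved: a 1D toy version of the conductivity equation −(σu′)′ = 0 with uniform conductivity and Dirichlet boundary potentials, assembled from linear elements. This is only an illustration of the assembly pattern, not the paper's 2D EIT solver with current injection:

```python
import numpy as np

def fem_1d_potential(sigma=1.0, n_elems=8, u_left=0.0, u_right=1.0):
    """Galerkin FEM for -(sigma u')' = 0 on [0, 1] with linear elements
    and Dirichlet potentials at both ends."""
    n_nodes = n_elems + 1
    h = 1.0 / n_elems
    K = np.zeros((n_nodes, n_nodes))
    # element stiffness matrix for a linear element: (sigma/h) [[1,-1],[-1,1]]
    ke = (sigma / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke          # assemble into the global matrix
    f = np.zeros(n_nodes)
    # impose Dirichlet conditions by row replacement
    K[0, :] = 0.0; K[0, 0] = 1.0; f[0] = u_left
    K[-1, :] = 0.0; K[-1, -1] = 1.0; f[-1] = u_right
    return np.linalg.solve(K, f)

u = fem_1d_potential()
# With uniform conductivity the nodal potentials are linear in position.
```

The 2D EIT formulation follows the same assemble-and-solve pattern with triangular elements and current (Neumann) boundary data at the electrodes.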
Abstract:
Increasing numbers of medical schools in Australia and overseas have moved away from didactic teaching methodologies and embraced problem-based learning (PBL) to improve clinical reasoning and communication skills, as well as to encourage self-directed lifelong learning. In January 2005, the first cohort of students entered the new MBBS program at the Griffith University School of Medicine, Gold Coast, embarking on a fully integrated PBL curriculum that combines electronic delivery, communication and evaluation systems incorporating the cognitive principles that underpin the PBL process. This chapter examines the educational philosophies and the design of the e-learning environment underpinning the processes developed to deliver, monitor and evaluate the curriculum. The key initiatives taken to promote student engagement, and the innovative and distinctive approaches to student learning promoted within the conceptual model for the curriculum, are (a) student engagement, (b) pastoral care, (c) staff engagement, (d) monitoring and (e) curriculum/program review. © 2007 Springer-Verlag Berlin Heidelberg.
Abstract:
In this study, the stability of an anchored cantilever sheet pile wall in sandy soils is investigated using reliability analysis. Target stability is formulated as an optimization problem in the framework of an inverse first-order reliability method. A sensitivity analysis is conducted to investigate the effect of the parameters influencing the stability of the wall. Backfill soil properties, the soil-steel pile interface friction angle, the depth of the water table from the top of the wall, the total depth of embedment below the dredge line, the yield strength of the steel, the section modulus of the steel sheet pile, and the anchor pull are all treated as random variables. The sheet pile wall system is modeled as a series combination of failure modes. Penetration depth, anchor pull and section modulus are calculated for various target component and system reliability indices based on three limit states: rotational failure about the position of the anchor rod, expressed as a moment ratio; sliding failure, expressed as a force ratio; and flexural failure of the steel sheet pile wall, expressed as a section modulus ratio. Reliability-based design charts are proposed that account for the failure criteria as well as the variability in the parameters, and the results of the study are compared with studies in the literature.
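A minimal sketch of the first-order reliability machinery behind such an analysis: a single linear limit state g = R − S with independent normal resistance and load. The numbers are illustrative only; the paper's inverse FORM works with multiple correlated variables and three limit states:

```python
import numpy as np
from scipy.stats import norm

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order (Hasofer-Lind) reliability index and failure probability
    for the linear limit state g = R - S with independent normal R and S."""
    beta = (mu_r - mu_s) / np.hypot(sigma_r, sigma_s)
    p_f = norm.cdf(-beta)       # failure probability P(g < 0)
    return beta, p_f

# Usage: hypothetical resistance N(10, 2) against load N(5, 1).
beta, p_f = reliability_index(10.0, 2.0, 5.0, 1.0)
```

In an inverse formulation, as in the paper, one instead fixes a target β and solves for a design parameter (e.g. penetration depth or section modulus) that achieves it.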
Abstract:
The stress concentration that occurs when load is diffused from a constant-stress member into a thin sheet is an important problem in the design of lightweight structures. By using solutions in biharmonic polar-trigonometric series, the stress concentration can be effectively isolated, so that the highly accurate information necessary for design can be obtained. A method of analysis yielding high accuracy with limited effort is presented for rectangular panels with transverse edges free or supported by inextensional end ribs. Numerical data are given for panels whose length is twice their width.
Abstract:
It is well known that the numerical accuracy of a series solution to a boundary-value problem by the direct method depends on the technique of approximate satisfaction of the boundary conditions and on the stage of truncation of the series. On the other hand, it does not appear to be generally recognized that, when the boundary conditions can be described in alternative equivalent forms, the convergence of the solution is significantly affected by the actual form in which they are stated. The importance of the last aspect is studied for three different techniques of computing the deflections of simply supported regular polygonal plates under uniform pressure. It is also shown that it is sometimes possible to modify the technique of analysis to make the accuracy independent of the description of the boundary conditions.
Abstract:
It is shown that there is a strict one-to-one correspondence between results obtained by the use of "restricted" variational principles and those obtained by a moment method of the Mott-Smith type for shock structure.
Abstract:
The formal charge distribution, and hence the electric moments, of a number of halosilanes and their methyl derivatives have been calculated by the method of Image and Image. The difference between the observed and calculated values in simple halosilanes is attributed to a change in the hybridization of the terminal halogen atom, and in methyl halosilanes to the enhanced electron release of the methyl group towards silicon compared with carbon.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.