184 results for Higher-order functions
Abstract:
Interpolation techniques for spatial data have been applied frequently in various fields of the geosciences. Although most conventional interpolation methods assume that it is sufficient to use first- and second-order statistics to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data, which can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function (CPDF) is approximated via polynomial expansions, which is then utilized to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals for the interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data, and this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.
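The last step the abstract describes, turning an approximated CPDF into an interpolated value (an expectation) and a prediction interval, can be sketched numerically. This is a minimal grid-based illustration, not the paper's polynomial-expansion machinery; the Gaussian-shaped toy density is an assumption for demonstration.

```python
import numpy as np

def cpdf_expectation_and_interval(z, pdf, alpha=0.05):
    """Given a conditional density approximated on a uniform grid `z`,
    return the interpolated value (the expectation) and a (1 - alpha)
    prediction interval read off the empirical CDF."""
    dz = z[1] - z[0]
    pdf = np.clip(pdf, 0.0, None)       # truncated expansions can dip negative
    pdf = pdf / (pdf.sum() * dz)        # renormalise to a proper density
    mean = (z * pdf).sum() * dz         # interpolated value as an expectation
    cdf = np.cumsum(pdf) * dz           # empirical CDF on the grid
    lo = np.interp(alpha / 2, cdf, z)   # lower quantile
    hi = np.interp(1 - alpha / 2, cdf, z)  # upper quantile
    return mean, (lo, hi)

# Toy example: a Gaussian-shaped CPDF centred at 2.0 on a grid.
z = np.linspace(-2.0, 6.0, 801)
pdf = np.exp(-0.5 * (z - 2.0) ** 2)
mean, (lo, hi) = cpdf_expectation_and_interval(z, pdf)
```

For a proper density the interval endpoints bracket the expectation, which is how the paper quantifies interpolation uncertainty.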
Abstract:
Summary form only given. Geometric simplicity, efficiency and polarization purity make slot antenna arrays ideal solutions for many radar, communications and navigation applications, especially when high power, light weight and limited scan volume are priorities. Resonant arrays of longitudinal slots have a slot spacing of one-half guide wavelength at the design frequency, so that the slots are located at the standing-wave peaks. Planar arrays are implemented using a number of rectangular waveguides (branch-line guides) arranged side by side, while main-line waveguides located behind, and at right angles to, the branch lines excite the radiating waveguides via centered-inclined coupling slots. Planar slotted waveguide arrays radiate broadside beams, and all radiators are designed to be in phase.
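The half-guide-wavelength slot spacing mentioned above depends on the TE10 guide wavelength, which a short calculation can illustrate. The WR-90 dimensions and 10 GHz design frequency below are assumed example values, not taken from the summary.

```python
import math

def te10_guide_wavelength(freq_hz, a_m):
    """Guide wavelength of the TE10 mode in a rectangular waveguide of
    broad-wall width a_m, valid above the cutoff frequency c / (2a)."""
    c = 299_792_458.0              # speed of light (m/s)
    lam0 = c / freq_hz             # free-space wavelength
    ratio = lam0 / (2 * a_m)       # lambda0 / lambda_cutoff
    if ratio >= 1.0:
        raise ValueError("frequency below TE10 cutoff")
    return lam0 / math.sqrt(1.0 - ratio ** 2)

# WR-90 guide (a = 22.86 mm) at 10 GHz: the resonant slot spacing is
# half the guide wavelength, placing slots on the standing-wave peaks.
lam_g = te10_guide_wavelength(10e9, 0.02286)
slot_spacing = lam_g / 2
```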
Abstract:
Diabetic macular edema (DME) is one of the most common causes of visual loss among diabetes mellitus patients. Early detection and successive treatment may improve visual acuity. DME is mainly graded into non-clinically significant macular edema (NCSME) and clinically significant macular edema according to the location of hard exudates in the macula region. DME can be identified by manual examination of fundus images, but this is laborious and resource-intensive. Hence, in this work, automated grading of DME is proposed using higher-order spectra (HOS) of Radon transform projections of the fundus images. We have used third-order cumulants and bispectrum magnitude as features in this work, and compared their performance. They can capture subtle changes in the fundus image. Spectral regression discriminant analysis (SRDA) reduces the feature dimension, and the minimum redundancy maximum relevance method is used to rank the significant SRDA components. Ranked features are fed to various supervised classifiers, viz. Naive Bayes, AdaBoost and support vector machine, to discriminate the No DME, NCSME and clinically significant macular edema classes. The performance of our system is evaluated using the publicly available MESSIDOR dataset (300 images) and also verified with a local dataset (300 images). Our results show that HOS cumulants and bispectrum magnitude obtained average accuracies of 95.56 % and 94.39 % for the MESSIDOR dataset and 95.93 % and 93.33 % for the local dataset, respectively.
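The third-order cumulant features mentioned above are straightforward to estimate from a 1-D signal. The following is a minimal sketch, not the authors' Radon-projection pipeline, and it uses synthetic signals in place of fundus data.

```python
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Biased sample estimate of the third-order cumulant
    C3(tau1, tau2) = E[x(n) x(n+tau1) x(n+tau2)] of a zero-mean signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # enforce zero mean
    n = len(x) - max(tau1, tau2, 0)
    return np.mean(x[:n] * x[tau1:tau1 + n] * x[tau2:tau2 + n])

# A symmetric signal has (near-)zero odd-order cumulants, while a
# skewed one does not -- this asymmetry is what HOS features capture.
rng = np.random.default_rng(0)
sym = rng.normal(size=50_000)
skew = rng.exponential(size=50_000) - 1.0  # zero mean but skewed
c_sym = third_order_cumulant(sym, 1, 2)
c_skew = third_order_cumulant(skew, 0, 0)  # the third central moment
```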
Abstract:
Efficient and accurate geometric and material nonlinear analysis of structures under ultimate loads is the backbone of integrated analysis and design, the performance-based design approach and progressive collapse analysis. This paper presents an advanced computational technique, a higher-order element formulation with the refined plastic hinge approach, which can evaluate concrete and steel-concrete structures prone to nonlinear material effects (i.e. gradual yielding, full plasticity, the strain-hardening effect under interaction between axial and bending actions, and load redistribution) as well as nonlinear geometric effects (i.e. the second-order P-δ and P-Δ effects and their associated strength and stiffness degradation). Further, this paper also presents the cross-section analysis used to formulate the refined plastic hinge approach.
Abstract:
Background Wavefront-guided laser-assisted in situ keratomileusis (LASIK) is a widespread and effective surgical treatment for myopia and astigmatic correction, but whether it induces higher-order aberrations remains controversial. The study was designed to evaluate the changes in higher-order aberrations after wavefront-guided ablation with the IntraLase femtosecond laser in moderate to high astigmatism. Methods Twenty-three eyes of 15 patients with moderate to high astigmatism (mean cylinder, −3.22 ± 0.59 dioptres) aged between 19 and 35 years (mean age, 25.6 ± 4.9 years) were included in this prospective study. Subjects with cylinder ≥1.5 D and ≤2.75 D were classified as having moderate astigmatism, while high astigmatism was ≥3.00 D. All patients underwent a femtosecond laser-enabled (150-kHz IntraLase iFS; Abbott Medical Optics Inc) wavefront-guided ablation. Uncorrected (UDVA) and corrected (CDVA) distance visual acuity in logMAR, keratometry, central corneal thickness (CCT) and higher-order aberrations (HOAs) over a 6 mm pupil were assessed preoperatively and at 6 months postoperatively. The relationship between the postoperative change in HOAs and the preoperative mean spherical equivalent refraction, mean astigmatism, and postoperative CCT was tested. Results At the last follow-up, the mean UDVA was improved (P < 0.0001) but CDVA remained unchanged (P = 0.48), and no eyes lost ≥2 lines of CDVA. Mean spherical equivalent refraction was reduced (P < 0.0001) and was within the ±0.50 D range in 61 % of eyes. The average corneal curvature was flatter by 4 D and CCT was reduced by 83 μm (P < 0.0001 for all), postoperatively. Coma aberrations remained unchanged (P = 0.07), while the postoperative change in trefoil (P = 0.047) was not clinically significant.
The 4th-order HOAs (spherical aberration and secondary astigmatism) and the HOA root mean square (RMS) increased from −0.18 ± 0.07 μm, 0.04 ± 0.03 μm and 0.47 ± 0.11 μm, preoperatively, to 0.33 ± 0.19 μm (P = 0.004), 0.21 ± 0.09 μm (P < 0.0001) and 0.77 ± 0.27 μm (P < 0.0001), six months postoperatively. The change in spherical aberration after the procedure increased with the degree of preoperative myopia. Conclusions Wavefront-guided IntraLASIK offers a safe and effective option for vision and visual function improvement in astigmatism. Although a reduction of HOAs is possible in a few eyes, spherical-like aberrations are increased in the majority of treated eyes.
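The HOA root-mean-square figure reported above combines individual Zernike coefficients as RMS = sqrt(sum of squared coefficients). A small sketch, with hypothetical coefficient values rather than the study's data:

```python
import math

def hoa_rms(coefficients):
    """Root-mean-square wavefront error from a list of normalised
    Zernike coefficients (in micrometres): RMS = sqrt(sum c_i^2)."""
    return math.sqrt(sum(c * c for c in coefficients))

# Hypothetical coefficients for spherical aberration, secondary
# astigmatism and trefoil (illustrative, not the study's raw data):
coeffs = [0.33, 0.21, 0.10]
rms = hoa_rms(coeffs)
```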
Abstract:
While it is commonly accepted that computability on a Turing machine in polynomial time represents a correct formalization of the notion of a feasibly computable function, there is no similar agreement on how to extend this notion to functionals, that is, on which functionals should be considered feasible. One possible paradigm was introduced by Mehlhorn, who extended Cobham's definition of feasible functions to type 2 functionals. Subsequently, this class of functionals (with inessential changes of definition) was studied by Townsend, who calls this class POLY, and by Kapron and Cook, who call the same class basic feasible functionals. Kapron and Cook gave an oracle Turing machine model characterisation of this class. In this article, we demonstrate that the class of basic feasible functionals has recursion-theoretic properties which naturally generalise the corresponding properties of the class of feasible functions, thus giving further evidence that the notion of feasibility of functionals mentioned above is correctly chosen. We also improve the Kapron and Cook result on machine representation. Our proofs are based on essential applications of logic. We introduce a weak fragment of second-order arithmetic with second-order variables ranging over functions from N to N which suitably characterises basic feasible functionals, and show that it is a useful tool for investigating their properties. In particular, we provide an example of how one can extract feasible programs from mathematical proofs that use non-feasible functions.
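As a toy illustration of the objects involved (not a construction from the article), a type-2 functional takes a type-1 function as an argument; feasibility in the Mehlhorn/Kapron-Cook sense then bounds its running time in both the input length and the sizes of the oracle's answers:

```python
def bounded_max(f, x):
    """A type-2 functional: it takes a function f (a type-1 object) and
    a string x, and returns the largest value f assumes on 0..len(x).
    Its cost is polynomial in len(x) and in the sizes of the values the
    oracle f returns -- the shape of bound used for basic feasible
    functionals (illustrative only)."""
    return max(f(i) for i in range(len(x) + 1))

# Apply the functional to the squaring function and a length-5 string.
result = bounded_max(lambda i: i * i, "10110")  # max of i*i over 0..5
```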
Abstract:
In many modeling situations in which parameter values can only be estimated or are subject to noise, the appropriate mathematical representation is a stochastic ordinary differential equation (SODE). However, unlike the deterministic case in which there are suites of sophisticated numerical methods, numerical methods for SODEs are much less sophisticated. Until a recent paper by K. Burrage and P.M. Burrage (1996), the highest strong order of a stochastic Runge-Kutta method was one. But K. Burrage and P.M. Burrage (1996) showed that by including additional random variable terms representing approximations to the higher order Stratonovich (or Ito) integrals, higher order methods could be constructed. However, this analysis applied only to the one Wiener process case. In this paper, it will be shown that in the multiple Wiener process case all known stochastic Runge-Kutta methods can suffer a severe order reduction if there is non-commutativity between the functions associated with the Wiener processes. Importantly, however, it is also suggested how this order can be repaired if certain commutator operators are included in the Runge-Kutta formulation. (C) 1998 Elsevier Science B.V. and IMACS. All rights reserved.
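For orientation, the simplest strong scheme in this family is Euler-Maruyama, of strong order 0.5; the higher-order methods discussed above add approximations of the iterated Stratonovich (or Itô) integrals. A minimal sketch, with an assumed scalar test equation:

```python
import numpy as np

def euler_maruyama(a, b, x0, t_end, n_steps, rng):
    """Strong order 0.5 scheme for dX = a(X) dt + b(X) dW. Higher strong
    order requires approximating the iterated stochastic integrals
    (e.g. the Milstein correction) referred to in the abstract."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment
        x = x + a(x) * dt + b(x) * dw
    return x

# With zero diffusion, dX = 0.5 X dt reduces to the deterministic
# exponential x(1) = exp(0.5), a quick sanity check of the scheme.
rng = np.random.default_rng(1)
x_det = euler_maruyama(lambda x: 0.5 * x, lambda x: 0.0, 1.0, 1.0, 10_000, rng)
```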
Abstract:
The finite element method adaptively divides a continuous domain with complex geometry into discrete, simple subdomains using approximate element functions, and continuous element loads are converted into nodal loads by means of the traditional lumping and consistent load methods. This standardises a plethora of element loads into a typical numerical procedure, but the element load effect is restricted to the nodal solution. In turn, accurate continuous element solutions including element load effects are available only at the element nodes, and are further limited to either the displacement or the force field, depending on which type of approximate function is derived. On the other hand, analytical stability functions give accurate continuous element solutions due to element loads. Unfortunately, the expressions of the stability functions are diverse and distinct for different element loads, which deters their use in numerical routines for practical applications. To this end, this paper presents a displacement-based finite element formulation (the generalised element load method) that accommodates a plethora of element load effects in a manner not achievable with stability functions, and that can generate continuous first- and second-order elastic displacement and force solutions along an element with accuracy comparable to the analytical approach, which neither the lumping nor the consistent load method can achieve. Hence, the salient and unique features of the generalised element load method are its robustness, versatility and accuracy in continuous element solutions under a great diversity of transverse element loads.
Abstract:
The literature on critical thinking in higher education is constructed around the fundamental assumption that critical thinking, while regarded as essential, is neither clearly nor commonly understood. There is elsewhere evidence that academics and students have differing perceptions of what happens in university classrooms, particularly in regard to higher-order thinking. This paper reports on a small-scale investigation, in a Faculty of Education at an Australian university, into academic and student definitions and understandings of critical thinking. Our particular interest lay in the consistencies and disconnections assumed to exist between academic staff and students. The presumption might therefore be that staff and students perceive critical thinking in different ways and that this may limit its achievement as a critical graduate attribute. The key finding from this study, contrary to extant findings, is that academics and students did share substantively similar definitions and understandings of critical thinking.
Abstract:
Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture this fundamentally discrete and stochastic nature of cellular biology - most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process, and of course the corresponding probabilities and numbers of each molecular species must all remain positive. While accurately serving this purpose, the SSA can be computationally inefficient due to very small time stepping, so faster approximations such as the Poisson and binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of Euler-Maruyama, Milstein and even higher-order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, capturing crucial properties such as the mean and variance more accurately.
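A minimal SSA for a single decay reaction A → ∅ illustrates the exponential waiting times and discrete state updates described above; the reaction, rate constant and population below are assumed toy values:

```python
import random

def ssa_decay(n0, c, t_end, rng):
    """Gillespie SSA for the single reaction A -> 0 with rate constant c:
    draw an exponential waiting time from the total propensity, then
    fire the reaction (remove one molecule) until t_end is reached."""
    t, n = 0.0, n0
    while n > 0:
        a0 = c * n                  # total propensity
        t += rng.expovariate(a0)    # time to the next reaction
        if t > t_end:
            break
        n -= 1                      # the reaction fires
    return n

# Averaged over many runs, the mean count should follow n0 * exp(-c t),
# here 100 * exp(-1) ~= 36.8 at t = 1.
rng = random.Random(42)
runs = [ssa_decay(100, 1.0, 1.0, rng) for _ in range(2000)]
mean_n = sum(runs) / len(runs)
```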
Abstract:
Statistics of the estimates of tricoherence are obtained analytically for nonlinear harmonic random processes with known true tricoherence. Expressions are presented for the bias, variance, and probability distributions of estimates of tricoherence as functions of the true tricoherence and the number of realizations averaged in the estimates. The expressions are applicable to arbitrary higher-order coherence and arbitrary degree of interaction between modes. Theoretical results are compared with those obtained from numerical simulations of nonlinear harmonic random processes. Estimation of true values of tricoherence given observed values is also discussed.
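A common realization-averaged estimator of squared tricoherence can be sketched as follows; the normalisation convention is an assumption (several exist in the literature), and the phase-coupled harmonic process is a toy example:

```python
import numpy as np

def tricoherence(realizations, k1, k2, k3):
    """Estimate squared tricoherence at FFT bins (k1, k2, k3): the
    normalised magnitude of the trispectrum averaged over realizations.
    Perfect phase coupling at k1 + k2 + k3 gives a value of 1."""
    num, d1, d2 = 0.0 + 0.0j, 0.0, 0.0
    k4 = k1 + k2 + k3
    for x in realizations:
        X = np.fft.fft(x)
        triple = X[k1] * X[k2] * X[k3]
        num += triple * np.conj(X[k4])       # trispectrum accumulator
        d1 += abs(triple) ** 2               # normalisation terms
        d2 += abs(X[k4]) ** 2
    return abs(num) ** 2 / (d1 * d2)

# Harmonics at bins 3, 5, 7 plus one at 15 (= 3+5+7) whose phase is the
# sum of the others: a fully phase-coupled nonlinear harmonic process.
rng = np.random.default_rng(2)
n, t = 256, np.arange(256)
reals = []
for _ in range(64):
    p1, p2, p3 = rng.uniform(0, 2 * np.pi, 3)
    x = (np.cos(2 * np.pi * 3 * t / n + p1)
         + np.cos(2 * np.pi * 5 * t / n + p2)
         + np.cos(2 * np.pi * 7 * t / n + p3)
         + np.cos(2 * np.pi * 15 * t / n + p1 + p2 + p3))
    reals.append(x)
tc = tricoherence(reals, 3, 5, 7)
```

With independent random phases on the bin-15 component instead, the estimate would fall toward zero as the number of averaged realizations grows, which is the regime the abstract's bias and variance expressions describe.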
Abstract:
This paper formulates a node-based smoothed conforming point interpolation method (NS-CPIM) for solid mechanics. In the proposed NS-CPIM, higher-order conforming PIM shape functions (CPIM) are constructed to produce a continuous and piecewise-quadratic displacement field over the whole problem domain, and the smoothed strain field is obtained through a smoothing operation over each smoothing domain associated with the domain nodes. The smoothed Galerkin weak form is then developed to create the discretized system equations. Numerical studies have demonstrated the following good properties: NS-CPIM (1) passes both the standard and the quadratic patch test; (2) provides an upper bound on the strain energy; (3) avoids volumetric locking; (4) provides higher accuracy than the node-based smoothed schemes of the original PIMs.
Abstract:
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as "hidden" states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimate of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
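The chunking and association steps described above can be sketched with a toy document; the window size and step, the query subset and the confidence threshold are all assumed illustrative values, not the paper's settings:

```python
from collections import Counter

def sliding_chunks(tokens, size, step):
    """Segment a token list into overlapping chunks (sliding windows)."""
    stop = max(len(tokens) - size + 1, 1)
    return [tokens[i:i + size] for i in range(0, stop, step)]

def associated_terms(chunks, query_subset, min_conf=0.5):
    """Association-rule-style estimate: for each term, the confidence of
    the rule {query subset} -> term over the document chunks."""
    supporting = [set(c) for c in chunks if query_subset <= set(c)]
    if not supporting:
        return {}
    counts = Counter(t for c in supporting for t in c
                     if t not in query_subset)
    total = len(supporting)
    return {t: n / total for t, n in counts.items()
            if n / total >= min_conf}

# Toy feedback document; {"language", "model"} stands in for one subset
# produced by query decomposition (hypothetical, not the paper's data).
doc = ("query language model estimates term dependencies "
       "language model feedback").split()
chunks = sliding_chunks(doc, 4, 2)
assoc = associated_terms(chunks, {"language", "model"})
```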
Abstract:
Finite element frame analysis programs targeted for design office application necessitate algorithms which can deliver reliable numerical convergence in a practical timeframe with comparable degrees of accuracy, and a highly desirable attribute is the use of a single element per member to reduce computational storage, as well as data preparation and the interpretation of the results. To this end, a higher-order finite element method including geometric non-linearity is addressed in this paper for the analysis of elastic frames, for which a single element is used to model each member. The geometric non-linearity in the structure is handled using an updated Lagrangian formulation, which takes into consideration the effects of the large translations and rotations that occur at the joints by accumulating their nodal coordinates. Rigid body movements are eliminated from the local member load-displacement relationship, for which the total secant stiffness is formulated for evaluating the large member deformations of an element. The influences of the axial force on the member stiffness and the changes in the member chord length are taken into account using a modified bowing function formulated in the total secant stiffness relationship, for which the coupling of the axial strain and flexural bowing is included. The accuracy and efficiency of the technique are verified by comparisons with a number of plane and spatial structures whose structural response has been reported in independent studies.