952 results for "second-order accurate"


Relevance: 90.00%

Abstract:

Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the Point Spread Function (PSF) of the human eye. Through what is known as the "Wavefront Aberration Function" of the human eye, exact knowledge of the eye's optical aberration is possible, allowing a mathematical model of the PSF to be obtained. This model could be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or onscreen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, and astigmatism). We present the technique proposed towards that goal, together with results obtained using a lens of known PSF introduced into the visual path of subjects without visual impairment. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is that it has the potential to address higher-order abnormalities in the eye, currently not correctable by simple means.
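The pre-compensation step amounts to inverse filtering the displayed image with the eye's PSF. The sketch below is a minimal illustration of the idea, not the project's actual algorithm: the regularised (Wiener-style) inverse filter, the `noise_reg` constant and the toy PSF are assumptions introduced here.

```python
import numpy as np

def precompensate(image, psf, noise_reg=1e-3):
    """Wiener-style inverse filter: pre-distort `image` so that, after
    being blurred by `psf` (a stand-in for the eye's PSF), the perceived
    result approximates the original image."""
    H = np.fft.fft2(psf, s=image.shape)              # optical transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + noise_reg)    # regularised inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

def blur(image, psf):
    """Circular convolution with the PSF: a simple model of the eye's optics."""
    H = np.fft.fft2(psf, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

Blurring the pre-compensated image should reproduce the original more closely than blurring the original does, which is the whole point of onscreen deblurring.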

Relevance: 90.00%

Abstract:

Among the different possible amplification solutions offered by Raman scattering in optical fibers, ultra-long Raman lasers are particularly promising, as they can provide quasi-lossless second-order amplification with reduced complexity, displaying excellent potential in the design of low-noise long-distance communication systems. Still, some of their advantages can be partially offset by the transfer of relative intensity noise (RIN) from the pump sources and cavity-generated Stokes light to the transmitted signal. In this paper we study the effect of ultra-long cavity design (length, pumping, grating reflectivity) on the transfer of RIN to the signal, demonstrating how the impact of noise can be greatly reduced by carefully choosing appropriate cavity parameters depending on the intended application of the system. © 2010 Copyright SPIE - The International Society for Optical Engineering.

Relevance: 90.00%

Abstract:

Multi-frequency Eddy Current (EC) inspection with a transmit-receive probe (two horizontally offset coils) is used to monitor the Pressure Tube (PT) to Calandria Tube (CT) gap of CANDU® fuel channels. Accurate gap measurements are crucial to ensure fitness for service; however, variations in probe liftoff, PT electrical resistivity, and PT wall thickness can generate systematic measurement errors. Validated mathematical models of the EC probe are very useful for data interpretation, and may improve the gap measurement under inspection conditions where these parameters vary. As a first step, exact solutions for the electromagnetic response of a transmit-receive coil pair situated above two parallel plates separated by an air gap were developed. This model was validated against experimental data with flat-plate samples. Finite element method models revealed that this geometrical approximation could not accurately match experimental data with real tubes, so analytical solutions for the probe in a double-walled pipe (the CANDU® fuel channel geometry) were generated using the Second-Order Vector Potential (SOVP) formalism. All electromagnetic coupling coefficients arising from the probe and the layered conductors were determined and substituted into Kirchhoff's circuit equations for the calculation of the pickup coil signal. The flat-plate model was used as the basis for an Inverse Algorithm (IA) to simultaneously extract the relevant experimental parameters from EC data. The IA was validated over a large range of second-layer plate resistivities (1.7 to 174 µΩ·cm), plate wall thicknesses (~1 to 4.9 mm), probe liftoffs (~2 mm to 8 mm), and plate-to-plate gaps (~0 mm to 13 mm). The IA achieved a relative error of less than 6% for the extracted plate resistivity and an accuracy of ±0.1 mm for the liftoff measurement.
The IA achieved a plate gap measurement with an error of less than ±0.7 mm over a probe liftoff range of ~2.4 mm to 7.5 mm, and ±0.3 mm at nominal liftoff (2.42 ± 0.05 mm), providing confidence in the general validity of the algorithm. This demonstrates the potential of using an analytical model to extract variable parameters that may affect the gap measurement accuracy.
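At its core, an inverse algorithm of this kind is a least-squares fit of a forward model to multi-frequency data. The toy sketch below illustrates the idea only: the exponential `forward` model is a hypothetical stand-in for the SOVP solution, and the grid search stands in for whatever optimiser the actual IA uses.

```python
import numpy as np

def forward(resistivity, liftoff, freqs):
    """Hypothetical forward model (NOT the SOVP solution): signal
    amplitude decays with liftoff on a skin-depth-like scale."""
    skin = np.sqrt(resistivity / freqs)
    return skin * np.exp(-liftoff / skin)

def invert(measured, freqs, res_grid, lift_grid):
    """Least-squares inversion by exhaustive grid search: return the
    (resistivity, liftoff) pair whose modelled multi-frequency response
    best matches the measured data."""
    best, best_err = None, np.inf
    for r in res_grid:
        for l in lift_grid:
            err = np.sum((forward(r, l, freqs) - measured) ** 2)
            if err < best_err:
                best, best_err = (r, l), err
    return best
```

Fitting all frequencies simultaneously is what lets one parameter (here resistivity) be separated from another (liftoff) that affects the signal in a different way.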

Relevance: 90.00%

Abstract:

Understanding how biodiversity is spatially distributed over both the short term and the long term, and what factors affect that distribution, is critical for modelling the spatial pattern of biodiversity as well as for promoting effective conservation planning and practice. This dissertation examines factors that influence short-term and long-term avian distribution from the perspective of the geographical sciences. The research develops landscape-level habitat metrics to characterize forest height heterogeneity and examines their efficacy in modelling avian richness at the continental scale. Two types of novel vegetation-height-structured habitat metrics are created based on second-order texture algorithms and the concepts of patch-based habitat metrics. I correlate the height-structured metrics with the richness of different forest guilds, and also examine their efficacy in multivariate richness models. The results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of two forest bird guilds. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness. The second and third projects focus on analyzing centroids of avian distributions and testing hypotheses regarding the direction and speed of their shifts. I first showcase the usefulness of centroid analysis for characterizing the distribution changes of a few case-study species. Applying the centroid method to 57 permanent resident bird species, I show that multi-directional distribution shifts occurred in a large number of the studied species. I also demonstrate that plains birds are not shifting their distributions faster than mountain birds, contrary to the prediction of the climate-change velocity hypothesis.
By modelling the abundance change rate at the regional level, I show that extreme climate events and precipitation measures are closely associated with some of the long-term distribution shifts. This dissertation improves our understanding of bird habitat characterization for species richness modelling, and expands our knowledge of how avian populations have shifted their ranges in North America in response to changing environments over the past four decades. The results provide an important scientific foundation for more accurate predictive species distribution modelling in the future.
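The centroid method can be sketched in a few lines: compute an abundance-weighted centroid for each survey period, then convert the displacement between centroids into a shift rate. The coordinates, abundances and the flat-earth (equirectangular) distance approximation below are illustrative assumptions, not the dissertation's data or exact procedure.

```python
import math

def centroid(obs):
    """Abundance-weighted centroid of (lat, lon, abundance) records."""
    w = sum(a for _, _, a in obs)
    lat = sum(la * a for la, _, a in obs) / w
    lon = sum(lo * a for _, lo, a in obs) / w
    return lat, lon

def shift_km_per_decade(c0, c1, years):
    """Displacement between two centroids, scaled to km per decade,
    using an equirectangular approximation on a spherical Earth."""
    (la0, lo0), (la1, lo1) = c0, c1
    dlat = math.radians(la1 - la0)
    dlon = math.radians(lo1 - lo0) * math.cos(math.radians((la0 + la1) / 2))
    dist = 6371.0 * math.hypot(dlat, dlon)   # Earth radius in km
    return dist / years * 10.0
```

The bearing of the displacement vector (atan2 of the two components) gives the shift direction, which is what allows multi-directional shifts to be detected across species.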

Relevance: 90.00%

Abstract:

Under contact metamorphic conditions, carbonate rocks in the direct vicinity of the Adamello pluton reflect temperature-induced grain coarsening. Despite this large-scale trend, considerable grain size scatter occurs on the outcrop scale, indicating local influence of second-order effects such as thermal perturbations, fluid flow and second-phase particles. Second-phase particles, whose sizes range from the nano- to the micron-scale, induce the most pronounced data scatter, resulting in grain sizes too small by up to a factor of 10 compared with theoretical grain growth in a pure system. Such values are restricted to relatively impure samples consisting of up to 10 vol.% micron-scale second-phase particles, or to samples containing a large number of nano-scale particles. The obtained data set suggests that the second phases induce a temperature-controlled reduction of calcite grain growth. The mean calcite grain size can therefore be expressed in the form D = C2 · e^(−Q*/RT) · (dp/fp)^(m*), where C2 is a constant, Q* is an activation energy, T is the temperature and m* is the exponent of the ratio dp/fp, i.e. of the average size of the second phases divided by their volume fraction. However, more data are needed to obtain reliable values for C2 and Q*. Besides variations in the average grain size, the presence of second-phase particles generates crystal size distribution (CSD) shapes characterized by lognormal distributions, which differ from the Gaussian-type distributions of the pure samples. In contrast, fluid-enhanced grain growth does not change the shape of the CSDs, but due to enhanced transport properties the average grain sizes increase by a factor of 2 and the variance of the distribution increases. Stable δ18O and δ13C isotope ratios in fluid-affected zones deviate only slightly from the host rock values, suggesting low fluid/rock ratios. Grain growth modelling indicates that the fluid-induced grain size variations can develop within several ka.
As inferred from a combination of thermal and grain growth modelling, dykes with widths of up to 1 m have only a restricted influence, with grain size deviations smaller than a factor of 1.1. To summarize, considerable grain size variations of up to one order of magnitude can locally result from second-order effects. Such effects require special attention when comparing experimentally derived grain growth kinetics with field studies.
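The growth law above can be evaluated directly once its constants are known; since the abstract notes that reliable values for C2 and Q* are still lacking, the constants in this sketch are placeholders, chosen only to show the qualitative behaviour.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def calcite_grain_size(T, dp, fp, C2, Q, m):
    """Mean grain size D = C2 * exp(-Q/(R*T)) * (dp/fp)**m, where dp is
    the average second-phase particle size and fp their volume fraction.
    C2, Q and m here are placeholder constants, not fitted values."""
    return C2 * math.exp(-Q / (R * T)) * (dp / fp) ** m
```

With m > 0, increasing the second-phase volume fraction (or decreasing particle size) reduces the predicted grain size, reproducing the pinning effect described above, while the Arrhenius factor makes growth thermally activated.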

Relevance: 80.00%

Abstract:

In this paper, a singularly perturbed ordinary differential equation with non-smooth data is considered. The numerical method is generated by means of a Petrov-Galerkin finite element method with a piecewise-exponential test function and a piecewise-linear trial function. At the discontinuity point of the coefficient, a special technique is used. The method is shown to be first-order accurate and uniformly convergent with respect to the singular perturbation parameter. Finally, numerical results are presented, which are in agreement with the theoretical results.

Relevance: 80.00%

Abstract:

The solution of linear ordinary differential equations (ODEs) is commonly taught in first-year undergraduate mathematics classrooms, but the concept of a solution is not always grasped by students until much later. Recognising what it is to be a solution of a linear ODE, and how to postulate such solutions without resorting to tables of solutions, is an important skill for students to carry with them to advanced studies in mathematics. In this study we describe a teaching and learning strategy that replaces the traditional algorithmic, transmission-style presentation for solving ODEs with a constructive, discovery-based approach where students employ their existing skills as a framework for constructing the solutions of first- and second-order linear ODEs. We elaborate on how the strategy was implemented and discuss the resulting impact on a first-year undergraduate class. Finally we propose further improvements to the strategy as well as suggesting other topics which could be taught in a similar manner.
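The "postulate a solution" skill the strategy builds on can be checked mechanically: for a constant-coefficient second-order linear ODE, the exponential ansatz u = e^(rt) works exactly when r is a root of the characteristic equation. A small sketch (the example coefficients are arbitrary, not from the paper):

```python
import cmath

def characteristic_roots(b, c):
    """Roots of r**2 + b*r + c = 0: substituting u = e**(r*t) into
    u'' + b*u' + c*u = 0 reduces the ODE to this quadratic."""
    disc = cmath.sqrt(b * b - 4 * c)
    return (-b + disc) / 2, (-b - disc) / 2

def residual(r, b, c, t):
    """Value of u'' + b*u' + c*u at time t for the ansatz u = e**(r*t);
    it vanishes exactly when r satisfies the characteristic equation."""
    u = cmath.exp(r * t)
    return (r * r + b * r + c) * u
```

Using `cmath` keeps the same construction valid for complex roots, where the real solutions are the sine/cosine combinations students later recognise.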

Relevance: 80.00%

Abstract:

We develop and test a theoretically based integrative model of organizational innovation adoption. Confirmatory factor analyses using responses from 134 organizations showed that the hypothesized second-order model was a better fit to the data than the traditional model of independent factors. Furthermore, although not all elements were significant, the hypothesized model fitted the adoption data better than the traditional model.

Relevance: 80.00%

Abstract:

Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
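The first-order (bigram) variant of this n-gram approach can be sketched in a few lines: count state-to-state transitions across sessions, then predict the most frequent successor of the current state. The state labels below are illustrative stand-ins, not the study's exact taxonomy.

```python
from collections import Counter, defaultdict

def train_bigram(sessions):
    """First-order model: counts of state -> next-state transitions
    across query-reformulation sessions (each a list of state labels)."""
    trans = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            trans[cur][nxt] += 1
    return trans

def predict_next(trans, state):
    """Most probable next reformulation state given the current one,
    or None if the state was never observed in training."""
    if state not in trans:
        return None
    return trans[state].most_common(1)[0][0]
```

Higher-order models condition on tuples of previous states instead of a single state, trading coverage of the dataset for a richer pattern set, which matches the accuracy/coverage/complexity trade-off evaluated in the article.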

Relevance: 80.00%

Abstract:

This paper reports results from a study in which we automatically classified the query reformulation patterns for 964,780 Web searching sessions (composed of 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek searching assistance from the system early in the session or after a content change. The results of our evaluations show that the first- and second-order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance in real time.

Relevance: 80.00%

Abstract:

Suggestions that peripheral imagery may affect the development of refractive error have led to interest in the variation in refraction and aberration across the visual field. It is shown that, if the optical system of the eye is rotationally symmetric about an optical axis which does not coincide with the visual axis, measurements of refraction and aberration made along the horizontal and vertical meridians of the visual field will show asymmetry about the visual axis. The departures from symmetry are modelled for second-order aberrations, refractive components and third-order coma. These theoretical results are compared with practical measurements from the literature. The experimental data support the concept that departures from symmetry about the visual axis in the measurements of crossed-cylinder astigmatism J45 and J180 are largely explicable in terms of a decentred optical axis. Measurements of the mean sphere M suggest, however, that the retinal curvature must differ in the horizontal and vertical meridians.

Relevance: 80.00%

Abstract:

In this paper, we consider the numerical solution of a fractional partial differential equation with Riesz space fractional derivatives (FPDE-RSFD) on a finite domain. Two types of FPDE-RSFD are considered: the Riesz fractional diffusion equation (RFDE) and the Riesz fractional advection–dispersion equation (RFADE). The RFDE is obtained from the standard diffusion equation by replacing the second-order space derivative with the Riesz fractional derivative of order α ∈ (1,2]. The RFADE is obtained from the standard advection–dispersion equation by replacing the first-order and second-order space derivatives with the Riesz fractional derivatives of order β ∈ (0,1) and of order α ∈ (1,2], respectively. Firstly, analytic solutions of both the RFDE and RFADE are derived. Secondly, three numerical methods are provided to deal with the Riesz space fractional derivatives, namely, the L1/L2-approximation method, the standard/shifted Grünwald method, and the matrix transform method (MTM). Thirdly, the RFDE and RFADE are transformed into a system of ordinary differential equations, which is then solved by the method of lines. Finally, numerical results are given, which demonstrate the effectiveness and convergence of the three numerical methods.
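Of the three methods, the shifted Grünwald approach is the easiest to sketch: the Riesz derivative is a symmetrised sum of left- and right-sided Grünwald–Letnikov differences with binomial-type weights. The sketch below is a minimal illustration; the zero extension of the solution outside the grid is an assumption of this sketch, matching homogeneous Dirichlet boundaries.

```python
import math

def grunwald_weights(alpha, n):
    """Grünwald–Letnikov weights g_k = (-1)**k * binom(alpha, k),
    generated by the recurrence g_k = g_{k-1} * (1 - (alpha + 1)/k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

def riesz_derivative(u, alpha, h):
    """Shifted-Grünwald approximation of the Riesz fractional derivative
    of order alpha in (1, 2] on a uniform grid with spacing h.  Values
    outside the grid are taken as zero.  For alpha = 2 this reduces to
    the classical second difference (u_{i-1} - 2u_i + u_{i+1}) / h**2."""
    n = len(u)
    g = grunwald_weights(alpha, n + 1)
    val = lambda j: u[j] if 0 <= j < n else 0.0   # zero (Dirichlet) extension
    c = -1.0 / (2.0 * math.cos(math.pi * alpha / 2.0) * h ** alpha)
    out = []
    for i in range(n):
        left = sum(g[k] * val(i - k + 1) for k in range(n + 1))    # shifted left sum
        right = sum(g[k] * val(i + k - 1) for k in range(n + 1))   # shifted right sum
        out.append(c * (left + right))
    return out
```

The shift by one grid point in each one-sided sum is what makes the resulting method-of-lines discretisation stable for α ∈ (1, 2], which is why the shifted rather than the standard Grünwald formula is normally used in practice.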

Relevance: 80.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series of stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order statistics, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
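The monofractal core of MF-DFA (the q = 2 case, i.e. standard DFA) can be sketched briefly: integrate the centred series, detrend it polynomially inside windows of size s, and read the scaling exponent off the log-log slope of the fluctuation function F(s). The scales and the white-noise check below are illustrative choices, not the thesis's settings.

```python
import numpy as np

def dfa(x, scales, deg=1):
    """Detrended fluctuation analysis (the q = 2 special case of MF-DFA).
    Returns F(s) for each window size s in `scales`; the slope of
    log F(s) against log s estimates the Hurst-type scaling exponent."""
    y = np.cumsum(x - np.mean(x))                # integrated profile
    F = []
    for s in scales:
        nseg = len(y) // s
        ms = []
        for v in range(nseg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            fit = np.polyval(np.polyfit(t, seg, deg), t)   # local polynomial detrend
            ms.append(np.mean((seg - fit) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)
```

MF-DFA generalises this by raising the segment variances to the power q/2 before averaging, so that different q emphasise small and large fluctuations differently; higher `deg` removes stronger trends, which is the property that guards against the false memory detection mentioned above.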

Relevance: 80.00%

Abstract:

In this paper, we consider a time-space fractional diffusion equation of distributed order (TSFDEDO). The TSFDEDO is obtained from the standard advection-dispersion equation by replacing the first-order time derivative with the Caputo fractional derivative of order α ∈ (0,1], and the first-order and second-order space derivatives with the Riesz fractional derivatives of orders β1 ∈ (0,1) and β2 ∈ (1,2], respectively. We derive the fundamental solution for the TSFDEDO with an initial condition (TSFDEDO-IC). The fundamental solution can be interpreted as a spatial probability density function evolving in time. We also investigate a discrete random walk model based on an explicit finite difference approximation for the TSFDEDO-IC.

Relevance: 80.00%

Abstract:

A common optometric problem is to specify the eye's ocular aberrations in terms of Zernike coefficients and to reduce that specification to a prescription for the optimum sphero-cylindrical correcting lens. The typical approach is first to reconstruct wavefront phase errors from measurements of wavefront slopes obtained by a wavefront aberrometer. This paper applies a new method to this clinical problem that does not require wavefront reconstruction. Instead, we base our analysis on axial wavefront vergence as inferred directly from wavefront slopes. The result is a wavefront vergence map that is similar to the axial power maps in corneal topography and hence has the potential to be favoured by clinicians. We use our new set of orthogonal Zernike slope polynomials to systematically analyse details of the vergence map, analogous to Zernike analysis of wavefront maps. The result is a vector of slope coefficients that describe fundamental aberration components. Three different methods for reducing slope coefficients to a sphero-cylindrical prescription in power-vector form are compared and contrasted. When the original wavefront contains only second-order aberrations, the vergence map is a function of meridian only and the power vectors from all three methods are identical. The differences between the methods begin to appear as we include higher-order aberrations, in which case the wavefront vergence map is more complicated. Finally, we discuss the advantages and limitations of the vergence-map representation of ocular aberrations.
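For reference, the power-vector form mentioned above is conventionally the (M, J180, J45) decomposition of a sphero-cylindrical prescription introduced by Thibos and colleagues. The sketch below shows that standard conversion; it is a general illustration, not necessarily the exact reduction used by any of the three compared methods.

```python
import math

def power_vector(sphere, cyl, axis_deg):
    """Convert a sphero-cylindrical prescription (sphere S, cylinder C,
    axis in degrees) to power-vector components:
        M    = S + C/2                (spherical equivalent)
        J180 = -(C/2) * cos(2*axis)   (with/against-the-rule astigmatism)
        J45  = -(C/2) * sin(2*axis)   (oblique astigmatism)"""
    phi = math.radians(axis_deg)
    M = sphere + cyl / 2.0
    J180 = -(cyl / 2.0) * math.cos(2.0 * phi)
    J45 = -(cyl / 2.0) * math.sin(2.0 * phi)
    return M, J180, J45
```

Because (M, J180, J45) are the coordinates of a vector, prescriptions can be averaged and compared component-wise, which is what makes this form convenient for contrasting the three reduction methods.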