966 results for ORDER ACCURACY APPROXIMATIONS


Relevance:

30.00%

Publisher:

Abstract:

The paper reports a study that investigated the relationship between students’ self-predicted and actual General Certificate of Secondary Education results in order to establish the extent of over- and under-prediction and whether this varies by subject and across genders and socio-economic groupings. It also considered the relationship between actual and predicted attainment and attitudes towards going to university. The sample consisted of 109 young people in two schools being followed up from an earlier study. Just over 50% of predictions were accurate and students were much more likely to over-predict than to under-predict. Most errors of prediction were only one grade out and may reflect examination unreliability as well as student misperceptions. Girls were slightly less likely than boys to over-predict but there were no differences associated with social background. Higher levels of attainment, both actual and predicted, were strongly associated with positive attitudes to university. Differences between predictions and results are likely to reflect examination errors as well as pupil errors. There is no evidence that students from more advantaged social backgrounds over-estimate themselves compared with other students, although boys over-estimate themselves compared with girls.
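The over-/under-prediction classification described above can be sketched as follows; the grade scale and the data pairs are hypothetical, for illustration only:

```python
# Tabulating over-/under-prediction from paired (predicted, actual) GCSE
# grades. Grade scale and data are hypothetical, not taken from the study.
GRADES = ["A*", "A", "B", "C", "D", "E", "F", "G"]  # best to worst
RANK = {g: i for i, g in enumerate(GRADES)}

def classify(predicted, actual):
    """Return 'accurate', 'over' (predicted better than achieved) or 'under'."""
    diff = RANK[actual] - RANK[predicted]  # > 0: achieved worse than predicted
    if diff == 0:
        return "accurate"
    return "over" if diff > 0 else "under"

def summarise(pairs):
    counts = {"accurate": 0, "over": 0, "under": 0}
    for predicted, actual in pairs:
        counts[classify(predicted, actual)] += 1
    return {k: v / len(pairs) for k, v in counts.items()}

pairs = [("A", "A"), ("B", "C"), ("A", "B"), ("C", "C"), ("D", "C"), ("B", "B")]
print(summarise(pairs))  # half accurate, over-predictions outnumbering under-
```

A "one grade out" analysis like the paper's would simply bin on `abs(diff) == 1`.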

The behavior of the ensemble Kalman filter (EnKF) is examined in the context of a model that exhibits a nonlinear chaotic (slow) vortical mode coupled to a linear (fast) gravity wave of a given amplitude and frequency. It is shown that accurate recovery of both modes is enhanced when covariances between fast and slow normal-mode variables (which reflect the slaving relations inherent in balanced dynamics) are modeled correctly. More ensemble members are needed to recover the fast, linear gravity wave than the slow, vortical motion. Although the EnKF tends to diverge in the analysis of the gravity wave, the filter divergence is stable and does not lead to a great loss of accuracy. Consequently, provided the ensemble is large enough and observations are made that reflect both time scales, the EnKF is able to recover both time scales more accurately than optimal interpolation (OI), which uses a static error covariance matrix. For OI it is also found to be problematic to observe the state at a frequency that is a subharmonic of the gravity wave frequency, a problem that is in part overcome by the EnKF. However, error in the modeled gravity wave parameters can be detrimental to the performance of the EnKF and remove its implied advantages, suggesting that a modified algorithm or a method for accounting for model error is needed.
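A minimal sketch of the stochastic (perturbed-observation) EnKF analysis step on a generic toy state; the paper's coupled vortical/gravity-wave model is not reproduced here:

```python
import numpy as np

# Minimal stochastic EnKF analysis step (perturbed observations).
# Generic illustration only, not the two-time-scale model of the paper.
rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R):
    """X: (n, N) forecast ensemble; y: (m,) observation;
    H: (m, n) observation operator; R: (m, m) obs-error covariance."""
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturb the observation per member to keep the correct analysis spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

# Toy example: 2-state system, observe only the first component.
truth = np.array([1.0, -0.5])
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
X = truth[:, None] + rng.normal(0.0, 1.0, size=(2, 200))
Xa = enkf_analysis(X, H @ truth, H, R)
# The observed component is pulled toward the accurate observation.
```

The paper's central covariance point appears here as `Pf`'s off-diagonal entries: they are what propagate information from the observed variable to the unobserved one.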

We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations $J(\alpha,u)=0$ for random input $\alpha(\omega)$ with almost sure realizations in a neighborhood of a nominal input parameter $\alpha_0$. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution, $u(\omega) = S(\alpha(\omega))$. We derive a multilinear, tensorized operator equation for the deterministic computation of $k$-th order statistical moments of the random solution's fluctuations $u(\omega) - S(\alpha_0)$. We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the $k$-th statistical moment equation. We prove a shift theorem for the $k$-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
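As an illustrative sketch of the construction (the notation below is assumed for illustration, not quoted from the paper), a first-order linearization at the nominal parameter yields a linear equation for the fluctuation, which tensorizes into a deterministic equation for its $k$-th moment:

```latex
% Linearization of J(\alpha,u)=0 at the nominal pair (\alpha_0, u_0 = S(\alpha_0)):
\delta u(\omega) := u(\omega) - S(\alpha_0), \qquad
A\,\delta u(\omega) \;\approx\; f(\omega) := -\,D_\alpha J(\alpha_0,u_0)\,
  \bigl(\alpha(\omega)-\alpha_0\bigr),
\qquad A := D_u J(\alpha_0,u_0).
% Tensorizing k copies and taking expectations gives a deterministic,
% multilinear equation for the k-th moment M^k := E[(\delta u)^{\otimes k}]:
(\underbrace{A \otimes \cdots \otimes A}_{k \text{ factors}})\,\mathcal{M}^k
  \;\approx\; \mathbb{E}\bigl[f^{\otimes k}\bigr].
```

It is this tensorized $k$-point equation that the sparse tensor Galerkin schemes discretize at, up to logarithms, the cost of a single mean-field solve.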

In this paper we propose and analyze a hybrid $hp$ boundary element method for the solution of problems of high frequency acoustic scattering by sound-soft convex polygons, in which the approximation space is enriched with oscillatory basis functions which efficiently capture the high frequency asymptotics of the solution. We demonstrate, both theoretically and via numerical examples, exponential convergence with respect to the order of the polynomials, moreover providing rigorous error estimates for our approximations to the solution and to the far field pattern, in which the dependence on the frequency of all constants is explicit. Importantly, these estimates prove that, to achieve any desired accuracy in the computation of these quantities, it is sufficient to increase the number of degrees of freedom in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods.
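A hedged sketch of the kind of oscillatory enrichment such hybrid methods use (the precise ansatz and notation here are assumed for illustration, not quoted from the paper): on each side of the polygon the unknown boundary data is written as slowly varying envelopes multiplying known oscillatory factors,

```latex
% Hybrid ansatz on a boundary segment parametrized by x, wavenumber k:
\frac{\partial u}{\partial n}(x) \;\approx\; \Psi(x)
  \;+\; v^{+}(x)\,e^{\,\mathrm{i}kx} \;+\; v^{-}(x)\,e^{-\mathrm{i}kx},
% with \Psi a known leading-order (physical optics) term and v^{\pm} slowly
% varying amplitudes approximated by piecewise polynomials of degree p.
% Exponential convergence C(k)\,e^{-cp} then means that fixed accuracy needs
% only p \sim \log k, i.e. N_{\mathrm{DOF}} \sim \log k as k \to \infty,
% versus N_{\mathrm{DOF}} \gtrsim k for conventional piecewise-polynomial BEM.
```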

In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measurements used, can produce a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real data applications to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend the mixed-effects model analysis instead.
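For concreteness, a per-participant resolution measure such as the Goodman-Kruskal gamma coefficient can be computed as below (data hypothetical); the argument above is that feeding such per-participant values into group-level tests inflates Type-1 error when items are a random effect:

```python
from itertools import combinations

# Goodman-Kruskal gamma over item pairs (judgment, outcome) for ONE
# participant; the data are hypothetical.
def gamma(judgments, outcomes):
    """Gamma = (C - D) / (C + D) over concordant/discordant item pairs."""
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        s = (j1 - j2) * (o1 - o2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")
    return (concordant - discordant) / (concordant + discordant)

# One participant: confidence ratings and recall accuracy per item.
judgments = [4, 2, 5, 1, 3]
outcomes  = [1, 0, 1, 0, 1]
print(gamma(judgments, outcomes))  # 1.0: judgments perfectly ordered w.r.t. outcomes
```

The by-participant pipeline would run `gamma` once per subject and t-test the results; the mixed-effects alternative instead models judgments and item effects jointly at the trial level.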

This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived based on data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than the one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach using a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach for real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with results of a similar experimental study. However, by using our approach, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point instead of the 1 to 2 h it takes when using the conventional finite element modeling approach.
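The offline/online split described above can be sketched generically: sample an expensive reference model, fit a cheap surrogate map from parameters to measurements, then invert the surrogate. The toy forward model and the linear surrogate below are illustrative assumptions, not the paper's light-transport model:

```python
import numpy as np

# Sketch of a data-driven reduced-order reconstruction. The "reference
# model" is a toy nonlinear function standing in for a light-transport solver.
rng = np.random.default_rng(1)

def reference_model(theta):
    """Toy forward model: tissue parameters -> detector readings."""
    return np.array([np.exp(-theta[0]), theta[0] * theta[1], np.sin(theta[1])])

# 1) Offline: sample the reference model, fit a linear surrogate y ≈ a + B·θ.
thetas = rng.uniform(0.1, 1.0, size=(200, 2))
ys = np.array([reference_model(t) for t in thetas])
Phi = np.hstack([np.ones((200, 1)), thetas])        # design matrix [1, θ]
coef, *_ = np.linalg.lstsq(Phi, ys, rcond=None)     # fitted (3, 3) map

def surrogate(theta):
    return np.array([1.0, *theta]) @ coef

# 2) Online: reconstruct parameters from a measurement using the cheap
#    surrogate instead of the expensive reference model.
y_meas = reference_model(np.array([0.5, 0.7]))
grid = rng.uniform(0.1, 1.0, size=(5000, 2))
best = min(grid, key=lambda t: np.sum((surrogate(t) - y_meas) ** 2))
```

Every online evaluation is a small matrix-vector product, which is where the orders-of-magnitude speedup over a fine-mesh finite element solve comes from.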

We consider Discontinuous Galerkin approximations of two-phase, immiscible porous media flows in the global pressure/fractional flow formulation with capillary pressure. A sequential approach is used with a backward Euler step for the saturation equation, equal-order interpolation for the pressure and the saturation, and without any limiters. An accurate total velocity field is recovered from the global pressure equation to be used in the saturation equation. Numerical experiments show the advantages of the proposed reconstruction. To cite this article: A. Ern et al., C. R. Acad. Sci. Paris, Ser. I 347 (2009). © 2009 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.

In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for large-scale systems. Nonetheless, a critical obstacle that needs to be overcome in MPC is the large computational burden when a large-scale system is considered or a long prediction horizon is involved. In order to solve this problem, we use an adaptive prediction accuracy (APA) approach that can reduce the computational burden by almost half. The MPC scheme with the APA approach is tested on the northern Dutch water system, which comprises Lake IJssel, Lake Marker, the River IJssel and the North Sea Canal. The simulation results show that by using the MPC-APA scheme, the computational time can be reduced to a large extent and a flood protection problem over longer prediction horizons can be solved well.
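One way to read the APA idea is to keep fine control steps early in the horizon and coarser steps further out, shrinking the number of decision variables; the toy reservoir, quadratic cost, and step pattern below are our illustrative assumptions, not the paper's Dutch water system model:

```python
import numpy as np

# Open-loop quadratic water-level problem over one prediction horizon:
# x_{k+1} = x_k + (inflow - u_k) * dt_k, cost sum_k dt_k [(x_k - x_ref)^2 + r u_k^2].
# Solved exactly as a linear least-squares problem; all numbers illustrative.
def solve_horizon(x0, dts, inflow, x_ref=0.0, r=0.1):
    n = len(dts)
    L = np.tril(np.tile(dts, (n, 1)))       # (L u)_k = sum_{i<=k} dt_i u_i
    c = x0 + inflow * np.cumsum(dts)        # state if no control: x_k = c_k - (L u)_k
    w = np.sqrt(dts)
    A = np.vstack([w[:, None] * (-L), np.diag(np.sqrt(r * dts))])
    b = np.concatenate([w * (x_ref - c), np.zeros(n)])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

fine = np.ones(16)                                   # uniform steps: 16 variables
apa  = np.concatenate([np.ones(8), 2 * np.ones(4)])  # same 16 h span: 12 variables
u_fine = solve_horizon(1.0, fine, inflow=0.5)
u_apa  = solve_horizon(1.0, apa, inflow=0.5)
```

Both horizons cover the same 16 time units, but the APA-style discretization carries 25% fewer decision variables, which is the source of the computational saving; in receding-horizon operation only the first (fine) control move is applied each cycle.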

This work emphasizes the inclusion of uncertainties in the assessment of structural behavior, aiming at a better representation of the system's characteristics and a quantification of the significance of these uncertainties in design. Comparisons are made between existing classical reliability analysis techniques, such as FORM, direct Monte Carlo simulation (MC), and Monte Carlo simulation with adaptive importance sampling (MCIS), and the approximate methods of the response surface (RS) and artificial neural networks (ANN). Where possible, the comparisons highlight the advantages and drawbacks of each technique in problems of increasing complexity. Formulations ranging from explicit limit-state functions to implicit formulations with spatial variability of loading and material properties, including stochastic fields, are analyzed. Particular attention is given to the reliability analysis of reinforced concrete structures, including the effect of the spatial variability of their properties. To this end, a finite element model is proposed for the representation of reinforced concrete that incorporates the main characteristics observed in this material. A mesh-independent model for the generation of multidimensional non-Gaussian stochastic fields for the material properties was also developed, and techniques were implemented to accelerate the structural evaluations required by any of the techniques employed. For reliability analysis via the response surface technique, the algorithm developed by Rajashekhar et al. (1993) was implemented. For the treatment via artificial neural networks, codes were developed for the simulation of multilayer perceptron networks and radial basis function networks, and then incorporated into the reliability evaluation algorithm developed by Shao et al. (1997). In general, it was observed that the simulation techniques perform rather poorly on more complex problems, while the first-order technique FORM and the approximate response surface and artificial neural network techniques stand out, although with accuracy impaired by the approximations involved.
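The direct Monte Carlo technique compared above can be illustrated on the textbook limit state g = R − S (resistance minus load) with normal variables, where a closed-form failure probability is available for checking; the numbers are illustrative:

```python
import numpy as np
from math import erf, sqrt

# Crude Monte Carlo estimate of a failure probability for g = R - S.
rng = np.random.default_rng(42)

mu_R, sd_R = 10.0, 1.0      # resistance (illustrative values)
mu_S, sd_S = 6.0, 1.5       # load
n = 200_000

g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
pf_mc = float(np.mean(g < 0))          # failure when load exceeds resistance

# Exact answer for THIS linear normal case only:
# beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2),  pf = Phi(-beta).
beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)
pf_exact = 0.5 * (1 + erf(-beta / sqrt(2)))
```

FORM returns `beta` directly from this linearized problem; the cost gap the text reports arises because Monte Carlo needs on the order of 100/pf samples of the (here trivial, in practice finite element) limit-state function, which is what response surfaces and neural network surrogates are meant to avoid.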

Industrial companies in developing countries are facing rapid growth, which requires having in place the best organizational processes to cope with market demand. Sales forecasting, as a tool aligned with the general strategy of the company, needs to be as accurate as possible in order to achieve the sales targets, by making the right information available for purchasing, planning and control of production, and ultimately meeting the generated demand in a timely and proper manner. The present dissertation uses a single case study from the Brazilian subsidiary of an international explosives company, Maxam, which is experiencing high growth in sales and therefore faces the challenge of adapting its structure and processes to the rapid growth expected. Diverse sales forecast techniques have been analyzed to compare the actual monthly sales forecast, based on the sales representatives' market knowledge, with forecasts based on the analysis of historical sales data. The dissertation findings show how the combination of both qualitative and quantitative forecasts, through a combined forecast that brings together the sales force's knowledge of client demand with time series analysis, improves the accuracy of the company's sales forecast.
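A minimal sketch of such a combined forecast: average a judgmental (sales-force) estimate with a simple time-series forecast. The sales figures, the moving-average choice and the 50/50 weighting are illustrative assumptions, not the dissertation's data or method:

```python
# Combining a judgmental forecast with a simple statistical one.
def moving_average_forecast(history, window=3):
    return sum(history[-window:]) / window

def combined_forecast(judgmental, statistical, w=0.5):
    """Weighted combination; w is the weight on the judgmental forecast."""
    return w * judgmental + (1 - w) * statistical

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

sales = [100, 110, 105, 120, 125, 130]        # hypothetical monthly history
judgmental = 128                               # sales-force estimate, next month
statistical = moving_average_forecast(sales)   # (120 + 125 + 130) / 3 = 125.0
combo = combined_forecast(judgmental, statistical)  # 126.5
```

If the next month's actual sales then turn out to lie between the two inputs, the combined forecast's MAPE beats either input alone, which is the usual motivation for combining.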

This study investigates the one-month-ahead out-of-sample predictive power of a Taylor-rule-based model for exchange rate forecasting. We review relevant works concluding that macroeconomic models can explain the short-run exchange rate, and also present studies that are skeptical about the ability of macroeconomic variables to predict exchange rate movements. To contribute to the topic, this work presents its own evidence by implementing the model with the best predictive performance described by Molodtsova and Papell (2009), the "symmetric Taylor rule model with heterogeneous coefficients, smoothing, and a constant". To this end, we use a sample of 14 currencies against the US dollar, generating monthly out-of-sample forecasts from January 2000 to March 2014. Following the criterion adopted by Galimberti and Moura (2012), we focus on countries that adopted a floating exchange rate regime and inflation targeting, but we select currencies of both developed and developing countries. Our results corroborate the study of Rogoff and Stavrakeva (2008) in finding that conclusions about exchange rate predictability depend on the statistical test adopted, so robust and rigorous tests are needed for a proper evaluation of the model. After finding that it cannot be claimed that the implemented model yields more accurate forecasts than a random walk, we assess whether the model is at least capable of generating "rational", or "consistent", forecasts. For this, we use the theoretical and instrumental framework defined and implemented by Cheung and Chinn (1998) and conclude that the forecasts from the Taylor rule model are "inconsistent". Finally, we perform Granger causality tests to verify whether the lagged values of the returns predicted by the structural model explain the observed contemporaneous values. We find that the fundamentals-based model is unable to anticipate realized returns.
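The baseline comparison in this literature can be sketched as follows: under a driftless random walk the best one-step forecast of the exchange rate change is zero, so a structural model "wins" only if its forecast errors have lower RMSE than the realized changes themselves. The data and the toy structural forecast below are synthetic; the formal tests the study relies on (e.g. Clark-West style comparisons) are not reproduced here:

```python
import numpy as np

# Out-of-sample RMSE comparison against the random-walk benchmark.
rng = np.random.default_rng(7)

changes = rng.normal(0.0, 1.0, 300)          # synthetic monthly log-returns
model_forecast = 0.3 * np.roll(changes, 1)   # toy "structural" forecast
model_forecast[0] = 0.0                      # no lag available at t = 0

rmse_model = float(np.sqrt(np.mean((changes - model_forecast) ** 2)))
rmse_rw    = float(np.sqrt(np.mean(changes ** 2)))  # random walk: forecast = 0
theil_u    = rmse_model / rmse_rw            # U < 1 would favour the model
```

With truly unpredictable returns, as here, the ratio hovers at or above one, illustrating why point-RMSE comparisons alone are fragile and the choice of statistical test matters.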

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

After removal of Selective Availability in 2000, the ionosphere became the dominant error source for Global Navigation Satellite Systems (GNSS), especially for high-accuracy (cm-mm) applications such as Precise Point Positioning (PPP) and Real Time Kinematic (RTK) positioning. The common practice of eliminating the ionospheric error, e.g. by the ionosphere-free (IF) observable, a linear combination of observables on two frequencies such as GPS L1 and L2, accounts for about 99% of the total ionospheric effect, known as the first-order ionospheric effect (Ion1). The remaining 1% residual range errors (RREs) in the IF observable are due to the higher-order (second and third) ionospheric effects, Ion2 and Ion3, respectively. Both terms are related to the electron content along the signal path; in addition, the Ion2 term is associated with the influence of the geomagnetic field on the ionospheric refractive index, and Ion3 with the ray bending effect of the ionosphere, which can cause a deviation in the ray trajectory (due to strong electron density gradients in the ionosphere) large enough that the error contribution of Ion3 can exceed that of Ion2 (Kim and Tinin, 2007). The higher-order error terms do not cancel out in the (first-order) ionospherically corrected observable and, when not accounted for, they can degrade the accuracy of GNSS positioning, depending on the level of solar activity and the geomagnetic and ionospheric conditions (Hoque and Jakowski, 2007). Simulation results from the early 1990s show that Ion2 and Ion3 would contribute to the ionospheric error budget by less than 1% of the Ion1 term at GPS frequencies (Datta-Barua et al., 2008). Although the IF observable may provide sufficient accuracy for most GNSS applications, Ion2 and Ion3 need to be considered for higher-accuracy applications, especially at times of higher solar activity. This paper investigates the higher-order ionospheric effects (Ion2 and Ion3, excluding the ray bending effects associated with Ion3) on GNSS positioning in the European region, considering the precise point positioning (PPP) method. For this purpose, observations from four European stations were considered. These observations were taken in four time intervals corresponding to various geophysical conditions: the active and quiet periods of the solar cycle (2001 and 2006, respectively), excluding the effects of disturbances in the geomagnetic field (i.e. geomagnetic storms), as well as the years 2001 and 2003, this time including the impact of geomagnetic disturbances. The program RINEX_HO (Marques et al., 2011) was used to calculate the magnitudes of Ion2 and Ion3 on the range measurements as well as the total electron content (TEC) observed on each receiver-satellite link. The program also corrects the GPS observation files for Ion2 and Ion3; thereafter it is possible to perform PPP with both the original and the corrected GPS observation files to analyze the impact of the higher-order ionospheric error terms on the estimated station coordinates, excluding the ray bending effect, which may become significant especially at low elevation angles (Ioannides and Strangeways, 2002).
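The first-order delay and its cancellation in the IF combination follow from standard textbook formulas, which can be checked numerically; the range and TEC values below are synthetic:

```python
# First-order ionospheric group delay and the ionosphere-free (IF)
# combination for GPS L1/L2 pseudoranges. Standard formulas, synthetic numbers.
F1 = 1575.42e6   # L1 carrier frequency, Hz
F2 = 1227.60e6   # L2 carrier frequency, Hz
K  = 40.3        # first-order refraction constant, m^3/s^2

def ion1_delay(tec, f):
    """First-order group delay (metres) for slant TEC in electrons/m^2."""
    return K * tec / f**2

def ionosphere_free(p1, p2):
    """IF pseudorange: (f1^2 P1 - f2^2 P2) / (f1^2 - f2^2)."""
    g = F1**2 / (F1**2 - F2**2)
    return g * p1 - (g - 1.0) * p2

rho = 22_000_000.0          # geometric range, m (synthetic)
tec = 50e16                 # 50 TECU as electrons/m^2
p1 = rho + ion1_delay(tec, F1)   # ~8 m of Ion1 on L1 at 50 TECU
p2 = rho + ion1_delay(tec, F2)
print(ionosphere_free(p1, p2) - rho)  # ~0: Ion1 cancels in the IF combination
```

The point of the paragraph above is precisely that this cancellation is only first-order: the Ion2 and Ion3 terms, which scale as 1/f^3 and 1/f^4, survive the combination and are what RINEX_HO corrects for.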

The main goal here is to determine the orbit of an artificial satellite, using signals of the GPS constellation and least squares algorithms implemented through sequential Givens rotations as the estimation method, with the aim of improving the performance of the orbit estimation process while minimizing the computational cost of the procedure. Geopotential perturbations up to high order and direct solar radiation pressure were taken into account. The position of the GPS antenna on the satellite body was also considered, which ultimately amounts to accounting for the influence of the satellite attitude motion in the orbit determination process. An application has been carried out using real data from the Topex/Poseidon satellite, whose ephemeris is available on the Internet. The best accuracy obtained in position was smaller than 5 meters for short-period (2 hours) and smaller than 28 meters for long-period (24 hours) orbit determination. In both cases, the perturbations mentioned before were taken into consideration, and the analysis was performed without Selective Availability on the signal measurements.
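Sequential least squares via Givens rotations can be sketched as follows: each new observation row is rotated into an upper-triangular factor R and right-hand side d, so the estimate can be refreshed as measurements arrive. This is a generic illustration of the technique, not the Topex/Poseidon setup:

```python
import numpy as np

# Sequential least squares by Givens rotations (row-wise QR update).
def givens_update(R, d, a, y):
    """Rotate the new observation row (a, y) into the triangular system (R, d)."""
    a, y = a.copy(), float(y)
    for i in range(len(a)):
        if a[i] == 0.0:
            continue
        r = np.hypot(R[i, i], a[i])
        c, s = R[i, i] / r, a[i] / r          # rotation zeroing a[i] against R[i, i]
        Ri = R[i, i:].copy()
        R[i, i:] = c * Ri + s * a[i:]
        a[i:]    = -s * Ri + c * a[i:]
        d[i], y  = c * d[i] + s * y, -s * d[i] + c * y
    return R, d

# Toy linear observation model: 50 scalar measurements of a 3-parameter state.
rng = np.random.default_rng(3)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                       # noise-free measurements for the check

R = np.zeros((3, 3))
d = np.zeros(3)
for row, obs in zip(A, b):           # process observations one at a time
    givens_update(R, d, row, obs)
x = np.linalg.solve(R, d)            # back-substitution recovers x_true
```

Updating (R, d) row by row avoids ever forming or inverting the normal-equation matrix, which is the numerical-stability and cost argument for Givens-based sequential estimators in orbit determination.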