27 results for Halley’s and Euler-Chebyshev’s Methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (Preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the equation ∇ · γ(x)∇ + c(x) with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with respect to their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case, perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media, 8, 315-414, 1998. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning.
Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
Abstract:
In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed “overrunning.” Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate. The effect on power resulting from the incorporation of “overrunning data” using the two procedures is evaluated.
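The attained type I error rate of a testing procedure can be assessed by simulation much as the abstract describes: generate many trials under the null hypothesis and count how often the test rejects. Below is a minimal illustrative sketch of that general idea for a single-look two-sided z-test; it is not the sequential overrunning procedures compared in the paper, and all names and parameter values are made up for illustration.

```python
import math
import random


def simulate_type_I_error(n_trials=20000, n_patients=100, seed=1):
    """Estimate the attained type I error rate by simulating trials
    under the null hypothesis (treatment effect = 0) and counting
    how often a two-sided z-test rejects at the 5% level."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value of N(0, 1)
    rejections = 0
    for _ in range(n_trials):
        # patient responses under the null: standard normal noise only
        xs = [rng.gauss(0.0, 1.0) for _ in range(n_patients)]
        mean = sum(xs) / n_patients
        se = 1.0 / math.sqrt(n_patients)  # known unit variance
        if abs(mean / se) > z_crit:
            rejections += 1
    return rejections / n_trials
```

With a correctly calibrated test, the simulated rejection rate should be close to the nominal 0.05; a procedure that inflates the type I error shows up as a rate systematically above that.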
Abstract:
The problem of adjusting the weights (learning) in multilayer feedforward neural networks (NN) is known to be of high importance when utilizing NN techniques in various practical applications. The learning procedure should be performed as fast as possible and in a computationally simple fashion, two requirements that are usually not satisfied in practice by the methods developed so far. Moreover, the presence of random inaccuracies is usually not taken into account. In view of these three issues, the alternative stochastic approximation approach discussed in this paper seems very promising.
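The general stochastic approximation idea can be illustrated with a minimal Robbins-Monro style recursion: a weight is adjusted from noisy observations using a decreasing gain sequence, which averages out random inaccuracies over time. This is an illustrative one-weight sketch under assumed toy data, not the specific procedure developed in the paper.

```python
import random


def stochastic_approximation(w_true=2.0, n_steps=5000, seed=0):
    """Robbins-Monro style recursion: estimate the weight w of a
    linear unit y = w*x from noisy gradient samples, using a
    decreasing gain sequence a_n = 1/n (sum a_n diverges, sum a_n^2
    converges, so the noise is averaged out)."""
    rng = random.Random(seed)
    w = 0.0  # initial weight guess
    for n in range(1, n_steps + 1):
        x = rng.uniform(0.5, 1.5)              # random input
        y = w_true * x + rng.gauss(0.0, 0.1)   # noisy target
        grad = 2.0 * (w * x - y) * x           # noisy gradient of (w*x - y)^2
        w -= (1.0 / n) * grad                  # decreasing gain
    return w
```

Despite the noise in every observation, the iterate converges to the true weight; no explicit averaging or noise model is needed, which is what makes the approach computationally simple.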
Abstract:
The effect of temperature on the degradation of blackcurrant anthocyanins in a model juice system was determined over a temperature range of 4–140 °C. The thermal degradation of anthocyanins followed pseudo first-order kinetics. From 4 to 100 °C an isothermal method was used to determine the kinetic parameters. In order to mimic the temperature profile in retort systems, a non-isothermal method was applied to determine the kinetic parameters in the model juice over the temperature range 110–140 °C. The results from both isothermal and non-isothermal methods agreed well, indicating that the non-isothermal procedure is a reliable mathematical method for determining the kinetics of anthocyanin degradation. The reaction rate constant (k) increased from 0.16 (±0.01) × 10⁻³ h⁻¹ to 9.954 (±0.004) h⁻¹ at 4 and 140 °C, respectively. The temperature dependence of the rate of anthocyanin degradation was modelled by an extension of the Arrhenius equation, which showed a linear increase in the activation energy with temperature.
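Pseudo first-order kinetics means the anthocyanin concentration decays as C(t) = C₀·exp(−kt), so the isothermal rate constant k can be recovered from the slope of ln C against time. A minimal sketch of that fitting step follows, using synthetic data with an assumed k of 0.5 h⁻¹ (not the paper's measurements):

```python
import math


def fit_first_order_k(times, concs):
    """Estimate the pseudo first-order rate constant k (h^-1) by
    least-squares fitting ln(C) = ln(C0) - k*t; k is minus the slope."""
    ys = [math.log(c) for c in concs]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(ys) / n
    sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, ys))
    sxx = sum((t - mean_t) ** 2 for t in times)
    return -sxy / sxx


# synthetic isothermal decay data with an assumed k = 0.5 h^-1
k_true = 0.5
times = [0.0, 1.0, 2.0, 4.0, 8.0]             # hours
concs = [100.0 * math.exp(-k_true * t) for t in times]
k_est = fit_first_order_k(times, concs)
```

Repeating this fit at several temperatures gives the k(T) values to which an Arrhenius-type model can then be fitted.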
Abstract:
A precipitation downscaling method is presented using precipitation from a general circulation model (GCM) as predictor. The method extends a previous method from monthly to daily temporal resolution. The simplest form of the method corrects for biases in wet-day frequency and intensity. A more sophisticated variant also takes account of flow-dependent biases in the GCM. The method is flexible and simple to implement. It is proposed here as a correction of GCM output for applications where sophisticated methods are not available, or as a benchmark for the evaluation of other downscaling methods. Applied to output from reanalyses (ECMWF, NCEP) in the region of the European Alps, the method is capable of reducing large biases in the precipitation frequency distribution, even for high quantiles. The two variants exhibit similar performances, but the ideal choice of method can depend on the GCM/reanalysis and it is recommended to test the methods in each case. Limitations of the method are found in small areas with unresolved topographic detail that influence higher-order statistics (e.g. high quantiles). When used as benchmark for three regional climate models (RCMs), the corrected reanalysis and the RCMs perform similarly in many regions, but the added value of the latter is evident for high quantiles in some small regions.
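The simplest variant described, correcting biases in wet-day frequency and intensity, can be sketched roughly as follows: choose a wet-day threshold for the GCM series so that its wet-day frequency matches the observations, then scale the remaining wet-day amounts so that mean intensities match. The function below is an illustrative sketch of that general idea with a made-up name and threshold, not the paper's exact algorithm.

```python
def correct_wet_day_bias(gcm, obs, obs_wet_threshold=0.1):
    """Crude wet-day frequency and intensity correction:
    1) pick a GCM threshold so the simulated number of wet days
       matches the observed one;
    2) scale GCM wet-day amounts so mean wet-day intensity matches."""
    n_wet_obs = sum(1 for p in obs if p >= obs_wet_threshold)
    # threshold = smallest GCM value that leaves n_wet_obs wet days
    gcm_sorted = sorted(gcm, reverse=True)
    thr = gcm_sorted[n_wet_obs - 1] if n_wet_obs > 0 else float("inf")
    obs_wet = [p for p in obs if p >= obs_wet_threshold]
    gcm_wet = [p for p in gcm if p >= thr]
    scale = (sum(obs_wet) / len(obs_wet)) / (sum(gcm_wet) / len(gcm_wet))
    # days below the threshold are set dry; wet days are rescaled
    return [p * scale if p >= thr else 0.0 for p in gcm]
```

The flow-dependent variant mentioned in the abstract would additionally condition these corrections on the large-scale circulation state, which this sketch omits.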
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, including those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
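For the two-class case, Fisher's linear discriminant projects each measurement vector onto the direction w = S_W⁻¹(m₁ − m₂), where m₁, m₂ are the class means and S_W the pooled within-class scatter matrix; classification then thresholds the scalar projection. A minimal two-dimensional sketch with synthetic points (not magnetic-field data) follows:

```python
def fisher_direction(class1, class2):
    """Two-class Fisher discriminant direction in 2D:
    w = S_W^{-1} (m1 - m2), with S_W the pooled within-class scatter."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    m1, m2 = mean(class1), mean(class2)
    s1, s2 = scatter(class1, m1), scatter(class2, m2)
    sw = [[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)]
    # invert the 2x2 pooled scatter matrix
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [m1[0] - m2[0], m1[1] - m2[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]
```

In the setting of the paper the vectors are high-dimensional and S_W is badly conditioned, which is where the regularization mentioned in the abstract comes in; this sketch omits that step.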
Abstract:
We describe some recent advances in the numerical solution of acoustic scattering problems. A major focus of the paper is the efficient solution of high frequency scattering problems via hybrid numerical-asymptotic boundary element methods. We also make connections to the unified transform method due to A. S. Fokas and co-authors, analysing particular instances of this method, proposed by J. A. DeSanto and co-authors, for problems of acoustic scattering by diffraction gratings.
Abstract:
Capturing the sensory perception and preferences of older adults, whether healthy or with particular disease states, poses major methodological challenges for the sensory community. Currently a vastly under-researched area, it is at the same time a vital area of research, as alterations in sensory perception can affect daily dietary food choices, intake, health and wellbeing. Tailored sensory methods are needed that take into account the challenges of working with such populations, including poor access leading to low patient numbers (study power), cognitive abilities, use of medications, clinical treatments and context (hospitals and care homes). The objective of this paper was to review current analytical and affective sensory methodologies used with different cohorts of healthy and frail older adults, with a focus on food preference and liking. We particularly drew attention to studies concerning general ageing as well as to those considering age-related diseases that have an emphasis on malnutrition and weight loss. PubMed and Web of Science databases were searched up to 2014 for relevant articles in English. From this search 75 papers concerning sensory acuity, 41 regarding perceived intensity and 73 relating to hedonic measures were reviewed. Simpler testing methods, such as directional forced choice tests and paired preference tests, need to be further explored to determine whether they lead to more reliable results and better inter-cohort comparisons. Finally, sensory quality and related quality of life for older adults suffering from dementia must be included and not ignored in our future actions.
Abstract:
Sea-ice concentrations in the Laptev Sea simulated by the coupled North Atlantic-Arctic Ocean-Sea-Ice Model and Finite Element Sea-Ice Ocean Model are evaluated using sea-ice concentrations from Advanced Microwave Scanning Radiometer-Earth Observing System satellite data and a polynya classification method for winter 2007/08. While developed to simulate large-scale sea-ice conditions, both models are analysed here in terms of polynya simulation. The main modification of both models in this study is the implementation of a landfast-ice mask. Simulated sea-ice fields from different model runs are compared with emphasis placed on the impact of this prescribed landfast-ice mask. We demonstrate that sea-ice models are not able to simulate flaw polynyas realistically when used without fast-ice description. Our investigations indicate that without landfast ice and with coarse horizontal resolution the models overestimate the fraction of open water in the polynya. This is not because a realistic polynya appears but due to a larger-scale reduction of ice concentrations and smoothed ice-concentration fields. After implementation of a landfast-ice mask, the polynya location is realistically simulated but the total open-water area is still overestimated in most cases. The study shows that the fast-ice parameterization is essential for model improvements. However, further improvements are necessary in order to progress from the simulation of large-scale features in the Arctic towards a more detailed simulation of smaller-scaled features (here polynyas) in an Arctic shelf sea.
Abstract:
The prediction of climate variability and change requires the use of a range of simulation models. Multiple climate model simulations are needed to sample the inherent uncertainties in seasonal to centennial prediction. Because climate models are computationally expensive, there is a trade-off between complexity, spatial resolution, simulation length, and ensemble size. The methods used to assess climate impacts are examined in the context of this trade-off. An emphasis on complexity allows simulation of coupled mechanisms, such as the carbon cycle and feedbacks between agricultural land management and climate. In addition to improving skill, greater spatial resolution increases relevance to regional planning. Greater ensemble size improves the sampling of probabilities. Research from major international projects is used to show the importance of synergistic research efforts. The primary climate impact examined is crop yield, although many of the issues discussed are relevant to hydrology and health modeling. Methods used to bridge the scale gap between climate and crop models are reviewed. Recent advances include large-area crop modeling, quantification of uncertainty in crop yield, and fully integrated crop–climate modeling. The implications of trends in computer power, including supercomputers, are also discussed.
Abstract:
Procedures for routine analysis of soil phosphorus (P) have been used for assessment of P status, distribution and P losses from cultivated mineral soils. No similar studies have been carried out on wetland peat soils. The objective was to compare the extraction efficiency of ammonium lactate (P-AL), sodium bicarbonate (P-Olsen) and double calcium lactate (P-DCaL), and the P distribution in the soil profile of wetland peat soils. For this purpose, 34 samples of the 0-30, 30-60 and 60-90 cm layers were collected from peat soils in Germany, Israel, Poland, Slovenia, Sweden and the United Kingdom and analysed for P. Mean soil pH (CaCl2, 0.01 M) was 5.84, 5.51 and 5.47 in the 0-30, 30-60 and 60-90 cm layers, respectively. P-DCaL was consistently about half the magnitude of either P-AL or P-Olsen. The efficiency of P extraction increased in the order P-DCaL < P-AL ≤ P-Olsen, with corresponding means (mg kg⁻¹) for all soils (34 samples) of 15.32, 33.49 and 34.27 in 0-30 cm; 8.87, 17.30 and 21.46 in 30-60 cm; and 5.69, 14.00 and 21.40 in 60-90 cm. The means decreased with depth. When examining soils for each country separately, P-Olsen was relatively evenly distributed in the German, UK and Slovenian soils. P-Olsen was linearly correlated (r = 0.594, P = 0.0002) with pH, whereas the three P tests (except P-Olsen vs P-DCaL) correlated significantly with each other (P = 0.01785–0.0001). The strongest correlation (r = 0.617, P = 0.0001) was recorded for P-AL vs P-DCaL, and the two methods were inter-convertible using the regression equation: P-AL = -22.593 + 5.353 pH + 1.423 P-DCaL, R² = 0.550.
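As an illustration, the reported regression can be applied directly to convert a P-DCaL reading to the P-AL scale. The coefficients below are taken from the abstract; the sample values are made up for illustration.

```python
def p_al_from_p_dcal(ph, p_dcal):
    """Convert a double calcium lactate reading (mg kg^-1) to the
    ammonium lactate scale using the reported regression:
    P-AL = -22.593 + 5.353*pH + 1.423*P-DCaL  (R^2 = 0.550)."""
    return -22.593 + 5.353 * ph + 1.423 * p_dcal


# hypothetical topsoil sample at the reported 0-30 cm means:
# pH 5.84, P-DCaL 15.32 mg kg^-1
estimate = p_al_from_p_dcal(5.84, 15.32)  # about 30.5 mg kg^-1
```

Evaluated at the reported 0-30 cm mean pH and P-DCaL, the equation gives roughly 30.5 mg kg⁻¹, broadly consistent with the reported mean P-AL of 33.49 mg kg⁻¹, as expected from the moderate R² of 0.550.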